To reduce the dimensionality of data, Hinton et al. proposed autoencoders based on neural networks. An autoencoder consists of an input layer, one hidden layer, and an output layer, and its weights and biases are tuned by backpropagation to minimize the error between inputs and outputs. The learned weights capture features of the inputs and can be used to pretrain deep neural networks. However, these autoencoders have been developed for real-valued neural networks. In this study, we propose complex-valued and quaternion autoencoders for complex-valued and quaternion neural networks, respectively. In the complex-valued autoencoder, the inputs, weights, biases, and outputs of the real-valued autoencoder are extended to complex numbers; in the quaternion autoencoder, they are extended to quaternions. We demonstrate the learning abilities of the proposed methods on handwritten digit images. The results show that the proposed methods recognize the images as well as the real-valued methods do.
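As a rough illustration of the complex-valued case (a minimal sketch, not the authors' implementation), a linear complex autoencoder can be trained by gradient descent on the squared reconstruction error, with gradients taken via Wirtinger calculus with respect to the conjugate weights. The toy data, dimensions, and learning rate below are hypothetical, and the nonlinear activations and biases of a full autoencoder are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: complex vectors lying in a 2-D complex subspace of
# C^4, so a 2-unit hidden layer suffices for good reconstruction.
basis = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
codes = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
X = basis @ codes  # shape (4, 200); each column is one sample

# Linear complex autoencoder: encoder W1 (2x4), decoder W2 (4x2).
W1 = 0.1 * (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)))
W2 = 0.1 * (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)))
lr = 0.01

def loss(X, W1, W2):
    """Mean squared modulus of the reconstruction error."""
    E = X - W2 @ (W1 @ X)
    return np.mean(np.abs(E) ** 2)

loss_before = loss(X, W1, W2)

for step in range(5000):
    H = W1 @ X          # complex hidden codes
    E = X - W2 @ H      # complex reconstruction error
    # Wirtinger (conjugate) gradients of the mean squared error:
    gW2 = -(E @ H.conj().T) / X.shape[1]
    gW1 = -((W2.conj().T @ E) @ X.conj().T) / X.shape[1]
    W2 -= lr * gW2
    W1 -= lr * gW1

loss_after = loss(X, W1, W2)
print(loss_before, loss_after)
```

The quaternion autoencoder follows the same pattern with quaternion-valued matrices and the quaternion conjugate in place of the complex one.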