To enhance the discriminability of convolutional neural networks (CNNs) and to ease their optimization, this paper proposes a multilayer structured variant of the maxout unit, termed the Multilayer Maxout Network (MMN). CNNs with maxout units employ linear convolution filters followed by maxout units to abstract representations from less abstract ones. Our model instead applies MMNs as the activation functions of CNNs; this design inherits the advantages of both maxout units and deep neural networks, and yields a more general nonlinear function approximator. Experimental results show that the proposed model outperforms several state-of-the-art methods on three image classification benchmark datasets (CIFAR-10, CIFAR-100, and MNIST). Furthermore, we analyze the influence of MMN at different hidden layers and give a trade-off scheme between accuracy and computing resources.
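As a rough illustration of the building blocks named above, the sketch below implements a standard maxout unit (the maximum over k affine feature maps, as in Goodfellow et al.) and stacks several of them to mimic a multilayer maxout activation. The function names `maxout` and `mmn`, the weight shapes, and the particular stacking are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: elementwise max over k affine feature maps.

    x: (batch, d_in); W: (d_in, d_out, k); b: (d_out, k).
    Returns: (batch, d_out).
    """
    # Compute all k affine maps at once, then take the max over them.
    z = np.einsum('nd,dok->nok', x, W) + b  # (batch, d_out, k)
    return z.max(axis=-1)

def mmn(x, layers):
    """A multilayer stack of maxout units (illustrative sketch of the
    MMN idea; the paper's exact layer configuration is not assumed here).

    layers: list of (W, b) pairs, applied in order.
    """
    for W, b in layers:
        x = maxout(x, W, b)
    return x
```

Because each maxout unit is a max over affine functions, stacking them composes piecewise-linear convex maxima, which is what makes the combined unit a more flexible function approximator than a single fixed nonlinearity.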