This paper decomposes a large-scale learning problem into multiple limited-scale pairs of training subsets and cross-validation (CV) subsets. Each training subset consists of the samples of its own class together with the nearest samples from the other classes, which naturally yields modular multilayer perceptrons (MLPs). If the final decision region of an MLP is open, its raw outputs must be corrected. Following fuzzy set theory, a correction coefficient derived from the class mean and covariance is applied to each MLP output. In addition, weight-increment correction factors are introduced to handle class imbalance within the training subsets. Results on letter recognition show that these methods are quite effective.
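The abstract does not give the exact form of the fuzzy correction coefficient, only that it is derived from the class mean and covariance. As one plausible instantiation, the sketch below assumes a Mahalanobis-distance-based membership, mu = 1 / (1 + d^2), used to scale a raw MLP output; the function names and the 2-D setting are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of the fuzzy output correction: each raw MLP output is
# scaled by a class-conditional coefficient built from the class mean and
# covariance. The membership formula mu = 1 / (1 + d^2) is an assumption.

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance of a 2-D point x from a class mean."""
    d = [x[0] - mean[0], x[1] - mean[1]]
    # d^T * cov_inv * d, written out for the 2x2 case
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
            + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

def inv_2x2(m):
    """Inverse of a 2x2 covariance matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def corrected_output(raw_output, x, class_mean, class_cov):
    """Scale a raw MLP output by a fuzzy membership coefficient."""
    coef = 1.0 / (1.0 + mahalanobis_sq(x, class_mean, inv_2x2(class_cov)))
    return raw_output * coef

# A sample far from the class mean has its (possibly spurious) high raw
# output suppressed, which closes an otherwise open decision region.
near = corrected_output(0.9, (0.1, 0.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])
far = corrected_output(0.9, (5.0, 5.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])
```

With a unit covariance, the sample near the mean keeps almost all of its raw output, while the distant sample is strongly attenuated.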