Feed-forward neural networks have been applied in many areas, but they still suffer from limited generalization and slow convergence. This research uses the simplest form of the feed-forward neural network, the multi-layer perceptron, and continues previous research that uses the inverse of the activation function together with a weight ratio to cut the execution time of a learning system from days to minutes, and to replace oscillating iterations with only two direct iterations. We propose a new approach that computes the new weights from the ratio of the initial and the new weights, using the inverse of the activation function. The connection weights of the feed-forward neural network are modified directly so as to produce small errors. This paper also finds that an adjusted linear function, instead of the classical sigmoid or hyperbolic tangent function, can be used as the invertible activation function to speed up the convergence of errors. The approach has been successfully applied to learning rainfall data in a rainfall prediction system.
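The weight-ratio idea described above can be sketched for a single neuron: invert the activation on the target output to obtain the required weighted sum, then rescale the existing weights by the ratio of the required sum to the current one. This is a minimal illustrative sketch, not the paper's full algorithm; the function names and example values are assumptions.

```python
import numpy as np

def logistic(x):
    # classical sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

def logistic_inverse(y):
    # logit: the inverse of the logistic sigmoid
    return np.log(y / (1.0 - y))

def adjust_weights(w, x, target, inv_activation=logistic_inverse):
    """Hypothetical one-step weight adjustment (illustrative only).

    Invert the activation on the desired output to get the required
    net input, then scale all weights by the ratio of the required
    net input to the current net input.
    """
    old_net = np.dot(w, x)            # current weighted sum
    new_net = inv_activation(target)  # weighted sum needed for `target`
    return w * (new_net / old_net)    # rescale by the weight ratio

# Example: a single neuron whose output should become 0.8
w = np.array([0.5, -0.3, 0.2])
x = np.array([1.0, 1.0, 1.0])
w_new = adjust_weights(w, x, 0.8)
print(logistic(np.dot(w_new, x)))  # ~0.8 after one adjustment
```

Any invertible activation (e.g. the adjusted linear function mentioned above) can be substituted for `logistic_inverse`; the scheme only requires that the inverse exist on the target's range and that the current net input be nonzero.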