General function approximation can be obtained by feed-forward neural nets consisting of just one hidden layer of non-linear neurons. The innovation described in this paper is that training of the weights between the input and hidden layers is not required. By taking these so-called “activation weights” as random, the resulting problem is linear in the parameters and can easily be solved by standard ordinary least-squares methods. This paper demonstrates that excellent mappings can be obtained with such “random activation weight nets” (RAWNs), provided that the activation weights are chosen so that the regression matrix is non-singular. Further improvements can be achieved by regularization and by the proper choice of the excitation signal used for training. It is found that (i) much smaller errors are obtained than with backpropagation nets having the same degrees of freedom, (ii) the mapping depends only slightly on the actual values of the hidden weights, provided the net has sufficiently many neurons, and (iii) since no iteration is needed, computation is incomparably faster than with backpropagation. These properties make the RAWN particularly suitable for control applications. Various examples, both static and dynamic, are given to show the feasibility and advantages of the approach.
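The training scheme summarized above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the tanh activation, the uniform range of the random weights, the hidden-layer size, and the ridge parameter `lam` are all assumptions chosen for the example, and a simple static target function stands in for the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data for a simple static mapping: y = sin(2*pi*x) on [0, 1].
# (Illustrative target; the paper uses its own static and dynamic examples.)
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * x)

n_hidden = 30  # number of hidden neurons (assumed for this sketch)

# Random "activation weights" (input-to-hidden) and biases:
# drawn once at random and never trained.
W = rng.uniform(-5.0, 5.0, size=(1, n_hidden))
b = rng.uniform(-5.0, 5.0, size=(n_hidden,))

# Hidden-layer regression matrix; tanh is one common sigmoidal choice.
H = np.tanh(x @ W + b)

# Output weights by regularized ordinary least squares (ridge),
# which also guards against a near-singular regression matrix.
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

# One-shot, non-iterative fit: no backpropagation, no epochs.
y_hat = H @ beta
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because the hidden weights are fixed, the only unknowns are the output weights `beta`, so fitting reduces to solving one linear system; this is what makes the computation so much faster than iterative backpropagation.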