In the present paper an implementation of a multilayer perceptron (MLP) in a new-generation SRAM-based FPGA device is discussed. The presented solution enables easy realization of an MLP with an arbitrary structure and calculation accuracy. It is based on exploiting the structural parallelism of the FPGA device and on an economical realization of the individual neurons. A flexible and effective method has been applied to approximate the nonlinear activation function by a series of linear segments. Selection mechanisms have also been introduced that allow a compromise between the amount of logic resources used and the network's operation speed. The presented solution can therefore be applied both to implement large networks in small FPGA devices and to real-time implementations, for which a high operation speed is required.
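The paper's exact segmentation scheme for the activation function is not given in this abstract; the following is only an illustrative sketch, in Python rather than HDL, of the general idea of approximating a sigmoid by linear segments whose slopes and intercepts could be precomputed and stored in a lookup table (e.g. in FPGA block RAM). The function name `make_pwl_sigmoid`, the uniform breakpoint placement, and the segment count are assumptions for illustration, not the authors' method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_pwl_sigmoid(x_min=-8.0, x_max=8.0, segments=16):
    """Build a piecewise-linear approximation of the sigmoid over
    [x_min, x_max] with uniformly spaced breakpoints.  The slope and
    intercept of each segment are precomputed, mimicking a table that
    a hardware implementation could hold in memory."""
    xs = np.linspace(x_min, x_max, segments + 1)
    ys = sigmoid(xs)
    slopes = np.diff(ys) / np.diff(xs)          # one slope per segment
    intercepts = ys[:-1] - slopes * xs[:-1]     # one intercept per segment

    def pwl(x):
        x = np.asarray(x, dtype=float)
        # Map the input to a segment index; clip so out-of-range
        # inputs reuse the first/last segment.
        idx = np.clip(((x - x_min) / (x_max - x_min) * segments).astype(int),
                      0, segments - 1)
        y = slopes[idx] * x + intercepts[idx]
        # Saturate outside the table range, as the true sigmoid does.
        return np.clip(y, 0.0, 1.0)
    return pwl

pwl_sigmoid = make_pwl_sigmoid()
x = np.linspace(-10.0, 10.0, 1001)
max_err = np.max(np.abs(pwl_sigmoid(x) - sigmoid(x)))
```

Increasing `segments` trades memory (more table entries) for accuracy, which mirrors the resource/precision compromise discussed above: with 16 uniform segments over [-8, 8] the worst-case error of such an interpolant stays well below 0.02.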