This paper reports an empirical investigation into the use of functionally expanded input data for the constructive learning of neural networks; a functional expansion can be helpful when approximating nonlinear functions. The investigation considered six constructive neural network algorithms (Tower, Pyramid, Tiling, PTI, Perceptron Cascade and Shift), six data domains (four real-world and two artificial) and two functional expansions (power series and trigonometric). Results from the experiments are presented, and a comparative analysis is given as evidence of the benefits of functionally expanding the input data, as a pre-processing step prior to learning, when constructive neural network algorithms are used.
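As a rough illustration of the pre-processing step described above, the sketch below expands each input attribute with power-series terms or with trigonometric harmonics in the style of functional-link networks. The exact expansion orders and forms used in the paper are not specified here; the `degree` and `order` parameters and the sin/cos harmonic form are assumptions for illustration only.

```python
import math

def expand_power(x, degree=3):
    # Power-series expansion (illustrative sketch): for each
    # attribute x_i, emit x_i, x_i^2, ..., x_i^degree.
    out = []
    for xi in x:
        out.extend(xi ** d for d in range(1, degree + 1))
    return out

def expand_trig(x, order=2):
    # Trigonometric expansion (assumed functional-link style):
    # keep x_i and append sin/cos harmonics up to `order`.
    out = []
    for xi in x:
        out.append(xi)
        for k in range(1, order + 1):
            out.append(math.sin(k * math.pi * xi))
            out.append(math.cos(k * math.pi * xi))
    return out

# A 2-attribute pattern grows to 6 features (power, degree 3)
# or 10 features (trigonometric, order 2).
sample = [0.5, -0.25]
print(len(expand_power(sample)))  # 6
print(len(expand_trig(sample)))   # 10
```

The expanded pattern would then be fed to the constructive algorithm in place of the raw attributes, leaving the learning procedure itself unchanged.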
Financed by the National Centre for Research and Development under grant No. SP/I/1/77065/10, within the strategic scientific research and experimental development programme:
SYNAT - “Interdisciplinary System for Interactive Scientific and Scientific-Technical Information”.