The selection of the frequencies of new hidden units in sequential Feed-forward Neural Networks (FNNs) usually involves a non-linear optimization problem that cannot be solved analytically. Most models in the literature choose the new frequency so that it matches the previous residue as closely as possible. Several exceptions to this idea instead perform an (implicit or explicit) orthogonalization of the output vectors of the hidden units. We present an experimental study of the aforementioned approaches to frequency selection in sequential FNNs. Our experimental results indicate that orthogonalization of the hidden vectors outperforms the strategy of matching the residue, in terms of both approximation and generalization.
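The two selection strategies can be sketched as follows. This is a minimal illustration, not the models compared in the study: it assumes random candidate frequencies, sigmoidal hidden units, a toy 1-D regression problem, and least-squares refitting of the output weights after every addition; the function names (`grow_network`, `candidate_outputs`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative only).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])

def candidate_outputs(n_candidates):
    """Random sigmoidal hidden units evaluated over the whole data set."""
    W = rng.standard_normal((X.shape[1], n_candidates)) * 3.0
    b = rng.standard_normal(n_candidates)
    return np.tanh(X @ W + b)          # shape (n_samples, n_candidates)

def grow_network(n_units, strategy, n_candidates=100):
    """Sequentially add hidden units; output weights are refit by least
    squares after every addition. Returns the final residual norm."""
    H = np.empty((X.shape[0], 0))      # outputs of accepted hidden units
    Q = np.empty((X.shape[0], 0))      # orthonormal basis of span(H)
    residual = y.copy()
    for _ in range(n_units):
        C = candidate_outputs(n_candidates)
        if strategy == "match_residue":
            # New frequency chosen so its output matches the current
            # residue as closely as possible (max normalized correlation).
            scores = np.abs(C.T @ residual) / np.linalg.norm(C, axis=0)
        else:  # "orthogonalize"
            # New frequency chosen so its output vector is as orthogonal
            # as possible to the outputs of the existing hidden units.
            C_perp = C - Q @ (Q.T @ C)
            scores = np.linalg.norm(C_perp, axis=0) / np.linalg.norm(C, axis=0)
        h = C[:, np.argmax(scores)]
        H = np.column_stack([H, h])
        q = h - Q @ (Q.T @ h)          # Gram-Schmidt step for the basis
        Q = np.column_stack([Q, q / np.linalg.norm(q)])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        residual = y - H @ beta
    return np.linalg.norm(residual)

print(grow_network(8, "match_residue"))
print(grow_network(8, "orthogonalize"))
```

Because the output weights are refit after each addition, the two strategies differ only in which candidate vector is admitted; the orthogonalization criterion favors units that enlarge the span of the hidden layer rather than units aligned with the current residue.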