This chapter explores the feasibility of using tailored neural networks to automatically classify sounds into speech and non-speech in hearing aids. These two classes have been preliminarily selected with the aim of improving speech intelligibility and user comfort. Hearing aids on the market face severe constraints on computational complexity and battery life, so a set of trade-offs has to be considered. Tailoring the neural network requires striking a balance: reducing the computational demands (that is, the number of neurons) without degrading classification performance. Special emphasis will be placed on designing the size and complexity of a multilayer perceptron constructed by a growing method. The number of simple operations will be evaluated to ensure that it remains below the maximum supported by the computational resources of the hearing aid.
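As a rough illustration of the kind of operation budget involved, the sketch below counts the simple operations (multiply-accumulates plus activation evaluations) of one forward pass through a fully connected multilayer perceptron and checks them against a per-frame limit. The layer sizes and the budget value are hypothetical examples, not figures taken from this chapter.

```python
def mlp_operation_count(layer_sizes):
    """Rough count of simple operations for one forward pass of a
    fully connected MLP: one multiply-accumulate per weight and one
    nonlinearity evaluation per neuron."""
    macs = 0
    activations = 0
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        macs += fan_in * fan_out   # weight multiplications and additions
        activations += fan_out     # one activation per neuron in the layer
    return macs + activations


def fits_budget(layer_sizes, max_ops):
    """True if the network stays within a (hypothetical) per-frame
    operation budget of the hearing aid's DSP."""
    return mlp_operation_count(layer_sizes) <= max_ops


# Example: 9 input features, 3 hidden neurons, 1 output neuron
# (illustrative sizes only): 9*3 + 3*1 = 30 MACs, 4 activations.
print(mlp_operation_count([9, 3, 1]))      # 34
print(fits_budget([9, 3, 1], max_ops=100)) # True
```

A growing method would repeatedly enlarge the hidden layer and re-check `fits_budget` before accepting each added neuron, stopping once the budget or the target classification performance is reached.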