This paper presents complexity results for the specific case of a VLSI-friendly neural network used in classification problems. A VLSI-friendly neural network is one that uses exclusively integer weights restricted to a narrow interval. The results presented here give updated worst-case lower bounds on the number of weights used by the network. It is shown that the number of weights can be lower-bounded by an expression computed from parameters that depend exclusively on the problem: the minimum distance between patterns of opposite classes, the maximum distance between any two patterns, the number of patterns, and the number of dimensions. The theoretical approach is used to calculate the necessary weight range, a lower bound on the number of bits needed to solve the problem in the worst case, and the necessary number of weights for several problems. A constructive algorithm using limited-precision integer weights is then used to build and train neural networks for the same problems, and the experimental values obtained are compared with the calculated theoretical values. The comparison shows that the necessary weight precision can be estimated accurately using the given approach; however, the estimated numbers of weights are in general larger than the values obtained experimentally.
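
The four problem-dependent parameters named above are ordinary geometric quantities of the data set. A minimal sketch of how they could be computed is given below; the bound expression itself is developed in the paper and is not reproduced here, so this only illustrates the inputs it takes (the function name and the toy data are illustrative, not from the paper):

```python
import numpy as np

def problem_parameters(X, y):
    """Compute the problem-dependent quantities named in the abstract:
    minimum distance between patterns of opposite classes, maximum
    distance between any two patterns, number of patterns, and number
    of dimensions. Illustrative only; the paper's bound expression
    built from these values is not reproduced here."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    m, n = X.shape                       # number of patterns, dimensions
    # all pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    opposite = y[:, None] != y[None, :]
    d_min = d[opposite].min()            # min distance, opposite classes
    iu = np.triu_indices(m, k=1)
    d_max = d[iu].max()                  # max distance between any two patterns
    return d_min, d_max, m, n

# tiny two-class example
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 3.0], [1.0, 3.0]]
y = [0, 0, 1, 1]
print(problem_parameters(X, y))
```

For this toy data set the minimum opposite-class distance is 3.0, the maximum pairwise distance is √10, and there are 4 patterns in 2 dimensions.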