Limited-precision neural networks are better suited to hardware implementations. Several researchers have proposed algorithms for training neural networks with limited-precision weights, and it has also been suggested that the limitations introduced by limited-precision weights can be compensated for by an increased number of layers. This paper shows that, from a theoretical point of view, neural networks with integer weights in the range [-p,p] can solve classification problems in which the minimum Euclidean distance between two patterns from opposite classes is 1/p. This result can be used in an information-theoretic context to calculate a bound on the number of bits necessary for solving a problem. It is shown that the number of bits is bounded by m*n*log(2pD), where m is the number of patterns, n is the dimensionality of the space, p is the weight range, and D is the radius of a sphere containing all patterns.
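As a quick numeric illustration of the bound (not part of the paper itself), the sketch below evaluates m*n*log(2pD) for hypothetical values of m, n, p, and D. The logarithm is assumed here to be base 2, since the bound counts bits.

```python
import math

def bits_bound(m: int, n: int, p: int, D: float) -> float:
    """Evaluate the bound m * n * log2(2 * p * D) from the abstract.

    m -- number of patterns
    n -- dimensionality of the input space
    p -- weight range (integer weights in [-p, p])
    D -- radius of a sphere containing all patterns
    """
    return m * n * math.log2(2 * p * D)

# Hypothetical example: 100 patterns in 10 dimensions, integer weights
# in [-8, 8], all patterns inside a sphere of radius 4.
# log2(2 * 8 * 4) = log2(64) = 6, so the bound is 100 * 10 * 6 = 6000 bits.
print(bits_bound(m=100, n=10, p=8, D=4.0))
```

Note that the bound grows only logarithmically in the weight range p and the radius D, but linearly in the number of patterns m and the dimensionality n.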