This paper presents a modern deep machine learning method and its application to dimensionality reduction and lossy compression. Deep belief networks (DBNs), first proposed by Geoffrey Hinton, are hierarchical, stochastic neural networks with a dedicated architecture and training algorithms. They are composed of layers, each of which is a restricted Boltzmann machine (RBM). A consistent, well-formulated mathematical model describes how DBNs work, and such adaptive systems can learn from training sets with or without a supervisor. In this paper, the knowledge acquired in this manner is used to classify handwritten digits stored in a database, and then to compress and shape the extracted abstract information for transmission. The experiments show that a recognition error below 5% can be achieved by iterative training.
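The layer-wise building block described above, a restricted Boltzmann machine trained iteratively, can be sketched as follows. This is a minimal illustration of one RBM layer trained with one-step contrastive divergence (CD-1) on toy binary data; all names, hyperparameters, and the toy dataset are assumptions for illustration, not the paper's actual implementation or the database of handwritten digits it uses.

```python
import numpy as np

# Minimal sketch of a single restricted Boltzmann machine (RBM) layer,
# the building block of a deep belief network, trained with one-step
# contrastive divergence (CD-1). Illustrative only.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases
        self.lr = lr

    def sample_h(self, v):
        # Probability and stochastic sample of hidden units given visibles.
        p = sigmoid(v @ self.W + self.b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        # Probability and stochastic sample of visible units given hiddens.
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step back to a reconstruction.
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # Gradient approximation: <v h>_data - <v h>_model.
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error

# Toy dataset: two repeated binary patterns (stand-in for image data).
data = np.array([[1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 1]] * 50, dtype=float)

rbm = RBM(n_visible=6, n_hidden=3)
errors = [rbm.cd1_step(data) for _ in range(200)]
print(errors[0] > errors[-1])  # reconstruction error shrinks with training
```

A full DBN stacks several such layers, training each RBM on the hidden representations of the one below; the top-level code (and the digit-recognition experiments) would build on this primitive.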