This paper presents a novel probabilistic model, called a deep relational model (DRM), that represents the joint probability of two visible variables with a deep architecture. The model stacks several layers from one visible layer onto the other, sandwiching hidden layers between them. As with restricted Boltzmann machines (RBMs) and deep Boltzmann machines (DBMs), all connections (weights) between two adjacent layers are undirected. During maximum-likelihood (ML) training, the network attempts to capture the complex latent relationships between the two visible variables (e.g., an image of a digit and its corresponding label) thanks to its deep architecture. Unlike deep neural networks, 1) the proposed DRM is a fully generative model, and 2) its weights can be optimized in a probabilistic manner. This paper also presents and discusses experiments conducted to evaluate the DRM's performance on recognition and generation tasks.
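For concreteness, a minimal sketch of such a model (an illustration under assumed notation, not a formula quoted from this abstract): assuming a DBM-like energy-based formulation with a single hidden layer $\mathbf{h}$ sandwiched between the two visible layers $\mathbf{x}$ and $\mathbf{y}$, with undirected weights $\mathbf{W}^{(1)}, \mathbf{W}^{(2)}$ and biases $\mathbf{b}, \mathbf{c}, \mathbf{d}$, the joint distribution could take the form
\[
E(\mathbf{x}, \mathbf{h}, \mathbf{y})
= - \mathbf{b}^{\top}\mathbf{x}
  - \mathbf{c}^{\top}\mathbf{h}
  - \mathbf{d}^{\top}\mathbf{y}
  - \mathbf{x}^{\top}\mathbf{W}^{(1)}\mathbf{h}
  - \mathbf{h}^{\top}\mathbf{W}^{(2)}\mathbf{y},
\qquad
p(\mathbf{x}, \mathbf{y})
= \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{x}, \mathbf{h}, \mathbf{y})},
\]
where $Z$ is the partition function; deeper variants would insert additional hidden layers (and weight matrices) between $\mathbf{x}$ and $\mathbf{y}$ in the same manner.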