Although there have been several attempts to resolve the obvious tension between neural network learning and symbolic reasoning devices, no generally accepted resolution of this problem is available. In this paper, we propose a hybrid neuro-symbolic architecture that bridges this gap (in one direction): first, by translating a first-order input theory into a variable-free topos representation, and second, by learning models of logical theories on the neural level via the equations induced by this topos. As a side effect of this approach, the network memorizes an entire model of the training input, which allows it to serve as the core of a framework for integrated cognition.
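The second step — learning a model by driving the topos-induced equations toward satisfaction — can be illustrated with a toy sketch. Everything below (the linear parameterization of the operation, the unit-law equations, the symbol names) is our own illustrative assumption, not the paper's construction: symbols of a variable-free equational theory are embedded as learnable vectors, a binary operation as a learned linear map, and plain gradient descent minimizes the squared residuals of the equations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension

# learnable parameters: one vector per symbol, two matrices for the binary op
sym = {s: rng.normal(scale=0.1, size=d) for s in ["a", "b", "e"]}
W1 = rng.normal(scale=0.1, size=(d, d))
W2 = rng.normal(scale=0.1, size=(d, d))

def op(x, y):
    # linear parameterization of the binary operation (an assumption for this sketch)
    return W1 @ x + W2 @ y

def embed(t):
    # embed a variable-free term: a symbol or a nested ("op", s, t) tree
    if isinstance(t, str):
        return sym[t]
    _, x, y = t
    return op(embed(x), embed(y))

# toy variable-free equational theory: "e" acts as a two-sided unit
equations = [(("op", "a", "e"), "a"),
             (("op", "e", "a"), "a"),
             (("op", "b", "e"), "b"),
             (("op", "e", "b"), "b")]

lr = 0.1
for step in range(2000):
    for lhs, rhs in equations:
        _, xs, ys = lhs
        x, y, r = sym[xs], sym[ys], sym[rhs]
        diff = op(x, y) - r  # residual of the equation
        # exact gradients of 0.5 * ||diff||^2 for the linear model
        W1 -= lr * np.outer(diff, x)
        W2 -= lr * np.outer(diff, y)
        sym[xs] -= lr * (W1.T @ diff)
        sym[ys] -= lr * (W2.T @ diff)
        sym[rhs] -= lr * (-diff)  # shared arrays accumulate both updates

loss = sum(float(np.sum((embed(l) - embed(r)) ** 2)) for l, r in equations)
print(f"final loss: {loss:.6f}")
```

The residuals contract to (near) zero, so the trained parameters constitute an approximate model of the little theory. Note that this unconstrained version admits the degenerate all-zero model; a serious implementation would add constraints or regularization to keep the learned model non-trivial.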