In this paper, we propose a scalable, decentralized learning algorithm for Random Weights Fuzzy Neural Networks, applicable when the training data is distributed across a network of interconnected computing agents. In this scenario, the aim is for all agents to converge to a single model, under the constraint that only local communication between neighboring agents is permitted. We assume that all agents share the parameters of the antecedents, while the parameters of the consequents are estimated using the Alternating Direction Method of Multipliers (ADMM). Experimental results show that the performance of the proposed algorithm is comparable to that of a centralized model, in which all data is collected by a single agent before training. To date, this is the first publication to address the problem of training a fuzzy neural network over a fully decentralized infrastructure.
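To illustrate the core idea, the consequent estimation can be cast as a distributed least-squares problem solved via consensus ADMM: each agent holds its local hidden-layer outputs and targets, and all agents iteratively agree on a shared weight vector. The sketch below is a simplified, hypothetical illustration (function names and parameters are not from the paper): for clarity, the consensus step is written as a global average, whereas the algorithm described here restricts communication to direct neighbors.

```python
import numpy as np

def consensus_admm_ls(H_list, y_list, rho=1.0, n_iter=300):
    """Consensus ADMM for distributed least squares (illustrative sketch).

    Each agent k holds local features H_k (e.g. the hidden-layer
    outputs of a random-weights network, with fixed antecedents)
    and local targets y_k, and all agents jointly estimate a
    shared consequent weight vector z.
    """
    d = H_list[0].shape[1]
    z = np.zeros(d)
    w = [np.zeros(d) for _ in H_list]
    u = [np.zeros(d) for _ in H_list]
    # Pre-factor each agent's local system (H_k^T H_k + rho * I).
    chols = [np.linalg.cholesky(H.T @ H + rho * np.eye(d)) for H in H_list]
    for _ in range(n_iter):
        # Local w-updates, solved in parallel by each agent.
        for k, (H, y) in enumerate(zip(H_list, y_list)):
            rhs = H.T @ y + rho * z - u[k]
            w[k] = np.linalg.solve(chols[k].T, np.linalg.solve(chols[k], rhs))
        # Consensus z-update: here a global average for simplicity;
        # a fully decentralized variant uses only neighbor exchanges.
        z = np.mean([w[k] + u[k] / rho for k in range(len(w))], axis=0)
        # Dual variable updates.
        for k in range(len(w)):
            u[k] += rho * (w[k] - z)
    return z
```

On a consistent problem, the consensus estimate approaches the solution a single agent would obtain after gathering all the data, which is the comparison made in the experiments.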