The problem of distributed learning in wireless sensor networks is addressed, with the perspective of implementing nearest-neighbor (NN) regression in a decentralized way, under communication constraints. Elaborating on the ordered-transmission idea of Blum and Sadler <citerefgrp><citeref refid="ref1"/></citerefgrp>, a universal channel access policy is designed that, without inter-sensor coordination, enables the fusion center to recover exactly the training-set labels it needs, while less informative labels are not delivered at all. Exploiting the aforementioned access policy, two different paradigms are then considered. In the first one, a constraint is imposed on the number of channel accesses, and a distributed regression algorithm is proposed that reaches an asymptotic performance of twice the minimum achievable mean-squared error, while requiring just a single channel access. In the second one, a constraint is imposed on the number of quantization bits, and the focus is on devising consistent k-NN regression rules. The noiseless case with quantized data is preliminarily addressed. Then, the role of the channel is explicitly taken into account, and a scheme with one-bit quantizers is proposed, achieving consistency over binary symmetric channels. Finally, it is argued that the inference task is naturally suited to uncoded communications. Accordingly, two schemes are proposed, ensuring consistency over coherent and noncoherent channels, respectively; possible gains over the coded schemes are discussed.