We develop a gradient-descent distributed adaptive estimation strategy that compensates for noise in both the input and output data. To this end, we utilize the concepts of total least-squares estimation and gradient-descent optimization in conjunction with a recently proposed framework for diffusion adaptation over networks. The proposed strategy does not require any prior knowledge of the noise variances and has a computational complexity comparable to that of the diffusion least mean squares (DLMS) strategy. Simulation results demonstrate that the proposed strategy provides significantly improved estimation performance compared with the DLMS and bias-compensated DLMS (BC-DLMS) strategies when both the input and output signals are noisy.
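For concreteness, the NumPy sketch below illustrates one plausible form of such a strategy in an adapt-then-combine (ATC) diffusion setting: each node runs stochastic gradient descent on the instantaneous total least-squares cost e^2 / (1 + ||w||^2), then averages the intermediate estimates of its neighbors. This is a minimal illustration under stated assumptions, not the paper's exact algorithm; the network topology, step size, noise levels, and all variable names are hypothetical choices.

```python
import numpy as np

# Hypothetical illustration: the ring topology, step size mu, and noise
# levels are assumptions for the sketch, not the paper's specification.

rng = np.random.default_rng(0)

N, M = 10, 5                       # number of nodes, filter length
w_true = rng.standard_normal(M)    # unknown parameter vector to estimate

# Combination matrix over a ring: each node combines itself and its two
# neighbors with uniform weights; columns sum to 1 (left-stochastic).
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0
A /= A.sum(axis=0)

mu = 0.01                          # step size
sigma_in, sigma_out = 0.1, 0.1     # noise std devs (unknown to the algorithm)

W = np.zeros((N, M))               # per-node estimates w_k

for i in range(5000):
    Psi = np.empty_like(W)
    for k in range(N):
        u_clean = rng.standard_normal(M)
        u = u_clean + sigma_in * rng.standard_normal(M)           # noisy input
        d = u_clean @ w_true + sigma_out * rng.standard_normal()  # noisy output

        w = W[k]
        g = 1.0 + w @ w            # 1 + ||w||^2
        e = d - u @ w              # instantaneous error
        # Adaptation: stochastic gradient descent on the instantaneous total
        # least-squares cost e^2 / (1 + ||w||^2). The second term is the TLS
        # correction that counteracts the input-noise-induced bias of plain
        # (D)LMS; note that no noise variances are needed.
        Psi[k] = w + (mu / g) * (e * u + (e**2 / g) * w)

    # Combination: each node averages its neighbors' intermediate estimates.
    W = A.T @ Psi

print("mean deviation from w_true:",
      np.mean(np.linalg.norm(W - w_true, axis=1)))
```

Relative to a DLMS update, the only extra work per node and per iteration in this sketch is forming 1 + ||w||^2 and the correction term, both O(M), which is consistent with the abstract's claim of DLMS-comparable computational complexity.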