The problem of optimal (fixed) wireless sensor network (WSN) design for distributed detection of a randomly located target is addressed. This extends previous work reported in [1], where the problem was addressed for a one-dimensional (1-D) network under the assumption that the wireless channels between the sensors and the fusion center undergo only path-loss attenuation. In this paper, we consider both 1-D and two-dimensional (2-D) equi-spaced WSN models in the presence of short-term fading in addition to path-loss attenuation. The target location is assumed to be exponentially distributed with a known mean. The optimal inter-node spacing is derived by optimizing the Bhattacharyya bound on the error probability of the Bayesian detector. In the presence of fading, it is shown that the optimal node placement depends on the channel SNR, the path-loss exponent, and the mean target location. However, we show that at low channel SNRs, the optimal spacing obtained for the no-fading case, which is a function only of the path-loss exponent and the mean target location (and, in particular, not of the channel SNR), is a good approximation to that with fading. It is shown that in many cases a deviation from the optimal inter-node spacing incurs a significant performance penalty. Numerical results verify that the optimal inter-node spacing obtained from the Bhattacharyya bound continues to hold when the performance measure is the exact fusion error probability.
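The design procedure described above can be sketched numerically. The following is a minimal illustrative Monte-Carlo sketch, not the paper's actual derivation: it assumes a 1-D network of equi-spaced sensors, a Gaussian binary detection model without fading (for which the Bhattacharyya coefficient has the closed form exp(-||s||²/(8σ²))), an exponentially distributed target, and a simple grid search over the inter-node spacing. All parameter names and values (`n_sensors`, `mean_loc`, `alpha`, `snr`, the spacing grid) are hypothetical choices for illustration.

```python
import numpy as np

def bhattacharyya_bound(d, n_sensors=5, mean_loc=1.0, alpha=2.0,
                        snr=10.0, n_mc=2000, seed=0):
    """Monte-Carlo estimate of the Bhattacharyya bound on the fusion
    error probability for an equi-spaced 1-D WSN (illustrative model)."""
    rng = np.random.default_rng(seed)
    pos = d * np.arange(1, n_sensors + 1)      # sensors at d, 2d, ..., Nd
    t = rng.exponential(mean_loc, n_mc)        # exponentially distributed target
    # Received signal power decays with path-loss exponent alpha;
    # small offset avoids division by zero when a sensor sits on the target.
    dist = np.abs(pos[None, :] - t[:, None]) + 1e-3
    s2 = snr / dist**alpha                     # per-sensor received SNR
    # Gaussian hypotheses: Bhattacharyya coefficient exp(-||s||^2 / 8 sigma^2),
    # averaged over the random target location.
    coeff = np.exp(-s2.sum(axis=1) / 8.0)
    return 0.5 * coeff.mean()                  # equal priors: P_e <= rho / 2

# Grid search for the spacing that minimizes the bound.
spacings = np.linspace(0.05, 2.0, 100)
bounds = [bhattacharyya_bound(d) for d in spacings]
d_opt = spacings[int(np.argmin(bounds))]
```

Under these assumptions, `d_opt` plays the role of the optimal inter-node spacing; extending the sketch to include short-term fading would require averaging the coefficient over the fading distribution as well.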