The use of Haar-like filtering for resource-constrained speech detection in sensornet applications is explored. Simple Haar-like filters with variable filter width and shift width are trained to learn appropriate filter parameters from training samples for detecting speech. To further refine accuracy, center-clipped emphasis is proposed as a new degree of freedom for more adaptive Haar-like filter design. Our method yielded a speech/nonspeech classification accuracy of 98.33% for an input length of 0.1 s. Compared with the high-performance feature extraction method MFCC (mel-frequency cepstral coefficients), the proposed Haar-like filtering is approximately 98.40% more efficient in terms of add and multiply computation, while achieving an error rate of only 1.63% relative to MFCC.
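To make the idea concrete, the following is a minimal sketch of 1-D Haar-like filtering applied to an audio frame, with center clipping as a pre-emphasis step. The function names, the fixed filter width and shift width, and the clipping threshold are illustrative assumptions for this sketch; the paper's method learns these parameters from training samples.

```python
# Illustrative sketch only: parameter values are assumptions,
# not the trained values from the paper.

def center_clip(frame, threshold):
    """Center clipping: zero out samples whose magnitude is below threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in frame]

def haar_like_responses(frame, width, shift):
    """Haar-like filter over a 1-D frame: the difference between the sums
    of two adjacent windows of `width` samples, with the window pair
    advanced by `shift` samples each step."""
    responses = []
    for start in range(0, len(frame) - 2 * width + 1, shift):
        pos = sum(frame[start:start + width])
        neg = sum(frame[start + width:start + 2 * width])
        responses.append(pos - neg)
    return responses

# Toy usage: an abrupt level change produces a large-magnitude response.
frame = [0.0] * 8 + [1.0] * 8
print(haar_like_responses(frame, width=8, shift=8))  # → [-8.0]
```

Because each response is computed with additions and subtractions over raw samples (no FFT or filterbank multiplications), this style of filtering illustrates how the computational cost can stay far below that of MFCC extraction.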