Machine learning models deployed in real-world applications operate in dynamic environments where the data distribution can change constantly. These changes, called concept drifts, cause the performance of the learned model to degrade over time. It is therefore essential to detect and adapt to changes in the data for the model to remain useful. While model adaptation requires labeled data (for retraining), the detection process does not. Labeling data is time consuming and expensive, and if data changes are infrequent, most of the labeling effort, spent on verification, is wasted. In this paper, an ensemble-based detection method is proposed that tracks the number of samples in the critical disagreement regions of the ensemble to detect concept drift from unlabeled data. The proposed algorithm is distribution and model independent, unsupervised, and can be used in an online incremental fashion. Experimental analysis on 4 real-world concept drift datasets shows that the proposed methodology gives high prediction performance and a low false alarm rate while using only 11.3% overall labeling, on average.
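The core idea, monitoring how often ensemble members disagree on incoming unlabeled samples, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the ensemble type, the simulated drift, and the `0.1` alarm margin are all illustrative assumptions.

```python
# Hypothetical sketch of ensemble-disagreement drift detection on
# unlabeled data. The model choice, drift simulation, and alarm
# threshold below are illustrative, not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Train an ensemble on an initial labeled batch.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10, random_state=0).fit(X, y)

def disagreement_rate(X_batch):
    """Fraction of samples on which ensemble members disagree
    (a proxy for samples falling in disagreement regions)."""
    votes = np.array([est.predict(X_batch) for est in ensemble.estimators_])
    return float(np.mean(votes.min(axis=0) != votes.max(axis=0)))

# Monitor unlabeled batches: a jump in disagreement relative to the
# training-time baseline signals possible drift -- no labels needed.
baseline = disagreement_rate(X)
X_new = X + np.random.default_rng(1).normal(0, 2.0, X.shape)  # simulated drift
drift_suspected = disagreement_rate(X_new) > baseline + 0.1
```

Only when drift is suspected would labels be requested for verification and retraining, which is what keeps the overall labeling cost low.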