In image processing and computer vision, significant progress has been made in feature learning for exploiting important cues in data that elude non-learned features. While the field of deep learning has demonstrated state-of-the-art performance, the Evolution-COnstructed (ECO) work of Lillywhite et al. has the advantage of interpretability, and it does not predispose the solution to convolution. This paper presents a novel approach for extending the ECO framework. We achieve this through two overarching ideas. First, we address a potentially major shortcoming of ECO features: the "features" themselves. The so-called ECO features are simply a transformed image that has been unrolled into a large one-dimensional vector. We propose employing feature descriptors to extract pertinent information from the ECO imagery. Furthermore, we hypothesize that for a given problem domain there exists a unique set of transforms for each feature descriptor that leads the descriptor to extract maximally discriminative information. Second, we introduce constraints on each individual's chromosome to promote population diversity and prevent infeasible solutions. We show through experiments that the proposed iECO framework learns a unique series of transforms for each descriptor and benefits from maintaining population diversity.
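The contrast between the two feature types can be sketched as follows. This is an illustrative sketch only: the transform chains and descriptors here (a simple blur, a gradient magnitude, and an intensity histogram) are hypothetical stand-ins, since in the actual framework both the transform chain and its pairing with a descriptor are learned by the genetic algorithm.

```python
import numpy as np

def eco_feature(image, transforms):
    """Classic ECO-style feature: apply a chain of image transforms,
    then unroll the result into one long 1-D vector."""
    out = image
    for t in transforms:
        out = t(out)
    return out.ravel()

def ieco_feature(image, transforms, descriptor):
    """iECO-style feature as described in the abstract: apply the
    transform chain, then summarize the transformed image with a
    feature descriptor instead of unrolling the raw pixels."""
    out = image
    for t in transforms:
        out = t(out)
    return descriptor(out)

# Hypothetical stand-ins for learned transforms and a descriptor.
blur = lambda img: (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0
grad = lambda img: np.abs(np.gradient(img)[0])
hist_descriptor = lambda img: np.histogram(img, bins=8, range=(0.0, 1.0))[0]

rng = np.random.default_rng(0)
img = rng.random((32, 32))

raw = eco_feature(img, [blur, grad])                      # 32*32 = 1024 values
desc = ieco_feature(img, [blur, grad], hist_descriptor)   # 8 values
```

The descriptor-based feature is far more compact than the unrolled image, which illustrates why pairing each descriptor with its own learned transform chain matters: the descriptor, not the raw pixels, carries the discriminative information to the classifier.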