Kernel descriptors have been shown to outperform existing histogram-based local descriptors because they are extracted from match kernels that measure the similarity between image patches using different pixel attributes (gradient, colour, or LBP pattern). Extracting kernel descriptors does not require coarse quantization of pixel attributes; instead, every pixel participates equally in the matching between two image patches. In this paper, by leveraging kernel properties, we propose a novel approach that simultaneously improves the effectiveness and efficiency of existing kernel descriptors. Specifically, we refine the similarity measure between two patches with respect to any pixel attribute. The resulting kernel descriptors are more discriminative, faster to extract, and of much lower dimensionality. Our experiments on the Scene Categories and Caltech 101 databases show that the proposed approach outperforms existing kernel descriptors.
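To make the match-kernel idea concrete, the following is a minimal illustrative sketch (not the paper's method) of a gradient match kernel: the similarity of two patches is the sum of pairwise pixel similarities, each weighted by gradient magnitudes and by Gaussian kernels over orientation and position. The patch representation, parameter names `gamma_o` and `gamma_p`, and their values are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    # Standard Gaussian (RBF) kernel between two vectors.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def gradient_match_kernel(patch_p, patch_q, gamma_o=5.0, gamma_p=3.0):
    """Illustrative gradient match kernel between two patches.

    Each patch is a list of pixels, where a pixel is a tuple
    (magnitude, orientation_vec, position):
      - magnitude: gradient magnitude (scalar weight),
      - orientation_vec: (sin theta, cos theta) of the gradient angle,
      - position: pixel coordinates normalized to [0, 1]^2.
    Every pixel pair contributes, so no coarse quantization of the
    attribute (orientation) is needed.
    """
    total = 0.0
    for m_p, o_p, z_p in patch_p:
        for m_q, o_q, z_q in patch_q:
            total += (m_p * m_q
                      * gaussian_kernel(o_p, o_q, gamma_o)
                      * gaussian_kernel(z_p, z_q, gamma_p))
    return total
```

Because every pixel pair contributes to the sum, this evaluation is quadratic in patch size, which is why practical kernel descriptors approximate the kernel with a compact finite-dimensional feature map.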