Kernel independent component analysis (KICA) extracts the primary independent components of data by minimizing the kernelized canonical correlation of random variables in a reproducing kernel Hilbert space. KICA has been widely applied in practical tasks such as blind source separation and speech recognition. However, the dense kernel matrix in traditional KICA incurs high computational complexity, which makes it impractical for large-scale datasets. This paper proposes a fast KICA algorithm, termed Nyström-KICA, that approximates the kernel matrix by a low-rank matrix via the Nyström method; this strategy reduces the computational complexity without loss of accuracy. In particular, Nyström-KICA randomly selects a small subset of examples from the dataset and constructs a low-rank approximation of the kernel matrix from that subset. Experimental results on both simulated data and the TIMIT dataset confirm that Nyström-KICA runs much faster than traditional KICA methods while achieving comparable accuracy.
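The subset-based approximation described above can be illustrated with a minimal sketch of the standard Nyström method. This is not the paper's implementation; it only shows the generic construction K ≈ C W⁺ Cᵀ, where C holds the kernel columns for m randomly chosen landmark examples and W is the corresponding m×m principal submatrix. The RBF kernel, the bandwidth `gamma`, and all variable names are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2); illustrative choice.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_approx(X, m, gamma=0.1, seed=0):
    # Randomly pick m landmark examples from the dataset.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m: kernel between all points and landmarks
    W = C[idx]                         # m x m: kernel among the landmarks themselves
    # Low-rank Nystrom approximation of the full n x n kernel matrix.
    return C @ np.linalg.pinv(W) @ C.T

# Usage sketch: compare the approximation to the exact kernel matrix.
X = np.random.default_rng(1).normal(size=(200, 5))
K = rbf_kernel(X, X)                   # exact dense kernel, O(n^2) storage
K_hat = nystrom_approx(X, m=50)        # built from only 50 landmark columns
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

The point of the construction is cost: only the n×m matrix C must be formed and stored, so downstream factorizations work with rank-m objects instead of the dense n×n kernel, and the relative error `err` shrinks as the number of landmarks m grows.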