Efficient χ² Kernel Linearization via Random Feature Maps

IEEE Trans Neural Netw Learn Syst. 2016 Nov;27(11):2448-2453. doi: 10.1109/TNNLS.2015.2476659. Epub 2015 Sep 23.

Abstract

Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings, as it expands the original features into a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their ability to approximate the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple-kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach significantly speeds up the training of χ² kernel SVMs at almost no cost in testing accuracy.
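The abstract describes a three-stage pipeline: an explicit feature map approximating the χ² kernel, a sparse random projection that shrinks the expanded features back down, and a linear SVM trained on the projected features. Below is a minimal sketch of that pipeline, assuming scikit-learn's AdditiveChi2Sampler and SparseRandomProjection as stand-ins for the paper's exact feature map and projection; the data, dimensions, and parameters are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import LinearSVC

# Toy histogram-like data: chi-squared kernels assume non-negative features.
rng = np.random.default_rng(0)
X = rng.random((500, 300))
y = rng.integers(0, 2, size=500)

# Step 1: explicit feature map approximating the additive chi-squared kernel.
# With sample_steps=2, each input dimension expands to 2*2 + 1 = 5 output
# dimensions, so 300 features become 1500. This is the dimensionality blow-up
# the abstract refers to.
chi2_map = AdditiveChi2Sampler(sample_steps=2)

# Step 2: sparse random projection to reduce the expanded features (here to
# 256 dimensions, an arbitrary choice) while roughly preserving inner
# products, and hence the kernel approximation.
proj = SparseRandomProjection(n_components=256, random_state=0)

# Step 3: a linear SVM on the projected features, in place of training a
# nonlinear chi-squared kernel SVM directly.
clf = make_pipeline(chi2_map, proj, LinearSVC(max_iter=5000))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The ordering is the point of the method: projecting after the explicit map keeps training cost tied to the small projected dimension rather than to the expanded one.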