Learning Invariant Representations with Kernel Warping for Convolutional Kernel Networks

Towards effective utilization of invariance priors, many existing learning algorithms pre-specify the data representation and hypothesis space, using invariance as a bias for empirical risk minimization. Prevalent methods that do learn representations typically employ parametric models that "hard code" invariance over the \emph{entire} sample space (e.g., pooling, scattering), hence restricting the range of admissible transformations. In this work, we propose a new trade-off that accommodates a broader range of transformations, namely those corresponding to bounded linear operators in a Reproducing Kernel Hilbert Space (RKHS), at the cost of enforcing invariance only locally around observed examples. Our key observation is that this RKHS can be "warped" into a new RKHS which admits both invariance-aware feature representations and efficient finite-dimensional approximations. As an instance of this generic framework, we apply it to convolutional kernel networks, establish its stability in theory, and demonstrate its empirical effectiveness in modeling invariance.
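To give a flavor of the local-invariance idea, the sketch below is a generic, simplified illustration and not the paper's actual kernel-warping construction: it fits a kernel ridge model over an RBF kernel while penalizing the squared difference f(x_i) - f(Tx_i) between each training point and a transformed copy (here a small shift, a stand-in transformation chosen for this toy example). All function names and parameters (`rbf_gram`, `fit`, `lam`, `mu`, `gamma`) are hypothetical.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, TX, lam, mu=1e-3, gamma=1.0):
    # Representer theorem over the augmented set Z = [X; T(X)]:
    #   minimize ||f(X) - y||^2 + mu * ||f||_H^2
    #            + lam * sum_i (f(x_i) - f(T x_i))^2
    Z = np.vstack([X, TX])
    K = rbf_gram(Z, Z, gamma)
    n = len(X)
    K1, K2 = K[:n], K[n:]          # f(X) = K1 @ alpha, f(TX) = K2 @ alpha
    D = K1 - K2                    # finite-difference "invariance" operator
    A = K1.T @ K1 + mu * K + lam * D.T @ D
    alpha = np.linalg.solve(A + 1e-8 * np.eye(len(Z)), K1.T @ y)
    return Z, alpha

def predict(Xq, Z, alpha, gamma=1.0):
    return rbf_gram(Xq, Z, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)
TX = X + 0.2                       # toy transformation T: small translation

def invariance_gap(lam):
    # Mean |f(x_i) - f(T x_i)| over training points.
    Z, a = fit(X, y, TX, lam)
    return np.abs(predict(X, Z, a) - predict(TX, Z, a)).mean()

gap_plain = invariance_gap(0.0)    # no invariance penalty
gap_warped = invariance_gap(10.0)  # strong local-invariance penalty
```

The penalty only constrains f near the observed examples, mirroring the local (rather than global) invariance trade-off described in the abstract; the paper's contribution is to realize this as a warped RKHS with efficient finite-dimensional approximations rather than via an explicit augmented solve as above.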


Xinhua Zhang is currently an Assistant Professor of Computer Science at the University of Illinois at Chicago. From October 2012 to October 2015, he was a researcher at the Machine Learning Research Group of NICTA (now Data61). From April 2010 to September 2012, he was a postdoc at the University of Alberta working with Prof Dale Schuurmans. He obtained his Ph.D. in Computer Science from ANU in June 2010, working with Prof SVN Vishwanathan and Prof Alex Smola. His research interests are in machine learning, especially convex and nonconvex optimization, convex relaxation of deep networks, and kernel methods.

Date & time

11am, 4 June 2018


Updated: 10 August 2021 / Responsible Officer: Dean, CECS / Page Contact: CECS Marketing