Faster Kernel Ridge Regression Using Sketching and Preconditioning

Kernel Ridge Regression (KRR) is a simple yet powerful technique for non-parametric regression whose computation amounts to solving a linear system. This system is usually dense and highly ill-conditioned. In addition, the dimension of the matrix is the same as the number of data points, so direct methods are unrealistic for large-scale datasets. In this paper, we propose a preconditioning technique for accelerating the solution of the aforementioned linear system. The preconditioner is based on random feature maps, such as random Fourier features, which have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods, such as kernel ridge regression, by resorting to approximations. However, random feature maps only provide crude approximations to the kernel function, so delivering state-of-the-art results by directly solving the approximated system requires the number of random features to be very large. We show that random feature maps can be much more effective in forming preconditioners, since under certain conditions a not-too-large number of random features is sufficient to yield an effective preconditioner. We empirically evaluate our method and show it is highly effective for datasets of up to one million training examples.
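
To make the approach concrete, the sketch below (a minimal illustration with assumed names and toy sizes, not the paper's implementation) preconditions CG on the KRR system (K + lam*I) alpha = y with M = Z Z^T + lam*I built from random Fourier features, applying the inverse of M via the Woodbury identity. The point is that s much smaller than n features can already make M a good approximation of K + lam*I.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n, d, s = 1000, 10, 200                 # data points, input dim, random features
sigma, lam = 1.0, 1e-3                  # kernel bandwidth, ridge parameter

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Dense Gaussian kernel matrix (fine at this toy scale).
K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))

# Random Fourier features: z(x) = sqrt(2/s) * cos(W^T x + b), so Z @ Z.T ~= K.
W = rng.standard_normal((d, s)) / sigma
b = rng.uniform(0, 2 * np.pi, s)
Z = np.sqrt(2.0 / s) * np.cos(X @ W + b)

# Woodbury: (Z Z^T + lam*I)^{-1} v = (v - Z (Z^T Z + lam*I_s)^{-1} Z^T v) / lam.
L = np.linalg.cholesky(Z.T @ Z + lam * np.eye(s))

def apply_Minv(v):
    t = np.linalg.solve(L.T, np.linalg.solve(L, Z.T @ v))
    return (v - Z @ t) / lam

A = K + lam * np.eye(n)
M = LinearOperator((n, n), matvec=apply_Minv)
alpha, info = cg(A, y, M=M, maxiter=500)   # preconditioned CG solve
print("converged" if info == 0 else f"cg info = {info}")
```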


Similar Publications

Understanding the flow of incompressible fluids through porous media plays a crucial role in many technological applications such as enhanced oil recovery and geological carbon-dioxide sequestration. Several natural and synthetic porous materials exhibit multiple pore-networks, and the classical Darcy equations are not adequate to describe the flow of fluids in these porous materials. Mathematical models that adequately describe the flow of fluids in porous media with multiple pore-networks have been proposed in the literature.
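
For concreteness, the classical model and one standard multiple-pore-network extension (a Barenblatt-type double-porosity model; the notation here is illustrative and not necessarily the model proposed in the paper) can be written as follows.

```latex
% Classical Darcy model (single pore-network): one pressure p and velocity v,
\[
  v = -\frac{k}{\mu}\,\nabla p, \qquad \operatorname{div} v = 0.
\]
% A Barenblatt-type double-porosity extension (illustrative notation): two
% pressure fields p_1, p_2 coupled through inter-network mass exchange,
\begin{align*}
  v_i &= -\frac{k_i}{\mu}\,\nabla p_i, \quad i = 1, 2,\\
  \operatorname{div} v_1 &= -\beta\,(p_1 - p_2), \qquad
  \operatorname{div} v_2 = \beta\,(p_1 - p_2),
\end{align*}
% where k_1, k_2 are the permeabilities of the two pore networks, \mu is the
% fluid viscosity, and \beta controls mass transfer between the networks.
```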


This work focuses on the application of functional-type a posteriori error estimates and the corresponding indicators to a class of time-dependent problems. We consider the algorithmic part of their derivation and implementation, and also discuss the numerical properties of these bounds, which are consistent with the numerical results obtained. The paper also examines two different methods of approximate solution reconstruction for evolutionary models.


May 2017
Affiliations: University of Maryland, College Park, MD, USA

Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates.
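
The failure mode is easy to reproduce on the textbook bilinear saddle problem. The toy script below (an illustration, not the paper's method) runs alternating gradient descent/ascent on f(x, y) = x*y, whose unique saddle point is the origin, and shows the iterates orbiting it without converging.

```python
import numpy as np

def alt_gda(eta, steps=2000):
    """Alternating gradient descent/ascent on f(x, y) = x * y."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x -= eta * y          # descent step on x (grad_x f = y)
        y += eta * x          # ascent step on y (grad_y f = x), using new x
    return np.hypot(x, y)     # distance from the saddle point (0, 0)

for eta in (0.01, 0.1, 0.5):
    print(f"eta={eta:4}: |(x, y)| after 2000 steps = {alt_gda(eta):.3f}")
# The distance stays bounded away from 0 for every step size: the iterates
# rotate around the saddle point instead of approaching it.
```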


Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy discussed. A two-phase minimization algorithm is proposed which can achieve approximate first-, second- and third-order criticality and its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in each of the two phases. The relation between high-order criticality and penalization techniques is finally considered, showing that standard algorithmic approaches will fail if approximate constrained high-order critical points are sought.
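
For orientation, one common way to formalize approximate j-th order criticality (following the Taylor-model measure of Cartis, Gould and Toint; the notation here is a hedged reconstruction rather than a quotation from the paper) is the following.

```latex
% Let T_{f,j}(x, d) be the j-th order Taylor model of a smooth f at x. Define
\[
  \phi^{\delta}_{f,j}(x) \;=\; f(x) \;-\; \min_{\|d\| \le \delta} T_{f,j}(x, d),
\]
% and call x (\epsilon, \delta)-approximately j-th order critical when
\[
  \phi^{\delta}_{f,j}(x) \;\le\; \epsilon\, \frac{\delta^{j}}{j!}.
\]
% For j = 1 and \delta = 1 this reduces to the familiar condition
% \|\nabla f(x)\| \le \epsilon.
```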


Tensor train (TT) decomposition is a powerful representation for high-order tensors, which has been successfully applied to various machine learning tasks in recent years. However, since the tensor product is not commutative, permuting the data dimensions makes the solutions and TT-ranks of a TT decomposition inconsistent. To alleviate this problem, we propose a permutation-symmetric network structure by employing circular multilinear products over a sequence of low-order core tensors.
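
The inconsistency under permutation is easy to verify numerically. The sketch below (illustrative code, not the paper's network) computes exact TT-ranks from the sequential unfoldings of a toy tensor and shows how a single transpose of dimensions inflates the middle rank.

```python
import numpy as np

def tt_ranks(T, tol=1e-10):
    """Exact TT-ranks of T, i.e. the ranks of its sequential unfoldings."""
    ranks, dims = [], T.shape
    for k in range(1, len(dims)):
        M = T.reshape(int(np.prod(dims[:k])), -1)   # unfolding (d1..dk | rest)
        s = np.linalg.svd(M, compute_uv=False)
        ranks.append(int((s > tol * s[0]).sum()))
    return ranks

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
T = np.einsum('ij,kl->ijkl', A, B)        # T[i,j,k,l] = A[i,j] * B[k,l]

print(tt_ranks(T))                         # [4, 1, 4]: (i,j) and (k,l) separate
print(tt_ranks(T.transpose(0, 2, 1, 3)))   # [4, 16, 4]: pairing (i,k | j,l)
# The same data, reordered, needs a much larger middle TT-rank: the
# permutation inconsistency that motivates the symmetric structure.
```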


The Support Vector Machine (SVM) is one of the most classical approaches for classification and regression. Despite being studied for decades, obtaining practical algorithms for SVM is still an active research problem in machine learning. In this paper, we propose a new perspective on SVM via saddle point optimization.
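
For concreteness, one standard saddle-point reformulation of the linear hinge-loss SVM (shown as background; the paper's exact formulation may differ) reads as follows.

```latex
% Using max(0, z) = max_{\beta \in [0,1]} \beta z, the hinge-loss primal
\[
  \min_{w}\; \frac{\lambda}{2}\|w\|^2
    + \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i\, w^{\top} x_i\bigr)
\]
% becomes the convex--concave saddle-point problem
\[
  \min_{w}\; \max_{\beta \in [0,1]^n}\;
  \frac{\lambda}{2}\|w\|^2
    + \frac{1}{n}\sum_{i=1}^{n} \beta_i \bigl(1 - y_i\, w^{\top} x_i\bigr).
\]
```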


We propose an approximation method for thresholding of singular values using Chebyshev polynomial approximation (CPA). Many signal processing problems require iterative application of singular value decomposition (SVD) for minimizing the rank of a given data matrix with other cost functions and/or constraints, which is called matrix rank minimization. In matrix rank minimization, singular values of a matrix are shrunk by hard-thresholding, soft-thresholding, or weighted soft-thresholding.
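
A minimal sketch of this idea, with illustrative names and parameters rather than the paper's code: soft-thresholding of singular values can be written as A @ g(A^T A) with g(s) = max(1 - tau/sqrt(s), 0), so a Chebyshev approximation of g applies the thresholding with matrix products only, avoiding the SVD.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def svt_cpa(A, tau, degree=60):
    """Approximate SVT_tau(A) = U max(S - tau, 0) V^T without an SVD."""
    G = A.T @ A
    smax = 1.01 * np.linalg.norm(G, 2)        # upper bound on eigenvalues of G
    # Sample g on [0, smax] and fit a Chebyshev polynomial on [-1, 1].
    t = np.linspace(0.0, smax, 4 * degree)
    g = np.maximum(1.0 - tau / np.sqrt(np.maximum(t, 1e-12)), 0.0)
    coef = C.chebfit(2.0 * t / smax - 1.0, g, degree)
    # Evaluate p(G) with the three-term Chebyshev recurrence on matrices.
    n = G.shape[0]
    Gs = 2.0 * G / smax - np.eye(n)           # spectrum mapped into [-1, 1]
    Tkm1, Tk = np.eye(n), Gs
    P = coef[0] * Tkm1 + coef[1] * Tk
    for c in coef[2:]:
        Tkm1, Tk = Tk, 2.0 * Gs @ Tk - Tkm1
        P += c * Tk
    return A @ P                               # ~= U @ max(S - tau, 0) @ V^T

# Quick check against the exact SVD-based thresholding (relative error is
# small but nonzero, since g is only approximated by a polynomial).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30))
U, S, Vt = np.linalg.svd(A, full_matrices=False)
exact = U @ np.diag(np.maximum(S - 1.0, 0.0)) @ Vt
print(np.linalg.norm(svt_cpa(A, 1.0) - exact) / np.linalg.norm(exact))
```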


The problem of increasing the accuracy of an approximate solution is considered for boundary value problems for parabolic equations. For ordinary differential equations (ODEs), nonstandard finite difference schemes are in common use for this problem. They are based on a modification of standard discretizations of the time derivative and, in some cases, make it possible to obtain the exact solution of the problem.
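
The ODE case is easy to demonstrate. The snippet below uses the standard Mickens-type example (an illustration of nonstandard schemes in general, not the paper's parabolic construction): replacing the step size h in forward Euler by the denominator function phi(h) = (1 - exp(-lam*h))/lam makes the scheme exact for u' = -lam*u.

```python
import numpy as np

lam, h, steps = 2.0, 0.25, 20
u_std = u_nsfd = 1.0
phi = (1.0 - np.exp(-lam * h)) / lam      # nonstandard denominator function

for n in range(steps):
    u_std  = u_std  * (1.0 - lam * h)     # standard forward Euler
    u_nsfd = u_nsfd * (1.0 - lam * phi)   # nonstandard scheme: factor exp(-lam*h)

exact = np.exp(-lam * h * steps)
print(f"standard Euler: {u_std:.6e}   error: {abs(u_std - exact):.2e}")
print(f"nonstandard:    {u_nsfd:.6e}   error: {abs(u_nsfd - exact):.2e}")
# The nonstandard update multiplies by exactly exp(-lam*h) each step, so its
# error at the grid points is zero up to floating point rounding.
```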


We introduce LRT, a new Lagrangian-based ReachTube computation algorithm that conservatively approximates the set of reachable states of a nonlinear dynamical system. LRT makes use of the Cauchy-Green stretching factor (SF), which is derived from an over-approximation of the gradient of the solution flows. The SF measures the discrepancy between two states propagated by the system solution from two initial states lying in a well-defined region, thereby allowing LRT to compute a reachtube with a ball-overestimate in a metric where the computed enclosure is as tight as possible.
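
For reference, the quantity behind the SF can be written as follows (a hedged reconstruction from the standard definitions, not a quotation of LRT's notation).

```latex
% Let \Phi_t(x_0) denote the solution flow of the system and
% F_t = \partial \Phi_t / \partial x_0 its gradient (deformation gradient).
% The Cauchy--Green tensor is C_t = F_t^{\top} F_t, and the stretching factor
\[
  \Lambda_t \;=\; \sqrt{\lambda_{\max}(C_t)} \;=\; \|F_t\|_2
\]
% bounds the growth of initial-state perturbations,
\[
  \|\Phi_t(x_0) - \Phi_t(x_0')\| \;\le\; \Lambda_t\, \|x_0 - x_0'\|,
\]
% provided \Lambda_t is computed from an over-approximation of F_t valid on
% the whole region containing x_0 and x_0'.
```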


Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices -- an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices.
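
As a simple point of comparison (explicitly not the paper's MMF algorithm), the sketch below exposes hierarchical block structure in a symmetric matrix by recursive spectral bisection with the Fiedler vector; all names and the planted example are illustrative.

```python
import numpy as np

def fiedler_split(A, idx):
    """Split an index set by the sign of the Fiedler vector of |A| restricted to it."""
    W = np.abs(A[np.ix_(idx, idx)])
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                 # graph Laplacian of the submatrix
    vals, vecs = np.linalg.eigh(L)
    f = vecs[:, 1]                            # Fiedler vector
    return idx[f < 0], idx[f >= 0]

def hierarchy(A, idx=None, min_size=2, depth=0):
    """Print the nested index blocks found by recursive bisection."""
    if idx is None:
        idx = np.arange(A.shape[0])
    print("  " * depth, sorted(idx.tolist()))
    if len(idx) > min_size:
        left, right = fiedler_split(A, idx)
        if len(left) and len(right):
            hierarchy(A, left, min_size, depth + 1)
            hierarchy(A, right, min_size, depth + 1)

# Symmetric matrix with planted 2-level block structure, then shuffled.
rng = np.random.default_rng(0)
B = np.kron(np.eye(2), np.ones((4, 4))) + np.kron(np.eye(4), np.ones((2, 2)))
A = B + 0.05 * rng.standard_normal((8, 8))
A = (A + A.T) / 2
perm = rng.permutation(8)
A = A[np.ix_(perm, perm)]
hierarchy(A)   # prints the recovered nested index blocks
```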