Aaron Sidford

Pub Categories

Computer Science - Data Structures and Algorithms (23)
Mathematics - Optimization and Control (11)
Computer Science - Learning (11)
Statistics - Machine Learning (8)
Mathematics - Numerical Analysis (7)
Computer Science - Numerical Analysis (2)
Computer Science - Discrete Mathematics (1)
Statistics - Theory (1)
Mathematics - Statistics (1)
Computer Science - Neural and Evolutionary Computing (1)

Publications Authored By Aaron Sidford

We develop and analyze a variant of Nesterov's accelerated gradient descent (AGD) for minimization of smooth non-convex functions. We prove that one of two cases occurs: either our AGD variant converges quickly, as if the function were convex, or we produce a certificate that the function is "guilty" of being non-convex. This non-convexity certificate allows us to exploit negative curvature and obtain deterministic, dimension-free acceleration of convergence for non-convex functions. …
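
As a rough illustration of the certificate idea (a sketch only, not the paper's method; function names are illustrative): a violated first-order convexity inequality between two points witnesses non-convexity and exposes negative curvature along the segment between them.

```python
import numpy as np

def convexity_violation(f, grad, x, y):
    """Check the first-order convexity inequality between x and y.

    If f(y) < f(x) + grad(x).(y - x), the pair (x, y) certifies that f
    is non-convex, and the segment from x to y contains a direction of
    negative curvature that an accelerated method can exploit.
    Returns the (negative) violation amount, or None if convex here.
    """
    gap = f(y) - (f(x) + grad(x) @ (y - x))
    return gap if gap < 0 else None

# Example: f(x) = -||x||^2 is certifiably non-convex.
f = lambda x: -x @ x
grad = lambda x: -2 * x
print(convexity_violation(f, grad, np.zeros(2), np.ones(2)))  # -2.0, a certificate
```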

There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g., Nesterov's acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation, a notion made precise in d'Aspremont 2008 and Devolder, Glineur, and Nesterov 2014. …

Understanding the singular value spectrum of a matrix $A \in \mathbb{R}^{n \times n}$ is a fundamental task in countless applications. In matrix multiplication time, it is possible to perform a full SVD and directly compute the singular values $\sigma_1, \ldots, \sigma_n$. …

In this paper we introduce a notion of spectral approximation for directed graphs. While there are many potential ways one might define approximation for directed graphs, most of them are too strong to allow sparse approximations in general. In contrast, we prove that for our notion of approximation, such sparsifiers do exist, and we show how to compute them in almost linear time. …

We present an accelerated gradient method for non-convex optimization problems with Lipschitz continuous first and second derivatives. The method requires time $O(\epsilon^{-7/4}\log(1/\epsilon))$ to find an $\epsilon$-stationary point, meaning a point $x$ such that $\|\nabla f(x)\| \le \epsilon$. The method improves upon the $O(\epsilon^{-2})$ complexity of gradient descent and provides the additional second-order guarantee that $\nabla^2 f(x) \succeq -O(\epsilon^{1/2})I$ for the computed $x$. …

Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, FOCS 2015] run in $O(n^{2}\log(nM)\cdot\textrm{EO} + n^{3}\log^{O(1)}(nM))$ time and $O(n^{3}\log^{2}n\cdot\textrm{EO} + n^{4}\log^{O(1)}n)$ time, respectively, where $M$ is the largest absolute value of the function (assuming the range is integers) and $\textrm{EO}$ is the time taken to evaluate the function on any set. …

This work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD). In particular, this work sharply analyzes: (1) mini-batching, a method of averaging many samples of the gradient both to reduce the variance of a stochastic gradient estimate and to parallelize SGD, and (2) tail-averaging, a method involving averaging the final few iterates of SGD in order to decrease the variance in SGD's final iterate. This work presents the first tight non-asymptotic generalization error bounds for these schemes for the stochastic approximation problem of least squares regression. …
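
A minimal numpy sketch of the two averaging schemes for least squares (step size, batch size, and tail fraction here are illustrative choices, not the paper's):

```python
import numpy as np

def sgd_least_squares(A, b, lr=0.1, batch=8, iters=2000, tail_frac=0.5, seed=0):
    """Mini-batch SGD for min_x 0.5*||Ax - b||^2 with tail-averaging.

    Mini-batching averages `batch` sampled gradients to cut variance;
    tail-averaging returns the mean of the last `tail_frac` fraction of
    iterates rather than the noisy final iterate.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    tail = []
    for t in range(iters):
        idx = rng.integers(0, n, size=batch)          # sample a mini-batch
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # averaged stochastic gradient
        x = x - lr * g
        if t >= int(iters * (1 - tail_frac)):
            tail.append(x)
    return np.mean(tail, axis=0)

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5))
x_star = rng.standard_normal(5)
b = A @ x_star + 0.1 * rng.standard_normal(500)
print(np.linalg.norm(sgd_least_squares(A, b) - x_star))  # small
```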

In this paper, we provide faster algorithms for computing various fundamental quantities associated with random walks on a directed graph, including the stationary distribution, personalized PageRank vectors, hitting times, and escape probabilities. In particular, on a directed graph with $n$ vertices and $m$ edges, we show how to compute each quantity in time $\tilde{O}(m^{3/4}n + mn^{2/3})$, where the $\tilde{O}$ notation suppresses polylogarithmic factors in $n$, the desired accuracy, and the appropriate condition number (i.e., …)
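
For concreteness, a baseline computation of one such quantity: the classical fixed-point iteration for personalized PageRank (the slow textbook scheme, not the paper's faster algorithm):

```python
import numpy as np

def personalized_pagerank(P, s, alpha=0.85, tol=1e-10):
    """Personalized PageRank on a directed graph by simple iteration.

    P is a row-stochastic transition matrix (P[u, v] = walk prob u -> v),
    s is the source vertex. Iterates pi <- (1-alpha)*e_s + alpha*P^T pi,
    which converges since alpha < 1 makes the map a contraction.
    """
    n = P.shape[0]
    e_s = np.zeros(n); e_s[s] = 1.0
    pi = e_s.copy()
    while True:
        new = (1 - alpha) * e_s + alpha * (P.T @ pi)
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

# Tiny 3-vertex directed cycle: 0 -> 1 -> 2 -> 0.
P = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(personalized_pagerank(P, s=0))
```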

In this paper we provide faster algorithms for solving the geometric median problem: given $n$ points in $\mathbb{R}^{d}$, compute a point that minimizes the sum of Euclidean distances to the points. This is one of the oldest non-trivial problems in computational geometry, yet despite an abundance of research, the previous fastest algorithms for computing a $(1+\epsilon)$-approximate geometric median were $O(d\cdot n^{4/3}\epsilon^{-8/3})$ by Chin et al. and $\tilde{O}(d\exp(\epsilon^{-4}\log\epsilon^{-1}))$ by Badoiu et al. …
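
As a point of reference, Weiszfeld's classical fixed-point iteration for the geometric median (the textbook baseline, not the paper's nearly-linear-time method):

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-12):
    """Weiszfeld's classical fixed-point iteration for the geometric median.

    Each step re-weights the points by inverse distance to the current
    iterate, i.e. it moves to the weighted average that locally minimizes
    the sum of Euclidean distances.
    """
    x = points.mean(axis=0)  # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)          # avoid division by zero at a data point
        w = 1.0 / d
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))
```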

We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$, i.e., computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$. Offline Eigenvector Estimation: given an explicit $A \in \mathbb{R}^{n \times d}$ with $\Sigma = A^TA$, we show how to compute an $\epsilon$-approximate top eigenvector in time $\tilde O\bigl(\bigl[\mathrm{nnz}(A) + \tfrac{d\cdot\mathrm{sr}(A)}{\mathrm{gap}^2}\bigr]\cdot \log(1/\epsilon)\bigr)$ and $\tilde O\bigl(\bigl[\tfrac{\mathrm{nnz}(A)^{3/4}(d\cdot\mathrm{sr}(A))^{1/4}}{\sqrt{\mathrm{gap}}}\bigr]\cdot \log(1/\epsilon)\bigr)$. …
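
For context, the classical power-method baseline whose eigenvalue-gap dependence this line of work improves (a sketch only, not the paper's algorithm):

```python
import numpy as np

def top_eigenvector(Sigma, eps=1e-8, seed=0):
    """Classical power iteration for the top eigenvector of a PSD matrix.

    Repeatedly applies Sigma and renormalizes; convergence speed is
    governed by the multiplicative gap between the top two eigenvalues.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(Sigma.shape[0])
    x /= np.linalg.norm(x)
    while True:
        y = Sigma @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < eps:
            return y
        x = y

A = np.random.default_rng(1).standard_normal((200, 10))
Sigma = A.T @ A
x = top_eigenvector(Sigma)
print(x @ Sigma @ x / np.max(np.linalg.eigvalsh(Sigma)))  # ratio close to 1
```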

This paper considers the problem of canonical-correlation analysis (CCA) (Hotelling, 1936) and, more broadly, the generalized eigenvector problem for a pair of symmetric matrices. These are two fundamental problems in data analysis and scientific computing with numerous applications in machine learning and statistics (Shi and Malik, 2000; Hardoon et al., 2004; Witten et al. …)
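
A small sketch of the underlying primitive, assuming scipy is available: the generalized eigenvector problem $Av = \lambda Bv$ for symmetric $A$ and positive definite $B$, to which CCA reduces with cross- and auto-covariance blocks:

```python
import numpy as np
from scipy.linalg import eigh

# Solve A v = lambda B v directly; eigh(A, B) handles the symmetric
# definite generalized problem and returns eigenvalues in ascending order.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # symmetric
N = rng.standard_normal((6, 6))
B = N @ N.T + 6 * np.eye(6)            # positive definite
vals, vecs = eigh(A, B)                # generalized eigendecomposition
v = vecs[:, -1]                        # top generalized eigenvector
print(np.allclose(A @ v, vals[-1] * (B @ v)))  # True
```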

We introduce the notion of balance for directed graphs: a weighted directed graph is $\alpha$-balanced if for every cut $S \subseteq V$, the total weight of edges going from $S$ to $V\setminus S$ is within factor $\alpha$ of the total weight of edges going from $V\setminus S$ to $S$. Several important families of graphs are nearly balanced, in particular, Eulerian graphs (with $\alpha = 1$) and residual graphs of $(1+\epsilon)$-approximate undirected maximum flows (with $\alpha=O(1/\epsilon)$). We use the notion of balance to give a more fine-grained understanding of several well-studied routing questions that are considerably harder in directed graphs. …
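
The definition is easy to evaluate for any single cut; a small sketch (function name illustrative):

```python
import numpy as np

def cut_balance(W, S):
    """Balance of a single cut S in a weighted digraph.

    W[u, v] is the weight of edge u -> v; returns the ratio
    w(S -> V\\S) / w(V\\S -> S) (infinite if the reverse weight is 0).
    The graph's balance alpha is the worst such ratio over all cuts.
    """
    n = W.shape[0]
    mask = np.zeros(n, dtype=bool); mask[list(S)] = True
    out_w = W[mask][:, ~mask].sum()   # total weight of edges leaving S
    in_w = W[~mask][:, mask].sum()    # total weight of edges entering S
    return out_w / in_w if in_w > 0 else np.inf

# A directed 3-cycle is Eulerian, hence 1-balanced on every cut.
W = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
print(cut_balance(W, {0}))  # 1.0
```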

We show how to efficiently project a vector onto the top principal components of a matrix, without explicitly computing these components. Specifically, we introduce an iterative algorithm that provably computes the projection using few calls to any black-box routine for ridge regression. By avoiding explicit principal component analysis (PCA), our algorithm is the first with no runtime dependence on the number of top principal components. …
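
A hedged sketch of the core idea: one black-box ridge call $(A^TA+\lambda I)^{-1}A^TA\,x$ acts as a soft step function on the spectrum, passing eigen-directions with eigenvalue well above $\lambda$ nearly unchanged and nearly zeroing those well below; the algorithm sharpens this soft step with a short polynomial of such calls. Only the single soft step is shown here.

```python
import numpy as np

def ridge_step(A, lam, x):
    """One black-box ridge call: returns (A^T A + lam I)^{-1} A^T A x.

    Eigen-directions of A^T A with eigenvalue >> lam pass through nearly
    unchanged, those << lam are nearly zeroed -- a smooth surrogate for
    projection onto the principal components above threshold lam.
    """
    M = A.T @ A
    return np.linalg.solve(M + lam * np.eye(M.shape[0]), M @ x)

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 8))
x = rng.standard_normal(8)
lam = np.median(np.linalg.eigvalsh(A.T @ A))  # threshold between top/bottom PCs
print(ridge_step(A, lam, x))
```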

This work provides improved guarantees for streaming principal component analysis (PCA). Given $A_1, \ldots, A_n\in \mathbb{R}^{d\times d}$ sampled independently from distributions satisfying $\mathbb{E}[A_i] = \Sigma$ for $\Sigma \succeq \mathbf{0}$, this work provides an $O(d)$-space linear-time single-pass streaming algorithm for estimating the top eigenvector of $\Sigma$. The algorithm nearly matches (and in certain cases improves upon) the accuracy obtained by the standard batch method that computes the top eigenvector of the empirical covariance $\frac{1}{n} \sum_{i \in [n]} A_i$, as analyzed by the matrix Bernstein inequality. …
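
An Oja-style update is the classical way to realize such an $O(d)$-space single-pass estimate; a sketch with a fixed illustrative learning rate (a careful rate schedule is needed for sharp guarantees):

```python
import numpy as np

def oja_streaming_pca(stream, d, lr=0.01):
    """Oja-style single-pass streaming estimate of the top eigenvector.

    Keeps only an O(d) state vector; each sample A_i updates
    w <- normalize(w + lr * A_i w).
    """
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for A_i in stream:
        w += lr * (A_i @ w)
        w /= np.linalg.norm(w)
    return w

# Samples A_i = x_i x_i^T with E[A_i] = Sigma = diag(4, 1, 0.25).
rng = np.random.default_rng(1)
xs = rng.standard_normal((5000, 3)) @ np.diag([2.0, 1.0, 0.5])
w = oja_streaming_pca((np.outer(x, x) for x in xs), d=3)
print(np.abs(w))  # concentrates on the first coordinate
```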

We provide faster algorithms and improved sample complexities for approximating the top eigenvector of a matrix. Offline Setting: Given an $n \times d$ matrix $A$, we show how to compute an $\epsilon$-approximate top eigenvector in time $\tilde O([\mathrm{nnz}(A) + \frac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}]\cdot \log(1/\epsilon))$ and $\tilde O([\frac{\mathrm{nnz}(A)^{3/4} (d \cdot \mathrm{sr}(A))^{1/4}}{\sqrt{\mathrm{gap}}}]\cdot \log(1/\epsilon))$. Here $\mathrm{sr}(A)$ is the stable rank and $\mathrm{gap}$ is the multiplicative eigenvalue gap. …

We improve upon the running time for finding a point in a convex set given a separation oracle. In particular, given a separation oracle for a convex set $K\subset \mathbb{R}^n$ contained in a box of radius $R$, we show how to either find a point in $K$ or prove that $K$ does not contain a ball of radius $\epsilon$ using an expected $O(n\log(nR/\epsilon))$ oracle evaluations and additional time $O(n^3\log^{O(1)}(nR/\epsilon))$. This matches the oracle complexity and improves upon the $O(n^{\omega+1}\log(nR/\epsilon))$ additional time of the previous fastest algorithm, achieved over 25 years ago by Vaidya, for the current matrix multiplication constant $\omega < 2.\ldots$
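
For orientation, the textbook ellipsoid method driven by a separation oracle (the classical baseline this line of work improves, not the paper's method; the stopping rule here is a simplified heuristic):

```python
import numpy as np

def cutting_plane(oracle, n, R, eps, max_iters=100000):
    """Classical ellipsoid method run from a separation oracle.

    oracle(x) returns None if x is in K, else a vector a with
    a.(y - x) < 0 for all y in K (a separating hyperplane).
    """
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)   # ellipsoid {y: (y-x)^T P^{-1} (y-x) <= 1}
    for _ in range(max_iters):
        a = oracle(x)
        if a is None:
            return x
        Pa = P @ a
        g = Pa / np.sqrt(a @ Pa)                       # scaled cut direction
        x = x - g / (n + 1)                            # shift center into K's side
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
        if np.sqrt(np.max(np.linalg.eigvalsh(P))) < eps:
            return None  # ellipsoid too small: K contains no eps-ball
    return None

# K = ball of radius 0.5 around (3, ..., 3); the gradient gives a valid cut.
c = 3 * np.ones(4)
oracle = lambda x: None if np.linalg.norm(x - c) <= 0.5 else (x - c)
print(cutting_plane(oracle, n=4, R=10.0, eps=1e-3))
```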

We develop a family of accelerated stochastic algorithms that minimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework based on the classical proximal point algorithm. …
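
A minimal sketch of the outer proximal point loop for least squares, where each regularized subproblem has a closed form (the framework instead solves such subproblems approximately with fast stochastic methods; this shows only the outer loop):

```python
import numpy as np

def proximal_point_least_squares(A, b, mu=1.0, iters=50):
    """Classical (exact) proximal point iteration on f(x) = 0.5*||Ax-b||^2.

    Each step solves x_{k+1} = argmin_x f(x) + (mu/2)||x - x_k||^2,
    which for least squares reduces to the linear system
    (A^T A + mu I) x = A^T b + mu x_k.
    """
    d = A.shape[1]
    M = A.T @ A + mu * np.eye(d)
    x = np.zeros(d)
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + mu * x)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
x = proximal_point_least_squares(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))  # gradient norm ~0 at the optimum
```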

In this paper, we consider the following inverse maintenance problem: given $A \in \mathbb{R}^{n\times d}$ and a number of rounds $r$, we receive an $n\times n$ diagonal matrix $D^{(k)}$ at round $k$, and we wish to maintain an efficient linear system solver for $A^{T}D^{(k)}A$ under the assumption that $D^{(k)}$ does not change too rapidly. This inverse maintenance problem is the computational bottleneck in solving multiple optimization problems. We show how to solve this problem with $\tilde{O}(\mathrm{nnz}(A)+d^{\omega})$ preprocessing time and amortized $\tilde{O}(\mathrm{nnz}(A)+d^{2})$ time per round, improving upon previous running times for solving this problem. …
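
A hedged sketch of the low-rank-update idea that makes such maintenance cheap when only a few entries of $D^{(k)}$ change, via Sherman-Morrison-Woodbury (illustrative only, not the paper's scheme):

```python
import numpy as np

def woodbury_update(M_inv, A, idx, delta):
    """Update (A^T D A)^{-1} after D changes on a few diagonal entries.

    If D gains delta[j] on rows idx[j], then M' = M + A_S^T Delta A_S
    with A_S = A[idx], and Sherman-Morrison-Woodbury recovers M'^{-1}
    from M^{-1} at a cost depending on |idx|, not on d^omega.
    """
    A_S = A[idx]                                      # k x d slice of changed rows
    C = np.diag(1.0 / delta) + A_S @ M_inv @ A_S.T    # k x k capacitance matrix
    return M_inv - M_inv @ A_S.T @ np.linalg.solve(C, A_S @ M_inv)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
D = np.ones(50)
M_inv = np.linalg.inv(A.T @ (D[:, None] * A))
idx, delta = [3, 17], np.array([0.5, -0.2])
D[idx] += delta
fast = woodbury_update(M_inv, A, idx, delta)
print(np.allclose(fast, np.linalg.inv(A.T @ (D[:, None] * A))))  # True
```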

In many estimation problems, e.g., linear and logistic regression, we wish to minimize an unknown objective given only unbiased samples of the objective function. …

Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. …
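
A small numpy sketch of leverage score sampling for a tall matrix, computing the scores exactly via an SVD (fast algorithms approximate them instead):

```python
import numpy as np

def leverage_score_sample(A, num_samples, seed=0):
    """Row sampling by statistical leverage for least-squares sketching.

    The leverage score of row i is ||U_i||^2 where A = U S V^T is a thin
    SVD. Rows are sampled proportionally and rescaled by 1/sqrt(m * p_i)
    so the sketch preserves A^T A in expectation.
    """
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    lev = (U ** 2).sum(axis=1)            # leverage scores; they sum to rank(A)
    p = lev / lev.sum()
    rng = np.random.default_rng(seed)
    rows = rng.choice(A.shape[0], size=num_samples, p=p)
    return A[rows] / np.sqrt(num_samples * p[rows])[:, None]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 4))
SA = leverage_score_sample(A, 400)
print(np.linalg.norm(SA.T @ SA - A.T @ A) / np.linalg.norm(A.T @ A))  # small
```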

We present the first single-pass algorithm for computing spectral sparsifiers of graphs in the dynamic semi-streaming model. Given a single pass over a stream containing insertions and deletions of edges to a graph $G$, our algorithm maintains a randomized linear sketch of the incidence matrix of $G$ of dimension $O(\epsilon^{-2} n\,\mathrm{polylog}(n))$. Using this sketch, at any point, the algorithm can output a $(1 \pm \epsilon)$ spectral sparsifier for $G$ with high probability. …
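
The key property is linearity: deleting an edge subtracts exactly what inserting it added. A toy sketch with a dense Gaussian sketching matrix (illustration only; the actual algorithm uses structured sketches of the stated dimension):

```python
import numpy as np

class IncidenceSketch:
    """Linearity of sketching under a dynamic edge stream (toy version).

    Fixes a random sketching matrix over all possible edges of an
    n-vertex graph and maintains Pi @ B for the incidence matrix B;
    because the sketch is linear in B, a deletion is the negation of
    the corresponding insertion.
    """
    def __init__(self, n, k, seed=0):
        self.n = n
        self.edge_ids = {(u, v): i for i, (u, v) in
                         enumerate((u, v) for u in range(n) for v in range(u + 1, n))}
        rng = np.random.default_rng(seed)
        self.Pi = rng.standard_normal((k, len(self.edge_ids)))
        self.sketch = np.zeros((k, n))

    def update(self, u, v, sign):
        """sign=+1 inserts edge (u, v), sign=-1 deletes it."""
        row = np.zeros(self.n)
        row[u], row[v] = 1.0, -1.0          # incidence row of edge (u, v)
        self.sketch += sign * np.outer(self.Pi[:, self.edge_ids[(u, v)]], row)

sk = IncidenceSketch(n=5, k=8)
sk.update(0, 1, +1); sk.update(1, 2, +1); sk.update(0, 1, -1)  # insert, insert, delete
print(np.linalg.norm(sk.sketch))  # sketch now reflects only edge (1, 2)
```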

In this paper we present a new algorithm for solving linear programs that requires only $\tilde{O}(\sqrt{\mathrm{rank}(A)}\,L)$ iterations to solve a linear program with $m$ constraints, $n$ variables, constraint matrix $A$, and bit complexity $L$. Each iteration of our method consists of solving $\tilde{O}(1)$ linear systems and additional nearly linear time computation. Our method improves upon the previous best iteration bound by a factor of $\tilde{\Omega}((m/\mathrm{rank}(A))^{1/4})$ for methods with polynomial time computable iterations and by $\tilde{\Omega}((m/\mathrm{rank}(A))^{1/2})$ for methods which solve at most $\tilde{O}(1)$ linear systems in each iteration. …

In this paper we present an $\tilde{O}(m\sqrt{n}\log^{O(1)}U)$ time algorithm for solving the maximum flow problem on directed graphs with $m$ edges, $n$ vertices, and capacity ratio $U$. This improves upon the previous fastest running time of $O(m\min(n^{2/3}, m^{1/2})\log(n^{2}/m)\log U)$, achieved over 15 years ago by Goldberg and Rao. In the special case of solving dense directed unit-capacity graphs, our algorithm improves upon the previous fastest running times of $O(\min\{m^{3/2}, mn^{2/3}\})$, achieved by Even and Tarjan and by Karzanov over 35 years ago, and of $\tilde{O}(m^{10/7})$, achieved recently by Mądry. …

In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. …
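
For reference, the standard randomized coordinate descent baseline on least squares, with the cheap per-iteration cost that the acceleration must preserve (a sketch of the black box being accelerated, not the accelerated method):

```python
import numpy as np

def coordinate_descent_ls(A, b, iters=20000, seed=0):
    """Standard randomized coordinate descent on f(x) = 0.5*||Ax - b||^2.

    Samples a coordinate, takes the exact minimizing step, and maintains
    the residual incrementally so each iteration costs O(n), not O(nd).
    """
    n, d = A.shape
    L = (A ** 2).sum(axis=0)               # coordinate-wise smoothness constants
    rng = np.random.default_rng(seed)
    x, r = np.zeros(d), -b.copy()          # invariant: r = Ax - b
    for _ in range(iters):
        i = rng.integers(d)
        step = -(A[:, i] @ r) / L[i]       # exact line search in coordinate i
        x[i] += step
        r += step * A[:, i]                # O(n) residual update
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
print(np.linalg.norm(A.T @ (A @ coordinate_descent_ls(A, b) - b)))  # ~0
```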

In this paper, we introduce a new framework for approximately solving flow problems in capacitated, undirected graphs and apply it to provide asymptotically faster algorithms for the maximum $s$-$t$ flow and maximum concurrent multicommodity flow problems. For graphs with $n$ vertices and $m$ edges, it allows us to find an $\epsilon$-approximate maximum $s$-$t$ flow in time $O(m^{1+o(1)}\epsilon^{-2})$, improving on the previous best bound of $\tilde{O}(mn^{1/3}\,\mathrm{poly}(1/\epsilon))$. Applying the same framework in the multicommodity setting solves a maximum concurrent multicommodity flow problem with $k$ commodities in $O(m^{1+o(1)}\epsilon^{-2}k^2)$ time, improving on the existing bound of $\tilde{O}(m^{4/3}\,\mathrm{poly}(k,\epsilon^{-1}))$. …

In this paper, we present a simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time. It uses very little of the machinery that previously appeared to be necessary for such an algorithm. It does not require recursive preconditioning, spectral sparsification, or even the Chebyshev method or conjugate gradient. …
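
In a similar "simple iterative update" spirit, a randomized Kaczmarz sketch for consistent linear systems, shown purely for illustration (the paper's solver instead performs analogous single-cycle updates in a dual flow formulation of the SDD system):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=50000, seed=0):
    """Randomized Kaczmarz iteration for a consistent system Ax = b.

    Each step projects the iterate onto one equation's hyperplane,
    sampling rows with probability proportional to squared row norm.
    """
    m, d = A.shape
    row_norms = (A ** 2).sum(axis=1)
    p = row_norms / row_norms.sum()
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto row i
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((120, 8))
x_star = rng.standard_normal(8)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_star) - x_star))  # ~0
```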