
Robert Krauthgamer

Pub Categories

Computer Science - Data Structures and Algorithms (33)
Statistics - Machine Learning (3)
Computer Science - Computational Geometry (3)
Computer Science - Learning (3)
Mathematics - Combinatorics (3)
Mathematics - Metric Geometry (3)
Mathematics - Functional Analysis (2)
Computer Science - Computational Complexity (2)
Mathematics - Statistics (1)
Statistics - Theory (1)

Publications Authored By Robert Krauthgamer

We study the following version of cut sparsification. Given a large edge-weighted network $G$ with $k$ terminal vertices, compress it into a small network $H$ with the same terminals, such that the minimum cut in $H$ between every bipartition of the terminals approximates the corresponding one in $G$ within factor $q\geq 1$, called the quality. We provide two new insights about the structure of cut sparsifiers, and then apply them to obtain improved cut sparsifiers (and data structures) for planar graphs.

We provide evidence that computing the maximum flow value between every pair of nodes in a directed graph on $n$ nodes, $m$ edges, and capacities in the range $[1..n]$, which we call the All-Pairs Max-Flow problem, cannot be solved in time that is significantly faster (i.e., …

We design new sketching algorithms for unitarily invariant matrix norms, including the Schatten $p$-norms~$\|{\cdot}\|_{S_p}$, and obtain, as a by-product, streaming algorithms that approximate the norm of a matrix $A$ presented as a turnstile data stream. The primary advantage of our streaming algorithms is that they are simpler and faster than previous algorithms, while requiring the same or less storage. Our three main results are a faster sketch for estimating $\|{A}\|_{S_p}$, a smaller-space $O(1)$-pass sketch for $\|{A}\|_{S_p}$, and a more general sketching technique that yields sublinear-space approximations for a wide class of matrix norms.

By a classical result of Gomory and Hu (1961), in every edge-weighted graph $G=(V,E,w)$, the minimum $st$-cut values, when ranging over all $s,t\in V$, take at most $|V|-1$ distinct values. That is, these $\binom{|V|}{2}$ instances exhibit redundancy factor $\Omega(|V|)$. They further showed how to construct from $G$ a tree $(V,E',w')$ that stores all minimum $st$-cut values.
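The redundancy phenomenon is easy to verify directly on small instances. The following self-contained sketch (the names, parameters, and brute-force approach are ours, and it runs in exponential time) checks that a random weighted graph indeed exhibits at most $|V|-1$ distinct minimum $st$-cut values:

```python
import itertools, random

def min_st_cut(n, weights, s, t):
    """Brute-force minimum s-t cut value: try every vertex bipartition
    separating s from t and sum the weights of crossing edges."""
    others = [v for v in range(n) if v not in (s, t)]
    best = float("inf")
    for r in range(len(others) + 1):
        for side in itertools.combinations(others, r):
            S = set(side) | {s}
            cut = sum(w for (u, v), w in weights.items()
                      if (u in S) != (v in S))
            best = min(best, cut)
    return best

random.seed(1)
n = 7
# random complete graph with positive integer edge weights
weights = {(u, v): random.randint(1, 50)
           for u in range(n) for v in range(u + 1, n)}

values = {min_st_cut(n, weights, s, t)
          for s in range(n) for t in range(s + 1, n)}
# Gomory-Hu: at most |V|-1 distinct values among all C(|V|,2) pairs
assert len(values) <= n - 1
print(len(values), "distinct values among", n * (n - 1) // 2, "pairs")
```

The Gomory-Hu tree stores these values compactly: the minimum $st$-cut equals the lightest edge on the tree path between $s$ and $t$.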

We undertake a systematic study of sketching a quadratic form: given an $n \times n$ matrix $A$, create a succinct sketch $\textbf{sk}(A)$ which can produce (without further access to $A$) a multiplicative $(1+\epsilon)$-approximation to $x^T A x$ for any desired query $x \in \mathbb{R}^n$. While a general matrix does not admit non-trivial sketches, positive semi-definite (PSD) matrices admit sketches of size $\Theta(\epsilon^{-2} n)$, via the Johnson-Lindenstrauss lemma, achieving the "for each" guarantee, namely, for each query $x$, with a constant probability the sketch succeeds. (For the stronger "for all" guarantee, where the sketch succeeds for all $x$'s simultaneously, again there are no non-trivial sketches.)
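A minimal numpy sketch of the PSD "for each" guarantee, assuming $A$ is given via a factor $B$ with $A = B^T B$; the dimensions and constants below are illustrative choices of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 500          # k = O(eps^{-2}) rows; sketch has k*n entries, vs n^2 for A

# PSD matrix A = B^T B; the sketch only needs a factor B of A
B = rng.standard_normal((n, n))
A = B.T @ B

S = rng.standard_normal((k, n)) / np.sqrt(k)
sk = S @ B              # the stored sketch; A itself can be discarded

def estimate(x):
    # E ||S B x||^2 = ||B x||^2 = x^T A x, by Johnson-Lindenstrauss
    y = sk @ x
    return float(y @ y)

x = rng.standard_normal(n)
exact = float(x @ A @ x)
assert abs(estimate(x) - exact) <= 0.3 * exact   # loose bound, single query
```

The "for each" caveat matters: the estimate concentrates for any fixed query, but an adversary who sees the sketch could find a query on which it fails.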

We characterize the streaming space complexity of every symmetric norm $l$ (a norm on $\mathbb{R}^n$ invariant under sign-flips and coordinate-permutations), by relating this space complexity to the measure-concentration characteristics of $l$. Specifically, we provide nearly matching upper and lower bounds on the space complexity of calculating a $(1\pm\epsilon)$-approximation to the norm of the stream, for every $0<\epsilon\leq 1/2$. (The bounds match up to $\mathrm{poly}(\epsilon^{-1} \log n)$ factors.)

A valued constraint satisfaction problem (VCSP) instance $(V,\Pi,w)$ is a set of variables $V$ with a set of constraints $\Pi$ weighted by $w$. Given a VCSP instance, we are interested in a re-weighted sub-instance $(V,\Pi'\subset \Pi,w')$ that preserves the value of the given instance (under every assignment to the variables) within factor $1\pm\epsilon$. A well-studied special case is cut sparsification in graphs, which has found various applications.

We study resistance sparsification of graphs, in which the goal is to find a sparse subgraph (with reweighted edges) that approximately preserves the effective resistances between every pair of nodes. We show that every dense regular expander admits a $(1+\epsilon)$-resistance sparsifier of size $\tilde O(n/\epsilon)$, and conjecture this bound holds for all graphs on $n$ nodes. In comparison, spectral sparsification is a strictly stronger notion and requires $\Omega(n/\epsilon^2)$ edges even on the complete graph.
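Effective resistances themselves are easy to compute exactly (though not sparsely) via the Laplacian pseudoinverse; a small sanity check, with our own naming:

```python
import numpy as np

def effective_resistance(n, edges):
    """Effective resistances between all node pairs of a unit-conductance
    graph, via the Moore-Penrose pseudoinverse of the Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)
    R = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            e = np.zeros(n); e[u] += 1; e[v] -= 1
            R[u, v] = e @ Lp @ e          # R_uv = (e_u - e_v)^T L^+ (e_u - e_v)
    return R

# sanity check: a path of two unit edges behaves like resistors in series
R = effective_resistance(3, [(0, 1), (1, 2)])
assert abs(R[0, 2] - 2.0) < 1e-8
assert abs(R[0, 1] - 1.0) < 1e-8
```

A resistance sparsifier must reproduce this entire matrix up to $1+\epsilon$ using only $\tilde O(n/\epsilon)$ edges.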

A prominent tool in many problems involving metric spaces is a notion of randomized low-diameter decomposition. Loosely speaking, $\beta$-decomposition refers to a probability distribution over partitions of the metric into sets of low diameter, such that nearby points (parameterized by $\beta>0$) are likely to be "clustered" together. Applying this notion to the shortest-path metric in edge-weighted graphs, it is known that $n$-vertex graphs admit an $O(\ln n)$-padded decomposition (Bartal, 1996), and that excluded-minor graphs admit an $O(1)$-padded decomposition (Klein, Plotkin and Rao 1993, Fakcharoenphol and Talwar 2003, Abraham et al. …)
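One simple way to realize such a randomized decomposition on unweighted graphs is via exponential random shifts (in the spirit of Miller, Peng and Xu); this toy sketch is ours, not one of the constructions cited above:

```python
import random
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def exp_shift_decomposition(adj, beta, seed=0):
    """Each vertex draws an Exp(beta) head start; every vertex joins the
    cluster of the vertex that reaches it first under shifted BFS distance.
    Nearby vertices tend to see the same winner, so they cluster together."""
    random.seed(seed)
    shift = {v: random.expovariate(beta) for v in adj}
    dist = {v: bfs_dist(adj, v) for v in adj}       # O(nm), fine for a demo
    return {u: min(adj, key=lambda c: dist[c][u] - shift[c]) for u in adj}

# decompose an 8-cycle
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
cluster = exp_shift_decomposition(adj, beta=0.5)
assert set(cluster) == set(adj)        # every vertex is assigned a cluster
```

Larger $\beta$ yields smaller clusters but a higher probability of separating nearby points, mirroring the trade-off in the definition above.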

An outstanding open question posed by Guha and Indyk in 2006 asks to characterize metric spaces in which distances can be estimated using efficient sketches. Specifically, we say that a sketching algorithm is efficient if it achieves constant approximation using constant sketch size. A well-known result of Indyk (J. …

We introduce the $st$-cut version of the Sparsest-Cut problem, where the goal is to find a cut of minimum sparsity among those separating two distinguished vertices $s,t\in V$. Clearly, this problem is at least as hard as the usual (non-$st$) version. Our main result is a polynomial-time algorithm for the product-demands setting that produces a cut of sparsity $O(\sqrt{\mathrm{OPT}})$, where $\mathrm{OPT}$ denotes the optimum, and the total edge capacity and the total demand are assumed (by normalization) to be $1$.
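For intuition, the uniform-demands special case of the (non-$st$) objective can be evaluated by brute force on tiny graphs; this exponential-time sketch is purely illustrative:

```python
from itertools import combinations

def sparsest_cut(n, capacity):
    """Brute-force sparsest cut under uniform (product) demands:
    sparsity(S) = capacity(S, V-S) / (|S| * |V-S|)."""
    best = float("inf")
    for r in range(1, n // 2 + 1):        # r > n//2 repeats by symmetry
        for side in combinations(range(n), r):
            S = set(side)
            cap = sum(w for (u, v), w in capacity.items()
                      if (u in S) != (v in S))
            best = min(best, cap / (len(S) * (n - len(S))))
    return best

# 4-cycle with unit capacities: the best cut splits it into two paths,
# crossing 2 edges while separating 2*2 = 4 demand pairs
cap = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1}
assert sparsest_cut(4, cap) == 0.5
```

The $st$-cut version restricts the same minimization to bipartitions with $s$ and $t$ on opposite sides.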

Sketching and streaming algorithms are at the forefront of current research directions for cut problems in graphs. In the streaming model, we show that $(1-\epsilon)$-approximation for Max-Cut must use $n^{1-O(\epsilon)}$ space; moreover, beating $4/5$-approximation requires polynomial space. For the sketching model, we show that $r$-uniform hypergraphs admit a $(1+\epsilon)$-cut-sparsifier (i.e., …

We study spectral algorithms for the high-dimensional Nearest Neighbor Search problem (NNS). In particular, we consider a semi-random setting where a dataset $P$ in $\mathbb{R}^d$ is chosen arbitrarily from an unknown subspace of low dimension $k\ll d$, and then perturbed by fully $d$-dimensional Gaussian noise. We design spectral NNS algorithms whose query time depends polynomially on $d$ and $\log n$ (where $n=|P|$) for large ranges of $k$, $d$ and $n$.

We study the problem of sketching an input graph, so that given the sketch, one can estimate the weight of any cut in the graph within factor $1+\epsilon$. We present lower and upper bounds on the size of a randomized sketch, focusing on the dependence on the accuracy parameter $\epsilon>0$. First, we prove that for every $\epsilon > 1/\sqrt n$, every sketch that succeeds (with constant probability) in estimating the weight of all cuts $(S,\bar S)$ in an $n$-vertex graph (simultaneously), must be of size $\Omega(n/\epsilon^2)$ bits.

In edge orientations, the goal is usually to orient (direct) the edges of an undirected $n$-vertex graph $G$ such that all out-degrees are bounded. When the graph $G$ is fully dynamic, i.e., …

A useful approach to "compress" a large network $G$ is to represent it with a {\em flow-sparsifier}, i.e., a small network $H$ that supports the same flows as $G$, up to a factor $q \geq 1$ called the quality of the sparsifier.

Estimating the leading principal components of data, assuming they are sparse, is a central task in modern high-dimensional statistics. Many algorithms were developed for this sparse PCA problem, from simple diagonal thresholding to sophisticated semidefinite programming (SDP) methods. A key theoretical question is under what conditions such algorithms can recover the sparse principal components. We study this question for a single-spike model with an $\ell_0$-sparse eigenvector, in the asymptotic regime as dimension $p$ and sample size $n$ both tend to infinity.
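A toy instance of the simplest of these methods, diagonal thresholding, on a single-spike model; the parameters below are our own choices, placed well inside the recoverable regime:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, k, lam = 50, 2000, 3, 5.0

# single-spike model: x = sqrt(lam) * g * v + noise, with a k-sparse spike v
support = np.array([4, 17, 33])
v = np.zeros(p); v[support] = 1.0 / np.sqrt(k)
X = np.sqrt(lam) * rng.standard_normal((n, 1)) * v + rng.standard_normal((n, p))

# diagonal thresholding: coordinate variances on the support are inflated
# from 1 to 1 + lam * v_i^2, so the top-k diagonal entries reveal it
diag = (X ** 2).mean(axis=0)
recovered = np.sort(np.argsort(diag)[-k:])
assert set(recovered) == set(support)
```

The interesting regime studied in the paper is where this gap shrinks with $p$ and $n$, and the question is which algorithms still succeed.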

Recent advances in large-margin classification of data residing in general metric spaces (rather than Hilbert spaces) enable classification under various natural metrics, such as string edit and earthmover distance. A general framework developed for this purpose by von Luxburg and Bousquet [JMLR, 2004] left open the questions of computational efficiency and of providing direct bounds on generalization error. We design a new algorithm for classification in general metric spaces, whose runtime and accuracy depend on the doubling dimension of the data points, and can thus achieve superior classification performance in many common scenarios.

Our main result is that the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which answers in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, we prove that for every edge-weighted graph $G = (V,E,w)$ and a subset of terminals $T \subseteq V$, there is a graph $G'=(T,E',w')$ that is isomorphic to a minor of $G$, such that for every two terminals $u,v\in T$, the shortest-path distances between them in $G$ and in $G'$ satisfy $d_{G,w}(u,v) \le d_{G',w'}(u,v) \le O(\log^5|T|) \cdot d_{G,w}(u,v)$. Our existence proof actually gives a randomized polynomial-time algorithm.

We study adaptive data-dependent dimensionality reduction in the context of supervised learning in general metric spaces. Our main statistical contribution is a generalization bound for Lipschitz functions in metric spaces that are doubling, or nearly doubling. On the algorithmic front, we describe an analogue of PCA for metric spaces: namely an efficient procedure that approximates the data's intrinsic dimension, which is often much lower than the ambient dimension.

We examine the efficiency of clustering a set of points, when the encompassing metric space may be preprocessed in advance. In computational problems of this genre, there is a first stage of preprocessing, whose input is a collection of points $M$; the next stage receives as input a query set $Q\subset M$, and should report a clustering of $Q$ according to some objective, such as 1-median, in which case the answer is a point $a\in M$ minimizing $\sum_{q\in Q} d_M(a,q)$. We design fast algorithms that approximately solve such problems under standard clustering objectives like $p$-center and $p$-median, when the metric $M$ has low doubling dimension.

Given a large edge-weighted network $G$ with $k$ terminal vertices, we wish to compress it and store, using little memory, the value of the minimum cut (or equivalently, maximum flow) between every bipartition of terminals. One appealing methodology to implement a compression of $G$ is to construct a \emph{mimicking network}: a small network $G'$ with the same $k$ terminals, in which the minimum cut value between every bipartition of terminals is the same as in $G$. This notion was introduced by Hagerup, Katajainen, Nishimura, and Ragde [JCSS '98], who proved that such $G'$ of size at most $2^{2^k}$ always exists.

The significant progress in constructing graph spanners that are sparse (small number of edges) or light (low total weight) has skipped spanners that are everywhere-sparse (small maximum degree). This disparity is in line with other network design problems, where the maximum-degree objective has been a notorious technical challenge. Our main result is for the Lowest Degree 2-Spanner (LD2S) problem, where the goal is to compute a 2-spanner of an input graph so as to minimize the maximum degree.

We introduce the following notion of compressing an undirected graph $G$ with edge-lengths and terminal vertices $R\subseteq V(G)$. A distance-preserving minor is a minor $G'$ (of $G$) with possibly different edge-lengths, such that $R\subseteq V(G')$ and the shortest-path distance between every pair of terminals is exactly the same in $G$ and in $G'$. What is the smallest $f^*(k)$ such that every graph $G$ with $k=|R|$ terminals admits a distance-preserving minor $G'$ with at most $f^*(k)$ vertices? Simple analysis shows that $f^*(k)\leq O(k^4)$.

The Traveling Salesman Problem (TSP) is among the most famous NP-hard optimization problems. We design for this problem a randomized polynomial-time algorithm that computes a $(1+\epsilon)$-approximation to the optimal tour, for any fixed $\epsilon>0$, in TSP instances that form an arbitrary metric space with bounded intrinsic dimension. The celebrated results of Arora (A-98) and Mitchell (M-99) prove that the above result holds in the special case of TSP in a fixed-dimensional Euclidean space.

We present a framework for performing efficient regression in general metric spaces. Roughly speaking, our regressor predicts the value at a new point by computing a Lipschitz extension --- the smoothest function consistent with the observed data --- after performing structural risk minimization to avoid overfitting. We obtain finite-sample risk bounds with minimal structural and noise assumptions, and a natural speed-precision tradeoff.
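A bare-bones version of this idea, using McShane's extension formula with the empirical Lipschitz constant (the paper additionally performs structural risk minimization over that constant); the helper names are ours:

```python
def lipschitz_regressor(points, labels, dist):
    """Fit the smallest Lipschitz constant L consistent with the sample,
    then predict via McShane's formula f(x) = min_i (y_i + L * d(x, x_i)),
    the largest L-Lipschitz function agreeing with the data."""
    L = max((abs(labels[i] - labels[j]) / dist(points[i], points[j])
             for i in range(len(points)) for j in range(i + 1, len(points))
             if dist(points[i], points[j]) > 0), default=0.0)
    def predict(x):
        return min(y + L * dist(x, p) for p, y in zip(points, labels))
    return predict

dist = lambda a, b: abs(a - b)          # a metric on the real line
pts, ys = [0.0, 1.0, 4.0], [0.0, 2.0, 3.0]
f = lipschitz_regressor(pts, ys, dist)
assert all(f(p) == y for p, y in zip(pts, ys))   # interpolates the sample
assert f(2.0) == 4.0                             # cone from (1.0, 2.0), L = 2
```

Because only the metric $d$ is used, the same code works verbatim for edit distance, earthmover distance, or any other metric.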

We study graph partitioning problems from a min-max perspective, in which an input graph on $n$ vertices should be partitioned into $k$ parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are where the $k$ parts need to be of equal size, and where they must separate a set of $k$ given terminals. We consider a common generalization of these two problems, and design for it an $O(\sqrt{\log n\log k})$-approximation algorithm.

A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults $r$, for all stretch bounds. For stretch $k \geq 3$ we design a simple transformation that converts every $k$-spanner construction with at most $f(n)$ edges into an $r$-fault-tolerant $k$-spanner construction with at most $O(r^3 \log n) \cdot f(2n/r)$ edges.

We examine directed spanners through flow-based linear programming relaxations. We design an $\tilde{O}(n^{2/3})$-approximation algorithm for the directed $k$-spanner problem that works for all $k\geq 1$, which is the first sublinear approximation for arbitrary edge-lengths. Even in the more restricted setting of unit edge-lengths, our algorithm improves over the previous $\tilde{O}(n^{1-1/k})$ approximation of Bhattacharyya et al.

A technique introduced by Indyk and Woodruff [STOC 2005] has inspired several recent advances in data-stream algorithms. We show that a number of these results follow easily from the application of a single probabilistic method called Precision Sampling. Using this method, we obtain simple data-stream algorithms that maintain a randomized sketch of an input vector $x=(x_1,\ldots,x_n)$ …

Given a capacitated graph $G = (V,E)$ and a set of terminals $K \subseteq V$, how should we produce a graph $H$ only on the terminals $K$ so that every (multicommodity) flow between the terminals in $G$ could be supported in $H$ with low congestion, and vice versa? (Such a graph $H$ is called a flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if we allow $H$ to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier $H$ that maintains congestion up to a factor of $O(\log k/\log \log k)$, where $k = |K|$, (b) a convex combination of trees over the terminals $K$ that maintains congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in $G$.

We give the first constant-factor approximation algorithm for Sparsest Cut with general demands in bounded treewidth graphs. In contrast to previous algorithms, which rely on the flow-cut gap and/or metric embeddings, our approach exploits the Sherali-Adams hierarchy of linear programming relaxations.

We present a near-linear time algorithm that approximates the edit distance between two strings within a polylogarithmic factor; specifically, for strings of length $n$ and every fixed $\epsilon>0$, it can compute a $(\log n)^{O(1/\epsilon)}$ approximation in $n^{1+\epsilon}$ time. This is an exponential improvement over the previously known factor, $2^{O(\sqrt{\log n})}$, with a comparable running time (Ostrovsky and Rabani, J. ACM 2007; Andoni and Onak, STOC 2009).
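For reference, the exact quadratic-time dynamic program that these approximation algorithms improve upon:

```python
def edit_distance(a, b):
    """Classic O(|a|*|b|) dynamic program for edit distance, using two
    rolling rows; this is the exact baseline the approximations beat."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete a[i-1]
                         cur[j - 1] + 1,                        # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
assert edit_distance("", "abc") == 3
```

Since no strongly subquadratic exact algorithm is known, near-linear time is bought by accepting a polylogarithmic approximation factor.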

The $l_2$ flattening lemma of Johnson and Lindenstrauss [JL84] is a powerful tool for dimension reduction. It has been conjectured that the target dimension bounds can be refined and bounded in terms of the intrinsic dimensionality of the data set (for example, the doubling dimension). One such problem was proposed by Lang and Plaut [LP01] (see also [GKL03,MatousekProblems07,ABN08,CGT10]), and is still open.
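The flattening lemma itself is easy to demonstrate empirically with a Gaussian projection; the distortion bounds asserted below are deliberately loose:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 1000, 64

X = rng.standard_normal((n, d))               # n points in R^d
P = rng.standard_normal((d, k)) / np.sqrt(k)  # scaled Gaussian projection
Y = X @ P                                     # images in R^k

# JL: with k = O(eps^{-2} log n), all pairwise distances are preserved
# up to 1 +/- eps; here we only check a loose version of that
for i in range(n):
    for j in range(i + 1, n):
        ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
        assert 0.5 < ratio < 2.0
```

The open question above asks whether the $\log n$ in the target dimension can be replaced by an intrinsic quantity such as the doubling dimension of the point set.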

We devise a new embedding technique, which we call measured descent, based on decomposing a metric space locally, at varying speeds, according to the density of some probability measure. This provides a refined and unified framework for the two primary methods of constructing Fréchet embeddings for finite metrics, due to [Bourgain, 1985] and [Rao, 1999]. We prove that any $n$-point metric space $(X,d)$ embeds in Hilbert space with distortion $O(\sqrt{\alpha_X \log n})$, where $\alpha_X$ is a geometric estimate on the decomposability of $X$.
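Each coordinate of a Fréchet embedding is the distance to a subset of the space, hence 1-Lipschitz by the triangle inequality; a toy sketch in the style of Bourgain's construction, with our own subset-sampling scheme rather than the measured-descent rule:

```python
import random

def frechet_embedding(points, dist, num_coords=16, seed=0):
    """Bourgain-style Frechet embedding: each coordinate maps p to its
    distance d(p, S) from a random subset S whose size is drawn from a
    geometric range of scales."""
    random.seed(seed)
    coords = []
    for _ in range(num_coords):
        size = random.choice([1, 2, 4, 8])
        S = random.sample(points, min(size, len(points)))
        coords.append({p: min(dist(p, s) for s in S) for p in points})
    return {p: [c[p] for c in coords] for p in points}

pts = list(range(10))
dist = lambda a, b: abs(a - b)            # the line metric on 10 points
emb = frechet_embedding(pts, dist)

# non-expansion per coordinate: |d(x,S) - d(y,S)| <= d(x,y)
for x in pts:
    for y in pts:
        assert all(abs(cx - cy) <= dist(x, y)
                   for cx, cy in zip(emb[x], emb[y]))
```

The hard part of such proofs is the other direction, showing that enough coordinates also *preserve* each distance up to the claimed distortion; that is where the measure-based decomposition enters.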