Andrew McGregor - University of Massachusetts


Contact Details

Name: Andrew McGregor
Affiliation: University of Massachusetts
City: Amherst
Country: United States


Pub Categories

Computer Science - Data Structures and Algorithms (13)
Mathematics - Information Theory (4)
Computer Science - Information Theory (4)
Computer Science - Databases (2)
Computer Science - Cryptography and Security (1)
Computer Science - Computational Complexity (1)
Computer Science - Learning (1)
Computer Science - Computational Geometry (1)
Statistics - Machine Learning (1)

Publications Authored By Andrew McGregor

We present a data stream algorithm for estimating the size of the maximum matching of a low-arboricity graph. Recall that a graph has arboricity $\alpha$ if its edges can be partitioned into at most $\alpha$ forests, and that every planar graph has arboricity $\alpha \le 3$. Estimating the size of the maximum matching in such graphs has been a focus of recent data stream research.
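
As a rough illustration of the flavor of such estimators (and not the algorithm of this paper), one simple one-pass heuristic counts edges both of whose endpoints still have small degree when the edge arrives; in a bounded-arboricity graph this count can be related, up to a factor depending on $\alpha$, to the maximum matching size. The threshold below is an assumption chosen for illustration only:

```python
from collections import defaultdict

def estimate_matching_size(edge_stream, alpha):
    # One-pass heuristic: count edges whose endpoints both still have
    # degree <= alpha at the moment the edge arrives ("shallow" edges).
    # Illustrative sketch only: the threshold and its relationship to the
    # maximum matching size are assumptions, not the paper's algorithm,
    # and this toy version stores all vertex degrees rather than a sketch.
    degree = defaultdict(int)
    shallow = 0
    for u, v in edge_stream:
        if degree[u] <= alpha and degree[v] <= alpha:
            shallow += 1
        degree[u] += 1
        degree[v] += 1
    return shallow

# Example: a path on five vertices (a forest, so arboricity 1).
# The returned count is an estimate related to, not equal to, the matching size.
print(estimate_matching_size([(0, 1), (1, 2), (2, 3), (3, 4)], alpha=1))
```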

We study the classic NP-hard problem of finding the maximum $k$-set coverage in the data stream model: given a set system of $m$ sets that are subsets of a universe $\{1,\ldots,n\}$, find the $k$ sets that cover the largest number of distinct elements. The problem can be approximated up to a factor $1-1/e$ in polynomial time. In the streaming-set model, the sets and their elements are revealed online.
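
For context, the $1-1/e$ factor mentioned above is achieved offline by the classic greedy algorithm, which repeatedly picks the set with the largest marginal coverage. A minimal sketch of that baseline (not the streaming algorithm studied in the paper):

```python
def greedy_max_coverage(sets, k):
    """Classic offline greedy: pick k sets maximizing marginal coverage.
    Achieves a (1 - 1/e) approximation to maximum k-set coverage."""
    covered = set()
    chosen = []
    remaining = list(sets)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda s: len(set(s) - covered))
        chosen.append(best)
        covered |= set(best)
        remaining.remove(best)
    return chosen, len(covered)

# Example: universe {1..6}, pick k = 2 sets.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, k=2))  # picks {1,2,3} and {4,5,6}, covering 6 elements
```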

In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a $(1+\epsilon)$-approximation of the maximum density with high probability; the algorithm uses $O(\epsilon^{-2} n \cdot \mathrm{polylog}(n))$ space, processes each stream update in $\mathrm{polylog}(n)$ time, and uses $\mathrm{poly}(n)$ post-processing time, where $n$ is the number of nodes.
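
As background, the maximum density $\max_S |E(S)|/|S|$ can be 2-approximated offline by Charikar's peeling procedure: repeatedly remove a minimum-degree vertex and remember the best density seen. The sketch below is that standard offline baseline, not the single-pass sketch-based algorithm of the paper:

```python
import heapq
from collections import defaultdict

def densest_subgraph_peeling(edges):
    """Greedy peeling: a 2-approximation to max_S |E(S)|/|S| (offline baseline)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = sum(len(adj[v]) for v in nodes) // 2
    best = m / len(nodes) if nodes else 0.0
    heap = [(len(adj[v]), v) for v in nodes]
    heapq.heapify(heap)
    while nodes:
        d, v = heapq.heappop(heap)
        if v not in nodes or d != len(adj[v]):
            continue                      # stale heap entry
        nodes.remove(v)
        m -= len(adj[v])                  # edges lost by removing v
        for w in adj[v]:
            adj[w].discard(v)
            heapq.heappush(heap, (len(adj[w]), w))
        adj[v].clear()
        if nodes:
            best = max(best, m / len(nodes))
    return best

# Example: a triangle plus a pendant vertex; the densest subgraph is the triangle (density 1.0).
print(densest_subgraph_peeling([(1, 2), (2, 3), (1, 3), (3, 4)]))
```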

The degree distribution is one of the most fundamental graph properties of interest for real-world graphs. It has been widely observed in numerous domains that graphs typically have a heavy-tailed or scale-free degree distribution. While the average degree is usually quite small, the variance is quite high and there are vertices with degrees at all scales.

In this paper we present a simple but powerful subgraph sampling primitive that is applicable in a variety of computational models including dynamic graph streams (where the input graph is defined by a sequence of edge/hyperedge insertions and deletions) and distributed systems such as MapReduce. In the case of dynamic graph streams, we use this primitive to prove the following results. Matching: first, there exists an $\tilde{O}(k^2)$-space algorithm that returns an exact maximum matching under the assumption that its cardinality is at most $k$. The best previous algorithm used $\tilde{O}(kn)$ space, where $n$ is the number of vertices in the graph, and we prove our result is optimal up to logarithmic factors.

2015 Apr
Affiliations: (1) Stony Brook University, (2) Stony Brook University, (3) University of Massachusetts, Amherst, (4) Stony Brook University, (5) University of Massachusetts, Amherst

In this paper, we revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size $M$, and output runs (contiguously sorted chunks of elements) that are as long as possible. We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted.
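
For reference, the classic way to generate runs with a buffer of $M$ elements is replacement selection, which emits ascending runs of expected length roughly $2M$ on random input. The sketch below is that textbook baseline, producing ascending runs only; the algorithms in the paper additionally exploit reverse-sorted runs:

```python
import heapq

def replacement_selection(stream, M):
    """Classic replacement selection: keep a buffer of M elements in a min-heap
    and emit ascending runs; an incoming element smaller than the last value
    written is tagged for the next run."""
    it = iter(stream)
    heap = []                              # entries are (run_id, value)
    for x in it:
        heapq.heappush(heap, (0, x))
        if len(heap) == M:
            break
    runs, current, run_id = [], [], 0
    for x in it:                           # steady state: emit one, admit one
        rid, val = heapq.heappop(heap)
        if rid != run_id:                  # buffer now holds only next-run elements
            runs.append(current)
            current, run_id = [], rid
        current.append(val)
        heapq.heappush(heap, (run_id if x >= val else run_id + 1, x))
    while heap:                            # drain the buffer at end of input
        rid, val = heapq.heappop(heap)
        if rid != run_id:
            runs.append(current)
            current, run_id = [], rid
        current.append(val)
    if current:
        runs.append(current)
    return runs

# Example with a buffer of M = 3 elements: produces two ascending runs.
print(replacement_selection([5, 1, 8, 2, 9, 3, 7, 4, 6], 3))
```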

Information distances like the Hellinger distance and the Jensen-Shannon divergence have deep roots in information theory and machine learning. They are used extensively in data analysis, especially when the objects being compared are high-dimensional empirical probability distributions built from data. However, we lack the common tools needed to actually use information distances in applications efficiently, at scale, and with any kind of provable guarantees.
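
For concreteness, both distances mentioned are easy to compute exactly for small explicit distributions; the difficulty addressed in this line of work is doing so approximately, at scale, and with guarantees. A minimal reference implementation:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions
    given as equal-length lists of probabilities."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q)) / 2)

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats (terms with p_i = 0 are dropped)."""
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetrized, bounded variant of KL."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(hellinger(p, q), jensen_shannon(p, q))
```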

2012 Jun
Affiliations: (1) University of Massachusetts, (2) University of Massachusetts, (3) University of Massachusetts

We introduce a new spatial data structure for high-dimensional data called the approximate principal direction tree (APD tree) that adapts to the intrinsic dimension of the data. Our algorithm ensures vector-quantization accuracy similar to that of computationally expensive PCA trees, with time complexity similar to that of lower-accuracy RP trees. APD trees use a small number of power-method iterations to find splitting planes for recursively partitioning the data.
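
The splitting step can be pictured as follows: run a few power-method iterations to approximate the top principal direction of the (centered) points in a cell, then split the cell at the median projection onto that direction. This is a simplified illustration of the idea described above; the iteration count, leaf size, and median split rule are choices made here for illustration rather than details taken from the paper.

```python
import numpy as np

def approx_principal_direction(X, iters=5, seed=0):
    """Approximate the top principal direction of the rows of X
    using a few power-method iterations on the covariance (applied implicitly)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                  # center the data
    v = rng.normal(size=X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = Xc.T @ (Xc @ v)                  # one power-method step: v <- C v
        v /= np.linalg.norm(v)
    return v

def build_tree(X, leaf_size=16, iters=5):
    """Recursively partition points by median projection onto an
    approximate principal direction (illustrative APD-style tree)."""
    if len(X) <= leaf_size:
        return {"leaf": X}
    v = approx_principal_direction(X, iters)
    proj = X @ v
    t = np.median(proj)
    left, right = X[proj <= t], X[proj > t]
    if len(left) == 0 or len(right) == 0:    # degenerate split; stop recursing
        return {"leaf": X}
    return {"dir": v, "threshold": t,
            "left": build_tree(left, leaf_size, iters),
            "right": build_tree(right, leaf_size, iters)}

tree = build_tree(np.random.default_rng(1).normal(size=(200, 10)))
```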

This paper makes three main contributions to the theory of communication complexity and stream computation. First, we present new bounds on the information complexity of AUGMENTED-INDEX. In contrast to analogous results for INDEX by Jain, Radhakrishnan and Sen [J.

Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. Despite much recent work, optimal strategies for answering a collection of correlated queries are not known. We study the problem of devising a set of strategy queries, to be submitted and answered privately, that will support the answers to a given workload of queries.
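
One way to picture "strategy queries" is the linear-algebraic view often called the matrix mechanism: answer a strategy matrix $A$ applied to the data histogram with Laplace noise calibrated to $A$'s sensitivity, then reconstruct answers to the workload $W$ by least squares. The sketch below illustrates that general idea under standard assumptions (histogram data, Laplace noise); it is not necessarily the exact formulation used in the paper.

```python
import numpy as np

def answer_workload(W, A, x, epsilon, rng):
    """Answer workload W via strategy A under epsilon-differential privacy.
    x is the data histogram; neighboring datasets change one count by 1,
    so the L1 sensitivity of A @ x is the maximum column L1 norm of A."""
    sensitivity = np.abs(A).sum(axis=0).max()
    noisy = A @ x + rng.laplace(scale=sensitivity / epsilon, size=A.shape[0])
    x_hat = np.linalg.pinv(A) @ noisy        # least-squares estimate of the histogram
    return W @ x_hat                         # reconstructed workload answers

rng = np.random.default_rng(0)
x = np.array([20., 35., 10., 5.])            # toy histogram over 4 cells
W = np.array([[1, 1, 0, 0],                  # workload: two range-count queries
              [0, 0, 1, 1]])
A = np.eye(4)                                # strategy: answer each cell directly
print(answer_workload(W, A, x, epsilon=1.0, rng=rng), "true:", W @ x)
```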

Estimating frequency moments of data streams is a very well studied problem, and tight bounds are known on the amount of space that is necessary and sufficient when the stream is adversarially ordered. Recently, motivated by various practical considerations and applications in learning and statistics, there has been growing interest in studying streams that are randomly ordered. In this paper we improve the previous lower bounds on the space required to estimate the frequency moments of randomly ordered streams.
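
For context on the problem itself (independent of stream order), the second frequency moment $F_2 = \sum_i f_i^2$ can be estimated in small space with the classic AMS sketch: maintain a few random $\pm 1$ linear combinations of the frequency vector and combine the squares by a median of means. The sketch below is that standard upper-bound-side algorithm, shown only as background; it is not the lower-bound machinery of this paper.

```python
import statistics
import zlib

def sign(seed, item):
    """Deterministic pseudo-random +/-1 hash of an item (heuristic stand-in
    for the pairwise-independent sign functions used in the analysis)."""
    return 1 if zlib.crc32(f"{seed}:{item}".encode()) & 1 else -1

def ams_f2(stream, repetitions=60):
    """AMS estimator for F2 = sum_i f_i^2: each counter tracks
    sum_i sign_j(i) * f_i, and E[counter^2] = F2."""
    counters = [0] * repetitions
    for item in stream:
        for j in range(repetitions):
            counters[j] += sign(j, item)
    # Median of means over groups of 10 counters reduces the variance.
    means = [statistics.mean(c * c for c in counters[k:k + 10])
             for k in range(0, repetitions, 10)]
    return statistics.median(means)

stream = ["a"] * 4 + ["b"] * 2 + ["c"]       # true F2 = 16 + 4 + 1 = 21
print(ams_f2(stream))
```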

There is a growing body of work on sorting and selection in models other than the unit-cost comparison model. This work is the first treatment of a natural stochastic variant of the problem where the cost of comparing two elements is a random variable. Each cost is chosen independently and is known to the algorithm.

We prove that approximating the size of stopping and trapping sets in Tanner graphs of linear block codes, and more restrictively in Tanner graphs of low-density parity-check (LDPC) codes, is NP-hard. The ramifications of our findings are that methods used for estimating the height of the error floor of moderate- and long-length LDPC codes based on stopping and trapping set enumeration cannot provide accurate worst-case performance predictions.

The probabilistic-stream model was introduced by Jayram et al. [JKV07]. It is a generalization of the data stream model that is suited to handling "probabilistic" data, where each item of the stream represents a probability distribution over a set of possible events.
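
Concretely, each stream item in this model can be thought of as a small discrete distribution; by linearity of expectation, aggregates such as expected frequencies are easy to track, whereas quantities like distinct counts or medians require more care. A toy illustration of the data model (the representation below is my assumption, not notation from the paper):

```python
from collections import defaultdict

# Each probabilistic item is a distribution over possible events.
probabilistic_stream = [
    {"cat": 0.9, "dog": 0.1},
    {"dog": 0.5, "bird": 0.5},
    {"cat": 1.0},
]

expected_freq = defaultdict(float)
for item in probabilistic_stream:
    for event, prob in item.items():
        expected_freq[event] += prob   # linearity of expectation

print(dict(expected_freq))             # {'cat': 1.9, 'dog': 0.6, 'bird': 0.5}
```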

In many problems in data mining and machine learning, data items that need to be clustered or classified are not points in a high-dimensional space, but are distributions (points on a high-dimensional simplex). For distributions, natural measures of distance are not the $\ell_p$ norms and variants, but information-theoretic measures like the Kullback-Leibler distance, the Hellinger distance, and others. Efficient estimation of these distances is a key component in algorithms for manipulating distributions.

We address the problem of bounding below the probability of error under maximum likelihood decoding of a binary code with a known distance distribution used on a binary symmetric channel. An improved upper bound is given for the maximum attainable exponent of this probability (the reliability function of the channel). In particular, we prove that the "random coding exponent" is the true value of the channel reliability for code rate $R$ in some interval immediately below the critical rate of the channel.
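
For reference, the random coding exponent referred to here is Gallager's classical bound; for the binary symmetric channel with crossover probability $p$ and rate $R$ measured in bits, it takes the form (standard background, not the improved bound proved in the paper)

$$E_r(R) = \max_{0 \le \rho \le 1}\bigl[E_0(\rho) - \rho R\bigr], \qquad E_0(\rho) = \rho - (1+\rho)\log_2\!\left(p^{\frac{1}{1+\rho}} + (1-p)^{\frac{1}{1+\rho}}\right).$$

The critical rate is the rate below which the maximizing $\rho$ saturates at $1$ and $E_r(R)$ becomes linear in $R$; the result stated above is that the true reliability function coincides with $E_r(R)$ in an interval just below this rate.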