Rina Panigrahy

Pub Categories

Computer Science - Data Structures and Algorithms (8)
Computer Science - Learning (5)
Computer Science - Computational Geometry (3)
Statistics - Machine Learning (2)
Computer Science - Computer Science and Game Theory (1)
Computer Science - Information Retrieval (1)
Computer Science - Logic in Computer Science (1)
Computer Science - Digital Libraries (1)
Physics - Data Analysis; Statistics and Probability (1)
Computer Science - Artificial Intelligence (1)
Physics - Physics and Society (1)
Computer Science - Computational Complexity (1)

Publications Authored By Rina Panigrahy

We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem. We obtain the first provably good approximation algorithms for this version of low-rank approximation that work for every value of $p \geq 1$, including $p = \infty$. Our algorithms are simple, easy to implement, work well in practice, and illustrate interesting tradeoffs between the approximation quality, the running time, and the rank of the approximating matrix.
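As a minimal illustration of the objective (not the algorithms of the paper), the sketch below computes the entrywise $\ell_p$ error of a rank-$k$ approximation obtained from a truncated SVD, which is the optimal choice for $p = 2$ but generally not for other $p$; the matrix sizes and the rank are assumed for illustration:

```python
import numpy as np

def entrywise_lp_error(A, B, p):
    """Entrywise l_p norm of A - B; p = np.inf gives the max absolute entry."""
    D = np.abs(A - B)
    return D.max() if np.isinf(p) else float((D ** p).sum() ** (1.0 / p))

def truncated_svd(A, k):
    """Best rank-k approximation under the p = 2 (Frobenius) objective."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
B = truncated_svd(A, k=5)
for p in (1.0, 2.0, np.inf):
    print(f"p = {p}: error = {entrywise_lp_error(A, B, p):.3f}")
```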

We study the efficacy of learning neural networks with neural networks by the (stochastic) gradient descent method. While gradient descent enjoys empirical success in a variety of applications, there is a lack of theoretical guarantees that explain the practical utility of deep learning. We focus on two-layer neural networks with a linear activation on the output node.
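For concreteness, here is a minimal sketch of the architecture in question: a two-layer network with a linear activation on the output node, trained by plain SGD on data from a random teacher network. The ReLU hidden layer, the sizes, and the step size are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h, n, lr = 10, 20, 500, 0.05            # illustrative sizes and step size

# Teacher network produces the labels; the student tries to fit it.
W_star = rng.standard_normal((h, d)) / np.sqrt(d)
a_star = rng.standard_normal(h) / np.sqrt(h)
X = rng.standard_normal((n, d))
y = np.maximum(X @ W_star.T, 0) @ a_star   # ReLU hidden layer, linear output

W = rng.standard_normal((h, d)) / np.sqrt(d)   # student parameters
a = rng.standard_normal(h) / np.sqrt(h)
for step in range(20000):
    i = rng.integers(n)                    # stochastic: one sample per step
    hidden = np.maximum(W @ X[i], 0)
    err = hidden @ a - y[i]                # residual at the linear output node
    grad_a = err * hidden
    grad_W = err * np.outer(a * (hidden > 0), X[i])
    a -= lr * grad_a
    W -= lr * grad_W
print("mean squared loss:", np.mean((np.maximum(X @ W.T, 0) @ a - y) ** 2))
```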

We investigate the problem of factorizing a matrix into several sparse matrices and propose an algorithm for this under randomness and sparsity assumptions. This problem can be viewed as a simplification of the deep learning problem where finding a factorization corresponds to finding edges in different layers and values of hidden units. We prove that under certain assumptions for a sparse linear deep network with $n$ nodes in each layer, our algorithm is able to recover the structure of the network and values of top-layer hidden units for depths up to $\tilde O(n^{1/6})$.
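A hedged sketch of the problem setup only (not the recovery algorithm of the paper): compose a few random sparse layers and observe that the product hides the sparsity of the factors. The density and depth below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, depth, density = 100, 3, 0.05   # illustrative network size and sparsity

def random_sparse(n, density):
    """Random n x n matrix with each entry nonzero independently."""
    mask = rng.random((n, n)) < density
    return mask * rng.standard_normal((n, n))

layers = [random_sparse(n, density) for _ in range(depth)]
M = layers[0]
for A in layers[1:]:
    M = M @ A   # only the product M is observed; the factors are hidden

print("density of each factor:", [float(np.mean(A != 0)) for A in layers])
print("density of the product:", float(np.mean(M != 0)))
```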

We consider the classical question of predicting binary sequences and study the {\em optimal} algorithms for obtaining the best possible regret and payoff functions for this problem. The question also turns out to be equivalent to the problem of optimal trade-offs between the regrets of two experts in an "experts problem", studied before by \cite{kearns-regret}. While, say, a regret of $\Theta(\sqrt{T})$ is known, we argue that it is important to ask what the provably optimal algorithm for this problem is, both because it leads to natural algorithms and because regret is in fact often comparable in magnitude to the final payoffs and hence is a non-negligible term.

Fractals are self-similar recursive structures that have been used in modeling several real-world processes. In this work we study how "fractal-like" processes arise in a prediction game where an adversary is generating a sequence of bits and an algorithm is trying to predict them. We will see that under a certain formalization of the predictive payoff for the algorithm, it is optimal for the adversary to produce a fractal-like sequence in order to minimize the algorithm's ability to predict.

Consider the classical problem of predicting the next bit in a sequence of bits. A standard performance measure is {\em regret} (loss in payoff) with respect to a set of experts. For example, if we measure performance with respect to two constant experts, one that always predicts 0 and another that always predicts 1, it is well known that one can achieve regret $O(\sqrt{T})$ with respect to the best expert by using, say, the weighted majority algorithm.
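As a hedged sketch, here is the (randomized) weighted majority scheme mentioned above, run against the two constant experts; the learning rate and the biased-coin input sequence are illustrative choices:

```python
import math
import random

def weighted_majority(bits, eta=0.1):
    """Randomized weighted majority over two experts: 'always 0' and 'always 1'."""
    w = [1.0, 1.0]                       # weight of expert 0 and expert 1
    mistakes, expert_mistakes = 0, [0, 0]
    for b in bits:
        p1 = w[1] / (w[0] + w[1])        # probability of predicting 1
        pred = 1 if random.random() < p1 else 0
        mistakes += (pred != b)
        for e in (0, 1):                 # expert e always predicts e
            if e != b:
                expert_mistakes[e] += 1
                w[e] *= math.exp(-eta)   # penalize the wrong expert
    return mistakes, min(expert_mistakes)

random.seed(0)
bits = [1 if random.random() < 0.7 else 0 for _ in range(10000)]  # biased coin
m, best = weighted_majority(bits)
print(f"algorithm mistakes: {m}, best expert mistakes: {best}, regret: {m - best}")
```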

There is a vast body of prior work that studies models for mental processes. Some studies in psychology and philosophy approach it from an inner perspective, in terms of experiences and percepts. Others, such as neurobiology or connectionist machines, approach it externally, by viewing the mind as a complex circuit of neurons where each neuron is a primitive binary circuit.

Can (scientific) knowledge be reliably preserved over the long term? We have today very efficient and reliable methods to encode, store and retrieve data in a storage medium that is fault tolerant against many types of failures. But does this guarantee -- or does it even seem likely -- that all knowledge can be preserved over thousands of years and beyond? History shows that many types of knowledge that were once known have been lost. We observe that the nature of stored and communicated information, and the way it is interpreted, is such that it always tends to decay and therefore must eventually be lost in the long term.

The paper explores known results related to the problem of identifying whether a given program terminates on all inputs -- a simple generalization of the halting problem. We will see how this problem is related to the notion of proof verifiers. We also see how verifying whether a program is terminating involves reasoning through a tower of axiomatic theories -- such towers of theories are known as Turing progressions and were first studied by Alan Turing in the 1930s.
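The core reduction behind this generalization is short enough to sketch: a decider for "terminates on all inputs" would solve the halting problem. The toy model below (programs as Python generators, one yield per step) is an illustration of the standard construction, not code from the paper:

```python
def halts_within(program, x, n):
    """Step-bounded simulator: programs are generators, one yield per step."""
    g = program(x)
    try:
        for _ in range(n):
            next(g)
    except StopIteration:
        return True                  # the program halted within n steps
    return False

def make_Q(program, x):
    """Q is total (halts on every n) iff `program` never halts on x."""
    def Q(n):
        if halts_within(program, x, n):
            while True:              # diverge on purpose
                pass
        return 0
    return Q

def P(x):                            # a toy program that halts after 5 steps
    for _ in range(5):
        yield

Q = make_Q(P, 0)
print(Q(3))   # returns 0, since P has not halted within 3 steps
# Q(10) would loop forever, so Q is not total; a totality decider applied
# to Q would therefore reveal whether P halts on x -- a contradiction.
```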

We present a formal model for studying fashion trends, in terms of three parameters of fashionable items: (1) their innate utility; (2) individual boredom associated with repeated usage of an item; and (3) social influences associated with the preferences of other people. While there are several works that emphasize the effect of social influence in understanding fashion trends, in this paper we show how boredom plays a strong role in both individual and social choices. We show how boredom can be used to explain the cyclic choices in several scenarios such as an individual who has to pick a restaurant to visit every day, or a society that has to repeatedly `vote' on a single fashion style from a collection.
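A hedged toy simulation of the individual case: an agent repeatedly picks the restaurant maximizing innate utility minus an accumulated boredom penalty. The utilities and decay parameters are assumptions for illustration; the emergent cycling is the point:

```python
utilities = [1.0, 0.8, 0.6]      # innate utility of three restaurants (assumed)
boredom = [0.0, 0.0, 0.0]
BOREDOM_GAIN, DECAY = 0.5, 0.7   # illustrative boredom parameters

choices = []
for day in range(30):
    # Pick the option with the best utility net of accumulated boredom.
    scores = [u - b for u, b in zip(utilities, boredom)]
    pick = scores.index(max(scores))
    choices.append(pick)
    # Boredom grows for the chosen option and decays for the others.
    for i in range(len(boredom)):
        boredom[i] = boredom[i] * DECAY + (BOREDOM_GAIN if i == pick else 0.0)

print("".join(str(c) for c in choices))   # the agent cycles through options
                                          # instead of settling on option 0
```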

Consider a sequence of bits where we are trying to predict the next bit from the previous bits. Assume we are allowed to say 'predict 0' or 'predict 1', and our payoff is +1 if the prediction is correct and -1 otherwise. We will say that at each point in time the loss of an algorithm is the number of wrong predictions minus the number of right predictions so far.
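In code, this loss bookkeeping is simply the following (a trivial sketch, with a made-up example sequence):

```python
def cumulative_loss(bits, predictions):
    """Loss after each step: wrong predictions minus right ones so far."""
    loss, history = 0, []
    for b, p in zip(bits, predictions):
        loss += -1 if p == b else 1     # payoff +1 when correct, -1 otherwise
        history.append(loss)
    return history

print(cumulative_loss([0, 1, 1, 0], [0, 0, 1, 0]))   # [-1, 0, -1, -2]
```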

In this paper we show how the complexity of performing nearest neighbor search (NNS) on a metric space is related to the expansion of the metric space. Given a metric space, we look at the graph obtained by connecting every pair of points within a certain distance $r$. We then look at various notions of expansion in this graph, relating them to the cell-probe complexity of NNS for randomized and deterministic, exact and approximate algorithms.
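A hedged sketch of the construction: connect points at distance at most $r$ and measure the vertex expansion of a vertex set. The paper relates several notions of expansion to NNS complexity; this is just one of them, computed on assumed random data:

```python
import itertools
import math
import random

def r_graph(points, r):
    """Adjacency lists of the graph joining points at distance <= r."""
    adj = {i: set() for i in range(len(points))}
    for i, j in itertools.combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) <= r:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def vertex_expansion(adj, S):
    """Neighbors of S outside S, relative to the size of S."""
    boundary = set().union(*(adj[v] for v in S)) - S
    return len(boundary) / len(S)

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(200)]
adj = r_graph(pts, r=0.1)
S = set(range(50))                   # an arbitrary set of 50 vertices
print("vertex expansion of S:", vertex_expansion(adj, S))
```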

Click-through rates (CTR) offer useful user feedback that can be used to infer the relevance of search results for queries. However, it is not very meaningful to look at the raw click-through rate of a search result, because the likelihood of a result being clicked depends not only on its relevance but also on the position in which it is displayed. One model of browsing behavior, the {\em Examination Hypothesis} \cite{RDR07,Craswell08,DP08}, states that each position has a certain probability of being examined, and that an examined result is then clicked based on the relevance of its search snippet.
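Under the Examination Hypothesis the click probability factorizes into an examination probability for the position times a relevance term for the result, so relevance can be estimated by dividing the raw CTR by the examination probability of the position. A hedged sketch with made-up examination probabilities and click logs:

```python
# Assumed probabilities that a user examines each rank (illustrative values).
EXAMINE = {1: 0.9, 2: 0.6, 3: 0.4}

# Toy click logs: (result, position) -> (impressions, clicks).
logs = {
    ("doc_a", 1): (1000, 630),
    ("doc_b", 2): (1000, 300),
    ("doc_b", 3): (500, 100),
}

# Examination Hypothesis: CTR(a, p) = examine(p) * relevance(a),
# so relevance(a) is estimated as CTR(a, p) / examine(p).
for (doc, pos), (views, clicks) in logs.items():
    ctr = clicks / views
    print(f"{doc} @ position {pos}: raw CTR {ctr:.2f}, "
          f"relevance estimate {ctr / EXAMINE[pos]:.2f}")
# doc_b gets the same relevance estimate (0.50) from both positions, even
# though its raw CTR differs by position.
```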

Estimating frequency moments of data streams is a very well studied problem, and tight bounds are known on the amount of space that is necessary and sufficient when the stream is adversarially ordered. Recently, motivated by various practical considerations and applications in learning and statistics, there has been growing interest in studying streams that are randomly ordered. In this paper we improve the previous lower bounds on the space required to estimate the frequency moments of a randomly ordered stream.
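For reference, the $k$-th frequency moment of a stream is $F_k = \sum_i f_i^k$, where $f_i$ is the number of occurrences of item $i$. The sketch below computes $F_k$ exactly and also runs an AMS-style estimator for $F_2$; it illustrates the quantity being estimated, not the lower-bound techniques of the paper, and it idealizes the estimator by storing a random sign per item instead of using a 4-wise independent hash:

```python
import random
from collections import Counter

def exact_Fk(stream, k):
    """F_k = sum over distinct items of (frequency ** k)."""
    return sum(c ** k for c in Counter(stream).values())

def ams_F2(stream, trials=200, seed=4):
    """Averaged AMS estimator: E[(sum_i sign(i) * f_i)^2] = F_2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sign, z = {}, 0
        for x in stream:
            if x not in sign:
                sign[x] = rng.choice((-1, 1))   # idealized random sign
            z += sign[x]
        total += z * z
    return total / trials

gen = random.Random(0)
stream = [gen.choice("abcde") for _ in range(2000)]
print("exact F_2:", exact_Fk(stream, 2))
print("AMS estimate of F_2:", ams_F2(stream))
```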

Given a metric space $(X,d_X)$, $c\ge 1$, $r>0$, and $p,q\in [0,1]$, a distribution over mappings $h : X\to \mathbb{N}$ is called an $(r,cr,p,q)$-sensitive hash family if any two points in $X$ at distance at most $r$ are mapped by $h$ to the same value with probability at least $p$, and any two points at distance greater than $cr$ are mapped by $h$ to the same value with probability at most $q$. This notion was introduced by Indyk and Motwani in 1998 as the basis for an efficient approximate nearest neighbor search algorithm, and has since been used extensively for this purpose. The performance of these algorithms is governed by the parameter $\rho=\frac{\log(1/p)}{\log(1/q)}$, and constructing hash families with small $\rho$ automatically yields improved nearest neighbor algorithms.
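A concrete hedged instance for the Hamming cube: the classic bit-sampling family, where $h$ reads a random coordinate, so $p = 1 - r/d$ and $q = 1 - cr/d$, giving $\rho = \frac{\log(1/p)}{\log(1/q)} \le 1/c$. The parameters below are illustrative:

```python
import math
import random

d, r, c = 128, 8, 2          # dimension, radius, approximation factor (assumed)

# Bit-sampling family for the Hamming cube: h_i(x) = x[i], i chosen at random.
p = 1 - r / d                # collision probability at distance exactly r
q = 1 - c * r / d            # collision probability at distance cr
rho = math.log(1 / p) / math.log(1 / q)
print(f"p = {p:.3f}, q = {q:.3f}, rho = {rho:.3f}")   # rho <= 1/c = 0.5

rng = random.Random(5)
x = [rng.randint(0, 1) for _ in range(d)]
y = x[:]
for i in rng.sample(range(d), r):    # y is at Hamming distance exactly r from x
    y[i] ^= 1

i = rng.randrange(d)                 # draw one hash function from the family
print("h(x) == h(y):", x[i] == y[i]) # holds with probability 1 - r/d
```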

In this paper we study the problem of finding the approximate nearest neighbor of a query point in high-dimensional space, focusing on the Euclidean space. The earlier approaches use locality-preserving hash functions (that tend to map nearby points to the same value) to construct several hash tables, so as to ensure that the query point hashes to the same bucket as its nearest neighbor in at least one table. Our approach is different: we use one (or a few) hash table and hash several randomly chosen points in the neighborhood of the query point, showing that at least one of them will hash to the bucket containing its nearest neighbor.
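A hedged sketch of this idea over the Hamming cube with a single table: perturb the query several times and probe the table with each perturbation. The hash family (bit sampling) and all parameters are illustrative assumptions, not the paper's exact scheme:

```python
import random

rng = random.Random(6)
d, n_bits, probes = 64, 12, 16       # illustrative parameters

coords = rng.sample(range(d), n_bits)
def h(x):                            # one bit-sampling hash for the single table
    return tuple(x[i] for i in coords)

points = [[rng.randint(0, 1) for _ in range(d)] for _ in range(500)]
table = {}
for p in points:
    table.setdefault(h(p), []).append(p)

query = points[0][:]                 # a query near points[0]
for i in rng.sample(range(d), 3):
    query[i] ^= 1

def probe(query):
    """Hash random perturbations of the query; collect all candidate buckets."""
    candidates = []
    for _ in range(probes):
        z = query[:]
        for i in rng.sample(range(d), 3):   # random point near the query
            z[i] ^= 1
        candidates.extend(table.get(h(z), []))
    return candidates

found = any(c == points[0] for c in probe(query))
print("near neighbor found among candidates:", found)
```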

The study of hashing is closely related to the analysis of balls and bins. It is well known that if, instead of using a single hash function, we hash each ball into two random bins and place it in the less loaded of the two, then the maximum load on the bins drops dramatically. This leads to the concept of two-way hashing, where the largest bucket contains $O(\log\log n)$ balls with high probability.
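A hedged simulation comparing the two regimes: with one random choice per ball the maximum load is typically around $\log n/\log\log n$, while with two choices it drops to around $\log\log n$:

```python
import random

def max_load(n, choices, seed=7):
    """Throw n balls into n bins; each ball picks `choices` random bins
    and goes into the least loaded of them."""
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        picks = [rng.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda b: bins[b])
        bins[best] += 1
    return max(bins)

n = 100_000
print("one choice :", max_load(n, 1))
print("two choices:", max_load(n, 2))
```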

We study the problem of covering a given set of $n$ points in a high, $d$-dimensional space by the minimum enclosing polytope of a given arbitrary shape. We present algorithms that work for a large family of shapes, provided either only translations and no rotations are allowed, or only rotation about a fixed point is allowed; that is, one is allowed only to scale and translate a given shape, or to scale and rotate the shape around a fixed point. Our algorithms start with a polytope guessed to be of optimal size and iteratively move it based on a greedy principle: simply move the current polytope directly towards any outside point until it touches the surface.
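As a hedged illustration of this greedy principle on the simplest shape, a ball (a Badoiu-Clarkson style iteration, not necessarily the paper's exact update rule): step the center toward the farthest point with a shrinking step size, and accept the guessed radius once every point is covered:

```python
import math
import random

def greedy_enclosing_ball(points, radius, steps=3000):
    """Try to place a ball of the given radius covering all points by
    repeatedly stepping the center toward the farthest point."""
    center = list(points[0])
    for t in range(1, steps + 1):
        far = max(points, key=lambda p: math.dist(center, p))
        if math.dist(center, far) <= radius:
            return center                      # this radius suffices
        # Greedy move toward the violating point, with a shrinking step.
        center = [c + (f - c) / (t + 1) for c, f in zip(center, far)]
    return None                                # no placement found

random.seed(8)
pts = [(random.random(), random.random()) for _ in range(200)]
for radius in (0.5, 0.6, 0.7, 0.8):
    ok = greedy_enclosing_ball(pts, radius) is not None
    print(f"radius {radius}: {'covers all points' if ok else 'failed'}")
```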