How Hard is Computing Parity with Noisy Communications?

We show a tight lower bound of $\Omega(N \log\log N)$ on the number of transmissions required to compute the parity of $N$ input bits with constant error in a noisy communication network of $N$ randomly placed sensors, each holding one input bit and communicating with others using local transmissions at power near the connectivity threshold. This result settles the lower bound question left open by Ying, Srikant and Dullerud (WiOpt 2006), who showed how the sum of all $N$ bits can be computed using $O(N \log\log N)$ transmissions. The same lower bound has been shown to hold for a host of other functions, including majority, by Dutta and Radhakrishnan (FOCS 2008). Most earlier lower bounds for communication networks were proved in the full broadcast model, without exploiting the fact that communication in real networks is local, determined by the power of the transmitters. In fact, in full broadcast networks computing parity needs only $\Theta(N)$ transmissions. To obtain our lower bound we employ techniques developed by Goyal, Kindler and Saks (FOCS 2005), who showed lower bounds in the full broadcast model by reducing the problem to a model of noisy decision trees. However, in order to capture the limited range of transmissions in real sensor networks, we adapt their definition of noisy decision trees so that each node of the tree has access to only a limited part of the input. Our lower bound is obtained by exploiting special properties of parity computations in such noisy decision trees.
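
To see why naive use of the channel fails, consider the following toy calculation, which is illustrative and not taken from the paper: if every sensor's bit is heard exactly once through a binary symmetric channel that flips it with probability $\epsilon$, the parity estimate is wrong with probability $\frac{1}{2}\left(1 - (1-2\epsilon)^N\right)$, which tends to $1/2$ as $N$ grows. The short Python sketch below simulates this effect (the value eps = 0.1 and the single-transmission setup are assumptions for illustration, not the paper's model); it shows why per-bit repetition, and hence extra transmissions, cannot be avoided.

import random

def noisy_parity_error(n_bits, eps, trials=10000):
    # Fraction of trials in which the XOR of the noisy copies differs from the true parity.
    errors = 0
    for _ in range(trials):
        bits = [random.randint(0, 1) for _ in range(n_bits)]
        received = [b ^ (random.random() < eps) for b in bits]  # each copy flipped w.p. eps
        if sum(received) % 2 != sum(bits) % 2:
            errors += 1
    return errors / trials

for n in (10, 100, 1000):
    print(n, noisy_parity_error(n, eps=0.1))  # error probability approaches 1/2 as n grows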

Comments: 17 pages

Similar Publications

Matrix-matrix multiplication is a basic operation in linear algebra and an essential building block for a wide range of algorithms in various scientific fields. Theory and implementation for the dense, square matrix case are well-developed. If matrices are sparse, with application-specific sparsity patterns, the optimal implementation remains an open question.
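
As a point of reference only (not the implementation studied in that work), here is a minimal SciPy sketch contrasting a sparse-sparse product in CSR format with the equivalent dense product; the matrix sizes and density are arbitrary illustrative choices.

import numpy as np
import scipy.sparse as sp

A = sp.random(2000, 2000, density=0.001, format="csr", random_state=0)  # ~4000 nonzeros
B = sp.random(2000, 2000, density=0.001, format="csr", random_state=1)

C_sparse = A @ B                       # sparse-sparse product, result stays sparse
C_dense = A.toarray() @ B.toarray()    # same product computed densely

print(C_sparse.nnz, np.allclose(C_sparse.toarray(), C_dense))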


With the increase in compute nodes in large compute platforms, a proportional increase in node failures will follow. Many application-based checkpoint/restart (C/R) techniques have been proposed for MPI applications to cope with the resulting reduction in mean time between failures. However, rollback as part of the recovery remains a dominant cost even in highly optimised MPI applications employing C/R techniques.


We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation:
-- For any constant $k$, detecting $k$-paths and trees on $k$ nodes can be done in constantly many rounds, and $k$-cycles in $O(n)$ rounds.
-- On $d$-degenerate graphs, cliques and $4$-cycles can be enumerated in $O(d + \log n)$ rounds, and $5$-cycles in $O(d^2 + \log n)$ rounds.
In many cases, these bounds are tight up to logarithmic factors.


This work studies a fully distributed algorithm for computing the PageRank vector, inspired by Matching Pursuit, with three features: 1) it is fully distributed; 2) it converges in expectation at an exponential rate; 3) it has a low storage requirement (two scalar values per page). Illustrative experiments are conducted to verify the findings.
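
For comparison, the sketch below is the textbook centralized power iteration for PageRank, not the distributed Matching-Pursuit-inspired algorithm of that work; the damping factor 0.85 and the tiny example graph are assumptions made purely for illustration.

import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    # adj[i, j] = 1 if page j links to page i.
    n = adj.shape[0]
    out_deg = adj.sum(axis=0)
    out_deg[out_deg == 0] = 1.0                  # avoid division by zero for sink pages
    M = adj / out_deg                            # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * (M @ r) + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)
print(pagerank(adj))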


Sparse tensors appear in many large-scale applications with multidimensional and sparse data. While multidimensional sparse data often need to be processed on manycore processors, attempts to develop highly-optimized GPU-based implementations of sparse tensor operations are rare. The irregular computation patterns and sparsity structures as well as the large memory footprints of sparse tensor operations make such implementations challenging.


We consider an information spreading problem in which a population of $n$ agents is to determine, through random pairwise interactions, whether an authoritative rumor source $X$ is present in the population or not. The studied problem is a generalization of the rumor spreading problem, in which we additionally impose that the rumor should disappear when the rumor source no longer exists. It is also a generalization of the self-stabilizing broadcasting problem and has direct application to amplifying trace concentrations in chemical reaction networks.
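
A toy baseline, not the protocol of that work: classical push-style rumor spreading via uniformly random pairwise interactions, with agent 0 playing the role of the always-present source $X$. It takes roughly $\Theta(n \log n)$ interactions until everyone is informed, and it has no mechanism for making the rumor disappear once the source is gone, which is exactly the gap the generalized problem addresses.

import random

def spread(n, trials=100):
    # Average number of random pairwise interactions until all n agents are informed.
    totals = []
    for _ in range(trials):
        informed = [False] * n
        informed[0] = True                      # agent 0 is the rumor source X
        count, steps = 1, 0
        while count < n:
            i, j = random.sample(range(n), 2)   # one random ordered pair interacts
            steps += 1
            if informed[i] and not informed[j]:
                informed[j] = True
                count += 1
        totals.append(steps)
    return sum(totals) / trials

print(spread(200))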


New types of machine learning hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. We already see the limitations of existing algorithms for models that exploit structured input via complex and instance-dependent control flow, which prohibits minibatching.


In this paper, we study the fundamental problem of gossip in the mobile telephone model: a recently introduced variation of the classical telephone model modified to better describe the local peer-to-peer communication services implemented in many popular smartphone operating systems. In more detail, the mobile telephone model differs from the classical telephone model in three ways: (1) each device can participate in at most one connection per round; (2) the network topology can undergo a parameterized rate of change; and (3) devices can advertise a parameterized number of bits about their state to their neighbors in each round before connection attempts are initiated. We begin by describing and analyzing new randomized gossip algorithms in this model under the harsh assumption of a network topology that can change completely in every round.
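
The following single-round sketch is only a guess at how the stated constraints might be simulated, not the paper's algorithm: informed devices advertise one bit, each uninformed device attempts to connect to one advertising neighbor, and no device participates in more than one connection per round. The dictionary graph representation and the random tie-breaking order are assumptions.

import random

def one_round(graph, informed):
    # graph: dict mapping each node to a set of neighbors; informed: set of nodes holding the rumor.
    busy = set()                                # nodes already in a connection this round
    newly_informed = set()
    nodes = list(graph)
    random.shuffle(nodes)                       # arbitrary order of connection attempts
    for u in nodes:
        if u in informed or u in busy:
            continue
        advertisers = [v for v in graph[u] if v in informed and v not in busy]
        if advertisers:
            v = random.choice(advertisers)      # connect to one advertising neighbor
            busy.update({u, v})                 # both endpoints are done for this round
            newly_informed.add(u)
    return informed | newly_informed

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(one_round(graph, {0}))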


The \emph{rational fair consensus problem} can be informally defined as follows. Consider a network of $n$ (selfish) \emph{rational agents}, each of them initially supporting a \emph{color} chosen from a finite set $\Sigma$. The goal is to design a protocol that leads the network to a stable monochromatic configuration, i.e., one in which all agents support the same color.


We study Robust Subspace Recovery (RSR) in distributed settings. We consider a huge data set in an ad hoc network without a central processor, where each node has access only to one chunk of the data set. We assume that part of the whole data set lies around a low-dimensional subspace and the other part is composed of outliers that lie away from that subspace.
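
To make the data model concrete, here is a small synthetic-data sketch; all dimensions, the noise level, and the number of nodes are arbitrary assumptions, not values from the paper. Inliers lie near a random $d$-dimensional subspace of $\mathbb{R}^D$, outliers do not, and the rows are split into chunks as if each node of the network held one chunk.

import numpy as np

rng = np.random.default_rng(1)
D, d, n_in, n_out, n_nodes = 50, 3, 400, 100, 10

basis, _ = np.linalg.qr(rng.standard_normal((D, d)))       # orthonormal basis of the subspace
inliers = (basis @ rng.standard_normal((d, n_in))).T + 0.01 * rng.standard_normal((n_in, D))
outliers = 3.0 * rng.standard_normal((n_out, D))           # points lying away from the subspace

data = rng.permutation(np.vstack([inliers, outliers]))     # shuffle rows before distributing
chunks = np.array_split(data, n_nodes)                     # one chunk per network node
print([c.shape for c in chunks])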