# The Cost of Global Broadcast in Dynamic Radio Networks

We study the single-message broadcast problem in dynamic radio networks. We
show that the time complexity of the problem depends on the degree of stability
and connectivity of the dynamic network topology, as well as on the adaptiveness
of the
adversary providing the dynamic topology. More formally, we model communication
using the standard graph-based radio network model. To model the dynamic
network, we use a generalization of the synchronous dynamic graph model
introduced in [Kuhn et al., STOC 2010]. For integer parameters $T\geq 1$ and
$k\geq 1$, we call a dynamic graph $T$-interval $k$-connected if for every
interval of $T$ consecutive rounds, there exists a $k$-vertex-connected stable
subgraph. Further, for an integer parameter $\tau\geq 0$, we say that the
adversary providing the dynamic network is $\tau$-oblivious if for constructing
the graph of some round $t$, the adversary has access to all the randomness
(and states) of the algorithm up to round $t-\tau$.
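For intuition, the $k=1$ case of the first definition ($T$-interval connectivity, as in Kuhn et al.) can be checked directly: intersect the edge sets of every window of $T$ consecutive rounds and test that the resulting stable subgraph is connected. The following is a minimal Python sketch (our own illustration, not from the paper), assuming the dynamic graph is given as a list of per-round edge sets over a fixed vertex set, with each edge written as an ordered pair $(u, v)$, $u < v$:

```python
from collections import deque

def is_connected(vertices, edges):
    """BFS connectivity test on an undirected graph given by a set of edges."""
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

def is_T_interval_connected(vertices, rounds, T):
    """rounds: list of edge sets, one per round (edges as ordered pairs u < v).
    Checks that every window of T consecutive rounds contains a connected
    stable subgraph, namely the intersection of the rounds' edge sets."""
    for i in range(len(rounds) - T + 1):
        stable = set(rounds[i])
        for r in rounds[i + 1:i + T]:
            stable &= set(r)
        if not is_connected(vertices, stable):
            return False
    return True
```

For example, two rounds whose graphs are each connected but share only a single edge form a dynamic graph that is $1$-interval connected yet not $2$-interval connected.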
As our main result, we show that for any $T\geq 1$, any $k\geq 1$, and any
$\tau\geq 1$, for a $\tau$-oblivious adversary, there is a distributed
algorithm to broadcast a single message in time
$O\big(\big(1+\frac{n}{k\cdot\min\left\{\tau,T\right\}}\big)\cdot n\log^3
n\big)$. We further show that even for large interval $k$-connectivity,
efficient broadcast is not possible for the usual adaptive adversaries. For a
$1$-oblivious adversary, we show that even for any $T\leq
(n/k)^{1-\varepsilon}$ (for any constant $\varepsilon>0$) and any $k\geq 1$,
global broadcast in $T$-interval $k$-connected networks requires
$\Omega(n^2/(k^2\log n))$ time. Further, for a $0$-oblivious adversary,
broadcast cannot be solved in $T$-interval $k$-connected networks as long as
$T

**Comments:** 17 pages, conference version appeared in OPODIS 2015
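To get a feel for how the upper bound trades stability against connectivity, the sketch below (our own illustration; constant factors set to $1$, logarithms base $2$) evaluates $\big(1+\frac{n}{k\cdot\min\{\tau,T\}}\big)\cdot n\log^3 n$ for sample parameters. Note that once $k\cdot\min\{\tau,T\}\geq n$, the bound collapses to $O(n\log^3 n)$:

```python
import math

def broadcast_time_bound(n, k, T, tau):
    """Evaluates (1 + n/(k*min(tau, T))) * n * log^3 n, the upper bound from
    the abstract, with all hidden constants taken as 1 for illustration."""
    stability = min(tau, T)
    return (1 + n / (k * stability)) * n * math.log2(n) ** 3

# With k * min(tau, T) >= n, the two terms balance:
print(broadcast_time_bound(1024, k=1024, T=1, tau=1))  # → 2048000.0
```

Increasing either the connectivity $k$ or the usable stability $\min\{\tau,T\}$ shrinks the bound, matching the trade-off stated in the main result.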
