Communication Primitives in Cognitive Radio Networks

Cognitive radio networks are a new type of multi-channel wireless network in which different nodes can have access to different sets of channels. By providing multiple channels, they improve the efficiency and reliability of wireless communication. However, the heterogeneous nature of cognitive radio networks also brings new challenges to the design and analysis of distributed algorithms. In this paper, we focus on two fundamental problems in cognitive radio networks: neighbor discovery and global broadcast. We consider a network containing $n$ nodes, each of which has access to $c$ channels. We assume the network has diameter $D$, each pair of neighbors has at least $k\geq 1$ and at most $k_{max}\leq c$ shared channels, and each node has at most $\Delta$ neighbors. For the neighbor discovery problem, we design a randomized algorithm CSeek with time complexity $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta)$. CSeek is flexible and robust, which allows us to use it as a generic "filter" to find "well-connected" neighbors with an even shorter running time. We then move on to the global broadcast problem and propose CGCast, a randomized algorithm which takes $\tilde{O}((c^2/k)+(k_{max}/k)\cdot\Delta+D\cdot\Delta)$ time. CGCast uses CSeek to achieve communication among neighbors, and uses edge coloring to establish an efficient schedule for fast message dissemination. Towards the end of the paper, we give lower bounds for solving the two problems. These lower bounds demonstrate that in many situations, CSeek and CGCast are near optimal.
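
The abstract gives only the complexity bounds; the sketch below is not the paper's CSeek, just a minimal illustration of the randomized channel-hopping idea that such neighbor-discovery schemes build on (the slot structure and the transmit_prob parameter are assumptions made for this example): in each slot a node hops to a random available channel and either announces itself or listens.

```python
import random

def discovery_slot(node_id, channels, transmit_prob=0.5):
    """One slot of a generic randomized channel-hopping scheme.

    Illustrative only: the paper's CSeek uses a more refined schedule to
    achieve the stated O~((c^2/k) + (k_max/k) * Delta) bound.
    """
    channel = random.choice(channels)            # hop to a random available channel
    if random.random() < transmit_prob:
        return ("transmit", channel, node_id)    # announce own ID on that channel
    return ("listen", channel, None)             # listen for announcements

# Two neighbors that share a channel discover each other in a slot when they
# both pick that channel and exactly one of them transmits.
```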


Similar Publications

Idle periods on different processes of Message Passing applications are unavoidable. While the origin of idle periods on a single process is well understood as the effect of random system and architectural delays, it is unclear how these idle periods propagate from one process to another. Understanding idle period propagation in Message Passing applications is important because it allows application developers to design communication patterns that avoid idle period propagation and the consequent performance degradation in their applications.


Next-generation supercomputers will feature more hierarchical and heterogeneous memory systems, with different memory technologies working side by side. A critical question is whether, at large scale, existing HPC applications and emerging data-analytics workloads will see performance improvement or degradation on these systems. We propose a systematic and fair methodology to identify the trend of application performance on emerging hybrid-memory systems.


Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems.


The computability power of a distributed computing model is determined by the communication media available to the processes, the timing assumptions about processes and communication, and the nature of failures that processes can suffer. In a companion paper we showed how dynamic epistemic logic can be used to give a formal semantics to a given distributed computing model and to capture precisely the knowledge needed to solve a distributed task, such as consensus. Furthermore, by moving to a dual model of epistemic logic defined by simplicial complexes, topological invariants are exposed which determine task solvability.


This paper considers the problem of decentralized optimization with a composite objective containing smooth and non-smooth terms. To solve the problem, a proximal-gradient scheme is studied. Specifically, the smooth and non-smooth terms are handled by a gradient update and a proximal update, respectively.
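
As a concrete, centralized illustration of that gradient/proximal split (a minimal sketch, not the paper's decentralized algorithm; the lasso objective and the step-size rule are assumptions chosen for the example):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (the non-smooth term)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, step=None, iters=200):
    """Proximal-gradient iteration for min 0.5*||Ax - b||^2 + lam*||x||_1.

    The smooth term is handled by a gradient step and the non-smooth term by
    its proximal operator; a decentralized variant would additionally mix
    iterates between neighboring agents.
    """
    x = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal update
    return x
```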


The model of population protocols is a theoretical framework, growing in popularity, for studying pairwise interactions within a large collection of simple, indistinguishable entities, frequently called agents. In this paper the emphasis is on the space complexity of fast leader election via population protocols governed by the random scheduler, which uniformly at random selects pairwise interactions within the population of $n$ agents. The main result of this paper is a new fast and space-optimal leader election protocol.
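
For readers unfamiliar with the interaction model, the toy protocol below (a deliberately naive sketch, not the paper's fast, space-optimal protocol) shows how a uniform random scheduler drives leader election: every agent starts as a leader, and one of the two demotes itself whenever two leaders interact.

```python
import random

def leader_election(n, max_interactions=10**6):
    """Toy leader-election population protocol under a uniform random scheduler.

    Converges to a single leader but needs Theta(n^2) interactions in
    expectation; it only illustrates the pairwise-interaction model.
    """
    is_leader = [True] * n
    leaders = n
    for step in range(max_interactions):
        if leaders == 1:
            return step                          # interactions until one leader remains
        a, b = random.sample(range(n), 2)        # scheduler picks a random pair
        if is_leader[a] and is_leader[b]:
            is_leader[b] = False                 # one of the two leaders steps down
            leaders -= 1
    return max_interactions
```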


Distributed actor languages are an effective means of constructing scalable, reliable systems, and the Erlang programming language has a well-established and influential model. While the Erlang model conceptually provides reliable scalability, it has some inherent scalability limits, and these force developers to depart from the model at scale. This article establishes the scalability limits of Erlang systems and reports work to improve the scalability of the language.


We adapt a recent algorithm by Ghaffari [SODA'16] for computing a Maximal Independent Set in the LOCAL model, so that it works in the significantly weaker BEEP model. For networks with maximum degree $\Delta$, our algorithm terminates locally within time $O((\log \Delta + \log (1/\epsilon)) \cdot \log(1/\epsilon))$, with probability at least $1 - \epsilon$. The key idea of the modification is to replace explicit messages about transmission probabilities with estimates based on the number of received messages.
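
The following is a loose illustration of that idea, not the algorithm from the paper: a node adjusts its transmission probability using only the number of beeps it heard rather than explicit probability announcements (the halving/doubling rule, the cap, and the floor are assumptions made for this sketch).

```python
import random

def update_probability(prob, beeps_heard, cap=0.5, floor=2**-20):
    """Adjust a node's transmission probability from beep counts alone.

    No explicit probability values are exchanged; hearing any beep is
    treated as evidence of a busy neighborhood.
    """
    if beeps_heard >= 1:
        return max(prob / 2, floor)   # busy neighborhood: back off
    return min(2 * prob, cap)         # quiet neighborhood: become more aggressive

def try_join_mis(prob, beeps_heard):
    """A node joins the candidate MIS when it beeps while no neighbor does."""
    beeped = random.random() < prob   # beeps_heard counts beeps from neighbors only
    return beeped and beeps_heard == 0
```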


Session types offer a type-based discipline for enforcing communication protocols in distributed programming. We previously formalized simple session types in the setting of a multi-threaded $\lambda$-calculus with linear types. In this work, we build upon that formalization by presenting a form of dependent session types (in the style of DML).


ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high "compression level" in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read).
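
The trade-off can be reproduced outside ROOT with any DEFLATE implementation; the snippet below (using Python's zlib on a synthetic buffer rather than an actual ROOT file) simply shows how the compression level shifts the balance between output size and CPU time.

```python
import time
import zlib

# Synthetic payload standing in for event data; not a ROOT file.
payload = (b"event-data-" + bytes(range(256))) * 20000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(payload, level)   # DEFLATE at the given level
    t1 = time.perf_counter()
    zlib.decompress(compressed)
    t2 = time.perf_counter()
    print(f"level={level}  size={len(compressed):>9}  "
          f"compress={1000*(t1-t0):.1f} ms  decompress={1000*(t2-t1):.1f} ms")
```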