Locality and Singularity for Store-Atomic Memory Models

Robustness is a correctness notion for concurrent programs running under relaxed consistency models. The task is to check that the relaxed behavior coincides (up to traces) with sequential consistency (SC). Although computationally simple on paper (robustness has been shown to be PSPACE-complete for TSO, PGAS, and Power), building a practical robustness checker remains a challenge. The problem is that the relaxations admit a dramatic number of computations, only a few of which violate robustness. In the present paper, we set out to reduce the search space for robustness checkers. We focus on store-atomic consistency models and establish two completeness results. The first result, called locality, states that a non-robust program always admits a violating computation in which only one thread delays commands. The second result, called singularity, is even stronger but restricted to programs without lightweight fences: there is a violating computation in which a single store is delayed. As an application of these results, we derive a linear-size source-to-source translation of robustness to SC-reachability. It applies to general programs, regardless of the data domain, and supports an unbounded number of threads and unbounded buffers. We have implemented the translation and verified, for the first time, PGAS algorithms in a fully automated fashion. For TSO, our analysis outperforms existing tools.
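To make the singularity result concrete, here is a minimal Python sketch (our illustration, not the paper's tool) of the classic store-buffering litmus test: delaying a single store past the subsequent load of the same thread already produces an outcome that no SC interleaving allows.

    # Store-buffering (SB) litmus test. Under SC, at least one thread must see
    # the other's store, so r0 == r1 == 0 is forbidden. Under TSO, delaying a
    # single store past the subsequent load yields it -- the shape that
    # singularity says suffices to witness any violation (fence-free programs).
    from itertools import permutations

    def run(schedule):
        """Execute the SB test under a given global order of the four events."""
        mem = {"x": 0, "y": 0}
        regs = {}
        ops = {
            "st_x": lambda: mem.__setitem__("x", 1),           # thread 0: x := 1
            "ld_y": lambda: regs.__setitem__("r0", mem["y"]),  # thread 0: r0 := y
            "st_y": lambda: mem.__setitem__("y", 1),           # thread 1: y := 1
            "ld_x": lambda: regs.__setitem__("r1", mem["x"]),  # thread 1: r1 := x
        }
        for ev in schedule:
            ops[ev]()
        return regs["r0"], regs["r1"]

    # SC: each thread's store precedes its own load.
    sc = {run(s) for s in permutations(["st_x", "ld_y", "st_y", "ld_x"])
          if s.index("st_x") < s.index("ld_y") and s.index("st_y") < s.index("ld_x")}

    # TSO with one delayed store: thread 0's store may drain after its load.
    tso = {run(s) for s in permutations(["st_x", "ld_y", "st_y", "ld_x"])
           if s.index("st_y") < s.index("ld_x")}  # only thread 1 stays in order

    print(sorted(tso - sc))   # [(0, 0)] -- the non-SC outcome, one delayed store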


Similar Publications

Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible.
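As a quick illustration of the setting (a toy model for exposition, not the paper's analysis), the following Python sketch simulates BEB under a bursty arrival of n stations and measures the makespan, i.e. the slot in which the last station succeeds.

    # Toy BEB simulation: all n stations wake up in slot 0; on every collision
    # a station doubles its window and retries after a uniform random delay.
    import random

    def beb_makespan(n, rng=random):
        window = {s: 1 for s in range(n)}        # current backoff window
        next_try = {s: 0 for s in range(n)}      # slot of next attempt
        done, slot = set(), 0
        while len(done) < n:
            senders = [s for s in next_try
                       if s not in done and next_try[s] == slot]
            if len(senders) == 1:                # unique sender: success
                done.add(senders[0])
            else:                                # collision (or idle slot)
                for s in senders:
                    window[s] *= 2               # binary exponential backoff
                    next_try[s] = slot + 1 + rng.randrange(window[s])
            slot += 1
        return slot

    print(beb_makespan(64))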


Primitive partitioning strategies for streaming applications operate efficiently under two very strict assumptions: the resources are homogeneous and the messages are drawn from a uniform key distribution. These assumptions often do not hold for real-world use cases. Dealing with heterogeneity and non-uniform workloads requires inferring the resource capacities and input distribution at run time.
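As a sketch of the kind of adaptive partitioner this calls for (our illustration; the class name, route method, and capacity parameter are hypothetical), the routine below picks the less-loaded of two hash candidates per message and weights load by a per-worker capacity, tolerating both key skew and heterogeneous resources.

    # "Power of two choices" partitioning with capacity-weighted load.
    import hashlib

    def h(key, seed):
        return int(hashlib.md5(f"{seed}:{key}".encode()).hexdigest(), 16)

    class TwoChoicesPartitioner:
        def __init__(self, n_workers, capacity=None):
            self.load = [0.0] * n_workers
            # capacity models heterogeneous resources; uniform by default
            self.capacity = capacity or [1.0] * n_workers

        def route(self, key):
            a = h(key, 0) % len(self.load)
            b = h(key, 1) % len(self.load)
            # pick whichever candidate is less loaded relative to capacity
            w = a if (self.load[a] / self.capacity[a]
                      <= self.load[b] / self.capacity[b]) else b
            self.load[w] += 1.0
            return w

    p = TwoChoicesPartitioner(4, capacity=[2.0, 1.0, 1.0, 1.0])
    for i in range(10000):
        p.route("hot" if i % 2 else f"k{i}")   # half the stream is one hot key
    print([round(l) for l in p.load])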


Triangle-free graphs play a central role in graph theory, and triangle detection (or triangle finding) as well as triangle enumeration (triangle listing) play central roles in the field of graph algorithms. In distributed computing, algorithms with sublinear round complexity for triangle finding and listing have recently been developed in the powerful CONGEST clique model, where communication is allowed between any two nodes of the network. In this paper we present the first algorithms with sublinear complexity for triangle finding and triangle listing in the standard CONGEST model, where the communication topology is the same as the topology of the network.
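For orientation, the sequential baseline that such distributed algorithms parallelize (our sketch, not the paper's CONGEST algorithm) lists each triangle once by orienting edges toward the higher-degree endpoint and intersecting out-neighborhoods; the hard part in CONGEST is doing this when each node can send only O(log n) bits per edge per round.

    from collections import defaultdict

    def triangles(edges):
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        # orient each edge toward the (degree, id)-larger endpoint
        rank = lambda v: (len(adj[v]), v)
        out = {v: {u for u in adj[v] if rank(u) > rank(v)} for v in adj}
        for v in adj:
            for u in out[v]:
                for w in out[v] & out[u]:   # common out-neighbors close triangles
                    yield (v, u, w)

    print(list(triangles([(1, 2), (2, 3), (1, 3), (3, 4)])))  # [(1, 2, 3)]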


Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost at the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available.
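A minimal sketch of one D-PSGD step in the usual gossip-averaging form (an assumed template; the mixing matrix, step size, and ring topology below are our choices, not the paper's): every node averages its parameters with its neighbors via a doubly stochastic mixing matrix W, then takes a local stochastic gradient step.

    import numpy as np

    def dpsgd_step(X, W, grads, lr):
        """X: (n_nodes, dim) parameters; W: (n_nodes, n_nodes) mixing matrix."""
        return W @ X - lr * grads           # gossip average, then local SGD step

    n, d = 5, 3
    # ring topology: each node mixes with itself and its two neighbors
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

    X = np.random.randn(n, d)
    for _ in range(200):
        grads = 2 * X                       # gradient of ||x||^2 at each node
        X = dpsgd_step(X, W, grads, lr=0.05)
    print(np.abs(X).max())                  # all nodes converge near the optimum 0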


In this work, we provide a general framework for adding a linearizable iterator to data structures with set operations. We propose a condition on these set operations, called locality, so that any data structure implemented from local atomic operations can be augmented with a linearizable iterator as described by our framework. We then apply the iterator framework to various data structures, prove locality of their operations, and demonstrate that the iterator framework does not significantly affect the performance of concurrent operations.
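The following toy Python class (entirely illustrative; the framework targets implementations built from atomic operations, whereas this sketch pauses operations with a lock for clarity) shows the interface contract: add and remove each touch a single element, and the iterator linearizes at the instant its snapshot is taken.

    import threading

    class IterableSet:
        def __init__(self):
            self._data = set()
            self._lock = threading.Lock()

        def add(self, x):
            with self._lock:          # stands in for a single atomic op on x
                self._data.add(x)

        def remove(self, x):
            with self._lock:
                self._data.discard(x)

        def __iter__(self):
            with self._lock:          # linearization point of the iteration
                snapshot = frozenset(self._data)
            return iter(snapshot)

    s = IterableSet()
    for i in range(5):
        s.add(i)
    print(sorted(s))                  # iterates a consistent snapshot: [0..4]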


Even in the absence of clocks, time bounds on the duration of actions enable the use of time for distributed coordination. This paper initiates an investigation of coordination in such a setting. A new communication structure called a zigzag pattern is introduced, and shown to guarantee bounds on the relative timing of events in this clockless model.


Consider a set of agents in a peer-to-peer communication network, where each agent has a personal dataset and a personal learning objective. The main question addressed in this paper is: how can agents collaborate to improve upon their locally learned model without leaking sensitive information about their data? Our first contribution is to reformulate this problem so that it can be solved by a block coordinate descent algorithm. We obtain an efficient and fully decentralized protocol working in an asynchronous fashion.
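A hedged sketch of the block-coordinate-descent view (the quadratic objective and path graph below are stand-ins chosen for illustration): each agent's block update needs only its own data and its neighbors' current models, which is what makes the protocol fully decentralized and asynchronous.

    # minimize  sum_i ||x_i - t_i||^2  +  mu * sum_{(i,j) in E} ||x_i - x_j||^2
    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0])      # each agent's local target t_i
    E = [(0, 1), (1, 2), (2, 3)]            # path-graph communication network
    mu = 1.0
    nbrs = {i: [] for i in range(4)}
    for a, b in E:
        nbrs[a].append(b); nbrs[b].append(a)

    x = np.zeros(4)
    rng = np.random.default_rng(0)
    for _ in range(500):
        i = rng.integers(4)                 # an agent wakes up asynchronously
        # exact minimizer of the objective in block x_i, all others fixed:
        x[i] = (t[i] + mu * sum(x[j] for j in nbrs[i])) / (1 + mu * len(nbrs[i]))
    print(np.round(x, 2))                   # personal models, smoothed by graph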


The Internet of Mobile Things encompasses stream data generated by sensors, the network communications that pull and push these data streams, and the processing and analytics that turn them into actionable information for planning, management, and business advantage. Edge computing emerges as a new paradigm that decentralizes the communication, computation, control, and storage resources from the cloud to the edge of the Internet. This paper proposes an edge computing platform in which mobile fog nodes are physical devices on which descriptive analytics is deployed to analyze real-time transit data streams.


We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known $O(Q+S \cdot (M/B))$ caching cost for the randomized work stealing (RWS) scheduler, where $S$ is the number of steals, $Q$ is the sequential caching cost, and $M$ and $B$ are the cache size and block (or cache line) size respectively.
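To get a feel for the bound (illustrative numbers, not from the paper), each steal can cost at most a full cache refill of M/B block transfers on top of the sequential cost Q:

    M = 8 * 2**20      # 8 MiB cache (assumed)
    B = 64             # 64-byte cache lines (assumed)
    Q = 10_000_000     # sequential caching cost in block transfers (assumed)
    S = 1_000          # number of steals (assumed)

    overhead = S * (M // B)          # the S * (M/B) term of the bound
    print(overhead, overhead / Q)    # 131072000 extra transfers, ~13x Q here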


A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say $k$, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol.
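A toy instance of the model (our illustration): to find the k smallest elements of a distributed multiset, each machine's representative summary is its own k smallest, and the union of the summaries provably contains the global answer, so one simultaneous round of small messages suffices.

    import heapq

    def summary(local_data, k):
        return heapq.nsmallest(k, local_data)    # size-k summary per machine

    def coordinator(summaries, k):
        return heapq.nsmallest(k, (x for s in summaries for x in s))

    shards = [[9, 4, 7], [1, 8, 2, 6], [5, 3]]   # data split across 3 machines
    print(coordinator([summary(s, k=2) for s in shards], k=2))   # [1, 2]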