On the Transformation Capability of Feasible Mechanisms for Programmable Matter

In this work, we study theoretical models of \emph{programmable matter} systems. The systems under consideration consist of spherical modules, kept together by magnetic forces and able to perform two minimal mechanical operations (or movements): \emph{rotate} around a neighbor and \emph{slide} over a line. In terms of modeling, there are $n$ nodes arranged in a 2-dimensional grid and forming some initial \emph{shape}. The goal is for the initial shape $A$ to \emph{transform} to some target shape $B$ via a sequence of movements. Most of the paper focuses on \emph{transformability} questions, i.e., whether it is in principle feasible to transform a given shape into another. We first consider the case in which only rotation is available to the nodes. Our main result is that deciding whether two given shapes $A$ and $B$ can be transformed into each other is in $\mathbf{P}$. We then still restrict attention to rotation and additionally require that global connectivity be maintained throughout the transformation. We prove that the corresponding transformability question is in $\mathbf{PSPACE}$, and we study the problem of determining the minimum \emph{seeds} that render otherwise infeasible transformations feasible. Next, we allow both rotation and sliding and prove universality: any two connected shapes $A,B$ of the same order can be transformed into each other without breaking connectivity. The worst-case number of movements of the generic strategy is $\Omega(n^2)$. We improve this to $O(n)$ parallel time by a pipelining strategy, and we prove both bounds optimal via matching lower bounds. In the last part of the paper, we turn our attention to distributed transformations. The nodes are now distributed processes able to perform communicate-compute-move rounds. We provide distributed algorithms for a general class of transformations.
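For intuition, the following is a minimal sketch of the two movements on a 2D integer grid, under common modeling conventions rather than the paper's formal definitions; the shape is represented as a set of occupied cells, and all function names are ours.

```python
# Illustrative sketch of the two movements on a 2D grid; a shape is a set of
# occupied cells (x, y). Conventions are assumptions, not the paper's exact model.

def rotate_targets(shape, u, pivot):
    """Cells that node u could reach by rotating 90 degrees around an
    adjacent pivot node. Only emptiness of the destination is checked here;
    some model variants also require the swept diagonal cell to be empty."""
    (ux, uy), (px, py) = u, pivot
    dx, dy = ux - px, uy - py                            # u's offset from pivot
    assert u in shape and pivot in shape and abs(dx) + abs(dy) == 1
    cw, ccw = (px + dy, py - dx), (px - dy, py + dx)     # 90-degree images of u
    return [t for t in (cw, ccw) if t not in shape]

def slide_targets(shape, u):
    """Cells that node u could reach by sliding one step along a grid line:
    both u and the destination must be supported by occupied cells on a
    common side, which form the line that u slides over."""
    x, y = u
    targets = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # candidate directions
        dest = (x + dx, y + dy)
        if dest in shape:
            continue
        for sx, sy in ((dy, dx), (-dy, -dx)):            # the two possible sides
            if (x + sx, y + sy) in shape and (dest[0] + sx, dest[1] + sy) in shape:
                targets.append(dest)
                break
    return targets

line = {(0, 0), (1, 0), (2, 0), (2, 1)}
print(rotate_targets(line, (2, 1), (2, 0)))   # [(3, 0)]; (1, 0) is occupied
print(slide_targets(line, (2, 1)))            # [(1, 1)], sliding over the row below
```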

Comments: 48 pages, 32 figures

Similar Publications

We study several graph modification problems under connectivity constraints from the perspective of parameterized complexity: {\sc (Weighted) Biconnectivity Deletion}, where we are tasked with deleting~$k$ edges while preserving biconnectivity in an undirected graph, {\sc Vertex-deletion Preserving Strong Connectivity}, where we want to maintain strong connectivity of a digraph while deleting exactly~$k$ vertices, and {\sc Path-contraction Preserving Strong Connectivity}, in which the operation of path contraction on arcs is used instead. The parameterized tractability of this last problem was posed by Bang-Jensen and Yeo [DAM 2008] as an open question, and we answer it here in the negative: both variants of preserving strong connectivity are $\sf W[1]$-hard. Preserving biconnectivity, on the other hand, turns out to be fixed-parameter tractable, and we provide a $2^{O(k\log k)} n^{O(1)}$-algorithm that solves {\sc Weighted Biconnectivity Deletion}.


We consider an issue of much current concern: could fairness, an issue that is already difficult to guarantee, worsen when algorithms run much of our lives? We consider this in the context of resource-allocation problems; we show that algorithms can guarantee certain types of fairness in a verifiable way. Our conceptual contribution is a simple approach to fairness in this context, which only requires that all users trust some public lottery. Our technical contributions are in ways to address the $k$-center and knapsack-center problems that arise in this context: we develop a novel dependent-rounding technique that, via the new ingredients of "slowing down" and additional randomization, guarantees stronger correlation properties than known before.


We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters $\beta$ (the interaction) and $\lambda$ (the external field), except for the case $\vert{\lambda}\vert=1$ (the "zero-field" case). A randomized algorithm (FPRAS) for all graphs, and all $\beta,\lambda$, has long been known.
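For reference, one common parameterization of this partition function (an assumption on our part; the paper may normalize differently) is
\[
Z_G(\beta,\lambda) \;=\; \sum_{\sigma : V \to \{+,-\}} \beta^{\,|\{\{u,v\} \in E \,:\, \sigma(u) = \sigma(v)\}|}\, \lambda^{\,|\{v \in V \,:\, \sigma(v) = +\}|},
\]
under which $\lambda = 1$ is the zero-field case and $\beta > 1$ the ferromagnetic regime.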


We study the shared processor scheduling problem with a single shared processor, where the unit-time saving (weight) obtained by processing a job on the shared processor depends on the job. A polynomial-time optimization algorithm has been given in the literature for the problem with equal weights. This paper extends that result by giving an $O(n \log n)$ optimization algorithm for the class of instances in which ordering the jobs by non-decreasing processing times yields non-increasing weights; this class generalizes the unweighted case of the problem.
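As a small illustration, membership in that class of instances is straightforward to test; the helper below is ours, not the paper's algorithm.

```python
def in_tractable_class(jobs):
    """jobs: list of (processing_time, weight) pairs. Returns True iff some
    ordering by non-decreasing processing time has non-increasing weights."""
    # Breaking processing-time ties by larger weight first finds a valid
    # ordering whenever one exists.
    ordered = sorted(jobs, key=lambda j: (j[0], -j[1]))
    weights = [w for _, w in ordered]
    return all(a >= b for a, b in zip(weights, weights[1:]))

print(in_tractable_class([(1, 9), (2, 5), (4, 5), (7, 2)]))  # True
print(in_tractable_class([(1, 3), (2, 5)]))                  # False
```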


The celebrated Time Hierarchy Theorem for Turing machines states, informally, that more problems can be solved given more time. The extent to which a time hierarchy-type theorem holds in the distributed LOCAL model has been open for many years. It is consistent with previous results that all natural problems in the LOCAL model can be classified according to a small constant number of complexities, such as $O(1),O(\log^* n), O(\log n), 2^{O(\sqrt{\log n})}$, etc.


In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece by piece, our model presents one of the players, Bob, his input piece by piece, and has the players Alice and Bob cooperate to compute a result before it presents Bob with the next piece. This model corresponds more closely and naturally to dynamic data structures than the classic communication models do, and it hence offers a new perspective on data structures.


We revisit the range $\tau$-majority problem, which asks us to preprocess an array $A[1..n]$, for a fixed value of $\tau \in (0,1/2]$, such that for any query range $[i,j]$ we can return a position in $A$ of each distinct $\tau$-majority element, i.e., each element occurring more than $\tau (j-i+1)$ times in $A[i..j]$.
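To pin down the query semantics, here is a naive, preprocessing-free baseline (purely illustrative; the paper's structures answer queries far faster after preprocessing $A$).

```python
from collections import Counter

def tau_majorities(A, i, j, tau):
    """Return one position in A (1-indexed) for each distinct element that
    occurs more than tau * (j - i + 1) times in A[i..j]."""
    window = A[i - 1:j]
    threshold = tau * len(window)
    first_pos = {}
    for pos, x in enumerate(window, start=i):
        first_pos.setdefault(x, pos)                 # leftmost occurrence
    return [first_pos[x] for x, c in Counter(window).items() if c > threshold]

print(tau_majorities([1, 2, 1, 1, 3, 1, 2], 1, 7, 0.5))  # [1]: element 1, position 1
```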


We study the problem of clustering sequences of unlabeled point sets taken from a common metric space. Such scenarios arise naturally in applications where a system or process is observed in distinct time intervals, such as biological surveys and contagious disease surveillance. In this more general setting, existing algorithms for classical (i.e. …


In this paper, we examine hash functions expressed as scalar products, i.e., $f(x) = \langle v, x \rangle$ for some bounded random vector $v$.
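A toy instantiation of such a hash (the dimension, entry range, and modulus below are our own choices, purely for illustration):

```python
import random

d = 8                                              # key dimension (assumed)
v = [random.randrange(256) for _ in range(d)]      # bounded random vector
M = 2**31 - 1                                      # hypothetical table-size prime

def f(x):
    """Scalar-product hash f(x) = <v, x>, reduced mod M for table indexing."""
    return sum(vi * xi for vi, xi in zip(v, x)) % M

print(f([1, 0, 2, 3, 4, 5, 6, 7]))
```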


Hill and Kertz studied the prophet inequality on iid distributions [The Annals of Probability 1982]. They proved a theoretical bound of $1-\frac{1}{e}$ on the approximation factor of their algorithm. They conjectured that the best approximation factor for arbitrarily large $n$ is $\frac{1}{1+1/e} \approx 0.731$.
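For concreteness, the conjectured constant evaluates to
\[
\frac{1}{1 + 1/e} \;=\; \frac{e}{e + 1} \;\approx\; \frac{2.71828}{3.71828} \;\approx\; 0.7311,
\]
strictly above the $1 - \frac{1}{e} \approx 0.6321$ guarantee of their algorithm.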