A Passivity-Based Distributed Reference Governor for Constrained Robotic Networks

This paper focuses on a passivity-based distributed reference governor (RG) applied to a pre-stabilized mobile robotic network. The novelty of the paper lies in the method used to solve the RG problem: a passivity-based distributed optimization scheme in which the gradient descent method minimizes the global objective function while the dual ascent method maximizes the Hamiltonian. To drive the agents to agreement on the optimal solution, a proportional-integral consensus estimator is used. The paper proves convergence of the RG state estimates to the optimal solution through passivity arguments, under the assumption that the physical system is static. The effectiveness of the scheme with the dynamics of the physical system included is then demonstrated through simulations and experiments.

Comments: 8 pages, 8 figures, International Federation of Automatic Control (IFAC) 2017
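A rough numerical sketch of the kind of machinery described in the abstract, combining gradient descent on local costs, dual ascent on a constraint, and a proportional-integral consensus term over the communication graph, is given below. The toy problem, graph, step sizes, and variable names are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch (assumed setup, not the paper's implementation): three agents
# agree on a single governed reference by gradient descent on local costs,
# dual ascent on a local constraint, and a PI consensus estimator.
import numpy as np

# Communication graph (path graph on 3 agents) and its Laplacian
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

r = np.array([1.0, 2.0, 3.0])   # desired references of the agents (assumed)
c = 1.5                          # constraint: applied reference <= c (assumed)

x = np.zeros(3)    # local estimates of the applied (governed) reference
z = np.zeros(3)    # integral states of the PI consensus estimator
lam = np.zeros(3)  # dual variables for the local constraint x_i <= c

alpha, beta = 0.02, 0.02         # primal / dual step sizes (assumed)
for _ in range(50000):
    grad = x - r                                    # gradient of 0.5*(x - r_i)^2
    x = x - alpha * (grad + lam + L @ x + L @ z)    # gradient descent + PI consensus
    z = z + alpha * (L @ x)                         # integral action on disagreement
    lam = np.maximum(0.0, lam + beta * (x - c))     # projected dual ascent

print(x)  # all agents should approach min(mean(r), c) = 1.5
```

Here each agent keeps a local copy of the governed reference, the integral state z plays the role of the proportional-integral consensus estimator that forces the copies to agree, and the constraint is handled by projected dual ascent.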

Similar Publications

The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization when gradient noise is present. These algorithms are used when the risk functions are non-smooth and involve non-differentiable components, and they have long been recognized as slowly converging methods.
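As a reminder of the setting, a minimal sketch of a stochastic subgradient step on a non-smooth risk (a regularized hinge loss) is shown below; the data, model, and step size are assumptions for illustration and are not taken from the cited analysis.

```python
# A minimal sketch of stochastic subgradient learning on a non-smooth risk
# (regularized hinge loss); data, step size, and model are assumed for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))   # noisy labels

w = np.zeros(d)
lam, mu = 0.01, 0.05          # regularization weight and step size (assumed)
for k in range(5000):
    i = rng.integers(n)                     # sample one data point
    margin = y[i] * (X[i] @ w)
    # subgradient of max(0, 1 - margin): non-differentiable at margin == 1
    g = -y[i] * X[i] if margin < 1 else np.zeros(d)
    w -= mu * (g + lam * w)                 # stochastic subgradient step

print(np.mean(np.sign(X @ w) == y))         # training accuracy of the learned w
```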


In large-scale natural disasters, humans are likely to fail when attempting to reach high-risk sites or to carry out search and rescue operations. Robots, however, outperform their human counterparts in surviving such hazards and handling search and rescue missions, thanks to their multiple and diverse sensing and actuation capabilities. The dynamic formation of optimal coalitions of these heterogeneous robots for cost efficiency is very challenging, and research in this area is gaining increasing attention.


In this work, a mixed agent-based and discrete-event simulation model is developed for a high-frequency bus route in the Netherlands. With this model, different passenger growth scenarios can be easily evaluated. The simulation model helps policy makers predict the changes that must be made to bus routes and planned travel times before problems occur.


Human societies around the world interact with each other by developing and maintaining social norms, and it is critically important to understand how such norms emerge and change. In this work, we define an evolutionary game-theoretic model to study how norms change in a society, based on the idea that different norm strengths in societies translate into different game-theoretic interaction structures and incentives. We use this model to study, both analytically and with extensive agent-based simulations, the evolutionary relationship between a society's need for coordination (which is related to its norm strength) and two key aspects of norm change: cultural inertia (whether, or how quickly, the population responds when faced with conditions that make a norm change desirable) and exploration rate (the willingness of agents to try out new strategies).
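For intuition, a toy agent-based sketch of this kind of dynamic (a coordination game with imitation and an explicit exploration rate) might look as follows; the payoff structure and parameter values are assumptions and do not reproduce the cited model.

```python
# Toy agent-based sketch (assumed parameters, not the cited model): agents
# choose one of two conventions, earn payoff for matching a random partner,
# imitate better-off agents, and occasionally explore.
import random

N, rounds = 200, 500
norm_strength = 1.5     # payoff for coordinating (assumed proxy for need for coordination)
explore_rate = 0.02     # probability of trying a random strategy (assumed)

strategies = [0] * N    # everyone starts on the old norm (strategy 0)
for t in range(rounds):
    # each agent plays one random partner; payoff accrues only when they match
    payoffs = [0.0] * N
    for i in range(N):
        j = random.randrange(N)
        payoffs[i] = norm_strength if strategies[i] == strategies[j] else 0.0
    # imitation with exploration: copy a random better-off agent, or explore
    new = strategies[:]
    for i in range(N):
        if random.random() < explore_rate:
            new[i] = random.randrange(2)
        else:
            j = random.randrange(N)
            if payoffs[j] > payoffs[i]:
                new[i] = strategies[j]
    strategies = new

print(sum(strategies) / N)   # fraction of agents on the new norm (strategy 1)
```

Starting the whole population on one convention makes the inertia visible: with a low exploration rate, the old norm tends to persist even though an alternative exists.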


A social approach can be exploited for the Internet of Things (IoT) to manage a large number of connected objects. These objects operate as autonomous agents to request and provide information and services to users. Establishing trustworthy relationships among the objects greatly improves the effectiveness of node interaction in the social IoT and helps nodes overcome perceptions of uncertainty and risk.


The paper proposes a hierarchical, agent-based, discrete-event-simulation-supported, distributed architecture for the control of networked organizations. Taking into account enterprise integration engineering frameworks and business process management techniques, the paper applies control engineering approaches to problems of coordinating networked organizations, such as performance evaluation and the optimization of workflows.


Recent progress in artificial intelligence has enabled the design and implementation of autonomous computing devices, agents, that may interact with and learn from each other to achieve certain goals. Sometimes, however, a human operator needs to intervene and interrupt an agent in order to prevent dangerous situations. Yet, as part of their learning process, agents may link these interruptions, which impact their reward, to specific states and deliberately avoid them.
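A tiny tabular Q-learning sketch of this effect is given below: when a human interruption at one state cuts off reward, the learned policy routes around that state. The MDP, interruption probability, and rewards are illustrative assumptions rather than the setup of the cited work.

```python
# Toy MDP (assumed): from the start state, a short "risky" path passes a state
# where a human may interrupt the episode; a longer "safe" path avoids it.
# Q-learning links interruptions to that state and learns to avoid it.
import random
from collections import defaultdict

gamma, alpha, eps = 0.9, 0.1, 0.1
Q = defaultdict(float)          # Q[(state, action)]
actions = {"S": ["risky", "safe"], "R": ["go"], "L1": ["go"], "L2": ["go"]}

def step(state, action):
    """Return (next_state, reward, done). State 'R' may be interrupted."""
    if state == "S":
        return ("R" if action == "risky" else "L1", 0.0, False)
    if state == "R":
        if random.random() < 0.5:           # human interruption: episode ends
            return ("S", 0.0, True)
        return ("G", 1.0, True)
    if state == "L1":
        return ("L2", 0.0, False)
    return ("G", 1.0, True)                 # state L2 reaches the goal

for episode in range(5000):
    s, done = "S", False
    while not done:
        acts = actions[s]
        a = random.choice(acts) if random.random() < eps else max(acts, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions[s2])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

print(Q[("S", "risky")], Q[("S", "safe")])  # the "safe" detour should score higher
```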


We study the problem of cooperative inference where a group of agents interact over a network and seek to estimate a joint parameter that best explains a set of observations. Agents do not know the network topology or the observations of other agents. We explore a variational interpretation of the Bayesian posterior density, and its relation to the stochastic mirror descent algorithm, to propose a new distributed learning algorithm.
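One well-known scheme in this spirit, a mirror-descent-style belief update in which each agent geometrically averages its neighbors' beliefs and reweights them by its own local likelihood, can be sketched as follows; the network, hypotheses, and likelihoods are assumptions for illustration, not necessarily the algorithm proposed in the cited paper.

```python
# Sketch of cooperative inference via distributed belief updates (assumed
# setup): geometric averaging of neighbors' beliefs plus a local Bayes-style
# reweighting, related to stochastic mirror descent with a KL mirror map.
import numpy as np

rng = np.random.default_rng(1)
theta = [0.3, 0.5, 0.7]          # candidate parameters (coin biases, assumed)
true_theta = 0.7                  # parameter generating the observations

# doubly stochastic mixing matrix for a 3-agent network (assumed)
W = np.array([[0.5,  0.25, 0.25],
              [0.25, 0.5,  0.25],
              [0.25, 0.25, 0.5]])

beliefs = np.full((3, 3), 1.0 / 3.0)   # beliefs[agent, hypothesis]
for t in range(300):
    obs = rng.random(3) < true_theta                 # one private observation per agent
    lik = np.array([[th if o else 1 - th for th in theta] for o in obs])
    log_mix = W @ np.log(beliefs)                    # geometric averaging of neighbors
    beliefs = np.exp(log_mix) * lik                  # reweight by local likelihood
    beliefs /= beliefs.sum(axis=1, keepdims=True)    # renormalize each agent's belief

print(beliefs.round(3))   # every agent should concentrate on theta = 0.7
```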


This thesis contributes to the formalisation of the notion of an agent within the class of finite multivariate Markov chains. Agents are seen as entities that act, perceive, and are goal-directed. We present a new measure that can be used to identify entities (called $\iota$-entities), some general requirements for entities in multivariate Markov chains, as well as formal definitions of actions and perceptions suitable for such entities.


This paper contains an axiomatic study of consistent approval-based multi-winner rules, i.e., voting rules that select a fixed-size group of candidates based on approval ballots.
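To fix the setting, one of the simplest approval-based multi-winner rules (multiwinner approval voting, which selects the k most-approved candidates) is sketched below; it is only an example of the framework, not a summary of the paper's axiomatic results.

```python
# Minimal sketch of an approval-based multi-winner rule (assumed example):
# multiwinner approval voting picks the k candidates with the most approvals.
from collections import Counter

def approval_voting(ballots, k):
    """ballots: list of sets of approved candidates; returns a size-k committee."""
    scores = Counter(c for ballot in ballots for c in ballot)
    return [c for c, _ in scores.most_common(k)]

ballots = [{"a", "b"}, {"a", "c"}, {"b"}, {"a", "b", "d"}]
print(approval_voting(ballots, 2))   # e.g. ['a', 'b']
```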