Computer Science - Logic in Computer Science Publications (50)


Quantum computing is rapidly approaching the point of technology deployment. Functional quantum devices will require the ability to correct errors in order to be scalable and effective. A leading choice of error correction, in particular for modular or distributed architectures, is the surface code with logical two-qubit operations realised via "lattice surgery". Read More


Generalizing standard monadic second-order logic for Kripke models, we introduce monadic second-order logic interpreted over coalgebras for an arbitrary set functor. We then consider invariance under behavioral equivalence of MSO-formulas. More specifically, we investigate whether the coalgebraic mu-calculus is the bisimulation-invariant fragment of the monadic second-order language for a given functor. Read More


Many privacy-type properties of security protocols can be modelled using trace equivalence properties in suitable process algebras. It has been shown that such properties can be decided for interesting classes of finite processes (i.e. Read More


Automata learning has been successfully applied in the verification of hardware and software. The size of the automaton model learned is a bottleneck for scalability and hence optimizations that enable learning of compact representations are important. In this paper, we continue the development of a general framework for automata learning based on category theory and develop a class of optimizations and an accompanying correctness proof for learning algorithms. Read More


A strategy for constructing dynamic programs is introduced that utilises periodic computation of auxiliary data from scratch and the ability to maintain a query for a limited number of change steps. It is established that if some program can maintain a query for $\log n$ change steps after an AC$^1$-computable initialisation, it can be maintained by a first-order dynamic program as well, i.e. Read More


We present a new string SMT solver, Z3str3, that is faster than its competitors Z3str2, Norn, CVC4, S3, and S3P over a majority of three industrial-strength benchmarks, namely Kaluza, PISA, and IBM AppScan. Z3str3 supports string equations, linear arithmetic over the length function, and the regular-language membership predicate. The key algorithmic innovation behind the efficiency of Z3str3 is a technique we call theory-aware branching, wherein we modify Z3's branching heuristic to take into account the structure of theory literals to compute branching activities. Read More
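
The constraint mix that such string solvers handle can be illustrated with Z3's public Python bindings (a minimal sketch of the theory combination, not of Z3str3's internals or its branching heuristic; the particular constraints are invented for illustration):

    # A small satisfiable instance combining a word equation, linear arithmetic
    # over string lengths, and regular-language membership, via Z3's Python API.
    from z3 import String, Length, Concat, InRe, Re, Star, Solver, sat

    x, y = String('x'), String('y')
    s = Solver()
    s.add(Concat(x, y) == Concat(y, x))        # string (word) equation
    s.add(Length(x) + 2 * Length(y) == 8)      # linear arithmetic over lengths
    s.add(InRe(x, Star(Re("ab"))))             # regular-language membership
    if s.check() == sat:
        m = s.model()
        print(m[x], m[y])                      # one satisfying assignment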


The computability power of a distributed computing model is determined by the communication media available to the processes, the timing assumptions about processes and communication, and the nature of failures that processes can suffer. In a companion paper we showed how dynamic epistemic logic can be used to give a formal semantics to a given distributed computing model, to capture precisely the knowledge needed to solve a distributed task, such as consensus. Furthermore, by moving to a dual model of epistemic logic defined by simplicial complexes, topological invariants are exposed, which determine task solvability. Read More


There is a wide gap between symbolic reasoning and deep learning. In this research, we explore the possibility of using deep learning to improve symbolic reasoning. Briefly, in a reasoning system, a deep feedforward neural network is used to guide rewriting processes after learning from algebraic reasoning examples produced by humans. Read More


Concurrent Kleene Algebra (CKA) is a mathematical formalism to study programs that exhibit concurrent behaviour. As with previous extensions of Kleene Algebra, characterizing the free model is crucial in order to develop the foundations of the theory and potential applications. For CKA, this has been an open question for a few years and this paper makes an important step towards an answer. Read More


Weighted labelled transition systems (WLTSs) are an established meta-model aiming to provide general results and tools for a wide range of systems such as non-deterministic, stochastic, and probabilistic systems. In order to encompass processes combining several quantitative aspects, extensions of the WLTS framework have been further proposed, state-to-function transition systems (FuTSs) and uniform labelled transition systems (ULTraSs) being two prominent examples. In this paper we show that this hierarchy of meta-models collapses when studied under the lens of bisimulation-coherent encodings. Read More


This paper positively solves the open problem of whether it is possible to provide a Hilbert system for the Epistemic Logic of Friendship (EFL) of Seligman, Girard and Liu. To find a Hilbert system, we first introduce a sound, complete and cut-free tree (or nested) sequent calculus for EFL, which is an integrated combination of Seligman's sequent calculus for basic hybrid logic and a tree sequent calculus for modal logic. Then we translate a tree sequent into an ordinary formula to specify a Hilbert system for EFL and finally show that our Hilbert system is sound and complete for the intended two-dimensional semantics. Read More


Software architectures usually comprise different views for capturing static, runtime, and deployment aspects. What is currently missing, however, are formal validation and verification techniques for multi-view architectures in the very early phases of the software development lifecycle. The main contribution of this paper is therefore the construction of a single formal model (in Promela) for certain stylized, and widely used, multi-view architectures by suitably interpreting and fusing sub-models from different UML diagrams. Read More


This paper presents a symmetric monoidal and compact closed bicategory that categorifies the zx-calculus developed by Coecke and Duncan. The $1$-cells in this bicategory are certain graph morphisms that correspond to the string diagrams of the zx-calculus, while the $2$-cells are rewrite rules. Read More


We discuss the homotopy type theory library in the Lean proof assistant. The library is especially geared toward synthetic homotopy theory. Of particular interest is the use of just a few primitive notions of higher inductive types, namely quotients and truncations, and the use of cubical methods. Read More


We show that by restricting the degrees of the vertices of a graph to an arbitrary set $\Delta$, the threshold point $\alpha(\Delta)$ of the phase transition for a random graph with $n$ vertices and $m = \alpha(\Delta) n$ edges can be either accelerated (e.g., $\alpha(\Delta) \approx 0. Read More


Markov automata combine non-determinism, probabilistic branching, and exponentially distributed delays. This compositional variant of continuous-time Markov decision processes is used in reliability engineering, performance evaluation and stochastic scheduling. Their verification has so far focused on single objectives such as (timed) reachability and expected costs. Read More


We introduce perfect half space games, in which the goal of Player 2 is to make the sums of encountered multi-dimensional weights diverge in a direction which is consistent with a chosen sequence of perfect half spaces (chosen dynamically by Player 2). We establish that the bounding games of Jurdziński et al. (ICALP 2015) can be reduced to perfect half space games, which in turn can be translated to the lexicographic energy games of Colcombet and Niwiński, and are positionally determined in a strong sense (Player 2 can play without knowing the current perfect half space). Read More


In this extended abstract we present the GUBS Upper Bound Solver. GUBS is a dedicated constraint solver over the naturals for inequalities formed over uninterpreted function symbols and standard arithmetic operations. GUBS now forms the backbone of HoSA, a tool for analysing space and time complexity of higher-order functional programs automatically. Read More


This paper introduces a new methodology for the complexity analysis of higher-order functional programs, which is based on three components: a powerful type system for size analysis and a sound type inference procedure for it, a ticking monadic transformation and a concrete tool for constraint solving. Noticeably, the presented methodology can be fully automated, and is able to analyse a series of examples which cannot be handled by most competitor methodologies. This is possible due to various key ingredients, and in particular an abstract index language and index polymorphism at higher ranks. Read More


Over the last few years, lattice-theoretic ideals have been used by a few authors to define and generate non-granular rough approximations over general approximation spaces. The goal of these studies, within relation-based rough sets, has been to obtain nice properties comparable to those of classical rough approximations. In this paper, the present author generalizes these ideas substantially and investigates the associated semantic features. Read More


The high availability and scalability of weakly-consistent systems attract system designers. Yet, writing correct application code for this type of system is difficult; even how to specify the intended behavior of such systems is still an open question. No standard method has been established for specifying the intended dynamic behavior of a weakly consistent system. Read More


We investigate three formalisms to specify graph languages, i.e. sets of graphs, based on type graphs. Read More


The DICE workshop explores the area of Implicit Computational Complexity (ICC), which grew out of several proposals to use logic and formal methods to provide languages for complexity-bounded computation (e.g. Ptime, Logspace computation). Read More


Pebble games are a powerful tool in the study of finite model theory, constraint satisfaction and database theory. Monads and comonads are basic notions of category theory which are widely used in semantics of computation and in modern functional programming. We show that existential k-pebble games have a natural comonadic formulation. Read More


We study 2-player turn-based perfect-information stochastic games with countably infinite state space. The players aim at maximizing/minimizing the probability of a given event (i.e. Read More


We introduce and investigate a weighted propositional configuration logic over a commutative semiring. Our logic, which is proved to be sound and complete, is intended to serve as a specification language for software architectures with quantitative features. We extend the weighted configuration logic to its first-order level and succeed in describing architecture styles equipped with quantitative characteristics. Read More
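
For background, the weights here range over a commutative semiring in the standard sense (this is textbook material, not notation taken from the paper): a structure $(K, \oplus, \otimes, 0, 1)$ in which $(K, \oplus, 0)$ and $(K, \otimes, 1)$ are commutative monoids, $\otimes$ distributes over $\oplus$, and $0$ is absorbing,

$$ k \otimes (k' \oplus k'') = (k \otimes k') \oplus (k \otimes k''), \qquad k \otimes 0 = 0 . $$

Typical instances are the Boolean semiring, $(\mathbb{N}, +, \cdot, 0, 1)$, and the tropical semiring $(\mathbb{R}_{\ge 0} \cup \{\infty\}, \min, +, \infty, 0)$, each giving a different quantitative reading of the logic.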


Monoidal computer is a categorical model of intensional computation, where many different programs correspond to the same input-output behavior. The upshot of yet another model of computation is that a categorical formalism should provide a much-needed high-level language for the theory of computation, flexible enough to allow abstracting away the low-level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. Read More


Final coalgebras as "categorical greatest fixed points" play a central role in the theory of coalgebras. Somewhat analogously, most proof methods studied therein have focused on greatest fixed-point properties like safety and bisimilarity. Here we make a step towards categorical proof methods for least fixed-point properties over dynamical systems modeled as coalgebras. Read More


We present a generalization of cartesian closed categories (CCCs) for dependent types, called dependent cartesian closed categories (DCCCs), which also provides a reformulation of categories with families (CwFs), an abstract semantics for Martin-Löf type theory (MLTT) which is very close to the syntax. Thus, DCCCs accomplish mathematical elegance as well as a direct interpretation of the syntax. Moreover, they capture the categorical counterpart of the generalization of the simply-typed lambda-calculus (STLC) to MLTT in syntax, and give a systematic perspective on the relation between categorical semantics for these type theories. Read More


We study Abramsky's applicative bisimilarity abstractly, in the context of call-by-value $\lambda$-calculi with algebraic effects. We first of all endow a computational $\lambda$-calculus with a monadic operational semantics. We then show how the theory of relators provides precisely what is needed to generalise applicative bisimilarity to such a calculus, and to single out those monads and relators for which applicative bisimilarity is a congruence, thus a sound methodology for program equivalence. Read More


We introduce a geometry of interaction model for Mazza's multiport interaction combinators, a graph-theoretic formalism which is able to faithfully capture concurrent computation as embodied by process algebras like the $\pi$-calculus. The introduced model is based on token machines in which not one but multiple tokens are allowed to traverse the underlying net at the same time. We prove soundness and adequacy of the introduced model. Read More


Topologists are sometimes interested in space-valued diagrams over a given index category, but it is tricky to say what such a diagram even is if we look for a notion that is stable under equivalence. The same happens in (homotopy) type theory, where it is known only for special cases how one can define a type of type-valued diagrams over a given index category. We offer several constructions. Read More


We study countably infinite MDPs with parity objectives, and special cases with a bounded number of colors in the Mostowski hierarchy (including reachability, safety, Büchi and co-Büchi). In finite MDPs there always exist optimal memoryless deterministic (MD) strategies for parity objectives, but this does not generally hold for countably infinite MDPs. In particular, optimal strategies need not exist. Read More


Building on the concept of local lexing, the concept of parameterized local lexing is introduced. Read More


Negotiation diagrams are a model of concurrent computation akin to workflow Petri nets. Deterministic negotiation diagrams, equivalent to the much studied and used free-choice workflow Petri nets, are surprisingly amenable to verification. Soundness (a property close to deadlock-freedom) can be decided in PTIME. Read More


A general theory of presentations for d-frames does not yet exist. We review the difficulties and give sufficient conditions for when they can be overcome. As an application we prove that the category of d-frames is closed under coproducts. Read More


We propose a novel automata model over the alphabet of rational numbers, which we call register automata over the rationals (RA-Q). It reads a sequence of rational numbers and outputs another rational number. RA-Q is an extension of the well-known register automata (RA) over infinite alphabets, which are finite automata equipped with a finite number of registers/variables for storing values. Read More
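
As a purely illustrative toy (an invented example, not the paper's formal RA-Q construction), the sketch below mimics such a machine in Python: it reads a sequence of rationals, updates two rational-valued registers on each transition, and outputs a rational at the end.

    # Toy "register machine" over the rationals: registers r_sum and r_count are
    # updated once per input letter, and the output is their quotient (the average).
    # Invented for illustration; it is not the automaton model defined in the paper.
    from fractions import Fraction

    def run(word):
        r_sum = Fraction(0)      # register 1: running sum of the input
        r_count = Fraction(0)    # register 2: number of letters read
        for a in word:           # one transition per rational input letter
            r_sum += Fraction(a)
            r_count += 1
        return r_sum / r_count if r_count else Fraction(0)

    print(run([Fraction(1, 3), Fraction(1, 6), Fraction(1, 2)]))  # prints 1/3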


For a regular cardinal $\kappa$, a formula of the modal $\mu$-calculus is $\kappa$-continuous in a variable $x$ if, on every model, its interpretation as a unary function of $x$ is monotone and preserves unions of $\kappa$-directed sets. We define the fragment $C_{\aleph_1}(x)$ of the modal $\mu$-calculus and prove that all the formulas in this fragment are $\aleph_1$-continuous. For each formula $\phi(x)$ of the modal $\mu$-calculus, we construct a formula $\psi(x) \in C_{\aleph_1}(x)$ such that $\phi(x)$ is $\kappa$-continuous, for some $\kappa$, if and only if $\phi(x)$ is equivalent to $\psi(x)$. Read More
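
Spelled out in standard notation (the symbol $\|\phi\|$ for the interpretation is ours, added only for readability), $\kappa$-continuity of $\phi(x)$ in $x$ amounts to

$$ \|\phi\|\Big(\bigcup \mathcal{U}\Big) \;=\; \bigcup_{U \in \mathcal{U}} \|\phi\|(U) \qquad \text{for every $\kappa$-directed family } \mathcal{U} \text{ of subsets of the model,} $$

together with monotonicity of $U \mapsto \|\phi\|(U)$.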


The AGM model is the most prominent framework for modeling belief revision. However, it is not perfect in all respects. Paraconsistent belief revision, multi-agent belief revision and non-prioritized belief revision are three different extensions of AGM that address three important criticisms levelled at it. Read More


This paper attempts to address the question of how best to assure the correctness of saturation-based automated theorem provers, drawing on our experience developing the theorem prover Vampire. We describe the techniques we currently employ to ensure that Vampire is correct and use this to motivate future challenges that need to be addressed to make this process more straightforward and to achieve better correctness guarantees. Read More


This paper describes three variants of a counterexample-guided inductive optimization (CEGIO) approach based on Satisfiability Modulo Theories (SMT) solvers. In particular, CEGIO relies on iterative executions to constrain a verification procedure in order to perform inductive generalization based on counterexamples extracted from SMT solvers. CEGIO is able to successfully optimize a wide range of functions, including non-linear and non-convex optimization problems, where the data provided by counterexamples are employed to guide the verification engine, thus reducing the optimization domain. Read More
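
A generic counterexample-guided optimization loop of this flavour can be sketched with Z3's Python API (an illustrative sketch over a bounded integer domain so that the loop terminates, not the authors' CEGIO implementation; the objective function is an arbitrary example):

    # Each iteration asks the SMT solver for a point strictly better than the
    # incumbent; the returned model ("counterexample") tightens the bound until
    # no better point exists, at which point the incumbent value is optimal.
    from z3 import Int, Solver, sat

    x = Int('x')
    objective = (x - 3) * (x - 3) + 2          # example objective, minimum 2 at x = 3

    best = None
    while True:
        s = Solver()
        s.add(x >= -10, x <= 10)               # bounded search domain
        if best is not None:
            s.add(objective < best)            # demand a strictly better point
        if s.check() != sat:
            break                              # no improvement possible
        best = s.model().eval(objective)       # new incumbent from the model

    print("minimum value:", best)              # prints: minimum value: 2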


In this paper, we aim to initiate the study of enumeration complexity in the field of dependence logics. Consequently, as a first step, we investigate the problem of enumerating all satisfying teams of a given propositional dependence logic formula without the split junction operator. We distinguish between restricting the team size by arbitrary functions and the parametrised version where the parameter is the team size. Read More


This paper introduces Scavenger, the first theorem prover for pure first-order logic without equality based on the new conflict resolution calculus. Conflict resolution has a restricted resolution inference rule that resembles (a first-order generalization of) unit propagation as well as a rule for assuming decision literals and a rule for deriving new clauses by (a first-order generalization of) conflict-driven clause learning. Read More


This talk describes how a combination of symbolic computation techniques with first-order theorem proving can be used for solving some challenges of automating program analysis, in particular for generating and proving properties about the logically complex parts of software. The talk will first present how computer algebra methods, such as Groebner basis computation, quantifier elimination and algebraic recurrence solving, help us in inferring properties of program loops with non-trivial arithmetic. Typical properties inferred by our work are loop invariants and expressions bounding the number of loop iterations. Read More


For every $q\in \mathbb N$ let $\textrm{FO}_q$ denote the class of sentences of first-order logic FO of quantifier rank at most $q$. If a graph property can be defined in $\textrm{FO}_q$, then it can be decided in time $O(n^q)$. Thus, minimizing $q$ has favorable algorithmic consequences. Read More
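
A concrete instance of the $O(n^q)$ bound (a standard illustration, not an example taken from the paper): the quantifier-rank-3 sentence $\exists x\, \exists y\, \exists z\, (E(x,y) \wedge E(y,z) \wedge E(x,z))$, expressing that a graph contains a triangle, can be decided by one nested loop per quantifier, i.e. in time $O(n^3)$:

    # Naive evaluation of the rank-3 sentence: exhaust all n^3 vertex triples,
    # one loop per quantifier, and check the quantifier-free matrix on each.
    def has_triangle(n, edges):
        E = set(edges) | {(v, u) for (u, v) in edges}   # symmetric edge relation
        return any((x, y) in E and (y, z) in E and (x, z) in E
                   for x in range(n) for y in range(n) for z in range(n))

    print(has_triangle(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))  # True
    print(has_triangle(4, [(0, 1), (1, 2), (2, 3)]))          # False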


In this paper I distinguish two (pre)congruence requirements for semantic equivalences and preorders on processes given as closed terms in a system description language with a recursion construct. A lean congruence preserves equivalence when replacing closed subexpressions of a process by equivalent alternatives. A full congruence moreover allows replacement within a recursive specification of subexpressions that may contain recursion variables bound outside of these subexpressions. Read More


We consider the problem of proving that each point in a given set of states ("target set") can indeed be reached by a given nondeterministic continuous-time dynamical system from some initial state. We consider this problem for abstract continuous-time models that can be concretized as various kinds of continuous and hybrid dynamical systems. The approach to this problem proposed in this paper is based on finding a suitable superset S of the target set which has the property that each partial trajectory of the system which lies entirely in S either is defined at the initial time moment, or can be locally extended backward in time, or can be locally modified in such a way that the resulting trajectory can be locally extended back in time. Read More


Energy consumption is a major concern in multicore systems. Perhaps the simplest strategy for reducing energy costs is to use only as many cores as necessary while still being able to deliver a desired quality of service. Motivated by earlier work on a dynamic (heterogeneous) core allocation scheme for H. Read More


We study categories for reversible computing, focussing on reversible forms of event structures. Event structures are a well-established model of true concurrency. There exist a number of forms of event structures, including prime event structures, asymmetric event structures, and general event structures. Read More


The Message Passing Interface (MPI) framework is widely used in implementing imperative programs that exhibit a high degree of parallelism. The PARTYPES approach proposes a behavioural type discipline for MPI-like programs in which a type describes the communication protocol followed by the entire program. Well-typed programs are guaranteed to be free of deadlocks. Read More
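
For orientation, the kind of MPI program such a protocol type describes is a point-to-point exchange like the one sketched below with the mpi4py bindings (an illustrative example, not PARTYPES syntax); it is deadlock-free because each rank's send/receive order follows the same two-message protocol:

    # Run with: mpirun -n 2 python exchange.py  (any filename works)
    # Rank 0 sends then receives, rank 1 receives then sends, so the two
    # blocking calls always have a matching partner and neither rank waits forever.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send(42, dest=1, tag=0)
        reply = comm.recv(source=1, tag=1)
        print("rank 0 received", reply)
    elif rank == 1:
        value = comm.recv(source=0, tag=0)
        comm.send(value + 1, dest=0, tag=1)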