The second QuantLA workshop takes place from October 6 to 10 at the mountain hotel Bastei near Rathen. Our guest lecturers are Annabelle McIver and André Platzer.


Time Table

Mon, Oct 6 | Tue, Oct 7 | Wed, Oct 8 | Thu, Oct 9 | Fri, Oct 10

08:45-09:00  Opening

09:00-10:30
  André Platzer: Logical Foundations of Cyber-Physical Systems
  Manfred Droste: Weighted Trace Automata
  Annabelle McIver: Compositional quantitative information flow
  Christel Baier: Quantiles for transient and long-run measures in Markovian models
  Manuel Bodirsky: Tractable set constraints
  Markus Teichmann: Regular Context-Free Tree Grammars
  Andreas Ecke: Similarity-based Relaxed Instance Queries for the EL-family of Description Logics
  Sascha Wunderlich: Weight Monitoring with Linear Temporal Logic

10:30-11:00  Coffee break

11:00-12:30
  Markus Lohrey: Algorithmic decomposition in logic
  Doreen Heusel: Weighted Unranked Tree Automata over Valuation Monoids and Their Characterization by Weighted Logics
  Bernhard Ganter: Skills and Competences
  Thomas Weidner: Characterizing Bottom-Up Probabilistic Tree Automata by Logic and Regular Expressions
  Karin Quaas: Are Weighted Register Automata equivalent to Weighted Timed Automata?
  Stefan Dück: Weighted Automata and Logics on Graphs
  Parvaneh Babari Ghorghi: Characterization of +ω-picture languages
  Vitaly Perevoshchikov: A Nivat Theorem for Weighted Timed Automata and Weighted Relative Distance Logic
  David Müller: Probabilistic Model Checking and Unambiguous Automata over Finite Words
  Linda Leuschner: Conditional Assume-Guarantee Verification for Probabilistic Systems
  Gerhard Brewka: GRAPPA: A Semantical Framework for Graph-Based Argument Processing

12:30-14:00  Lunch break

14:00-15:30
  Daniel Borchmann: Exploring Implications and General Concept Inclusions with High Confidence
  Heiko Vogler: Weighted Tree Automata over M-monoids and Tv-monoids
  Excursion
  Marcel Lippmann: Runtime Verification Using Temporalised Description Logics
  Claudia Carapelle: An undecidable extension of CTL* with constraints
  Shiguang Feng: The Decidability and Complexity of TPTL and MTL over Data Words
  Oliver Fernandez Gil: Quantitative Formalization of Prototypes in Description Logics (DLs): Preliminary Results

15:30-16:00  Coffee break

16:00-17:30
  Sebastian Rudolph: The Two Views on Ontological Query Answering
  Johannes Osterholzer: The membership problem of context-free tree languages is NP-complete
  Franz Baader: A double exponential lower bound for the size of a monitor for a propositional LTL formula
  Zeinab Bandpey: Rough sets on Lattices
  Anni-Yasmin Turhan: Certain answers in a rough world
  Daniel Krähmann: Quantitative Analysis of Stochastic Hybrid Systems

Course Programme

André Platzer: Logical Foundations of Cyber-Physical Systems

We study the logical foundations of cyber-physical systems (CPS), i.e. systems that combine cyber aspects such as communication and computer control with physical aspects such as movement in space. CPS applications abound. Ensuring their correct functioning, however, is a serious challenge. Scientists and engineers need analytic tools to understand and predict the behavior of their systems. That's the key to designing smart and reliable control.

This talk identifies a mathematical model for CPS called multi-dynamical systems, i.e. systems characterized by combining multiple facets of dynamical systems, including discrete and continuous dynamics, but also uncertainty resolved by nondeterministic, stochastic, and adversarial dynamics. Multi-dynamical systems help us understand CPSs better by viewing them as composed of multiple dynamical aspects, each of which is simpler than the full system. The family of differential dynamic logics surveyed in this talk exploits this compositionality in order to tame the complexity of CPS and enable their analysis.

In addition to providing a strong theoretical foundation for CPS, differential dynamic logics have also been instrumental in verifying many applications, including air traffic protocols for aircraft collision avoidance maneuvers, the European Train Control System protocol, several automotive systems, mobile robot navigation with the dynamic window algorithm, and a surgical robotic system for skull-base surgery.

Markus Lohrey: Algorithmic decomposition in logic

The talk will give a survey of recent results on Feferman-Vaught decompositions for various logics. The idea of (algorithmic) Feferman-Vaught decomposition is to compute, from a given formula (of a certain logic) that is interpreted over a composed structure (e.g. a disjoint union of two structures or an (a)synchronous product of two transition systems), an equivalent Boolean combination of formulas, each of which is interpreted in one of the component structures. Such a result was shown in 1959 by Feferman and Vaught for first-order logic. Only in 2007, Dawar, Grohe, Kreutzer and Schweikardt showed that the size of Feferman-Vaught decompositions (for both disjoint unions and products) is not elementarily bounded in the size of the input formula. We proved that this result holds even for (i) the FO(2) fragment (first-order logic with only two variables) and modal logic with respect to asynchronous products and (ii) the FO(3) fragment with respect to disjoint unions. In contrast, for FO(2), Feferman-Vaught decompositions with respect to disjoint union are of size at most doubly exponential in the input formula.
(joint work with Stefan Göller and Jean Christoph Jung)
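As a minimal illustration of the decomposition idea (a textbook example, not one of the results above): an existential first-order sentence over a disjoint union can be evaluated componentwise.

```latex
% Feferman-Vaught decomposition of $\exists x\,P(x)$ over a disjoint union:
\[
  \mathfrak{A} \uplus \mathfrak{B} \models \exists x\,P(x)
  \quad\Longleftrightarrow\quad
  \mathfrak{A} \models \exists x\,P(x)
  \;\text{ or }\;
  \mathfrak{B} \models \exists x\,P(x).
\]
```

The cited lower bounds say that, in general, such equivalent Boolean combinations over the components cannot be kept elementarily small.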

Stefan Dück: Weighted Automata and Logics on Graphs

Recently, there has been much interest in quantitative features for the specification and analysis of systems. These features can be modeled by weighted automata and weighted logics, which generalize classical automata and logics, respectively, by adding weights. The main result of Droste and Gastin, extending the classical result of Büchi, Elgot, and Trakhtenbrot, shows that if the weights are taken from an arbitrary semiring, then weighted automata and a syntactically defined fragment of weighted MSO logic are expressively equivalent. Subsequently, many authors considered possible extensions of this result, generalizing the underlying models to trees, nested words, pictures, or texts. In this talk, we attempt to unify these works. We introduce a weighted version of Thomas' graph acceptors and a weighted logic for graphs in the sense of Droste and Gastin. We show in which sense these weighted graph acceptors are a natural and expressively powerful generalization of weighted automata working over words, trees, nested words, or pictures. Our main result is the equivalence of a restricted version of our weighted logic and the weighted graph acceptors.

Daniel Borchmann: Exploring Implications and General Concept Inclusions with High Confidence

Considering implications or general concept inclusions with high confidence is a purely heuristic approach, as it cannot distinguish between sporadic errors and rare counterexamples. Since it is impractical to ensure that a given data set does not contain rare counterexamples, external sources of information are necessary to distinguish them from errors. Such external sources can be human experts as well as automatic or semi-automatic reasoning systems, among others. To arrange for a systematic invocation of these potentially expensive external sources, we present a generalization of the attribute exploration algorithm from formal concept analysis that is able to also work with implications with high confidence. Based on this generalization, we discuss an exploration algorithm for GCIs with high confidence.

Claudia Carapelle: An undecidable extension of CTL* with constraints

We recently proved that CTL* can be enriched with constraints over the integers without altering the decidability of its satisfiability problem. One characteristic of these constraints is that they have a fixed depth, in the sense that they cannot compare values at an arbitrary distance within the underlying models of our logic. We extend CTL* with non-local constraints, but discover that this leads to undecidability, even in very restricted settings.

Sebastian Rudolph: The Two Views on Ontological Query Answering

We take a structured view of the ontological query answering problem by distinguishing two antagonistic perspectives: the knowledge representation perspective considers the ontology as part of the specified knowledge, whereas the database perspective assumes it to be part of the query. These two perspectives give rise to two computation strategies: (data-driven) forward chaining and (query-driven) backward chaining, based on which different types of decidability criteria can be defined. We give an overview of the two views as well as the ensuing conditions for decidability, focusing on existential rules as an ontological formalism that has lately gained renewed interest from both the KR and DB communities.

Zeinab Bandpey: Rough sets on Lattices

The aim of this research is to study rough set theory and its relation to fuzzy lattices. In this regard, we define approximation spaces and rough sets, and explain their application to rings. We then investigate the relationship of lattices, sublattices, ideals, and congruence relations to fuzzy rough sets, and prove some related results.

Manfred Droste: Weighted Trace Automata

An important model for describing the behavior of concurrent systems is provided by trace alphabets, in which symbols may partially commute depending on an underlying independence relation. We will describe weighted automata over traces and characterizations of their behavior by particular rational expressions or a weighted logic. This extends classical results of Kleene-Schützenberger, Ochmanski, and Thomas.

Markus Teichmann: Regular Context-Free Tree Grammars

In the area of natural language processing, tree languages play an important role. Trees model the grammatical structure of a sentence in an intuitive way. Regular tree grammars capture only part of the linguistic phenomena of natural language. Hence, there is a need to consider more powerful classes, e.g., context-free tree grammars. There is the usual trade-off between expressiveness and parsing complexity. Thus, restrictions on context-free tree grammars are investigated that try to simplify parsing while maintaining expressive power. The aim of this talk is to investigate the connection between context-free tree grammars and regular tree grammars. While context-free tree grammars allow for comfortable modeling of linguistic features, the induced tree languages might in fact be regular. We provide a syntactic criterion (non-self-embedding) on context-free tree grammars that ensures regularity of the underlying tree language.

Doreen Heusel: Weighted Unranked Tree Automata over Valuation Monoids and Their Characterization by Weighted Logics

In 2011, Droste and Vogler presented weighted unranked tree automata and a weighted MSO logic for unranked trees over semirings. Surprisingly, their automata capture the expressiveness of their logic but not the other way around. Droste and Vogler left it as an open problem to determine an expressively equivalent weighted automaton model for their logic. We will present a new class of weighted unranked tree automata. As weight structure we resort to valuation monoids, which include all semirings and, in addition, enable us to cope with averages. Moreover, we give a characterization of the behavior of our weighted tree automata by several fragments of weighted MSO logic and thereby provide a solution to Droste and Vogler's equivalence problem.

Parvaneh Babari Ghorghi: Characterization of +ω-picture languages

Languages of finite and infinite two-dimensional words (pictures and ω-pictures, respectively) have been considered by several research groups, and several formal models to recognize or generate finite or infinite two-dimensional objects have been proposed in the literature.

Here, we define a new class of two-dimensional languages, which we call +ω-picture languages. These are pictures with a finite number of rows but an infinite number of columns. We characterize this family by three different devices, namely an automaton model called Büchi two-dimensional on-line tessellation automata, tiling systems with Büchi accepting conditions, and domino systems. We obtain an equivalence theorem for +ω-picture languages describing recognizable languages in terms of our automaton model, sets of tiles, and sets of dominoes.

In addition, we introduce two concatenation operations that produce +ω-picture languages. Both operations involve languages of finite pictures, and under certain natural conditions the resulting +ω-picture languages are recognizable by our Büchi automaton device and by tiling systems with Büchi accepting conditions. In fact, our main result is a Büchi-like characterization of +ω-picture languages.

In connection with logic, we show that +ω-picture languages recognized by tiling systems are definable by existential monadic second-order formulas; the converse, namely whether every language definable by an existential monadic second-order formula is recognizable by tiling systems, is still open.

Heiko Vogler: Weighted Tree Automata over M-monoids and Tv-monoids

We give a survey and present some new results concerning weighted tree automata over multioperator monoids (m-wta) and over tree valuation monoids (tv-wta). We consider the simulation of m-wta by tv-wta and vice versa. We show a Kleene theorem for m-recognizable weighted tree languages. The corresponding result for tv-recognizable weighted tree languages is missing for lack of appropriate rational operations. A Büchi-Elgot type characterization in terms of so-called m-expressions exists for m-recognizable weighted tree languages, while for tv-recognizable ones it can be given in terms of restricted weighted MSO formulas, provided there is also a product in the tree valuation monoid. As a new result, we introduce the concept of a tv-expression and give a characterization of tv-recognizable weighted tree languages by tv-expressions. A Nivat-Engelfriet type characterization of m-wta, i.e., the separation of the state behavior and the weighting, is known. Using the simulation of a tv-wta by an m-wta in [5], we give the corresponding characterization for tv-wta.
(Joint work with Zoltán Fülöp)

Shiguang Feng: The Decidability and Complexity of TPTL and MTL over Data Words

Using reductions from the halting problem and the recurring instruction problem of non-deterministic two-counter machines, we show that the satisfiability problems of MTL and TPTL are undecidable. This is the case even if we do not allow propositional variables. We then investigate the unary fragments of these two logics, in which only the unary temporal modalities F, X, and G may be used. We show that the SAT problem for unary MTL is undecidable. For unary TPTL, the SAT problem remains undecidable if we do not allow propositional variables and restrict the formulae to contain at most one register variable and no next modality X. If we allow 4 propositional variables, the SAT problem for unary MTL is undecidable. We prove that for the unary positive fragment of TPTL, the satisfiability problem is NP-complete. We also consider the complexity of the model-checking problems of MTL and TPTL over deterministic one-counter machines.

Johannes Osterholzer: The membership problem of context-free tree languages is NP-complete

In syntax-based natural language processing (NLP), trees are used to express the linguistic structure of an input sentence. There is good evidence that human language exhibits phenomena that can only be captured by mildly context-sensitive formalisms. Their parse tree languages are in general not regular, but generated by context-free tree grammars (cftg).

Context-free tree grammars allow second-order substitution, i.e., in the application of a production, a nonterminal symbol within a tree is substituted by the production's right-hand side, a tree with variables. These variables are, in turn, substituted by the child trees of the nonterminal the production is applied to. Hence, whenever a variable appears more than once in the right-hand side of a cftg production, the corresponding subtree of a sentential form is copied in the application of this production. A cftg is called linear if the right-hand sides of its productions contain every variable at most once.

For the purposes of NLP, linear cftg do seem to suffice; in fact, the well-known mildly context-sensitive formalism of Tree-Adjoining Grammars is equivalent to linear cftg that are monadic (i.e., that have only one variable). However, the complexity of the membership problem of non-linear cftg is of theoretical interest. In fact, as noted by Stamer, its "exact complexity [...] is not known, but there are indexed languages which have an NP-complete [...] membership problem." In this talk, I want to present a (work-in-progress) proof of the NP-completeness of the membership problem of context-free tree languages, which builds upon a similar proof by Rounds for indexed languages.

Anni-Yasmin Turhan: Certain answers in a rough world

Rough Description Logics have recently been studied as a means for representing and reasoning with imprecise knowledge. Practical applications need to exploit reasoning over such knowledge in an efficient way. We describe how the well-known combined approach to query answering can be extended to the rough setting. In particular, we extend both the canonical model and the rewriting procedure such that rough queries over rough EL ontologies can be answered by considering this information alone.

Annabelle McIver: Compositional quantitative information flow

Protecting sensitive information from improper disclosure is a fundamental security goal. But it is complicated and difficult to achieve, often because of unavoidable or even unpredictable operating conditions that can breach security defences.

An attractive approach to that goal is to frame it as a quantitative problem, and then to design methods that allow system vulnerabilities to be measured in terms of the amount of information that they leak. Then the precise operating conditions and any assumptions about prior knowledge will play a crucial role in assessing the severity of any measured vulnerability.

In this talk I will describe a specific quantitative viewpoint based on the abstract-channel model, and I will show how it arises from an order-theoretic notion of leakage derived from precise measurements of information flow. It turns out that this information order generalises the earlier qualitative order-theoretic Lattice of Information and, crucially, is compositional: that is, the leakage of a system can be calculated from the leakages of its components.
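A tiny numerical sketch of one standard leakage measure in the abstract-channel model, Bayes vulnerability (the numbers are made up for illustration and are not from the talk):

```python
import numpy as np

# A channel C with C[x, y] = Pr(observe y | secret x), and a prior pi
# over the secrets (illustrative values).
pi = np.array([0.5, 0.5])
C = np.array([[0.75, 0.25],
              [0.25, 0.75]])

prior_vuln = pi.max()                    # adversary's best guess a priori
joint = pi[:, None] * C                  # joint distribution over (secret, observation)
posterior_vuln = joint.max(axis=0).sum() # expected best guess after observing
leakage = posterior_vuln / prior_vuln    # multiplicative Bayes leakage
print(prior_vuln, posterior_vuln, leakage)
```

Here observing the channel output raises the adversary's success probability from 0.5 to 0.75, a multiplicative leakage of 1.5.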

Vitaly Perevoshchikov: A Nivat Theorem for Weighted Timed Automata and Weighted Relative Distance Logic

Weighted timed automata (WTA) model quantitative aspects of real-time systems like continuous consumption of memory, power or financial resources. They accept quantitative timed languages, where every timed word is mapped to a value, e.g., a real number. In this talk, we prove a Nivat theorem for WTA, which states that recognizable quantitative timed languages are exactly those which can be obtained from recognizable boolean timed languages with the help of several simple operations. We also introduce a weighted extension of the relative distance logic developed by Wilke, and we show that our weighted relative distance logic and WTA are equally expressive. This result can be derived from our Nivat theorem and Wilke's theorem for relative distance logic. Since the proof of our Nivat theorem is constructive, the translation from logic to automata and vice versa is also constructive. This leads to decidability results for weighted relative distance logic.

Christel Baier: Quantiles for transient and long-run measures in Markovian models

The quantile of a random variable f can, e.g., be defined as the minimal threshold t where the probability of f being at most t exceeds some lower probability bound q. While quantiles are standard in statistics, their relevance for the quantitative analysis of systems modeled by Markov chains or Markov decision processes has been noticed only recently. For example, quantiles can prove useful for a multi-objective analysis where the task is to study the trade-off between different cost and utility parameters. In this talk we will present the foundations of quantiles in Markovian models, where the random variable refers either to the transient behavior (e.g. the amount of energy until completing a list of jobs) or to the long-run behavior (e.g. the average fraction of time that processes spend in a busy-waiting mode).
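The definition can be sketched for a discrete random variable with hypothetical values and probabilities (reading "exceeds" in the definition as "reaches or exceeds", the usual convention):

```python
import numpy as np

# Quantile qu_q(f) = min { t : Pr(f <= t) >= q } for a discrete random
# variable given by its values and probabilities (illustrative data).
values = np.array([1, 2, 3, 4])          # e.g. energy consumed until completion
probs = np.array([0.1, 0.2, 0.4, 0.3])   # Pr(f = values[i])

def quantile(q):
    cdf = np.cumsum(probs)                        # Pr(f <= values[i])
    return int(values[np.searchsorted(cdf, q)])   # minimal t with Pr(f <= t) >= q

print(quantile(0.5))  # -> 3, since Pr(f <= 2) = 0.3 < 0.5 <= 0.7 = Pr(f <= 3)
```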

Andreas Ecke: Similarity-based Relaxed Instance Queries for the EL-family of Description Logics

In Description Logics (DL) knowledge bases (KBs), information is typically captured by clear-cut concepts. For many practical applications, querying the KB by crisp concepts is too restrictive; often, reasonable alternatives are important as well, even if they do not exactly match the query concept. Similarity measures can offer a controlled way of gradually relaxing a query concept within a user-specified limit.

In this talk I will present the task of instance query answering for DL KBs using concepts relaxed by concept similarity measures (CSMs). I will introduce a new similarity measure for general EL-TBoxes, show how relaxed instance queries can be computed in this case, give an upper bound on the complexity, and discuss extensions of EL with concrete domains, nominals, and role hierarchies.

Thomas Weidner: Characterizing Bottom-Up Probabilistic Tree Automata by Logic and Regular Expressions

Probabilistic automata on finite trees are a classical concept which has been extensively studied, both in its theoretical aspects and in its practical applications. Our goal is to establish a Büchi and a Kleene theorem for probabilistic tree automata. To capture the expressive power of MSO logic, we introduce bottom-up probabilistic tree automata, which are more expressive than top-down probabilistic tree automata. Using this model, we prove the expressive equivalence of bottom-up probabilistic tree automata, probabilistic MSO logic, and bottom-up probabilistic tree regular expressions. These expressions, incorporating ideas from the word case, use a different notion of tree substitution, which adopts the bottom-up approach.

David Müller: Probabilistic Model Checking and Unambiguous Automata over Finite Words

Our everyday life demands correctness and safety of computational systems. Classical model checking analyzes whether a system satisfies certain properties or not. In contrast, probabilistic model checking (PMC) reasons about the likelihood that a probabilistic system satisfies a property. Usually, Markovian models are employed for the system model. For the specification of path properties, one can rely on standard (non-probabilistic) logics and automata to formalize the desired behaviors. In this case, the task of the PMC-based analysis is to compute the probability of the set of paths in the Markovian model satisfying the given formula or being accepted by the given automaton, respectively. In this talk we will present how to check properties given as regular languages over finite words against Markov chains.

A prominent approach to PMC relies on building an automaton from the specification. One can use a deterministic finite automaton (DFA) and build a product Markov chain out of the DFA and the Markov chain. Since DFA can be exponentially larger than nondeterministic finite automata, one seeks alternatives to DFA. Unambiguous automata over finite words are nondeterministic finite automata which allow exactly one accepting run for every accepted word. Therefore, they can be more succinct than equivalent DFA. We will show how to compute the probability that a Markov chain satisfies a property given by an unambiguous automaton. In our method we create a product graph instead of a product Markov chain, and solve a linear equation system to compute the desired probability.
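A minimal sketch of the DFA-product baseline mentioned above, on a toy chain and a toy property (all names and numbers are hypothetical); the talk's method replaces the DFA by an unambiguous NFA and the product chain by a product graph, but the final step is likewise solving a linear equation system:

```python
import numpy as np
from itertools import product

# Toy labelled Markov chain: P[s, s'] = transition probability.
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])
label = {0: 'a', 1: 'b'}   # each state emits one letter

# DFA for the regular property "the generated word starts with ab":
# state 0 initial, 1 = seen 'a', 2 = accepting sink, 3 = rejecting sink.
delta = {(0, 'a'): 1, (0, 'b'): 3,
         (1, 'a'): 3, (1, 'b'): 2,
         (2, 'a'): 2, (2, 'b'): 2,
         (3, 'a'): 3, (3, 'b'): 3}
ACC, REJ = 2, 3

states = list(product(range(P.shape[0]), range(4)))   # product states (s, q)
idx = {sq: i for i, sq in enumerate(states)}

# Linear system: x(s, q) = probability of eventually reaching an accepting
# DFA state from product state (s, q); sinks give the boundary conditions.
A = np.eye(len(states))
b = np.zeros(len(states))
for (s, q) in states:
    i = idx[(s, q)]
    if q == ACC:
        b[i] = 1.0          # accepted: probability 1
    elif q == REJ:
        b[i] = 0.0          # rejected: probability 0
    else:
        for s2 in range(P.shape[0]):
            q2 = delta[(q, label[s2])]
            A[i, idx[(s2, q2)]] -= P[s, s2]
x = np.linalg.solve(A, b)

# Start the chain in state 0; the DFA first reads the label of state 0.
start = (0, delta[(0, label[0])])
print(x[idx[start]])        # probability that the run starts with "ab"
```

Since the first letter is always 'a' here, the computed probability is the chance that the second state emits 'b'.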

Marcel Lippmann: Runtime Verification Using Temporalised Description Logics

Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We refer to the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas in Model Checking it is assumed that one has complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system by sensors. Such an abstract sensor could sense e.g. the blood pressure of a patient or the air traffic observed by radar.

Then the observed data are preprocessed appropriately and stored in a fact base. The data available in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to assume that information that is not present in the fact base is unknown rather than false. Moreover, very often one has some knowledge about the functioning of the system. This background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements.

In this talk, we show how approaches to constructing monitors for propositional temporal formulas can be extended to deal with much more expressive temporalised description logics. Moreover, we analyse the complexity of our constructions.

Oliver Fernandez Gil: Quantitative Formalization of Prototypes in Description Logics (DLs): Preliminary Results

Prototypes can be used to identify classes of objects by exhibiting the relevant properties that an ideal element of a class should have. Using a prototype as a reference, one can then reason about different individuals based on how similar they are to the prototype. In this talk, we present preliminary results on how to formalize prototypes in knowledge representation in a quantitative way. I will introduce a very simple way to represent prototypes and a basic similarity measure to compute how similar an element is to a defined prototype. Then, I will present the logic that results from integrating such features into the DL EL, and talk about the complexity of deciding the classical DL reasoning tasks. Finally, some ideas about more complex prototype definitions and similarity measures will be mentioned as part of ongoing and future work.

Daniel Krähmann: Quantitative Analysis of Stochastic Hybrid Systems

Stochastic hybrid systems permit modeling complex processes and phenomena, e.g. server cooling strategies or the lifetime and recovery rate of a battery, using differential equations and differential inclusions. The known undecidability results for stochastic hybrid systems make it essential to study abstraction and refinement techniques. The challenge is to develop suitable approximation methods and algorithms that allow probabilistic model checking based on Markovian stochastic processes. In this talk, we discuss existing modeling formalisms and abstraction techniques for stochastic hybrid systems from the literature. Furthermore, we present a modeling approach that enables the compositional quantitative analysis of stochastic hybrid systems, and explore first ideas for its analysis.

Manuel Bodirsky: Tractable set constraints

Many fundamental problems in artificial intelligence, knowledge representation, and verification involve reasoning about sets and relations between sets and can be modeled as set constraint satisfaction problems (set CSPs). Such problems are frequently intractable, but there are several important set CSPs that are known to be polynomial-time tractable. We introduce a large class of set CSPs that can be solved in quadratic time. Our class, which we call EI, contains all previously known tractable set CSPs, but also some new ones that also appear in description logics. The class of EI set constraints has an elegant universal-algebraic characterization, which we use to show that every set constraint language that properly contains all EI set constraints already has a finite sublanguage with an NP-hard constraint satisfaction problem.
(This is joint work with Martin Hils.)

Sascha Wunderlich: Weight Monitoring with Linear Temporal Logic

Many important performance and reliability measures can be formalized as the accumulated values of weight functions. When considering modern systems, requirements on such quantities arise naturally, for instance on resource consumption (energy, bandwidth) or error resilience and recovery (number of SLA violations, recovery impact). To ensure that these demands are met by a specific system, we can use model checking. To do so, we need an adequately expressive specification mechanism for formalizing accumulation.

In this talk, we introduce an extension of linear time logic (LTL) with new operators that impose constraints on the accumulated weight along path fragments. The fragments are characterized by regular conditions formalized by deterministic finite automata (monitor DFA).

This new logic covers properties expressible by several recently proposed formalisms. We study the model checking problem for weighted transition systems, Markov chains and Markov decision processes with rational weights. While the general problem is undecidable, we provide algorithms and sharp complexity bounds for several sublogics that arise by restricting the monitoring DFA.

Karin Quaas: Are Weighted Register Automata equivalent to Weighted Timed Automata?

A data word is a finite sequence of the form (a1,d1)(a2,d2)...(an,dn), where the ai are labels from a finite alphabet Σ and the di are data values from an infinite data domain D. A timed word is a data word where the data domain is R≥0 and, additionally, the data values are increasing, i.e., di < di+1 for every i ∈ {1, ..., n-1}. In this talk, we consider two popular automaton models that recognize sets of data words and timed words, respectively: register automata and timed automata. Even though register and timed automata are equipped with very different mechanisms to store and compare data values, Figueira et al. recently established a close relation between the two models, showing that they are essentially equivalent.
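The two definitions above can be checked mechanically; a small sketch with hypothetical helper names:

```python
# A finite alphabet and sequences of (label, data value) pairs (illustrative).
Sigma = {'a', 'b'}

def is_data_word(w):
    """Every pair carries a label from the finite alphabet Sigma."""
    return all(a in Sigma for a, _ in w)

def is_timed_word(w):
    """A timed word: data values from R>=0, strictly increasing."""
    data = [d for _, d in w]
    return (is_data_word(w)
            and all(d >= 0 for d in data)
            and all(data[i] < data[i + 1] for i in range(len(data) - 1)))

print(is_timed_word([('a', 0.5), ('b', 1.2), ('a', 3.0)]))  # True
print(is_timed_word([('a', 2.0), ('b', 1.0)]))              # False: not increasing
```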

An interesting extension of timed automata with weights was introduced by Alur et al in 2001, making it possible to model both discrete and continuous resource consumption and production of real-time systems. Since then, many problems for weighted timed automata have been considered, e.g., the optimal reachability problem, the model checking problem for weighted extensions of MTL and TCTL, and the energy problem.

In this talk, I am going to present ongoing research on a new model called weighted register automata. We investigate whether the tight relation between register and timed automata can be lifted to the weighted versions of these models. This would allow us to transfer known decidability and complexity results for weighted timed automata to the new model of weighted register automata.

This is joint work with Parvaneh Babari Ghorghi.

Linda Leuschner: Conditional Assume-Guarantee Verification for Probabilistic Systems

In today's world, where computational systems are ever more influential, their reliability is a crucial factor. Although model checking provides well-suited algorithms to verify qualitative and quantitative system properties in an automated manner, classical model checking algorithms are often not applicable due to the enormous size of the systems under consideration. Assume-guarantee reasoning (AGR) aims to avoid considering the whole system by exploiting its natural compositionality, e.g., interacting system parts. The general idea of AGR is to establish a set of rules that allow one to derive properties of a system with many components. To this end, one first computes auxiliary properties of one component. In a second step, these properties are used as assumptions during the verification of the desired system properties for the analysis of the interacting components.

In this talk, we will present an assume-guarantee framework for probabilistic systems that relies on conditional probabilities and Bayes' rule. We will show how to guarantee probability bounds in the whole system by computing (1) the probability of an assumption and (2) the conditional probability of the desired system property under this assumption on the components.
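The two-step computation yields a guaranteed lower bound via the chain rule; a toy calculation with made-up numbers:

```python
# Chain rule: Pr(B) = Pr(B|A)*Pr(A) + Pr(B|not A)*Pr(not A) >= Pr(B|A)*Pr(A),
# so steps (1) and (2) together bound the probability of the system property.
p_assumption = 0.95     # step (1): probability that assumption A holds
p_prop_given_a = 0.99   # step (2): conditional probability of property B given A
lower_bound = p_prop_given_a * p_assumption
print(lower_bound)      # a guaranteed lower bound on Pr(B), here close to 0.94
```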

Gerhard Brewka: GRAPPA: A Semantical Framework for Graph-Based Argument Processing

Graphical models are widely used in argumentation to visualize relationships among propositions or arguments. The intuitive meaning of the links in the graphs is typically expressed using labels of various kinds. In this talk we introduce a general semantical framework for assigning a precise meaning to labelled argument graphs which makes them suitable for automatic evaluation. Our approach rests on the notion of explicit acceptance conditions, as first studied in Abstract Dialectical Frameworks (ADFs). The acceptance conditions used here are functions from multisets of labels to truth values. We define various Dung-style semantics for argument graphs. We also introduce a pattern language for specifying acceptance functions. Moreover, we show how argument graphs can be compiled to ADFs, thus providing an automatic evaluation tool via existing ADF implementations. Finally, we discuss complexity issues.