Please find the abstracts below the schedule and note that some of the slides are made available here, internally.
2016 Oct 11: Rewriting Ontology-Mediated Queries (Carsten Lutz)
2016 Oct 18: Trace-equivalence of stochastic transition systems with continuous state spaces (Daniel Gburek)
2016 Oct 18: Reflecting the first year of QuantLA vol.2
2016 Oct 25: Knowledge Representation for General Problem-Solving Robots (Michael Thielscher, University of New South Wales)
2016 Nov 01: Let's make FL0 great again (Maximilian Pensel)
2016 Nov 01: Preferential Query Answering in the Semantic Web with Possibilistic Networks (Stefan Borgwardt)
2016 Nov 22: Tree Automata in Parsing and Machine Translation (Andreas Maletti)
2016 Nov 29: Submodular semilinear valued constraint satisfaction problems (Caterina Viola)
2016 Nov 29: Tree Automata with Generalized Transition Relations (Sven Dziadek)
2016 Dec 06: Context Unification is in PSPACE (Artur Jeż, University of Wroclaw)
2016 Dec 13: Up-To Techniques for Weighted Systems (Barbara König)
2017 Jan 10: Proof Complexity of Quantified Boolean Formulas (Olaf Beyersdorff, University of Leeds)
2017 Jan 17: Inconsistency handling in ontology-mediated query answering (Camille Bourgaux)
2017 Jan 24: Metric-based State Space Reduction for Markov Chains (Giorgio Bacci)
2017 Jan 31: Seminar on Good Scientific Practice
Carsten Lutz: Rewriting Ontology-Mediated Queries
In the context of querying data under ontologies, a central technique is to rewrite ontology-mediated queries (OMQs) into more classical database query languages. Although the desired rewritten queries are typically not guaranteed to exist, experiments suggest that they do often exist in practical cases. This prompts the study of the limits of rewritability. In this presentation, I will survey results about characterizing OMQs that are rewritable into first-order queries, into linear Datalog, and into Datalog, about the complexity of deciding rewritability, and about the shape of rewritten queries. In particular, I will consider OMQs where the ontologies are formulated in description logics such as EL, ALC, and extensions thereof.
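As a minimal illustration of the idea (a hypothetical example, not taken from the talk), consider the axiom GradStudent ⊑ Student: the OMQ asking for all students can be rewritten into a plain union of two atomic queries, which an ordinary database engine can evaluate without any ontology reasoning.

```python
# Hypothetical toy example of a first-order rewriting of an OMQ.
# The EL-style axiom  GradStudent ⊑ Student  lets us answer the query
# q(x) = Student(x) by the union-of-queries rewriting
#   Student(x) OR GradStudent(x),
# evaluated directly on the data.

data = {
    "Student": {"alice"},
    "GradStudent": {"bob"},  # implied to be a Student by the ontology
}

def answer_rewritten_query(db):
    # The rewriting compiles the ontology into the query itself,
    # so plain database evaluation suffices.
    return db["Student"] | db["GradStudent"]

print(sorted(answer_rewritten_query(data)))  # ['alice', 'bob']
```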
Daniel Gburek: Trace-equivalence of stochastic transition systems with continuous state spaces
Bisimulation equivalence aims to relate stochastic systems with the same branching structure. When switching to the linear-time view, systems are considered equivalent provided the sets of the corresponding trace distributions coincide. Proving that bisimulation equivalence is finer than trace-distribution equivalence is related to a scheduler synthesis problem: one starts with two bisimilar systems and a scheduler for one of them, and the task is to generate a scheduler for the other system with the same trace distribution. Such proofs are particularly challenging for stochastic models with nondeterminism and continuous state spaces, where the measurability of schedulers has to be taken into account. In this talk I will sketch thoughts towards a proof that stochastic transition systems with continuous state spaces are trace equivalent provided there exists a countably-separated bisimulation between them. This is joint work with Christel Baier.
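To make the linear-time view concrete, here is a finite-state toy sketch (my own example, not from the talk): states s1 and s2 of chain A behave identically, so A is bisimilar to the lumped chain B, and both induce the same trace distribution.

```python
# Two labelled Markov chains that are bisimilar (s1 and s2 of chain A are
# behaviourally identical, and lumping them yields chain B) and therefore
# have the same trace distribution.
# Transitions: state -> list of (probability, label, successor).
from collections import defaultdict

chain_a = {
    "s0": [(0.5, "a", "s1"), (0.5, "a", "s2")],
    "s1": [(1.0, "b", "stop")],
    "s2": [(1.0, "b", "stop")],
    "stop": [],
}
chain_b = {
    "t0": [(1.0, "a", "t1")],
    "t1": [(1.0, "b", "stop")],
    "stop": [],
}

def trace_distribution(chain, start, length):
    """Probability of each label sequence of the given length."""
    dist = defaultdict(float)
    def walk(state, trace, prob, k):
        if k == 0:
            dist[trace] += prob
            return
        for p, lab, succ in chain[state]:
            walk(succ, trace + (lab,), prob * p, k - 1)
    walk(start, (), 1.0, length)
    return dict(dist)

print(trace_distribution(chain_a, "s0", 2))  # {('a', 'b'): 1.0}
print(trace_distribution(chain_b, "t0", 2))  # the same distribution
```

The interesting part of the talk is precisely that this correspondence, trivial here, requires measurability arguments once state spaces are continuous and schedulers resolve nondeterminism.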
Michael Thielscher: Knowledge Representation for General Problem-Solving Robots
A general problem-solving robot is able to understand the representation of a new task and to successfully tackle it without human intervention. This requires architectures for cognitive robotics that integrate qualitative and quantitative representations. We present a formal framework for the design of control hierarchies along with an instantiation for a real Baxter robot that combines high-level reasoning and planning with a physics simulator to provide spatial knowledge and low-level control nodes for motors and sensor processing. As an example of a general problem representation technique, we present and discuss a formal language for describing so-called epistemic games to a general problem-solving robot. These are characterised by rules that depend on what players can and cannot deduce from the information they get during gameplay.
Maximilian Pensel: Let's make FL0 great again
In early description logic (DL) systems, universal quantification and conjunction were considered the prominent ways of connecting concept descriptions and their individuals. The DL allowing only for these constructors and the top concept is called FL0. The popularity of FL0 decreased because, even though it is considered light-weight, there are more expressive DLs belonging to the same class of computational complexity. Its light-weight sibling EL, which allows for existential instead of universal quantification, became the preferred DL for practical use due to its tractable reasoning complexity. All of this led to research on and publication of results for FL0 being frowned upon, not to mention the subjective debate over which of the quantifiers is "more useful" for describing concepts. From a theoretical point of view, however, FL0 allows for beautiful translations into other well-researched mathematical paradigms, such as language equations and automata, which are usually not possible for many other DLs.
This talk aims to make FL0, and DL in general, more accessible to the audience. Among other things, new semantics will be proposed for universal quantification. These semantics should increase the motivation for practical use of FL0 while retaining its mathematical magnificence.
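One of the translations alluded to above can be sketched concretely (my own toy code): an FL0 concept description in the normal form ∀w1.A1 ⊓ … ⊓ ∀wk.Ak, where the wi are words over role names, is just a map from concept names to finite sets of role words, and subsumption between concept descriptions (without a TBox) reduces to language inclusion.

```python
# Language-based view of FL0 concept descriptions (toy sketch).
# E.g. the concept  ∀r.∀s.A ⊓ ∀r.B ⊓ B  becomes
#   {"A": {("r", "s")}, "B": {("r",), ()}}
# where () is the empty role word.  Without a TBox, C is subsumed by D
# iff D's language is contained in C's, for every concept name.

def subsumed_by(c, d):
    """C ⊑ D iff, for each concept name A, L_D(A) ⊆ L_C(A)."""
    return all(words <= c.get(name, set()) for name, words in d.items())

c = {"A": {("r", "s")}, "B": {("r",), ()}}  # ∀r.∀s.A ⊓ ∀r.B ⊓ B
d = {"B": {("r",)}}                          # ∀r.B

print(subsumed_by(c, d))  # True: C entails ∀r.B
print(subsumed_by(d, c))  # False
```

Note the direction: the more specific concept has the larger language, so subsumption flips the inclusion.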
Stefan Borgwardt: Preferential Query Answering in the Semantic Web with Possibilistic Networks
This talk explores how ontological knowledge expressed via existential rules can be combined with possibilistic networks to represent qualitative preferences along with domain knowledge. These combinations are called ontological possibilistic networks (OP-nets). To realize preference-based answering of conjunctive queries (CQs) under OP-nets, the notion of skyline answers to CQs will be introduced. I will discuss complexity results for deciding CQ skyline membership for OP-nets, and compare this with related formalisms.
Andreas Maletti: Tree Automata in Parsing and Machine Translation
We will discuss how tree automata were rediscovered in the area of statistical parsing of natural language sentences and demonstrate that some techniques developed in that area might also be beneficial in automata theory. Using the example of syntax-based machine translation, we will demonstrate the other direction, showing how automata theory can provide solutions to problems in natural language processing. By identifying the exact expressive power in terms of standard models and known closure properties for them, we developed a syntax-based translation system that can translate not only a single parse, but the full parse forest delivered by the parser.
Caterina Viola: Submodular semilinear valued constraint satisfaction problems
Submodular functions are an important class of cost functions in optimisation theory. Given a totally ordered domain D, we say that a rational-valued function f on D^n is submodular if, for all x, y in D^n, it holds that f(x)+f(y) ≥ f(max(x,y))+f(min(x,y)), where max and min are applied componentwise. I focus on submodular cost functions f that are semilinear, that is, the underlying domain D is the set of rational numbers and f is first-order definable in (ℚ;<,+,1). Let us consider a (finite) set Gamma of submodular semilinear cost functions. An instance of the valued constraint satisfaction problem (VCSP) for Gamma is specified by a finite set of variables and by an objective function which is given as the sum of applications of the cost functions in Gamma to some of the variables. The goal is to find an assignment to the variables minimising the objective function. Our strategy to obtain a polynomial-time algorithm solving the VCSP for submodular semilinear cost functions is as follows. In a first step, we show that all submodular semilinear functions have an explicit syntactic characterisation. In a second step, we show how to reduce the problem to submodular VCSPs over finite domains, which are known to be polynomial-time solvable.
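The submodularity inequality from the abstract can be checked numerically on finite samples (illustrative code only, not from the talk): f(x) = max(x1,…,xn) is submodular (and semilinear), while the componentwise analogue with min violates the inequality on some pairs.

```python
# Sample-based check of  f(x)+f(y) >= f(max(x,y)) + f(min(x,y)),
# with max and min applied componentwise, as in the abstract.
from itertools import product

def is_submodular_on(f, points):
    """Test the submodularity inequality on all pairs from a finite sample."""
    for x, y in product(points, repeat=2):
        join = tuple(max(a, b) for a, b in zip(x, y))
        meet = tuple(min(a, b) for a, b in zip(x, y))
        if f(x) + f(y) < f(join) + f(meet):
            return False
    return True

f = lambda x: max(x)  # submodular and semilinear
g = lambda x: min(x)  # supermodular: violates the inequality
sample = [(0, 0), (1, 3), (2, 1), (-1, 5)]

print(is_submodular_on(f, sample))  # True
print(is_submodular_on(g, sample))  # False, e.g. on (1,3) and (2,1)
```

Of course a finite sample can only refute submodularity, never prove it; the talk's syntactic characterisation is what makes the property decidable.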
Sven Dziadek: Tree Automata with Generalized Transition Relations
This talk investigates how finite tree automata behave when transitions are made more expressive. Two extensions are considered: a new finite tree automaton model with transitions of arbitrary size, and a model where a regular tree language defines the set of transitions. The first extension allows transitions in the shape of trees to have any size. The main difference to conventional tree automata is that larger transitions can overlap. Nevertheless, the new tree automaton model has the same expressive power as conventional tree automata models. The main challenge of the provided proof is eliminating overlaps.
The second extension contains a separate tree automaton called the transition automaton. The language recognized by the transition automaton defines the set of transitions. Consequently, transitions are no longer bounded in size and amount. Nevertheless, this second tree automaton model is not more expressive than conventional models. A constructive proof is provided that uses a conventional tree automaton for simulating the extended tree automaton together with its transition automaton.
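For contrast with the two generalized models, a conventional bottom-up tree automaton, where each transition reads exactly one symbol together with the states of the children, fits in a few lines (illustrative sketch, not from the talk). Here the automaton accepts exactly the Boolean expression trees that evaluate to true.

```python
# A conventional bottom-up finite tree automaton over Boolean expression
# trees.  Trees are tuples (symbol, child, ...); states are 0 and 1.

delta = {  # (symbol, child states...) -> state
    ("t",): 1, ("f",): 0,
    ("and", 0, 0): 0, ("and", 0, 1): 0, ("and", 1, 0): 0, ("and", 1, 1): 1,
    ("or", 0, 0): 0, ("or", 0, 1): 1, ("or", 1, 0): 1, ("or", 1, 1): 1,
}
final = {1}

def run(tree):
    """Assign a state bottom-up; each transition reads a single symbol."""
    symbol, *children = tree
    return delta[(symbol, *map(run, children))]

tree = ("or", ("f",), ("and", ("t",), ("t",)))
print(run(tree) in final)  # True: the tree evaluates to true
```

In the first extension of the talk a single transition may instead match a whole subtree pattern, and in the second the (then infinite) table `delta` is itself given by a tree automaton.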
Artur Jeż: Context Unification is in PSPACE
Contexts are terms with one “hole”, i.e. a place in which we can substitute an argument. In context unification we are given an equation over terms with variables representing contexts and ask about the satisfiability of such an equation. This problem is a natural subvariant of second-order unification, which is undecidable, and a generalization of word equations, which are decidable. It is one of the few natural problems between those two whose decidability status remained open. We show that context unification is in PSPACE.
This result is obtained by extending the recompression technique, recently developed by the speaker and used in particular to obtain a new PSPACE algorithm for satisfiability of word equations, to context unification. Recompression is based on performing simple compression rules (replacing pairs of neighbouring function symbols), which are (conceptually) applied to the solution of the context equation, while the equation is modified so that such compression steps can in fact be performed directly on the equation, without knowledge of the actual solution. The interesting feature of this technique is that it ignores the problem's inner combinatorics and instead focuses on, and modifies, the way the instance is represented.
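The basic pair-compression step can be illustrated on plain strings (my own sketch; recompression proper applies such steps to both sides of the equation and, conceptually, to the hidden solution simultaneously).

```python
# One pair-compression round: every occurrence of the fixed pair ab is
# replaced by a fresh letter, shortening the word.

def compress_pair(word, a, b, fresh):
    """Replace every (non-overlapping) occurrence of the pair ab by fresh."""
    out = []
    i = 0
    while i < len(word):
        if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
            out.append(fresh)
            i += 2
        else:
            out.append(word[i])
            i += 1
    return "".join(out)

print(compress_pair("abab", "a", "b", "c"))   # 'cc'
print(compress_pair("aabba", "a", "b", "c"))  # 'acba'
```

The difficulty handled in the talk is performing such rounds on an equation with variables, where occurrences of the pair may be split between a variable and its surrounding symbols.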
Barbara König: Up-To Techniques for Weighted Systems
We show how up-to techniques for (bi-)similarity can be used in the setting of weighted systems. The problems we consider are language equivalence, language inclusion and the threshold problem (also known as universality problem) for weighted automata.
We build a bisimulation relation on the fly and work up-to congruence and up-to similarity. This requires determining whether a pair of vectors (over a semiring) is in the congruence closure of a given relation on vectors. We consider this problem for rings and l-monoids; for the latter we provide a rewriting algorithm and show its confluence and termination.
We then explain how to apply these up-to techniques to weighted automata and provide runtime results.
Joint work with Filippo Bonchi & Sebastian Küpper.
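Over a field, the congruence closure of a relation on vectors is just the linear span of the differences, which makes the up-to technique easy to sketch (my own toy code; the talk's setting also covers rings and l-monoids, where this closure step is the hard part).

```python
# On-the-fly language equivalence for a weighted automaton over the
# rationals, pruning pairs up-to congruence (= linear span of differences).
# States 0,1 form one automaton and state 2 another (block diagonal);
# both assign weight 2^(n+1) to the word a^n over a one-letter alphabet.

M = ((1, 1, 0),
     (1, 1, 0),
     (0, 0, 2))      # transition matrix of the single letter a
out = (1, 1, 1)      # output (final-weight) vector

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def step(u, mat):
    """One transition: the row vector u multiplied by the matrix."""
    return tuple(sum(u[i] * mat[i][j] for i in range(len(u)))
                 for j in range(len(mat)))

def reduce_against(v, basis):
    """Integer-scaled Gaussian reduction of v against an echelon basis."""
    v = list(v)
    for b in basis:
        p = next(i for i, x in enumerate(b) if x != 0)
        if v[p] != 0:
            v = [x * b[p] - v[p] * y for x, y in zip(v, b)]
    return v

def equivalent(u0, v0, mat, output):
    """Do the two configurations accept the same weighted series?"""
    basis, todo = [], [(u0, v0)]
    while todo:
        u, v = todo.pop()
        diff = reduce_against([a - b for a, b in zip(u, v)], basis)
        if all(x == 0 for x in diff):
            continue                    # up-to step: pair already covered
        if dot(diff, out if output is None else output) != 0:
            return False                # a distinguishing word exists
        basis.append(diff)
        todo.append((step(u, mat), step(v, mat)))
    return True

print(equivalent((1, 1, 0), (0, 0, 2), M, out))  # True: same series
print(equivalent((1, 1, 0), (0, 0, 1), M, out))  # False
```

The pruning keeps the search finite: the basis can grow at most up to the dimension of the state space, whereas the naive pair exploration need not terminate.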
Olaf Beyersdorff: Proof Complexity of Quantified Boolean Formulas
The main aim in proof complexity is to understand the complexity of theorem proving. Arguably, what is even more important is to establish techniques for lower bounds, and the recent history of computational complexity speaks volumes on how difficult it is to develop general lower bound techniques. Understanding the size of proofs is important for at least two reasons. The first is its tight relation to the separation of complexity classes: NP vs. coNP for propositional proofs, and NP vs. PSPACE in the case of proof systems for quantified Boolean formulas (QBF). The second reason to study lower bounds for proofs is the analysis of SAT and QBF solvers: powerful algorithms that efficiently solve the classically hard problems of SAT and QBF for large classes of practically relevant formulas.
In this talk I give an overview of the relatively young field of QBF proof complexity. I explain the main resolution-based proof systems for QBF, modelling CDCL and expansion-based solving. In the main part of the talk I will give an overview of current lower bound techniques (and their limitations) for QBF systems. In particular, I exhibit a new and elegant technique, based on strategy extraction, for showing lower bounds in QBF proof systems. This technique provides a direct transfer of circuit lower bounds to proof-length lower bounds.
Camille Bourgaux: Inconsistency handling in ontology-mediated query answering
The problem of querying description logic knowledge bases using database-style queries (in particular, conjunctive queries) has been a major focus of recent description logic research. An important issue that arises in this context is how to handle the case in which the data is inconsistent with the ontology. Indeed, since in classical logic an inconsistent theory implies every formula, inconsistency-tolerant semantics are needed to obtain meaningful answers. This work aims to develop methods for dealing with inconsistent description logic knowledge bases using three natural semantics (AR, IAR, and brave) previously proposed in the literature, all of which rely on the notion of a repair: an inclusion-maximal subset of the data that is consistent with the ontology. In our framework, these three semantics are used jointly to identify answers with different levels of confidence. In addition to developing efficient algorithms for query answering over inconsistent DL-Lite knowledge bases, we address three problems that should support the adoption of this framework: (i) query result explanation, to help the user understand why a given answer was (or was not) obtained under one of the three semantics; (ii) query-driven repairing, to exploit user feedback about errors or omissions in the query results to improve the data quality; and (iii) preferred repair semantics, to take into account the reliability of the data.
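The repair-based semantics can be prototyped directly by enumeration (my own illustration; practical systems for DL-Lite use query rewriting instead, since the number of repairs can be exponential). The ontology is abstracted into a consistency oracle, and a "query" is simply a fact to look up.

```python
# Toy AR / IAR / brave semantics via explicit repair enumeration.
from itertools import combinations

def repairs(facts, consistent):
    """All inclusion-maximal subsets of `facts` that are consistent."""
    cons = [set(s) for r in range(len(facts) + 1)
            for s in combinations(sorted(facts), r) if consistent(set(s))]
    return [s for s in cons if not any(s < t for t in cons)]

def holds(q, facts):          # atomic lookup; real CQs need more machinery
    return q in facts

def ar(q, reps):    return all(holds(q, r) for r in reps)     # every repair
def brave(q, reps): return any(holds(q, r) for r in reps)     # some repair
def iar(q, reps):   return holds(q, set.intersection(*reps))  # intersection

facts = {"Teaches(ann,cs1)", "Teaches(bob,cs1)", "Prof(ann)"}
# Hypothetical constraint: a course has at most one teacher.
consistent = lambda s: not {"Teaches(ann,cs1)", "Teaches(bob,cs1)"} <= s

reps = repairs(facts, consistent)  # two repairs, one per Teaches fact
print(ar("Prof(ann)", reps), iar("Prof(ann)", reps))                  # True True
print(brave("Teaches(bob,cs1)", reps), ar("Teaches(bob,cs1)", reps))  # True False
```

This also shows the confidence ordering mentioned above: IAR entails AR, which entails brave.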
Giorgio Bacci: Metric-based State Space Reduction for Markov Chains
The aim of the work is to address the state-space reduction problem on Markov Chains (MCs) using a behavioral metric-based approach. Given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By taking as distance the bisimilarity metric of Desharnais et al., we show that the above problem can be solved as a bilinear program and prove that its threshold problem is in PSPACE and NP-hard. Finally, we present an approach based on an expectation maximization algorithm that provides suboptimal solutions. Experiments suggest that our method gives a practical approach that outperforms the bilinear program implementation run on state-of-the-art bilinear solvers.
Joint work with Giovanni Bacci, Kim G. Larsen, and Radu Mardare.
(Paper available at: http://people.cs.aau.dk/~grbacci/Papers/mc-approx-extended.pdf)