It is possible to subscribe to a daily updated calendar that contains appointments from the table below using the following URIs¹:
- For all seminars: http://www.math.tu-dresden.de/~mottet/cal/quantla.ics
- For all large seminars (PIs): http://www.math.tu-dresden.de/~mottet/cal/quantla-no-student.ics
How to subscribe to calendars may differ between applications; in the macOS Calendar app, go to File -> New Calendar Subscription... and supply one of the URIs given above.
The color coding of the schedule indicates who participates in each session:
- all participate
- doctoral students participate
Please find the abstracts below the schedule and note that some of the slides are made available internally here.
¹ This has been made possible by Antoine Mottet.
Abstracts
Johan Thapper: The Valued Constraint Satisfaction Problem
The valued constraint satisfaction problem (VCSP) is a framework for studying the computational complexity of classes of discrete optimisation problems. It was introduced as an optimisation version of the constraint satisfaction problem (CSP), where the goal is to assign labels to variables under some given set of constraints. The VCSP captures both Max-CSP-type problems, where one wants to optimise the number of satisfied constraints, and integer-programming-type problems, where one adds an objective function to an ordinary CSP to indicate the desirability of each assignment.
As the VCSP is hard in general, any meaningful study of its complexity requires restricting the problem in some way. One can, for example, restrict the type of cost functions that are allowed to appear in the description of an instance. Another possibility is to restrict the structure of the instance, that is, the way that cost functions are combined. I will talk about problems, algorithms, and hardness results in the context of various such restrictions.
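As a small, made-up illustration of the framework (not an example from the talk), the following sketch encodes a VCSP instance with three Boolean variables, a soft constraint, a hard (crisp) constraint, and a unary objective, and finds a minimum-cost assignment by brute force; all variable names and cost functions are invented for the example.

```python
# Illustrative sketch (invented instance): a tiny VCSP solved by brute force.
# A VCSP instance consists of variables, a label domain, and weighted cost
# functions over tuples of variables; the goal is an assignment of minimum
# total cost.
from itertools import product

domain = [0, 1]
variables = ["x", "y", "z"]

# Each constraint is (scope, cost_function); soft constraints return finite
# costs, hard constraints return float("inf") for forbidden tuples.
constraints = [
    (("x", "y"), lambda a, b: 0 if a != b else 1),             # soft "not equal"
    (("y", "z"), lambda a, b: 0 if a <= b else float("inf")),  # hard "y <= z"
    (("z",),     lambda a: 2 * a),                             # unary objective
]

def cost(assignment):
    return sum(f(*(assignment[v] for v in scope)) for scope, f in constraints)

best = min(
    (dict(zip(variables, labels)) for labels in product(domain, repeat=len(variables))),
    key=cost,
)
print(best, cost(best))
```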
Simon Jantsch: From Linear Temporal Logic to Unambiguous Büchi Automata
Translations from Linear Temporal Logic (LTL) to various types of automata on infinite words have been studied extensively. They are a central part of the automata-theoretic approach to model checking. The idea is to reduce the question of whether a system satisfies a property, which is given as an LTL formula, to the emptiness check of an automaton. A standard automaton model used to represent LTL formulas is the non-deterministic Büchi automaton (NBA).
In this talk, I will introduce an algorithm for translating LTL to NBA and show how it can be adapted to produce unambiguous Büchi automata (UBA). Unambiguity is a restricted form of nondeterminism and may be used in cases where determinism is required in classical approaches. For example, a polynomial-time algorithm for model checking Markov chains against UBA properties is known, whereas no such algorithm is known for properties specified by general NBA.
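As a rough illustration (not the construction from the talk), the following sketch encodes two Büchi automata for the LTL property F a ("eventually a") over the alphabet {a, b}: the first may defer moving to the accepting state and is ambiguous, the second is unambiguous (here even deterministic). Counting runs on a finite prefix is only a crude proxy, since ambiguity of Büchi automata concerns accepting runs on infinite words.

```python
# Illustrative sketch: two Büchi automata for "F a" (eventually a) over {a, b}.
AMBIGUOUS_NBA = {
    "initial": "q0",
    "accepting": {"q1"},
    "delta": {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
              ("q1", "a"): {"q1"}, ("q1", "b"): {"q1"}},
}
UNAMBIGUOUS_NBA = {
    "initial": "q0",
    "accepting": {"q1"},
    "delta": {("q0", "a"): {"q1"}, ("q0", "b"): {"q0"},
              ("q1", "a"): {"q1"}, ("q1", "b"): {"q1"}},
}

def runs_on_prefix(aut, word):
    """Count runs on a finite prefix that end in an accepting state.

    Only a finite-word proxy: a Büchi automaton is ambiguous if some
    infinite word has more than one accepting run."""
    runs = [[aut["initial"]]]
    for letter in word:
        runs = [r + [q] for r in runs for q in aut["delta"].get((r[-1], letter), set())]
    return sum(1 for r in runs if r[-1] in aut["accepting"])

print(runs_on_prefix(AMBIGUOUS_NBA, "aba"))    # 2 such runs
print(runs_on_prefix(UNAMBIGUOUS_NBA, "aba"))  # exactly 1 run
```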
Erik Paul: Finite Sequentiality of Unambiguous Max-Plus Tree Automata
In this talk, I will provide some insights into the proof of the decidability of the finite sequentiality problem for unambiguous max-plus tree automata. A max-plus tree automaton is called unambiguous if there is at most one accepting run on every tree. The finite sequentiality problem asks whether, for a given max-plus tree automaton, there exist finitely many deterministic max-plus tree automata whose pointwise maximum is equivalent to the given automaton.
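To illustrate the weighted model (in a simplified form, over words instead of trees, and with an invented automaton), the following sketch computes the behaviour of a small max-plus word automaton: the weight of a word is the maximum, over all runs, of the sum of the transition weights along the run.

```python
# Illustrative sketch (words instead of trees, invented weights): a max-plus
# word automaton over the semiring (R ∪ {-inf}, max, +).
NEG_INF = float("-inf")

# weights[letter][p][q] is the weight of the transition p --letter--> q.
weights = {
    "a": [[1, 0], [NEG_INF, 2]],
    "b": [[NEG_INF, 1], [0, NEG_INF]],
}
initial = [0, NEG_INF]   # initial weight of each state
final = [NEG_INF, 0]     # final weight of each state

def weight(word):
    vec = initial[:]
    for letter in word:
        m = weights[letter]
        vec = [max(vec[p] + m[p][q] for p in range(len(vec))) for q in range(len(vec))]
    return max(v + f for v, f in zip(vec, final))

print(weight("aab"))  # maximum weight over all runs on "aab"
```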
Lena Schiffer: Weighted Finite-State Transducers for OCR Post-Correction
Although optical character recognition (OCR) quality has improved substantially over the last decade, it still struggles on historic material. Instead of developing historical models for existing OCR, it can be prudent to first build complementary tools for post-correction, which draw on all visual and contextual cues of the OCR itself but maintain their own comprehensive models. Many existing post-correction approaches employ weighted finite-state transducers (WFST), which allow for compact storage and processing of symbol sequences. Using prior knowledge on what mistakes OCR typically makes, how new words are formed grammatically, and which words are likely to appear next to each other, a post-correction system can be modelled as the composition of single transducers representing input character hypotheses, an error model, a lexicon, and a word-level language model, each weighted with probabilities.
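As a toy illustration of the weighted composition idea (not the OCR-D implementation, and without a real WFST library), the following sketch scores each lexicon entry with an edit-distance-based error cost plus a language-model cost and returns the cheapest correction; the lexicon and its probabilities are invented.

```python
# Illustrative sketch: error model + language model combined as additive costs
# (i.e. in the tropical semiring, as in a WFST composition), approximated here
# without an actual transducer library.
import math

LEXICON = {"the": 0.05, "them": 0.01, "then": 0.015, "thorn": 0.001}  # invented

def edit_cost(a, b):
    """Levenshtein distance, used as a crude character error model."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(ocr_token, error_weight=1.0):
    # Total cost = error-model cost + negative log word probability.
    scored = {w: error_weight * edit_cost(ocr_token, w) - math.log(p)
              for w, p in LEXICON.items()}
    return min(scored, key=scored.get)

print(correct("thc"))  # a typical OCR confusion of 'e' and 'c' -> "the"
```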
In this talk, I will discuss the basics and challenges of OCR post-correction using WFST in the context of the coordinated research effort OCR-D, which is aimed at developing OCR methods for printed historic material in preparation for the automated mass-digitisation of prints from the German-speaking area from the 16th to the 19th century.
Erik Paul: A Feferman-Vaught Decomposition Theorem for Weighted MSO Logic
In this talk, we consider a weighted Feferman-Vaught decomposition theorem for disjoint unions and products of finite structures. The classical Feferman-Vaught Theorem describes how the evaluation of a first-order sentence in a generalized product of relational structures can be reduced to the evaluation of sentences in the contributing structures and the index structure. The logic we employ for our weighted extension is based on the weighted MSO logic introduced by Droste and Gastin to obtain a Büchi-type result for weighted automata. We show that, for disjoint unions and products of structures, the evaluation of formulas from two respective fragments of the logic can be reduced to the evaluation of formulas in the contributing structures. We also prove that the respective restrictions are necessary. Surprisingly, for the case of disjoint unions, the fragment is the same as the one used in the Büchi-type result for weighted automata. In fact, even the formulas used to show that the respective restrictions are necessary are the same in both cases. However, here proving that they do not allow for a Feferman-Vaught-like decomposition is more complex and employs Ramsey's Theorem.
Claudia Schulz: Which arguments are responsible for undecisiveness in argumentation semantics?
Different semantics of abstract Argumentation Frameworks (AFs) provide different levels of decisiveness for reasoning about the acceptability of conflicting arguments. The stable semantics is useful for applications requiring a high level of decisiveness, as it assigns to each argument either the label "accepted" or the label "rejected". Unfortunately, stable labellings are not guaranteed to exist, thus raising the question as to which parts of an AF are responsible for this non-existence. In this talk, I will discuss this issue by investigating a more general question concerning preferred labellings (which may be less decisive than stable labellings but are always guaranteed to exist): why a given preferred labelling may fail to be stable and thus leave some arguments undecided.
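As a small illustration of the non-existence phenomenon (not an example from the talk), the following sketch searches for stable extensions of a three-argument attack cycle by brute force and finds none.

```python
# Illustrative sketch: brute-force search for stable extensions of a small
# abstract argumentation framework.  A set S is stable if it is conflict-free
# and attacks every argument outside S; the odd attack cycle below has no
# stable extension (and hence no stable labelling).
from itertools import combinations

ARGS = {"a", "b", "c"}
ATTACKS = {("a", "b"), ("b", "c"), ("c", "a")}  # 3-cycle

def is_stable(s):
    conflict_free = not any((x, y) in ATTACKS for x in s for y in s)
    attacks_rest = all(any((x, y) in ATTACKS for x in s) for y in ARGS - s)
    return conflict_free and attacks_rest

stable = [set(s) for r in range(len(ARGS) + 1)
          for s in combinations(sorted(ARGS), r) if is_stable(set(s))]
print(stable)  # [] -- no stable extension exists for the odd cycle
```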
Holger Dell: Lovász Meets Weisfeiler and Leman
The Weisfeiler-Leman algorithm is a simple polynomial-time algorithm that solves the graph isomorphism problem heuristically, that is, it may sometimes fail to distinguish two graphs G,H even though they are not isomorphic. Which pairs of graphs G,H are correctly distinguished by the algorithm? There are various known characterizations, using counting logics, using the Sherali-Adams hierarchy in linear programming, and using pebble games. We add a new characterization that is based on classic works of Lovász on the numbers of graph homomorphisms. In the talk, we also discuss some relations to subgraph counting algorithms as well as machine learning.
(Joint work with Martin Grohe and Gaurav Rattan)
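As a small illustration of the algorithm (the two graphs are a standard textbook example, not taken from the paper), the following sketch runs the 1-dimensional Weisfeiler-Leman algorithm (colour refinement) on the disjoint union of two graphs and compares the resulting colour histograms.

```python
# Illustrative sketch: 1-dimensional Weisfeiler-Leman (colour refinement).
# Different stable colour histograms certify non-isomorphism; equal
# histograms are inconclusive.
def refine(adj, rounds):
    colours = {v: 0 for v in adj}
    for _ in range(rounds):
        signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                      for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {v: relabel[signatures[v]] for v in adj}
    return colours

def wl_distinguishes(adj1, adj2):
    """Refine the disjoint union of both graphs and compare colour histograms."""
    union = {("G", v): [("G", u) for u in ns] for v, ns in adj1.items()}
    union.update({("H", v): [("H", u) for u in ns] for v, ns in adj2.items()})
    colours = refine(union, rounds=len(union))
    hist = lambda tag: sorted(c for (t, _), c in colours.items() if t == tag)
    return hist("G") != hist("H")

# A 6-cycle versus two disjoint triangles: both are 2-regular, so colour
# refinement cannot tell them apart, although they are not isomorphic.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_distinguishes(cycle6, triangles))  # False: 1-WL fails here
```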
Markus Lohrey: Querying regular languages over sliding windows
Sliding-window streaming algorithms get as input a stream of data values and have to answer queries about the last n values for a certain window size n. In the talk we consider queries that are given by regular languages. More precisely, we consider the so-called sliding window word problem for a language L: given a data stream of symbols a_1 a_2 a_3 a_4 ..., answer at every time instant t whether a_{t-n+1} ... a_t belongs to L. We are mainly interested in the space complexity of this problem measured in the window length n. For regular languages, we prove that the space complexity of the sliding window word problem is either constant, logarithmic, or linear. Moreover, for the constant and logarithmic space classes we provide very natural characterizations: for every regular language L, the sliding window word problem can be solved in
(i) constant space if and only if L is a boolean combination of regular length languages and suffix-testable languages, and in
(ii) logarithmic space if and only if L is a boolean combination of regular length languages and regular left ideals.
If we allow randomized streaming algorithms with a two-sided error, then the above space trichotomy becomes a space quatrochotomy: for every regular language, the space complexity of the sliding window word problem is either constant, doubly logarithmic, logarithmic, or linear.
(joint work with Moses Ganardi and Danny Hucke)
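As a simple illustration of the space issue (not the algorithms or characterizations from the talk), the following sketch contrasts the naive linear-space sliding-window algorithm with a constant-space one for the suffix-testable language Sigma* a; for simplicity, while fewer than n symbols have arrived the "window" is taken to be the whole prefix read so far.

```python
# Illustrative sketch: naive linear-space sliding window versus constant
# space for the suffix-testable language "words ending in a".
from collections import deque

def sliding_window_naive(stream, n, in_language):
    """Linear space: keep the last n symbols and re-check membership."""
    window = deque(maxlen=n)
    for symbol in stream:
        window.append(symbol)
        yield in_language("".join(window))

def sliding_window_ends_in_a(stream):
    """Constant space: only the last symbol read is remembered."""
    last = None
    for symbol in stream:
        last = symbol
        yield last == "a"

stream = "abbaba"
print(list(sliding_window_naive(stream, 3, lambda w: w.endswith("a"))))
print(list(sliding_window_ends_in_a(stream)))  # same answers, O(1) space
```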
David Carral: Reasoning over Existential Rules with Acyclicity Notions
The chase is a sound and complete (albeit non-terminating) algorithm for conjunctive query answering over ontologies of existential rules. On the theoretical side, we develop sufficient conditions to guarantee its termination (i.e., acyclicity notions) and study several restrictions that furthermore ensure its polynomiality. On the practical side, we empirically study the generality of these conditions and extend the Datalog engine VLog to develop an efficient implementation of the chase. Furthermore, we conduct an extensive evaluation and show that VLog can compete with the state of the art regarding runtime, scalability, and memory efficiency.
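As a toy illustration of the chase and of why acyclicity matters (not VLog, and with invented predicates), the following sketch applies a single existential rule up to a fixed round bound; without the bound the chase would run forever, which is exactly the kind of behaviour acyclicity notions are designed to detect.

```python
# Illustrative sketch: a toy chase for the single existential rule
#     person(x) -> exists y. parent(x, y), person(y).
# The rule is not acyclic, so the chase never terminates; we cut it off
# after a few rounds to make this visible.
from itertools import count

fresh = (f"_n{i}" for i in count())           # generator of labelled nulls

def chase_step(facts):
    new = set()
    for (pred, *args) in facts:
        if pred == "person":
            x = args[0]
            # Apply the rule only if x has no parent yet (restricted chase).
            if not any(f[0] == "parent" and f[1] == x for f in facts | new):
                y = next(fresh)               # invent a fresh null as witness
                new |= {("parent", x, y), ("person", y)}
    return new

facts = {("person", "alice")}
for _ in range(3):                            # bounded: the real chase diverges
    new = chase_step(facts)
    if not new:
        break
    facts |= new
print(sorted(facts))
```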
Simon Knäuer: Constraint Satisfaction over the Random Tournament
The Random Tournament T is the Fraïssé limit of the class of all finite tournament graphs. The talk will introduce this structure and define the Constraint Satisfaction Problem of its first-order reducts. It will give a proof of the complexity dichotomy for CSPs of first-order expansions of T by injective relations.
Bart Bogaerts: Approximation Fixpoint Theory and its Application to Knowledge Representation
Founded in the early 2000s by Denecker, Marek and Truszczynski, approximation fixpoint theory (AFT) is an abstract, algebraic theory to characterize semantics of non-monotonic logics. In recent years, interest in AFT has gradually increased, with applications now ranging from foundations of database theory to abstract argumentation.
In this presentation, we provide an overview of the field: we start from the motivations that drove Denecker et al to the conception of this field, discuss both theoretical developments and applications, and end with potential topics for future improvements.
Of particular interest in this presentation is the application of AFT for (weighted) argumentation.
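As a small, hypothetical illustration of the flavour of AFT (a Fitting/Kripke-Kleene-style approximating operator for an invented normal logic program, not an example from the talk), the following sketch iterates an operator on pairs (surely true, possibly true) until a fixpoint is reached.

```python
# Illustrative sketch: Kleene iteration of an approximating operator on pairs
# (surely_true, possibly_true), in the spirit of AFT; its fixpoint here
# corresponds to the Kripke-Kleene semantics of the invented program below.
RULES = [
    ("p", set(), {"q"}),        # p :- not q.
    ("q", set(), {"p"}),        # q :- not p.
    ("r", set(), set()),        # r.
    ("s", {"r"}, {"q"}),        # s :- r, not q.
]
ATOMS = {a for head, pos, neg in RULES for a in {head} | pos | neg}

def approximator(lower, upper):
    new_lower = {h for h, pos, neg in RULES if pos <= lower and not (neg & upper)}
    new_upper = {h for h, pos, neg in RULES if pos <= upper and not (neg & lower)}
    return new_lower, new_upper

lower, upper = set(), set(ATOMS)   # least precise approximation
while approximator(lower, upper) != (lower, upper):
    lower, upper = approximator(lower, upper)

print("surely true:", sorted(lower))       # ['r']
print("unknown:", sorted(upper - lower))   # ['p', 'q', 's']
```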
Emmanuel Filiot: Computability and Definability of Word Transductions
A finite word transduction is a binary relation over finite words. After an introduction to recent results about word transductions, in particular about logics and automata for word transductions and their connections, the talk introduces a new logic to specify properties of word transductions. This logic will lead us to a new characterization of the robust class of so-called regular transductions. Decidability results for model-checking and synthesising transductions are also presented. Finally, a strong connection between transductions and data words (over an infinite alphabet) is given, leading to a new decidable logic for data words.
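As a small, invented example of a regular word transduction (not from the talk), the following sketch runs a sequential finite-state transducer that collapses consecutive repetitions of the same letter.

```python
# Illustrative sketch: a sequential (input-deterministic) finite-state
# transducer over {a, b} that collapses consecutive repetitions of a letter.
# DELTA maps (state, input letter) to (next state, output word).
DELTA = {
    ("init", "a"): ("seen_a", "a"), ("init", "b"): ("seen_b", "b"),
    ("seen_a", "a"): ("seen_a", ""), ("seen_a", "b"): ("seen_b", "b"),
    ("seen_b", "b"): ("seen_b", ""), ("seen_b", "a"): ("seen_a", "a"),
}

def run(word, state="init"):
    out = []
    for letter in word:
        state, piece = DELTA[(state, letter)]
        out.append(piece)
    return "".join(out)

print(run("aabbba"))  # -> "aba"
```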
James Worrell: Synthesis of Algebraic and Semi-Algebraic Invariants
Invariants are one of the most fundamental and useful notions in automated verification, dynamical systems, and control theory. Current research on invariant generation employs an eclectic array of techniques, including abductive inference, abstract interpretation, constraint solving, interpolation, and machine learning. In this talk we give a select overview of previous work on invariant synthesis, focussing on a simple class of programs with affine updates. Our starting point is a classical algorithm of Michael Karr for discovering affine invariants. We proceed to mention a large number of variations and generalisations of this work to the case of polyhedral and algebraic invariants. In the main body of the talk we describe two recent procedures to compute the strongest algebraic invariant of an affine program and to synthesise semi-algebraic invariants for proving non-termination of linear loops.
This talk is based on joint work with Shaull Almagor, Dmitry Chistikov, Ehud Hrushovski, Amaury Pouly, and Joël Ouaknine.
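As a textbook-style illustration of what an algebraic invariant of an affine program is (not an example or procedure from the talk), the following sketch checks a polynomial invariant of a simple affine loop on its first few iterations.

```python
# Illustrative sketch: the affine loop
#     x, y := x + 1, y + x + 1       (starting from x = y = 0)
# satisfies the algebraic invariant 2*y == x*x + x; we merely check it on the
# first iterations here, whereas the talk is about synthesising such
# invariants automatically.
x, y = 0, 0
for _ in range(10):
    assert 2 * y == x * x + x, (x, y)
    x, y = x + 1, y + x + 1      # simultaneous affine update
print("invariant 2*y == x*x + x holds on the first 10 iterations")
```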
Markus Ulbricht: Repairing non-monotonic knowledge bases
Minimal inconsistent subsets of knowledge bases in monotonic logics play an important role when investigating the reasons for conflicts and trying to handle them. In the context of non-monotonic reasoning this notion is not as meaningful due to the possibility of resolving conflicts by adding information. In this talk we consider inconsistency in non-monotonic logics while taking this issue into account. In particular, we show that the well-known classical duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics even if we allow adding novel information to a given knowledge base. We also discuss infinite knowledge bases.
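As a small illustration of the classical, monotone case of the duality (with an invented propositional knowledge base; the non-monotonic generalisation discussed in the talk is not covered here), the following sketch computes minimal inconsistent subsets, maximal consistent subsets, and minimal hitting sets by brute force and checks that the duality holds.

```python
# Illustrative sketch (classical, monotone case only): the complements of the
# maximal consistent subsets equal the minimal hitting sets of the minimal
# inconsistent subsets for a tiny propositional knowledge base.
from itertools import combinations, product

ATOMS = ["p", "q"]
KB = {  # name -> formula as a function of a truth assignment
    "f1": lambda v: v["p"],
    "f2": lambda v: not v["p"],
    "f3": lambda v: v["q"],
    "f4": lambda v: not v["q"],
}

def consistent(names):
    return any(all(KB[n](dict(zip(ATOMS, vals))) for n in names)
               for vals in product([True, False], repeat=len(ATOMS)))

subsets = [set(s) for r in range(len(KB) + 1) for s in combinations(KB, r)]
mis = [s for s in subsets if not consistent(s)
       and all(consistent(s - {x}) for x in s)]
mcs = [s for s in subsets if consistent(s)
       and all(not consistent(s | {x}) for x in KB if x not in s)]
complements_of_mcs = sorted(sorted(set(KB) - s) for s in mcs)
min_hitting_sets = sorted(sorted(h) for h in subsets
                          if all(h & m for m in mis)
                          and all(not all((h - {x}) & m for m in mis) for x in h))
print(complements_of_mcs == min_hitting_sets)  # True: hitting-set duality
```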
Willi Hieke: On the acceptability of arguments in rank-based semantics
Abstract argumentation frameworks provide a general formalism for reasoning with conflicting arguments, and various semantics have been put forward for them. While the well-known acceptability semantics aim to compute sets of jointly acceptable arguments, rank-based semantics associate with the arguments an order that reflects their individual acceptability in a given argumentation framework. This work suggests an intermediate approach by ranking sets of arguments using existing rank-based semantics. However, rank-based semantics are generally not applicable to cyclic graphs, and therefore a second approach to ranking sets of arguments is introduced as well.
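As a small illustration of a rank-based semantics (the well-known h-categoriser, applied to an invented acyclic argumentation framework; not necessarily the semantics used in this work), the following sketch ranks arguments by the fixpoint of v(a) = 1 / (1 + sum of the values of a's attackers).

```python
# Illustrative sketch: the h-categoriser ranking on a small acyclic
# argumentation framework, computed by fixpoint iteration.
ATTACKERS = {"a": [], "b": ["a"], "c": ["a", "b"], "d": ["c"]}

def h_categoriser(attackers, iterations=100):
    value = {arg: 1.0 for arg in attackers}
    for _ in range(iterations):
        value = {arg: 1.0 / (1.0 + sum(value[b] for b in atts))
                 for arg, atts in attackers.items()}
    return value

ranking = sorted(h_categoriser(ATTACKERS).items(), key=lambda kv: -kv[1])
print(ranking)  # unattacked arguments rank highest
```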