The color coding in the schedule indicates whether all participate, whether doctoral students participate, and which seminar is next.
Please find the abstracts below the schedule and note that some of the slides are made available here, internally.
Abstracts
Egon Börger: The Abstract State Machines Method for High Level System Design and Analysis
We explain the three basic concepts of the Abstract State Machines (ASM) Method for the rigorous development of software-intensive systems. The method allows the practitioner to start with an accurate and trustworthy application-domain-centric system model and to link such a `ground model' in a well-documented and controllable way through intermediate design steps (called `refinements') to its implementation. The method has been used successfully, under industrial constraints, for the design and analysis of complex hardware/software systems. We highlight some characteristic examples and provide the simple definition of ASMs, starting from scratch. Thanks to its versatility, the ASM approach is non-monolithic and can be integrated at any development level into current software design and analysis environments.
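To give a flavour of the ASM idea of states and simultaneous updates, here is a toy sketch in Python (an illustrative interpreter, not the formalism from the talk): a machine fires all applicable rules on the same state, collects an update set, and applies it atomically.

```python
# Toy ASM step: a state is a dict of locations; a rule maps the current
# state to a list of updates (location, new_value). All rules read the
# SAME state, and the resulting update set is applied at once.
def asm_step(state, rules):
    updates = {}
    for rule in rules:
        for loc, val in rule(state):
            if loc in updates and updates[loc] != val:
                raise ValueError(f"inconsistent update set at {loc}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Example: a counter that increments while mode is 'run' and a rule
# that halts the machine once the counter reaches 2.
rules = [
    lambda s: [("ctr", s["ctr"] + 1)] if s["mode"] == "run" else [],
    lambda s: [("mode", "halt")] if s["ctr"] >= 2 else [],
]
s = {"ctr": 0, "mode": "run"}
for _ in range(4):
    s = asm_step(s, rules)
print(s)  # {'ctr': 3, 'mode': 'halt'}
```

The simultaneous-update discipline is what makes refinement steps controllable: a rule never observes the effects of another rule fired in the same step.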
Sergej Kuznetsov: On Dualization in Lattices Given by Their Irreducible Elements or by Implication Bases
Dualization of a monotone Boolean function can be defined as the transformation of the set of its minimal 1-values into the set of its maximal 0-values, or vice versa. Since dualization is equivalent to many important problems in computer and data science, including the famous problem of computing minimal transversals of a hypergraph, the quasi-polynomial dualization algorithm for Boolean lattices proposed by M. Fredman and L. Khachiyan in 1996 was an important breakthrough. It paved the way to generalizations for various classes of structures where dualization in output-subexponential time is possible, among them dualization in lattices given by ordered sets of their elements or by products of bounded-width lattices, such as chains. A well-known fact underlying Formal Concept Analysis (FCA) is that every lattice is determined, up to isomorphism, by the ordered set of its meet- (infimum-) and join- (supremum-) irreducible elements. Dualization in the case where a lattice is given by the sets of irreducible elements (i.e., as a concept lattice) has an important application: it is equivalent to the problem of enumerating minimal hypotheses, a specific type of classifier in the setting of learning from positive and negative examples. We show that dualization for this type of representation is impossible in output-polynomial time unless P = NP. However, in the important special case where the lattice is distributive, a subexponential algorithm can be proposed. We also discuss the complexity of dualization in several special cases, as well as the complexity of dualization when the lattice is given by an implication base.
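The opening definition can be made concrete with a brute-force sketch in Python (for illustration only; the algorithms in the talk avoid enumerating the whole Boolean cube): for a small monotone function, compute the minimal 1-values and the maximal 0-values directly.

```python
from itertools import product

def minimal_true_points(f, n):
    """Minimal 1-values of a monotone Boolean function f on {0,1}^n."""
    trues = [v for v in product((0, 1), repeat=n) if f(v)]
    def covers(u, v):  # u is componentwise >= v, and u != v
        return u != v and all(a >= b for a, b in zip(u, v))
    return [v for v in trues if not any(covers(v, w) for w in trues)]

def maximal_false_points(f, n):
    """Maximal 0-values of f -- the dual description."""
    falses = [v for v in product((0, 1), repeat=n) if not f(v)]
    def below(u, v):  # u is componentwise <= v, and u != v
        return u != v and all(a <= b for a, b in zip(u, v))
    return [v for v in falses if not any(below(v, w) for w in falses)]

# Example: the monotone function f(x) = (x1 and x2) or x3
f = lambda x: (x[0] and x[1]) or x[2]
print(minimal_true_points(f, 3))   # [(0, 0, 1), (1, 1, 0)]
print(maximal_false_points(f, 3))  # [(0, 1, 0), (1, 0, 0)]
```

Each representation determines the other; the hardness results in the talk concern computing one from the other without exponential blow-up.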
Torben Scheffel: Temporal Stream-based Specification Language (TeSSLa)
The Temporal Stream-based Specification Language (TeSSLa) is a specification language developed for specifying properties involving real-time constraints and the asynchronous arrival of events on input streams. A TeSSLa formula is, in the end, a function mapping a set of input streams to a set of output streams.
This talk will first give a short introduction to TeSSLa and its application domains, and then focus on theoretical results for TeSSLa and various fragments of it in relation to other well-known formalisms such as Turing machines, automata, and logics, as well as on complexity results for decision problems such as equivalence of formulas.
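To illustrate the "function from input streams to output streams" view in Python (a rough sketch of the stream model, not TeSSLa syntax; the operator names are invented for illustration): a stream is a timestamp-ordered sequence of events, and a formula transforms input streams into output streams.

```python
def deltas(stream):
    """Map a stream of (timestamp, value) events to a stream of
    (timestamp, time-since-previous-event) events."""
    out, prev_t = [], None
    for t, _ in stream:
        if prev_t is not None:
            out.append((t, t - prev_t))
        prev_t = t
    return out

def within(stream, bound):
    """Output stream: at each event (except the first), True iff the
    gap to the previous event is at most `bound` time units --
    a simple real-time property over asynchronous events."""
    return [(t, d <= bound) for t, d in deltas(stream)]

requests = [(1, "req"), (3, "req"), (10, "req")]
print(within(requests, 5))  # [(3, True), (10, False)]
```

Note that the output stream carries its own timestamps; in general the events on different streams need not be synchronized, which is the asynchronous aspect the abstract refers to.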
Teodora Nasz: Decidability of the HOM Problem in Suitable Weighted Settings
The HOM problem consists in deciding, given a tree homomorphism H and a regular tree language L represented by a tree automaton, whether H(L) is regular. Guillem Godoy and Omer Giménez proved decidability of this problem in 2010. For this, they introduced and investigated new automata classes and obtained a number of decidability results for them. We discuss their proof as well as ongoing research on generalizing the techniques to suitable semirings.
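For readers unfamiliar with tree homomorphisms, here is a small illustrative sketch in Python (a toy encoding, not from the talk): a homomorphism maps each symbol to a term over variables standing for the images of the children, and is applied to a tree recursively.

```python
# A tree is a nested tuple ("symbol", child1, ..., childN), or a bare
# string for a constant. A homomorphism maps each symbol to a term in
# which ("var", i) stands for the image of the i-th child.
def apply_hom(hom, tree):
    sym, *children = tree if isinstance(tree, tuple) else (tree,)
    images = [apply_hom(hom, c) for c in children]
    def subst(term):
        if isinstance(term, tuple) and term[0] == "var":
            return images[term[1]]
        if isinstance(term, tuple):
            return (term[0],) + tuple(subst(t) for t in term[1:])
        return term
    return subst(hom[sym])

# Example homomorphism: swap the children of every f, keep constants.
hom = {"f": ("f", ("var", 1), ("var", 0)), "a": "a", "b": "b"}
print(apply_hom(hom, ("f", "a", ("f", "a", "b"))))
# ('f', ('f', 'b', 'a'), 'a')
```

The HOM problem asks, for such an H and a regular tree language L, whether the image H(L) is again regular; homomorphisms that duplicate or erase variables are what can destroy regularity.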
Jakub Rydval: Temporal constraint satisfaction problems in least fixed point logic
Raj Dahya: Category and Complexity of Spaces in Operator Theory
Ulrich Furbach: From Theorem Proving to Cognitive Reasoning
Starting from a depiction of the state of the art in predicate-logic theorem proving, we address problems which occur when provers are applied in the wild. In particular, we discuss how automated reasoning systems can be used for natural-language question answering. Our approach to tackling common-sense reasoning benchmarks within the Corg project (http://corg.hs-harz.de) is presented, and we demonstrate how word embeddings can help with the problem of axiom selection.
Bartosz Bednarczyk: Adding Quantitative Taste to FO2 on Words
The talk will be about the two-variable fragment of First-Order Logic interpreted on words.
Motivation: There are several reasons why logicians could be interested in FO2[<]. First of all, the logic has relatively high expressive power: FO2[<]-definable languages belong to the robust fragment of star-free languages. Moreover, it characterizes exactly the Delta_2 alternation fragment of FO[<]. Second, the most interesting computational problems concerning FO2[<] are decidable and have elementary complexity. Lastly, FO2[<] has many connections with temporal logics. It is known that FO2[<] has the same expressive power as the UTL[XF, YP] fragment of Unary Temporal Logic and is expressively complete for the Propositional Interval Neighbourhood Logic PNLπ+.
About the talk: We will focus on quantitative extensions of FO2[<]. We will show that the complexity of the satisfiability problem behaves well when the logic is extended with threshold-counting or modulo-counting quantifiers. We will prove that, surprisingly, adding just simple percentage constraints already makes the logic undecidable. We will also discuss open problems and future research directions in this area.
The talk will be based on my CSL 2017 and FSTTCS 2017 papers, as well as an unpublished paper about the undecidability of FO2[<] with the Majority quantifier, which will appear on arXiv soon.
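To make the kinds of quantitative constraints concrete, here is a small Python sketch (hypothetical helper functions illustrating what the quantifiers express on finite words, not their logical syntax):

```python
# A word is a string; positions are ordered by <.
def at_least_k(word, letter, k):
    """Threshold counting: 'there exist at least k positions
    labelled with `letter`'."""
    return sum(1 for c in word if c == letter) >= k

def at_least_half(word, letter):
    """Percentage constraint: 'at least 50% of all positions are
    labelled with `letter`' -- the kind of constraint that already
    makes the logic undecidable."""
    return 2 * sum(1 for c in word if c == letter) >= len(word)

print(at_least_k("abab", "a", 2))  # True
print(at_least_half("abb", "a"))   # False
```

The contrast in the abstract is that threshold and modulo counting keep satisfiability decidable with elementary complexity, while percentage constraints relate a count to the (unbounded) word length and break decidability.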
Simon Jantsch: Farkas certificates and witnessing subsystems for probabilistic reachability constraints
This talk introduces Farkas certificates for threshold problems on reachability constraints in Markov decision processes (MDPs). These certificates are vectors satisfying certain systems of linear inequalities, which we derive by applying variants of Farkas' Lemma to the linear programs characterizing the reachability probabilities in MDPs. Farkas' Lemma is a fundamental result in linear algebra that gives, for a system of linear inequalities, a dual system such that one is satisfiable if and only if the other is not. A seemingly independent concept that has been studied in the context of probabilistic reachability is that of a witnessing subsystem. Witnessing subsystems are subsystems that witness a lower-bounded reachability property in the primary system by already having enough probability of reaching the target state. We show that a Farkas certificate with K non-zero entries can be translated into a witnessing subsystem with K states, and vice versa. This connection yields new algorithms for the computation of small witnessing subsystems that perform well in practice.
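For reference, one standard variant of Farkas' Lemma (as found in linear programming texts; the abstract applies several such variants):

```latex
% Farkas' Lemma (one standard variant):
% for $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$,
% exactly one of the following two systems has a solution:
\exists\, x \in \mathbb{R}^n:\quad Ax = b,\ x \ge 0
\qquad\text{or}\qquad
\exists\, y \in \mathbb{R}^m:\quad A^{\mathsf{T}} y \ge 0,\ b^{\mathsf{T}} y < 0.
```

A solution of either system thus certifies the unsatisfiability of the other, which is what makes the derived vectors usable as certificates.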
Christian Ikenmeyer: Geometric complexity theory and fast matrix multiplication
The talk gives an introduction to geometric complexity theory, which is an approach towards complexity lower bounds using algebraic geometry and representation theory. We will explain the basic notions, highlight the key challenges, and draw connections to the theory of fast matrix multiplication algorithms.
Lukas Grätz: Tableaux for Non-Deterministic Semantics
Non-deterministic semantics is a generalization of many-valued logics and has some interesting applications in linguistics, circuit design, and philosophical logic. A semantics for the modal logic S4 without possible worlds serves as motivation. In contrast to many-valued semantics with truth values V = {0,1,...}, non-deterministic semantics is not truth-functional, since the truth value v(A) of a formula A is not uniquely determined by the truth values of its components; e.g., given a proposition p with value 1, the value of "necessarily p" could be either 0 or 1.
The price of non-determinism is that the size of the truth tables grows very fast, so we need semantic methods with better complexity. In this talk, I will present a method to construct tableau systems for any (finite) non-deterministic semantics. It is based on sets-as-signs tableau systems for many-valued semantics. Rule construction is visualized graphically by covering cells in the non-deterministic matrix. The constructed tableau systems are shown to be sound and complete.
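The non-truth-functional flavour can be made concrete in a few lines of Python (an illustrative sketch; the necessity table below mimics the S4-inspired example from the abstract, not the exact semantics from the talk): in a non-deterministic matrix, a connective maps truth values to sets of possible values, so a formula evaluates to a set of possible truth values.

```python
# Non-deterministic matrix over truth values {0, 1}: each connective
# returns a SET of possible output values instead of a single value.
NEG = {0: {1}, 1: {0}}
# Necessity: if p is false, 'necessarily p' is false; if p is true,
# 'necessarily p' may be either 0 or 1 -- the non-deterministic cell.
BOX = {0: {0}, 1: {0, 1}}

def eval_box(values_of_p):
    """Possible values of 'necessarily p', given the set of possible
    values of p: the union over all cells of the matrix."""
    return set().union(*(BOX[v] for v in values_of_p))

print(eval_box({1}))  # {0, 1}: not determined by v(p) alone
print(eval_box({0}))  # {0}
```

Since v("necessarily p") is a set rather than a single value, exhaustive truth-table checking blows up, which is the complexity problem the sets-as-signs tableau construction addresses.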
Danny Richter & Sven Dziadek: Experience report: research stay
How to prepare for, travel to, live during, and work at a research stay: a short report and hints for your own planning. And of course: we answer questions.
Sam van Gool: Model completeness in logical algebra
In one sentence: what is an existentially closed Heyting algebra and what does it have to do with automata?
Logical systems of deduction often resemble algebraic systems of equation resolution. The simplest instance of this resemblance is the fact that classical propositional logic essentially boils down to studying algebras over the two-element field. When one changes the logical deduction system, the algebraic structures become less simple, and more interesting; this is where one enters the world of Heyting algebras, modal algebras, and generalizations of such.
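The remark about the two-element field can be made concrete (a small illustration, not from the talk): over GF(2), the classical connectives become polynomial arithmetic, which is the simplest instance of the logic-as-algebra correspondence.

```python
# Classical connectives as arithmetic over the two-element field GF(2):
AND = lambda x, y: (x * y) % 2        # conjunction is multiplication
XOR = lambda x, y: (x + y) % 2        # exclusive or is addition
NOT = lambda x: (x + 1) % 2           # negation is adding 1
OR  = lambda x, y: XOR(XOR(x, y), AND(x, y))  # x + y + x*y

# A De Morgan law, checked by exhausting the field:
assert all(NOT(AND(x, y)) == OR(NOT(x), NOT(y))
           for x in (0, 1) for y in (0, 1))
```

For intuitionistic or modal logics no such finite functional semantics exists, which is where Heyting and modal algebras, and the model-theoretic questions of the talk, enter.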
The aim of our work here is to gain a better understanding of such logical-algebraic structures by studying them from the perspective of model theory. In the model-theoretic study of usual algebra, the concept of model completeness plays a central role: it provides the correct abstraction of the concept of an algebraically closed field. We show that model completeness also has an important role to play in logical algebra.
In particular, we will discuss two cases of model completeness: intuitionistic logic, and linear temporal logic. In the former, model completeness can be seen to be closely related to a certain interpolation property of the logic, originally established by Pitts. In the latter, automata on infinite words are the technical ingredient that leads to model completeness.