The sixth QuantLA workshop takes place in Meißen, near Dresden, in the Dorint Parkhotel Meißen from the 28th to the 31st of August. This year's guest lecturers are Laure Daviaud from the University of Warwick and Sylvain Schmitz from ENS de Cachan. The schedule and abstracts of all talks can be found below.
The following describes the journey from Dresden to the hotel, since any route to Meißen passes through Dresden. Participants may also meet at Dresden Hauptbahnhof or Bahnhof Neustadt at an agreed time, in small or large groups, in order to take the remaining trip together.
The closest train station to the Dorint Parkhotel Meißen is Meißen main station, which can be reached from Dresden Bahnhof Neustadt and Dresden main station using the regional train S1. From Meißen main station it is a 10-minute walk to the hotel. Check https://www.dvb.de/en-gb/ for train (and bus) schedules in Dresden and the surrounding area (including Meißen). As an alternative to the ten-minute walk, you can take a bus from Meißen main station to the stop Gustav-Graf-Straße (a 5-minute bus ride) and then walk to the hotel (4 minutes).
The social event of the workshop will take place on Wednesday, August 29, 2018, in the afternoon. Note that on Wednesday, the dinner will be part of the social event and take place in the Domkeller in Meißen.
Maximilian Pensel: Quantification for Rational and Relevant Semantics in Defeasible Description Logics
In many scenarios involving human cognition and the evolution of knowledge, classical monotonic Description Logics (DLs) do not provide intuitive reasoning results. Expressing presumptive knowledge while still obtaining useful results in light of contradicting special cases requires nonmonotonic semantics. In Defeasible Description Logics (DDLs), default assumptions can be modelled using defeasible implications: statements that can be disregarded to circumvent a contradiction. While there are several approaches investigating nonmonotonic semantics for DDLs, some of the more recently introduced methods are insufficient w.r.t. DL quantification. As a result of the underlying construction in these approaches, defeasible information is fully neglected for quantified concepts, and thus many expected entailments are missing.
This PhD project investigates these severe shortcomings of otherwise well-behaved, promising approaches to nonmonotonic reasoning in DLs. The main contribution is to find and formalise a stronger semantics that is capable of obtaining the expected entailments without sacrificing the axiomatisation that ensures the good behaviour of the previous approaches. In this talk I will present our nonmonotonic semantics for the relatively inexpressive, yet practically useful, low-complexity description logic (defeasible) EL_bot. We extend classical canonical models, a useful feature of the EL family of DLs, into typicality models that allow conclusions to be drawn from defeasible information, even for quantified concepts. We showed that this new method improves on rational and relevant semantics for DDLs, i.e. two semantics of different strength, rooted in the preferential approach for propositional logic by Kraus, Lehmann and Magidor (KLM). I will conclude by describing the plans to finalise my project, including but not limited to: answering open questions (e.g. on computational complexity), axiomatising the new semantics in accord with the KLM postulates, lifting the results to more expressive DLs (potentially involving aspects of uncertainty), and discussing more practical inference mechanisms such as conjunctive query answering in DDLs.
Lukas Schweizer: Fixed-Domain Reasoning: Recent Extensions
I will report on two recent aspects that we investigated in the context of reasoning in DLs under the fixed-domain semantics.
We recently started to work on preferences over models, beginning with "minimal" models: models that contain less information than others are preferred over models that contain more information than necessarily required. This is in the spirit of McCarthy's circumscription, and I will show, for special cases of circumscribed DL knowledge bases, how to compute such models using a rewriting into answer-set programming.
I will then introduce the notion of justifications, i.e. explanations in the form of minimal subsets that entail a certain axiom. We compare three approaches to computing justifications: one using standard technology employing an axiomatization of the fixed-domain semantics, one using our dedicated fixed-domain reasoner Wolpertinger in combination with standard justification computation technology, and one where the problem is encoded entirely into answer-set programming.
Pavlos Marantidis: Extending Matching in Description Logics
Matching concept descriptions against concept patterns was introduced as a non-standard inference task in Description Logics, motivated by applications in knowledge bases. It was first investigated for the DL FL0, for which a polynomial-time algorithm was obtained. In this talk, we will present recent results that extend matching in FL0 in two directions. First, we investigate approximate matching, which increases the recall of the method by checking whether the input description and pattern can "almost" match, rather than become equivalent. The meaning of "almost" is formalised using distance measures between concepts. Second, we examine matching in the presence of TBoxes, i.e., background knowledge in the form of concept inclusions. It is well known that the complexity of subsumption reasoning rises from P to ExpTime when TBoxes are considered, and we prove that the same is true for matching. Finally, we present some preliminary ideas on approximate matching in the presence of TBoxes.
Laure Daviaud: About the equivalence problem for weighted automata
In this talk, I will review some classical results about the (un)decidability of the equivalence problem for weighted automata (mainly for probabilistic automata and automata over the tropical semiring). In particular, I would like to present some of the techniques used in the proofs of these results (depending on time):
- reduction from the halting problem of two-counter machines to prove the undecidability of the equivalence problem both for automata over the tropical semiring and probabilistic automata;
- Simon's proof of the decidability of boundedness of distance automata.
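To make the objects of the talk concrete, here is a minimal sketch (our own toy automaton, not taken from the talk) of how the behaviour of a weighted automaton over the tropical semiring (min, +) is computed: the weight of a word is the minimum over all accepting runs of the sum of transition weights, and the equivalence problem asks whether two such automata assign the same weight to every word.

```python
# Toy weighted automaton over the tropical semiring (min, +).
# States are 0 and 1; transitions[(p, a)] lists pairs (q, weight).
INF = float("inf")

transitions = {
    (0, "a"): [(0, 1), (1, 0)],
    (1, "a"): [(1, 2)],
    (0, "b"): [(0, 0)],
    (1, "b"): [(1, 1)],
}
initial = {0: 0}   # initial states with their entry weights
final = {1: 0}     # final states with their exit weights

def weight(word):
    """Weight of `word`: min over runs of the sum of transition weights."""
    current = dict(initial)
    for letter in word:
        nxt = {}
        for p, w in current.items():
            for q, wt in transitions.get((p, letter), []):
                nxt[q] = min(nxt.get(q, INF), w + wt)
        current = nxt
    return min((w + final[q] for q, w in current.items() if q in final),
               default=INF)
```

For instance, weight("a") is 0 (via the transition into state 1), while a word with no run into a final state gets weight infinity, the neutral element of min.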
Kevin Stier: Tree Substitution Grammars
We introduce Tree Substitution Grammars and illustrate their expressive power by giving a variety of examples for languages generated by them. Afterwards we show that the Tree Substitution Languages are a subset of the well-known Regular Tree Languages and that Tree Substitution Languages are not closed under union. We are then able to consider unions of Tree Substitution Languages and their expressive power. In particular, we show that Tree Substitution Languages form an infinite hierarchy under union.
Jakob Piribauer: Conditional and Partial Expectations in Markov Decision Processes
In the classical stochastic shortest path problem, the task is to find a scheduler optimising the expected accumulated weight before reaching a goal state in a weighted probabilistic transition system, in our case a Markov decision process. We consider two variants that drop the classical requirement that the goal state has to be reached almost surely. The first variant asks to optimise the conditional expectation of the accumulated weight under the condition that the goal is reached. In the second variant, weight 0 is assigned to paths not reaching the goal state and the expectation of the resulting random variable is to be optimised.
After giving an overview of recent results in the setting of non-negative weights, we will demonstrate new problems arising in the context of integer weights and present first results in this more general setting.
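The two variants can be contrasted on a hypothetical two-state example of our own (not from the talk): from the initial state, with probability 0.5 the goal is reached accumulating weight 2, and otherwise a non-goal sink is reached accumulating weight 5.

```python
# Toy Markov chain (no nondeterminism): from the initial state,
# prob. 0.5 -> goal with accumulated weight 2,
# prob. 0.5 -> sink with accumulated weight 5 (goal never reached).
p_goal = 0.5
w_goal = 2
w_sink = 5

# Partial expectation: paths that miss the goal contribute weight 0,
# so the weight 5 along the sink path is simply discarded.
partial = p_goal * w_goal

# Conditional expectation: expected weight given that the goal is reached,
# i.e. the partial expectation renormalised by the reachability probability.
conditional = (p_goal * w_goal) / p_goal
```

Here the partial expectation is 1 while the conditional expectation is 2, illustrating that the two variants generally yield different values and hence different optimal schedulers.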
Sylvain Schmitz: On the Complexity of VAS Reachability
The decidability of the reachability problem in vector addition systems is a landmark result in theoretical computer science, with applications in an array of fields ranging from program verification to data logics. I will present the main arguments of the first complexity upper bound for this problem, obtained together with Leroux by analysing the decomposition algorithm invented by Mayr and successively refined by Kosaraju and Lambert.
Danny Richter: Bounded Satisfiability and Model Checking for MTL
In Metric Temporal Logic, a popular extension of LTL, temporal operators are augmented with interval constraints to specify quantitative properties. For this logic, depending on the type of word and state machine models, the satisfiability as well as the model checking problem are either undecidable or have a very high computational complexity. By adding a bound on the length of the considered words (resp. computations), we gain decidability and better complexity bounds compared to results for the corresponding problems without a length-bound.
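As a small illustration of the interval-constrained operators (our own example, not taken from the talk), consider the "finally within [a, b]" operator over a finite word in discrete time, where positions serve as time stamps: F_[a,b] φ holds at position i iff φ holds at some position j with i + a ≤ j ≤ i + b.

```python
# Discrete-time evaluation of the MTL operator F_[a,b] on a finite word.
# `pred` plays the role of the subformula phi, evaluated letter-wise.
def finally_within(word, pred, a, b, i=0):
    """True iff pred holds at some position j with i+a <= j <= i+b."""
    return any(pred(word[j])
               for j in range(i + a, min(i + b, len(word) - 1) + 1))
```

On a bounded word the range of positions to inspect is finite, which is the source of the decidability and the improved complexity bounds in the length-bounded setting.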
Luisa Herrmann: Weighted Automata with Storage on Words and Trees
In this talk we study weighted automata with storage on words and trees as well as particular instantiations of them. After giving an overview on results obtained so far, we will show a Medvedev characterization for weighted tree languages. For this, we introduce representable weighted tree languages, a language class that is built up using particular operations on weighted tree languages. We prove that restricted representable weighted tree languages are exactly those weighted tree languages that can be recognized by weighted tree automata. Moreover, we investigate the relation between unrestricted representable weighted tree languages and weighted monadic second-order logic. In the last part of this talk, we will give an outlook on an extension of a homomorphic closure property to weighted automata with storage.
Caterina Viola: Piecewise linear functions and infinite domain valued constraint satisfaction problems
Valued constraint satisfaction problems (VCSPs) are a class of combinatorial optimisation problems.
Recently, the computational complexity of all VCSPs for finite sets of cost functions over finite domains has been classified completely. Many natural optimisation problems, however, cannot be formulated as VCSPs over a finite domain. One example is the famous linear programming problem over the rational numbers, where the task is to optimise a linear function subject to linear inequalities. This can be modelled as a VCSP over ℚ by allowing unary linear cost functions and cost functions of higher arity to express the crisp (i.e. hard) linear inequalities.
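The modelling idea can be sketched in code (a minimal illustration of our own, with a hypothetical toy instance): crisp linear inequalities become cost functions with values in {0, ∞}, and the linear objective is split into unary linear cost functions.

```python
# Modelling a linear program over the rationals as a VCSP instance:
# minimise x + 2y subject to x + y >= 1, x >= 0, y >= 0.
INF = float("inf")

def crisp(pred):
    """Turn a hard linear inequality into a {0, inf}-valued cost function."""
    return lambda *args: 0.0 if pred(*args) else INF

# Each entry is (cost function, scope of variables it is applied to).
cost_functions = [
    (lambda x: float(x), ["x"]),                    # unary linear cost
    (lambda y: 2.0 * y, ["y"]),                     # unary linear cost
    (crisp(lambda x, y: x + y >= 1), ["x", "y"]),   # crisp constraint
    (crisp(lambda x: x >= 0), ["x"]),               # crisp constraint
    (crisp(lambda y: y >= 0), ["y"]),               # crisp constraint
]

def total_cost(assignment):
    """Cost of an assignment: infinite if any crisp constraint is violated."""
    return sum(f(*(assignment[v] for v in scope))
               for f, scope in cost_functions)
```

An assignment violating a crisp inequality gets cost infinity, so optimising the VCSP objective coincides with solving the original linear program.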
I will present our work that initiates the systematic investigation of infinite-domain VCSPs by studying the complexity of VCSPs for classes of piecewise linear cost functions, i.e., cost functions admitting a first-order definition over (ℚ; +, <, 1). Such VCSPs can be solved in polynomial time when the cost functions are additionally submodular, and this is indeed a maximally tractable class: adding any cost function that is not submodular leads to an NP-hard VCSP (CSL 2018).
I will close the presentation with an overview of our ongoing project, whose aim is to characterise the expressive power of such cost functions in terms of fractional polymorphisms.
Johannes Greiner: On the complexity of CSPs of generic combinations
In order to further understand the complexity of constraint satisfaction problems (CSPs) on infinite domains, we look at "generic combinations" of structures. The CSP of a generic combination can be understood as the combination of the CSPs of less expressive structures. In this talk we briefly present our hardness result from IJCAR'18 for generic combinations. Furthermore, we give an overview of other approaches and preliminary findings that we are currently investigating.
Shima Asaadi: A New Semantic Relatedness Dataset for Evaluating Compositional Distributional Models
Compositional Distributional Semantic Models (CDSMs) represent the meaning of compositional expressions in vector and matrix space. In these models, a function is proposed to compute the meaning representation of an expression by composing the representations of its individual components. To investigate the performance of CDSMs on meaning composition, different evaluation methods have been introduced; pairwise semantic similarity and semantic relatedness are among the dominant ones. Since semantic relatedness covers any kind of lexical association between expressions, including similarity, we create an English dataset for the semantic relatedness task to evaluate CDSMs. We use the best-worst scaling technique to annotate the data, which eliminates known problems in the creation of past datasets. The new dataset will be used to analyse the performance of a wide range of CDSMs, such as compositional matrix-space models and vector-space models. Moreover, since the dataset has been created via crowdsourcing, we will study the human judgments on semantic relatedness for different relations between expressions, such as the hypernym-hyponym relation.
Sven Dziadek: Weighted ω-pushdown automata
Recently, weighted ω-pushdown automata have been introduced by Droste, Ésik, and Kuich. This type of automaton has access to a stack and models quantitative aspects of infinite words. We propose a different automaton model, derived from automata on infinite nested words. It behaves as a normal form for ω-pushdown automata and has several advantages over the more general model. We show that this new model is equivalent to the more general one in the unweighted case. The equivalence is believed to hold also in the weighted case, and the proof is currently being elaborated.
As a first result, we prove a Nivat theorem for these automata. This theorem states that the behaviors of weighted ω-pushdown automata are precisely the projections of very simple ω-series restricted to ω-context-free languages.
The second result is an equivalent logic for the unweighted counterpart of the new automaton model, based on the logic invented by Lautemann, Schwentick, and Thérien for context-free languages over finite words. For the weighted case, a logic expressively equivalent to weighted ω-pushdown automata is currently in development.
Gustav Grabolle: A Nivat theorem for weighted alternating finite automata
There are several intuitive ways of applying the concept of alternation to a weighted setting. Recently, it was shown that all of these characterize the same class of quantitative languages and that weighted alternating finite automata are more expressive than weighted finite automata.
In this talk I am going to give a further characterization in the form of a Nivat result for weighted alternating finite automata. I will show that quantitative languages recognized by weighted alternating finite automata are not closed under inverses of non-deleting homomorphisms. Nevertheless, I will prove a new Nivat-like result where tree transducers instead of monoid homomorphisms are used. This result generalizes the corresponding result for weighted finite automata and ties alternating automata to the field of tree transducers.
Antoine Mottet: The Containment Problem for Unambiguous Register Automata
We study the containment problem for unambiguous register automata.
Register automata, introduced by Kaminski and Francez in 1994, extend finite automata with a finite set of registers.
Register automata can process data words, i.e., finite words over an infinite data domain; the registers are used to store data coming from the input word for later comparisons. A register automaton is unambiguous if it has at most one accepting run for every input data word. The class of languages recognized by these automata is not closed under complement, thereby preventing a classical reduction of the containment problem to the emptiness problem for register automata. I will present a proof that the containment problem for unambiguous register automata is decidable, based on a reduction to a reachability problem on some finite graph.
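The register mechanism can be sketched with a toy one-register automaton of our own (not from the talk); written deterministically, it has exactly one run per input and is therefore trivially unambiguous. It accepts the data words in which the first datum occurs again at a later position.

```python
# Toy one-register automaton over data words.
# States: "q0" (nothing read), "q1" (first datum stored), "qf" (accepting).
# The single register stores the first datum for later equality comparisons.
def accepts(word):
    """Accept iff some later datum equals the first datum of `word`."""
    state, register = "q0", None
    for d in word:
        if state == "q0":
            state, register = "q1", d        # store the first datum
        elif state == "q1" and d == register:
            state = "qf"                     # first datum seen again
    return state == "qf"
```

Because the data domain is infinite, such languages are not regular in the classical sense, yet the automaton only ever needs to remember finitely many data values.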
Christian Alrabbaa: FO rewriting of conjunctive queries for DL-Lite with Concrete Domains
Results of queries over a database are obtained from the plain information that it provides. Ontologies, on the other hand, contribute additional knowledge that can be used to infer data that are not explicitly stored in the database; answering queries using this ontological knowledge is what we call ontology-based query answering (OBQA). Query rewriting is a well-known OBQA technique, which compiles the ontology into the query so that it can be evaluated directly over the database. In our setting, we consider conjunctive queries, and the ontology language we deal with is a dialect of DL-Lite extended with so-called concrete domains. Concrete domains enrich the expressivity of the ontology, allowing expressions that involve string or numerical values and predicates over these values. Previous work on the topic provides a theoretical approach to constructing the rewriting, but it is not feasible to implement due to the large number of concrete domain implications that need to be considered for each query. We present a novel technique that saturates the TBox and extends the process of rewriting conjunctive queries with concrete domains to obtain a practical implementation.
Markus Ulbricht: Inconsistency in non-monotonic logics
Inconsistency is an omnipresent phenomenon in logical accounts of knowledge representation and reasoning (KR). Classical logics usually suffer from the principle of explosion which renders reasoning meaningless, as everything can be derived from inconsistent theories. Therefore, reasoning under inconsistency is an important research area in KR. In order to analyze inconsistency, minimal inconsistent subsets of knowledge bases play an important role in classical logics, most notably for repair and inconsistency measurement. It turns out that for non-monotonic reasoning, minimal inconsistent subsets are not as important since consistent knowledge bases might contain inconsistent subsets.
Our investigation of inconsistency is based on a more appropriate notion, called strong inconsistency. We show that, in an arbitrary logic, monotonic or not, minimal strongly inconsistent subsets play a similar role to minimal inconsistent subsets in classical reasoning. In particular, the well-known classical duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics if the strong notion of inconsistency is used.
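The classical duality can be made concrete with a brute-force sketch on a toy propositional knowledge base of our own (literals, with a set consistent iff it contains no complementary pair); this is the classical monotonic case that the talk's notion of strong inconsistency generalizes.

```python
# Brute-force computation of minimal inconsistent and maximal consistent
# subsets of a knowledge base of literals ("~" marks negation).
from itertools import combinations

def consistent(subset):
    """A set of literals is consistent iff no complementary pair occurs."""
    return not any(("~" + lit) in subset for lit in subset)

def minimal_inconsistent(kb):
    mis = []
    for r in range(1, len(kb) + 1):
        for sub in combinations(sorted(kb), r):
            s = set(sub)
            if not consistent(s) and not any(m < s for m in mis):
                mis.append(s)
    return mis

def maximal_consistent(kb):
    subsets = [set(sub) for r in range(len(kb) + 1)
               for sub in combinations(sorted(kb), r)
               if consistent(set(sub))]
    return [s for s in subsets if not any(s < t for t in subsets)]

kb = {"p", "~p", "q"}
# minimal_inconsistent(kb) yields [{"p", "~p"}]; removing one element of
# each minimal inconsistent subset (a minimal hitting set) leaves exactly
# the maximal consistent subsets {"p", "q"} and {"q", "~p"}.
```

In a non-monotonic logic this correspondence breaks down for plain inconsistent subsets, which is what motivates the strong notion discussed in the talk.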
Besides general results that can be stated for arbitrary logics, we mainly focus on answer set programming and abstract argumentation frameworks. Our analysis involves consideration of the computational complexity of various related reasoning and function problems. A quantitative approach to analysing inconsistency is given by the field of inconsistency measurement, which investigates functions that assign real numbers to (usually classical) theories, with the intuitive meaning that larger values indicate more severe inconsistency. We also briefly discuss this issue.