Chair of Automata Theory at the Institute of Theoretical Computer Science of the Faculty of Computer Science of the Technische Universität Dresden

# Publications

The list of publications is also available as a PDF document. There is also a list of our technical reports and theses.

- 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, 2002, 2001, 2000, 1999, 1998, 1997, 1996, 1995, 1994, 1993, 1992, 1991, 1990, 1989, 1988, 1987, 1986, 1985

## 2017

Franz Baader: **A New Description Logic with Set Constraints and Cardinality Constraints on Role Successors**. In Clare Dixon and Marcelo Finger, editors, *Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)*, volume 10483 of *Lecture Notes in Computer Science*, pages 43–59. Brasília, Brazil, Springer-Verlag, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce a new description logic that extends the well-known logic ALCQ by allowing the statement of constraints on role successors that are more general than the qualified number restrictions of ALCQ. To formulate these constraints, we use the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA), in which one can express Boolean combinations of set constraints and numerical constraints on the cardinalities of sets. Though our new logic is considerably more expressive than ALCQ, we are able to show that the complexity of reasoning in it is the same as in ALCQ, both without and with TBoxes.
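For illustration (an example of our own, not taken from the paper), a QFBAPA formula is a Boolean combination of set constraints and numerical constraints on set cardinalities, for instance:

```latex
% Our own illustrative QFBAPA formula: a Boolean combination of a set
% (inclusion) constraint and numerical constraints on set cardinalities.
A \cap B \subseteq C \;\wedge\; \bigl( |A \cup B| \geq 2\,|C| \;\vee\; |A| = 0 \bigr)
```

In the new logic, such formulas constrain the sets of role successors of an individual, generalizing the qualified number restrictions of ALCQ.
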
Franz Baader, Daniel Borchmann, and Adrian Nuradiansyah: **Preliminary Results on the Identity Problem in Description Logic Ontologies**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics, Montpellier, France, July 18-21, 2017*, volume 1879 of *CEUR Workshop Proceedings*. CEUR-WS.org, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

The work in this paper is motivated by a privacy scenario in which the identity of certain persons (represented as anonymous individuals) should be hidden. We assume that factual information about known individuals (i.e., individuals whose identity is known) and anonymous individuals is stored in an ABox and general background information is expressed in a TBox, where both the TBox and the ABox are publicly accessible. The identity problem then asks whether one can deduce from the TBox and the ABox that a given anonymous individual is equal to a known one. Since this would reveal the identity of the anonymous individual, such a situation needs to be avoided. We first observe that not all Description Logics (DLs) are able to derive any such equalities between individuals, and thus the identity problem is trivial in these DLs. We then consider DLs with nominals, number restrictions, or function dependencies, in which the identity problem is non-trivial. We show that in these DLs the identity problem has the same complexity as the instance problem. Finally, we consider an extended scenario in which users with different rôles can access different parts of the TBox and ABox, and we want to check whether, by a sequence of rôle changes and queries asked in each rôle, one can deduce the identity of an anonymous individual.

Franz Baader, Daniel Borchmann, and Adrian Nuradiansyah: **The Identity Problem in Description Logic Ontologies and Its Application to View-Based Information Hiding**. In Zhe Wang, Anni-Yasmin Turhan, Kewen Wang, and Xiaowang Zhang, editors, *Semantic Technology - 7th Joint International Conference, JIST 2017, Gold Coast, QLD, Australia, November 10-12, 2017, Proceedings*, pages 102–117, 2017.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The work in this paper is motivated by a privacy scenario in which the identity of certain persons (represented as anonymous individuals) should be hidden. We assume that factual information about known individuals (i.e., individuals whose identity is known) and anonymous individuals is stored in an ABox and general background information is expressed in a TBox, where both the TBox and the ABox are publicly accessible. The identity problem then asks whether one can deduce from the TBox and the ABox that a given anonymous individual is equal to a known one. Since this would reveal the identity of the anonymous individual, such a situation needs to be avoided. We first observe that not all Description Logics (DLs) are able to derive any such equalities between individuals, and thus the identity problem is trivial in these DLs. We then consider DLs with nominals, number restrictions, or function dependencies, in which the identity problem is non-trivial. We show that in these DLs the identity problem has the same complexity as the instance problem. Finally, we consider an extended scenario in which users with different rôles can access different parts of the TBox and ABox, and we want to check whether, by a sequence of rôle changes and queries asked in each rôle, one can deduce the identity of an anonymous individual.

Franz Baader, Stefan Borgwardt, Patrick Koopmann, Ana Ozaki, and Veronika Thost: **Metric Temporal Description Logics with Interval-Rigid Names**. In Clare Dixon and Marcelo Finger, editors, *Proceedings of the 11th International Symposium on Frontiers of Combining Systems (FroCoS'17)*, volume 10483 of *Lecture Notes in Computer Science*, pages 60–76, 2017.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.

Franz Baader, Stefan Borgwardt, Patrick Koopmann, Ana Ozaki, and Veronika Thost: **Metric Temporal Description Logics with Interval-Rigid Names (Extended Abstract)**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics (DL'17)*, volume 1879 of *CEUR Workshop Proceedings*. Montpellier, France, CEUR-WS, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

In contrast to qualitative linear temporal logics, which can be used to state that some property will eventually be satisfied, metric temporal logics allow one to formulate constraints on how long it may take until the property is satisfied. While most of the work on combining Description Logics (DLs) with temporal logics has concentrated on qualitative temporal logics, there has recently been a growing interest in extending this work to the quantitative case. In this paper, we complement existing results on the combination of DLs with metric temporal logics over the natural numbers by introducing interval-rigid names. This makes it possible to state that elements in the extension of certain names stay in this extension for at least some specified amount of time.

Franz Baader, Stefan Borgwardt, and Marcel Lippmann: **Query Rewriting for DL-Lite with n-ary Concrete Domains**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, pages 786–792, 2017.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

We investigate ontology-based query answering (OBQA) in a setting where both the ontology and the query can refer to concrete values such as numbers and strings. In contrast to previous work on this topic, the built-in predicates used to compare values are not restricted to being unary. We introduce restrictions on these predicates and on the ontology language that allow us to reduce OBQA to query answering in databases using the so-called combined rewriting approach. Though at first sight our restrictions are different from the ones used in previous work, we show that our results strictly subsume some of the existing first-order rewritability results for unary predicates.
Franz Baader, Stefan Borgwardt, and Marcel Lippmann: **Query Rewriting for DL-Lite with n-ary Concrete Domains (Abstract)**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics (DL'17)*, volume 1879 of *CEUR Workshop Proceedings*. Montpellier, France, CEUR-WS, 2017.

BibTeX entry
Paper (PDF)

Franz Baader, Stefan Borgwardt, and Rafael Peñaloza: **Decidability and Complexity of Fuzzy Description Logics**. *Künstliche Intelligenz*, 31(1):85–90, 2017. Project report.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy description logics (FDLs) have been introduced to represent concepts for which membership cannot be determined in a precise way, i.e., where instead of providing a strict border between being a member and not being a member, it is more appropriate to model a gradual change from membership to non-membership. First approaches for reasoning in FDLs were based either on a reduction to reasoning in classical description logics (DLs) or on adaptations of reasoning approaches for DLs to the fuzzy case. However, it turned out that these approaches in general do not work if expressive terminological axioms, called general concept inclusions (GCIs), are available in the FDL. The goal of this project was a comprehensive study of the border between decidability and undecidability for FDLs with GCIs, as well as determining the exact complexity of the decidable logics. As a result, we have provided an almost complete classification of the decidability and complexity of FDLs with GCIs.

Franz Baader and Andreas Ecke: **Extending the Description Logic ALC with More Expressive Cardinality Constraints on Concepts**. In *GCAI 2017. 3rd Global Conference on Artificial Intelligence*, volume 50 of *EPiC Series in Computing*, pages 6–19. EasyChair, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

We extend the terminological formalism of the well-known description logic ALC from concept inclusions (CIs) to more general constraints formulated in the quantifier-free fragment of Boolean Algebra with Presburger Arithmetic (QFBAPA). In QFBAPA one can formulate Boolean combinations of inclusion constraints and numerical constraints on the cardinalities of sets. Our new formalism extends, on the one hand, so-called cardinality restrictions on concepts, which were introduced two decades ago, and on the other hand the recently introduced statistical knowledge bases. Though considerably more expressive, our formalism has the same complexity (NExpTime) as cardinality restrictions on concepts. We also introduce a restricted version of our formalism for which the complexity is ExpTime. This yields the previously unknown exact complexity of the consistency problem for statistical knowledge bases.

Franz Baader and Oliver Fernández Gil: **Decidability and Complexity of Threshold Description Logics Induced by Concept Similarity Measures**. In *Proceedings of the 32nd Annual ACM Symposium on Applied Computing, Marrakech, Morocco, April 4-6, 2017*, pages 983–988. ACM, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

In a recent research paper, we have proposed an extension of the light-weight Description Logic (DL) EL in which concepts can be defined in an approximate way. For this purpose, the notion of a graded membership function m, which instead of a Boolean membership value 0 or 1 yields a membership degree from the interval [0,1], was introduced. Threshold concepts can then, for example, require that an individual belongs to a concept C with degree at least 0.8. Reasoning in the threshold DL τEL(m) obtained this way of course depends on the employed graded membership function m. The paper defines a specific such function, called deg, and determines the exact complexity of reasoning in τEL(deg). In addition, it shows how concept similarity measures (CSMs) satisfying certain properties can be used to define graded membership functions m_∼, but it does not investigate the complexity of reasoning in the induced threshold DLs τEL(m_∼). In the present paper, we start filling this gap. In particular, we show that computability of ∼ implies decidability of τEL(m_∼), and we introduce a class of CSMs for which reasoning in the induced threshold DLs has the same complexity as in τEL(deg).

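The threshold semantics sketched in the abstract can be written out as follows (our own notational sketch, not a verbatim definition from the paper):

```latex
% Our notational sketch: the threshold concept C_{\geq t} collects all
% domain elements whose graded membership degree in C is at least t.
(C_{\geq t})^{\mathcal{I}}
  \;=\;
  \{\, d \in \Delta^{\mathcal{I}} \;\mid\; m^{\mathcal{I}}(d, C) \geq t \,\}
% e.g., with t = 0.8 this requires membership degree at least 0.8.
```
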
Franz Baader, Oliver Fernández Gil, and Pavlos Marantidis: **Approximation in Description Logics: How Weighted Tree Automata Can Help to Define the Required Concept Comparison Measures in FL_{0}**. In Frank Drewes, Carlos Martín-Vide, and Bianca Truthe, editors, *Proceedings of the 11th International Conference on Language and Automata Theory and Applications (LATA 2017)*, volume 10168 of *Lecture Notes in Computer Science*, pages 3–26. Springer, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Recently introduced approaches for relaxed query answering, approximately defining concepts, and approximately solving unification problems in Description Logics have in common that they are based on the use of concept comparison measures together with a threshold construction. In this paper, we will briefly review these approaches, and then show how weighted automata working on infinite trees can be used to construct computable concept comparison measures for FL0 that are equivalence invariant w.r.t. general TBoxes. This is a first step towards employing such measures in the mentioned approximation approaches.
Franz Baader, Ian Horrocks, Carsten Lutz, and Ulrike Sattler: **An Introduction to Description Logic**. Cambridge University Press, 2017.

BibTeX entry

#### Abstract:

Description logics (DLs) have a long tradition in computer science and knowledge representation, being designed so that domain knowledge can be described and so that computers can reason about this knowledge. DLs have recently gained increased importance since they form the logical basis of widely used ontology languages, in particular the web ontology language OWL. Written by four renowned experts, this is the first textbook on description logics. It is suitable for self-study by graduates and as the basis for a university course. Starting from a basic DL, the book introduces the reader to their syntax, semantics, reasoning problems and model theory and discusses the computational complexity of these reasoning problems and algorithms to solve them. It then explores a variety of reasoning techniques, knowledge-based applications and tools and it describes the relationship between DLs and OWL.
Franz Baader, Patrick Koopmann, and Anni-Yasmin Turhan: **Using Ontologies to Query Probabilistic Numerical Data**. In *Frontiers of Combining Systems: 11th International Symposium*, volume 10483 of *Lecture Notes in Computer Science*, pages 77–94. Springer International Publishing, 2017.

BibTeX entry
Paper (PDF)
Extended technical report (PDF)
DOI
(The final publication is available at link.springer.com)
©Springer International Publishing

#### Abstract:

We consider ontology-based query answering in a setting where some of the data are numerical and of a probabilistic nature, such as data obtained from uncertain sensor readings. The uncertainty for such numerical values can be more precisely represented by continuous probability distributions than by discrete probabilities for numerical facts concerning exact values. For this reason, we extend existing approaches using discrete probability distributions over facts by continuous probability distributions over numerical values. We determine the exact (data and combined) complexity of query answering in extensions of the well-known description logics EL and ALC with numerical comparison operators in this probabilistic setting.

Franz Baader and Pavlos Marantidis: **Language equations for approximate matching in the Description Logic FL0**. In Adrià Gascón and Christopher Lynch, editors, *Proceedings of the 31st International Workshop on Unification (UNIF'17)*, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Both matching and unification in the Description Logic FL0 can be reduced to solving certain formal language equations. In previous work, we have extended unification in FL0 to approximate unification, and have shown that approximate unification can be reduced to approximately solving language equations. An approximate solution of a language equation need not make the languages on the left- and right-hand side of the equation equal, but only close w.r.t. a given distance function. In the present paper, we consider approximate matching. We show that, for a large class of distance functions, approximate matching is in NP. We then consider a particular distance function d1(K,L) = 2^{-n}, where n is the length of the shortest word in the symmetric difference of the languages K, L, and show that w.r.t. this distance function approximate matching is polynomial.
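As a minimal sketch of the distance function d1 from the abstract (our own code, assuming finite languages given as Python sets of strings; the paper works with possibly infinite languages, where the shortest distinguishing word would be found via automata constructions):

```python
def d1(k, l):
    """Distance d1(K, L) = 2^(-n), where n is the length of the shortest
    word in the symmetric difference of K and L; 0 if K = L.

    Illustrative only: k and l are finite languages as sets of strings."""
    diff = set(k) ^ set(l)  # symmetric difference K Δ L
    if not diff:
        return 0.0  # equal languages are at distance 0
    n = min(len(w) for w in diff)  # shortest distinguishing word
    return 2.0 ** (-n)
```

For example, `d1({"a"}, {"b"})` yields 0.5, since the shortest word in the symmetric difference has length 1.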

Stephan Böhme and Thomas Kühn: **Reasoning on Context-Dependent Domain Models**. In Zhe Wang, Anni-Yasmin Turhan, Kewen Wang, and Xiaowang Zhang, editors, *7th Joint International Conference Semantic Technology, JIST 2017*, pages 69–85. Gold Coast, Australia, Springer International Publishing, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Modelling context-dependent domains is hard, as capturing multiple context-dependent concepts and constraints easily leads to inconsistent models or unintended restrictions. However, current semantic technologies do not yet support reasoning on context-dependent domains. To remedy this, we introduced ConDL, a set of novel description logics tailored to reason on contextual knowledge, as well as JConHT, a dedicated reasoner for ConDL ontologies. ConDL enables reasoning on the consistency and satisfiability of context-dependent domain models, e.g., Compartment Role Object Models (CROM). We evaluate the suitability and efficiency of our approach by reasoning on a modelled banking application and measuring the performance on randomly generated models.

Stefan Borgwardt, Marco Cerami, and Rafael Peñaloza: **The Complexity of Fuzzy EL under the Lukasiewicz T-norm**. *International Journal of Approximate Reasoning*, 91:179–201, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy Description Logics (DLs) are a family of knowledge representation formalisms designed to represent and reason about vague and imprecise knowledge that is inherent to many application domains. Previous work has shown that the complexity of reasoning in a fuzzy DL using finitely many truth degrees is usually not higher than that of the underlying classical DL. We show that this does not hold for fuzzy extensions of the light-weight DL EL, which is used in many biomedical ontologies, under the finitely valued Łukasiewicz semantics. More precisely, the complexity of reasoning increases from P to ExpTime, even if only one additional truth value is introduced. When adding complex role inclusions and inverse roles, the logic even becomes undecidable. Even more surprisingly, when considering the infinitely valued Łukasiewicz semantics, reasoning in fuzzy EL is undecidable.

Stefan Borgwardt, Marco Cerami, and Rafael Peñaloza: **Lukasiewicz Fuzzy EL is Undecidable**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics (DL'17)*, volume 1879 of *CEUR Workshop Proceedings*. Montpellier, France, CEUR-WS, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Description Logics have been proposed as formalisms for representing and reasoning about imprecise knowledge by introducing intermediate truth degrees. Unfortunately, it has been shown that reasoning in these logics easily becomes undecidable, when infinitely many truth degrees are considered and conjunction is not idempotent. In this paper, we take those results to the extreme, and show that subsumption in fuzzy EL under Łukasiewicz semantics is undecidable. This provides the first instance of a Horn-style logic with polynomial-time reasoning whose fuzzy extension becomes undecidable.

Stefan Borgwardt, Ismail Ilkan Ceylan, and Thomas Lukasiewicz: **Ontology-Mediated Queries for Probabilistic Databases**. In Satinder Singh and Shaul Markovitch, editors, *Proceedings of the 31st AAAI Conf. on Artificial Intelligence (AAAI'17)*, pages 1063–1069. San Francisco, USA, AAAI Press, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Probabilistic databases (PDBs) are usually incomplete, e.g., contain only the facts that have been extracted from the Web with high confidence. However, missing facts are often treated as being false, which leads to unintuitive results when querying PDBs. Recently, open-world probabilistic databases (OPDBs) were proposed to address this issue by allowing probabilities of unknown facts to take any value from a fixed probability interval. In this paper, we extend OPDBs by Datalog+/- ontologies, under which both upper and lower probabilities of queries become even more informative, enabling us to distinguish queries that were indistinguishable before. We show that the dichotomy between P and PP in (Open)PDBs can be lifted to the case of first-order rewritable positive programs (without negative constraints); and that the problem can become NP^{PP}-complete, once negative constraints are allowed. We also propose an approximating semantics that circumvents the increase in complexity caused by negative constraints.

Stefan Borgwardt, Ismail Ilkan Ceylan, and Thomas Lukasiewicz: **Ontology-Mediated Queries for Probabilistic Databases (Extended Abstract)**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics (DL'17)*, volume 1879 of *CEUR Workshop Proceedings*. Montpellier, France, CEUR-WS, 2017.

BibTeX entry
Paper (PDF)

Stefan Borgwardt and Rafael Peñaloza: **Algorithms for Reasoning in Very Expressive Description Logics under Infinitely Valued Gödel Semantics**. *International Journal of Approximate Reasoning*, 83:60–101, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy description logics (FDLs) are knowledge representation formalisms capable of dealing with imprecise knowledge by allowing intermediate membership degrees in the interpretation of concepts and roles. One option for dealing with these intermediate degrees is to use the so-called Gödel semantics, under which conjunction is interpreted by the minimum of the degrees of the conjuncts. Despite its apparent simplicity, developing reasoning techniques for expressive FDLs under this semantics is a hard task. In this paper, we introduce two new algorithms for reasoning in very expressive FDLs under Gödel semantics. They combine the ideas of a previous automata-based algorithm for Gödel FDLs with the known crispification and tableau approaches for FDL reasoning. The results are the two first practical algorithms capable of reasoning in infinitely valued FDLs supporting general concept inclusions.
Stefan Borgwardt and Rafael Peñaloza: **Fuzzy Description Logics – A Survey**. In Serafín Moral, Olivier Pivert, Daniel Sánchez, and Nicolás Marín, editors, *Proceedings of the 11th International Conference on Scalable Uncertainty Management (SUM'17)*, volume 10564 of *Lecture Notes in Computer Science*, pages 31–45. Granada, Spain, Springer-Verlag, 2017.

BibTeX entry
Paper (PDF)
DOI

Camille Bourgaux and Anni-Yasmin Turhan: **Temporal Query Answering in DL-Lite over Inconsistent Data**. In Claudia d'Amato and Miriam Fernandez, editors, *Proceedings of the 16th International Semantic Web Conference (ISWC 2017)*, *LNCS*, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

In ontology-based systems that process data which stems from different sources and is received over time, as in context-aware systems, reasoning needs to cope with the temporal dimension and should be resilient against inconsistencies in the data. Motivated by such settings, this paper addresses the problem of handling inconsistent data in a temporal version of ontology-based query answering. We consider a recently proposed temporal query language that combines conjunctive queries with operators of propositional linear temporal logic and extend to this setting three inconsistency-tolerant semantics that have been introduced for querying inconsistent description logic knowledge bases. We investigate their complexity for DL-Lite_{R} temporal knowledge bases, and furthermore complete the picture for the consistent case.

Ismail Ilkan Ceylan, Stefan Borgwardt, and Thomas Lukasiewicz: **Most Probable Explanations for Probabilistic Database Queries**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, pages 950–956, 2017.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

Forming the foundations of large-scale knowledge bases, probabilistic databases have been widely studied in the literature. In particular, probabilistic query evaluation has been investigated intensively as a central inference mechanism. However, despite its power, query evaluation alone cannot extract all the relevant information encompassed in large-scale knowledge bases. To exploit this potential, we study two inference tasks, namely finding the most probable database and finding the most probable hypothesis for a given query. As natural counterparts of most probable explanations (MPE) and maximum a posteriori hypotheses (MAP) in probabilistic graphical models, they can be used in a variety of applications that involve prediction or diagnosis tasks. We investigate these problems relative to a variety of query languages, ranging from conjunctive queries to ontology-mediated queries, and provide a detailed complexity analysis.

Ismail Ilkan Ceylan, Stefan Borgwardt, and Thomas Lukasiewicz: **Most Probable Explanations for Probabilistic Database Queries (Extended Abstract)**. In Alessandro Artale, Birte Glimm, and Roman Kontchakov, editors, *Proceedings of the 30th International Workshop on Description Logics (DL'17)*, volume 1879 of *CEUR Workshop Proceedings*. Montpellier, France, CEUR-WS, 2017.

BibTeX entry
Paper (PDF)

Ismail Ilkan Ceylan, Adnan Darwiche, and Guy Van Den Broeck: **Open-World Probabilistic Databases: An Abridged Report**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, 2017. Sister Conference Best Paper Track, to appear.

BibTeX entry
Paper (PDF)

#### Abstract:

Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this paper, we revisit the semantics underlying such systems. In particular, the closed-world assumption of probabilistic databases, that facts not in the database have probability zero, clearly conflicts with their everyday use. To address this discrepancy, we propose an open-world probabilistic database semantics, which relaxes the probabilities of open facts to default intervals. For this open-world setting, we lift the existing data complexity dichotomy of probabilistic databases, and propose an efficient evaluation algorithm for unions of conjunctive queries. We also show that query evaluation can become harder for non-monotone queries.
Ismail Ilkan Ceylan, Thomas Lukasiewicz, Rafael Peñaloza, and Oana Tifrea-Marciuska: **Query Answering in Ontologies under Preference Rankings**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, 2017. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

We present an ontological framework, based on preference rankings, that allows users to express their preferences between the knowledge explicitly available in the ontology. Using this formalism, the answers for a given query to an ontology can be ranked by preference, allowing users to retrieve the most preferred answers only. We provide a host of complexity results for the main computational tasks in this framework, for the general case, and for EL and DL-Lite_{core} as underlying ontology languages.

Ismail Ilkan Ceylan and Rafael Peñaloza Nyssen: **The Bayesian Ontology Language BEL**. *Journal of Automated Reasoning*, 58(1):67–95, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

We introduce the new probabilistic description logic (DL) BEL, which extends the light-weight DL EL with the possibility of expressing uncertainty about the validity of some knowledge. Contrary to other probabilistic DLs, BEL is designed to represent classical knowledge that depends on an uncertain context; that is, some of the knowledge may hold or not depending on the current situation. The probability distribution of these contexts is expressed by a Bayesian network (BN). We study different reasoning problems in BEL, providing tight complexity bounds for all of them. One particularly interesting property of our framework is that reasoning can be decoupled between the logical (i.e., EL), and the probabilistic (i.e., the BN) components. We later generalize all the notions presented to introduce Bayesian extensions of arbitrary ontology languages. Using the decoupling property, we are able to provide tight complexity bounds for reasoning in the Bayesian extensions of many other DLs. We provide a detailed analysis of our formalism w.r.t. the assumptions made and compare it with the existing approaches.
Patrick Koopmann and Jieying Chen: **Computing ALCH-Subsumption Modules Using Uniform Interpolation**. In Patrick Koopmann, Sebastian Rudolph, Renate Schmidt, and Christoph Wernhard, editors, *Proceedings of SOQE 2017*, *CEUR Workshop Proceedings*. CEUR-WS.org, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate how minimal subsumption modules can be extracted using methods for uniform interpolation and forgetting. Given an ontology and a signature of concept and role names, a subsumption module is a subset of the ontology that preserves all logical entailments that can be expressed in the description logic of the ontology using only terms in the specified signature. As such, they are useful for ontology reuse and ontology analysis. While there exists a range of methods for computing or approximating minimal modules for a range of module types, we are not aware of a practical, implemented method for computing minimal subsumption modules in description logics beyond ELH. In this paper, we present a method that uses uniform interpolation/forgetting to compute subsumption modules in ALCH, and which under certain conditions guarantees minimality of the extracted modules. As a side product, our method computes a so-called LK subsumption module, which over-approximates the union of all minimal subsumption modules, and as such may already have applications of its own. We further present an initial evaluation of this method on a varied corpus of ontologies.
Patrick Koopmann, Marcus Hähnel, and Anni-Yasmin Turhan: **Energy-Efficiency of OWL Reasoners—Frequency Matters**. In *Proceedings of JIST 2017*. Springer International Publishing, 2017.

BibTeX entry
Paper (PDF)
©Springer International Publishing

#### Abstract:

While running times of ontology reasoners have been studied extensively, studies on energy-consumption of reasoning are scarce, and the energy-efficiency of ontology reasoning is not fully understood yet. Earlier empirical studies on the energy-consumption of ontology reasoners focused on reasoning on smart phones and used measurement methods prone to noise and side-effects. This paper presents an evaluation of the energy-efficiency of five state-of-the-art OWL reasoners on an ARM single-board computer that has built-in sensors to measure the energy consumption of CPUs and memory precisely. Using such a machine gives full control over installed and running software, active clusters and CPU frequencies, allowing for a more precise and detailed picture of the energy consumption of ontology reasoning. Besides evaluating the energy consumption of reasoning, our study further explores the relationship between computation power of the CPU, reasoning time, and energy consumption.
Francesco Kriegel: **Acquisition of Terminological Knowledge from Social Networks in Description Logic**. In Rokia Missaoui, Sergei O. Kuznetsov, and Sergei Obiedkov, editors, *Formal Concept Analysis of Social Networks*, pages 97–142. Cham, Springer International Publishing, 2017.

BibTeX entry
DOI

#### Abstract:

The Web Ontology Language (OWL) has gained considerable attention since its standardisation in 2004, and it is heavily used in applications requiring the representation of, as well as reasoning with, knowledge. It is the language of the Semantic Web, and it has a strong logical underpinning by means of so-called Description Logics (DLs). DLs are a family of conceptual languages suitable for knowledge representation and reasoning due to their strong logical foundation, and for which the decidability and complexity of common reasoning problems are widely explored. In particular, the reasoning tasks allow for the deduction of implicit knowledge from explicitly stated facts and axioms, and plenty of appropriate algorithms were developed, optimized, and implemented, e.g., tableaux algorithms and completion algorithms. In this document, we present a technique for the acquisition of terminological knowledge from social networks. More specifically, we show how OWL axioms, i.e., concept inclusions and role inclusions in DLs, can be obtained from social graphs in a sound and complete manner. A social graph is simply a directed graph, the vertices of which describe the entities, e.g., persons, events, messages, etc., and the edges of which describe the relationships between the entities, e.g., friendship between persons, a person's attendance at an event, a person liking a message, etc. Furthermore, the vertices of social graphs are labeled, e.g., to describe properties of the entities, and the edges are also labeled to specify the concrete relationships. As an exemplary social network we consider Facebook, and show that it fits our use case.
Francesco Kriegel: **First Notes on Maximum Entropy Entailment for Quantified Implications**. In *Formal Concept Analysis - 14th International Conference, ICFCA 2017, Rennes, France, June 13-16, 2017, Proceedings*, pages 155–167, 2017.

BibTeX entry
DOI

#### Abstract:

Entropy is a measure for the uninformativeness or randomness of a data set, i.e., the higher the entropy, the lower the amount of information. In the field of propositional logic it has proven to be a suitable measure to maximize when dealing with models of probabilistic propositional theories. More specifically, it was shown that the maximal-entropy model of a probabilistic propositional theory allows for the deduction of further formulae which are somehow expected by humans, i.e., it allows for a kind of common-sense reasoning. In order to transfer the technique of maximum entropy entailment to the field of Formal Concept Analysis, we define the notion of the entropy of a formal context with respect to the frequency of its object intents, and then define maximum entropy entailment for quantified implication sets, i.e., for sets of partial implications where each implication has an assigned degree of confidence. This entailment technique is then utilized to define so-called maximum entropy implicational bases (ME-bases), and a first general example of such an ME-base is provided.
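As a sketch (not taken from the paper), the entropy notion being maximized here is the standard Shannon entropy applied to the relative frequencies \(p_1,\dots,p_n\) of the object intents:

```latex
H(p_1,\dots,p_n) \;=\; -\sum_{i=1}^{n} p_i \log_2 p_i ,
\qquad \sum_{i=1}^{n} p_i = 1 ,\; p_i \ge 0 .
```

Roughly, maximum entropy entailment then prefers, among all probability assignments consistent with the given quantified implications, the one maximizing \(H\).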
Francesco Kriegel: **Implications over Probabilistic Attributes**. In *Formal Concept Analysis - 14th International Conference, ICFCA 2017, Rennes, France, June 13-16, 2017, Proceedings*, pages 168–183, 2017.

BibTeX entry
DOI

#### Abstract:

We consider the task of acquisition of terminological knowledge from given assertional data. However, when evaluating data of real-world applications we often encounter situations where it is impractical to deduce only crisp knowledge, due to the presence of exceptions or errors. It is rather appropriate to allow for degrees of uncertainty within the derived knowledge. Consequently, suitable methods for knowledge acquisition in a probabilistic framework should be developed. In particular, we consider data which is given as a probabilistic formal context, i.e., as a triadic incidence relation between objects, attributes, and worlds, which is furthermore equipped with a probability measure on the set of worlds. We define the notion of a probabilistic attribute as a probabilistically quantified set of attributes, and define the notion of validity of implications over probabilistic attributes in a probabilistic formal context. Finally, a technique for the axiomatization of such implications from probabilistic formal contexts is developed. This is done in a sound and complete manner, i.e., all derived implications are valid, and all valid implications are deducible from the derived implications. In case of finiteness of the input data to be analyzed, the constructed axiomatization is finite, too, and can be computed in finite time.
Francesco Kriegel: **Probabilistic implication bases in FCA and probabilistic bases of GCIs in EL**. *Int. J. General Systems*, 46(5):511–546, 2017.

BibTeX entry
DOI

#### Abstract:

A probabilistic formal context is a triadic context the third dimension of which is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces the probability of implications with respect to probabilistic formal contexts, and provides a construction for a base of implications whose probabilities exceed a given lower threshold. A comparison between confidence and probability of implications is drawn, which shows that the two measures do not coincide. Furthermore, the results are extended towards the lightweight description logic ℰℒ⊥ with probabilistic interpretations, and a method for computing a base of general concept inclusions whose probabilities are greater than a pre-defined lower bound is proposed. Additionally, we consider so-called probabilistic attributes over probabilistic formal contexts, and provide a method for the axiomatization of implications over probabilistic attributes.
Francesco Kriegel and Daniel Borchmann: **NextClosures: parallel computation of the canonical base with background knowledge**. *Int. J. General Systems*, 46(5):490–510, 2017.

BibTeX entry
DOI

#### Abstract:

The canonical base of a formal context plays a distinguished role in Formal Concept Analysis, as it is the only minimal implicational base known so far that can be described explicitly. Consequently, several algorithms for the computation of this base have been proposed. However, all those algorithms work sequentially by computing only one pseudo-intent at a time – a fact that heavily impairs the practicability in real-world applications. In this paper, we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner with respect to arbitrary implicational background knowledge. First experimental evaluations show that for sufficiently large data sets the speed-up is proportional to the number of available CPU cores.
Markus Krötzsch, Maximilian Marx, Ana Ozaki, and Veronika Thost: **Attributed Description Logics: Ontologies for Knowledge Graphs**. In Claudia d'Amato, Miriam Fernández, Valentina A. M. Tamma, Freddy Lécué, Philippe Cudré-Mauroux, Juan F. Sequeda, Christoph Lange, and Jeff Heflin, editors, *Proceedings of the 16th International Semantic Web Conference (ISWC'17)*, volume 10587 of *LNCS*, pages 418–435. Springer, October 2017.

BibTeX entry
Paper (PDF)
DOI
©Springer-Verlag

#### Abstract:

In modelling real-world knowledge, there often arises a need to represent and reason with meta-knowledge. To equip description logics (DLs) for dealing with such ontologies, we enrich DL concepts and roles with finite sets of attribute–value pairs, called annotations, and allow concept inclusions to express constraints on annotations. We show that this may lead to increased complexity or even undecidability, and we identify cases where this increased expressivity can be achieved without incurring increased complexity of reasoning. In particular, we describe a tractable fragment based on the lightweight description logic EL, and we cover SROIQ, the DL underlying OWL 2 DL.
Markus Krötzsch, Maximilian Marx, Ana Ozaki, and Veronika Thost: **Reasoning with Attributed Description Logics**. In *Proceedings of the 30th International Workshop on Description Logics (DL 2017)*, *CEUR Workshop Proceedings*. CEUR-WS.org, July 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

In modelling real-world knowledge, there often arises a need to represent and reason with meta-knowledge. To equip description logics (DLs) for dealing with such ontologies, we enrich DL concepts and roles with finite sets of attribute–value pairs, called annotations, and allow concept inclusions to express constraints on annotations. We show that this may lead to increased complexity or even undecidability, and we identify cases where this increased expressivity can be achieved without incurring increased complexity of reasoning. In particular, we describe a tractable fragment based on the lightweight description logic EL.
Maximilian Marx, Markus Krötzsch, and Veronika Thost: **Logic on MARS: Ontologies for generalised property graphs**. In Carles Sierra, editor, *Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI'17)*, pages 1188–1194. International Joint Conferences on Artificial Intelligence, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Graph-structured data is used to represent large information collections, called knowledge graphs, in many applications. Their exact format may vary, but they often share the concept that edges can be annotated with additional information, such as validity time or provenance information. Property Graph is a popular graph database format that also provides this feature. We give a formalisation of a generalised notion of Property Graphs, called multi-attributed relational structures (MARS), and introduce a matching knowledge representation formalism, multi-attributed predicate logic (MAPL). We analyse the expressive power of MAPL and suggest a simpler, rule-based fragment of MAPL that can be used for ontological reasoning on Property Graphs. To the best of our knowledge, this is the first approach to making Property Graphs and related data structures accessible to symbolic AI.
Maximilian Pensel and Anni-Yasmin Turhan: **Including Quantification in Defeasible Reasoning for the Description Logic EL⊥**. In Marcello Balduccini and Tomi Janhunen, editors, *Proceedings of the 14th International Conference on Logic Programming and Nonmonotonic Reasoning - LPNMR*, pages 78–84. Springer, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Defeasible Description Logics (DDLs) extend classical Description Logics with defeasible concept inclusions and thereby offer a form of non-monotonicity. Earlier approaches to reasoning in such settings often employed the rational closure according to the well-known KLM postulates (for propositional logic). If in DDLs that use quantification a defeasible subsumption relationship holds between two concepts, such a relationship might also hold if these concepts appear nested in existential restrictions. Earlier reasoning algorithms for DDLs do not detect this kind of defeasible subsumption relationship. We devise a new form of canonical models that extend classical canonical models for EL⊥ by elements that satisfy increasing amounts of defeasible knowledge. We show that reasoning based on these models yields the missing entailments.

Maximilian Pensel and Anni-Yasmin Turhan: **Making Quantification Relevant again—the case of Defeasible EL⊥**. In Richard Booth, Giovanni Casini, and Ivan Varzinczak, editors, *Proceedings of the 4th International Workshop on Defeasible and Ampliative Reasoning - DARe*. CEUR-WS.org, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Defeasible Description Logics (DDLs) extend Description Logics with defeasible concept inclusions. Reasoning in DDLs often employs rational or relevant closure according to the (propositional) KLM postulates. If in DDLs with quantification a defeasible subsumption relationship holds between concepts, this relationship might also hold if these concepts appear in existential restrictions. Such nested defeasible subsumption relationships were not detected by earlier reasoning algorithms—neither for rational nor relevant closure. Recently, we devised a new approach for EL⊥ that alleviates this problem for rational closure by the use of typicality models that extend classical canonical models by domain elements that individually satisfy any amount of consistent defeasible knowledge. In this paper we lift our approach to relevant closure and show that reasoning based on typicality models yields the missing entailments.

Veronika Thost: **News on Temporal Conjunctive Queries**. In Daniele Dell'Aglio, Darko Anicic, Payam M. Barnaghi, Emanuele Della Valle, Deborah L. McGuinness, Loris Bozzato, Thomas Eiter, Martin Homola, and Daniele Porello, editors, *Joint Proceedings of the Web Stream Processing workshop (WSP 2017) and the 2nd International Workshop on Ontology Modularity, Contextuality, and Evolution (WOMoCoE 2017) co-located with 16th International Semantic Web Conference (ISWC 2017), Vienna, Austria, October 22nd, 2017*, volume 1936 of *CEUR Workshop Proceedings*, pages 1–16. CEUR-WS.org, October 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Temporal query languages are important for stream processing, and ontologies for stream reasoning. Temporal conjunctive queries have therefore recently been investigated together with description logic ontologies, and our knowledge about the combined complexity is rather complete. However, the size of the queries and the ontology is often negligible, and what matters is the size of the data. We present new results on the data complexity of ontology-based temporal query answering and close the gap between co-NP and ExpTime for many description logics.
Veronika Thost: **Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information (Extended Abstract)**. *KI*, 31(4):377–380, 2017.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Ontologies may capture the terminology of an application domain and describe domain knowledge in a machine-processable way. Formal ontology languages, such as description logics, additionally provide semantics to these specifications. Systems for ontology-based data access (OBDA) may thus apply logical reasoning to answer queries over given data; they use the ontological knowledge to infer new information that is implicit in the data. However, the classical OBDA setting regards only a single moment in time, which means that information about time is not used for reasoning and that the queries cannot express temporal aspects. We investigate temporal query languages that allow accessing temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering regarding ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing.
Marco Wilhelm, Gabriele Kern-Isberner, and Andreas Ecke: **Basic Independence Results for Maximum Entropy Reasoning Based on Relational Conditionals**. In *GCAI 2017. 3rd Global Conference on Artificial Intelligence*, volume 50 of *EPiC Series in Computing*, pages 36–50. EasyChair, 2017.

BibTeX entry
Paper (PDF)

#### Abstract:

Maximum entropy reasoning (ME-reasoning) based on relational conditionals combines both the capability of ME-distributions to express uncertain knowledge in a way that excellently fits to commonsense, and the great expressivity of an underlying first-order logic. The drawbacks of this approach are its high complexity, which is generally paired with a costly dependency on the domain size, and its non-transparency due to the lack of a priori independence assumptions, in contrast to Bayesian networks. In this paper we present some independence results for ME-reasoning based on the aggregating semantics for relational conditionals that help to disentangle the composition of ME-distributions, and therefore lead to a problem reduction and provide structural insights into ME-reasoning.

## 2016

F. Baader, M. Bienvenu, C. Lutz, and F. Wolter: **Query and Predicate Emptiness in Ontology-Based Data Access**. *Journal of Artificial Intelligence Research (JAIR)*, 56:1–59, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

In ontology-based data access (OBDA), database querying is enriched with an ontology that provides domain knowledge and additional vocabulary for query formulation. We identify query emptiness and predicate emptiness as two central reasoning services in this context. Query emptiness asks whether a given query has an empty answer over all databases formulated in a given vocabulary. Predicate emptiness is defined analogously, but quantifies universally over all queries that contain a given predicate. In this paper, we determine the computational complexity of query emptiness and predicate emptiness in the EL, DL-Lite, and ALC-families of description logics, investigate the connection to ontology modules, and perform a practical case study to evaluate the new reasoning services.
Franz Baader, Nguyen Thanh Binh, Stefan Borgwardt, and Barbara Morawska: **Deciding Unifiability and Computing Local Unifiers in the Description Logic EL without Top Constructor**. *Notre Dame Journal of Formal Logic*, 57(4):443–476, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive description logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has been shown to be NP-complete, and thus of considerably lower complexity than unification in other description logics of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show that unification in EL without the top concept is PSpace-complete. In addition to the decision problem, we also consider the problem of actually computing unifiers in EL without top.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Extending Unification in EL to Disunification: The Case of Dismatching and Local Disunification**. *Logical Methods in Computer Science*, 12(4:1):1–28, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we consider only solutions that are constructed from terms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of disunification problems.
Franz Baader and Andreas Ecke: **Reasoning with Prototypes in the Description Logic ALC using Weighted Tree Automata**. In *Proceedings of the 10th International Conference on Language and Automata Theory and Applications (LATA 2016)*, volume 9618 of *Lecture Notes in Computer Science*, pages 63–75. Springer-Verlag, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce an extension to Description Logics that allows us to use prototypes to define concepts. To accomplish this, we introduce the notion of a prototype distance function (pdf), which assigns to each element of an interpretation a distance value. Based on this, we define a new concept constructor of the form P_{∼n}(d) for ∼ being a relation from {≤, <, ≥, >}, which is interpreted as the set of all elements whose distance according to the pdf d is ∼ n. We show how weighted alternating parity tree automata (wapta) over the integers can be used to define pdfs, and how this allows us to use both concepts and pointed interpretations as prototypes. Finally, we investigate the complexity of reasoning in ALCP(wapta), which extends the Description Logic ALC with prototype constructors for pdfs defined using wapta.
Franz Baader and Oliver Fernández Gil: **Extending the Description Logic tEL(deg) with Acyclic TBoxes**. In *ECAI 2016 - 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence (PAIS 2016)*, volume 285 of *Frontiers in Artificial Intelligence and Applications*, pages 1096–1104. IOS Press, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

In a previous paper, we have introduced an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we have defined a graded membership function deg, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C_{∼t} for ∼ in {<, ≤, >, ≥} then collect all the individuals that belong to C with degree ∼ t. We have then investigated the complexity of reasoning in the Description Logic tEL(deg), which is obtained from EL by adding such threshold concepts. In the present paper, we extend these results, which were obtained for reasoning without TBoxes, to the case of reasoning w.r.t. acyclic TBoxes. Surprisingly, this is not as easy as might have been expected. On the one hand, one must be quite careful to define acyclic TBoxes such that they still just introduce abbreviations for complex concepts, and thus can be unfolded. On the other hand, it turns out that, in contrast to the case of EL, adding acyclic TBoxes to tEL(deg) increases the complexity of reasoning by at least one level of the polynomial hierarchy.
Franz Baader and Pierre Ludmann: **The Unification Type of ACUI w.r.t. the Unrestricted Instantiation Preorder is not Finitary**. In Silvio Ghilardi and Manfred Schmidt-Schauß, editors, *Proceedings of the 30th International Workshop on Unification (UNIF'16)*, pages 31–35, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

The unification type of an equational theory is defined using a preorder on substitutions, called the instantiation preorder, whose scope is either restricted to the variables occurring in the unification problem, or unrestricted such that all variables are considered. It is known that the unification type of an equational theory may vary, depending on which instantiation preorder is used. More precisely, it was shown that the theory ACUI of an associative, commutative, and idempotent binary function symbol with a unit is unitary w.r.t. the restricted instantiation preorder, but not unitary w.r.t. the unrestricted one. Here, we improve on this result, by showing that, w.r.t. the unrestricted instantiation preorder, ACUI is not even finitary.
Franz Baader, Pavlos Marantidis, and Alexander Okhotin: **Approximate Unification in the Description Logic FL_{0}**. In Loizos Michael and Antonis C. Kakas, editors, *Proc. of the 15th Eur. Conf. on Logics in Artificial Intelligence (JELIA 2016)*, volume 10021 of *Lecture Notes in Artificial Intelligence*, pages 49–63. Springer-Verlag, 2016.

BibTeX entry

#### Abstract:

Unification in Description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL FL0, where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that "almost" unify. The meaning of "almost" is formalized using distance measures between concepts. We show that approximate unification in FL0 can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.
Franz Baader, Pavlos Marantidis, and Alexander Okhotin: **Approximately Solving Set Equations**. In Silvio Ghilardi and Manfred Schmidt-Schauß, editors, *Proceedings of the 30th International Workshop on Unification (UNIF'16)*, pages 37–41, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification with constants modulo the theory ACUI of an associative (A), commutative (C) and idempotent (I) binary function symbol with a unit (U) corresponds to solving a very simple type of set equations. It is well-known that solvability of systems of such equations can be decided in polynomial time by reducing it to satisfiability of propositional Horn formulae. Here we introduce a modified version of this problem by no longer requiring all equations to be completely solved, but allowing for a certain number of violations of the equations. We introduce three different ways of counting the number of violations, and investigate the complexity of the respective decision problem, i.e., the problem of deciding whether there is an assignment that solves the system with at most l violations for a given threshold value l.
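As a rough illustration of the decision problem described above, the following sketch represents set equations over constants and variables and counts violated equations under an assignment. The data structures, the naive brute-force search, and the choice of counting (number of violated equations, one of the three variants the paper considers) are ours, not the paper's:

```python
from itertools import chain, combinations, product

# A set equation is a pair (lhs, rhs); each side is a list of symbols:
# lowercase strings are constants, uppercase strings are variables.
# An assignment maps variables to sets of constants; a side evaluates
# to the union of its constants and the values of its variables.

def evaluate(side, assignment):
    result = set()
    for sym in side:
        result |= assignment.get(sym, {sym} if sym.islower() else set())
    return result

def violations(equations, assignment):
    """Count the equations whose two sides evaluate to different sets."""
    return sum(1 for lhs, rhs in equations
               if evaluate(lhs, assignment) != evaluate(rhs, assignment))

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def solvable_within(equations, variables, constants, l):
    """Brute-force check: is there an assignment with at most l violations?
    Exponential search, for illustration only -- the paper studies the
    complexity of the decision problem, not this naive algorithm."""
    consts = list(constants)
    for choice in product(*[[set(s) for s in powerset(consts)]
                            for _ in variables]):
        if violations(equations, dict(zip(variables, choice))) <= l:
            return True
    return False
```

For instance, the system {X ∪ {a} = {a, b}, X = {c}} has no exact solution, but X = {c} satisfies it with one violation.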
Daniel Borchmann, Felix Distel, and Francesco Kriegel: **Axiomatisation of General Concept Inclusions from Finite Interpretations**. *Journal of Applied Non-Classical Logics*, 26(1):1–46, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Description logic knowledge bases can be used to represent knowledge about a particular domain in a formal and unambiguous manner. Their practical relevance has been shown in many research areas, especially in biology and the Semantic Web. However, the task of constructing knowledge bases itself, often performed by human experts, is difficult, time-consuming and expensive. In particular, the synthesis of terminological knowledge is a challenge every expert has to face. Because human experts cannot be omitted completely from the construction of knowledge bases, it would therefore be desirable to at least get some support from machines during this process. To this end, we shall investigate in this work an approach which allows us to extract terminological knowledge in the form of general concept inclusions from factual data, where the data is given in the form of vertex- and edge-labeled graphs. As such graphs appear naturally within the scope of the Semantic Web in the form of sets of RDF triples, the presented approach opens up another possibility to extract terminological knowledge from the Linked Open Data Cloud.
Daniel Borchmann and Tom Hanika: **Some Experimental Results on Randomly Generating Formal Contexts**. In Marianne Huchard and Sergei Kuznetsov, editors, *Proceedings of the Thirteenth International Conference on Concept Lattices and Their Applications*, volume 1624 of *CEUR Workshop Proceedings*, pages 57–69. CEUR-WS.org, July 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate different simple approaches to generate random formal contexts. To this end, we consider for each approach the empirical correlation between the number of intents and pseudo-intents. We compare the results of these experiments with corresponding observations on real-world use-cases. This comparison yields huge differences between artificially generated and real-world data sets, indicating that using randomly generated formal contexts for applications such as benchmarking may not necessarily be meaningful. In doing so, we additionally show that the previously observed phenomenon of the “Stegosaurus” does not express a real correlation between intents and pseudo-intents, but is an artifact of the way random contexts are generated.
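For illustration, the simplest way to generate a random formal context — flipping an independent coin for each object–attribute incidence — can be sketched as follows (the function name and parameters are ours; the paper compares several such generation approaches):

```python
import random

def random_context(num_objects, num_attributes, density, seed=None):
    """Generate a random formal context as a 0/1 incidence matrix:
    each (object, attribute) pair is incident independently with
    probability `density` (the direct coin-flipping model)."""
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0
             for _ in range(num_attributes)]
            for _ in range(num_objects)]

# A small 5x4 context with expected density 0.3:
context = random_context(5, 4, 0.3, seed=42)
```

The paper's point is precisely that contexts produced by such simple models behave very differently from real-world data with respect to the intent/pseudo-intent correlation.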
Stefan Borgwardt, Bettina Fazzinga, Thomas Lukasiewicz, Akanksha Shrivastava, and Oana Tifrea-Marciuska: **Preferential Query Answering in the Semantic Web with Possibilistic Networks**. In Subbarao Kambhampati, editor, *Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI'16)*, pages 994–1000. New York City, USA, AAAI Press, 2016.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

In this paper, we explore how ontological knowledge expressed via existential rules can be combined with possibilistic networks (i) to represent qualitative preferences along with domain knowledge, and (ii) to realize preference-based answering of conjunctive queries (CQs). We call these combinations ontological possibilistic networks (OP-nets). We define skyline and k-rank answers to CQs under preferences and provide complexity (including data tractability) results for deciding consistency and CQ skyline membership for OP-nets. We show that our formalism has a lower complexity than a similar existing formalism.
Stefan Borgwardt, Bettina Fazzinga, Thomas Lukasiewicz, Akanksha Shrivastava, and Oana Tifrea-Marciuska: **Preferential Query Answering in the Semantic Web with Possibilistic Networks**. In Gerhard Friedrich, Malte Helmert, and Franz Wotawa, editors, *Proceedings of the 39th German Conference on Artificial Intelligence (KI'16)*, volume 9904 of *Lecture Notes in Artificial Intelligence*, pages 264–270. Klagenfurt, Austria, Springer-Verlag, 2016. Extended abstract.

BibTeX entry
Paper (PDF)
DOI
©Springer-Verlag

#### Abstract:

In this paper, we explore how ontological knowledge expressed via existential rules can be combined with possibilistic networks (i) to represent qualitative preferences along with domain knowledge, and (ii) to realize preference-based answering of conjunctive queries (CQs). We call these combinations ontological possibilistic networks (OP-nets). We define skyline and k-rank answers to CQs under preferences and provide complexity (including data tractability) results for deciding consistency and CQ skyline membership for OP-nets. We show that our formalism has a lower complexity than a similar existing formalism.
Stefan Borgwardt, Theofilos Mailis, Rafael Peñaloza, and Anni-Yasmin Turhan: **Answering Fuzzy Conjunctive Queries over Finitely Valued Fuzzy Ontologies**. *Journal on Data Semantics*, 5(2):55–75, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy Description Logics (DLs) provide a means for representing vague knowledge about an application domain. In this paper, we study fuzzy extensions of conjunctive queries (CQs) over the DL SROIQ based on finite chains of degrees of truth. To answer such queries, we extend a well-known technique that reduces the fuzzy ontology to a classical one, and use classical DL reasoners as a black box. We improve the complexity of previous reduction techniques for finitely valued fuzzy DLs, which allows us to prove tight complexity results for answering certain kinds of fuzzy CQs. We conclude with an experimental evaluation of a prototype implementation, showing the feasibility of our approach.
Stefan Borgwardt and Rafael Peñaloza: **Reasoning in Expressive Gödel Description Logics**. In Maurizio Lenzerini and Rafael Peñaloza, editors, *Proceedings of the 29th International Workshop on Description Logics (DL'16)*, volume 1577 of *CEUR Workshop Proceedings*, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy description logics (FDLs) are knowledge representation formalisms capable of dealing with imprecise knowledge by allowing intermediate membership degrees in the interpretation of concepts and roles. One option for dealing with these intermediate degrees is to use the so-called Gödel semantics, under which conjunction is interpreted by the minimum of the degrees. Despite its apparent simplicity, developing reasoning techniques for expressive FDLs under this semantics is a hard task. In this paper, we illustrate two algorithms for deciding consistency in (sublogics of) SROIQ under Gödel semantics.
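As a side note on the Gödel semantics described in this abstract, the t-norm and its residuum are simple to state; the following minimal Python sketch shows these standard operations (the example degrees are made up, and this is not one of the paper's algorithms):

```python
# Gödel semantics for fuzzy DLs: conjunction is interpreted by the minimum
# of the membership degrees. Generic illustration of the t-norm and its
# residuum, with invented example values.

def goedel_conj(x, y):
    """Gödel t-norm: interprets fuzzy conjunction as the minimum."""
    return min(x, y)

def goedel_impl(x, y):
    """Residuum of the Gödel t-norm: interprets fuzzy implication."""
    return 1.0 if x <= y else y

# Made-up degrees to which an individual belongs to two concepts:
a, b = 0.7, 0.4
print(goedel_conj(a, b))  # degree of membership in the conjunction
print(goedel_impl(a, b))
```

Despite the apparent simplicity of taking minima, reasoning under this semantics is hard precisely because degrees from infinitely many values in [0,1] interact through such residuated implications, which is the difficulty the two algorithms in the paper address.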
Stefan Borgwardt and Rafael Peñaloza: **Reasoning in fuzzy description logics using automata**. *Fuzzy Sets and Systems*, 298:22–43, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Automata-based methods have been successfully employed to prove tight complexity bounds for reasoning in many classical logics, and in particular in Description Logics (DLs). Very recently, the ideas behind these automata-based approaches were adapted for reasoning also in fuzzy extensions of DLs, with semantics based either on finitely many truth degrees or the Gödel t-norm over the interval [0,1]. Clearly, due to the different semantics in these logics, the construction of the automata for fuzzy DLs is more involved than for the classical case. In this paper we provide an overview of the existing automata-based methods for reasoning in fuzzy DLs, with a special emphasis on explaining the ideas and the requirements behind them. The methods vary from deciding emptiness of automata on infinite trees to inclusions between automata on finite words. Overall, we provide a comprehensive perspective on the automata-based methods currently in use, and the many complexity results obtained through them.
Claudia Carapelle and Anni-Yasmin Turhan: **Description Logics Reasoning w.r.t. general TBoxes is decidable for Concrete Domains with the EHD-property**. In Gal A. Kaminka, Maria Fox, Paolo Bouquet, Eyke Hüllermeier, Virginia Dignum, Frank Dignum, and Frank van Harmelen, editors, *Proceedings of the 22nd European Conference on Artificial Intelligence*, pages 1440–1448. Dresden, Germany, IOS Press, 2016.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Reasoning in Description Logics with concrete domains and w.r.t. general TBoxes easily becomes undecidable. However, with some restrictions on the concrete domain, decidability can be regained. We introduce a novel way to integrate a concrete domain D into the well-known description logic ALC; we call the resulting logic ALCP(D). We then identify sufficient conditions on D that guarantee decidability of the satisfiability problem, even in the presence of general TBoxes. In particular, we show decidability of ALCP(D) for several domains over the integers, for which decidability was open. More generally, this result holds for all negation-closed concrete domains with the EHD-property, which stands for 'the existence of a homomorphism is definable'. Such techniques have recently been used to show decidability of CTL* with local constraints over the integers.
Ismail Ilkan Ceylan, Adnan Darwiche, and Guy Van den Broeck: **Open-World Probabilistic Databases**. In Chitta Baral, James P. Delgrande, and Frank Wolter, editors, *Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR 2016)*, pages 339–348. AAAI Press, 2016. Marco Cadoli Best Student Paper Prize.

BibTeX entry
Paper (PDF)

#### Abstract:

Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this paper, we revisit the semantics underlying such systems. In particular, the closed-world assumption of probabilistic databases, that facts not in the database have probability zero, clearly conflicts with their everyday use. To address this discrepancy, we propose an open-world probabilistic database semantics, which relaxes the probabilities of open facts to intervals. While still assuming a finite domain, this semantics can provide meaningful answers when some probabilities are not precisely known. For this open world setting, we propose an efficient evaluation algorithm for unions of conjunctive queries. Our open-world algorithm incurs no overhead compared to closed-world reasoning and runs in time linear in the size of the database for tractable queries. All other queries are #P-hard, implying a data complexity dichotomy between linear time and #P. For queries involving negation, however, open-world reasoning can become NP-, or even NP^PP-hard. Finally, we discuss additional knowledge-representation layers that can further strengthen open-world reasoning about big uncertain data.
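The interval-based relaxation described in the abstract can be illustrated with a toy computation in Python. Everything below — the tuples, the threshold name `lam`, and the independence assumption — is invented for illustration and is not the paper's algorithm:

```python
# Toy illustration of open- vs. closed-world tuple probabilities:
# facts absent from a probabilistic database need not have probability 0.
# All names and numbers here are made up for the example.

def prob_disjunction(probs):
    """P(t1 or t2 or ...) for independent tuples."""
    p = 0.0
    for q in probs:
        p = p + q - p * q
    return p

db = {"edge(a,b)": 0.8}      # known tuple with its probability
open_fact = "edge(b,c)"      # tuple not present in the database
lam = 0.3                    # assumed upper bound on open-fact probabilities

# Closed world: the open fact has probability exactly 0.
closed = prob_disjunction([db["edge(a,b)"], 0.0])

# Open world: the open fact may have any probability in [0, lam],
# so the query answer becomes an interval [lower, upper].
lower = closed
upper = prob_disjunction([db["edge(a,b)"], lam])
print(closed, (lower, upper))
```

Under the closed-world reading the answer is a single number; under the open-world reading it widens to an interval whose width depends on the chosen bound.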

Ismail Ilkan Ceylan, Adnan Darwiche, and Guy Van den Broeck: **Open-World Probabilistic Databases (Extended Abstract)**. In Maurizio Lenzerini and Rafael Peñaloza, editors, *Proceedings of the 29th International Workshop on Description Logics (DL 2016)*. CEUR Workshop Proceedings, 2016.

BibTeX entry
Paper (PDF)

Ismail Ilkan Ceylan, Thomas Lukasiewicz, and Rafael Peñaloza Nyssen: **Complexity Results for Probabilistic Datalog+/-**. In Maria S. Fox and Gal A. Kaminka, editors, *Proceedings of the 22nd European Conference on Artificial Intelligence (ECAI 2016)*, pages 1414–1422. IOS Press, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the query evaluation problem in probabilistic databases in the presence of probabilistic existential rules. Our focus is on the Datalog+/- family of languages, for which we define the probabilistic counterpart using a flexible and compact encoding of probabilities. This formalism can be viewed as a generalization of probabilistic databases, as it allows one to generate new facts from the given ones using so-called tuple-generating dependencies, or existential rules. We study the computational cost of this additional expressiveness under two different semantics. First, we use a conventional approach, assume that the probabilistic knowledge base is consistent, and employ the standard possible world semantics. Afterwards, we introduce a probabilistic inconsistency-tolerant semantics, which we refer to as inconsistency-tolerant possible world semantics. For both of these cases, we provide a thorough complexity analysis relative to different languages, drawing a complete picture of the complexity of probabilistic query answering in this family.
Francesco Kriegel: **Axiomatization of General Concept Inclusions from Streams of Interpretations with optional Error Tolerance**. In Sergei Kuznetsov, Amedeo Napoli, and Sebastian Rudolph, editors, *Proceedings of the 5th Workshop "What can FCA do for Artificial Intelligence?" (FCA4AI 2016)*, 2016. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

We propose applications that utilize the infimum and the supremum of closure operators that are induced by structures occurring in the field of Description Logics. More specifically, we consider the closure operators induced by interpretations as well as closure operators induced by TBoxes, and show how we can learn GCIs from streams of interpretations, and how an error-tolerant axiomatization of GCIs from an interpretation guided by a hand-crafted TBox can be achieved.
Francesco Kriegel: **NextClosures with Constraints**. In Marianne Huchard and Sergei Kuznetsov, editors, *Proceedings of the 13th International Conference on Concept Lattices and Their Applications (CLA 2016)*, volume 1624 of *CEUR Workshop Proceedings*, pages 231–243. Moscow, Russia, CEUR-WS.org, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

In a previous paper, the algorithm NextClosures for computing the set of all formal concepts as well as the canonical base of a given formal context was introduced. Here, this algorithm is generalized to a setting where the data set is described by means of a closure operator in a complete lattice, and it is furthermore extended with the ability to handle constraints that are given in the form of a second closure operator. As a special case, constraints may be predefined as implicational background knowledge. Additionally, we show how the algorithm can be modified to perform parallel Attribute Exploration for unconstrained closure operators, and we explain why (parallel) Attribute Exploration is impossible for constrained closure operators if the constraint is not compatible with the data set.
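For readers unfamiliar with canonical bases, the core operation that algorithms in the NextClosures family build on — closing an attribute set under a set of implications — can be sketched in a few lines of Python. This is a generic Formal Concept Analysis operation, not the constrained variant the paper develops, and the example implications are invented:

```python
# Closure of an attribute set under implications A -> B: repeatedly add the
# conclusion of every implication whose premise is already contained.
# Generic FCA sketch; the implications below are illustrative only.

def close(attrs, implications):
    """Smallest superset of `attrs` closed under all implications (A, B)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

imps = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
print(close({"a", "c"}, imps))  # "a" forces "b", then {"b","c"} forces "d"
```

The map `attrs -> close(attrs, imps)` is itself a closure operator (extensive, monotone, idempotent), which is the abstraction the paper generalizes to complete lattices.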
Francesco Kriegel: **Parallel Attribute Exploration**. In Ollivier Haemmerlé, Gem Stapleton, and Catherine Faron-Zucker, editors, *Proceedings of the 22nd International Conference on Conceptual Structures (ICCS 2016)*, volume 9717 of *Lecture Notes in Computer Science*, pages 91–106. Annecy, France, Springer-Verlag, 2016.

BibTeX entry
Paper (PDF)
DOI
©Springer-Verlag

#### Abstract:

The canonical base of a formal context is a minimal set of implications that is sound and complete. A recent paper has provided a new algorithm for the parallel computation of canonical bases. An important extension is the integration of expert interaction for Attribute Exploration in order to explore implicational bases of inaccessible formal contexts. This paper presents and analyzes an algorithm that allows for Parallel Attribute Exploration.
Markus Krötzsch and Veronika Thost: **Ontologies for Knowledge Graphs: Breaking the Rules**. In Yolanda Gil, Elena Simperl, Paul Groth, Freddy Lecue, Markus Krötzsch, Alasdair Gray, Marta Sabou, Fabian Flöck, and Hideaki Takeda, editors, *Proceedings of the 15th International Semantic Web Conference (ISWC 2016)*, *LNCS*. Springer, 2016.

BibTeX entry
Paper (PDF)

#### Abstract:

Large-scale knowledge graphs (KGs) are widely used in industry and academia, and provide excellent use-cases for ontologies. We find, however, that popular ontology languages, such as OWL and Datalog, cannot express even the most basic relationships on the normalised data format of KGs. Existential rules are more powerful, but may make reasoning undecidable. Normalising them to suit KGs often also destroys syntactic restrictions that ensure decidability and low complexity. We study this issue for several classes of existential rules and derive new syntactic criteria to recognise well-behaved rule-based ontologies over KGs.
Benjamin Zarrieß and Jens Claßen: **Decidable Verification of Golog Programs over Non-Local Effect Actions**. In Dale Schuurmans and Michael Wellman, editors, *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16)*. AAAI Press, February 2016.

BibTeX entry

#### Abstract:

The Golog action programming language is a powerful means to express high-level behaviours in terms of programs over actions defined in a Situation Calculus theory. In particular for physical systems, verifying that the program satisfies certain desired temporal properties is often crucial, but undecidable in general, the latter being due to the language's high expressiveness in terms of first-order quantification and program constructs. So far, approaches to achieve decidability involved restrictions where action effects either had to be context-free (i.e. not depend on the current state), local (i.e. only affect objects mentioned in the action's parameters), or at least bounded (i.e. only affect a finite number of objects). In this paper, we present a new, more general class of action theories (called acyclic) that allows for context-sensitive, non-local, unbounded effects, i.e. actions that may affect an unbounded number of possibly unnamed objects in a state-dependent fashion. We contribute to the further exploration of the boundary between decidability and undecidability for Golog, showing that for acyclic theories in the two-variable fragment of first-order logic, verification of CTL* properties of programs over ground actions is decidable.

## 2015

Franz Baader, Stefan Borgwardt, and Marcel Lippmann: **Temporal Conjunctive Queries in Expressive Description Logics with Transitive Roles**. In Bernhard Pfahringer and Jochen Renz, editors, *Proceedings of the 28th Australasian Joint Conference on Artificial Intelligence (AI'15)*, volume 9457 of *Lecture Notes in Artificial Intelligence*, pages 21–33. Canberra, Australia, Springer-Verlag, 2015.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.
Franz Baader, Stefan Borgwardt, and Marcel Lippmann: **Temporal Query Entailment in the Description Logic SHQ**. *Journal of Web Semantics*, 33:71–93, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Ontology-based data access (OBDA) generalizes query answering in databases towards deductive entailment since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As the query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as the ontology language we use the expressive DL SHQ. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem. In the course of this investigation, we also establish the complexity of consistency of Boolean knowledge bases in SHQ.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Dismatching and Local Disunification in EL**. In Maribel Fernández, editor, *Proceedings of the 26th International Conference on Rewriting Techniques and Applications (RTA'15)*, volume 36 of *Leibniz International Proceedings in Informatics*, pages 40–56. Warsaw, Poland, Dagstuhl Publishing, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints on unifiers can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we restrict the attention to solutions that are built from so-called atoms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of (general) disunification problems.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Dismatching and Local Disunification in EL (Extended Abstract)**. In Santiago Escobar and Mateu Villaret, editors, *Proceedings of the 29th International Workshop on Unification (UNIF'15)*, pages 13–18, 2015.

BibTeX entry
Paper (PDF)

Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Dismatching and Local Disunification in EL (Extended Abstract)**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL'15)*, volume 1350 of *CEUR Workshop Proceedings*, pages 30–33, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics has been introduced as a means to detect redundancies in ontologies. We try to extend the known decidability results for unification in the Description Logic EL to disunification since negative constraints on unifiers can be used to avoid unwanted unifiers. While decidability of the solvability of general EL-disunification problems remains an open problem, we obtain NP-completeness results for two interesting special cases: dismatching problems, where one side of each negative constraint must be ground, and local solvability of disunification problems, where we restrict the attention to solutions that are built from so-called atoms occurring in the input problem. More precisely, we first show that dismatching can be reduced to local disunification, and then provide two complementary NP-algorithms for finding local solutions of (general) disunification problems.
Franz Baader, Stefan Borgwardt, and Rafael Peñaloza: **On the Decidability Status of Fuzzy ALC with General Concept Inclusions**. *Journal of Philosophical Logic*, 44(2):117–146, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

The combination of Fuzzy Logics and Description Logics (DLs) has been investigated for at least two decades because such fuzzy DLs can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp Description Logics have been extended to reason also with their fuzzy counterparts. It has turned out, however, that in the presence of general concept inclusion axioms (GCIs) this extension is less straightforward than thought. In fact, a number of tableau algorithms claimed to deal correctly with fuzzy DLs with GCIs have recently been shown to be incorrect. In this paper, we concentrate on fuzzy ALC, the fuzzy extension of the well-known DL ALC. We present a terminating, sound, and complete tableau algorithm for fuzzy ALC with arbitrary continuous t-norms. Unfortunately, in the presence of GCIs, this algorithm does not yield a decision procedure for consistency of fuzzy ALC ontologies since it uses as a sub-procedure a solvability test for a finitely represented, but possibly infinite, system of inequations over the real interval [0,1], which are built using the t-norm. In general, it is not clear whether this solvability problem is decidable for such infinite systems of inequations. This may depend on the specific t-norm used. In fact, we also show in this paper that consistency of fuzzy ALC ontologies with GCIs is undecidable for the product t-norm. This implies, of course, that for the infinite systems of inequations produced by the tableau algorithm for fuzzy ALC with product t-norm, solvability is in general undecidable. We also give a brief overview of recently obtained (un)decidability results for fuzzy ALC w.r.t. other t-norms.
Franz Baader, Gerhard Brewka, and Oliver Fernández Gil: **Adding Threshold Concepts to the Description Logic EL**. In Carsten Lutz and Silvio Ranise, editors, *Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)*, volume 9322 of *Lecture Notes in Artificial Intelligence*, pages 33–48. Wroclaw, Poland, Springer-Verlag, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function, which for each individual and concept yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t, for ~ in {<, <=, >, >=}, then collect all the individuals that belong to C with degree ~t. We generalize a well-known characterization of membership in EL concepts to construct a specific graded membership function deg, and investigate the complexity of reasoning in the Description Logic tauEL(deg), which extends EL by threshold concepts defined using deg. We also compare the instance problem for threshold concepts of the form C>t in tauEL(deg) with the relaxed instance queries of Ecke et al.
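A hypothetical sketch of how threshold concepts select individuals once some graded membership function is fixed: the degrees, individual names, and the dictionary representation below are invented for illustration (the specific function deg is defined in the paper itself):

```python
# Threshold concepts C ~ t collect the individuals whose membership degree
# in C satisfies the comparison ~ against the threshold t.
# The membership table below is a made-up stand-in for a graded
# membership function, not the paper's deg.
import operator

OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def threshold_concept(membership, op, t):
    """Individuals whose degree in C satisfies `degree ~ t`."""
    return {ind for ind, degree in membership.items() if OPS[op](degree, t)}

deg_C = {"anna": 0.9, "bob": 0.5, "carl": 0.2}  # illustrative degrees in [0,1]
print(threshold_concept(deg_C, ">=", 0.5))       # anna and bob qualify
```

With a crisp membership function taking only degrees 0 and 1, `C >= 1` coincides with ordinary concept membership, which is why these logics extend rather than replace EL.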
Franz Baader, Gerhard Brewka, and Oliver Fernández Gil: **Adding Threshold Concepts to the Description Logic EL (extended abstract)**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL-2015)*, volume 1350 of *CEUR Workshop Proceedings*. Athens, Greece, CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce an extension of the lightweight Description Logic EL that allows us to define concepts in an approximate way. For this purpose, we use a graded membership function which, for each individual and concept, yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts then collect all the individuals that belong to an EL concept C with degree less than, less than or equal to, greater than, respectively greater than or equal to r, for some r in [0,1]. We generalize a well-known characterization of membership in EL concepts to obtain an appropriate graded membership function deg, and investigate the complexity of reasoning in the Description Logic which extends EL by threshold concepts defined using deg.
Franz Baader and Pierre Ludmann: **The Exact Unification Type of Commutative Theories**. In Santiago Escobar and Mateu Villaret, editors, *Proceedings of the 29th International Workshop on Unification (UNIF'15)*, pages 19–23, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

The exact unification type of an equational theory is based on a new preorder on substitutions, called the exactness preorder, which is tailored towards transferring decidability results for unification to disunification. We show that two important results regarding the unification type of commutative theories hold not only for the usual instantiation preorder, but also for the exactness preorder: w.r.t. elementary unification, commutative theories are of type unary or nullary, and the theory ACUIh of Abelian idempotent monoids with a homomorphism is nullary.
Stephan Böhme and Marcel Lippmann: **Decidable Contextualized DLs with Rigid Roles**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL-2015)*, volume 1350 of *CEUR Workshop Proceedings*, pages 92–95. Athens, Greece, CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

Stephan Böhme and Marcel Lippmann: **Decidable Description Logics of Context with Rigid Roles**. In Carsten Lutz and Silvio Ranise, editors, *Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)*, volume 9322 of *Lecture Notes in Artificial Intelligence*, pages 17–32. Wroclaw, Poland, Springer-Verlag, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

To represent and reason about contextualized knowledge often two-dimensional Description Logics (DLs) are employed, where one DL is used to describe contexts (or possible worlds) and the other DL is used to describe the objects, i.e. the relational structure of the specific contexts. Previous approaches for DLs of context that combined pairs of DLs resulted in undecidability in those cases where so-called rigid roles are admitted, i.e. if parts of the relational structure are the same in all contexts. In this paper, we present a novel combination of pairs of DLs and show that reasoning stays decidable even in the presence of rigid roles. We give complexity results for various combinations of DLs including ALC, SHOQ, and EL.
Daniel Borchmann: **Exploring Faulty Data**. In Jaume Baixeries, Christian Sacarea, and Manuel Ojeda-Aciego, editors, *Proceedings of the 13th International Conference on Formal Concept Analysis (ICFCA'15)*, volume 9113 of *Lecture Notes in Computer Science*, pages 219–235. Springer-Verlag, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Within formal concept analysis, attribute exploration is a powerful tool to semi-automatically check data for completeness with respect to a given domain. However, the classical formulation of attribute exploration does not take into account possible errors which are present in the initial data. To remedy this, we present in this work a generalization of attribute exploration based on the notion of confidence, that will allow for the exploration of implications which are not necessarily valid in the initial data, but instead enjoy a minimal confidence therein.
Stefan Borgwardt, Marco Cerami, and Rafael Peñaloza: **The Complexity of Subsumption in Fuzzy EL**. In Qiang Yang and Michael Wooldridge, editors, *Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15)*, pages 2812–2818. Buenos Aires, Argentina, AAAI Press, 2015.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

Fuzzy Description Logics (DLs) are used to represent and reason about vague and imprecise knowledge that is inherent to many application domains. It was recently shown that the complexity of reasoning in finitely-valued fuzzy DLs is often not higher than that of the underlying classical DL. We show that this does not hold for fuzzy extensions of the light-weight DL EL, which is used in many biomedical ontologies, under the Lukasiewicz semantics. The complexity of reasoning increases from PTime to ExpTime, even if only one additional truth value is introduced. The same lower bound holds also for infinitely-valued Lukasiewicz extensions of EL.
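For context on the Łukasiewicz semantics referenced in this abstract, the underlying t-norm and its residuum are standard operations; here is a minimal Python illustration (the example degrees are invented, and this is not the paper's construction):

```python
# Łukasiewicz semantics for fuzzy DLs: a generic illustration of the t-norm
# (interpreting conjunction) and its residuum (interpreting implication).

def luk_conj(x, y):
    """Łukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def luk_impl(x, y):
    """Residuum of the Łukasiewicz t-norm: min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)

# Made-up degrees to which an individual belongs to two concepts:
a, b = 0.7, 0.4
print(luk_conj(a, b))  # roughly 0.1, up to floating-point rounding
print(luk_impl(a, b))
```

Note that, unlike the Gödel t-norm (minimum), the Łukasiewicz conjunction can drop to 0 even for two strictly positive degrees, which is one source of the different computational behaviour the paper analyzes.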
Stefan Borgwardt, Felix Distel, and Rafael Peñaloza: **The Limits of Decidability in Fuzzy Description Logics with General Concept Inclusions**. *Artificial Intelligence*, 218:23–55, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy Description Logics (DLs) can be used to represent and reason with vague knowledge. This family of logical formalisms is very diverse, each member being characterized by a specific choice of constructors, axioms, and triangular norms, which are used to specify the semantics. Unfortunately, it has recently been shown that the consistency problem in many fuzzy DLs with general concept inclusion axioms is undecidable. In this paper, we present a proof framework that allows us to extend these results to cover large classes of fuzzy DLs. On the other hand, we also provide matching decidability results for most of the remaining logics. As a result, we obtain a near-universal classification of fuzzy DLs according to the decidability of their consistency problem.
Stefan Borgwardt, Marcel Lippmann, and Veronika Thost: **Temporalizing Rewritable Query Languages over Knowledge Bases**. *Journal of Web Semantics*, 33:50–70, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows to query a database by using the language of an ontology, abstracting from the actual relations of the database. OBDA can sometimes be realized by compiling the information of the ontology into the query and the database. The resulting query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a generic temporal query language that combines linear temporal logic with queries over ontologies. This language is well-suited for expressing temporal properties of dynamic systems and is useful in context-aware applications that need to detect specific situations. We show that, if atemporal queries are rewritable in the sense described above, then the corresponding temporal queries are also rewritable such that we can answer them over a temporal database. We present three approaches to answering the resulting queries.
Stefan Borgwardt, Theofilos Mailis, Rafael Peñaloza, and Anni-Yasmin Turhan: **Conjunctive Query Answering with Finitely Many Truth Degrees**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL'15)*, volume 1350 of *CEUR Workshop Proceedings*, pages 364–367, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Description Logics (DLs) provide a means for representing vague knowledge about an application domain. In this paper, we study fuzzy extensions of conjunctive queries (CQs) over the DL SROIQ based on finite chains of degrees of truth. To answer such queries, we extend a well-known technique that reduces the fuzzy ontology to a classical one, and use classical DL reasoners as a black box. We improve the complexity of previous reduction techniques for finitely valued fuzzy DLs, which allows us to prove tight complexity results for answering certain kinds of fuzzy CQs.
Stefan Borgwardt and Rafael Peñaloza: **Reasoning in Expressive Description Logics under Infinitely Valued Gödel Semantics**. In Carsten Lutz and Silvio Ranise, editors, *Proceedings of the 10th International Symposium on Frontiers of Combining Systems (FroCoS'15)*, volume 9322 of *Lecture Notes in Artificial Intelligence*, pages 49–65. Wroclaw, Poland, Springer-Verlag, 2015.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Fuzzy Description Logics (FDLs) combine classical Description Logics with the semantics of Fuzzy Logics in order to represent and reason with vague knowledge. Most FDLs using truth values from the interval [0,1] have been shown to be undecidable in the presence of a negation constructor and general concept inclusions. One exception are those FDLs whose semantics is based on the infinitely valued Gödel t-norm (G). We extend previous decidability results for the FDL G-ALC to deal with complex role inclusions, nominals, inverse roles, and qualified number restrictions. Our novel approach is based on a combination of the known crispification technique for finitely valued FDLs and an automata-based procedure for reasoning in G-ALC.
Stefan Borgwardt and Rafael Peñaloza: **Reasoning in Infinitely Valued G-IALCQ**. In *Proceedings of the 3rd Workshop on Weighted Logics for AI (WL4AI'15)*, pages 2–8, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Description Logics (FDLs) are logic-based formalisms used to represent and reason with vague or imprecise knowledge. It has been recently shown that reasoning in most FDLs using truth values from the interval [0,1] becomes undecidable in the presence of a negation constructor and general concept inclusion axioms. One exception to this negative result are FDLs whose semantics is based on the infinitely valued Gödel t-norm (G). In this paper, we extend previous decidability results for G-IALC to deal also with qualified number restrictions. Our novel approach is based on a combination of the known crispification technique for finitely valued FDLs and the automata-based procedure originally developed for reasoning in G-IALC. The proposed approach combines the advantages of these two methods, while removing their respective drawbacks.
Stefan Borgwardt and Veronika Thost: **Temporal Query Answering in DL-Lite with Negation**. In Georg Gottlob, Geoff Sutcliffe, and Andrei Voronkov, editors, *Proceedings of the 1st Global Conference on Artificial Intelligence (GCAI'15)*, volume 36 of *EasyChair Proceedings in Computing*, pages 51–65. EasyChair, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontology-based query answering augments classical query answering in databases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We investigate temporal query answering w.r.t. ontologies formulated in DLLite, a family of description logics that captures the conceptual features of relational databases and was tailored for efficient query answering. We consider a recently proposed temporal query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL). In particular, we consider negation in the ontology and query language, and study both data and combined complexity of query entailment.
Stefan Borgwardt and Veronika Thost: **Temporal Query Answering in the Description Logic EL**. In Qiang Yang and Michael Wooldridge, editors, *Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15)*, pages 2819–2825. Buenos Aires, Argentina, AAAI Press, 2015.

BibTeX entry Paper (PDF) ©IJCAI

#### Abstract:

Context-aware systems use data collected at runtime to recognize certain predefined situations and trigger adaptations. This can be implemented using ontology-based data access (OBDA), which augments classical query answering in databases by adopting the open-world assumption and including domain knowledge provided by an ontology. We investigate temporalized OBDA w.r.t. ontologies formulated in EL, a description logic that allows for efficient reasoning and is successfully used in practice. We consider a recently proposed temporalized query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL), and study both data and combined complexity of query entailment in this setting. We also analyze the satisfiability problem in the similar formalism EL-LTL.
Stefan Borgwardt and Veronika Thost: **Temporal Query Answering in the Description Logic EL (extended abstract)**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of *CEUR Workshop Proceedings*, pages 83–87, 2015.

BibTeX entry Paper (PDF)

#### Abstract:

Context-aware systems use data collected at runtime to recognize certain predefined situations and trigger adaptations. This can be implemented using ontology-based data access (OBDA), which augments classical query answering in databases by adopting the open-world assumption and including domain knowledge provided by an ontology. We investigate temporalized OBDA w.r.t. ontologies formulated in EL, a description logic that allows for efficient reasoning and is successfully used in practice. We consider a recently proposed temporalized query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL), and study both data and combined complexity of query entailment in this setting. We also analyze the satisfiability problem in the similar formalism EL-LTL.
Ismail Ilkan Ceylan: **Query Answering in Bayesian Description Logics**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of *CEUR Workshop Proceedings*. CEUR-WS, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

The Bayesian Description Logic (BDL) BEL is a probabilistic DL, which extends the lightweight DL EL by defining a joint probability distribution over EL axioms with the help of a Bayesian network (BN). In recent work, extensions of standard logical reasoning tasks in BEL have been shown to be reducible to inferences in BNs. This work concentrates on a more general reasoning task, namely conjunctive query answering in BEL, where every query is associated with a probability, leading to different reasoning problems. In particular, we study probabilistic query entailment, top-k answers, and top-k contexts as reasoning problems. Our complexity analysis suggests that all of these problems are tractable under certain assumptions.
Ismail Ilkan Ceylan, Julian Mendez, and Rafael Peñaloza: **The Bayesian Ontology Reasoner is BORN!**. In Michel Dumontier, Birte Glimm, Rafael Gonçalves, Matthew Horridge, Ernesto Jiménez-Ruiz, Nicolas Matentzoglu, Bijan Parsia, Giorgos Stamou, and Giorgos Stoilos, editors, *Proceedings of the 4th International Workshop on OWL Reasoner Evaluation (ORE 2015)*, pages 8–14. CEUR Workshop Proceedings, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Bayesian ontology languages are a family of probabilistic ontology languages that allow encoding probabilistic information over the axioms of an ontology with the help of a Bayesian network. The Bayesian ontology language BEL is an extension of the lightweight Description Logic (DL) EL within the above-mentioned framework. We present the system BORN that implements the probabilistic subsumption problem for BEL.
Ismail Ilkan Ceylan, Thomas Lukasiewicz, and Rafael Peñaloza: **Answering EL Queries in the Presence of Preferences**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of *CEUR Workshop Proceedings*. CEUR-WS, 2015.

BibTeX entry
Paper (PDF)

Ismail Ilkan Ceylan and Rafael Peñaloza: **Probabilistic Query Answering in the Bayesian Description Logic BEL**. In Christoph Beierle and Alex Dekhtyar, editors, *Proceedings of 9th International Conference on Scalable Uncertainty Management (SUM 2015)*, volume 9310 of *LNAI*, pages 1–15. Springer, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

BEL is a probabilistic description logic (DL) that extends the light-weight DL EL with a joint probability distribution over the axioms, expressed with the help of a Bayesian network (BN). In recent work it has been shown that the complexity of standard logical reasoning in BEL is the same as performing probabilistic inferences over the BN. In this paper we consider conjunctive query answering in BEL. We study the complexity of the three main problems associated to this setting: computing the probability of a query entailment, computing the most probable answers to a query, and computing the most probable context in which a query is entailed. In particular, we show that all these problems are tractable w.r.t. data and ontology complexity.
Ismail Ilkan Ceylan and Rafael Peñaloza: **Dynamic Bayesian Description Logics**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of *CEUR Workshop Proceedings*. CEUR-WS, 2015.

BibTeX entry
Paper (PDF)

Jieying Chen, Michel Ludwig, Yue Ma, and Dirk Walther: **Towards Extracting Ontology Excerpts**. In Songmao Zhang, Martin Wirsing, and Zili Zhang, editors, *Proceedings of the 8th International Conference on Knowledge Science, Engineering and Management (KSEM 2015)*, volume 9403 of *Lecture Notes in Computer Science*, pages 78–89. Springer, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

In the presence of an ever growing amount of information, organizations and human users need to be able to focus on certain key pieces of information and to intentionally ignore all other possibly relevant parts. Knowledge about complex systems that is represented in ontologies yields collections of axioms that are too large for human users to browse, let alone to comprehend or reason about it. We introduce the notion of an ontology excerpt as being a fixed-size subset of an ontology, consisting of the most relevant axioms for a given set of terms. These axioms preserve as much as possible the knowledge about the considered terms described in the ontology. We consider different extraction techniques for ontology excerpts based on methods from the area of information retrieval. To evaluate these techniques, we propose to measure the degree of incompleteness of the resulting excerpts using the notion of logical difference.
Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Similarity-based Relaxed Instance Queries**. *Journal of Applied Logic*, 13(4, Part 1):480–508, 2015. Special Issue for the Workshop on Weighted Logics for AI 2013

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

In Description Logics (DL) knowledge bases (KBs), information is typically captured by clear-cut concepts. For many practical applications querying the KB by crisp concepts is too restrictive; a user might be willing to lose some precision in the query, in exchange for a larger selection of answers. Similarity measures can offer a controlled way of gradually relaxing a query concept within a user-specified limit. In this paper we formalize the task of instance query answering for DL KBs using concepts relaxed by concept similarity measures (CSMs). We investigate computation algorithms for this task in the DL EL, their complexity, and the properties of the CSMs employed, depending on whether unfoldable or general TBoxes are used. For the case of general TBoxes we define a family of CSMs that take the full TBox information into account when assessing the similarity of concepts.
Andreas Ecke, Maximilian Pensel, and Anni-Yasmin Turhan: **ELASTIQ: Answering Similarity-threshold Instance Queries in EL**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL-2015)*, volume 1350 of *CEUR Workshop Proceedings*. CEUR-WS.org, June 2015.

BibTeX entry Paper (PDF)

#### Abstract:

Recently, an approach has been devised for employing concept similarity measures (CSMs) to relax instance queries over EL ontologies in a controlled way. The approach relies on similarity measures between pointed interpretations to yield CSMs with certain properties. In this paper we report on ELASTIQ, a first implementation of this approach, and propose initial optimizations for this novel inference. We also provide a first evaluation of ELASTIQ on the Gene Ontology.
Shasha Feng, Michel Ludwig, and Dirk Walther: **Deciding Subsumers of Least Fixpoint Concepts w.r.t. general EL-TBoxes**. In Steffen Hölldobler, Markus Krötzsch, Rafael Peñaloza, and Sebastian Rudolph, editors, *Proceedings of the 38th Annual German Conference on AI (KI 2015)*, volume 9324 of *Lecture Notes in Computer Science*, pages 59–71, 2015.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

In this paper we provide a procedure for deciding subsumptions of the form T ⊨ C ⊑ E, where C is an ELμ-concept, E an EL-concept, and T a general EL-TBox. Deciding such subsumptions can be used for computing the logical difference between general EL-TBoxes. Our procedure is based on checking for the existence of a certain simulation between hypergraph representations of the sets of subsumees of C and of E, respectively. With the aim of keeping the procedure implementable, we provide a detailed construction of such hypergraphs, deliberately avoiding the use of intricate automata-theoretic techniques.

Shasha Feng, Michel Ludwig, and Dirk Walther: **Foundations for the Logical Difference of EL-TBoxes**. In Georg Gottlob, Geoff Sutcliffe, and Andrei Voronkov, editors, *GCAI 2015. Global Conference on Artificial Intelligence*, volume 36 of *EPiC Series in Computing*, pages 93–112. EasyChair, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate the logical difference problem between general EL-TBoxes. The logical difference is the set of concept subsumptions that are logically entailed by a first TBox but not by a second one. We show how the logical difference between two EL-TBoxes can be reduced to fixpoint reasoning w.r.t. EL-TBoxes. Entailments of the first TBox can be represented by subsumptions of least fixpoint concepts by greatest fixpoint concepts, which can then be checked w.r.t. the second TBox. We present the foundations for a dedicated procedure based on a hypergraph representation of the fixpoint concepts without the use of automata-theoretic techniques, avoiding possible complexity issues of a reduction to modal mu-calculus reasoning. The subsumption checks are based on checking for the existence of simulations between the hypergraph representations of the fixpoint concepts and the TBoxes.
Shasha Feng, Michel Ludwig, and Dirk Walther: **The Logical Difference for EL: from Terminologies towards TBoxes**. In *Proceedings of the 1st International Workshop on Semantic Technologies (IWOST)*, *CEUR Workshop Proceedings*, 2015. To appear

BibTeX entry

#### Abstract:

In this paper we are concerned with the logical difference problem between ontologies. The logical difference is the set of subsumption queries that follow from a first ontology but not from a second one. We revisit our solution to the logical difference problem for EL-terminologies, which is based on finding simulations between hypergraph representations of the terminologies, and we investigate a possible extension of the method to general EL-TBoxes.
Francesco Kriegel: **Axiomatization of General Concept Inclusions in Probabilistic Description Logics**. In Steffen Hölldobler, Sebastian Rudolph, and Markus Krötzsch, editors, *Proceedings of the 38th German Conference on Artificial Intelligence (KI 2015)*, volume 9324 of *Lecture Notes in Artificial Intelligence*, pages 124–136. Dresden, Germany, Springer Verlag, 2015.

BibTeX entry
Paper (PDF)
DOI
©Springer-Verlag

#### Abstract:

Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced, and then utilized to reduce the construction of a base of general concept inclusions of a probabilistic interpretation to the crisp case for which a method for the axiomatization of a base of GCIs is well-known.
Francesco Kriegel: **Extracting ALEQR(Self)-Knowledge Bases from Graphs**. In Sergei O. Kuznetsov, Rokia Missaoui, and Sergei A. Obiedkov, editors, *Proceedings of the International Workshop on Social Network Analysis using Formal Concept Analysis (SNAFCA 2015)*, volume 1534 of *CEUR Workshop Proceedings*. Nerja, Spain, CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

A description graph is a directed graph that has labeled vertices and edges. This document proposes a method for extracting a knowledge base from a description graph. The technique is presented for the description logic ALEQR(Self), which allows for conjunctions, primitive negation, existential restrictions, value restrictions, qualified number restrictions, existential self restrictions, and complex role inclusion axioms; sublogics may also be chosen to express the axioms in the knowledge base. The extracted knowledge base entails all statements that can be expressed in the chosen description logic and are encoded in the input graph.
Francesco Kriegel: **Incremental Learning of TBoxes from Interpretation Sequences with Methods of Formal Concept Analysis**. In Diego Calvanese and Boris Konev, editors, *Proceedings of the 28th International Workshop on Description Logics (DL 2015)*, volume 1350 of *CEUR Workshop Proceedings*, pages 452–464. Athens, Greece, CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Formal Concept Analysis and its methods for computing minimal implicational bases have been successfully applied to axiomatise minimal EL-TBoxes from models, so-called bases of GCIs. However, no technique for an adjustment of an existing EL-TBox w.r.t. a new model is available, i.e., on a model change the complete TBox has to be recomputed. This document proposes a method for the computation of a minimal extension of a TBox w.r.t. a new model. The method is then utilised to formulate an incremental learning algorithm that takes as input a stream of interpretations, together with an expert to guide the learning process.
Francesco Kriegel: **Probabilistic Implicational Bases in FCA and Probabilistic Bases of GCIs in EL**. In Sadok Ben Yahia and Jan Konecny, editors, *Proceedings of the 12th International Conference on Concept Lattices and their Applications (CLA 2015)*, volume 1466 of *CEUR Workshop Proceedings*, pages 193–204. Clermont-Ferrand, France, CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

A probabilistic formal context is a triadic context whose third dimension is a set of worlds equipped with a probability measure. After a formal definition of this notion, this document introduces the probability of implications, and provides a construction for a base of implications whose probabilities satisfy a given lower threshold. A comparison between the confidence and the probability of implications is drawn, which yields the fact that the two measures neither coincide nor can be compared. Furthermore, the results are extended towards the lightweight description logic EL with probabilistic interpretations, and a method is proposed for computing a base of general concept inclusions whose probabilities fulfil a certain lower bound.
Francesco Kriegel and Daniel Borchmann: **NextClosures: Parallel Computation of the Canonical Base**. In Sadok Ben Yahia and Jan Konecny, editors, *Proceedings of the 12th International Conference on Concept Lattices and their Applications (CLA 2015)*, volume 1466 of *CEUR Workshop Proceedings*, pages 182–192. Clermont-Ferrand, France, CEUR-WS.org, 2015. Best Paper Award.

BibTeX entry
Paper (PDF)

#### Abstract:

The canonical base of a formal context plays a distinguished role in formal concept analysis. This is because it is the only minimal base so far that can be described explicitly. For the computation of this base several algorithms have been proposed. However, all those algorithms work sequentially, by computing only one pseudo-intent at a time - a fact which heavily impairs the practicability of using the canonical base in real-world applications. In this paper we shall introduce an approach that remedies this deficit by allowing the canonical base to be computed in a parallel manner. First experimental evaluations show that for sufficiently large data-sets the speedup is proportional to the number of available CPUs.
Theofilos Malis, Anni-Yasmin Turhan, and Erik Zenker: **A Pragmatic Approach to Answering CQs over Fuzzy DL-Lite-ontologies - introducing FLite**. In *Proceedings of the 28th International Workshop on Description Logics (DL-2015)*, June 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Description Logics (FDLs) generalize crisp ones by providing membership degree semantics. To offer efficient query answering for FDLs it is desirable to extend the rewriting-based approach for DL-Lite to its fuzzy variants. For answering conjunctive queries over fuzzy DL-LiteR ontologies we present an approach that employs the crisp rewriting as a black-box procedure and treats the degrees in a second rewriting step. This pragmatic approach yields a sound procedure for the Gödel-based fuzzy semantics, which we have implemented in the FLite reasoner that employs the Ontop system. A first evaluation of FLite suggests that one pays only a linear overhead for fuzzy queries.
Alina Petrova, Yue Ma, George Tsatsaronis, Maria Kissa, Felix Distel, Franz Baader, and Michael Schroeder: **Formalizing Biomedical Concepts from Textual Definitions**. *Journal of Biomedical Semantics*, 6(22), 2015.

BibTeX entry
DOI

#### Abstract:

Background: Ontologies play a major role in life sciences, enabling a number of applications, from new data integration to knowledge verification. SNOMED CT is a large medical ontology that is formally defined so that it ensures global consistency and supports complex reasoning tasks. Most biomedical ontologies and taxonomies, on the other hand, define concepts only textually, without the use of logic. Here, we investigate how to automatically generate formal concept definitions from textual ones. We develop a method that uses machine learning in combination with several types of lexical and semantic features and outputs formal definitions that follow the structure of SNOMED CT concept definitions. Results: We evaluate our method on three benchmarks and test both the underlying relation extraction component as well as the overall quality of the output concept definitions. In addition, we provide an analysis of the following aspects: (1) How do definitions mined from the Web and literature differ from those mined from manually created definitions, e.g., MeSH? (2) How do different feature representations, e.g., the restrictions of relations' domain and range, impact the generated definition quality? (3) How do different machine learning algorithms compare to each other for the task of formal definition generation? And (4) what is the influence of the learning data size on the task? We discuss all of these settings in detail and show that the suggested approach can achieve success rates of over 90%. In addition, the results show that the choice of corpora, lexical features, learning algorithm, and data size does not impact the performance as strongly as semantic types do. Semantic types limit the domain and range of a predicted relation, and as long as relations' domain and range pairs do not overlap, this information is most valuable in formalizing textual definitions.
Conclusions: The analysis presented in this manuscript implies that automated methods can provide a valuable contribution to the formalization of biomedical knowledge, thus paving the way for future applications that go beyond retrieval and into complex reasoning. The method is implemented and accessible to the public from: https://github.com/alifahsyamsiyah/learningDL .
Veronika Thost, Jan Holste, and Özgür Özçep: **On Implementing Temporal Query Answering in DL-Lite (extended abstract)**. In *Proceedings of the 28th International Workshop on Description Logics (DL-2015)*. Athens, Greece, CEUR Workshop Proceedings, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontology-based data access augments classical query answering over fact bases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We implemented temporal query answering w.r.t. ontologies formulated in the Description Logic DL-Lite. Focusing on temporal conjunctive queries (TCQs), which combine conjunctive queries via the operators of propositional linear temporal logic, we regard three approaches for answering them: an iterative algorithm that considers all data available; a window-based algorithm; and a rewriting approach, which translates the TCQs to be answered into SQL queries. Since the relevant ontological knowledge is already encoded into the latter queries, they can be answered by a standard database system. Our evaluation especially shows that implementations of both the iterative and the window-based algorithm answer TCQs within a few milliseconds, and that the former achieves a constant performance, even if data is growing over time.
Anni-Yasmin Turhan and Erik Zenker: **Towards Temporal Fuzzy Query Answering on Stream-based Data**. In Daniela Nicklas and Özgür Lütfü Özçep, editors, *Proceedings of the 1st Workshop on High-Level Declarative Stream Processing (HiDest'15)*, volume 1447 of *CEUR Workshop Proceedings*, pages 56–69. CEUR-WS.org, 2015.

BibTeX entry
Paper (PDF)

#### Abstract:

For reasoning over streams of data, ontology-based data access is a common approach. The method for answering conjunctive queries (CQs) over DL-Lite ontologies in this setting is to rewrite the query and evaluate the resulting query with a database engine. For stream-based applications, the classical expressivity of DL-Lite lacks means to handle fuzzy and temporal information. In this paper we report on a combination of a recently proposed pragmatic approach for answering CQs over fuzzy DL-Lite ontologies with answering of CQs over sequences of ABoxes, resulting in a system that supplies rewritings for query answering over temporal fuzzy DL-Lite ontologies.
Benjamin Zarrieß and Jens Claßen: **Verification of Knowledge-Based Programs over Description Logic Actions**. In Qiang Yang and Michael Wooldridge, editors, *Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI'15)*, pages 3278–3284. AAAI Press, 2015.

BibTeX entry

#### Abstract:

A knowledge-based program defines the behavior of an agent by combining primitive actions, programming constructs and test conditions that make explicit reference to the agent's knowledge. In this paper we consider a setting where an agent is equipped with a Description Logic (DL) knowledge base providing general domain knowledge and an incomplete description of the initial situation. We introduce a corresponding new DL-based action language that allows for representing both physical and sensing actions, that we then use to build knowledge-based programs with test conditions expressed in an epistemic DL. After proving undecidability for the general case, we then discuss a restricted fragment where verification becomes decidable. The provided proof is constructive and comes with an upper bound on the procedure's complexity.

## 2014

Josefine Asmus, Daniel Borchmann, Ivo F. Sbalzarini, and Dirk Walther: **Towards an FCA-based Recommender System for Black-Box Optimization**. In Sergei O. Kuznetsov, Amedeo Napoli, and Sebastian Rudolph, editors, *Proceedings of the 3rd International Workshop on "What can FCA do for Artificial Intelligence?" (FCA4AI'14)*, volume 1257 of *CEUR Workshop Proceedings*, pages 35–42, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Black-box optimization problems are of practical importance throughout science and engineering. Hundreds of algorithms and heuristics have been developed to solve them. However, none of them outperforms any other on all problems. The success of a particular heuristic is always relative to a class of problems. So far, these problem classes are elusive and it is not known what algorithm to use on a given problem. Here we describe the use of Formal Concept Analysis (FCA) to extract implications about problem classes and algorithm performance from databases of empirical benchmarks. We explain the idea in a small example and show that FCA produces meaningful implications. We further outline the use of attribute exploration to identify problem features that predict algorithm performance.
Franz Baader: **Ontology-Based Monitoring of Dynamic Systems**. In Chitta Baral, Giuseppe De Giacomo, and Thomas Eiter, editors, *Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)*, pages 678–681. Vienna, Austria, AAAI Press, 2014. Invited contribution.

BibTeX entry
Paper (PDF)

#### Abstract:

Our understanding of the notion "dynamic system" is a rather broad one: such a system has states, which can change over time. Ontologies are used to describe the states of the system, possibly in an incomplete way. Monitoring is then concerned with deciding whether some run of the system or all of its runs satisfy a certain property, which can be expressed by a formula of an appropriate temporal logic. We consider different instances of this broad framework, which can roughly be classified into two cases. In one instance, the system is assumed to be a black box, whose inner working is not known, but whose states can be (partially) observed during a run of the system. In the second instance, one has (partial) knowledge about the inner working of the system, which provides information on which runs of the system are possible. In this paper, we will review some of our recent work that can be seen as instances of this general framework of ontology-based monitoring of dynamic systems. We will also mention possible extensions towards probabilistic reasoning and the integration of mathematical modeling of dynamical systems.
Franz Baader and Marcel Lippmann: **Runtime Verification Using the Temporal Description Logic ALC-LTL Revisited**. *Journal of Applied Logic*, 12(4):584–613, 2014.

BibTeX entry DOI

#### Abstract:

Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that also in this setting monitors can effectively be constructed. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL-formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
Franz Baader and Barbara Morawska: **Matching with respect to general concept inclusions in the Description Logic EL**. In Temur Kutsia and Christophe Ringeissen, editors, *Proceedings of the 28th International Workshop on Unification (UNIF'14)*, *RISC-Linz Report Series No. 14-06*, pages 22–26, 2014.

BibTeX entry Paper (PDF)

#### Abstract:

Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we were able to solve the general case: we can show that matching in EL w.r.t. general TBoxes is NP-complete. We also determine some tractable variants of the matching problem.
Franz Baader and Barbara Morawska: **Matching with respect to general concept inclusions in the Description Logic EL**. In Carsten Lutz and Michael Thielscher, editors, *Proceedings of the 37th German Conference on Artificial Intelligence (KI'14)*, volume 8736 of *Lecture Notes in Artificial Intelligence*, pages 135–146. Springer-Verlag, 2014.

BibTeX entry Paper (PDF)

#### Abstract:

Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that matching without a TBox is NP-complete. In this paper we show that matching in EL w.r.t. general TBoxes (i.e., finite sets of general concept inclusions, GCIs) is in NP by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem w.r.t. general TBoxes.
Franz Baader and Barbara Morawska: **Matching with respect to general concept inclusions in the Description Logic EL**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 33–44, 2014.

BibTeX entry Paper (PDF)

#### Abstract:

Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that the matching problem is NP-complete. It then took almost 10 years before this NP-completeness result could be extended from matching to unification in EL. The next big challenge was then to further extend these results from matching and unification without a TBox to matching and unification w.r.t. a general TBox, i.e., a finite set of general concept inclusions. For unification, we could show some partial results for general TBoxes that satisfy a certain restriction on cyclic dependencies between concepts, but the general case is still open. For matching, we solve the general case in this paper: we show that matching in EL w.r.t. general TBoxes is NP-complete by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem.
Daniel Borchmann, Rafael Peñaloza, and Wenqian Wang: **Classifying Software Bug Reports Using Methods from Formal Concept Analysis**. *Studia Universitatis Babeş-Bolyai Informatica*, 59:10–27, 2014. Supplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA'14)

BibTeX entry
Paper (PDF)

#### Abstract:

We provide experience in applying methods from formal concept analysis to the problem of classifying software bug reports characterized by distinguished features. More specifically, we investigate the situation where we are given a set of already processed bug reports together with the components of the program that contained the corresponding error. The task is the following: given a new bug report with specific features, provide a list of components of the program based on the bug reports already processed that are likely to contain the error. To this end, we investigate several approaches that employ the idea of implications between features and program components. We describe these approaches in detail, and apply them to real-world data for evaluation. The best of our approaches is capable of identifying in just a fraction of a second the component causing a bug with an accuracy of over 70 percent.
Stefan Borgwardt: **Fuzzy DLs over Finite Lattices with Nominals**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 58–70, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

The complexity of reasoning in fuzzy description logics (DLs) over a finite lattice L usually does not exceed that of the underlying classical DLs. This has recently been shown for the logics between L-IALC and L-ISCHI using a combination of automata- and tableau-based techniques. In this paper, this approach is modified to deal with nominals and constants in L-ISCHOI. Reasoning w.r.t. general TBoxes is ExpTime-complete, and PSpace-completeness is shown under the restriction to acyclic terminologies in two sublogics. The latter implies two previously unknown complexity results for the classical DLs ALCHO and SO.
Stefan Borgwardt, Marco Cerami, and Rafael Peñaloza: **Many-Valued Horn Logic is Hard**. In Thomas Lukasiewicz, Rafael Peñaloza, and Anni-Yasmin Turhan, editors, *Proceedings of the 1st International Workshop on Logics for Reasoning about Preferences, Uncertainty, and Vagueness (PRUV'14)*, volume 1205 of *CEUR Workshop Proceedings*, pages 52–58, 2014.

BibTeX entry
Paper (PDF)

Stefan Borgwardt, Felix Distel, and Rafael Peñaloza: **Decidable Gödel description logics without the finitely-valued model property**. In Chitta Baral, Giuseppe De Giacomo, and Thomas Eiter, editors, *Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)*, pages 228–237. AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In the last few years, there has been a large effort for analyzing the computational properties of reasoning in fuzzy description logics. This has led to a number of papers studying the complexity of these logics, depending on the chosen semantics. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics w.r.t. witnessed models over the Gödel t-norm. We show that in the logic G-IALC, reasoning cannot be restricted to finitely-valued models in general. Despite this negative result, we also show that all the standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC.
Stefan Borgwardt, Felix Distel, and Rafael Peñaloza: **Gödel Description Logics with General Models**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 391–403, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In the last few years, the complexity of reasoning in fuzzy description logics has been studied in depth. Surprisingly, despite being arguably the simplest form of fuzzy semantics, not much is known about the complexity of reasoning in fuzzy description logics using the Gödel t-norm. It was recently shown that in the logic G-IALC under witnessed model semantics, all standard reasoning problems can be solved in exponential time, matching the complexity of reasoning in classical ALC. We show that this also holds under general model semantics.
Stefan Borgwardt, José A. Leyva Galano, and Rafael Peñaloza: **Gödel FL_{0} with Greatest Fixed-Point Semantics**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 71–82, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the fuzzy extension of FL_{0} with semantics based on the Gödel t-norm. We show that gfp-subsumption w.r.t. a finite set of primitive definitions can be characterized by a relation on weighted automata, and use this result to provide tight complexity bounds for reasoning in this logic.

Stefan Borgwardt, José A. Leyva Galano, and Rafael Peñaloza: **The Fuzzy Description Logic G-FL_{0} with Greatest Fixed-Point Semantics**. In Eduardo Fermé and João Leite, editors, *Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA'14)*, volume 8761 of *Lecture Notes in Artificial Intelligence*, pages 62–76. Funchal, Portugal, Springer-Verlag, 2014.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We study the fuzzy extension of the Description Logic FL_{0} with semantics based on the Gödel t-norm. We show that subsumption w.r.t. a finite set of primitive definitions, using greatest fixed-point semantics, can be characterized by a relation on weighted automata. We use this result to provide tight complexity bounds for reasoning in this logic, showing that it is PSpace-complete. If the definitions do not contain cycles, subsumption becomes co-NP-complete.

Stefan Borgwardt and Rafael Peñaloza: **Consistency Reasoning in Lattice-Based Fuzzy Description Logics**. *International Journal of Approximate Reasoning*, 55(9):1917–1938, 2014.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Fuzzy Description Logics have been widely studied as a formalism for representing and reasoning with vague knowledge. One of the most basic reasoning tasks in (fuzzy) Description Logics is to decide whether an ontology representing a knowledge domain is consistent. Surprisingly, not much is known about the complexity of this problem for semantics based on complete De Morgan lattices. To cover this gap, in this paper we study the consistency problem for the fuzzy Description Logic L-SHI and its sublogics in detail. The contribution of the paper is twofold. On the one hand, we provide a tableaux-based algorithm for deciding consistency when the underlying lattice is finite. The algorithm generalizes the one developed for classical SHI. On the other hand, we identify decidable and undecidable classes of fuzzy Description Logics over infinite lattices. For all the decidable classes, we also provide tight complexity bounds.
Stefan Borgwardt and Rafael Peñaloza: **Finite Lattices Do Not Make Reasoning in ALCOI Harder**. In F. Bobillo, R.N. Carvalho, P.C.G. da Costa, C. d'Amato, N. Fanizzi, K.B. Laskey, K.J. Laskey, Th. Lukasiewicz, M. Nickles, and M. Pool, editors, *Uncertainty Reasoning for the Semantic Web III*, volume 8816 of *LNCS*, pages 122–141. Springer-Verlag, 2014. Revised Selected Papers from the ISWC International Workshops URSW 2011 - 2013

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We consider the fuzzy description logic ALCOI with semantics based on a finite residuated De Morgan lattice. We show that reasoning in this logic is ExpTime-complete w.r.t. general TBoxes. In the sublogics ALCI and ALCO, it is PSpace-complete w.r.t. acyclic TBoxes. This matches the known complexity bounds for reasoning in classical description logics between ALC and ALCOI.
Diego Calvanese, Ismail Ilkan Ceylan, Marco Montali, and Ario Santoso: **Verification of Context-Sensitive Knowledge and Action Bases**. In *Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA 2014)*, volume 8761 of *LNCS*, pages 514–528. Springer Verlag, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Knowledge and Action Bases (KABs) have been recently proposed as a formal framework to capture the dynamics of systems which manipulate Description Logic (DL) Knowledge Bases (KBs) through action execution. In this work, we enrich the KAB setting with contextual information, making use of different context dimensions. On the one hand, context is determined by the environment using context-changing actions that make use of the current state of the KB and the current context. On the other hand, it affects the set of TBox assertions that are relevant at each time point, and that have to be considered when processing queries posed over the KAB. Here we extend to our enriched setting the results on verification of rich temporal properties expressed in μ-calculus, which had been established for standard KABs. Specifically, we show that under a run-boundedness condition, verification stays decidable and does not incur any additional cost in terms of worst-case complexity. We also show how to adapt syntactic conditions ensuring run-boundedness so as to account for contextual information, taking into account context-dependent activation of TBox assertions.
Diego Calvanese, Ismail Ilkan Ceylan, Marco Montali, and Ario Santoso: **Adding Context to Knowledge and Action Bases**. In *Proceedings of Acquisition, Representation and Reasoning About Context with Logic (ARCOE 2014)*. CoRR Technical Report, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Knowledge and Action Bases (KABs) have been recently proposed as a formal framework to capture the dynamics of systems which manipulate Description Logic (DL) Knowledge Bases (KBs) through action execution. In this work, we enrich the KAB setting with contextual information, making use of different context dimensions. On the one hand, context is determined by the environment using context-changing actions that make use of the current state of the KB and the current context. On the other hand, it affects the set of TBox assertions that are relevant at each time point, and that have to be considered when processing queries posed over the KAB. Here we extend to our enriched setting the results on verification of rich temporal properties expressed in mu-calculus, which had been established for standard KABs. Specifically, we show that under a run-boundedness condition, verification stays decidable.
Ismail Ilkan Ceylan and Rafael Peñaloza: **Reasoning in the Description Logic BEL using Bayesian Networks**. In *Proceedings of the 4th International Workshop on Statistical Relational AI (StarAI 2014)*, volume WS-14-13 of *AAAI Workshops*. AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the problem of reasoning in the probabilistic Description Logic BEL. Using a novel structure, we show that probabilistic reasoning in this logic can be reduced in polynomial time to standard inferences over a Bayesian network. This reduction provides tight complexity bounds for probabilistic reasoning in BEL.
Ismail Ilkan Ceylan and Rafael Peñaloza: **The Bayesian Description Logic BEL**. In *Proceedings of the 7th International Joint Conference on Automated Reasoning (IJCAR 2014)*, volume 8562 of *LNCS*, pages 480–494. Springer, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce the probabilistic Description Logic BEL. In BEL, axioms are required to hold only in an associated context. The probabilistic component of the logic is given by a Bayesian network that describes the joint probability distribution of the contexts. We study the main reasoning problems in this logic; in particular, we (i) prove that deciding positive and almost-sure entailments is not harder for BEL than for the BN, and (ii) show how to compute the probability, and the most likely context for a consequence.
Ismail Ilkan Ceylan and Rafael Peñaloza: **Tight Complexity Bounds for Reasoning in the Description Logic BEL**. In *Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA 2014)*, volume 8761 of *LNCS*, pages 77–91. Springer, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Recently, Bayesian extensions of Description Logics, and in particular the logic BEL, were introduced as a means of representing certain knowledge that depends on an uncertain context. In this paper we introduce a novel structure, called proof structure, that encodes the contextual information required to deduce subsumption relations from a BEL knowledge base. Using this structure, we show that probabilistic reasoning in BEL can be reduced in polynomial time to standard Bayesian network inferences, thus obtaining tight complexity bounds for reasoning in BEL.
Ismail Ilkan Ceylan and Rafael Peñaloza: **Bayesian Description Logics**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*. CEUR-WS, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We present Bayesian Description Logics (BDLs): an extension of Description Logics (DLs) with contextual probabilities encoded in a Bayesian network (BN). Classical DL reasoning tasks are extended to consider also the contextual and probabilistic information in BDLs. A complexity analysis of these problems shows that, for propositionally closed DLs, this extension comes without cost, while for tractable DLs the complexity is affected by the cost of reasoning in the BN.
Jieying Chen, Michel Ludwig, Yue Ma, and Dirk Walther: **Evaluation of Extraction Techniques for Ontology Excerpts**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 471–482, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce the notion of an ontology excerpt as being a fixed-size subset of an ontology that preserves as much knowledge as possible about the terms in a given vocabulary as described in the ontology. We consider different extraction techniques for ontology excerpts based on methods from Information Retrieval. To evaluate these techniques, we measure the degree of incompleteness of the resulting excerpts using the notion of logical difference. We provide an experimental evaluation of the extraction techniques by applying them on the biomedical ontology SNOMED CT.
Jens Claßen, Martin Liebenberg, Gerhard Lakemeyer, and Benjamin Zarrieß: **Exploring the Boundaries of Decidable Verification of Non-Terminating Golog Programs**. In *Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014)*, pages 1012–1019. Quebec City, Quebec, Canada, AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

The action programming language Golog has been found useful for the control of autonomous agents such as mobile robots. In scenarios like these, tasks are often open-ended so that the respective control programs are non-terminating. Before deploying such programs on a robot, it is often desirable to verify that they meet certain requirements. For this purpose, Claßen and Lakemeyer recently introduced algorithms for the verification of temporal properties of Golog programs. However, given the expressiveness of Golog, their verification procedures are not guaranteed to terminate. In this paper, we show how decidability can be obtained by suitably restricting the underlying base logic, the effect axioms for primitive actions, and the use of actions within Golog programs. Moreover, we show that dropping any of these restrictions immediately leads to undecidability of the verification problem.
Chiara Del Vescovo and Rafael Peñaloza: **DeaLing with Ontologies using CODs**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 157–168, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

A major challenge in knowledge representation is to manage the access to knowledge: users should not be presented with knowledge that is irrelevant to their topic of interest, or have no right to access. Two general strategies exist for providing access restrictions: (1) the ontology engineers describe the conditions that allow access to specific fragments of the ontology, or (2) fragments are automatically identified through their logical properties. The former is prone to miss logical connections between axioms, while the latter can fail to capture relevant knowledge that has no logical connection with the topic of interest. We define the Context-Oriented Decomposition (COD) of an ontology as a technique that combines the benefits of both approaches: it allows authors to identify relevant fragments, while guaranteeing the strong semantic properties of the logic-based Atomic Decomposition.
Stathis Delivorias, Haralampos Hatzikirou, Rafael Peñaloza, and Dirk Walther: **Detecting Emergent Phenomena in Cellular Automata Using Temporal Description Logics**. In *Proceedings of the 11th International Conference on Cellular Automata for Research and Industry (ACRI 2014)*, *Lecture Notes in Computer Science*. Springer-Verlag, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Cellular automata are discrete mathematical models that have been proven useful as representations of a wide variety of systems exhibiting emergent behavior. Detection of emergent behavior is typically computationally expensive as it relies on computer simulations. We propose to specify cellular automata using a suitable Temporal Description Logic and we show that we can formulate queries about the evolution of a cellular automaton as reasoning tasks in this logic.
Felix Distel, Jamal Atif, and Isabelle Bloch: **Concept Dissimilarity with Triangle Inequality**. In *Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning (KR'14)*. Vienna, Austria, AAAI Press, 2014. Short Paper. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

Several researchers have developed properties that ensure compatibility of a concept similarity or dissimilarity measure with the formal semantics of Description Logics. While these authors have highlighted the relevance of the triangle inequality, none of their proposed dissimilarity measures satisfy it. In this work we present a theoretical framework for dissimilarity measures with this property. Our approach is based on concept relaxations, operators that perform stepwise generalizations on concepts. We prove that from any relaxation we can derive a dissimilarity measure that satisfies a number of properties that are important when comparing concepts.
Felix Distel, Jamal Atif, and Isabelle Bloch: **Concept Dissimilarity Based on Tree Edit Distances and Morphological Dilations**. In Torsten Schaub, editor, *Proceedings of the 21st European Conference on Artificial Intelligence (ECAI'14)*, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

A number of similarity measures for comparing description logic concepts have been proposed. Criteria have been developed to evaluate a measure's fitness for an application. These criteria include on the one hand those that ensure compatibility with the semantics, such as equivalence soundness, and on the other hand the properties of a metric, such as the triangle inequality. In this work we present two classes of dissimilarity measures that are at the same time equivalence sound and satisfy the triangle inequality: a simple dissimilarity measure, based on description trees for the lightweight description logic EL; and an instantiation of a general framework, presented in our previous work, using dilation operators from mathematical morphology, which exploits the link between the Hausdorff distance and dilations using balls of the ground distance as structuring elements.
Andreas Ecke: **Similarity-based Relaxed Instance Queries in EL^{++}**. In Thomas Lukasiewicz, Rafael Peñaloza, and Anni-Yasmin Turhan, editors, *Proceedings of the First Workshop on Logics for Reasoning about Preferences, Uncertainty, and Vagueness*, volume 1205 of *CEUR Workshop Proceedings*, pages 101–113. CEUR-WS.org, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Description Logic (DL) knowledge bases (KBs) allow knowledge about concepts and individuals to be expressed in a formal way. This knowledge is typically crisp, i.e., an individual either is an instance of a given concept or it is not. However, in practice this is often too restrictive: when querying for instances, one may often also want to find suitable alternatives, i.e., individuals that are not instances of the query concept, but could still be considered 'good enough'. Relaxed instance queries have been introduced to gradually relax this inference in a controlled way via the use of concept similarity measures (CSMs). So far, those algorithms only work for the DL EL, which has limited expressive power. In this paper, we introduce a suitable CSM for EL++-concepts. EL++ adds nominals, role inclusion axioms, and concrete domains to EL. We extend the algorithm to compute relaxed instance queries w.r.t. this new CSM, and thus to work for general EL++ KBs.
Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Answering Instance Queries Relaxed by Concept Similarity**. In Chitta Baral, Giuseppe De Giacomo, and Thomas Eiter, editors, *Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning (KR'14)*, pages 248–257. Vienna, Austria, AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In Description Logics (DL) knowledge bases (KBs) information is typically captured by crisp concepts. For many applications querying the KB by crisp query concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of concept similarity. In this paper we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by concept similarity measures (CSM). For the DL *EL* we investigate computation algorithms for this task, their complexity, and properties for the employed CSM in case unfoldable TBoxes or general TBoxes are used.

Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Completion-based Generalization Inferences for the Description Logic ELOR with Subjective Probabilities**. *International Journal of Approximate Reasoning*, 55(9):1939–1970, 2014.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Description Logics (DLs) are a well-established family of knowledge representation formalisms. One of its members, the DL *ELOR*, has been successfully used for representing knowledge from the bio-medical sciences, and is the basis for the OWL 2 EL profile of the standard ontology language for the Semantic Web. Reasoning in this DL can be performed in polynomial time through a completion-based algorithm. In this paper we study the logic Prob-*ELOR*, that extends *ELOR* with subjective probabilities, and present a completion-based algorithm for polynomial time reasoning in a restricted version, Prob-*ELOR*^{c}_{01}, of Prob-*ELOR*. We extend this algorithm to computation algorithms for approximations of (i) the most specific concept, which generalizes a given individual into a concept description, and (ii) the least common subsumer, which generalizes several concept descriptions into one. Thus, we also obtain methods for these inferences for the OWL 2 EL profile. These two generalization inferences are fundamental for building ontologies automatically from examples. The feasibility of our approach is demonstrated empirically by our prototype system GEL.

Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Mary, What's Like All Cats?**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 526–529, 2014.

BibTeX entry
Paper (PDF)

Marcus Hähnel, Julian Mendez, Veronika Thost, and Anni-Yasmin Turhan: **Bridging the Application Knowledge Gap**. In *Workshop on Adaptive and Reflective Middleware'14*, December 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Regarding energy efficiency, resource management in complex hard- and software systems that is based on the information typically available to the OS alone does not yield best results. Nevertheless, general-purpose resource management should stay independent of application-specific information. To resolve this dilemma, we propose a generic, ontology-based approach to resource scheduling that is context-aware and takes information of running applications into account. The central task here is to recognize situations that might necessitate an adaptation of resource scheduling. This task is performed by logical reasoning over OWL ontologies. Our initial study shows that current OWL 2 EL reasoner systems can perform recognition of exemplary situations relevant to resource management within 4 seconds.
Francesco Kriegel: **Incremental Computation of Concept Diagrams**. *Studia Universitatis Babeş-Bolyai Informatica*, 59:45–61, 2014. Supplemental proceedings of the 12th International Conference on Formal Concept Analysis (ICFCA 2014), Cluj-Napoca, Romania

BibTeX entry
Paper (PDF)

#### Abstract:

Suppose a formal context K=(G,M,I) is given, whose concept lattice B(K) with an attribute-additive concept diagram is already known, and an attribute column C=(G,n,J) shall be inserted into or removed from it. This paper introduces and proves an incremental update algorithm for both tasks.
Karsten Lehmann and Rafael Peñaloza: **The Complexity of Computing the Behaviour of Lattice Automata on Infinite Trees**. *Theoretical Computer Science*, 534:53–68, 2014.

BibTeX entry
Paper (PDF)
DOI

#### Abstract:

Several logic-based decision problems have been shown to be reducible to the emptiness problem of automata. In a similar way, non-standard reasoning problems can be reduced to the computation of the behaviour of weighted automata. In this paper, we consider a variant of weighted Büchi automata working on (unlabeled) infinite trees, where the weights belong to a lattice. We analyse the complexity of computing the behaviour of this kind of automata if the underlying lattice is not distributive. We show that the decision version of this problem is in ExpTime and PSpace-hard in general, assuming that the lattice operations are polynomial-time computable. If the lattice is what we call "linear-space-computable-encoded", then the upper bound can be reduced to PSpace, but the lower bound also decreases to NP-hard and co-NP-hard. We conjecture that the upper bounds provided are in fact tight.
Michel Ludwig: **Just: a Tool for Computing Justifications w.r.t. EL Ontologies**. In Samantha Bail, Birte Glimm, Ernesto Jiménez-Ruiz, Nicolas Matentzoglu, Bijan Parsia, and Andreas Steigmiller, editors, *Proceedings of the 3rd International Workshop on OWL Reasoner Evaluation (ORE 2014)*, pages 1–7. CEUR Workshop Proceedings, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce the tool JUST for computing justifications for general concept inclusions w.r.t. ontologies formulated in the description logic EL extended with role inclusions. The computation of justifications in JUST is based on saturating the input axioms under all possible inferences w.r.t. a consequence-based calculus. We give an overview of the implemented techniques and we conclude with an experimental evaluation of the performance of JUST when applied on several practical ontologies.
Michel Ludwig and Boris Konev: **Practical Uniform Interpolation and Forgetting for ALC TBoxes with Applications to Logical Difference**. In Chitta Baral, Giuseppe De Giacomo, and Thomas Eiter, editors, *Proceedings of the 14th International Conference on Principles of Knowledge Representation and Reasoning (KR'14)*. AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We develop a clausal resolution-based approach for computing uniform interpolants of TBoxes formulated in the description logic ALC when such uniform interpolants exist. We also present an experimental evaluation of our approach and of its application to the logical difference problem for real-life ALC ontologies. Our results indicate that in many practical cases uniform interpolants exist and that they can be computed with the presented algorithm.
Michel Ludwig and Rafael Peñaloza: **Brave and Cautious Reasoning in EL**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 274–286, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Developing and maintaining ontologies is an expensive and error-prone task. After an error is detected, users may have to wait for a long time before a corrected version of the ontology is available. In the meantime, one might still want to derive meaningful knowledge from the ontology, while avoiding the known errors. We introduce brave and cautious reasoning and show that it is hard for EL. We then propose methods for improving the reasoning times by precompiling information about the known errors and using proof-theoretic techniques for computing justifications. A prototypical implementation shows that our approach is feasible for large ontologies used in practice.
Michel Ludwig and Rafael Peñaloza: **Error-Tolerant Reasoning in the Description Logic EL**. In Eduardo Fermé and João Leite, editors, *Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA'14)*, volume 8761 of *Lecture Notes in Artificial Intelligence*, pages 107–121. Madeira, Portugal, Springer-Verlag, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Developing and maintaining ontologies is an expensive and error-prone task. After an error is detected, users may have to wait for a long time before a corrected version of the ontology is available. In the meantime, one might still want to derive meaningful knowledge from the ontology, while avoiding the known errors. We study error-tolerant reasoning tasks in the description logic EL. While these problems are intractable, we propose methods for improving the reasoning times by precompiling information about the known errors and using proof-theoretic techniques for computing justifications. A prototypical implementation shows that our approach is feasible for large ontologies used in practice.
Michel Ludwig and Dirk Walther: **Detecting Conjunctive Query Differences between ELHr-Terminologies using Hypergraphs**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 287–298, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

We present a new method for detecting logical differences between EL-terminologies extended with role inclusions, domain and range restrictions of roles, using a hypergraph representation of ontologies. In this paper we consider differences given by pairs consisting of a conjunctive query and of an ABox formulated over a vocabulary of interest. We define a simulation notion between such hypergraph representations and we show that the existence of simulations coincides with the absence of a logical difference. To demonstrate the practical applicability of our approach, we evaluate a prototype implementation on large ontologies.
Michel Ludwig and Dirk Walther: **The Logical Difference for ELHr-Terminologies using Hypergraphs**. In Torsten Schaub, Gerhard Friedrich, and Barry O'Sullivan, editors, *Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014)*, volume 263 of *Frontiers in Artificial Intelligence and Applications*, pages 555–560. IOS Press, 2014.

BibTeX entry
DOI

#### Abstract:

We propose a novel approach for detecting semantic differences between ontologies. In this paper we investigate the logical difference for EL-terminologies extended with role inclusions, domain and range restrictions of roles. Three types of queries are covered: concept subsumption, instance and conjunctive queries. Using a hypergraph representation of such ontologies, we show that logical differences can be detected by checking for the existence of simulations between the corresponding hypergraphs. A minor adaptation of the simulation notions allows us to capture different types of queries. We also evaluate our hypergraph approach by applying a prototype implementation on large ontologies.
Yue Ma and Rafael Peñaloza: **Towards Parallel Repair: An Ontology Decomposition-based Approach**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 633–645, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontology repair remains one of the main bottlenecks for the development of ontologies for practical use. Many automated methods have been developed for suggesting potential repairs, but ultimately human intervention is required for selecting the adequate one, and the human expert might be overwhelmed by the amount of information delivered to her. We propose a decomposition of ontologies into smaller components that can be repaired in parallel. We show the utility of our approach for ontology repair, provide algorithms for computing this decomposition through standard reasoning, and study the complexity of several associated problems.
Theofilos Mailis, Rafael Peñaloza, and Anni-Yasmin Turhan: **Conjunctive Query Answering in Finitely-valued Fuzzy Description Logics**. In Roman Kontchakov and Marie-Laure Mugnier, editors, *Proceedings of the 8th International Conference on Web Reasoning and Rule Systems (RR 2014)*, pages 124–139. Springer, 2014.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Fuzzy Description Logics (DLs) generalize crisp ones by providing membership degree semantics for concepts and roles. A popular technique for reasoning in fuzzy DL ontologies is to provide a reduction to crisp DLs and then employ reasoning in the crisp DL. In this paper we adopt this approach to solve conjunctive query (CQ) answering problems for fuzzy DLs. We give reductions for Gödel and Łukasiewicz variants of fuzzy SROIQ and two kinds of fuzzy CQs. The correctness of the proposed reduction is proved and its complexity is studied for different fuzzy variants of SROIQ.
Theofilos Mailis and Anni-Yasmin Turhan: **Employing DL-LiteR-Reasoners for Fuzzy Query Answering**. In Thepchai Supnithi and Takahira Yamaguchi, editors, *Proceedings of the 4th Joint International Semantic Technology Conference (JIST2014)*. *Lecture Notes in Computer Science*, 2014.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Recently, answering of conjunctive queries has been investigated and implemented in optimized reasoner systems based on the rewriting approach for crisp DLs. In this paper we investigate how to employ such existing implementations for query answering in DL-LiteR over fuzzy ontologies. To this end we give an extended rewriting algorithm for the case of fuzzy DL-LiteR-ABoxes that employs the one for crisp DL-LiteR, and we investigate the limitations of this approach. We also tested the performance of our prototype implementation FLite of this method.
Francisco Martin-Recuerda and Dirk Walther: **Axiom Dependency Hypergraphs for Fast Modularisation and Atomic Decomposition**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 299–310, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper we use directed hypergraphs to represent the locality-based dependencies between the axioms of an OWL ontology. We define a notion of an axiom dependency hypergraph, where axioms are represented as nodes and dependencies between axioms as hyperedges connecting possibly several nodes with one node. We show that a locality-based module of an ontology corresponds to a connected component in the hypergraph, and an atom of an ontology to a strongly connected component. Collapsing the strongly connected components into single nodes yields a condensed axiom dependency hypergraph, which contains the atomic decomposition of the ontology. To condense the axiom dependency hypergraph we exploit linear time graph algorithms on its graph fragment. This optimization can significantly reduce the time needed to compute the atomic decomposition of an ontology. We provide an experimental evaluation for computing the atomic decomposition of large biomedical ontologies, and for computing syntactic locality-based modules using the condensed axiom dependency hypergraph.
Francisco Martin-Recuerda and Dirk Walther: **Fast Modularisation and Atomic Decomposition of Ontologies using Axiom Dependency Hypergraphs**. In Peter Mika, Tania Tudorache, Abraham Bernstein, Chris Welty, Craig Knoblock, Denny Vrandecic, Paul Groth, Natasha Noy, Krzysztof Janowicz, and Carole Goble, editors, *Proceedings of the 13th International Semantic Web Conference (ISWC 2014), Part II*, volume 8797 of *Lecture Notes in Computer Science*, pages 49–64. Springer-Verlag, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper we define the notion of an axiom dependency hypergraph, which explicitly represents how axioms are included into a module by the algorithm for computing locality-based modules. A locality-based module of an ontology corresponds to a set of connected nodes in the hypergraph, and atoms of an ontology to strongly connected components. Collapsing the strongly connected components into single nodes yields a condensed hypergraph that comprises a representation of the atomic decomposition of the ontology. To speed up the condensation of the hypergraph, we first reduce its size by collapsing the strongly connected components of its graph fragment employing a linear time graph algorithm. This approach helps to significantly reduce the time needed for computing the atomic decomposition of an ontology. We provide an experimental evaluation for computing the atomic decomposition of large biomedical ontologies. We also demonstrate a significant improvement in the time needed to extract locality-based modules from an axiom dependency hypergraph and its condensed version.
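The condensation step described in this abstract relies on a standard linear-time strongly-connected-components algorithm applied to the graph fragment of the hypergraph. As a minimal illustration of that plain-graph step (an iterative Tarjan-style sketch of our own, not the authors' implementation; `condense` and the adjacency-list encoding are assumptions for this example):

```python
def condense(graph):
    """Collapse the strongly connected components (SCCs) of a directed
    graph, given as an adjacency-list dict, into single nodes.
    Returns (comp, edges): comp maps each node to its SCC id, and edges
    is the edge set of the condensed graph. Runs in time linear in the
    number of nodes and edges (iterative Tarjan-style traversal)."""
    index, low, on_stack = {}, {}, set()
    stack, comp = [], {}
    counter = n_comps = 0
    for root in graph:
        if root in index:
            continue
        index[root] = low[root] = counter
        counter += 1
        stack.append(root)
        on_stack.add(root)
        work = [(root, iter(graph[root]))]
        while work:
            v, successors = work[-1]
            advanced = False
            for w in successors:
                if w not in index:          # tree edge: descend into w
                    index[w] = low[w] = counter
                    counter += 1
                    stack.append(w)
                    on_stack.add(w)
                    work.append((w, iter(graph.get(w, ()))))
                    advanced = True
                    break
                elif w in on_stack:         # back edge inside the current SCC
                    low[v] = min(low[v], index[w])
            if advanced:
                continue
            work.pop()                      # v is fully explored
            if work:
                u = work[-1][0]
                low[u] = min(low[u], low[v])
            if low[v] == index[v]:          # v is the root of an SCC
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp[w] = n_comps
                    if w == v:
                        break
                n_comps += 1
    edges = {(comp[u], comp[w])
             for u in graph for w in graph[u] if comp[u] != comp[w]}
    return comp, edges
```

For example, in the graph `{'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}` the cycle a→b→c→a collapses to one node, leaving a condensed graph with two nodes and a single edge. The actual axiom dependency hypergraphs additionally carry hyperedges, which this plain-graph sketch does not cover.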
Dorian Merz, Rafael Peñaloza, and Anni-Yasmin Turhan: **Reasoning in ALC with Fuzzy Concrete Domains**. In Carsten Lutz and Michael Thielscher, editors, *Proceedings of the 37th German Conference on Artificial Intelligence (KI'14)*, volume 8736 of *Lecture Notes in Artificial Intelligence*, pages 171–182. Springer-Verlag, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

In the context of Description Logics (DLs), concrete domains allow one to model concepts and facts by the use of concrete values and predicates between them. For reasoning in the DL ALC with general TBoxes, concrete domains may cause undecidability. Under certain restrictions of the concrete domains, decidability can be regained. Typically, the concrete domain predicates are crisp, which is a limitation for some applications. In this paper we investigate crisp ALC in combination with fuzzy concrete domains for general TBoxes, devise conditions for decidability, and give a tableau-based reasoning algorithm.
Rafael Peñaloza: **Automata-based Reasoning in Fuzzy Description Logics**. In Tommaso Flaminio, Lluis Godo, Siegfried Gottwald, and Erich Peter Klement, editors, *Proceedings of the 35th Linz Seminar on Fuzzy Set Theory*, page 106, 2014.

BibTeX entry
Paper (PDF)

Rafael Peñaloza, Veronika Thost, and Anni-Yasmin Turhan: **Certain Answers in a Rough World**. In Meghyn Bienvenu, Magdalena Ortiz, Riccardo Rosati, and Mantas Simkus, editors, *Proceedings of the 27th International Workshop on Description Logics (DL'14)*, volume 1193 of *CEUR Workshop Proceedings*, pages 709–712, 2014.

BibTeX entry
Paper (PDF)

Rafael Peñaloza and Aparna Saisree Thuluva: **COBRA, a Demo**. In C. Maria Keet and Valentina Tamma, editors, *Proceedings of the 11th International Workshop on OWL: Experiences and Directions (OWLED 2014)*, volume 1265 of *CEUR Workshop Proceedings*, 2014.

BibTeX entry
Paper (PDF)

Benjamin Zarrieß and Jens Claßen: **On the Decidability of Verifying LTL Properties of Golog Programs**. In *Technical Report of the AAAI 2014 Spring Symposium: Knowledge Representation and Reasoning in Robotics (KRR14)*. Palo Alto, California, USA, AAAI Press, 2014.

BibTeX entry
Paper (PDF)

#### Abstract:

The high-level action programming language Golog is a useful means for modeling the behavior of autonomous agents such as mobile robots. It relies on a representation given in terms of a logic-based action theory in the Situation Calculus (SC). To guarantee that the possibly non-terminating execution of a Golog program leads to the desired behavior of the agent, it is desirable to (automatically) verify that it satisfies certain requirements given in terms of temporal formulas. However, due to the high (first-order) expressiveness of the Golog language, the verification problem is in general undecidable. In this paper we show that for a fragment of the Golog language defined on top of the decidable logic C^{2}, the verification problem for linear time temporal properties becomes decidable, which extends earlier results to a more expressive fragment of the input formalism. Moreover, we justify the involved restrictions on program constructs and action theory by showing that relaxing any of these restrictions instantly renders the verification problem undecidable again.

Benjamin Zarrieß and Jens Claßen: **Verifying CTL* Properties of Golog Programs over Local-Effect Actions**. In *Proceedings of the Twenty-First European Conference on Artificial Intelligence (ECAI 2014)*, 2014.

BibTeX entry

#### Abstract:

Golog is a high-level action programming language for controlling autonomous agents such as mobile robots. It is defined on top of a logic-based action theory expressed in the Situation Calculus. Before a program is deployed onto an actual robot and executed in the physical world, it is desirable, if not crucial, to verify that it meets certain requirements (typically expressed through temporal formulas) and thus indeed exhibits the desired behaviour. However, due to the high (first-order) expressiveness of the language, the corresponding verification problem is in general undecidable. In this paper, we extend earlier results to identify a large, non-trivial fragment of the formalism where verification is decidable. In particular, we consider properties expressed in a first-order variant of the branching-time temporal logic CTL*. Decidability is obtained by (1) resorting to the decidable first-order fragment C^{2} as underlying base logic, (2) using a fragment of Golog with ground actions only, and (3) requiring the action theory to only admit local effects.

## 2013

Ignasi Abío, Robert Nieuwenhuis, Albert Oliveras, and Enric Rodríguez-Carbonell: **A Parametric Approach for Smaller and Better Encodings of Cardinality Constraints**. In *Proceedings of the 19th International Conference on Principles and Practice of Constraint Programming (CP'13)*, 2013.

BibTeX entry

Ignasi Abío, Robert Nieuwenhuis, Albert Oliveras, Enric Rodríguez-Carbonell, and Peter J. Stuckey: **To Encode or to Propagate? The Best Choice for Each Constraint in SAT**. In *Proceedings of the 19th International Conference on Principles and Practice of Constraint Programming (CP'13)*, 2013.

BibTeX entry

Mario Alviano and Rafael Peñaloza: **Fuzzy Answer Sets Approximations**. *Theory and Practice of Logic Programming*, 13(4–5):753–767, 2013.

BibTeX entry
Paper (PDF)
DOI
©Cambridge University Press

#### Abstract:

Fuzzy answer set programming (FASP) is a recent formalism for knowledge representation that enriches the declarativity of answer set programming by allowing propositions to be graded. Up to now, no implementations of FASP solvers are available, and all current proposals are based on compilations of logic programs into different paradigms, like mixed integer programs or bilevel programs. These approaches introduce many auxiliary variables which might affect the performance of a solver negatively. To limit this downside, operators for approximating fuzzy answer sets can be introduced: Given a FASP program, these operators compute lower and upper bounds for all atoms in the program such that all answer sets are between these bounds. This paper analyzes several operators of this kind which are based on linear programming, fuzzy unfounded sets and source pointers. Furthermore, the paper reports on a prototypical implementation, also describing strategies for avoiding computations of these operators when they are guaranteed not to improve current bounds. The operators and their implementation can be used to obtain more constrained mixed integer or bilevel programs, or even for providing a basis for implementing a native FASP solver. Interestingly, the semantics of relevant classes of programs with unique answer sets, like positive programs and programs with stratified negation, can already be computed by the prototype without the need for an external tool.
Jamal Atif, Isabelle Bloch, Felix Distel, and Céline Hudelot: **A fuzzy extension of explanatory relations based on mathematical morphology**. In Gabriella Pasi and Javier Montero, editors, *Proceedings of the 8th conference of the European Society for Fuzzy Logic and Technology (EUSFLAT-2013)*, 2013. To appear.

BibTeX entry

#### Abstract:

In this paper, we build upon previous work defining explanatory relations based on mathematical morphology operators on logical formulas in propositional logics. We propose to extend such relations to the case where the set of models of a formula is fuzzy, as a first step towards morphological fuzzy abduction. The membership degrees may represent degrees of satisfaction of the formula, preferences, vague information (for instance related to a partially observed situation), imprecise knowledge, etc. The proposed explanatory relations are based on successive fuzzy erosions of the set of models, conditioned on a theory, while the maximum membership degree in the results remains higher than a threshold. Two explanatory relations are proposed, one based on the erosion of the conjunction of the theory and the formula to be explained, and the other based on the erosion of the theory, while remaining consistent with the formula at least to some degree. Extensions of the rationality postulates introduced by Pino-Perez and Uzcategui are proposed. As in the classical crisp case, we show that the second explanatory relation exhibits stronger properties than the first one.
Jamal Atif, Isabelle Bloch, Felix Distel, and Céline Hudelot: **Mathematical Morphology Operators over Concept Lattices**. In Peggy Cellier, Felix Distel, and Bernhard Ganter, editors, *Proceedings of the 11th International Conference on Formal Concept Analysis (ICFCA'13)*, volume 7880 of *Lecture Notes in Computer Science*, pages 28–43. Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag
(The final publication is available at link.springer.com)

#### Abstract:

Although mathematical morphology and formal concept analysis are two lattice-based data analysis theories, they are still developed in two disconnected research communities. The aim of this paper is to contribute to fill this gap, beyond the classical relationship between the Galois connections defined by the derivation operators and the adjunctions underlying the algebraic mathematical morphology framework. In particular we define mathematical morphology operators over concept lattices, based on distances, valuations, or neighborhood relations in concept lattices. Their properties are also discussed. These operators provide new tools for reasoning over concept lattices.
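For readers unfamiliar with the morphological operators involved, the classical set-based erosion and dilation (of which the concept-lattice operators in the paper are generalizations) can be sketched as follows. This is a minimal illustration on subsets of the integers with a structuring element, using standard textbook definitions; it is not the paper's construction, and `dilate`/`erode` are our own names.

```python
def dilate(X, B):
    """Dilation of X by structuring element B:
    the union of the translates of X by each b in B."""
    return {x + b for x in X for b in B}

def erode(X, B):
    """Erosion of X by B: the points whose whole B-translate fits
    inside X, i.e. the intersection of the translates X - b."""
    return set.intersection(*({x - b for x in X} for b in B))
```

For example, with `X = {1, 2, 3, 4, 7}` and `B = {0, 1}`, erosion yields `{1, 2, 3}`. The pair satisfies the adjunction `dilate(A, B) ⊆ X` iff `A ⊆ erode(X, B)`, which is the adjunction property the abstract refers to as underlying the algebraic mathematical morphology framework.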
Franz Baader, Stefan Borgwardt, and Marcel Lippmann: **Temporalizing Ontology-Based Data Access**. In Maria Paola Bonacina, editor, *Proceedings of the 24th International Conference on Automated Deduction (CADE-24)*, volume 7898 of *Lecture Notes in Artificial Intelligence*, pages 330–344. Lake Placid, NY, USA, Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
Franz Baader, Oliver Fernández Gil, and Barbara Morawska: **Hybrid Unification in the Description Logic EL**. In Barbara Morawska and Konstantin Korovin, editors, *Proceedings of the 27th International Workshop on Unification (UNIF'13)*, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete.
Franz Baader, Oliver Fernández Gil, and Barbara Morawska: **Hybrid Unification in the Description Logic EL**. In Pascal Fontaine, Christophe Ringeissen, and Renate A. Schmidt, editors, *Proceedings of the 9th International Symposium on Frontiers of Combining Systems (FroCoS 2013)*, volume 8152 of *Lecture Notes in Computer Science*, pages 295–310. Nancy, France, Springer-Verlag, September 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete and introduce a goal-oriented algorithm for computing hybrid unifiers.
Franz Baader, Oliver Fernández Gil, and Barbara Morawska: **Hybrid EL-Unification is NP-Complete**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR Workshop Proceedings*, pages 29–40, July 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. However, the unification algorithms for EL developed until recently could not deal with ontologies containing general concept inclusions (GCIs). In a series of recent papers we have made some progress towards addressing this problem, but the ontologies the developed unification algorithms can deal with need to satisfy a certain cycle restriction. In the present paper, we follow a different approach. Instead of restricting the input ontologies, we generalize the notion of unifiers to so-called hybrid unifiers. Whereas classical unifiers can be viewed as acyclic TBoxes, hybrid unifiers are cyclic TBoxes, which are interpreted together with the ontology of the input using a hybrid semantics that combines fixpoint and descriptive semantics. We show that hybrid unification in EL is NP-complete.
Franz Baader and Alexander Okhotin: **On Language Equations with One-sided Concatenation**. *Fundamenta Informaticae*, 126(1):1–35, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Language equations are equations where both the constants occurring in the equations and the solutions are formal languages. They have first been introduced in formal language theory, but are now also considered in other areas of computer science. In the present paper, we restrict the attention to language equations with one-sided concatenation, but in contrast to previous work on these equations, we allow not just union but all Boolean operations to be used when formulating them. In addition, we are not just interested in deciding solvability of such equations, but also in deciding other properties of the set of solutions, like its cardinality (finite, infinite, uncountable) and whether it contains least/greatest solutions. We show that all these decision problems are ExpTime-complete.
Franz Baader and Benjamin Zarrieß: **Verification of Golog Programs over Description Logic Actions**. In Pascal Fontaine, Christophe Ringeissen, and Renate A. Schmidt, editors, *Proceedings of the 9th International Symposium on Frontiers of Combining Systems (FroCoS 2013)*, volume 8152 of *Lecture Notes in Computer Science*, pages 181–196. Nancy, France, Springer-Verlag, September 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

High-level action programming languages such as Golog have successfully been used to model the behavior of autonomous agents. In addition to a logic-based action formalism for describing the environment and the effects of basic actions, they enable the construction of complex actions using typical programming language constructs. To ensure that the execution of such complex actions leads to the desired behavior of the agent, one needs to specify the required properties in a formal way, and then verify that these requirements are met by any execution of the program. Due to the expressiveness of the action formalism underlying Golog (Situation Calculus), the verification problem for Golog programs is in general undecidable. Action formalisms based on Description Logic (DL) try to achieve decidability of inference problems such as the projection problem by restricting the expressiveness of the underlying base logic. However, until now these formalisms have not been used within Golog programs. In the present paper, we introduce a variant of Golog where basic actions are defined using such a DL-based formalism, and show that the verification problem for such programs is decidable. This improves on our previous work on verifying properties of infinite sequences of DL actions in that it considers (finite and infinite) sequences of DL actions that correspond to (terminating and non-terminating) runs of a Golog program rather than just infinite sequences accepted by a Büchi automaton abstracting the program.
Daniel Borchmann: **Axiomatizing EL^⊥_{gfp}-General Concept Inclusions in the Presence of Untrusted Individuals**. In *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR Workshop Proceedings*, pages 65–79. CEUR-WS.org, July 2013.

BibTeX entry

#### Abstract:

To extract terminological knowledge from data, Baader and Distel have proposed an effective method that allows for the extraction of a base of all valid general concept inclusions of a given finite interpretation. In previous works, to be able to handle small amounts of errors in our data, we have extended this approach to also extract general concept inclusions which are "almost valid" in the interpretation. This has been done by demanding that general concept inclusions which are "almost valid" are those having only an allowed percentage of counterexamples in the interpretation. In this work, we shall further extend our previous work to allow the interpretation to contain both trusted and untrusted individuals, i.e. individuals from which we know and do not know that they are correct, respectively. The problem we then want to solve is to find a compact representation of all terminological knowledge that is valid for all trusted individuals and is almost valid for all others.
Daniel Borchmann: **Axiomatizing EL^⊥-Expressible Terminological Knowledge from Erroneous Data**. In *Proceedings of the Seventh International Conference on Knowledge Capture*, pages 1–8. ACM, 2013.

BibTeX entry

#### Abstract:

In a recent approach, Baader and Distel proposed an algorithm to axiomatize all terminological knowledge that is valid in a given data set and is expressible in the description logic *EL*. This approach is based on the mathematical theory of formal concept analysis. However, this algorithm requires the initial data set to be free of errors, an assumption that normally cannot be made for real-world data. In this work, we propose a first extension of the work of Baader and Distel to handle errors in the data set. The approach we present here is based on the notion of confidence, as it has been developed and used in the area of data mining.

Daniel Borchmann: **Towards an Error-Tolerant Construction of EL^⊥-Ontologies from Data Using Formal Concept Analysis**. In Peggy Cellier, Felix Distel, and Bernhard Ganter, editors, *Formal Concept Analysis, 11th International Conference, ICFCA 2013, Dresden, Germany, May 21-24, 2013. Proceedings*, volume 7880 of *Lecture Notes in Computer Science*, pages 60–75. Springer, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

In the work of Baader and Distel, a method has been proposed to axiomatize all general concept inclusions (GCIs) expressible in the description logic *EL*^⊥ and valid in a given interpretation *I*. This provides us with an effective method to learn EL^⊥-ontologies from interpretations. In this work, we want to extend this approach in the direction of handling errors, which might be present in the data set. We shall do so by not only considering valid GCIs but also those whose confidence is above a given threshold *c*. We shall give the necessary definitions and show some first results on the axiomatization of all GCIs with confidence at least *c*. Finally, we shall provide some experimental evidence based on real-world data that supports our approach.

Stefan Borgwardt, Marcel Lippmann, and Veronika Thost: **Temporal Query Answering in the Description Logic DL-Lite**. In Pascal Fontaine, Christophe Ringeissen, and Renate A. Schmidt, editors, *Proceedings of the 9th International Symposium on Frontiers of Combining Systems (FroCoS 2013)*, volume 8152 of *Lecture Notes in Computer Science*, pages 165–180. Nancy, France, Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows querying a database by using the language of an ontology, abstracting from the actual relations of the database. For ontologies formulated in Description Logics of the DL-Lite family, OBDA can be realized by rewriting the query into a classical first-order query, e.g. an SQL query, by compiling the information of the ontology into the query. The query is then answered using classical database techniques. In this paper, we consider a temporal version of OBDA. We propose a temporal query language that combines a linear temporal logic with queries over DL-Lite_{core}-ontologies. This language is well-suited to express temporal properties of dynamical systems and is useful in context-aware applications that need to detect specific situations. Using a first-order rewriting approach, we transform our temporal queries into queries over a temporal database. We then present three approaches to answering the resulting queries, all having different advantages and drawbacks.

Stefan Borgwardt, Marcel Lippmann, and Veronika Thost: **Temporal Query Answering in DL-Lite**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR Workshop Proceedings*. Ulm, Germany, CEUR-WS.org, July 2013.

BibTeX entry
Paper (PDF)

Stefan Borgwardt and Rafael Peñaloza: **About Subsumption in Fuzzy EL**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR Workshop Proceedings*, pages 526–538, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

The Description Logic EL is used to formulate several large biomedical ontologies. Fuzzy extensions of EL can express the vagueness inherent in many biomedical concepts. We consider fuzzy EL with semantics based on general t-norms, and study the reasoning problems of deciding positive subsumption and 1-subsumption and computing the best subsumption degree.
Stefan Borgwardt and Rafael Peñaloza: **Positive Subsumption in Fuzzy EL with General t-norms**. In Francesca Rossi, editor, *Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13)*, pages 789–795. Beijing, China, AAAI Press, 2013.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

The Description Logic EL is used to formulate several large biomedical ontologies. Fuzzy extensions of EL can express the vagueness inherent in many biomedical concepts. We study the reasoning problem of deciding positive subsumption in fuzzy EL with semantics based on general t-norms. We show that the complexity of this problem depends on the specific t-norm chosen. More precisely, if the t-norm has zero divisors, then the problem is co-NP-hard; otherwise, it can be decided in polynomial time. We also show that the best subsumption degree cannot be computed in polynomial time if the t-norm contains the Łukasiewicz t-norm.
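The complexity dichotomy in this abstract hinges on whether the chosen t-norm has zero divisors, i.e. positive membership degrees whose conjunction is 0. The three fundamental continuous t-norms illustrate the distinction (standard definitions; the function names are ours, not from the paper):

```python
def godel(x, y):
    """Gödel t-norm: minimum. No zero divisors."""
    return min(x, y)

def product(x, y):
    """Product t-norm. No zero divisors."""
    return x * y

def lukasiewicz(x, y):
    """Łukasiewicz t-norm. Has zero divisors: two positive
    degrees can conjoin to 0, e.g. 0.5 and 0.5."""
    return max(0.0, x + y - 1.0)
```

For instance, `lukasiewicz(0.5, 0.5)` is `0.0`, while `godel(0.5, 0.5)` is `0.5` and `product(0.5, 0.5)` is `0.25`; by the result above, positive subsumption is co-NP-hard in the first case and polynomial in the latter two.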
Stefan Borgwardt and Rafael Peñaloza: **The Complexity of Lattice-Based Fuzzy Description Logics**. *Journal on Data Semantics*, 2(1):1–19, 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag
(The final publication is available at link.springer.com)

#### Abstract:

We study the complexity of reasoning in fuzzy description logics with semantics based on finite residuated lattices. For the logic SHI, we show that deciding satisfiability and subsumption of concepts, with or without a TBox, are ExpTime-complete problems. In ALCHI and a variant of SI, these decision problems become PSpace-complete when restricted to acyclic TBoxes. This matches the known complexity bounds for reasoning in crisp description logics between ALC and SHI.
Waltenegus Dargie, Eldora, Julian Mendez, Christoph Möbius, Kateryna Rybina, Veronika Thost, and Anni-Yasmin Turhan: **Situation Recognition for Service Management Systems Using OWL 2 Reasoners**. In *Proceedings of the 10th IEEE Workshop on Context Modeling and Reasoning 2013*, pages 31–36. San Diego, California, IEEE Computer Society, March 2013.

BibTeX entry
Paper (PDF)
©IEEE Press

#### Abstract:

For service management systems the early recognition of situations that necessitate a rebinding or a migration of services is an important task. To describe these situations on differing levels of detail and to allow their recognition even if only incomplete information is available, we employ the ontology language OWL 2 and the reasoning services defined for it. In this paper we provide a case study on the performance of state of the art OWL 2 reasoning systems for answering class queries and conjunctive queries modeling the relevant situations for service rebinding or migration in the differing OWL 2 profiles.
Felix Distel and Yue Ma: **A hybrid approach for learning concept definitions from text**. In *Proceedings of the 2013 International Workshop on Description Logics (DL'13)*, *CEUR-WS*, 2013. To appear.

BibTeX entry

#### Abstract:

In recent years approaches for extracting formal definitions from natural language have been developed. These approaches typically use methods from natural language processing, such as relation extraction or syntax parsing. They make only limited use of description logic reasoning. We propose a hybrid approach combining natural language processing methods and description logic reasoning. In a first step description candidates are obtained using a natural language processing method. Description logic reasoning is used in a post-processing step to select good quality candidate definitions. We identify the corresponding reasoning problem and examine its complexity.
Andreas Ecke, Michel Ludwig, and Dirk Walther: **The Concept Difference for EL-Terminologies using Hypergraphs**. In *Proceedings of the International workshop on (Document) Changes: modeling, detection, storage and visualization (DChanges 2013)*, volume 1008 of *CEUR-WS*, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontologies are used to represent and share knowledge. Numerous ontologies have been developed so far, especially in knowledge intensive areas such as the biomedical domain. As the size of ontologies increases, their continued development and maintenance is becoming more challenging as well. Detecting and representing semantic differences between versions of ontologies is an important task for which automated tool support is needed. In this paper we investigate the logical difference problem using a hypergraph representation of EL-terminologies. We focus solely on the concept difference wrt. a signature. For computing this difference it suffices to check the existence of simulations between hypergraphs whereas previous approaches required a combination of different methods.
Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Computing Role-depth Bounded Generalizations in the Description Logic ELOR**. In Ingo J. Timm and Matthias Thimm, editors, *Proceedings of the 36th German Conference on Artificial Intelligence (KI 2013)*, volume 8077 of *Lecture Notes in Artificial Intelligence*, pages 49–60. Koblenz, Germany, Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
Extended technical report (PDF)

#### Abstract:

Description Logics (DLs) are a family of knowledge representation formalisms that provides the theoretical basis for the standard web ontology language OWL. Generalization services like the least common subsumer (lcs) and the most specific concept (msc) are the basis of several ontology design methods, and form the core of similarity measures. For the DL ELOR, which covers most of the OWL 2 EL profile, the lcs and msc need not exist in general, but they always exist if restricted to a given role-depth. We present algorithms that compute these role-depth bounded generalizations. Our method is easy to implement, as it is based on the polynomial-time completion algorithm for ELOR.
Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Role-depth bounded Least Common Subsumer in Prob-EL with Nominals**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR-WS*, pages 670–688, July 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Completion-based algorithms can be employed for computing the least common subsumer of two concepts up to a given role-depth, in extensions of the lightweight DL EL. This approach has also been applied to the probabilistic DL Prob-EL, a variant of EL with subjective probabilities. In this paper we extend the completion-based lcs-computation algorithm to nominals, yielding a procedure for the DL Prob-ELO^{01}.
Andreas Ecke, Rafael Peñaloza, and Anni-Yasmin Turhan: **Towards Instance Query Answering for Concepts Relaxed by Similarity Measures**. In *Workshop on Weighted Logics for AI (in conjunction with IJCAI'13)*, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

In Description Logics (DL) knowledge bases (KBs) information is typically captured by crisp concept descriptions. However, for many practical applications querying the KB by crisp concepts is too restrictive. A controlled way of gradually relaxing a query concept can be achieved by the use of similarity measures. To this end we formalize the task of instance query answering for crisp DL KBs using concepts relaxed by similarity measures. We identify relevant properties for the similarity measure and give first results on a computation algorithm.
Sebastian Goetz, Julian Mendez, Veronika Thost, and Anni-Yasmin Turhan: **OWL 2 Reasoning To Detect Energy-Efficient Software Variants From Context**. In Kavitha Srinivas and Simon Jupp, editors, *Proceedings of the 10th OWL: Experiences and Directions Workshop (OWLED 2013)*, May 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Runtime variability management of component-based software systems makes it possible to take the current context of a system into account during system configuration in order to achieve energy efficiency. For optimizing the system configuration at runtime, the early recognition of situations that call for reconfiguration is an important task. To describe these situations on differing levels of detail and to allow their recognition even if only incomplete information is available, we employ the ontology language OWL 2 and the reasoning services defined for it. In this paper, we show that the relevant situations for optimizing the current system configuration can be modeled in the different OWL 2 profiles. We further provide a case study on the performance of state of the art OWL 2 reasoning systems for answering concept queries and conjunctive queries modeling the situations to be detected.
Sebastian Götz, René Schöne, Claas Wilke, Julian Mendez, and Uwe Aßmann: **Towards Predictive Self-optimization by Situation Recognition**. In *2nd Workshop EASED@BUIS 2013*, page 11, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Energy efficiency of software is an increasingly important topic. To achieve energy efficiency, a system should automatically optimize itself to provide the best possible utility to the user for the least possible cost in terms of energy consumption. To reach this goal, the system has to continuously decide whether and how to adapt itself, which takes time and consumes energy by itself. During this time, the system could be in an inefficient state and waste energy. We envision the application of predictive situation recognition to initiate decision making before it is actually needed. Thus, the time of the system being in an inefficient state is reduced, leading to a more energy-efficient reconfiguration.
Andreas Herzig, Emiliano Lorini, and Dirk Walther: **Reasoning about Actions Meets Strategic Logics**. In Davide Grossi, Olivier Roy, and Huaxin Huang, editors, *Logic, Rationality, and Interaction - 4th International Workshop, LORI 2013, Hangzhou, China, October 9-12, 2013, Proceedings*, volume 8196 of *Lecture Notes in Computer Science*, pages 162–175. Springer, 2013.

BibTeX entry
Paper (PDF)

Michel Ludwig and Boris Konev: **Towards Practical Uniform Interpolation and Forgetting for ALC TBoxes**. In *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, volume 1014 of *CEUR-WS*, pages 377–389, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

We develop a clausal resolution-based approach for computing uniform interpolants of TBoxes formulated in the description logic *ALC*, when such uniform interpolants exist. We also present an experimental evaluation of our approach and its applications to concept forgetting, ontology obfuscation and logical difference on real-life *ALC* ontologies. Our results indicate that in many practical cases a uniform interpolant exists and can be computed with the presented algorithm.

Yue Ma and Qingfeng Chang: **Measuring Incompleteness under Multi-Valued Semantics by Partial MaxSAT Solvers**. In *The 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty*, 2013. To appear.

BibTeX entry

#### Abstract:

Knowledge base metrics provide a useful way to analyze and compare knowledge bases. For example, inconsistency measurements have been proposed to distinguish different inconsistent knowledge bases. Whilst inconsistency degrees have been widely developed, the incompleteness of a knowledge base is rarely studied due to the difficulty of formalizing incompleteness. For this, we propose an incompleteness degree based on multi-valued semantics and show that it satisfies some desired properties. Moreover, we develop an algorithm to compute the proposed metric by reducing the problem to an instance of the partial MaxSAT problem, such that we can benefit from highly optimized partial MaxSAT solvers. We finally examine the approach over a set of knowledge bases from real applications, which experimentally shows that the proposed incompleteness metric can be computed practically.
Yue Ma and Felix Distel: **Concept Adjustment for Description Logics**. In Mathieu d'Aquin and Andrew Gordon, editors, *Proceedings of the 7th International Conference on Knowledge Capture*. ACM, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

There exist a handful of natural language processing and machine learning approaches for extracting Description Logic concept definitions from natural language texts. Typically, for a single target concept several textual sentences are used, from which candidate concept descriptions are obtained. These candidate descriptions may have confidence values associated with them. In a final step, the candidates need to be combined into a single concept, in the easiest case by selecting a relevant subset and taking its conjunction. However, concept descriptions generated in this manner can contain false information, which is harmful when added to a formal knowledge base. In this paper, we claim that this can be improved by considering formal constraints that the target concept needs to satisfy. We first formalize a reasoning problem for the selection of relevant candidates and examine its computational complexity. Then, we show how it can be reduced to SAT, yielding a practical algorithm for its solution. Furthermore, we describe two ways to construct formal constraints, one is automatic and the other interactive. Applying this approach to the SNOMED CT ontology construction scenario, we show that the proposed framework brings a visible benefit for SNOMED CT development.
Yue Ma and Felix Distel: **Learning Formal Definitions for Snomed CT from Text**. In Niels Peek, Roque Marín Morales, and Mor Peleg, editors, *Artificial Intelligence in Medicine*, volume 7885 of *Lecture Notes in Computer Science*, pages 73–77. Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag
(The final publication is available at link.springer.com)

#### Abstract:

Snomed CT is a widely used medical ontology which is formally expressed in a fragment of the Description Logic EL++. The underlying logics allow for expressive querying, yet make it costly to maintain and extend the ontology. In this paper we present an approach for the extraction of Snomed CT definitions from natural language text. We test and evaluate the approach using two types of texts.
Yue Ma, François Lévy, and Adelina Nazarenko: **Semantic Annotation in Specific Domains with rich Ontologies (in French)**. In *20ème conférence du Traitement Automatique du Langage Naturel*, 2013. To appear.

BibTeX entry

#### Abstract:

Technical documentations are generally difficult to explore and maintain. Powerful tools could help users, provided the documents have been semantically annotated. However, the annotations must be sufficiently specialized, rich and consistent, relying on some explicit semantic model – usually an ontology – that represents the semantics of the target domain. In this paper, we show that traditional approaches have limited success at this task. Hence, we propose a novel approach, named phrase-based statistical semantic annotation, for predicting semantic annotations from limited training data. Such a modeling makes the challenging problem of domain-specific semantic annotation with arbitrarily rich semantic models easily handled. We used several evaluation metrics on two different business regulatory texts, for which our approach achieved good performance. In particular, it obtained 91.9%–97.65% F-measure in the label and position predictions under different settings. This suggests that human annotators can be strongly supported in domain-specific semantic annotation tasks.

Yue Ma and Julian Mendez: **High Quality Data Generation: An Ontology Reasoning based Approach**. In *International Workshop on Artificial Intelligence for Big Data (in conjunction with IJCAI'13)*, 2013. To appear.

BibTeX entry

#### Abstract:

As Big Data is getting increasingly more helpful for different applications, the problem of obtaining reliable data becomes important. The importance is more obvious for domain specific applications because of their abstruse domain knowledge. Most Big Data based techniques directly manipulate datasets under the assumption that data quantity can lead to good system quality. In this paper, we show that the quality can be improved by automatically enriching a given dataset with more high-quality data beforehand. This is achieved by a tractable reasoning technique over the widely used biomedical ontology SNOMED CT. Our approach is evaluated in the scenario of formal definition generation from natural language texts, where the average precision of learned definitions is improved by 5.3%.
Francisco Martin-Recuerda and Dirk Walther: **Towards Fast Atomic Decomposition using Axiom Dependency Hypergraphs**. In Chiara Del Vescovo, Torsten Hahmann, David Pearce, and Dirk Walther, editors, *Proceedings of the 7th International Workshop on Modular Ontologies co-located with the 12th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR 2013), Corunna, Spain, September 15, 2013*, volume 1081 of *CEUR Workshop Proceedings*. CEUR-WS.org, 2013.

BibTeX entry
Paper (PDF)

Rafael Peñaloza and Anni-Yasmin Turhan: **Instance-based Non-standard Inferences in EL with Subjective Probabilities**. In Fernando Bobillo, Paulo C. G. Costa, Claudia d'Amato, Nicola Fanizzi, Kathryn B. Laskey, Kenneth J. Laskey, Thomas Lukasiewicz, Matthias Nickles, and Michael Pool, editors, *Uncertainty Reasoning for the Semantic Web II, International Workshops URSW 2008-2010 Held at ISWC and UniDL 2010 Held at FLoC, Revised Selected Papers*, number 7123 in *Lecture Notes in Computer Science*, pages 80–98. Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag
(The final publication is available at link.springer.com)

#### Abstract:

For practical ontology-based applications representing and reasoning with probabilities is an essential task. For Description Logics with subjective probabilities, reasoning procedures for testing instance relations based on the completion method have been developed. In this paper we extend this technique to devise algorithms for solving non-standard inferences for EL and its probabilistic extension Prob-EL^{01}: computing the most specific concept of an individual and finding explanations for instance relations.

Rafael Peñaloza and Tingting Zou: **Rough EL Classification**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 2013 International Workshop on Description Logics (DL'13)*, volume 1014 of *CEUR-WS*, pages 415–427, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Rough Description Logics (DLs) have been studied as a means for representing and reasoning with imprecise knowledge. It has been shown that reasoning in rough DLs can be reduced to reasoning in a classical DL that allows value restrictions, and transitive and inverse roles. This shows that for propositionally closed DLs, the complexity of reasoning is not increased by the inclusion of rough constructors. However, applying such a reduction to rough EL yields an exponential time upper bound. We show that this blow-up in complexity can be avoided, providing a polynomial-time completion-based algorithm for classifying rough EL ontologies.
Rafael Peñaloza and Tingting Zou: **Roughening the EL Envelope**. In P. Fontaine, C. Ringeissen, and R. A. Schmidt, editors, *Proceedings of the 2013 International Symposium on Frontiers of Combining Systems (FroCoS 2013)*, volume 8152 of *Lecture Notes in Computer Science*, pages 71–86. Nancy, France, Springer-Verlag, 2013.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The EL family of description logics (DLs) has been successfully applied for representing the knowledge of several domains, especially from the bio-medical fields. One of its principal characteristics is that its reasoning tasks have polynomial complexity, which makes them suitable for large-scale knowledge bases. In their classical form, these logics cannot handle imprecise concepts in a satisfactory manner. Rough sets have been studied as a method for describing imprecise notions, by providing a lower and an upper approximation, which are defined through classes of indiscernible elements. In this paper we study the combination of the EL family of DLs with the notion of rough sets, thus obtaining a family of rough DLs. We show that the rough extension of these DLs maintains the polynomial-time complexity enjoyed by its classical counterpart. We also present a completion-based algorithm that is a strict generalization of the known method for the DL EL++.
Uwe Ryssel, Felix Distel, and Daniel Borchmann: **Fast algorithms for implication bases and attribute exploration using proper premises**. *Annals of Mathematics and Artificial Intelligence*, Special Issue 65:1–29, 2013.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

A central task in formal concept analysis is the enumeration of a small base for the implications that hold in a formal context. The usual stem base algorithms have been proven to be costly in terms of runtime. Proper premises are an alternative to the stem base. We present a new algorithm for the fast computation of proper premises. It is based on a known link between proper premises and minimal hypergraph transversals. Two further improvements are made, which reduce the number of proper premises that are obtained multiple times and redundancies within the set of proper premises. We have evaluated our algorithms within an application related to refactoring of model variants. In this application an implicational base needs to be computed, and runtime is more crucial than minimal cardinality. In addition to the empirical tests, we provide heuristic evidence that an approach based on proper premises will also be beneficial for other applications. Finally, we show how our algorithms can be extended to an exploration algorithm that is based on proper premises.
Veronika Thost, Konrad Voigt, and Daniel Schuster: **Query Matching for Report Recommendation**. In *Proceedings of the 22Nd ACM International Conference on Conference on Information and Knowledge Management*, *CIKM '13*, pages 1391–1400. San Francisco, California, USA, ACM, 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Today, reporting is an essential part of everyday business life. But the preparation of complex Business Intelligence data by formulating relevant queries and presenting them in meaningful visualizations, so-called reports, is a challenging task for non-expert database users. To support these users with report creation, we leverage existing queries and present a system for query recommendation in a reporting environment, which is based on query matching. Targeting large-scale, real-world reporting scenarios, we propose a scalable, index-based query matching approach. Moreover, schema matching is applied for a more fine-grained, structural comparison of the queries. In addition to interactively providing content-based query recommendations of good quality, the system works independently of particular data sources or query languages. We evaluate our system with an empirical data set and show that it achieves an F1-Measure of 0.56 and outperforms the approaches applied by state-of-the-art reporting tools (e.g., keyword search) by up to 30%.
George Tsatsaronis, Alina Petrova, Maria Kissa, Yue Ma, Felix Distel, Franz Baader, and Michael Schroeder: **Learning Formal Definitions for Biomedical Concepts**. In Kavitha Srinivas and Simon Jupp, editors, *Proceedings of the 10th OWL: Experiences and Directions Workshop (OWLED 2013)*, May 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontologies such as the SNOMED Clinical Terms (SNOMED CT), and the Medical Subject Headings (MeSH) play a major role in life sciences. Formally modeling the concepts and the roles in this domain is a crucial process to allow for the integration of biomedical knowledge across applications. In this direction we propose a novel methodology to learn formal definitions for biomedical concepts from unstructured text. We evaluate experimentally the suggested methodology in learning formal definitions of SNOMED CT concepts, using their text definitions from MeSH. The evaluation is focused on the learning of three roles which are among the most populated roles in SNOMED CT: Associated Morphology, Finding Site and Causative Agent. Results show that our methodology may provide an Accuracy of up to 75%. For the representation of the instances three main approaches are suggested, namely, Bag of Words, word n-grams and character n-grams.
Anni-Yasmin Turhan: **Introductions to Description Logics - A Guided Tour**. In Sebastian Rudolph, Georg Gottlob, Ian Horrocks, and Frank van Harmelen, editors, *Reasoning Web. Semantic Technologies for Intelligent Data Access - 9th International Summer School*, volume 8067 of *Lecture Notes in Computer Science*, pages 150–161. Springer, 2013.

BibTeX entry

#### Abstract:

Description Logics (DLs) are the logical formalism underlying the standard web ontology language OWL 2. DLs have formal semantics which are the basis for many powerful reasoning services. This paper provides an overview of basic topics in the field of Description Logics by surveying the introductory literature and course material with a focus on DL reasoning services. The resulting compilation also gives a historical perspective on DLs as a research area.
Anni-Yasmin Turhan and Benjamin Zarrieß: **Computing the lcs w.r.t. General EL^{+} TBoxes**. In Thomas Eiter, Birte Glimm, Yevgeny Kazakov, and Markus Krötzsch, editors, *Proceedings of the 26th International Workshop on Description Logics (DL-2013)*, *CEUR Workshop Proceedings*, pages 477–488. Ulm, Germany, CEUR-WS.org, July 2013.

BibTeX entry
Paper (PDF)

#### Abstract:

Recently, exact conditions for the existence of the least common subsumer (lcs) computed w.r.t. general *EL*-TBoxes have been devised. This paper extends these results and provides necessary and sufficient conditions for the existence of the lcs w.r.t. *EL*^{+}-TBoxes. We show decidability of the existence in PTime and polynomial bounds on the maximal role-depth of the lcs, which in turn yields a computation algorithm for the lcs w.r.t. *EL*^{+}-TBoxes.
Benjamin Zarrieß and Anni-Yasmin Turhan: **Most Specific Generalizations w.r.t. General EL-TBoxes**. In *Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13)*. Beijing, China, AAAI Press, 2013.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

In the area of Description Logics the least common subsumer (lcs) and the most specific concept (msc) are inferences that generalize a set of concepts or an individual, respectively, into a single concept. If computed w.r.t. a general EL-TBox, neither the lcs nor the msc need exist. So far, no exact conditions for the existence of lcs- or msc-concepts are known in this setting. This paper provides necessary and sufficient conditions for the existence of these two kinds of concepts. For the lcs of a fixed number of concepts and for the msc, we show decidability of the existence in PTime and polynomial bounds on the maximal role-depth of the lcs- and msc-concepts. The latter allows the lcs and the msc, respectively, to be computed.
Thomas Zerjatke and Monika Sturm: **Solving a PSPACE-complete problem by gene assembly**. *Journal of Logic and Computation*, 23(4):897–908, 2013.

BibTeX entry
DOI

#### Abstract:

Gene assembly is a natural process of genome re-arrangement that occurs during sexual reproduction of unicellular organisms called ciliates. Two computational models adapting this process of gene assembly have been proposed: the intramolecular, e.g. (Ehrenfeucht et al., 2004, Computation in Living Cells: Gene Assembly in Ciliates), and the intermolecular model, e.g. (Landweber and Kari, 2001, Evolution as Computation). A context sensitive version of the intramolecular model introduced by Ishdorj and Petre (2007, Proceedings of the 6th International Conference on Unconventional Computation) was shown to be computationally universal and efficient for solving NP-complete problems. In this article we show that within this model PSPACE-complete problems can also be solved in linear time.

## 2012

Franz Baader, Stefan Borgwardt, Julian Alfredo Mendez, and Barbara Morawska: **UEL: Unification Solver for EL**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 25th International Workshop on Description Logics (DL'12)*, volume 846 of *CEUR Workshop Proceedings*, pages 26–36, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

UEL is a system that computes unifiers for unification problems formulated in the description logic EL. EL is a description logic with restricted expressivity, but which is still expressive enough for the formal representation of biomedical ontologies, such as the large medical ontology SNOMED CT. We propose to use UEL as a tool to detect redundancies in such ontologies by computing unifiers of two formal concepts suspected of expressing the same concept of the application domain. UEL provides access to two different unification algorithms and can be used as a plug-in of the popular ontology editor Protégé, or stand-alone.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **A Goal-Oriented Algorithm for Unification in ELH_{R+} w.r.t. Cycle-Restricted Ontologies**. In Michael Thielscher and Dongmo Zhang, editors, *Proceedings of the 25th Australasian Joint Conference on Artificial Intelligence (AI'12)*, volume 7691 of *Lecture Notes in Artificial Intelligence*, pages 493–504. Sydney, Australia, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. A goal-oriented NP unification algorithm for EL that uses nondeterministic rules to transform a given unification problem into solved form has recently been presented. In this paper, we extend this goal-oriented algorithm in two directions: on the one hand, we add general concept inclusion axioms (GCIs), and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the algorithm to be complete, however, the ontology consisting of the GCIs and role axioms needs to satisfy a certain cycle restriction.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **A Goal-Oriented Algorithm for Unification in EL w.r.t. Cycle-Restricted TBoxes**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 25th International Workshop on Description Logics (DL'12)*, volume 846 of *CEUR Workshop Proceedings*, pages 37–47, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has been shown to be NP-complete, and thus of significantly lower complexity than unification in other DLs of similarly restricted expressive power. Recently, a brute-force NP-unification algorithm for EL w.r.t. a restricted form of general concept inclusion axioms was developed. This paper introduces a goal-oriented algorithm that reduces the amount of nondeterministic guesses considerably.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Computing Minimal EL-unifiers is Hard**. In Silvio Ghilardi and Lawrence Moss, editors, *Proceedings of the 9th International Conference on Advances in Modal Logic (AiML'12)*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification has been investigated both in modal logics and in description logics, albeit with different motivations. In description logics, unification can be used to detect redundancies in ontologies. In this context, it is not sufficient to decide unifiability; one must also compute appropriate unifiers and present them to the user. For the description logic EL, which is used to define several large biomedical ontologies, deciding unifiability is an NP-complete problem. It is known that every solvable EL-unification problem has a minimal unifier, and that every minimal unifier is a local unifier. Existing unification algorithms for EL compute all minimal unifiers, but additionally (all or some) non-minimal local unifiers. Computing only the minimal unifiers would be better since there are considerably fewer minimal unifiers than local ones, and their size is usually also quite small. In this paper we investigate the question whether the known algorithms for EL-unification can be modified such that they compute exactly the minimal unifiers without changing the complexity and the basic nature of the algorithms. Basically, the answer we give to this question is negative.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Extending Unification in EL Towards General TBoxes**. In Gerhard Brewka, Thomas Eiter, and Sheila A. McIlraith, editors, *Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning (KR'12)*, pages 568–572. AAAI Press, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of significantly lower complexity than unification in other DLs of similarly restricted expressive power. However, the unification algorithms for EL developed so far cannot deal with general concept inclusion axioms (GCIs). This paper makes a considerable step towards addressing this problem, but the GCIs our new unification algorithm can deal with still need to satisfy a certain cycle restriction.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **Recent Advances in Unification for the EL Family**. In Santiago Escobar, Konstantin Korovin, and Vladimir Rybakov, editors, *Proceedings of the 26th International Workshop on Unification (UNIF'12)*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics (DLs) has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the DL EL, which is used to define several large biomedical ontologies, unification is NP-complete. Several algorithms that solve unification in EL have previously been presented. In this paper, we summarize recent extensions of these algorithms that can deal with general concept inclusion axioms (GCIs), role hierarchies (H), and transitive roles (R+). For the algorithms to be complete, however, the ontology consisting of the GCIs and role axioms needs to satisfy a certain cycle restriction.
Franz Baader, Stefan Borgwardt, and Barbara Morawska: **SAT-Encoding of Unification in ELH_{R+} w.r.t. Cycle-Restricted Ontologies**. In *Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)*, volume 7364 of *Lecture Notes in Artificial Intelligence*, pages 30–44. Manchester, UK, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Unification in Description Logics has been proposed as an inference service that can, for example, be used to detect redundancies in ontologies. For the Description Logic EL, which is used to define several large biomedical ontologies, unification is NP-complete. An NP unification algorithm for EL based on a translation into propositional satisfiability (SAT) has recently been presented. In this paper, we extend this SAT encoding in two directions: on the one hand, we add general concept inclusion axioms, and on the other hand, we add role hierarchies (H) and transitive roles (R+). For the translation to be complete, however, the ontology needs to satisfy a certain cycle restriction. The SAT translation depends on a new rewriting-based characterization of subsumption w.r.t. ELH_{R+}-ontologies.
Franz Baader, Silvio Ghilardi, and Carsten Lutz: **LTL over Description Logic Axioms**. *ACM Trans. Comput. Log.*, 13(3), 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Most of the research on temporalized Description Logics (DLs) has concentrated on the case where temporal operators can be applied to concepts, and sometimes additionally to TBox axioms and ABox assertions. The aim of this paper is to study temporalized DLs where temporal operators on TBox axioms and ABox assertions are available, but temporal operators on concepts are not. While the main application of existing temporalized DLs is the representation of conceptual models that explicitly incorporate temporal aspects, the family of DLs studied in this paper addresses applications that focus on the temporal evolution of data and of ontologies. Our results show that disallowing temporal operators on concepts can significantly decrease the complexity of reasoning. In particular, reasoning with rigid roles (whose interpretation does not change over time) is typically undecidable without such a syntactic restriction, whereas our logics are decidable in elementary time even in the presence of rigid roles. We analyze the effects on computational complexity of dropping rigid roles, dropping rigid concepts, replacing temporal TBoxes with global ones, and restricting the set of available temporal operators. In this way, we obtain a novel family of temporalized DLs whose complexity ranges from 2-ExpTime-complete via NExpTime-complete to ExpTime-complete.
Franz Baader, Martin Knechtel, and Rafael Peñaloza: **Context-Dependent Views to Axioms and Consequences of Semantic Web Ontologies**. *Journal of Web Semantics*, 12–13:22–40, 2012. Available at http://dx.doi.org/10.1016/j.websem.2011.11.006

BibTeX entry
Paper (PDF)

#### Abstract:

The framework developed in this paper can deal with scenarios where selected sub-ontologies of a large ontology are offered as views to users, based on contexts like the access rights of a user, the trust level required by the application, or the level of detail requested by the user. Instead of materializing a large number of different sub-ontologies, we propose to keep just one ontology, but equip each axiom with a label from an appropriate context lattice. The different contexts of this ontology are then also expressed by elements of this lattice. For large-scale ontologies, certain consequences (like the subsumption hierarchy) are often pre-computed. Instead of pre-computing these consequences for every context, our approach computes just one label (called a boundary) for each consequence such that a comparison of the user label with the consequence label determines whether the consequence follows from the sub-ontology determined by the context. We describe different black-box approaches for computing boundaries, and present first experimental results that compare the efficiency of these approaches on large real-world ontologies. Black-box means that, rather than requiring modifications of existing reasoning procedures, these approaches can use such procedures directly as sub-procedures, which allows us to employ existing highly-optimized reasoners. Similar to designing ontologies, the process of assigning axiom labels is error-prone. For this reason, we also address the problem of how to repair the labelling of an ontology in case the knowledge engineer notices that the computed boundary of a consequence does not coincide with her intuition regarding in which context the consequence should or should not be visible.
Franz Baader, Julian Mendez, and Barbara Morawska: **UEL: Unification Solver for the Description Logic EL – System Description**. In *Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)*, volume 7364 of *Lecture Notes in Artificial Intelligence*, pages 45–51. Manchester, UK, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

UEL is a system that computes unifiers for unification problems formulated in the description logic EL. EL is a description logic with restricted expressivity, but which is still expressive enough for the formal representation of biomedical ontologies, such as the large medical ontology SNOMED CT. We propose to use UEL as a tool to detect redundancies in such ontologies by computing unifiers of two formal concepts suspected of expressing the same concept of the application domain. UEL can be used as a plug-in of the popular ontology editor Protege, or as a standalone unification application.
Franz Baader and Alexander Okhotin: **Solving language equations and disequations with applications to disunification in description logics and monadic set constraints**. In Nikolaj Bjørner and Andrei Voronkov, editors, *Proceedings of the 18th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR-12)*, volume 7180 of *Lecture Notes in Computer Science*, pages 107–121. Mérida, Venezuela, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We extend previous results on the complexity of solving language equations with one-sided concatenation and all Boolean operations to the case where also disequations (i.e., negated equations) may occur. To show that solvability of systems of equations and disequations is still in ExpTime, we introduce a new type of automata working on infinite trees, which we call looping automata with colors. As applications of these results, we show new complexity results for disunification in the description logic FL0 and for monadic set constraints with negation. We believe that looping automata with colors may also turn out to be useful in other applications.
Stefan Borgwardt, Felix Distel, and Rafael Peñaloza: **How Fuzzy is my Fuzzy Description Logic?**. In Bernhard Gramlich, Dale Miller, and Ulrike Sattler, editors, *Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12)*, volume 7364 of *Lecture Notes in Artificial Intelligence*, pages 82–96. Manchester, UK, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Fuzzy Description Logics (DLs) with t-norm semantics have been studied as a means for representing and reasoning with vague knowledge. Recent work has shown that even fairly inexpressive fuzzy DLs become undecidable for a wide variety of t-norms. We complement those results by providing a class of t-norms and an expressive fuzzy DL for which ontology consistency is linearly reducible to crisp reasoning, and thus has the same complexity. Surprisingly, in these same logics crisp models are insufficient for deciding fuzzy subsumption.
Stefan Borgwardt, Felix Distel, and Rafael Peñaloza: **Gödel Negation Makes Unwitnessed Consistency Crisp**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 2012 International Workshop on Description Logics (DL'12)*, volume 846 of *CEUR-WS*, pages 103–113, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontology consistency has been shown to be undecidable for a wide variety of fairly inexpressive fuzzy Description Logics (DLs). In particular, for any t-norm "starting with" the Lukasiewicz t-norm, consistency of crisp ontologies (w.r.t. witnessed models) is undecidable in any fuzzy DL with conjunction, existential restrictions, and (residual) negation. In this paper we show that for any t-norm with Gödel negation, that is, any t-norm not starting with Lukasiewicz, ontology consistency for a variant of fuzzy SHOI is linearly reducible to crisp reasoning, and hence decidable in exponential time. Our results hold even if reasoning is not restricted to the class of witnessed models only.
Stefan Borgwardt and Barbara Morawska: **Finding Finite Herbrand Models**. In Nikolaj Bjørner and Andrei Voronkov, editors, *Proceedings of the 18th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR'12)*, volume 7180 of *Lecture Notes in Computer Science*, pages 138–152. Mérida, Venezuela, Springer-Verlag, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We show that finding finite Herbrand models for a restricted class of first-order clauses is ExpTime-complete. A Herbrand model is called finite if it interprets all predicates by finite subsets of the Herbrand universe. The restricted class of clauses consists of anti-Horn clauses with monadic predicates and terms constructed over unary function symbols and constants. The decision procedure can be used as a new goal-oriented algorithm to solve linear language equations and unification problems in the description logic FL0. The new algorithm has only worst-case exponential runtime, in contrast to the previous one which was even best-case exponential.
Stefan Borgwardt and Rafael Peñaloza: **A Tableau Algorithm for Fuzzy Description Logics over Residuated De Morgan Lattices**. In Markus Krötzsch and Umberto Straccia, editors, *Proceedings of the 6th International Conference on Web Reasoning and Rule Systems (RR 2012)*, volume 7497 of *Lecture Notes in Computer Science*, pages 9–24. Springer, 2012.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Fuzzy description logics can be used to model vague knowledge in application domains. This paper analyses the consistency and satisfiability problems in the description logic SHI with semantics based on a complete residuated De Morgan lattice. The problems are undecidable in the general case, but can be decided by a tableau algorithm when restricted to finite lattices. For some sublogics of SHI, we provide upper complexity bounds that match the complexity of crisp reasoning.
Stefan Borgwardt and Rafael Peñaloza: **Non-Gödel Negation Makes Unwitnessed Consistency Undecidable**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 2012 International Workshop on Description Logics (DL'12)*, volume 846 of *CEUR-WS*, pages 411–421, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Recent results show that ontology consistency is undecidable for a wide variety of fuzzy Description Logics (DLs). Most notably, undecidability arises for a family of inexpressive fuzzy DLs using only conjunction, existential restrictions, and residual negation, even if the ontology itself is crisp. All those results depend on restricting reasoning to witnessed models. In this paper, we show that ontology consistency for inexpressive fuzzy DLs using any t-norm starting with the Łukasiewicz t-norm is also undecidable w.r.t. general models.
Stefan Borgwardt and Rafael Peñaloza: **Undecidability of Fuzzy Description Logics**. In Gerhard Brewka, Thomas Eiter, and Sheila A. McIlraith, editors, *Proceedings of the 13th International Conference on Principles of Knowledge Representation and Reasoning (KR 2012)*, pages 232–242. Rome, Italy, AAAI Press, 2012.

BibTeX entry
Paper (PDF)
©AAAI

#### Abstract:

Fuzzy description logics (DLs) have been investigated for over two decades, due to their capacity to formalize and reason with imprecise concepts. Very recently, it has been shown that for several fuzzy DLs, reasoning becomes undecidable. Although the proofs of these results differ in the details of each specific logic considered, they are all based on the same basic idea. In this paper, we formalize this idea and provide sufficient conditions for proving undecidability of a fuzzy DL. We demonstrate the effectiveness of our approach by strengthening all previously-known undecidability results and providing new ones. In particular, we show that undecidability may arise even if only crisp axioms are considered.
Felix Distel: **Adapting Fuzzy Formal Concept Analysis for Fuzzy Description Logics**. In *Proceedings of the 9th International Conference on Concept Lattices and Their Applications (CLA 2012)*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Logics have been applied successfully within both Formal Concept Analysis and Description Logics. Especially in the latter field, Fuzzy Logics have been gaining significant momentum during the last two years. Unfortunately, the research on fuzzy logics within the two communities has been conducted independently from each other, leading to different approaches being pursued. We show that if we look at a restricted variant of fuzzy formal concept analysis, then the differences between the two approaches can be reconciled. Moreover, an implicational base can be computed even when the identity hedge is used.
Andreas Ecke and Anni-Yasmin Turhan: **Optimizations for the role-depth bounded least common subsumer in EL+**. In Matthew Horridge and Pavel Klinov, editors, *Proc. of 9th OWL: Experiences and Directions Workshop (OWLED 2012)*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Computing the least common subsumer (lcs) yields a generalization of a collection of concepts; computing such generalizations is a useful reasoning task for many ontology-based applications. Since the lcs need not exist if computed w.r.t. general TBoxes, an approximative approach, the role-depth bounded lcs, has been proposed. Recently, this approach has been extended to the Description Logic EL+, which covers most of the OWL 2 EL profile. In this paper we present two kinds of optimizations for the computation of such approximative lcs: one to obtain succinct rewritings of *EL+*-concepts and the other to speed up the k-lcs computation. The evaluation of these optimizations gives evidence that they can improve the computation of the role-depth bounded lcs by orders of magnitude.

Andreas Ecke and Anni-Yasmin Turhan: **Role-depth Bounded Least Common Subsumers for EL+ and ELI**. In Yevgeny Kazakov and Frank Wolter, editors, *Proc. of Description Logics Workshop*, volume 846 of *CEUR-WS*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

For *EL* the least common subsumer (lcs) need not exist if computed w.r.t. general TBoxes. In case the role-depth of the lcs concept description is bounded, an approximate solution can be obtained. In this paper we extend the completion-based method for computing such approximate solutions to *ELI* and *EL+*. For *ELI* the extension needs to be able to treat complex node labels. For *EL+* a naive method generates highly redundant concept descriptions, for which we devise a heuristic that produces smaller, but equivalent concept descriptions. We demonstrate the usefulness of this heuristic by an evaluation.

Weili Fu and Rafael Peñaloza: **Adding Context to Tableaux for DLs**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 2012 International Workshop on Description Logics (DL'12)*, volume 846 of *CEUR-WS*, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

We consider the problem of reasoning with ontologies where every axiom is associated to a context, and contexts are related through a total order. These contexts could represent, for example, a degree of trust associated to the axiom, or a level of granularity for the knowledge provided. We describe an extension of tableaux-based decision procedures into methods that compute the best-fitting context for the consequences of an ontology, and apply it to the tableaux algorithm for ALC. We also describe an execution strategy that preserves most of the standard optimizations used in modern DL reasoners.
Karsten Lehmann and Rafael Peñaloza: **The Complexity of Computing the Behaviour of Weighted Büchi Automata over Lattices**. In Heiko Vogler and Manfred Droste, editors, *Proceedings of the 6th International Workshop Weighted Automata: Theory and Applications (WATA'12)*, 2012.

BibTeX entry
Paper (PDF)

Karsten Lehmann and Anni-Yasmin Turhan: **A Framework for Semantic-based Similarity Measures for ELH-Concepts**. In Luis Fariñas del Cerro, Andreas Herzig, and Jérôme Mengin, editors, *Proceedings of the 13th European Conference on Logics in Artificial Intelligence*, *Lecture Notes in Artificial Intelligence*, pages 307–319. Springer Verlag, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Similarity measures for concepts written in Description Logics (DLs) are often devised based on the syntax of concepts or simply by adjusting them to a set of instance data. These measures do not take the semantics of the concepts into account and can thus lead to unintuitive results. It even remains unclear how these measures behave if applied to new domains or new sets of instance data. In this paper we develop a framework for similarity measures for ELH-concept descriptions based on the semantics of the DL ELH. We show that our framework ensures that the measures resulting from instantiations fulfill fundamental properties, such as equivalence invariance, yet the framework provides the flexibility to adjust measures to specifics of the modelled domain.
Frederick Maier, Yue Ma, and Pascal Hitzler: **Paraconsistent OWL and Related Logics**. *Semantic Web Journal*, March 2012.

BibTeX entry
Paper (PDF)

Julian Mendez: **jcel: A Modular Rule-based Reasoner**. In *Proceedings of the 1st International Workshop on OWL Reasoner Evaluation (ORE 2012)*, volume 858, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

jcel is a reasoner for the description logic EL+ that uses a rule-based completion algorithm. Such algorithms are used to compute subsumptions in the lightweight description logic EL and its extensions. One of these extensions is EL+, a description logic with restricted expressivity that is nonetheless used in the formal representation of biomedical ontologies. These ontologies can be encoded using the Web Ontology Language (OWL) and, through the OWL API, edited using the popular ontology editor Protege. jcel implements a subset of the OWL 2 EL profile, and can be used as a Java library or as a Protege plug-in. This system description presents the architecture and main features of jcel, and reports some of the challenges and limitations faced in its development.
Guohui Xiao and Yue Ma: **Inconsistency Measurement based on Variables in Minimal Unsatisfiable Subsets**. In *Proceedings of European Conference on Artificial Intelligence (ECAI'12)*, pages 864–869, 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

Measuring inconsistency degrees of knowledge bases (KBs) provides important context information for facilitating inconsistency handling. Several semantic and syntax based measures have been proposed separately. In this paper, we propose a new way to define inconsistency measurements by combining semantic and syntax based approaches. It is based on counting the variables of minimal unsatisfiable subsets (MUSes) and minimal correction subsets (MCSes), which leads to two equivalent inconsistency degrees, named IDMUS and IDMCS. We give theoretical and experimental comparisons between them and two purely semantic-based inconsistency degrees: the 4-valued and the Quasi Classical semantics based inconsistency degrees. Moreover, the computational complexities related to our new inconsistency measurements are studied. Since computing the exact inconsistency degrees turns out to be intractable in general, we propose and evaluate an anytime algorithm to make IDMUS and IDMCS usable in knowledge management applications. In particular, while most syntax based measures tend to be difficult to compute in practice due to the exponential number of MUSes, our new inconsistency measures are practical because the numbers of variables in MUSes are often limited or easy to approximate. We evaluate our approach on the DC benchmark. Our encouraging experimental results show that these new inconsistency measurements, or their approximations, can efficiently handle large knowledge bases and better distinguish inconsistent knowledge bases.
Wael Yehia, Hongkai Liu, Marcel Lippmann, Franz Baader, and Mikhail Soutchanski: **Experimental Results on Solving the Projection Problem in Action Formalisms Based on Description Logics**. In Yevgeny Kazakov, Domenico Lembo, and Frank Wolter, editors, *Proceedings of the 25th International Workshop on Description Logics (DL-2012)*, volume 846 of *CEUR Workshop Proceedings*. Rome, Italy, CEUR-WS.org, June 2012.

BibTeX entry
Paper (PDF)

#### Abstract:

In the reasoning about actions community, one of the most basic reasoning problems is the projection problem: the question whether a certain assertion holds after executing a sequence of actions. While undecidable for general action theories based on the situation calculus, the projection problem was shown to be decidable in two different restrictions of the situation calculus to theories formulated using description logics. In this paper, we compare our implementations of projection procedures for these two approaches on random testing data for several realistic application domains. Important contributions of this work are not only the obtained experimental results, but also the approach for generating test cases. By using patterns extracted from the respective application domains, we ensure that the randomly generated input data make sense and are not inconsistent.

## 2011

F. Baader and S. Ghilardi: **Unification in Modal and Description Logics**. *Logic Journal of the IGPL*, 19(6):705–730, 2011. Available at http://jigpal.oxfordjournals.org/content/19/6/705.abstract

BibTeX entry
Paper (PDF)

#### Abstract:

Unification was originally introduced in automated deduction and term rewriting, but has recently also found applications in other fields. In this article, we give a survey of the results on unification obtained in two closely related, yet different, application areas of unification: description logics and modal logics.
Franz Baader: **What's new in Description Logics**. *Informatik-Spektrum*, 34(5):434–442, 2011.

BibTeX entry
(The final publication is available at link.springer.com)

#### Abstract:

Mainstream research in Description Logics (DLs) until recently concentrated on increasing the expressive power of the employed description language while keeping standard inference problems like subsumption and instance checking manageable, in the sense that highly optimized reasoning procedures for them behave well in practice. One of the main successes of this line of research was the adoption of OWL DL, which is based on an expressive DL, as the standard ontology language for the Semantic Web. More recently, there has been a growing interest in more light-weight DLs and in other kinds of inference problems, mainly triggered by the needs of applications with large-scale ontologies. In this paper, we first review the DL research leading to the very expressive DLs with practical inference procedures underlying OWL, and then sketch the recent development of light-weight DLs and novel inference procedures.
Franz Baader, Nguyen Thanh Binh, Stefan Borgwardt, and Barbara Morawska: **Computing Local Unifiers in the Description Logic EL without the Top Concept**. In Franz Baader, Barbara Morawska, and Jan Otop, editors, *Proceedings of the 25th International Workshop on Unification (UNIF'11)*, pages 2–8, 2011.

BibTeX entry
Paper (PDF)

Franz Baader, Nguyen Thanh Binh, Stefan Borgwardt, and Barbara Morawska: **Unification in the Description Logic EL without the Top Concept**. In Nikolaj Bjørner and Viorica Sofronie-Stokkermans, editors, *Proceedings of the 23rd International Conference on Automated Deduction (CADE 2011)*, volume 6803 of *Lecture Notes in Computer Science*, pages 70–84. Wroclaw, Poland, Springer-Verlag, 2011.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show in this paper that unification in EL without the top concept is PSpace-complete.
Franz Baader, Nguyen Thanh Binh, Stefan Borgwardt, and Barbara Morawska: **Unification in the Description Logic EL without the Top Concept**. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the 24th International Workshop on Description Logics (DL 2011)*, volume 745 of *CEUR-WS*, pages 26–36, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic EL is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using EL. On the other hand, unification in EL has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, EL allows the use of the top concept, which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from EL makes the unification problem considerably harder. More precisely, we will show that unification in EL without the top concept is PSpace-complete.
Franz Baader and Rafael Peñaloza: **Are Fuzzy Description Logics with General Concept Inclusion Axioms Decidable?**. In *Proceedings of 2011 IEEE International Conference on Fuzzy Systems (Fuzz-IEEE 2011)*, pages 1735–1742. IEEE Press, 2011.

BibTeX entry
Paper (PDF)
©IEEE Press

#### Abstract:

This paper concentrates on a fuzzy Description Logic with product t-norm and involutive negation. It does not answer the question posed in its title for this logic, but it gives strong indications that the answer might in fact be "no." On the one hand, it shows that an algorithm that was claimed to answer the question affirmatively for this logic is actually incorrect. On the other hand, it proves undecidability of a variant of this logic.
Franz Baader and Rafael Peñaloza: **GCIs Make Reasoning in Fuzzy DL with the Product T-norm Undecidable**. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the 24th International Workshop on Description Logics (DL 2011)*, volume 745 of *CEUR-WS*, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Fuzzy Description Logics (DLs) have been investigated for at least two decades because they can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp DLs have been extended to reason also with their fuzzy counterparts. Recently, it has been shown that, in the presence of GCIs, some of these fuzzy DLs do not have the finite model property, thus throwing doubt on the correctness of tableau algorithms claimed to handle fuzzy DLs with GCIs. Previously, we have shown that these doubts are indeed justified, by proving that a certain fuzzy DL with product t-norm and involutive negation is undecidable. In this paper, we show that undecidability also holds if we consider a fuzzy DL where disjunction and involutive negation are replaced by the constructor implication, interpreted as the residuum.
Franz Baader and Rafael Peñaloza: **On the Undecidability of Fuzzy Description Logics with GCIs and Product t-norm**. In Cesare Tinelli and Viorica Sofronie-Stokkermans, editors, *Proceedings of 8th International Symposium Frontiers of Combining Systems (FroCoS 2011)*, volume 6989 of *Lecture Notes in Artificial Intelligence*, pages 55–70. Saarbrücken, Germany, Springer-Verlag, 2011.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The combination of Fuzzy Logics and Description Logics (DLs) has been investigated for at least two decades because such fuzzy DLs can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp Description Logics have been extended to reason also with their fuzzy counterparts. Recently, it has been shown that, in the presence of general concept inclusion axioms (GCIs), some of these fuzzy DLs actually do not have the finite model property, thus throwing doubt on the correctness of tableau algorithms for which it was claimed that they can handle fuzzy DLs with GCIs. In a previous paper, we have shown that these doubts are indeed justified, by proving that a certain fuzzy DL with product t-norm and involutive negation is undecidable. In the present paper, we show that undecidability also holds if we consider a t-norm-based fuzzy DL where disjunction and involutive negation are replaced by the constructor implication, which is interpreted as the residuum. The only condition on the t-norm is that it is a continuous t-norm "starting" with the product t-norm, which covers an uncountable family of t-norms.
Daniel Borchmann and Felix Distel: **Mining of EL-GCIs**. In *The 11th IEEE International Conference on Data Mining Workshops*. Vancouver, Canada, IEEE Computer Society, 11 December 2011.

BibTeX entry
Paper (PDF)
©IEEE Press

#### Abstract:

We consider an existing approach for mining general inclusion axioms written in the lightweight Description Logic EL. In comparison to classical association rule mining, this approach allows more complex patterns to be obtained. Ours is the first implementation of these algorithms for learning Description Logic axioms. We use our implementation for a case study on two real world datasets. We discuss the outcome and examine what further research will be needed for this approach to be applied in a practical setting.
Stefan Borgwardt and Rafael Peñaloza: **Description Logics over Lattices with Multi-valued Ontologies**. In Toby Walsh, editor, *Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI'11)*, pages 768–773. Barcelona, Spain, AAAI Press, 2011.

BibTeX entry
Paper (PDF)
©IJCAI

#### Abstract:

Uncertainty is unavoidable when modeling most application domains. In medicine, for example, symptoms (such as pain, dizziness, or nausea) are always subjective, and hence imprecise and incomparable. Additionally, concepts and their relationships may be inexpressible in a crisp, clear-cut manner. We extend the description logic ALC with multi-valued semantics based on lattices that can handle uncertainty on concepts as well as on the axioms of the ontology. We introduce reasoning methods for this logic w.r.t. general concept inclusions and show that the complexity of reasoning is not increased by this new semantics.
Stefan Borgwardt and Rafael Peñaloza: **Finite Lattices Do Not Make Reasoning in ALCI Harder**. In Fernando Bobillo et al., editors, *Proceedings of the 7th International Workshop on Uncertainty Reasoning for the Semantic Web (URSW'11)*, volume 778 of *CEUR-WS*, pages 51–62, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

We consider the fuzzy logic ALCI with semantics based on a finite residuated lattice. We show that the problems of satisfiability and subsumption of concepts in this logic are ExpTime-complete w.r.t. general TBoxes and PSpace-complete w.r.t. acyclic TBoxes. This matches the known complexity bounds for reasoning in crisp ALCI.
Stefan Borgwardt and Rafael Peñaloza: **Fuzzy Ontologies over Lattices with T-norms**. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the 24th International Workshop on Description Logics (DL 2011)*, volume 745 of *CEUR Workshop Proceedings*, pages 70–80. Barcelona, Spain, CEUR-WS.org, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Although fuzzy description logics over total orders and De Morgan lattices have been studied for nearly two decades, only a few, very specific instances of these logics are able to deal with general concept inclusion axioms. In this paper, we describe a general approach for extending description logic semantics to lattice-based fuzzy sets using t-norms. We show that this logic is undecidable for a very simple class of infinite lattices, and describe an optimal automata-based algorithm for deciding satisfiability when the lattice is finite.
Stefan Borgwardt and Rafael Peñaloza: **The Inclusion Problem for Weighted Automata on Infinite Trees**. In Pál Dömösi and Szabolcs Iván, editors, *Proceedings of the 13th International Conference on Automata and Formal Languages (AFL'11)*, pages 108–122. Debrecen, Hungary, College of Nyíregyháza, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Weighted automata can be seen as a natural generalization of finite state automata to more complex algebraic structures. The standard reasoning tasks for unweighted automata can also be generalized to the weighted setting. In this paper we study the problems of intersection, complementation and inclusion for weighted automata on infinite trees and show that they are not harder complexity-wise than reasoning with unweighted automata. We also present explicit methods for solving these problems optimally.
Felix Distel: **Some Complexity Results about Essential Closed Sets**. In Petko Valtchev and Robert Jäschke, editors, *International Conference on Formal Concept Analysis*, volume 6628 of *LNCS*, pages 81–92, 2011.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We examine the enumeration problem for essential closed sets of a formal context. Essential closed sets are sets that can be written as the closure of a pseudo-intent. The results for enumeration of essential closed sets are similar to existing results for pseudo-intents, although some differences exist. For example, while it is possible to compute the lectically first pseudo-intent in polynomial time, we show that it is not possible to compute the lectically first essential closed set in polynomial time unless P = NP. This also proves that essential closed sets cannot be enumerated in the lectic order with polynomial delay unless P = NP. We also look at minimal essential closed sets and show that they cannot be enumerated in output polynomial time unless P = NP.
Felix Distel and Barış Sertkaya: **On the complexity of enumerating pseudo-intents**. *Discrete Applied Mathematics*, 159(6):450–466, 2011.

BibTeX entry
©Elsevier

#### Abstract:

We investigate whether the pseudo-intents of a given formal context can efficiently be enumerated. We show that they cannot be enumerated in a specified lexicographic order with polynomial delay unless P = NP. Furthermore we show that if the restriction on the order of enumeration is removed, then the problem becomes at least as hard as enumerating minimal transversals of a given hypergraph. We introduce the notion of minimal pseudo-intents and show that recognizing minimal pseudo-intents is polynomial. Despite their less complicated nature, surprisingly it turns out that minimal pseudo-intents cannot be enumerated in output-polynomial time unless P = NP.
Eldora, Martin Knechtel, and Rafael Peñaloza: **Correcting Access Restrictions to a Consequence More Flexibly**. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the 24th International Workshop on Description Logics (DL 2011)*, volume 745 of *CEUR Workshop Proceedings*, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Recent research has shown that labeling ontologies can be useful for restricting the access to some of the axioms and their implicit consequences. However, the labeling of the axioms is an error-prone and highly sensitive task. In previous work we have shown how to correct the access restrictions if the security administrator knows the precise access level that a consequence must receive, and axioms are relabeled to that same access level. In this paper, we look at a more general situation in which access rights can be granted or denied to some specific users, without having to fully specify the precise access level. We also allow a more flexible labeling function, where the new access level of the relabeled axioms may differ from the level of the restriction. We provide black-box algorithms for computing suggestions of axioms to be relabeled.
Hongkai Liu, Carsten Lutz, Maja Milicic, and Frank Wolter: **Foundations of instance level updates in expressive description logics**. *Artificial Intelligence*, 175(18):2170–2197, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

In description logic (DL), ABoxes are used for describing the state of affairs in an application domain. We consider the problem of updating ABoxes when the state changes, assuming that update information is described at an atomic level, i.e., in terms of possibly negated ABox assertions that involve only atomic concepts and roles. We analyze such basic ABox updates in several standard DLs, in particular addressing questions of expressibility and succinctness: can updated ABoxes always be expressed in the DL in which the original ABox was formulated and, if so, what is the size of the updated ABox? It turns out that DLs have to include nominals and the ‘@’ constructor of hybrid logic for updated ABoxes to be expressible, and that this still holds when updated ABoxes are approximated. Moreover, the size of updated ABoxes is exponential in the role depth of the original ABox and the size of the update. We also show that this situation improves when updated ABoxes are allowed to contain additional auxiliary symbols. Then, DLs only need to include nominals for updated ABoxes to exist, and the size of updated ABoxes is polynomial in the size of both the original ABox and the update.
Julian Mendez, Andreas Ecke, and Anni-Yasmin Turhan: **Implementing Completion-based Inferences for the EL-family**. In Riccardo Rosati, Sebastian Rudolph, and Michael Zakharyaschev, editors, *Proceedings of the International Description Logics Workshop*. CEUR, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

Completion algorithms for subsumption are investigated for many extensions of the description logic EL. While for several of them subsumption is tractable, this is no longer the case if inverse roles are admitted. In this paper we present an optimized version of the completion algorithm for ELIHFR, which is implemented in jCEL. The completion sets computed during classification are a good substrate for implementing other reasoning services such as generalizations. We report on an extension of jCEL that computes role-depth bounded least common subsumers and most specific concepts based on completion sets.
R. Peñaloza and A.-Y. Turhan: **A Practical Approach for Computing Generalization Inferences in EL**. In Marko Grobelnik and Elena Simperl, editors, *Proceedings of the 8th European Semantic Web Conference (ESWC'11)*, *Lecture Notes in Computer Science*. Springer-Verlag, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

We present methods that compute generalizations of concepts or individuals described in ontologies written in the Description Logic EL. These generalizations are the basis of methods for ontology design and are the core of concept similarity measures. The reasoning service least common subsumer (lcs) generalizes a set of concepts. Similarly, the most specific concept (msc) generalizes an individual into a concept description. For EL the lcs and the msc need not exist if computed w.r.t. general EL-TBoxes. However, it is possible to find a concept description that is the lcs (msc) up to a certain role-depth. In this paper we present a practical approach for computing the role-depth bounded lcs and msc, based on the polynomial-time completion algorithm for EL and describe its implementation.
Uwe Ryssel, Felix Distel, and Daniel Borchmann: **Fast Computation of Proper Premises**. In Amedeo Napoli and Vilem Vychodil, editors, *International Conference on Concept Lattices and Their Applications*, pages 101–113. INRIA Nancy – Grand Est and LORIA, 2011.

BibTeX entry
Paper (PDF)

#### Abstract:

This work is motivated by an application related to refactoring of model variants. In this application an implicational base needs to be computed, and runtime is more crucial than minimal cardinality. Since the usual stem base algorithms have proven to be too costly in terms of runtime, we have developed a new algorithm for the fast computation of proper premises. It is based on a known link between proper premises and minimal hypergraph transversals. Two further improvements are made, which reduce the number of proper premises that are obtained multiple times and redundancies within the set of proper premises. We provide heuristic evidence that an approach based on proper premises will also be beneficial for other applications.
Anni-Yasmin Turhan: **Description Logic reasoning for Semantic Web Ontologies – Extended abstract**. In Rajendra Akerkar, editor, *Proceedings of the first International Conference on Web Intelligence, Mining and Semantics*. ACM, 2011.

BibTeX entry
Paper (PDF)

## 2010

Franz Baader, Bernhard Beckert, and Tobias Nipkow: **Deduktion: von der Theorie zur Anwendung**. *Informatik-Spektrum*, 33(5):444–451, 2010.

BibTeX entry
(The final publication is available at link.springer.com)

Franz Baader, Meghyn Bienvenu, Carsten Lutz, and Frank Wolter: **Query and Predicate Emptiness in Description Logics**. In Fangzhen Lin and Ulrike Sattler, editors, *Proceedings of the 12th International Conference on Principles of Knowledge Representation and Reasoning (KR2010)*. AAAI Press, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

Ontologies can be used to provide an enriched vocabulary for the formulation of queries over instance data. We identify query emptiness and predicate emptiness as two central reasoning services in this context. Query emptiness asks whether a given query has an empty answer over all data sets formulated in a given signature. Predicate emptiness is defined analogously, but quantifies universally over all queries that contain a given predicate. In this paper, we determine the computational complexity of query emptiness and predicate emptiness in the EL, DL-Lite, and ALC-families of description logics, investigate the connection to ontology modules, and perform a practical case study to evaluate the new reasoning services.
Franz Baader, Marcel Lippmann, and Hongkai Liu: **Using Causal Relationships to Deal with the Ramification Problem in Action Formalisms Based on Description Logics**. In Christian G. Fermüller and Andrei Voronkov, editors, *Proceedings of the 17th International Conference on Logic for Programming, Artifical Intelligence, and Reasoning (LPAR-17)*, volume 6397 of *Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science)*, pages 82–96. Yogyakarta, Indonesia, Springer-Verlag, October 2010.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

In the reasoning about actions community, causal relationships have been proposed as a possible approach for solving the ramification problem, i.e., the problem of how to deal with indirect effects of actions. In this paper, we show that causal relationships can be added to action formalisms based on Description Logics (DLs) without destroying the decidability of the consistency and the projection problem. We investigate the complexity of these decision problems based on which DL is used as base logic for the action formalism.
Franz Baader, Hongkai Liu, and Anees ul Mehdi: **Verifying Properties of Infinite Sequences of Description Logic Actions**. In Helder Coelho, Rudi Studer, and Michael Wooldridge, editors, *Proceedings of the 19th European Conference on Artificial Intelligence (ECAI10)*, volume 215 of *Frontiers in Artificial Intelligence and Applications*, pages 53–58. IOS Press, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

The verification problem for action logic programs with non-terminating behaviour is in general undecidable. In this paper, we consider a restricted setting in which the problem becomes decidable. On the one hand, we abstract from the actual execution sequences of a non-terminating program by considering infinite sequences of actions defined by a Büchi automaton. On the other hand, we assume that the logic underlying our action formalism is a decidable description logic rather than full first-order predicate logic.
Franz Baader, Carsten Lutz, and Anni-Yasmin Turhan: **Small is again Beautiful in Description Logics**. *KI – Künstliche Intelligenz*, 24(1):25–33, 2010.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

Franz Baader and Barbara Morawska: **SAT Encoding of Unification in EL**. In Christian G. Fermüller and Andrei Voronkov, editors, *Proceedings of the 17th International Conference on Logic for Programming, Artifical Intelligence, and Reasoning (LPAR-17)*, volume 6397 of *Lecture Notes in Computer Science (subline Advanced Research in Computing and Software Science)*, pages 97–111. Yogyakarta, Indonesia, Springer-Verlag, October 2010.

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
Franz Baader and Barbara Morawska: **Unification in the Description Logic EL**. *Logical Methods in Computer Science*, 6(3), 2010. Special Issue of the 20th International Conference on Rewriting Techniques and Applications; also available at http://arxiv.org/abs/1006.2289

BibTeX entry
Paper (PDF)

#### Abstract:

The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
Franz Baader and Rafael Peñaloza: **Automata-based Axiom Pinpointing**. *Journal of Automated Reasoning*, 45(2):91–129, 2010. Special Issue: Selected Papers from IJCAR 2008

BibTeX entry
Paper (PDF)
(The final publication is available at link.springer.com)

#### Abstract:

Axiom pinpointing has been introduced in description logics (DL) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question (MinA). Most of the pinpointing algorithms described in the DL literature are obtained as extensions of tableau-based reasoning algorithms for computing consequences from DL knowledge bases. In this paper, we show that automata-based algorithms for reasoning in DLs and other logics can also be extended to pinpointing algorithms. The idea is that the tree automaton constructed by the automata-based approach can be transformed into a weighted tree automaton whose so-called behaviour yields a pinpointing formula, i.e., a monotone Boolean formula whose minimal valuations correspond to the MinAs. We also develop an approach for computing the behaviour of a given weighted tree automaton. We use the DL as well as Linear Temporal Logic (LTL) to illustrate our new pinpointing approach.
Franz Baader and Rafael Peñaloza: **Axiom Pinpointing in General Tableaux**. *Journal of Logic and Computation*, 20(1):5–34, 2010. Special Issue: Tableaux and Analytic Proof Methods

BibTeX entry
Paper (PDF)

#### Abstract:

Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. Most of the pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this article is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of tableau algorithms, which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
Felix Distel: **An Approach to Exploring Description Logic Knowledge Bases**. In Barış Sertkaya and Léonard Kwuida, editors, *Proceedings of the 8th International Conference on Formal Concept Analysis, (ICFCA 2010)*, volume 5986 of *Lecture Notes in Artificial Intelligence*, pages 209–224. Springer, 2010.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper is the successor to two previous papers published at the ICFCA conference. In the first paper we have shown that in the Description Logics EL and ELgfp, the set of general concept inclusions holding in a finite model always has a finite basis. An exploration formalism that can be used to obtain this basis was presented in the second paper. In this paper we show how this formalism can be modified such that counterexamples to GCIs can be provided in the form of ABox-individuals. In a second part of the paper we examine which description logics can be used for this ABox.
Felix Distel: **Hardness of Enumerating Pseudo-Intents in the Lectic Order**. In Barış Sertkaya and Léonard Kwuida, editors, *Proceedings of the 8th International Conference on Formal Concept Analysis, (ICFCA 2010)*, volume 5986 of *Lecture Notes in Artificial Intelligence*, pages 124–137. Springer, 2010.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

We investigate the complexity of enumerating pseudo-intents in the lectic order. We look at the following decision problem: Given a formal context and a set of n pseudo-intents, determine whether they are the lectically first n pseudo-intents. We show that this problem is coNP-hard. We thereby show that there cannot be an algorithm with a good theoretical complexity for enumerating pseudo-intents in a lectic order. In a second part of the paper we introduce the notion of minimal pseudo-intents, i.e., pseudo-intents that do not strictly contain a pseudo-intent. We provide some complexity results about minimal pseudo-intents that are readily obtained from the previous result.
Martin Knechtel and Rafael Peñaloza: **A Generic Approach for Correcting Access Restrictions to a Consequence**. In Lora Aroyo, Grigoris Antoniou, Eero Hyvönen, Annette ten Teije, Heiner Stuckenschmidt, Liliana Cabral, and Tania Tudorache, editors, *Proceedings of the 7th Extended Semantic Web Conference (ESWC 2010)*, volume 6088 of *Lecture Notes in Computer Science*, pages 167–182, 2010.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Recent research has shown that annotations are useful for representing access restrictions to the axioms of an ontology and their implicit consequences. Previous work focused on assigning a label, representing its access level, to each consequence from a given ontology. However, a security administrator might not be satisfied with the access level obtained through these methods. In this case, one is interested in finding which axioms would need to get their access restrictions modified in order to get the desired label for the consequence. In this paper we look at this problem and present algorithms for solving it with a variety of optimizations. We also present first experimental results on large scale ontologies, which show that our methods perform well in practice.
Martin Knechtel and Rafael Peñaloza: **Correcting Access Restrictions to a Consequence**. In Volker Haarslev, David Toman, and Grant Weddell, editors, *Proceedings of the 23rd International Workshop on Description Logics (DL 2010)*, volume 573 of *CEUR-WS*, pages 220–231, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

Recent research has shown that annotations are useful for representing access restrictions to the axioms of an ontology and their implicit consequences. Previous work focused on computing a consequence's access restriction efficiently from the restrictions of its implying axioms. However, a security administrator might not be satisfied since the intended restriction differs from the one obtained through these methods. In this case, one is interested in finding a minimal set of axioms which need changed restrictions. In this paper we look at this problem and present algorithms based on ontology repair for solving it. Our first experimental results on large scale ontologies show that our methods perform well in practice.
Martin Knechtel and Heiner Stuckenschmidt: **Query-Based Access Control for Ontologies**. In P. Hitzler and T. Lukasiewicz, editors, *Proceedings of the 4th International Conference on Web Reasoning and Rule Systems (RR 2010)*, volume 6333 of *Lecture Notes in Computer Science*, pages 73–87, 2010.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Role-based access control is a standard mechanism in information systems. Based on the role a user has, certain information is kept from the user even if requested. For ontologies representing knowledge, deciding what can be told to a user without revealing secrets is more difficult as the user might be able to infer secret knowledge using logical reasoning. In this paper, we present two approaches to solving this problem: query rewriting vs. axiom filtering, and show that while both approaches prevent the unveiling of secret knowledge, axiom filtering is more complete in the sense that it does not suppress knowledge the user is allowed to see while this happens frequently in query rewriting. Axiom filtering requires that each axiom carries a label representing its access level. We present methods to find an optimal axiom labeling to enforce query-based access restrictions and report experiments on real world data showing that a significant number of results are retained using the axiom filtering method.
Thomas Lukasiewicz, Rafael Peñaloza, and Anni-Yasmin Turhan, editors: **Proceedings of the First International Workshop on Uncertainty in Description Logics**, number 613 in CEUR, July 2010. UniDL is an IJCAR co-located FLoC workshop. See http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-613/

BibTeX entry

#### Abstract:

During the recent decade, handling uncertainty has started to play an important role in ontology languages, especially in application areas like the Semantic Web, biomedicine, and artificial intelligence. For this reason, there is currently a strong research interest in description logics (DLs) that allow for dealing with uncertainty. The subject of the workshop is how to deal with uncertainty and imprecision in DLs. This encompasses approaches that enable probabilistic or fuzzy reasoning in DLs, but the workshop is also open for approaches based on other uncertainty formalisms. The workshop focusses on the investigation of reasoning problems and approaches for solving them, including especially tractable ones. For classical DL reasoning problems such as subsumption and satisfiability, algorithms that can handle uncertainty exist, but they are still less well-investigated than in the case of standard DLs without uncertainty. For novel reasoning services, such as query answering, computation of generalizations, modules, or explanations, it is not yet clear how to realize them in DLs that can express uncertainty.
Rafael Peñaloza: **Using Sums-of-Products for Non-standard Reasoning**. In A.-H. Dediu, H. Fernau, and C. Martín-Vide, editors, *Proceedings of the 4th International Conference on Language and Automata Theory and Applications (LATA 2010)*, volume 6031 of *Lecture Notes in Computer Science*, pages 488–499. Springer-Verlag, 2010.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

An important portion of the current research in Description Logics is devoted to the expansion of the reasoning services and the development of algorithms that can adequately perform so-called non-standard reasoning. Applications of non-standard reasoning services cover a wide selection of areas such as access control, agent negotiation, or uncertainty reasoning, to name just a few. In this paper we show that some of these non-standard inferences can be seen as the computation of a sum of products, where "sum" and "product" are the two operators of a bimonoid. We then show how the main ideas of automata-based axiom-pinpointing, combined with weighted model counting, yield a generic method for computing sums-of-products over arbitrary bimonoids.
Rafael Peñaloza: **Wie findet man die verantwortlichen Axiome? Axiom-Pinpointing in Beschreibungslogiken**. In *Ausgezeichnete Informatikdissertationen 2009*, volume D10 of *Lecture Notes in Informatics*, pages 181–190. Germany, Gesellschaft für Informatik, 2010. In German

BibTeX entry
Paper (PDF)

#### Abstract:

Axiom pinpointing determines the axioms of an ontology that are responsible for a consequence, and thereby supports finding and repairing errors. The dissertation of Rafael Peñaloza, whose results are summarized here, investigated under which conditions tableau-like and automata-based reasoning procedures for Description Logics can always be extended to pinpointing procedures. In addition, the complexity of the pinpointing problem was studied.
Rafael Peñaloza and Barış Sertkaya: **Complexity of Axiom Pinpointing in the DL-Lite Family**. In Volker Haarslev, David Toman, and Grant Weddell, editors, *Proceedings of the 2010 International Workshop on Description Logics (DL2010)*, volume 573 of *CEUR-WS*, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate the computational complexity of axiom pinpointing in the DL-Lite family, which has been very popular due to its success in efficiently accessing large data and answering complex queries. We consider the problem of explaining TBox reasoning. We investigate in detail the complexity of enumerating MinAs in a DL-Lite TBox for a given consequence of this TBox. We show that for DL-Lite_{core}^{H}, DL-Lite_{krom}^{H}, and DL-Lite_{horn}^{N} TBoxes MinAs are efficiently enumerable with polynomial delay, but for DL-Lite_{bool} TBoxes they cannot be enumerated in output-polynomial time unless P = NP.

Rafael Peñaloza and Barış Sertkaya: **Complexity of Axiom Pinpointing in the DL-Lite Family of Description Logics**. In Helder Coelho, Rudi Studer, and Michael Wooldridge, editors, *Proceedings of the 19th European Conference on Artificial Intelligence (ECAI 2010)*, volume 215 of *Frontiers in Artificial Intelligence and Applications*, pages 29–34. IOS Press, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate the complexity of axiom pinpointing for different members of the DL-Lite family of Description Logics. More precisely, we consider the problem of enumerating all minimal subsets of a given DL-Lite knowledge base that have a given consequence. We show that for the DL-Lite_{core}^{H}, DL-Lite_{krom}^{H}, and DL-Lite_{horn}^{HN} fragments such minimal subsets are efficiently enumerable with polynomial delay, but for the DL-Lite_{bool} fragment they cannot be enumerated in output polynomial time unless P = NP. We also show that, interestingly, for the DL-Lite_{horn}^{HN} fragment such minimal sets can be enumerated in reverse lexicographic order with polynomial delay, but it is not possible in the forward lexicographic order since computing the first one is already coNP-hard.

Rafael Peñaloza and Barış Sertkaya: **On the Complexity of Axiom Pinpointing in the EL Family of Description Logics**. In Fangzhen Lin, Ulrike Sattler, and Miroslaw Truszczynski, editors, *Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010)*. AAAI Press, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate the computational complexity of axiom pinpointing, which is the task of finding minimal subsets of a Description Logic knowledge base that have a given consequence. We consider the problems of enumerating such subsets with and without order, and show hardness results that already hold for the propositional Horn fragment, or for the Description Logic EL. We show complexity results for several other related decision and enumeration problems for these fragments that extend to more expressive logics. In particular, we show that hardness of these problems depends not only on the expressivity of the fragment but also on the shape of the axioms used.
Rafael Peñaloza and Anni-Yasmin Turhan: **Role-depth Bounded Least Common Subsumers by Completion for EL- and Prob-EL-TBoxes**. In V. Haarslev, D. Toman, and G. Weddell, editors, *Proc. of the 2010 Description Logic Workshop (DL'10)*, volume 573 of *CEUR-WS*, 2010.

BibTeX entry
Paper (PDF)

#### Abstract:

The least common subsumer (lcs) w.r.t. general EL-TBoxes does not need to exist in general due to cyclic axioms. In this paper we present an algorithm for computing the role-depth bounded EL-lcs based on the completion algorithm for EL. We extend this computation algorithm to a recently introduced probabilistic variant of EL: Prob-EL^{01}.

Rafael Peñaloza and Anni-Yasmin Turhan: **Towards Approximative Most Specific Concepts by Completion for EL with Subjective Probabilities**. In Thomas Lukasiewicz, Rafael Peñaloza, and Anni-Yasmin Turhan, editors, *Proceedings of the First International Workshop on Uncertainty in Description Logics (UniDL'10)*, volume 613 of *CEUR-WS*, 2010.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

The most specific concept (msc) w.r.t. general EL-TBoxes does not need to exist in general due to cyclic axioms. In this paper we present an algorithm for computing the role-depth bounded EL-msc based on the completion algorithm for EL. We extend this computation algorithm to a recently introduced probabilistic variant of EL: Prob-EL^{01}.

Anni-Yasmin Turhan: **Reasoning and Explanation in EL and in Expressive Description Logics**. In Uwe Aßmann, Andreas Bartho, and Christian Wende, editors, *Reasoning Web*, number 6325 in *LNCS*, pages 1–27. Springer, 2010.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Description Logics (DLs) are the formalism underlying the standard web ontology language OWL 2. DLs have formal semantics which are the basis for powerful reasoning services. In this paper, we introduce the basic notions of DLs and the techniques that realize subsumption — the fundamental reasoning service of DL systems. We discuss two reasoning methods for this service: the tableau method for expressive DLs such as ALC and the completion method for the light-weight DL EL. We also present methods for generating explanations for computed subsumption relationships in these two DLs.

## 2009

Franz Baader: **Description Logics**. In *Reasoning Web: Semantic Technologies for Information Systems, 5th International Summer School 2009*, volume 5689 of *Lecture Notes in Computer Science*, pages 1–39. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Description Logics (DLs) are a well-investigated family of logic-based knowledge representation formalisms, which can be used to represent the conceptual knowledge of an application domain in a structured and formally well-understood way. They are employed in various application domains, such as natural language processing, configuration, and databases, but their most notable success so far is the adoption of the DL-based language OWL as standard ontology language for the semantic web. This article concentrates on the problem of designing reasoning procedures for DLs. After a short introduction and a brief overview of the research in this area of the last 20 years, it will on the one hand present approaches for reasoning in expressive DLs, which are the foundation for reasoning in the Web ontology language OWL DL. On the other hand, it will consider tractable reasoning in the more light-weight DL EL, which is employed in bio-medical ontologies, and which is the foundation for the OWL 2 profile OWL 2 EL.
Franz Baader, Andreas Bauer, Peter Baumgartner, Anne Cregan, Alfredo Gabaldon, Krystian Ji, Kevin Lee, David Rajaratnam, and Rolf Schwitter: **A Novel Architecture for Situation Awareness Systems**. In Martin Giese and Arild Waaler, editors, *Proceedings of the 18th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods (Tableaux 2009)*, volume 5607 of *Lecture Notes in Computer Science*, pages 77–92. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Situation Awareness (SA) is the problem of comprehending elements of an environment within a volume of time and space. It is a crucial factor in decision-making in dynamic environments. Current SA systems support the collection, filtering and presentation of data from different sources very well, and typically also some form of low-level data fusion and analysis, e.g., recognizing patterns over time. However, a still open research challenge is to build systems that support higher-level information fusion, viz., to integrate domain-specific knowledge and automatically draw conclusions that would otherwise remain hidden or would have to be drawn by a human operator. To address this challenge, we have developed a novel system architecture that emphasizes the role of formal logic and automated theorem provers in its main components. Additionally, it features controlled natural language for operator I/O. It offers three logical languages to adequately model different aspects of the domain. This allows one to build SA systems in a more declarative way than is possible with current approaches. From an automated reasoning perspective, the main challenges lie in combining (existing) automated reasoning techniques, from low-level fusion of time-stamped data to semantic analysis and alert generation based on linear temporal logic. The system has been implemented and interfaces with Google Earth to visualize the dynamics of situations and system output. It has been successfully tested on realistic data, but in this paper we focus on the system architecture and in particular on the interplay of the different reasoning components.
Franz Baader, Andreas Bauer, and Marcel Lippmann: **Runtime Verification Using a Temporal Description Logic**. In Silvio Ghilardi and Roberto Sebastiani, editors, *Proceedings of the 7th International Symposium on Frontiers of Combining Systems (FroCoS 2009)*, volume 5749 of *Lecture Notes in Computer Science*, pages 149–164. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system's behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any time point yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix into a trace satisfy (violate) the formula. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use ALC-LTL, which can use axioms of the description logic ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we consider the case where states may be described in an incomplete way by ALC ABoxes.
Franz Baader, Andreas Bauer, and Alwen Tiu: **Matching Trace Patterns with Regular Policies**. In A.H. Dediu, A.M. Ionescu, and C. Martin-Vide, editors, *Proceedings of the Third International Conference on Language and Automata Theory and Applications (LATA 2009)*, volume 5457 of *Lecture Notes in Artificial Intelligence*, pages 105–116. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We consider policies that are described by regular expressions, finite automata, or formulae of linear temporal logic (LTL). Such policies are assumed to describe situations that are problematic, and thus should be avoided. Given a trace pattern u, i.e., a sequence of action symbols and variables, where the variables stand for unknown (i.e., not observed) sequences of actions, we ask whether u potentially violates a given policy L, i.e., whether the variables in u can be replaced by sequences of actions such that the resulting trace belongs to L. We also consider the dual case where the regular policy L is supposed to describe all the admissible situations. Here, we want to know whether u always adheres to the given policy L, i.e., whether all instances of u belong to L. We determine the complexity of the violation and the adherence problem, depending on whether trace patterns are linear or not, and on whether the policy is assumed to be fixed or not.
Franz Baader and Felix Distel: **Exploring Finite Models in the Description Logic ELgfp**. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis, (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 146–161. Springer Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In a previous ICFCA paper we have shown that, in the Description Logics EL and ELgfp, the set of general concept inclusions holding in a finite model always has a finite basis. In this paper, we address the problem of how to compute this basis efficiently, by adapting methods from formal concept analysis.
Franz Baader, Martin Knechtel, and Rafael Peñaloza: **A Generic Approach for Large-Scale Ontological Reasoning in the Presence of Access Restrictions to the Ontology's Axioms**. In Abraham Bernstein et al., editors, *Proceedings of the 8th International Semantic Web Conference (ISWC 2009)*, volume 5823 of *Lecture Notes in Computer Science*, pages 49–64, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The framework developed in this paper can deal with scenarios where selected sub-ontologies of a large ontology are offered as views to users, based on criteria like the user's access right, the trust level required by the application, or the level of detail requested by the user. Instead of materializing a large number of different sub-ontologies, we propose to keep just one ontology, but equip each axiom with a label from an appropriate labeling lattice. The access right, required trust level, etc. are then also represented by a label (called user label) from this lattice, and the corresponding sub-ontology is determined by comparing this label with the axiom labels. For large-scale ontologies, certain consequences (like the concept hierarchy) are often precomputed. Instead of precomputing these consequences for every possible sub-ontology, our approach computes just one label for each consequence such that a comparison of the user label with the consequence label determines whether the consequence follows from the corresponding sub-ontology or not. In this paper we determine under which restrictions on the user and axiom labels such consequence labels (called boundaries) always exist, describe different black-box approaches for computing boundaries, and present first experimental results that compare the efficiency of these approaches on large real-world ontologies. Black-box means that, rather than requiring modifications of existing reasoning procedures, these approaches can use such procedures directly as sub-procedures, which allows us to employ existing highly-optimized reasoners.
Franz Baader and Barbara Morawska: **Unification in the Description Logic EL**. In Ralf Treinen, editor, *Proceedings of the 20th International Conference on Rewriting Techniques and Applications (RTA 2009)*, volume 5595 of *Lecture Notes in Computer Science*, pages 350–364. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The Description Logic EL has recently drawn considerable attention since, on the one hand, important inference problems such as the subsumption problem are polynomial. On the other hand, EL is used to define large biomedical ontologies. Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The main result of this paper is that unification in EL is decidable. More precisely, EL-unification is NP-complete, and thus has the same complexity as EL-matching. We also show that, w.r.t. the unification type, EL is less well-behaved: it is of type zero, which in particular implies that there are unification problems that have no finite complete set of unifiers.
Franz Baader, Stefan Schulz, Kent Spackman, and Boontawee Suntisrivaraporn: **How Should Parthood Relations be Expressed in SNOMED CT?**. In *Proceedings of 1. Workshop des GI-Arbeitskreises Ontologien in Biomedizin und Lebenswissenschaften (OBML 2009)*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

We recall the re-engineering of SNOMED CT's SEP encoding as proposed in a previous paper, and then show that a backward compatible version, which also contains definitions for the auxiliary S- and P-concepts, requires an additional complex role inclusion that destroys the acyclicity of the set of complex role inclusions. For this reason, the backward compatible re-engineered version of SNOMED CT is not expressible in OWL 2, but it is expressible in EL++ and an appropriate extension of SROIQ.
Franz Baader and Barış Sertkaya: **Usability Issues in Description Logic Knowledge Base Completion**. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis, (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 1–21. Springer Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In a previous paper, we have introduced an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the assertional part and by a domain expert. This approach, called knowledge base completion, was based on an extension of attribute exploration to the case of partial contexts. The present paper recalls this approach, and then addresses usability issues that came up during first experiments with a preliminary implementation of the completion algorithm. It turns out that these issues can be addressed by extending the exploration algorithm for partial contexts such that it can deal with implicational background knowledge.
Frithjof Dau and Martin Knechtel: **Access Policy Design Supported by FCA Methods**. In Frithjof Dau and Sebastian Rudolph, editors, *Proceedings of the 17th International Conference on Conceptual Structures, (ICCS 2009)*, volume 5662 of *Lecture Notes in Computer Science*, pages 141–154, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Role Based Access Control (RBAC) is a methodology for providing users in an IT system with specific permissions like write or read. It abstracts from specific users and binds permissions to user roles. Similarly, one can abstract from specific documents and bind permissions to document types. In this paper, we apply Description Logics (DLs) to formalize RBAC. We provide a thorough discussion of different possible interpretations of RBAC matrices and how DLs can be used to capture the RBAC constraints. We show moreover that with DLs we can express more intended constraints than is possible in the common RBAC approach, thus proving the benefit of using DLs in the RBAC setting. For deriving additional constraints, we introduce a strict methodology based on the attribute exploration method known from Formal Concept Analysis. Attribute exploration allows one to systematically find unintended implications, derive constraints, and make them explicit. Finally, we apply our approach to a real-life example.
Conrad Drescher, Hongkai Liu, Franz Baader, Steffen Guhlemann, Uwe Petersohn, Peter Steinke, and Michael Thielscher: **Putting ABox Updates into Action**. In Silvio Ghilardi and Roberto Sebastiani, editors, *The Seventh International Symposium on Frontiers of Combining Systems (FroCoS-2009)*, volume 5749 of *Lecture Notes in Computer Science*, pages 149–164. Springer-Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes, which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
Conrad Drescher, Hongkai Liu, Franz Baader, Peter Steinke, and Michael Thielscher: **Putting ABox Updates into Action**. In *Proceedings of the 8th IJCAI International Workshop on Nonmonotonic Reasoning, Action and Change (NRAC-09)*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes, which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
Matthias Heinrich, Antje Boehm-Peters, and Martin Knechtel: **A Platform to Automatically Generate and Incorporate Documents into an Ontology-Based Content Repository**. In Uwe M. Borghoff and Boris Chidlovskii, editors, *Proceedings of the 2009 ACM Symposium on Document Engineering (DocEng 2009)*, pages 43–46, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

In order to access large information pools efficiently, data has to be structured and categorized. Recently, applying ontologies to formalize information has become an established approach. In particular, ontology-based search and navigation are promising solutions capable of significantly improving state-of-the-art systems (e.g., full-text search engines). However, ontology roll-out and maintenance are costly tasks. Therefore, we propose a documentation generation platform that automatically derives content and incorporates generated content into an existing ontology. The demanding task of classifying content as concept instances and setting datatype and object properties is accomplished by the documentation generation platform. Eventually, our approach results in a semantically enriched content base. Note that no manual effort is required to establish links between content objects and the ontology.
Kay Kadner, Gerald Huebsch, Martin Knechtel, Thomas Springer, and Christoph Pohl: **Multimodality in Mobile Computing and Mobile Devices: Methods for Adaptable Usability**, chapter Platform Support for Multimodality on Mobile Devices, pages 75–105. IGI Global, 2009.

BibTeX entry

#### Abstract:

The diversity of today's mobile technology also entails multiple interaction channels offered per device. This chapter surveys the basics of multimodal interactions in a mobility context and introduces a number of concepts for platform support. Synchronization approaches for input fusion and output fission as well as a concept for device federation are discussed with the help of an exemplary multimodal route planning application. An outlook on future trends concludes the chapter.
Julian Mendez and Boontawee Suntisrivaraporn: **Reintroducing CEL as an OWL 2 EL Reasoner**. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

The CEL system is known for its scalability of reasoning in the lightweight DL EL++, which has been proved suitable for several ontology applications, most notably from the life science domain. Recently, the DL EL++ has been adopted as the logical underpinning of the OWL 2 EL profile of the new Web Ontology Language, which potentially attracts new users to CEL. To seamlessly integrate the reasoner into the OWL user community, we have implemented the OWL API for CEL. This paper describes the challenges, design decisions and architecture of this implementation. Additionally, we present experimental results which highlight the scalability of the reasoner, as well as demonstrate the low overhead of our OWL API implementation.
Rafael Peñaloza: **Reasoning With Weighted Ontologies**. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the problem of reasoning over weighted ontologies. We assume that every axiom is labeled with an element of a distributive lattice (called its weight) and try to compute its so-called boundary, with respect to a given property. We show that axiom pinpointing is the most general instance of this problem. Finally, we present three applications of the problem of boundary computation.
Rafael Peñaloza: **Using Tableaux and Automata for Pinpointing in EL**. In Valentin Goranko, editor, *TABLEAUX 2009 Workshop on Tableaux versus Automata as Logical Decision Methods (AutoTab'09)*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

We show that the subsumption algorithm for the Description Logic EL can be seen both as a tableau-based and an automata-based decision procedure. Each of these views allows an extension into a so-called pinpointing algorithm. We show that the tableau-based extension has a worst-case exponential execution time, while the automata-based extension runs in polynomial time.
Rafael Peñaloza and Barış Sertkaya: **Axiom Pinpointing is Hard**. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

We investigate the complexity of several decision, enumeration and counting problems in axiom pinpointing in Description Logics. We prove hardness results that already hold for the propositional Horn fragment. We show that for this fragment, unless P = NP, all minimal subsets of a given TBox that have a given consequence, i.e. MinAs, cannot be enumerated in a specified lexicographic order with polynomial delay. Moreover, we show that recognizing the set of all MinAs is at least as hard as recognizing the set of all minimal transversals of a given hypergraph, however whether this problem is intractable remains open. We also show that checking the existence of a MinA that does not contain any of the given sets of axioms, as well as checking the existence of a MinA that contains a specified axiom are both NP-hard. In addition we show that counting all MinAs and counting the MinAs that contain a certain axiom are both #P-hard.
Stefan Schulz, Boontawee Suntisrivaraporn, Franz Baader, and Martin Boeker: **SNOMED reaching its adolescence: Ontologists' and logicians' health check**. *International Journal of Medical Informatics*, 78(Supplement 1):S86–S94, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

After a critical review of the present architecture of SNOMED CT, addressing both logical and ontological issues, we present a roadmap toward an overall improvement and recommend the following actions: SNOMED CT's ontology, dictionary, and information model components should be kept separate. SNOMED CT's upper level should be re-arranged according to a standard upper level ontology. SNOMED CT concepts should be assigned to the four disjoint groups: classes, instances, relations, and meta-classes. SNOMED CT's binary relations should be reduced to a set of canonical ones, following existing recommendations. Taxonomies should be cleansed and split into disjoint partitions. The number of full definitions should be increased. Finally, new approaches are proposed for modeling part-whole hierarchies, as well as the integration of qualifier relations into a unified framework. All proposed modifications can be expressed by the computationally tractable description logic EL++.
Barış Sertkaya: **OntoComP System Description**. In Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, and Ulrike Sattler, editors, *Proceedings of the 2009 International Workshop on Description Logics (DL2009)*, volume 477 of *CEUR-WS*, 2009.

BibTeX entry
Paper (PDF)

#### Abstract:

We describe OntoComP, a Protege 4 plugin that supports knowledge engineers in completing DL-based ontologies. More precisely, OntoComP supports a knowledge engineer in checking whether an ontology contains all the relevant information about the application domain, and in extending the ontology appropriately if this is not the case. It acquires complete knowledge about the application domain efficiently by asking successive questions to the knowledge engineer. By using novel techniques from Formal Concept Analysis, it ensures that, on the one hand, the interaction with the knowledge engineer is kept to a minimum, and, on the other hand, the resulting ontology is complete in a certain well-defined sense.
Barış Sertkaya: **OntoComP: A Protege Plugin for Completing OWL Ontologies**. In *Proceedings of the 6th European Semantic Web Conference, (ESWC 2009)*, volume 5554 of *Lecture Notes in Computer Science*, pages 898–902. Springer Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We describe OntoComP, which is a Protege 4 plugin that supports the ontology engineer in completing OWL ontologies. More precisely, OntoComP supports the ontology engineer in checking whether an ontology contains all the relevant information about the application domain, and in extending the ontology appropriately if this is not the case. It acquires complete knowledge about the application domain efficiently by asking successive questions to the ontology engineer. By using novel techniques from Formal Concept Analysis, it ensures that, on the one hand, the interaction with the ontology engineer is kept to a minimum, and, on the other hand, the resulting ontology is complete in a certain well-defined sense.
Barış Sertkaya: **Some Computational Problems Related to Pseudo-intents**. In Sébastien Ferré and Sebastian Rudolph, editors, *Proceedings of the 7th International Conference on Formal Concept Analysis, (ICFCA 2009)*, volume 5548 of *Lecture Notes in Artificial Intelligence*, pages 130–145. Springer Verlag, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We investigate the computational complexity of several decision, enumeration and counting problems related to pseudo-intents. We show that given a formal context and a subset of its set of pseudo-intents, checking whether this context has an additional pseudo-intent is in coNP, and it is at least as hard as checking whether a given simple hypergraph is not saturated. We also show that recognizing the set of pseudo-intents is also in coNP, and it is at least as hard as identifying the minimal transversals of a given hypergraph. Moreover, we show that if any of these two problems turns out to be coNP-hard, then unless P = NP, pseudo-intents cannot be enumerated in output polynomial time. We also investigate the complexity of finding subsets of a given Duquenne-Guigues Base from which a given implication follows. We show that checking the existence of such a subset within a specified cardinality bound is NP-complete, and counting all such minimal subsets is #P-complete.
Barış Sertkaya: **Towards the Complexity of Recognizing Pseudo-intents**. In Frithjof Dau and Sebastian Rudolph, editors, *Proceedings of the 17th International Conference on Conceptual Structures, (ICCS 2009)*, pages 284–292, 2009.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Pseudo-intents play a key role in Formal Concept Analysis. They form the premises of the implications in the Duquenne-Guigues Base, which is a minimum cardinality base for the valid implications of a formal context. It has been shown that checking whether a set is a pseudo-intent is in coNP. However, it is still open whether this problem is coNP-hard, or it is solvable in polynomial time. In the current work we prove a first lower bound for this problem by showing that it is at least as hard as TRANSVERSAL HYPERGRAPH, which is the problem of checking whether the edges of a given hypergraph are precisely the minimal transversals of another given hypergraph. This is a prominent open problem in hypergraph theory that is conjectured to form a complexity class properly contained between P and coNP. Our result partially explains why the attempts in the FCA community for finding a polynomial algorithm for recognizing pseudo-intents have failed until now. We also formulate a decision problem, namely FIRST PSEUDO-INTENT, and show that if this problem is not polynomial, then, unless P = NP, pseudo-intents cannot be enumerated with polynomial delay in lexicographic order.
Thomas Springer and Anni-Yasmin Turhan: **Employing Description Logics in Ambient Intelligence for Modeling and Reasoning about Complex Situations**. *Journal of Ambient Intelligence and Smart Environments*, 1(3):235–259, 2009.

BibTeX entry

#### Abstract:

Ambient Intelligence systems need to represent information about their environment and recognize relevant situations to perform appropriate actions proactively and autonomously. The context information gathered by these systems comes with imperfections such as incompleteness or incorrectness. These characteristics need to be handled gracefully by the Ambient Intelligence system. Moreover, the represented information must allow for a fast and reliable recognition of the current situation. To solve these problems we propose a method for situation modeling using the Description Logics based ontology language OWL DL and a framework for employing Description Logics reasoning services to recognize the current situation based on context. The benefits of the approach are manifold: the semantics of Description Logics allow for graceful handling of incomplete knowledge. The well-investigated reasoning services not only allow recognizing the current situation, but can also add to the reliability of the overall system. Moreover, optimized reasoning systems are freely available and ready to use. We underpin the feasibility of our approach by providing a case study based on a smart home application, conducting an evaluation of different Description Logics reasoners with respect to our application ontology, as well as a discussion of Description Logics systems in Ambient Intelligence.

## 2008

Franz Baader, Sebastian Brandt, and Carsten Lutz: **Pushing the EL Envelope Further**. In Kendall Clark and Peter F. Patel-Schneider, editors, *In Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

We extend the description logic EL++ with reflexive roles and range restrictions, and show that subsumption remains tractable if a certain syntactic restriction is adopted. We also show that subsumption becomes PSpace-hard (resp. undecidable) if this restriction is weakened (resp. dropped). Additionally, we prove that tractability is lost when symmetric roles are added: in this case, subsumption becomes ExpTime-hard.
Franz Baader and Felix Distel: **A Finite Basis for the Set of EL-Implications Holding in a Finite Model**. In Raoul Medina and Sergei Obiedkov, editors, *Proceedings of the 6th International Conference on Formal Concept Analysis, (ICFCA 2008)*, volume 4933 of *Lecture Notes in Artificial Intelligence*, pages 46–61. Springer, 2008.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Formal Concept Analysis (FCA) can be used to analyze data given in the form of a formal context. In particular, FCA provides efficient algorithms for computing a minimal basis of the implications holding in the context. In this paper, we extend classical FCA by considering data that are represented by relational structures rather than formal contexts, and by replacing atomic attributes by complex formulae defined in some logic. After generalizing some of the FCA theory to this more general form of contexts, we instantiate the general framework with attributes defined in the Description Logic (DL) EL, and with relational structures over a signature of unary and binary predicates, i.e., models for EL. In this setting, an implication corresponds to a so-called general concept inclusion axiom (GCI) in EL. The main technical result of this paper is that, in EL, for any finite model there is a finite set of implications (GCIs) holding in this model from which all implications (GCIs) holding in the model follow.
Franz Baader, Silvio Ghilardi, and Carsten Lutz: **LTL over Description Logic Axioms**. In *Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR2008)*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Most of the research on temporalized Description Logics (DLs) has concentrated on the case where temporal operators can occur within DL concept descriptions. In this setting, reasoning usually becomes quite hard if rigid roles, i.e., roles whose interpretation does not change over time, are available. In this paper, we consider the case where temporal operators are allowed to occur only in front of DL axioms (i.e., ABox assertions and general concept inclusion axioms), but not inside of concept descriptions. As the temporal component, we use linear temporal logic (LTL), and in the DL component we consider the basic DL ALC. We show that reasoning in the presence of rigid roles becomes considerably simpler in this setting.
Franz Baader, Silvio Ghilardi, and Carsten Lutz: **LTL over Description Logic Axioms**. In *Proceedings of the 21st International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Most of the research on temporalized Description Logics (DLs) has concentrated on the most general case where temporal operators can occur both within DL concepts and in front of DL axioms. In this setting, reasoning usually becomes quite hard. If rigid roles (i.e., roles whose interpretation does not vary over time) are allowed, then the interesting inference problems (such as satisfiability of concepts) become undecidable. Even if all symbols are interpreted as flexible (i.e., their interpretations can change arbitrarily from one time-point to the next), the complexity of reasoning is doubly exponential, i.e., one exponential higher than the complexity of reasoning in pure DLs such as ALC. In this paper, we consider the case where temporal operators are allowed to occur only in front of axioms (i.e., ABox assertions and general concept inclusion axioms (GCIs)), but not inside concepts. As the temporal component, we use linear temporal logic (LTL) and in the DL component we consider ALC. We show that reasoning becomes simpler in this setting.
Franz Baader, Jan Hladik, and Rafael Peñaloza: **Automata Can Show PSPACE Results for Description Logics**. *Information and Computation, Special Issue: First International Conference on Language and Automata Theory and Applications (LATA'07)*, 206(9–10):1045–1056, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In the area of Description Logic (DL), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts with respect to acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
Franz Baader, Novak Novakovic, and Boontawee Suntisrivaraporn: **A Proof-Theoretic Subsumption Reasoner for Hybrid EL-TBoxes**. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Hybrid EL-TBoxes combine general concept inclusions (GCIs), which are interpreted with descriptive semantics, with cyclic concept definitions, which are interpreted with greatest fixpoint (gfp) semantics. We introduce a proof-theoretic approach that yields a polynomial-time decision procedure for subsumption in EL w.r.t. hybrid TBoxes, and present preliminary experimental results regarding the performance of the reasoner Hyb that implements this decision procedure.
Franz Baader and Rafael Peñaloza: **Automata-Based Axiom Pinpointing**. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, *Proceedings of the 4th International Joint Conference on Automated Reasoning, (IJCAR 2008)*, volume 5195 of *Lecture Notes in Artificial Intelligence*, pages 226–241. Springer, 2008.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Axiom pinpointing has been introduced in description logics (DL) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question (MinA). Most of the pinpointing algorithms described in the DL literature are obtained as extensions of tableau-based reasoning algorithms for computing consequences from DL knowledge bases. In this paper, we show that automata-based algorithms for reasoning in DLs can also be extended to pinpointing algorithms. The idea is that the tree automaton constructed by the automata-based approach can be transformed into a weighted tree automaton whose so-called behaviour yields a pinpointing formula, i.e., a monotone Boolean formula whose minimal valuations correspond to the MinAs. We also develop an approach for computing the behaviour of a given weighted tree automaton.
Franz Baader and Boontawee Suntisrivaraporn: **Debugging SNOMED CT Using Axiom Pinpointing in the Description Logic EL^{+}**. In *Proceedings of the 3rd Knowledge Representation in Medicine (KR-MED'08): Representing and Sharing Knowledge Using SNOMED*, volume 410 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

SNOMED CT is a large-scale medical ontology, which is developed using a variant of the inexpressive Description Logic EL. Description Logic reasoning can not only be used to compute subsumption relationships between SNOMED concepts, but also to pinpoint the reason why a certain subsumption relationship holds by computing the axioms responsible for this relationship. This helps developers and users of SNOMED CT to understand why a given subsumption relationship follows from the ontology, which can be seen as a first step toward removing unwanted subsumption relationships. In this paper, we describe a new method for axiom pinpointing in the Description Logic EL+, which is based on the computation of so-called reachability-based modules. Our experiments on SNOMED CT show that the sets of axioms explaining subsumption are usually quite small, and that our method is fast enough to compute such sets on demand.
Meghyn Bienvenu: **Complexity of Abduction in the EL Family of Lightweight Description Logics**. In Gerhard Brewka and Jérôme Lang, editors, *Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR08)*, pages 220–230. AAAI Press, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

The complexity of logic-based abduction has been extensively studied for the case in which the background knowledge is represented by a propositional theory, but very little is known about abduction with respect to description logic knowledge bases. The purpose of the current paper is to examine the complexity of logic-based abduction for the EL family of lightweight description logics. We consider several minimality criteria for explanations (set inclusion, cardinality, prioritization, and weight) and three decision problems: deciding whether an explanation exists, deciding whether a given hypothesis appears in some acceptable explanation, and deciding whether a given hypothesis belongs to every acceptable explanation. We determine the complexity of these tasks for general TBoxes and also for EL and EL+ terminologies. We also provide results concerning the complexity of computing abductive explanations.
Meghyn Bienvenu: **Prime Implicate Normal Form for ALC Concepts**. In *Proceedings of the Twenty-Third Conference on Artificial Intelligence (AAAI-08)*, pages 412–417. AAAI Press, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper, we present a normal form for concept expressions in the description logic ALC which is based on a recently introduced notion of prime implicate for the modal logic K. We show that concepts in prime implicate normal form enjoy a number of interesting properties. In particular, we prove that subsumption between ALC concepts in prime implicate normal form can be carried out in polynomial time using a simple structural subsumption algorithm reminiscent of those used for less expressive description logics. We provide a sound and complete algorithm for putting concepts into prime implicate normal form, and we investigate the spatial complexity of this transformation, showing there to be an at most doubly-exponential blowup in concept length. At the end of the paper, we compare prime implicate normal form to two other normal forms for ALC, discussing the relative merits of the different approaches.
Birte Glimm, Carsten Lutz, Ian Horrocks, and Ulrike Sattler: **Answering conjunctive queries in the SHIQ description logic**. *Journal of Artificial Intelligence Research*, 31:150–197, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Conjunctive queries play an important role as an expressive query language for Description Logics (DLs). Although modern DLs usually provide for transitive roles, conjunctive query answering over DL knowledge bases is only poorly understood if transitive roles are admitted in the query. In this paper, we consider unions of conjunctive queries over knowledge bases formulated in the prominent DL SHIQ and allow transitive roles in both the query and the knowledge base. We show decidability of query answering in this setting and establish two tight complexity bounds: regarding combined complexity, we prove that there is a deterministic algorithm for query answering that needs time single exponential in the size of the KB and double exponential in the size of the query, which is optimal. Regarding data complexity, we prove containment in co-NP.
Christoph Haase and Carsten Lutz: **Complexity of Subsumption in the EL Family of Description Logics: Acyclic and Cyclic TBoxes**. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, *Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08)*, volume 178 of *Frontiers in Artificial Intelligence and Applications*, pages 25–29. IOS Press, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

We perform an exhaustive study of the complexity of subsumption in the EL family of lightweight description logics w.r.t. acyclic and cyclic TBoxes. It turns out that there are interesting members of this family for which subsumption w.r.t. cyclic TBoxes is tractable, whereas it is ExpTime-complete w.r.t. general TBoxes. For other extensions that are intractable w.r.t. general TBoxes, we establish intractability already for acyclic and cyclic TBoxes.
Matthias Heinrich, Antje Boehm-Peters, and Martin Knechtel: **MoDDo - a tailored documentation system for model-driven software development**. In *ICWI '08: Proceedings of the IADIS International Conference WWW/Internet*, pages 321–324, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In the last decade, Model-Driven Software Development (MDSD) has become an established software engineering discipline. The new approach dramatically changed the entire software development lifecycle. However, the documentation process was not adapted and remained stuck with the old paradigms. In this paper, we propose a model-driven documentation system which is adapted to the MDSD lifecycle and therefore exploits synergies coming along with the alignment of software development and software documentation. Furthermore, the proposed documentation system builds upon the Internet as its major provisioning platform.
Miki Hermann and Barış Sertkaya: **On the Complexity of Computing Generators of Closed Sets**. In Raoul Medina and Sergei A. Obiedkov, editors, *Proceedings of the 6th International Conference on Formal Concept Analysis, (ICFCA 2008)*, volume 4933 of *Lecture Notes in Computer Science*, pages 158–168. Springer Verlag, 2008.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We investigate the computational complexity of some decision and counting problems related to generators of closed sets fundamental in Formal Concept Analysis. We recall results from the literature about the problem of checking the existence of a generator with a specified cardinality, and about the problem of determining the number of minimal generators. Moreover, we show that the problem of counting minimum cardinality generators is #·coNP-complete. We also present an incremental-polynomial time algorithm from relational database theory that can be used for computing all minimal generators of an implication-closed set.
Martin Knechtel: **Access restriction inside ontologies**. In Rainer Ruggaber, editor, *I-ESA'08: Proceedings of the 1st Internet of Services Doctoral Symposium 2008 at International Conference on Interoperability of Enterprise Systems and Applications*, volume 374 of *CEUR Workshop Proceedings, ISSN 1613-0073*, 2008.

BibTeX entry
Paper (PDF)

Martin Knechtel: **Access rights and collaborative ontology integration for reuse across security domains**. In Philippe Cudré-Mauroux, editor, *Proceedings of the ESWC 2008 Ph.D. Symposium*, volume 358 of *CEUR Workshop Proceedings, ISSN 1613-0073*, pages 36–40, 2008.

BibTeX entry
Paper (PDF)

Martin Knechtel and Jan Hladik: **RBAC Authorization Decision with DL Reasoning**. In *ICWI '08: Proceedings of the IADIS International Conference WWW/Internet*, pages 169–176, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Access control is also crucial for the Semantic Web. Technologies and standards from the Semantic Web community itself provide powerful means to model access control definitions and automatically reason about them. We extend Hierarchical Role Based Access Control by a class hierarchy of the accessed objects and give it the name RBAC-CH. We present a concept to implement this model in a DL knowledge base in the form of an OWL 1.1 ontology. The permissions are defined for user roles on object classes. The concrete permissions of users to objects are then automatically derived by a reasoning service. We present a straightforward ontology model and evaluate it in a running example with a state-of-the-art reasoner. For the RBAC policy enforcement we need to run the reasoner only once, and at runtime we only need to read out the inferred knowledge base to decide about authorization.
Martin Knechtel, Jan Hladik, and Frithjof Dau: **Using OWL DL Reasoning to decide about authorization in RBAC**. In Catherine Dolbear, Alan Ruttenberg, and Ulrike Sattler, editors, *OWLED '08: Proceedings of the OWLED 2008 Workshop on OWL: Experiences and Directions*, volume 432 of *CEUR Workshop Proceedings*, 2008.

BibTeX entry
Paper (PDF)

Martin Knechtel and Daniel Schuster: **Semantische Integration und Wiederverwendung von Produktontologien für offene Marktplätze im Web**. In *Proceedings of GeNeMe'08 Workshop*, 2008. In German.

BibTeX entry
Paper (PDF)

Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: **CEX and MEX: Logical Diff and Semantic Module Extraction in a Fragment of OWL**. In Kendall Clark and Peter F. Patel-Schneider, editors, *In Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

We consider a logical diff operator and semantic module extraction from ontologies. In recent work, we have shown that, for the fragment EL of OWL, these problems can be solved in polynomial time. In this paper, we evaluate our algorithms by experimenting with the prototype implementations CEX and MEX on real-world ontologies such as SNOMED. The experiments show the practicability of our approach and highlight the benefits of a strictly semantic approach: the diff operation is very fine-grained and the extracted modules are smaller than the ones generated by related approaches.
Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: **Formal Properties of Modularisation**. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, *Proceedings of the 4th International Joint Conference on Automated Reasoning (IJCAR2008)*, number 5195 in *LNCS*, pages 179–193. Springer, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Modularity of ontologies is currently an active research field, and many different notions of a module have been proposed. In this paper, we review the fundamental principles of modularity and identify formal properties that a robust notion of modularity should satisfy. We explore these properties in detail in the contexts of description logic and classical predicate logic and put them into the perspective of well-known concepts from logic and modular software specification such as interpolation, forgetting and uniform interpolation. We also discuss reasoning problems related to modularity.
Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: **Semantic Modularity and Module Extraction in Description Logics**. In Malik Ghallab, Constantine D. Spyropoulos, Nikos Fakotakis, and Nikos Avouris, editors, *Proceedings of the 18th European Conference on Artificial Intelligence (ECAI08)*, volume 178 of *Frontiers in Artificial Intelligence and Applications*, pages 55–59. IOS Press, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

The aim of this paper is to study semantic notions of modularity in description logic (DL) terminologies and reasoning problems that are relevant for modularity. We define two notions of a module whose independence is formalized in a model-theoretic way. Focusing mainly on the DLs EL and ALC, we then develop algorithms for module extraction, for checking whether a part of a terminology is a module, and for a number of related problems. We also analyse the complexity of these problems, which ranges from tractable to undecidable. Finally, we provide an experimental evaluation of our module extraction algorithms based on the large-scale terminology SNOMED CT.
Boris Konev, Carsten Lutz, Dirk Walther, and Frank Wolter: **Logical Difference and Module Extraction with CEX and MEX**. In *Proceedings of the 21st International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper, we study the module extraction and logical difference problems for acyclic EL-terminologies. We show that both problems are tractable, present prototype implementations, and evaluate performance on a series of examples. In particular, our implementations can handle large real-world terminologies such as SNOMED-CT.
Hongkai Liu, Carsten Lutz, and Maja Milicic: **The Projection Problem for EL Actions**. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper, we investigate the complexity of executability and projection in EL and the extension of EL with atomic negation. In both cases, we allow for negated assertions in the post-conditions of actions. Our results show that, in general, tractability does not transfer from instance checking in EL to executability and projection. Even in EL without TBoxes, the latter problems are co-NP-hard. This is due to two sources of intractability: (1) existential restrictions in the initial ABox together with negated assertions in post-conditions; and (2) conditional post-conditions. We prove a matching co-NP upper bound for EL with atomic negation. We also show that, in the presence of acyclic TBoxes, projection in EL is PSpace-hard and thus not easier than in ALC. Finally, we identify restrictions under which executability and projection in EL w.r.t. acyclic TBoxes can be decided in polynomial time.
Carsten Lutz: **The Complexity of Conjunctive Query Answering in Expressive Description Logics**. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek, editors, *Proceedings of the 4th International Joint Conference on Automated Reasoning (IJCAR2008)*, number 5195 in *LNAI*, pages 179–193. Springer, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Conjunctive query answering plays a prominent role in applications of description logics (DLs) that involve instance data, but its exact complexity was a long-standing open problem. We determine the complexity of conjunctive query answering in expressive DLs between ALC and SHIQ, and thus settle the problem. In a nutshell, we show that conjunctive query answering is 2ExpTime-complete in the presence of inverse roles, and only ExpTime-complete without them.
Carsten Lutz: **Two Upper Bounds for Conjunctive Query Answering in SHIQ**. In *Proceedings of the 21st International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

We have shown recently that, in extensions of ALC that involve inverse roles, conjunctive query answering is harder than satisfiability: it is 2-ExpTime-complete in general and NExpTime-hard if queries are connected and contain at least one answer variable. In this paper, we show that, in SHIQ without inverse roles (and without transitive roles in the query), conjunctive query answering is only ExpTime-complete and thus not harder than satisfiability. We also show that the mentioned NExpTime lower bound is tight.
Carsten Lutz, Frank Wolter, and Michael Zakharyaschev: **Temporal Description Logics: A Survey**. In *Proceedings of the Fifteenth International Symposium on Temporal Representation and Reasoning*. IEEE Computer Society Press, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.
Rafael Peñaloza: **Automata-based Pinpointing for DLs**. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

The task of pinpointing the relevant subsets of axioms for a given property has gained relevance in recent years. In this paper we show how automata-based decision procedures can be adapted to produce a so-called pinpointing formula. The relevance of this method is shown by giving an (optimal) algorithm that computes pinpointing formulas for unsatisfiability of SI concepts w.r.t. general TBoxes.
Stefan Schulz, Kornél Markó, and Boontawee Suntisrivaraporn: **Formal representation of complex SNOMED CT expressions**. *BMC Medical Informatics and Decision Making*, 8(1):S9, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Background: Definitory expressions about clinical procedures, findings and diseases constitute a major benefit of a formally founded clinical reference terminology which is ontologically sound and suited for formal reasoning. SNOMED CT claims to support formal reasoning by description-logic based concept definitions. Methods: On the basis of formal ontology criteria we analyze complex SNOMED CT concepts, such as "Concussion of Brain with(out) Loss of Consciousness", using alternatively full first-order logic and the description logic EL. Results: Typical complex SNOMED CT concepts, whether they include negations or not, can be expressed in full first-order logic. Negations cannot be properly expressed in the description logic EL underlying SNOMED CT. All concepts the meaning of which implies a temporal scope may be subject to diverging interpretations, which are often unclear in SNOMED CT as their contextual determinants are not made explicit. Conclusion: The description of complex medical occurrents is ambiguous, as the same situations can be described as (i) a complex occurrent C that has A and B as temporal parts, (ii) a simple occurrent A' defined as a kind of A followed by some B, or (iii) a simple occurrent B' defined as a kind of B preceded by some A. As negative statements in SNOMED CT cannot be exactly represented without a (computationally costly) extension of the set of logical constructors, a solution can be the reification of negative statements (e.g., "Period with no Loss of Consciousness"), or the use of the SNOMED CT context model. However, the interpretation of SNOMED CT context model concepts as description logic axioms is not recommended, because this may entail unintended models.
Barış Sertkaya: **Explaining User Errors in Description Logic Knowledge Base Completion**. In *Informal Proceedings of the 2008 International Workshop on Complexity, Expressibility, and Decidability in Automated Reasoning (CEDAR'08)*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

In our previous work we have developed a method for completing a Description Logic knowledge base w.r.t. a fixed interpretation by asking questions to a domain expert. Our experiments showed that during this process the domain expert sometimes gives wrong answers to the questions, which cause the resultant knowledge base to have unwanted consequences. In the present work we consider the problem of explaining the reasons of such unwanted consequences in knowledge base completion. We show that in this setting the problem of deciding the existence of an explanation within a specified cardinality bound is NP-complete, and the problem of counting explanations that are minimal w.r.t. set inclusion is #P-complete. We also provide an algorithm that computes one minimal explanation by performing at most polynomially many subsumption tests.
Barış Sertkaya: **Explaining User Errors in Knowledge Base Completion**. In *Proceedings of the 2008 International Workshop on Description Logics (DL2008)*, volume 353 of *CEUR-WS*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Knowledge base completion is a method for extending both the terminological and assertional part of a Description Logic knowledge base by using information provided by a domain expert. It ensures that the extended knowledge base is complete w.r.t. a fixed interpretation in a certain, well-defined sense. Here we consider the problem of explaining user errors in knowledge base completion. We show that for this setting, the problem of deciding the existence of an explanation within a specified cardinality bound is NP-complete, and the problem of counting explanations that are minimal w.r.t. set inclusion is #P-complete. We also provide an algorithm that computes one minimal explanation by performing at most polynomially many subsumption tests.
Boontawee Suntisrivaraporn: **Empirical evaluation of reasoning in lightweight DLs on life science ontologies**. In *Proceedings of the 2nd Mahasarakham International Workshop on AI (MIWAI'08)*, 2008.

BibTeX entry
Paper (PDF)

#### Abstract:

Description Logics (DLs) belong to a successful family of knowledge representation formalisms with two key assets: formally well-defined semantics, which allows to represent knowledge in an unambiguous way, and automated reasoning, which allows to infer implicit knowledge from the one given explicitly. One of the most prominent applications of DLs is their use as ontology languages, especially for the life science domain. This paper investigates several life science ontologies and summarizes their common characteristics. It suggests that the use of lightweight DLs in the EL family, in which reasoning is tractable, is beneficial both in terms of expressivity and of scalability. The claim is supported by extensive empirical evaluation of various DL reasoning services on large-scale life science ontologies, including an overview comparison of state-of-the-art DL reasoners.
Boontawee Suntisrivaraporn: **Module Extraction and Incremental Classification: A Pragmatic Approach for EL^{+} Ontologies**. In Sean Bechhofer, Manfred Hauswirth, Joerg Hoffmann, and Manolis Koubarakis, editors, *Proceedings of the 5th European Semantic Web Conference (ESWC'08)*, volume 5021 of *Lecture Notes in Computer Science*, pages 230–244. Springer-Verlag, 2008.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The description logic EL+ has recently proved practically useful in the life science domain with the presence of several large-scale biomedical ontologies such as SNOMED CT. To deal with ontologies of this scale, the standard reasoning service of classification is essential but not sufficient. The ability to extract relevant fragments from a large ontology and to incrementally classify it has become more crucial to support ontology design, maintenance and re-use. In this paper, we propose a pragmatic approach to module extraction and incremental classification for EL+ ontologies and report on empirical evaluations of our algorithms, which have been implemented as an extension of the CEL reasoner.
Boontawee Suntisrivaraporn, Guilin Qi, Qiu Ji, and Peter Haase: **A Modularization-based Approach to Finding All Justifications for OWL DL Entailments**. In John Domingue and Chutiporn Anutariya, editors, *Proceedings of the 3rd Asian Semantic Web Conference (ASWC'08)*, volume 5367 of *Lecture Notes in Computer Science*, pages 1–15. Springer-Verlag, 2008.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Finding the justifications for an entailment (i.e., minimal sets of axioms responsible for it) is a prominent reasoning service in ontology engineering, as justifications facilitate important tasks like debugging inconsistencies or undesired subsumption. Though several algorithms for finding all justifications exist, issues concerning efficiency and scalability remain a challenge due to the sheer size of real-life ontologies. In this paper, we propose a novel method for finding all justifications in OWL DL ontologies by limiting the search space to smaller modules. To this end, we show that so-called locality-based modules cover all axioms in the justifications. We present empirical results that demonstrate an improvement of several orders of magnitude in efficiency and scalability of finding all justifications in OWL DL ontologies.

## 2007

A. Artale, R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: **Temporalising Tractable Description Logics**. In *Proceedings of the Fourteenth International Symposium on Temporal Representation and Reasoning*. IEEE Computer Society Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

It is known that for temporal languages, such as first-order LTL, reasoning about constant (time-independent) relations is almost always undecidable. This applies to temporal description logics as well: constant binary relations together with general concept subsumptions in combinations of LTL and the basic description logic ALC cause undecidability. In this paper, we explore temporal extensions of two recently introduced families of "weak" description logics known as DL-Lite and EL. Our results are twofold: temporalisations of even rather expressive variants of DL-Lite turn out to be decidable, while the temporalisation of EL with general concept subsumptions and constant relations is undecidable.
Alessandro Artale, Carsten Lutz, and David Toman: **A Description Logic of Change**. In Manuela Veloso, editor, *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07)*, pages 218–223. AAAI Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We combine the modal logic S5 with the description logic (DL) ALCQI. The resulting multi-dimensional DL S5-ALCQI supports reasoning about change by allowing one to express that concepts and roles change over time. It cannot, however, discriminate between changes in the past and in the future. Our main technical result is that satisfiability of S5-ALCQI concepts with respect to general TBoxes (including GCIs) is decidable and 2-ExpTime-complete. In contrast, reasoning in temporal DLs that are able to discriminate between past and future is inherently undecidable. We argue that our logic is sufficient for reasoning about temporal conceptual models with time-stamping constraints.
F. Baader, editor: **18th International Conference on Rewriting Techniques and Applications (RTA 2007)**. Springer-Verlag, 2007.

BibTeX entry

#### Abstract:

This volume contains the papers presented at the 18th International Conference on Rewriting Techniques and Applications (RTA'07), which was held on June 26–28, 2007, on the Paris campus of the Conservatoire National des Arts et Métiers (CNAM) in Paris, France.
F. Baader and S. Ghilardi: **Connecting Many-Sorted Theories**. *The Journal of Symbolic Logic*, 72(2):535–583, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we consider variants of the basic connection scheme. Our results can be seen as a generalization of the so-called E-connection approach for combining modal logics to an algebraic setting.
F. Baader, J. Hladik, and R. Peñaloza: **Blocking Automata for PSPACE DLs**. In D. Calvanese, E. Franconi, and S. Tessaris, editors, *Proceedings of the 2007 International Workshop on Description Logics*, *CEUR-WS*, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as concept satisfiability. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI.
F. Baader, J. Hladik, and R. Peñaloza: **SI! Automata Can Show PSPACE Results for Description Logics**. In C. Martin-Vide, editor, *Proceedings of the First International Conference on Language and Automata Theory and Applications (LATA'07)*, 2007.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSPACE-complete logics, it is often very hard to design optimal tableau-based algorithms for EXPTIME-complete DLs. In contrast, the automata-based approach is usually well-suited to prove EXPTIME upper-bounds, but its direct application will usually also yield an EXPTIME-algorithm for a PSPACE-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSPACE-algorithm. We illustrate the usefulness of this approach by proving a new PSPACE upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
F. Baader, I. Horrocks, and U. Sattler: **Description Logics**. In Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter, editors, *Handbook of Knowledge Representation*, pages 135–179. Elsevier, 2007.

BibTeX entry

#### Abstract:

In this chapter we will introduce description logics, a family of logic-based knowledge representation languages that can be used to represent the terminological knowledge of an application domain in a structured way. We will first review their provenance and history, and show how the field has developed. We will then introduce the basic description logic ALC in some detail, including definitions of syntax, semantics and basic reasoning services, and describe important extensions such as inverse roles, number restrictions, and concrete domains. Next, we will discuss the relationship between description logics and other formalisms, in particular first order and modal logics; the most commonly used reasoning techniques, in particular tableaux, resolution and automata based techniques; and the computational complexity of basic reasoning problems. After reviewing some of the most prominent applications of description logics, in particular ontology language applications, we will conclude with an overview of other aspects of description logic research, and with pointers to the relevant literature.
Franz Baader, Bernhard Ganter, Ulrike Sattler, and Baris Sertkaya: **Completing Description Logic Knowledge Bases using Formal Concept Analysis**. In Christine Golbreich, Aditya Kalyanpur, and Bijan Parsia, editors, *Proceedings of the Third International Workshop OWL: Experiences and Directions (OWLED 2007)*. CEUR-WS, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We propose an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the knowledge base and by a domain expert. The use of techniques from Formal Concept Analysis ensures that, on the one hand, the interaction with the expert is kept to a minimum, and, on the other hand, we can show that the extended knowledge base is complete in a certain, well-defined sense.
Franz Baader, Bernhard Ganter, Ulrike Sattler, and Baris Sertkaya: **Completing Description Logic Knowledge Bases using Formal Concept Analysis**. In *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07)*. AAAI Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We propose an approach for extending both the terminological and the assertional part of a Description Logic knowledge base by using information provided by the knowledge base and by a domain expert. The use of techniques from Formal Concept Analysis ensures that, on the one hand, the interaction with the expert is kept to a minimum, and, on the other hand, we can show that the extended knowledge base is complete in a certain, well-defined sense.
Franz Baader, Carsten Lutz, and Boontawee Suntisrivaraporn: **Is Tractable Reasoning in Extensions of the Description Logic EL Useful in Practice?**. *Journal of Logic, Language and Information, Special Issue on Methods for Modality (M4M)*, 2007. To appear.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Extensions of the description logic EL have recently been proposed as lightweight ontology languages. The most important feature of these extensions is that, despite including powerful expressive means such as general concept inclusion axioms, reasoning can be carried out in polynomial time. In this paper, we consider one of these extensions, EL+, and introduce a refinement of the known polynomial-time classification algorithm for this logic. This refined algorithm was implemented in our **CEL** reasoner. We describe the results of several experiments with **CEL** on large ontologies from practice, which show that even a relatively straightforward implementation of the described algorithm outperforms highly optimized, state-of-the-art tableau reasoners for expressive description logics.

Franz Baader and Rafael Peñaloza: **Axiom Pinpointing in General Tableaux**. In N. Olivetti, editor, *Proceedings of the 16th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods TABLEAUX 2007*, volume 4548 of *Lecture Notes in Computer Science*, pages 11–27. Aix-en-Provence, France, Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Axiom pinpointing has been introduced in description logics to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: **Pinpointing in the Description Logic EL**. In *Proceedings of the 30th German Conference on Artificial Intelligence (KI2007)*, volume 4667 of *Lecture Notes in Artificial Intelligence*, pages 52–67. Osnabrück, Germany, Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Until now, the pinpointing approach has only been applied to the DL ALC and some of its extensions. This paper considers axiom pinpointing in the less expressive DL EL+, for which subsumption can be decided in polynomial time. More precisely, we consider an extension of the pinpointing problem where the knowledge base is divided into a *static* part, which is always present, and a *refutable* part, of which subsets are taken. We describe an extension of the subsumption algorithm for EL+ that can be used to compute all minimal subsets of (the refutable part of) a given TBox that imply a certain subsumption relationship. The worst-case complexity of this algorithm turns out to be exponential. This is not surprising since we can show that a given TBox may have exponentially many such minimal subsets. However, we can also show that the problem is not even output polynomial, i.e., unless P=NP, there cannot be an algorithm computing all such minimal sets that is polynomial in the size of its input *and output*. In addition, we show that finding out whether there is such a minimal subset within a given cardinality bound is an NP-complete problem. In contrast to these negative results, we also show that one such minimal subset can be computed in polynomial time. Finally, we provide some encouraging experimental results regarding the performance of a practical algorithm that computes one (small, but not necessarily minimal) subset that has a given subsumption relation as consequence.

Franz Baader, Rafael Peñaloza, and Boontawee Suntisrivaraporn: **Pinpointing in the Description Logic EL**. In *Proceedings of the 2007 International Workshop on Description Logics (DL2007)*, *CEUR-WS*, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence in question. Until now, the pinpointing approach has only been applied to the DL ALC and some of its extensions. This paper considers axiom pinpointing in the DL EL, for which subsumption can be decided in polynomial time. We describe an extension of the subsumption algorithm for EL that can be used to compute all minimal subsets of a given TBox that imply a certain subsumption relationship. We also show that an EL TBox may have exponentially many such minimal subsets and that even finding out whether there is such a minimal subset within a given cardinality bound is an NP-complete problem. In contrast to these negative results, we also show that one such minimal set can be computed in polynomial time. Finally, we provide some encouraging experimental results regarding the performance of a practical algorithm that computes one (not necessarily minimal) set that has a given subsumption relation as consequence.
Franz Baader, Barış Sertkaya, and Anni-Yasmin Turhan: **Computing the Least Common Subsumer w.r.t. a Background Terminology**. *Journal of Applied Logic*, 5(3):392–420, 2007.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive Description Logics (DLs) whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology. We will show both theoretical results on the existence of the least common subsumer in this setting, and describe a practical approach—based on a method from formal concept analysis—for computing good common subsumers, which may, however, not be the least ones. We will also describe results obtained in a first evaluation of this practical approach.
Balder ten Cate and Carsten Lutz: **Query Containment in Very Expressive XPath dialects**. In Leonid Libkin, editor, *26th ACM Symposium on Principles of Database Systems (PODS'07)*, pages 73–82. ACM Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

Query containment has been studied extensively for fragments of XPath 1.0. For instance, the problem is known to be EXPTIME-complete for CoreXPath, the navigational core of XPath 1.0. Much less is known about query containment in (fragments of) the richer language XPath 2.0. In this paper, we consider extensions of CoreXPath with the following operators, which are all part of XPath 2.0 (except the last): path intersection, path equality, path complementation, for-loops, and transitive closure. For each combination of these operators, we determine the complexity of query containment, both with and without DTDs. It turns out to range from EXPTIME (for extensions with path equality) and 2-EXPTIME (for extensions with path intersection) to non-elementary (for extensions with path complementation or for-loops). In almost all cases, adding transitive closure on top has no further impact on the complexity. We also investigate the effect of dropping the upward and/or sibling axes, and show that this sometimes leads to a reduction in complexity. Since the languages we study include negation and conjunction in filters, our complexity results can equivalently be stated in terms of satisfiability. We also analyze the above languages in terms of succinctness.
Birte Glimm, Carsten Lutz, Ian Horrocks, and Ulrike Sattler: **Answering conjunctive queries in the SHIQ description logic**. In Manuela Veloso, editor, *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07)*, pages 399–404. AAAI Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

Conjunctive queries play an important role as an expressive query language for Description Logics (DLs). Although modern DLs usually provide for transitive roles, it was an open problem whether conjunctive query answering over DL knowledge bases is decidable if transitive roles are admitted in the query. In this paper, we consider conjunctive queries over knowledge bases formulated in the popular DL SHIQ and allow transitive roles in both the query and the knowledge base. We show that query answering is decidable and establish the following complexity bounds: regarding combined complexity, we devise a deterministic algorithm for query answering that needs time single exponential in the size of the KB and double exponential in the size of the query. Regarding data complexity, we prove co-NP-completeness.
Stefan Göller, Markus Lohrey, and Carsten Lutz: **PDL with Intersection and Converse is 2EXP-complete**. In Helmut Seidl, editor, *Proceedings of the Tenth International Conference on Foundations of Software Science and Computation Structures (FoSSaCS'07)*, volume 4423 of *Lecture Notes in Computer Science*, pages 198–212. Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the complexity of satisfiability in the expressive extension ICPDL of PDL (Propositional Dynamic Logic), which admits intersection and converse as program operations. Our main result is containment in 2EXP, which improves the previously known non-elementary upper bound and implies 2EXP-completeness due to an existing lower bound for PDL with intersection. The proof proceeds by showing that every satisfiable ICPDL formula has a model of tree-width at most two and then giving a reduction to the (non)-emptiness problem for alternating two-way automata on infinite trees. In this way, we also reprove in an elegant way Danecki's difficult result that satisfiability for PDL with intersection is in 2EXP.
A. Krisnadhi and C. Lutz: **Data Complexity in the EL family of DLs**. In *Proceedings of the 2007 International Workshop on Description Logics (DL2007)*, *CEUR-WS*, 2007. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the data complexity of instance checking and conjunctive query answering in the EL family of DLs, with a particular emphasis on the boundary of tractability. We identify a large number of intractable extensions of EL, but also show that in ELIf, the extension of EL with inverse roles and global functionality, conjunctive query answering is tractable regarding data complexity. In contrast, instance checking in EL extended with only inverse roles or global functionality is ExpTime-complete regarding combined complexity.
Adila Krisnadhi and Carsten Lutz: **Data Complexity in the EL family of Description Logics**. In Nachum Dershowitz and Andrei Voronkov, editors, *Proceedings of the 14th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR2007)*, volume 4790 of *Lecture Notes in Artificial Intelligence*, pages 333–347. Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We study the data complexity of instance checking and conjunctive query answering in the EL family of description logics, with a particular emphasis on the boundary of tractability. We identify a large number of intractable extensions of EL, but also show that in ELI^{f}, the extension of EL with inverse roles and global functionality, conjunctive query answering is tractable regarding data complexity. In contrast, already instance checking in EL extended with only inverse roles or global functionality is ExpTime-complete regarding combined complexity.

C. Löding, C. Lutz, and O. Serre: **Propositional Dynamic Logic with Recursive Programs**. *Journal of Logic and Algebraic Programming*, 73:51–69, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

We extend the propositional dynamic logic PDL of Fischer and Ladner with a restricted kind of recursive programs using the formalism of visibly pushdown automata (Alur, Madhusudan 2004). We show that the satisfiability problem for this extension remains decidable, generalising known decidability results for extensions of PDL by non-regular programs. Our decision procedure establishes a 2-ExpTime upper complexity bound, and we prove a matching lower bound that applies already to rather weak extensions of PDL with non-regular programs. Thus, we also show that such extensions tend to be more complex than standard PDL.
C. Lutz: **Inverse Roles Make Conjunctive Queries Hard**. In *Proceedings of the 2007 International Workshop on Description Logics (DL2007)*, *CEUR-WS*, 2007. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

Conjunctive query answering is an important DL reasoning task. Although this task is by now quite well-understood, tight complexity bounds for conjunctive query answering in expressive DLs have never been obtained: all known algorithms run in deterministic double exponential time, but the existing lower bound is only an ExpTime one. In this paper, we prove that conjunctive query answering in ALCI is 2-ExpTime-hard (and thus complete), and that it becomes NExpTime-complete under some reasonable assumptions.
C. Lutz and M. Milicic: **A Tableau Algorithm for DLs with Concrete Domains and GCIs**. *Journal of Automated Reasoning*, 38(1–3):227–259, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

To use description logics (DLs) in an application, it is crucial to identify a DL that is sufficiently expressive to represent the relevant notions of the application domain, but for which reasoning is still decidable. Two means of expressivity that are required by many modern applications of DLs are concrete domains and general TBoxes. The former are used for defining concepts based on concrete qualities of their instances such as the weight, age, duration, and spatial extension. The purpose of the latter is to capture background knowledge by stating that the extension of a concept is included in the extension of another concept. Unfortunately, it is well-known that combining concrete domains with general TBoxes often leads to DLs for which reasoning is undecidable. In this paper, we identify a general property of concrete domains that is sufficient for proving decidability of DLs with both concrete domains and general TBoxes. We exhibit some useful concrete domains, most notably a spatial one based on the RCC-8 relations, which have this property. Then, we present a tableau algorithm for reasoning in DLs equipped with concrete domains and general TBoxes.
Carsten Lutz, Dirk Walther, and Frank Wolter: **Conservative Extensions in Expressive Description Logics**. In Manuela Veloso, editor, *Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI'07)*, pages 453–458. AAAI Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

The notion of a conservative extension plays a central role in ontology design and integration: it can be used to formalize ontology refinements, safe mergings of two ontologies, and independent modules inside an ontology. Regarding reasoning support, the most basic task is to decide whether one ontology is a conservative extension of another. It has recently been proved that this problem is decidable and 2ExpTime-complete if ontologies are formulated in the basic description logic ALC. We consider more expressive description logics and begin to map out the boundary between logics for which conservativity is decidable and those for which it is not. We prove that conservative extensions are 2ExpTime-complete in ALCQI, but undecidable in ALCQIO. We also show that if conservative extensions are defined model-theoretically rather than in terms of the consequence relation, they are undecidable already in ALC.
Carsten Lutz and Frank Wolter: **Conservative Extensions in the Lightweight Description Logic EL**. In Frank Pfenning, editor, *Proceedings of the 21st Conference on Automated Deduction (CADE-21)*, volume 4603 of *Lecture Notes in Artificial Intelligence*, pages 84–99. Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

We bring together two recent trends in description logic (DL): lightweight DLs in which the subsumption problem is tractable and conservative extensions as a central tool for formalizing notions of ontology design such as refinement and modularity. Our aim is to investigate conservative extensions as an automated reasoning problem for the basic tractable DL EL. The main result is that deciding (deductive) conservative extensions is ExpTime-complete, thus more difficult than subsumption in EL, but not more difficult than subsumption in expressive DLs. We also show that if conservative extensions are defined model-theoretically, the associated decision problem for EL is undecidable.
Maja Milicic: **Complexity of Planning in Action Formalisms Based on Description Logics**. In *Proceedings of the 14th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2007)*, *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper, we continue the recently started work on integrating action formalisms with description logics (DLs), by investigating planning in the context of DLs. We prove that the plan existence problem is decidable for actions described in fragments of ALCQIO. More precisely, we show that its computational complexity coincides with the one of projection for DLs between ALC and ALCQIO if operators contain only unconditional post-conditions. If we allow for conditional post-conditions, the plan existence problem is shown to be in 2-ExpSpace.
Maja Milicic: **Planning in Action Formalisms based on DLs: First Results**. In *Proceedings of the 2007 International Workshop on Description Logics (DL2007)*, *CEUR-WS*, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

In this paper, we continue the recently started work on integrating action formalisms with description logics (DLs), by investigating planning in the context of DLs. We prove that the plan existence problem is decidable for actions described in fragments of ALCQIO. More precisely, we show that, if post-conditions of operators are unconditional, its computational complexity coincides with the one of projection for DLs between ALC and ALCQIO.
Stefan Schulz, Boontawee Suntisrivaraporn, and Franz Baader: **SNOMED CT's Problem List: Ontologists' and Logicians' Therapy Suggestions**. In *Proceedings of the Medinfo 2007 Congress*, *Studies in Health Technology and Informatics (SHTI series)*. IOS Press, 2007.

BibTeX entry
Paper (PDF)

#### Abstract:

After a critical review of the present architecture of SNOMED CT, addressing both logical and ontological issues, we present a roadmap towards an overall improvement of this terminology. In particular, we recommend the following actions: Upper level categories should be re-arranged according to a standard upper level ontology. Meta-class like concepts should be identified and removed from the taxonomy. SNOMED concepts denoting (non instantiable) individual entities (e.g. geographical regions) should be kept separate from those concepts that denote (instantiable) types. SNOMED binary relations should be reduced to a set of canonical ones, following existing recommendations. Taxonomies should be cleansed and split into disjoint partitions. The number of full definitions should be increased. Finally, we propose a new approach to modeling part-whole hierarchies, as well as the integration of qualifier relations into the description logic framework.
Boontawee Suntisrivaraporn, Franz Baader, Stefan Schulz, and Kent Spackman: **Replacing SEP-Triplets in SNOMED CT using Tractable Description Logic Operators**. In Riccardo Bellazzi, Ameen Abu-Hanna, and Jim Hunter, editors, *Proceedings of the 11th Conference on Artificial Intelligence in Medicine (AIME'07)*, *Lecture Notes in Computer Science*. Springer-Verlag, 2007.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Reification of parthood relations according to the SEP-triplet encoding pattern has been employed in the clinical terminology SNOMED CT to simulate transitivity of the part-of relation via transitivity of the is-a relation and to inherit properties along part-of links. In this paper we argue that using a more expressive representation language, which allows for a direct representation of the relevant properties of the part-of relation, makes modelling less error prone while having no adverse effect on the efficiency of reasoning.
A.-Y. Turhan and Y. Bong: **Speeding up Approximation with Nicer Concepts**. In D. Calvanese, E. Franconi, V. Haarslev, D. Lembo, B. Motik, S. Tessaris, and A.-Y. Turhan, editors, *Proc. of the 2007 Description Logic Workshop (DL 2007)*, 2007.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Concept approximation is an inference service for Description Logics that provides "translations" of concept descriptions from one DL to a less expressive DL. In [4] a method for optimizing the computation of ALC-ALE-approximations of ALC-concept descriptions was introduced. The idea is to characterize a certain class of concept descriptions for which conjuncts can be approximated independently. In this paper we provide relaxed conditions for this class of ALC-concept descriptions, extend this notion to number restrictions and report on a first implementation of this method for ALCN-ALEN-approximation.

## 2006

F. Baader and R. Küsters: **Nonstandard Inferences in Description Logics: The Story So Far**. In D.M. Gabbay, S.S. Goncharov, and M. Zakharyaschev, editors, *Mathematical Problems from Applied Logic I*, volume 4 of *International Mathematical Series*, pages 1–75. Springer-Verlag, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Description logics (DLs) are a successful family of logic-based knowledge representation formalisms that can be used to represent the terminological knowledge of an application domain in a structured and formally well-founded way. DL systems provide their users with inference procedures that allow them to reason about the represented knowledge. Standard inference problems (such as the subsumption and the instance problem) are now well-understood. Their computational properties (such as decidability and complexity) have been investigated in detail, and modern DL systems are equipped with highly optimized implementations of these inference procedures, which - in spite of their high worst-case complexity - perform quite well in practice. In applications of DL systems it has turned out that building and maintaining large DL knowledge bases can be further facilitated by procedures for other, non-standard inference problems, such as computing the least common subsumer and the most specific concept, and rewriting and matching of concepts. While the research concerning these non-standard inferences is not as mature as that for the standard inferences, it has now reached a point where it makes sense to motivate these inferences within a uniform application framework, give an overview of the results obtained so far, describe the remaining open problems, and give perspectives for future research in this direction.
F. Baader and C. Lutz: **Description Logic**. In Patrick Blackburn, Johan van Benthem, and Frank Wolter, editors, *The Handbook of Modal Logic*, pages 757–820. Elsevier, 2006.

BibTeX entry
Paper (PS)

#### Abstract:

Description logics are a family of knowledge representation languages that were developed independently of modal logics, but later turned out to be closely related to them. This chapter introduces description logics and briefly recalls the connections between description and modal logics, but then concentrates on means of expressivity and reasoning problems that are important for description logics, but not in the focus of research in modal logics.
F. Baader, C. Lutz, and B. Suntisrivaraporn: **Efficient Reasoning in EL^{+}**. In *Proceedings of the 2006 International Workshop on Description Logics (DL2006)*, *CEUR-WS*, 2006.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

F. Baader, C. Lutz, and B. Suntisrivaraporn: **CEL—A Polynomial-time Reasoner for Life Science Ontologies**. In U. Furbach and N. Shankar, editors, *Proceedings of the 3rd International Joint Conference on Automated Reasoning (IJCAR'06)*, volume 4130 of *Lecture Notes in Artificial Intelligence*, pages 287–291. Springer-Verlag, 2006.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

CEL (Classifier for EL) is a reasoner for the small description logic EL+ which can be used to compute the subsumption hierarchy induced by EL+ ontologies. The most distinguishing feature of CEL is that, unlike other modern DL reasoners, it is based on a polynomial-time subsumption algorithm, which allows it to process very large ontologies in reasonable time. In spite of its restricted expressive power, EL+ is well-suited for formulating life science ontologies.
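The completion-rule saturation that polynomial-time EL reasoners such as CEL build on can be illustrated in a few lines. The following is a minimal sketch, not CEL's actual implementation: the axiom encoding, the function name `el_saturate`, and the toy ontology are all hypothetical, and the TBox is assumed to be already normalized into the four standard axiom forms.

```python
# Minimal sketch of completion-rule subsumption for (plain) EL.
# Hypothetical encoding of normalized axioms:
#   ("sub",  A, B)        A is subsumed by B
#   ("conj", A1, A2, B)   conjunction A1 and A2 is subsumed by B
#   ("ex_r", A, r, B)     A is subsumed by (exists r. B)
#   ("ex_l", r, A, B)     (exists r. A) is subsumed by B

def el_saturate(concept_names, axioms):
    """Return S with S[C] = set of concept names derived to subsume C."""
    S = {C: {C, "TOP"} for C in concept_names}
    R = set()  # derived role edges (r, C, D): C subsumed by (exists r. D)
    changed = True
    while changed:  # apply rules to fixpoint; only polynomially many additions
        changed = False
        for ax in axioms:
            if ax[0] == "sub":
                _, a, b = ax
                for c in concept_names:
                    if a in S[c] and b not in S[c]:
                        S[c].add(b); changed = True
            elif ax[0] == "conj":
                _, a1, a2, b = ax
                for c in concept_names:
                    if a1 in S[c] and a2 in S[c] and b not in S[c]:
                        S[c].add(b); changed = True
            elif ax[0] == "ex_r":
                _, a, r, b = ax
                for c in concept_names:
                    if a in S[c] and (r, c, b) not in R:
                        R.add((r, c, b)); changed = True
            elif ax[0] == "ex_l":
                _, r, a, b = ax
                for (r2, c, d) in list(R):
                    if r2 == r and a in S[d] and b not in S[c]:
                        S[c].add(b); changed = True
    return S

# Toy life-science-style ontology:
#   Cell subsumed by (exists hasPart. Nucleus)
#   (exists hasPart. Nucleus) subsumed by Eukaryote
names = ["Cell", "Nucleus", "Eukaryote"]
tbox = [("ex_r", "Cell", "hasPart", "Nucleus"),
        ("ex_l", "hasPart", "Nucleus", "Eukaryote")]
subsumers = el_saturate(names, tbox)  # derives Eukaryote as a subsumer of Cell
```

Because each rule only ever adds entries to the polynomially bounded sets S and R, saturation terminates after polynomially many rule applications, which is the source of the tractability the abstract refers to.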
F. Baader and A. Okhotin: **Complexity of Language Equations With One-Sided Concatenation and All Boolean Operations**. In Jordi Levy, editor, *Proceedings of the 20th International Workshop on Unification, UNIF'06*, pages 59–73, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Language equations are equations where both the constants occurring in the equations and the solutions are formal languages. They were first introduced in formal language theory, but are now also considered in other areas of computer science. In particular, they can be seen as unification problems in the algebra of languages whose operations are the Boolean operations and concatenation. They are also closely related to monadic set constraints. In the present paper, we restrict our attention to language equations with one-sided concatenation, but in contrast to previous work on these equations, we allow not just union but all Boolean operations to be used when formulating them. In addition, we are not just interested in deciding solvability of such equations, but also in deciding other properties of the set of solutions, like its cardinality (finite, infinite, uncountable) and whether it contains least/greatest solutions. We show that all these decision problems are ExpTime-complete.
Franz Baader, Silvio Ghilardi, and Cesare Tinelli: **A new combination procedure for the word problem that generalizes fusion decidability results in modal logics**. *Information and Computation*, 204(10):1413–1452, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics—which are non-disjoint since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other types of equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.
P. Bonatti, C. Lutz, A. Murano, and M. Vardi: **The Complexity of Enriched μ-Calculi**. In Michele Bugliesi, Bart Preneel, Vladimiro Sassone, and Ingo Wegener, editors, *Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, Part II (ICALP'06)*, volume 4052 of *Lecture Notes in Computer Science*, pages 540–551. Springer-Verlag, 2006.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

The fully enriched mu-calculus is the extension of the propositional mu-calculus with inverse programs, graded modalities, and nominals. While satisfiability in several expressive fragments of the fully enriched mu-calculus is known to be decidable and ExpTime-complete, it has recently been proved that the full calculus is undecidable. In this paper, we study the fragments of the fully enriched mu-calculus that are obtained by dropping at least one of the additional constructs. We show that, in all fragments obtained in this way, satisfiability is decidable and ExpTime-complete. Thus, we identify a family of decidable logics that are maximal (and incomparable) in expressive power. Our results are obtained by introducing two new automata models, showing that their emptiness problems are ExpTime-complete, and then reducing satisfiability in the relevant logics to this problem. The automata models we introduce are two-way graded alternating parity automata over infinite trees (2GAPT) and fully enriched automata (FEA) over infinite forests. The former are a common generalization of two incomparable automata models from the literature. The latter extend alternating automata in a similar way as the fully enriched mu-calculus extends the standard mu-calculus.
P. Bonatti, C. Lutz, and F. Wolter: **Expressive Non-Monotonic Description Logics Based on Circumscription**. In Patrick Doherty, John Mylopoulos, and Christopher Welty, editors, *Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06)*, pages 400–410. AAAI Press, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

We show that circumscription can be used to extend description logics (DLs) with non-monotonic features in a straightforward and transparent way. In particular, we consider extensions with circumscription of the expressive DLs ALCIO and ALCQO and prove that reasoning in these logics is decidable under a simple restriction: only concept names can be circumscribed, and role names vary freely during circumscription. We pinpoint the exact computational complexity of reasoning as complete for NP^{NExp} and NExp^{NP}, depending on whether or not the number of minimized and fixed predicates is assumed to be bounded by a constant. We also show that we cannot allow role names to be fixed during minimization rather than having them vary: this modification renders reasoning undecidable already in the basic DL ALC. Finally, we argue that non-monotonic DLs based on circumscription are an appropriate tool for modelling defeasible inheritance. In particular, we can avoid the restriction of non-monotonic reasoning to domain elements that are named by an individual constant, as adopted by other non-monotonic DLs.

S. Brandt: **Standard and Non-standard Reasoning in Description Logics**. Ph.D. thesis, Institute for Theoretical Computer Science, TU Dresden, Germany, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions. (1) Tractable reasoning revisited: in the 1990s, DL research largely answered the question for practically relevant yet tractable DL formalisms in the negative. Due to novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we have revisited this question, this time looking in a new direction: general terminologies (TBoxes) and extensions thereof defined over the DL EL and extensions thereof. As main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity in the sense that every additional standard DL constructor, every extension of the TBox formalism, or every more powerful concrete domain makes reasoning intractable. (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work extends the state of the art of non-standard inferences both w.r.t. theory and implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial time algorithms for matching under acyclic side conditions in ALN and sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes. (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally motivate the question for a tractable DL formalism in which both can be provided. As main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
S. Ghilardi, C. Lutz, and F. Wolter: **Did I Damage my Ontology? A Case for Conservative Extensions in Description Logics**. In Patrick Doherty, John Mylopoulos, and Christopher Welty, editors, *Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06)*, pages 187–197. AAAI Press, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

In computer science, ontologies are dynamic entities: to adapt them to new and evolving applications, it is necessary to frequently perform modifications such as the extension with new axioms and merging with other ontologies. We argue that, after performing such modifications, it is important to know whether the resulting ontology is a conservative extension of the original one. If this is not the case, then there may be unexpected consequences when using the modified ontology in place of the original one in applications. In this paper, we propose and investigate new reasoning problems based on the notion of conservative extension, assuming that ontologies are formulated as TBoxes in the description logic ALC. We show that the fundamental such reasoning problems are decidable and 2ExpTime-complete. Additionally, we perform a finer-grained analysis that distinguishes between the size of the original ontology and the size of the additional axioms. In particular, we show that there are algorithms whose runtime is "only" exponential in the size of the original ontology, but double exponential in the size of the added axioms. If the size of the new axioms is small compared to the size of the ontology, these algorithms are thus not significantly more complex than the standard reasoning services implemented in modern description logic reasoners. If the extension of an ontology is not conservative, our algorithm is capable of computing a concept that witnesses non-conservativeness. We show that the computed concepts are of (worst-case) minimal size.
S. Ghilardi, C. Lutz, F. Wolter, and M. Zakharyaschev: **Conservative Extensions in Modal Logics**. In Guido Governatori, Ian Hodkinson, and Yde Venema, editors, *Advances in Modal Logics Volume 6*, pages 187–207. College Publications, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Every modal logic L gives rise to the consequence relation that relates formulas phi and psi iff psi is true in a world of an L-model whenever phi is true in that world. We consider the following algorithmic problem for L: given two modal formulas phi1 and phi2, decide whether their conjunction is a conservative extension of phi1, in the sense that whenever psi is a consequence of the conjunction of phi1 and phi2 and psi does not contain propositional variables not occurring in phi1, then psi is already a consequence of phi1. We first prove that the conservativeness problem is co-NExpTime-hard for all modal logics of unbounded width (which have rooted frames with more than N successors of the root, for any N smaller than omega). Then we show that this problem is (i) co-NExpTime-complete for S5 and K, (ii) in for S4 and (iii) -complete for GL.3 (the logic of finite strict linear orders). The proofs for S5 and K use the fact that these logics have uniform interpolants of exponential size.
Jan Hladik and Rafael Peñaloza: **PSPACE Automata for Description Logics**. In B. Parsia, U. Sattler, and D. Toman, editors, *Proceedings of the 2006 International Workshop on Description Logics (DL'06)*, volume 189 of *CEUR-WS*, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Tree automata are often used for satisfiability testing in the area of description logics, which usually yields EXPTIME complexity results. We examine conditions under which this result can be improved, and we define two classes of automata, called segmentable and weakly-segmentable, for which emptiness can be decided using space logarithmic in the size of the automaton (and thus polynomial in the size of the input). The usefulness of segmentable automata is demonstrated by reproving the known PSPACE result for satisfiability of ALC concepts with respect to acyclic TBoxes.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: **Description Logic Actions with general TBoxes: a Pragmatic Approach**. In *Proceedings of the 2006 International Workshop on Description Logics (DL2006)*, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

We recently proposed action formalisms based on description logics (DLs) as decidable fragments of well-established action theories such as the Situation Calculus and the Fluent Calculus. One shortcoming of our initial proposal is that the considered formalisms admit only acyclic TBoxes, but not GCIs. In this paper, we define DL action formalisms that admit GCIs, propose a pragmatic approach to addressing the ramification problem that is introduced in this way, show that our formalism is decidable, and investigate its computational complexity.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: **Reasoning about Actions using Description Logics with general TBoxes**. In Michael Fisher, Wiebe van der Hoek, Boris Konev, and Alexei Lisitsa, editors, *Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA 2006)*, volume 4160 of *Lecture Notes in Artificial Intelligence*, pages 266–279. Springer-Verlag, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Action formalisms based on description logics (DLs) have recently been introduced as decidable fragments of well-established action theories such as the Situation Calculus and the Fluent Calculus. However, existing DL action formalisms fail to include general TBoxes, which are the standard tool for formalising ontologies in modern description logics. We define a DL action formalism that admits general TBoxes, propose an approach to addressing the ramification problem that is introduced in this way, show that our formalism is decidable and perform a detailed investigation of its computational complexity.
H. Liu, C. Lutz, M. Milicic, and F. Wolter: **Updating Description Logic ABoxes**. In Patrick Doherty, John Mylopoulos, and Christopher Welty, editors, *Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR'06)*, pages 46–56. AAAI Press, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

Description logic (DL) ABoxes are a tool for describing the state of affairs in an application domain. In this paper, we consider the problem of updating ABoxes when the state changes. We assume that changes are described at an atomic level, i.e., in terms of possibly negated ABox assertions that involve only atomic concepts and roles. We analyze such basic ABox updates in several standard DLs by investigating whether the updated ABox can be expressed in these DLs and, if so, whether it is computable and what is its size. It turns out that DLs have to include nominals and the "@" constructor of hybrid logic (or, equivalently, admit Boolean ABoxes) for updated ABoxes to be expressible. We devise algorithms to compute updated ABoxes in several expressive DLs and show that an exponential blowup in the size of the whole input (original ABox + update information) cannot be avoided unless every PTIME problem is LOGTIME-parallelizable. We also exhibit ways to avoid an exponential blowup in the size of the original ABox, which is usually large compared to the update information.
C. Lutz: **Complexity and Succinctness of Public Announcement Logic**. In Peter Stone and Gerhard Weiss, editors, *Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'06)*, pages 137–144. Association for Computing Machinery (ACM), 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

There is a recent trend of extending epistemic logic (EL) with dynamic operators that allow to express the evolution of knowledge and belief induced by knowledge-changing actions. The most basic such extension is public announcement logic (PAL), which is obtained from EL by adding an operator for truthful public announcements. In this paper, we consider the computational complexity of PAL and show that it coincides with that of EL. This holds in the single- and multi-agent case, and also in the presence of common knowledge operators. We also prove that there are properties that can be expressed exponentially more succinct in PAL than in EL. This shows that, despite the known fact that PAL and EL have the same expressive power, there is a benefit in adding the public announcement operator to EL: it exponentially increases the succinctness of formulas without having negative effects on computational complexity.
C. Lutz and M. Milicic: **A Tableau Algorithm for Description Logics with Concrete Domains and General TBoxes**. *Journal of Automated Reasoning. Special Issue on Automated Reasoning with Analytic Tableaux and Related Methods*, 2006. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

To use description logics (DLs) in an application, it is crucial to identify a DL that is sufficiently expressive to represent the relevant notions of the application domain, but for which reasoning is still decidable. Two means of expressivity that are required by many modern applications of DLs are concrete domains and general TBoxes. The former are used for defining concepts based on concrete qualities of their instances such as the weight, age, duration, and spatial extension. The purpose of the latter is to capture background knowledge by stating that the extension of a concept is included in the extension of another concept. Unfortunately, it is well- known that combining concrete domains with general TBoxes often leads to DLs for which reasoning is undecidable. In this paper, we identify a general property of concrete domains that is sufficient for proving decidability of DLs with both concrete domains and general TBoxes. We exhibit some useful concrete domains, most notably a spatial one based on the RCC-8 relations, which have this property. Then, we present a tableau algorithm for reasoning in DLs equipped with concrete domains and general TBoxes.
C. Lutz, D. Walther, and F. Wolter: **Quantitative Temporal Logics: PSpace and below**. *Information and Computation*, 205(1):99–123, 2006.

BibTeX entry
Paper (PS)

#### Abstract:

In many cases, the addition of metric operators to qualitative temporal logics (TLs) increases the complexity of satisfiability by at least one exponential: while common qualitative TLs are complete for NP or PSpace, their metric extensions are often ExpSpace-complete or even undecidable. In this paper, we exhibit several metric extensions of qualitative TLs of the real line that are at most PSpace-complete, and analyze the transition from NP to PSpace for such logics. Our first result is that the logic obtained by extending since-until logic of the real line with the operators `sometime within n time units in the past/future' is still PSpace-complete. In contrast to existing results, we also capture the case where n is coded in binary and the finite variability assumption is not made. To establish containment in PSpace, we use a novel reduction technique that can also be used to prove tight upper complexity bounds for many other metric TLs in which the numerical parameters to metric operators are coded in binary. We then consider metric TLs of the reals that do not offer any qualitative temporal operators. In such languages, the complexity turns out to depend on whether binary or unary coding of parameters is assumed: satisfiability is still PSpace-complete under binary coding, but only NP-complete under unary coding.
C. Lutz and F. Wolter: **Modal Logics of Topological Relations**. *Logical Methods in Computer Science*, 2(2), 2006.

BibTeX entry
Paper (PS)

#### Abstract:

Logical formalisms for reasoning about relations between spatial regions play a fundamental role in geographical information systems, spatial and constraint databases, and spatial reasoning in AI. In analogy with Halpern and Shoham's modal logic of time intervals based on the Allen relations, we introduce a family of modal logics equipped with eight modal operators that are interpreted by the Egenhofer-Franzosa (or RCC8) relations between regions in topological spaces such as the real plane. We investigate the expressive power and computational complexity of logics obtained in this way. It turns out that our modal logics have the same expressive power as the two-variable fragment of first-order logic, but are exponentially less succinct. The complexity ranges from (undecidable and) recursively enumerable to Pi^{1}_{1}-hard, where the recursively enumerable logics are obtained by considering substructures of structures induced by topological spaces. As our undecidability results also capture logics based on the real line, they improve upon undecidability results for interval temporal logics by Halpern and Shoham. We also analyze modal logics based on the five RCC5 relations, with similar results regarding the expressive power, but weaker results regarding the complexity.

Carsten Lutz, Franz Baader, Enrico Franconi, Domenico Lembo, Ralf Möller, Riccardo Rosati, Ulrike Sattler, Boontawee Suntisrivaraporn, and Sergio Tessaris: **Reasoning Support for Ontology Design**. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, *Proceedings of the Second International Workshop OWL: Experiences and Directions*, November 2006. To appear.

BibTeX entry
Paper (PDF)

#### Abstract:

The design of comprehensive ontologies is a serious challenge. Therefore, it is necessary to support the ontology designer by providing him with design methodologies, ontology editors, and automated reasoning tools that explicate the consequences of his design decisions. Currently, reasoning tools are largely limited to the reasoning services (i) computing the subsumption hierarchy of the classes in an ontology and (ii) determining the consistency of these classes. In this paper, we survey the most important tasks that arise in ontology design and discuss how they can be supported by automated reasoning tools. In particular, we show that it is beneficial to go beyond the usual reasoning services (i) and (ii).
Barış Sertkaya: **Computing the hierarchy of conjunctions of concept names and their negations in a Description Logic knowledge base using Formal Concept Analysis (ICFCA 2006)**. In Bernhard Ganter and Leonard Kwuida, editors, *Contributions to ICFCA 2006*, pages 73–86. Dresden, Germany, Verlag Allgemeine Wissenschaft, 2006.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

In a series of previous work, we have presented how attribute exploration can be used in the bottom-up construction of DL knowledge bases to compute a concept lattice that is isomorphic to the subsumption hierarchy of all conjunctions of concept names occurring in a knowledge base, and the negations of these concept names. This work is a continuation in the line of the previous work, that makes a step towards more efficient computation of the mentioned hierarchy. Its specific accomplishment is reducing the number of questions asked to the expert and the number of objects produced during the computation of this hierarchy, thus speeding up the computation. Despite its simple nature, the approach speeds up the computation of this hierarchy drastically.
Anni-Yasmin Turhan, Sean Bechhofer, Alissa Kaplunova, Thorsten Liebig, Marko Luther, Ralf Möller, Olaf Noppens, Peter Patel-Schneider, Boontawee Suntisrivaraporn, and Timo Weithöner: **DIG 2.0 – Towards a Flexible Interface for Description Logic Reasoners**. In Bernardo Cuenca Grau, Pascal Hitzler, Connor Shankey, and Evan Wallace, editors, *Proceedings of the Second International Workshop OWL: Experiences and Directions*, November 2006.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

The DIG Interface provides an implementation-neutral mechanism for accessing Description Logic reasoner functionality. At a high level the interface can be realised as XML messages sent to the reasoner over HTTP connections, with the reasoner responding as appropriate. Key changes in the current version (DIG 2.0) include support for OWL 1.1 and well-defined mechanisms for extensions to the basic interface.
Anni-Yasmin Turhan, Thomas Springer, and Michael Berger: **Pushing Doors for Modeling Contexts with OWL DL – a Case Study**. In Jadwiga Indulska and Daniela Nicklas, editors, *Proceedings of the Workshop on Context Modeling and Reasoning (CoMoRea'06)*. IEEE Computer Society, March 2006.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

In this paper we present an integrated view for modeling and reasoning for context applications using OWL DL. In our case study, we describe a task driven approach to model typical situations as context concepts in an OWL DL ontology. At run-time OWL individuals form situation descriptions and by use of realization we recognise a certain context. We demonstrate the feasibility of our approach by performance measurements of available highly optimised Description Logics (DL) reasoners for OWL DL.
D. Walther, C. Lutz, F. Wolter, and M. Wooldridge: **ATL is Indeed ExpTime-complete**. *Journal of Logic and Computation*, 16(6):765–787, 2006.

BibTeX entry
Paper (PDF)

#### Abstract:

The Alternating-time Temporal Logic (ATL) of Alur, Henzinger, and Kupferman is being increasingly widely applied in the specification and verification of open distributed systems and game-like multi-agent systems. In this paper, we investigate the computational complexity of the satisfiability problem for ATL. For the case where the set of agents is fixed in advance, this problem was settled at ExpTime-complete in a result of van Drimmelen. If the set of agents is not fixed in advance, then van Drimmelen's construction yields a 2ExpTime upper bound. We focus on the latter case and define three natural variations of the satisfiability problem. Although none of these variations fixes the set of agents in advance, we are able to prove containment in ExpTime for all of them by means of a type elimination construction, thus improving the existing 2ExpTime upper bound to a tight ExpTime one.

## 2005

F. Baader, S. Brandt, and C. Lutz: **Pushing the EL Envelope**. In *Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05)*. Edinburgh, UK, Morgan-Kaufmann Publishers, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

Recently, it has been shown that the small description logic (DL) EL, which allows for conjunction and existential restrictions, has better algorithmic properties than its counterpart FL0, which allows for conjunction and value restrictions. Whereas the subsumption problem in FL0 already becomes intractable in the presence of acyclic TBoxes, it remains tractable in EL even with general concept inclusion axioms (GCIs). On the one hand, we extend the positive result for EL by identifying a set of expressive means that can be added to EL without sacrificing tractability. On the other hand, we show that basically all other additions of typical DL constructors to EL with GCIs make subsumption intractable, and in most cases even ExpTime-complete. In addition, we show that subsumption in FL0 with GCIs is ExpTime-complete.
F. Baader and S. Ghilardi: **Connecting Many-Sorted Structures and Theories through Adjoint Functions**. In *Proceedings of the 5th International Workshop on Frontiers of Combining Systems (FroCoS'05)*, volume 3717 of *Lecture Notes in Artificial Intelligence*. Vienna (Austria), Springer-Verlag, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

In a previous paper, we have introduced a general approach for connecting two many-sorted theories through connection functions that behave like homomorphisms on the shared signature, and have shown that, under appropriate algebraic conditions, decidability of the validity of universal formulae in the component theories transfers to their connection. This work generalizes decidability transfer results for so-called E-connections of modal logics. However, in this general algebraic setting, only the most basic type of E-connections could be handled. In the present paper, we overcome this restriction by looking at pairs of connection functions that are adjoint pairs for partial orders defined in the component theories.
F. Baader and S. Ghilardi: **Connecting Many-Sorted Theories**. In *Proceedings of the 20th International Conference on Automated Deduction (CADE-05)*, volume 3632 of *Lecture Notes in Artificial Intelligence*, pages 278–294. Tallinn (Estonia), Springer-Verlag, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Basically, the connection of two many-sorted theories is obtained by taking their disjoint union, and then connecting the two parts through connection functions that must behave like homomorphisms on the shared signature. We determine conditions under which decidability of the validity of universal formulae in the component theories transfers to their connection. In addition, we consider variants of the basic connection scheme.
F. Baader, I. Horrocks, and U. Sattler: **Description Logics as Ontology Languages for the Semantic Web**. In D. Hutter and W. Stephan, editors, *Mechanizing Mathematical Reasoning: Essays in Honor of Jörg H. Siekmann on the Occasion of His 60th Birthday*, volume 2605 of *Lecture Notes in Artificial Intelligence*, pages 228–248. Springer-Verlag, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision reality. In this paper, we describe what description logics are and what they can do for the Semantic Web. Description logics are very useful for defining, integrating, and maintaining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. We also argue that, without the last decade of basic research in this area, description logics could not play such an important role in this domain.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: **A Description Logic Based Approach to Reasoning about Web Services**. In *Proceedings of the WWW 2005 Workshop on Web Service Semantics (WSS2005)*, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

Motivated by the need for semantically well-founded and algorithmically manageable formalisms for describing the functionality of Web services, we introduce an action formalism that is based on description logics (DLs), but is also firmly grounded on research in the reasoning about action community. Our main contribution is an analysis of how the choice of the DL influences the complexity of standard reasoning tasks such as projection and executability, which are important for Web service discovery and composition.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: **Integrating Description Logics and Action Formalisms: First Results**. In *Proceedings of the 2005 International Workshop on Description Logics (DL2005)*, number 147 in *CEUR-WS*, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

We propose an action formalism that is based on description logics (DLs) and may be viewed as an instance of the Situation Calculus (SitCalc). In particular, description logic concepts can be used for describing the state of the world, and the pre- and post-conditions of actions. The main advantage of such a combination is that, on the one hand, the expressive power for describing world states and conditions is higher than in other decidable fragments of the SitCalc, which are usually propositional. On the other hand, in contrast to the full SitCalc, effective reasoning is still possible. In this paper, we perform a detailed investigation of how the choice of the DL influences the complexity of the standard reasoning tasks executability and projection in the corresponding action formalism. We also discuss semantic and computational problems in natural extensions of our framework.
F. Baader, C. Lutz, M. Milicic, U. Sattler, and F. Wolter: **Integrating Description Logics and Action Formalisms: First Results**. In *Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05)*, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

We propose an action formalism that is based on description logics (DLs) and may be viewed as an instance of the Situation Calculus (SitCalc). In particular, description logic concepts can be used for describing the state of the world, and the pre- and post-conditions of actions. The main advantage of such a combination is that, on the one hand, the expressive power for describing world states and conditions is higher than in other decidable fragments of the SitCalc, which are usually propositional. On the other hand, in contrast to the full SitCalc, effective reasoning is still possible. In this paper, we perform a detailed investigation of how the choice of the DL influences the complexity of the standard reasoning tasks executability and projection in the corresponding action formalism. We also discuss semantic and computational problems in natural extensions of our framework.
F. Baader, C. Lutz, and B. Suntisrivaraporn: **Is Tractable Reasoning in Extensions of the Description Logic EL Useful in Practice?**. In *Proceedings of the Methods for Modalities Workshop (M4M-05)*, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Extensions of the description logic EL have recently been proposed as lightweight ontology languages. The most important feature of these extensions is that, despite including powerful expressive means such as general concept inclusion axioms, reasoning can be carried out in polynomial time. In this paper, we consider one of these extensions, EL+, and introduce a refinement of the known polynomial-time classification algorithm for this logic, which was implemented in our CEL reasoner. We describe the results of several experiments with CEL on large ontologies from practice, which show that even a relatively straightforward implementation of the described algorithm outperforms highly optimized, state-of-the-art tableau reasoners for expressive description logics.
F. Baader and A. Voronkov, editors: **11th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning LPAR 2004**. Montevideo, Uruguay, Springer-Verlag, 2005.

BibTeX entry

#### Abstract:

This volume contains the papers presented at the 11th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR), held from March 14 to 18, 2005, in Montevideo, Uruguay, together with the 5th International Workshop on the Implementation of Logics (organised by Stephan Schulz and Boris Konev) and the Workshop on Analytic Proof Systems (organised by Matthias Baaz).
Franz Baader, Carsten Lutz, Eldar Karabaev, and Manfred Theißen: **A New n-ary Existential Quantifier in Description Logics**. In *Proceedings of the 28th Annual German Conference on Artificial Intelligence, KI 2005*, volume 3698 of *Lecture Notes in Artificial Intelligence*, pages 18–33. Springer-Verlag, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Motivated by a chemical process engineering application, we introduce a new concept constructor in Description Logics (DLs), an n-ary variant of the existential restriction constructor, which generalizes both the usual existential restrictions and so-called qualified number restrictions. We show that the new constructor can be expressed in ALCQ, the extension of the basic DL ALC by qualified number restrictions. However, this representation results in an exponential blow-up. By giving direct algorithms for ALC extended with the new constructor, we can show that the complexity of reasoning in this new DL is actually not harder than the one of reasoning in ALCQ. Moreover, in our chemical process engineering application, a restricted DL that provides only the new constructor together with conjunction, and satisfies an additional restriction on the occurrence of role names, is sufficient. For this DL, the subsumption problem is polynomial.
Franz Baader, Carsten Lutz, Eldar Karabaev, and Manfred Theißen: **A New n-ary Existential Quantifier in Description Logics**. In *Proceedings of the 2005 International Workshop on Description Logics (DL2005)*, number 147 in *CEUR-WS*, 2005.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Motivated by a chemical process engineering application, we introduce a new concept constructor in Description Logics (DLs), an n-ary variant of the existential restriction constructor, which generalizes both the usual existential restrictions and so-called qualified number restrictions. We show that the new constructor can be expressed in ALCQ, the extension of the basic DL ALC by qualified number restrictions. However, this representation results in an exponential blow-up. By giving direct algorithms for ALC extended with the new constructor, we can show that the complexity of reasoning in this new DL is actually not harder than the one of reasoning in ALCQ. Moreover, in our chemical process engineering application, a restricted DL that provides only the new constructor together with conjunction, and satisfies an additional restriction on the occurrence of role names, is sufficient. For this DL, the subsumption problem is polynomial.
Sebastian Brandt and Jörg Model: **Subsumption in EL w.r.t. hybrid TBoxes**. In *Proceedings of the 28th Annual German Conference on Artificial Intelligence, KI 2005*, *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2005.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In the area of Description Logic (DL) based knowledge representation, two desirable features of DL systems have as yet been incompatible: firstly, the support of general TBoxes containing general concept inclusion (GCI) axioms, and secondly, non-standard inference services facilitating knowledge engineering tasks, such as build-up and maintenance of terminologies (TBoxes). In order to make non-standard inferences available without sacrificing the convenience of GCIs, the present paper proposes hybrid TBoxes consisting of a pair of a general TBox F interpreted by descriptive semantics, and a (possibly) cyclic TBox T interpreted by fixpoint semantics. F serves as a foundation of T in the sense that the GCIs in F define relationships between concepts used as atomic concept names in the definitions in T. Our main technical result is a polynomial time subsumption algorithm for hybrid EL-TBoxes based on a polynomial reduction to subsumption w.r.t. cyclic EL-TBoxes with fixpoint semantics. By virtue of this reduction, all non-standard inferences already available for cyclic EL-TBoxes become available for hybrid ones.
J. Hladik: **A Generator for Description Logic Formulas**. In I. Horrocks, U. Sattler, and F. Wolter, editors, *Proceedings of DL 2005*. CEUR-WS, 2005. Available from ceur-ws.org

BibTeX entry
Paper (PDF)

#### Abstract:

We introduce a schema for generating random formulas for different description logics, which extends an existing pattern for modal logics. Using the DL reasoners FaCT and RACER, we test the difficulty of these formulas, and it turns out that the properties that make a formula in an expressive DL hard are quite different from those known for ALC formulas.
M. Lange and C. Lutz: **2-ExpTime lower bounds for Propositional Dynamic Logics with intersection**. *Journal of Symbolic Logic*, 70(5):1072–1086, 2005.

BibTeX entry
Paper (PS)

#### Abstract:

In 1984, Danecki proved that satisfiability in IPDL, i.e., Propositional Dynamic Logic (PDL) extended with an intersection operator on programs, is decidable in deterministic double exponential time. Since then, the exact complexity of IPDL has remained an open problem: the best known lower bound was the ExpTime one stemming from plain PDL until, in 2004, the first author established ExpSpace-hardness. In this paper, we finally close the gap and prove that IPDL is hard for 2-ExpTime, thus 2-ExpTime-complete. We then sharpen our lower bound, showing that it even applies to IPDL without the test operator interpreted on tree structures.
C. Lutz: **PDL with Intersection and Converse is Decidable**. In *Annual Conference of the European Association for Computer Science Logic CSL'05*, *LNCS*. Springer Verlag, 2005.

BibTeX entry
Paper (PS)

#### Abstract:

In its many guises and variations, propositional dynamic logic (PDL) plays an important role in various areas of computer science such as databases, artificial intelligence, and computer linguistics. One relevant and powerful variation is ICPDL, the extension of PDL with intersection and converse. Although ICPDL has several interesting applications, its computational properties have never been investigated. In this paper, we prove that ICPDL is decidable by developing a translation to the monadic second order logic of infinite trees. Our result has applications in information logic, description logic, and epistemic logic. In particular, we solve a long-standing open problem in information logic. Another virtue of our approach is that it provides a decidability proof that is more transparent than existing ones for PDL with intersection (but without converse).
C. Lutz, C. Areces, I. Horrocks, and U. Sattler: **Keys, Nominals, and Concrete Domains**. *Journal of Artificial Intelligence Research*, 23:667–726, 2005.

BibTeX entry
Paper (PS)

#### Abstract:

Many description logics (DLs) combine knowledge representation on an abstract, logical level with an interface to "concrete" domains like numbers and strings with built-in predicates such as <, +, and prefix-of. These hybrid DLs have turned out to be useful in several application areas, such as reasoning about conceptual database models. We propose to further extend such DLs with key constraints that allow the expression of statements like "US citizens are uniquely identified by their social security number". Based on this idea, we introduce a number of natural description logics and perform a detailed analysis of their decidability and computational complexity. It turns out that naive extensions with key constraints easily lead to undecidability, whereas more careful extensions yield NExpTime-complete DLs for a variety of useful concrete domains.
C. Lutz and M. Milicic: **A Tableau Algorithm for DLs with Concrete Domains and GCIs**. In *Proceedings of the 2005 International Workshop on Description Logics (DL2005)*, number 147 in *CEUR-WS*, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

We identify a general property of concrete domains that is sufficient for proving decidability of DLs equipped with them and GCIs. We show that some useful concrete domains, such as a temporal one based on the Allen relations and a spatial one based on the RCC-8 relations, have this property. Then, we present a tableau algorithm for reasoning in DLs equipped with such concrete domains.
C. Lutz and M. Milicic: **A Tableau Algorithm for Description Logics with Concrete Domains and GCIs**. In *Proceedings of the 14th International Conference on Automated Reasoning with Analytic Tableaux and Related Methods TABLEAUX 2005*, *LNAI*. Koblenz, Germany, Springer, 2005.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

In description logics (DLs), concrete domains are used for defining concepts based on concrete qualities of their instances such as the weight, age, duration, and spatial extension. So-called general concept inclusions (GCIs) play an important role for capturing background knowledge. It is well-known that, when combining concrete domains with GCIs, reasoning easily becomes undecidable. In this paper, we identify a general property of concrete domains that is sufficient for proving decidability of DLs with both concrete domains and GCIs. We exhibit some useful concrete domains, most notably a spatial one based on the RCC-8 relations, which have this property. Then, we present a tableau algorithm for reasoning in DLs equipped with concrete domains and GCIs.
C. Lutz, U. Sattler, and L. Tendera: **The Complexity of Finite Model Reasoning in Description Logics**. *Information and Computation*, 199:132–171, 2005.

BibTeX entry
Paper (PS)

#### Abstract:

We analyse the complexity of finite model reasoning in the description logic ALCQI, i.e. ALC augmented with qualifying number restrictions, inverse roles, and general TBoxes. It turns out that all relevant reasoning tasks such as concept satisfiability and ABox consistency are Exptime-complete, regardless of whether the numbers in number restrictions are coded unarily or binarily. Thus, finite model reasoning with ALCQI is not harder than standard reasoning with ALCQI.
C. Lutz and D. Walther: **PDL with Negation of Atomic Programs**. *Journal of Applied Non-Classical Logic*, 15(2):189–214, 2005.

BibTeX entry
Paper (PS)

#### Abstract:

Propositional dynamic logic (PDL) is one of the most successful variants of modal logic. To make it even more useful for applications, many extensions of PDL have been considered in the literature. A very natural and useful such extension is with negation of programs. Unfortunately, as long known, reasoning with the resulting logic is undecidable. In this paper, we consider the extension of PDL with negation of atomic programs, only. We argue that this logic is still useful, e.g. in the context of description logics, and prove that satisfiability is decidable and ExpTime-complete using an approach based on Büchi tree automata.
C. Lutz, D. Walther, and F. Wolter: **Quantitative Temporal Logics: PSpace and below**. In *Proceedings of the Twelfth International Symposium on Temporal Representation and Reasoning*. Burlington, VT, USA, IEEE Computer Society Press, 2005.

BibTeX entry
Paper (PDF)

#### Abstract:

Often, the addition of metric operators to qualitative temporal logics leads to an increase of the complexity of satisfiability by at least one exponential. In this paper, we exhibit a number of metric extensions of qualitative temporal logics of the real line that do not lead to an increase in computational complexity. We show that the language obtained by extending since/until logic of the real line with the operators `sometime within n time units', n coded in binary, is PSpace-complete even without the finite variability assumption. Without qualitative temporal operators the complexity of this language turns out to depend on whether binary or unary coding of parameters is assumed: it is still PSpace-hard under binary coding but in NP under unary coding.
Anni-Yasmin Turhan: **Pushing the SONIC border — SONIC 1.0**. In Reinhold Letz, editor, *FTP 2005 — Fifth International Workshop on First-Order Theorem Proving*. Technical Report University of Koblenz, 2005. http://www.uni-koblenz.de/fb4/publikationen/gelbereihe/RR-13-2005.pdf

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

This paper reports on extensions of the Description Logics non-standard inference system SONIC. The recent contributions to the system are two-fold. Firstly, SONIC is extended by two new non-standard inferences, namely an implementation of the good common subsumer w.r.t. a background terminology and a heuristic for computing a minimal rewriting. Secondly, SONIC is available as a plugin for the well-known ontology editor Protégé.

## 2004

A. Artale and C. Lutz: **A Correspondence between Temporal Description Logics**. *Journal of Applied Non-Classical Logic*, 14(1–2):209–233, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

In this paper, we investigate the relationship between two decidable interval-based temporal description logics that have been proposed in the literature, TL-ALCF and ALCF(A). Although many aspects of these two logics are quite similar, the two logics suggest two rather different paradigms for representing temporal conceptual knowledge. In this paper, we exhibit a reduction from TL-ALCF concepts to ALCF(A) concepts that serves two purposes: first, it nicely illustrates the relationship between the two knowledge representation paradigms; and second, it provides a tight PSpace upper bound for TL-ALCF concept satisfiability, whose complexity was previously unknown.
F. Baader: **A Graph-Theoretic Generalization of the Least Common Subsumer and the Most Specific Concept in the Description Logic EL**. In J. Hromkovic and M. Nagl, editors, *Proceedings of the 30th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2004)*, volume 3353 of *Lecture Notes in Computer Science*, pages 177–188. Bad Honnef, Germany, Springer-Verlag, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

In two previous papers we have investigated the problem of computing the least common subsumer (lcs) and the most specific concept (msc) for the description logic EL in the presence of terminological cycles that are interpreted with descriptive semantics, which is the usual first-order semantics for description logics. In this setting, neither the lcs nor the msc needs to exist. We were able to characterize the cases in which the lcs/msc exists, but it was not clear whether this characterization yields decidability of the existence problem. In the present paper, we develop a common graph-theoretic generalization of these characterizations, and show that the resulting property is indeed decidable, thus yielding decidability of the existence of the lcs and the msc. This is achieved by expressing the property in monadic second-order logic on infinite trees. We also show that, if it exists, then the lcs/msc can be computed in polynomial time.
F. Baader, S. Ghilardi, and C. Tinelli: **A New Combination Procedure for the Word Problem that Generalizes Fusion Decidability Results in Modal Logics**. In D. Basin and M. Rusinowitch, editors, *Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04)*, volume 3097 of *Lecture Notes in Artificial Intelligence*, pages 183–197. Springer-Verlag, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Previous results for combining decision procedures for the word problem in the non-disjoint case do not apply to equational theories induced by modal logics—whose combination is not disjoint since they share the theory of Boolean algebras. Conversely, decidability results for the fusion of modal logics are strongly tailored towards the special theories at hand, and thus do not generalize to other equational theories. In this paper, we present a new approach for combining decision procedures for the word problem in the non-disjoint case that applies to equational theories induced by modal logics, but is not restricted to them. The known fusion decidability results for modal logics are instances of our approach. However, even for equational theories induced by modal logics our results are more general since they are not restricted to so-called normal modal logics.
F. Baader, I. Horrocks, and U. Sattler: **Description Logics**. In S. Staab and R. Studer, editors, *Handbook on Ontologies*, *International Handbooks in Information Systems*, pages 3–28. Berlin, Germany, Springer–Verlag, 2004.

BibTeX entry

#### Abstract:

In this chapter, we explain what description logics are and why they make good ontology languages. In particular, we introduce the description logic SHIQ, which has formed the basis of several well-known ontology languages, including OWL. We argue that, without the last decade of basic research in description logics, this family of knowledge representation languages could not have played such an important role in this context. Description logic reasoning can be used both during the design phase, in order to improve the quality of ontologies, and in the deployment phase, in order to exploit the rich structure of ontologies and ontology based information. We discuss the extensions to SHIQ that are required for languages such as OWL and, finally, we sketch how novel reasoning services can support building DL knowledge bases.
F. Baader and B. Sertkaya: **Applying Formal Concept Analysis to Description Logics**. In P. Eklund, editor, *Proceedings of the 2nd International Conference on Formal Concept Analysis (ICFCA 2004)*, volume 2961 of *Lecture Notes in Artificial Intelligence*, pages 261–286. Springer, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Given a finite set S := {C1, ..., Cn} of description logic concepts, we are interested in computing the subsumption hierarchy of all least common subsumers of subsets of S as well as the hierarchy of all conjunctions of subsets of S. These hierarchies can be used to support the bottom-up construction of description logic knowledge bases. The point is to compute the first hierarchy without having to compute the least common subsumer for all subsets of S, and the second hierarchy without having to check all possible pairs of such conjunctions explicitly for subsumption. We will show that methods from formal concept analysis developed for computing concept lattices can be employed for this purpose.
F. Baader, B. Sertkaya, and A.-Y. Turhan: **Computing the Least Common Subsumer w.r.t. a Background Terminology**. In José Júlio Alferes and João Alexandre Leite, editors, *Proceedings of the 9th European Conference on Logics in Artificial Intelligence (JELIA 2004)*, volume 3229 of *Lecture Notes in Computer Science*, pages 400–412. Lisbon, Portugal, Springer-Verlag, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive Description Logics (DLs) whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology. We will both show a theoretical result on the existence of the least common subsumer in this setting, and describe a practical approach (based on a method from formal concept analysis) for computing good common subsumers, which may, however, not be the least ones.
Franz Baader, Baris Sertkaya, and Anni-Yasmin Turhan: **Computing the Least Common Subsumer w.r.t. a Background Terminology**. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Methods for computing the least common subsumer (lcs) are usually restricted to rather inexpressive DLs whereas existing knowledge bases are written in very expressive DLs. In order to allow the user to re-use concepts defined in such terminologies and still support the definition of new concepts by computing the lcs, we extend the notion of the lcs of concept descriptions to the notion of the lcs w.r.t. a background terminology.
Sebastian Brandt: **On Subsumption and Instance Problem in ELH w.r.t. General TBoxes**. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

Recently, it was shown for the DL EL that the subsumption and the instance problem w.r.t. cyclic terminologies can be decided in polynomial time. In this paper, we show that both problems remain tractable even when admitting general concept inclusion axioms and simple role inclusion axioms.
Sebastian Brandt: **Polynomial Time Reasoning in a Description Logic with Existential Restrictions, GCI Axioms, and—What Else?**. In R. López de Mantáras and L. Saitta, editors, *Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-2004)*, pages 298–302. IOS Press, 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

In the area of Description Logic (DL) based knowledge representation, research on reasoning w.r.t. general terminologies has mainly focused on very expressive DLs. Recently, though, it was shown for the DL EL, providing only the constructors conjunction and existential restriction, that the subsumption problem w.r.t. cyclic terminologies can be decided in polynomial time, a surprisingly low upper bound. In this paper, we show that even admitting general concept inclusion (GCI) axioms and role hierarchies in EL terminologies preserves the polynomial time upper bound for subsumption. We also show that subsumption becomes co-NP hard when adding one of the constructors number restriction, disjunction, and `allsome', an operator used in the DL K-Rep. An interesting implication of the first result is that reasoning over the widely used medical terminology SNOMED is possible in polynomial time.
Sebastian Brandt and Hongkai Liu: **Implementing Matching in ALN**. In *Proceedings of the KI-2004 Workshop on Applications of Description Logics (KI-ADL'04)*, *CEUR-WS*, September 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

Although matching in Description Logics (DLs) is theoretically well-investigated, an implementation of a matching algorithm exists only for the DL ALE. The present paper presents an implementation of an existing polynomial time matching algorithm for the DL ALN. Benchmarks using randomly generated matching problems indicate a relatively good performance even on large matching problems. Nevertheless, striking differences are revealed by direct comparison between the ALN- and the ALE-algorithm w.r.t. FL(-)-matching problems.
Mitchell A. Harris and Edward R. Reingold: **Line Drawing, Leap Years, and Euclid**. *ACM Computing Surveys*, 36:68–80, 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

Bresenham's algorithm minimizes error in drawing lines on integer grid points; leap year calculations, surprisingly, are a generalization. We compare the two calculations, explicate the pattern, and discuss the connection of the leap year/line pattern with integer division and Euclid's algorithm for computing the greatest common divisor.
T. Hinze and M. Sturm: **Rechnen mit DNA - Eine Einführung in Theorie und Praxis**. ISBN 3-486-27530-5, R. Oldenbourg Wissenschaftsverlag München, 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

The book offers a comprehensive and systematic introduction to the interdisciplinary field of DNA computing, including its mathematical as well as molecular-biological foundations. At the centre of DNA computing are biological computers in which organic molecules serve as the storage medium and computational operations are carried out by suitable molecular-biological and biochemical processes in the test tube. Algorithms constructed on a DNA basis exploit massive data parallelism, which makes it possible for DNA computers to reach performance parameters that challenge comparison with known electronic computing technology. A multitude of interesting practical fields of application already exists today, and their commercialization has begun. In addition to conveying basic knowledge about DNA computing, the book presents models, methods, and techniques that prepare a realization in the laboratory. One focus is the laboratory-oriented simulation of DNA computing processes.
J. Hladik: **A Tableau System for the Description Logic SHIO**. In Ulrike Sattler, editor, *Contributions to the Doctoral Programme of IJCAR 2004*. CEUR, 2004. Available from ceur-ws.org

BibTeX entry
Paper (PS)

#### Abstract:

Tableau systems are a framework for tableau algorithms which tries to combine the advantages of tableau and automata algorithms, in particular efficiency in practice and worst-case complexity. In this paper, we present a tableau system for the expressive description logic SHIO and prove that the satisfiability problem for SHIO concepts is EXPTIME-complete. The succinctness of the proofs illustrates the usefulness of the tableau system framework.
J. Hladik: **Spinoza's Ontology**. In G. Büchel, B. Klein, and T. Roth-Berghofer, editors, *Proceedings of the 1st Workshop on Philosophy and Informatics (WSPI 2004)*, number RR-04-02 in *DFKI Research Reports*, 2004.

BibTeX entry
Paper (PDF)

#### Abstract:

We examine the possibility of applying knowledge representation and automated reasoning in the context of philosophical ontology. For this purpose, we use the axioms and propositions in the first book of Spinoza's Ethics as knowledge base and a tableau-based satisfiability tester as reasoner. We are able to reconstruct most of Spinoza's system with formal logic, but this requires additional axioms which are assumed implicitly by Spinoza. This study illustrates how tools developed in computer science can be of practical use for philosophy.
J. Hladik and J. Model: **Tableau Systems for SHIO and SHIQ**. In V. Haarslev and R. Möller, editors, *Proceedings of the 2004 International Workshop on Description Logics (DL 2004)*. CEUR, 2004. Available from ceur-ws.org

BibTeX entry
Paper (PDF)

#### Abstract:

Tableau systems are a framework for tableau algorithms which tries to combine the advantages of tableau and automata algorithms, in particular efficiency in practice and worst-case complexity. In this paper, we present tableau systems for two expressive description logics, firstly the well-known SHIQ, and secondly SHIO, which has not been examined so far. The succinctness of the proofs illustrates the usefulness of the tableau system framework. As a corollary, we obtain that satisfiability of SHIO concepts is EXPTIME-complete.
E. Karabaev and C. Lutz: **Mona as a DL Reasoner**. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

We show how the Mona tool for reasoning in the monadic second order theories WS1S and WS2S can be used to obtain decision procedures for description logics. The performance of this approach is evaluated and compared to the dedicated DL reasoners FaCT and RACER.
R. Kontchakov, C. Lutz, F. Wolter, and M. Zakharyaschev: **Temporal Tableaux**. *Studia Logica*, 76(1):91–134, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

As a remedy for the bad computational behaviour of first-order temporal logic (FOTL), it has recently been proposed to restrict the application of temporal operators to formulas with at most one free variable, thereby obtaining so-called monodic fragments of FOTL. In this paper, we are concerned with constructing tableau algorithms for monodic fragments based on decidable fragments of first-order logic like the two-variable fragment or the guarded fragment. We present a general framework that shows how existing decision procedures for first-order fragments can be used for constructing a tableau algorithm for the corresponding monodic fragment of FOTL. Some example instantiations of the framework are presented.
O. Kutz, C. Lutz, F. Wolter, and M. Zakharyaschev: **E-Connections of Abstract Description Systems**. *Artificial Intelligence*, 156(1):1–73, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

Combining knowledge representation and reasoning formalisms is an important and challenging task. It is important because non-trivial AI applications often comprise different aspects of the world, thus requiring suitable combinations of available formalisms modeling each of these aspects. It is challenging because the computational behavior of the resulting hybrids is often much worse than the behavior of their components. In this paper, we propose a new combination method which is computationally robust in the sense that the combination of decidable formalisms is again decidable, and which, nonetheless, allows non-trivial interactions between the combined components. The new method, called E-connection, is defined in terms of abstract description systems (ADSs), a common generalization of description logics, many logics of time and space, as well as modal and epistemic logics. The basic idea of E-connections is that the interpretation domains of n combined systems are disjoint, and that these domains are connected by means of n-ary "link relations". We define several natural variants of E-connections and study in-depth the transfer of decidability from the component systems to their E-connections.
C. Lutz: **Combining Interval-based Temporal Reasoning with General TBoxes**. *Artificial Intelligence*, 152(2):235–274, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

While classical Description Logics (DLs) concentrate on the representation of static conceptual knowledge, there is growing interest in DLs that, additionally, allow one to capture the temporal aspects of conceptual knowledge. Such temporal DLs are based either on time points or on time intervals as the temporal primitive. Whereas point-based temporal DLs are well-investigated, this is not the case for interval-based temporal DLs: all known logics either suffer from rather limited expressive power or have undecidable reasoning problems. In particular, there exists no decidable interval-based temporal DL that provides for general TBoxes, one of the most important expressive means in modern description logics. In this paper, we define, for the first time, an interval-temporal DL that is equipped with general TBoxes and for which reasoning is decidable (and, more precisely, ExpTime-complete).
C. Lutz and M. Milicic: **Description Logics with Concrete Domains and Functional Dependencies**. In *Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-2004)*, 2004. To appear

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs) with concrete domains are a useful tool in many applications. To further enhance the expressive power of such DLs, it has been proposed to add database-style key constraints. Up to now, however, only uniqueness constraints have been considered in this context, thus neglecting the second fundamental family of key constraints: functional dependencies. In this paper, we consider the basic DL with concrete domains, extend it with functional dependencies, and analyze the impact of this extension on the decidability and complexity of reasoning. Though intuitively the expressivity of functional dependencies seems weaker than that of uniqueness constraints, we are able to show that the former have a similarly severe impact on the computational properties: reasoning is undecidable in the general case, and -complete in some slightly restricted variants of our logic.
C. Lutz and D. Walther: **PDL with Negation of Atomic Programs**. In *Proceedings of the 2nd International Joint Conference on Automated Reasoning IJCAR'04*, *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2004. To appear

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Propositional dynamic logic (PDL) is one of the most successful variants of modal logic. To make it even more useful for applications, many extensions of PDL have been considered in the literature. A very natural and useful such extension is with negation of programs. Unfortunately, it has long been known that reasoning with the resulting logic is undecidable. In this paper, we consider the extension of PDL with negation of atomic programs only. We argue that this logic is still useful, e.g. in the context of description logics, and prove that satisfiability is decidable and ExpTime-complete using an approach based on Büchi tree automata.
C. Lutz and F. Wolter: **Modal Logics of Topological Relations**. In *Proceedings of Advances in Modal Logics 2004*, 2004.

BibTeX entry
Paper (PS)

#### Abstract:

We introduce a family of modal logics that are interpreted in domains consisting of regions in topological spaces, in particular the real plane. The underlying modal language has 8 operators interpreted by the RCC8 (or Egenhofer-Franzosa) relations between regions. The following results on the expressive power and computational complexity of the resulting modal systems are obtained: they are expressively complete for the two-variable fragment of first-order logic, and are usually undecidable and often not even recursively enumerable. This also holds if we interpret our language in the class of all (finite) substructures of full region spaces. If interpreted in region spaces consisting of intervals in the real line, our results significantly extend undecidability results of Halpern and Shoham in that we prove the undecidability of interval temporal logic over the class of all substructures of all full interval structures. We also analyze modal logics based on the set of RCC5 relations, which are coarser than the RCC8 relations.
Carsten Lutz: **NExpTime-complete Description Logics with Concrete Domains**. *ACM Transactions on Computational Logic*, 5(4):669–705, 2004.

BibTeX entry

#### Abstract:

Concrete domains are an extension of Description Logics (DLs) that allows one to integrate reasoning about conceptual knowledge with reasoning about "concrete qualities" of real-world entities such as their sizes, weights, and durations. In this paper, we are concerned with the complexity of Description Logics providing for concrete domains: starting from the complexity result established in [Lutz 2002b], which states that reasoning with the basic propositionally closed DL with concrete domains ALC(D) is PSpace-complete (provided that some weak conditions are satisfied), we perform an in-depth analysis of the complexity of extensions of this logic. More precisely, we consider five natural and seemingly "harmless" extensions of ALC(D) and prove that, for all five extensions, reasoning is NExpTime-complete (again if some weak conditions are satisfied). Thus, we show that the PSpace upper bound for reasoning with ALC(D) cannot be considered robust w.r.t. extensions of the language.
Baris Sertkaya and Halit Oguztuzun: **Proof of the Basic Theorem on Concept Lattices in Isabelle/HOL**. In C. Aykanat, T. Dayar, and I. Korpeoglu, editors, *Proceedings of the 19th International Symposium on Computer and Information Sciences (ISCIS2004)*, volume 3280 of *Lecture Notes in Computer Science*, pages 976–985. Springer, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper presents a machine-checked proof of the Basic Theorem on Concept Lattices, which appears in the book "Formal Concept Analysis" by Ganter and Wille, in the Isabelle/HOL Proof Assistant. As a by-product, the underlying lattice theory by Kammüller has been extended.
Anni-Yasmin Turhan and Christian Kissig: **Sonic—Non-standard Inferences go OilEd**. In D. Basin and M. Rusinowitch, editors, *Proceedings of the 2nd International Joint Conference on Automated Reasoning (IJCAR'04)*, volume 3097 of *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

SONIC ("Simple OilEd Non-standard Inference Component") is the first prototype implementation of non-standard inferences for Description Logics usable via a graphical user interface. The contribution of our implementation is twofold: it extends an earlier implementation of the least common subsumer and of the approximation inference to number restrictions, and it offers these reasoning services via an extension of the graphical ontology editor OilEd.
Anni-Yasmin Turhan and Christian Kissig: **Sonic—System Description**. In *Proceedings of the 2004 International Workshop on Description Logics (DL2004)*, *CEUR-WS*, 2004.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

SONIC ("Simple OilEd Non-standard Inference Component") is the first prototype implementation of non-standard inferences for Description Logics usable via a graphical user interface. The contribution of our implementation is twofold: it extends an earlier implementation of the least common subsumer and of the approximation inference to number restrictions, and it offers these reasoning services via an extension of the graphical ontology editor OilEd.

## 2003

F. Baader: **Description Logic Terminology**. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, *The Description Logic Handbook: Theory, Implementation, and Applications*, pages 485–495. Cambridge University Press, 2003.

BibTeX entry

#### Abstract:

The purpose of this appendix is to introduce (in a compact manner) the syntax and semantics of the most prominent DLs occurring in this handbook. More information and explanations as well as some less familiar DLs can be found in the respective chapters. For DL constructors whose semantics cannot be described in a compact manner, we will only introduce the syntax and refer the reader to the respective chapter for the semantics. Following Chapter 2 on Basic Description Logics, we will first introduce the basic DL AL, and then describe several of its extensions. Thereby, we will also fix the notation employed in this handbook.
F. Baader, J. Hladik, C. Lutz, and F. Wolter: **From Tableaux to Automata for Description Logics**. In Moshe Vardi and Andrei Voronkov, editors, *Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003)*, volume 2850 of *Lecture Notes in Computer Science*, pages 1–32. Springer, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper investigates the relationship between automata- and tableau-based inference procedures for Description Logics. To be more precise, we develop an abstract notion of what a tableau-based algorithm is, and then show, on this abstract level, how tableau-based algorithms can be converted into automata-based algorithms. In particular, this allows us to characterize a large class of tableau-based algorithms that imply an ExpTime upper-bound for reasoning in the description logics for which such an algorithm exists.
F. Baader, R. Küsters, and F. Wolter: **Extensions to Description Logics**. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, *The Description Logic Handbook: Theory, Implementation, and Applications*, pages 219–261. Cambridge University Press, 2003.

BibTeX entry

#### Abstract:

This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; modal, epistemic, and temporal operators; probabilities and fuzzy logic; and defaults. On the other hand, it considers non-standard inference problems for Description Logics, i.e., inference problems that—unlike subsumption or instance checking—are not available in all systems, but have turned out to be useful in applications. In particular, it addresses the non-standard inference problems: least common subsumer and most specific concept; unification and matching of concepts; and rewriting.
F. Baader and W. Nutt: **Basic Description Logics**. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, *The Description Logic Handbook: Theory, Implementation, and Applications*, pages 43–95. Cambridge University Press, 2003.

BibTeX entry

#### Abstract:

This chapter provides an introduction to Description Logics as a formal language for representing knowledge and reasoning about it. It first gives a short overview of the ideas underlying Description Logics. Then it introduces syntax and semantics, covering the basic constructors that are used in systems or have been introduced in the literature, and the way these constructors can be used to build knowledge bases. Finally, it defines the typical inference problems, shows how they are interrelated, and describes different approaches for effectively solving these problems. Some of the topics that are only briefly mentioned in this chapter will be treated in more detail in subsequent chapters.
F. Baader and U. Sattler: **Description Logics with Aggregates and Concrete Domains**. *Information Systems*, 28(8):979–1004, 2003.

BibTeX entry
Paper (PS)
Free reprint

#### Abstract:

Description Logics are a family of knowledge representation formalisms well-suited for intensional reasoning about conceptual models of databases/data warehouses. We extend Description Logics with concrete domains (such as integers and rational numbers) that include aggregation functions over these domains (such as min, max, count, and sum) which are usually available in database systems. We show that the presence of aggregation functions may easily lead to undecidability of (intensional) inference problems such as satisfiability and subsumption. However, there are also extensions for which satisfiability and subsumption are decidable, and we present decision procedures for the relevant inference problems.

Franz Baader: **Computing the least common subsumer in the description logic EL w.r.t. terminological cycles with descriptive semantics**. In *Proceedings of the 11th International Conference on Conceptual Structures, ICCS 2003*, volume 2746 of *Lecture Notes in Artificial Intelligence*, pages 117–130. Springer-Verlag, 2003.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

Computing the least common subsumer (lcs) is one of the most prominent non-standard inferences in description logics. Baader, Küsters, and Molitor have shown that the lcs of concept descriptions in the description logic EL always exists and can be computed in polynomial time. In the present paper, we try to extend this result from concept descriptions to concepts defined in a (possibly cyclic) EL-terminology interpreted with descriptive semantics, which is the usual first-order semantics for description logics. In this setting, the lcs need not exist. However, we are able to define possible candidates P_k (k ≥ 0) for the lcs, and can show that the lcs exists iff one of these candidates is the lcs. Since each of these candidates is a common subsumer, they can also be used to approximate the lcs even if it does not exist. In addition, we give a sufficient condition for the lcs to exist, and show that, under this condition, it can be computed in polynomial time.

Franz Baader: **Least Common Subsumers and Most Specific Concepts in a Description Logic with Existential Restrictions and Terminological Cycles**. In Georg Gottlob and Toby Walsh, editors, *Proceedings of the 18th International Joint Conference on Artificial Intelligence*, pages 319–324. Morgan Kaufmann, 2003.

BibTeX entry
Paper (PDF)

#### Abstract:

Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can support the bottom-up construction of knowledge bases in description logics. In description logics with existential restrictions, the most specific concept need not exist if one restricts the attention to concept descriptions or acyclic TBoxes. In this paper, we extend the notions lcs and msc to cyclic TBoxes. For the description logic EL (which allows for conjunctions, existential restrictions, and the top-concept), we show that the lcs and msc always exist and can be computed in polynomial time if we interpret cyclic definitions with greatest fixpoint semantics.
Franz Baader, editor: **Proceedings of the 19th International Conference on Automated Deduction CADE-19**. Miami Beach, FL, USA, Springer-Verlag, 2003.

BibTeX entry

Franz Baader: **Restricted Role-value-maps in a Description Logic with Existential Restrictions and Terminological Cycles**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PDF)

#### Abstract:

In a previous paper we have investigated subsumption in the presence of terminological cycles for the description logic EL, which allows conjunctions, existential restrictions, and the top concept, and have shown that the subsumption problem remains polynomial for all three types of semantics usually considered for cyclic definitions in description logics. In this paper we show that subsumption in EL (with or without cyclic definitions) remains polynomial even if one adds a certain restricted form of global role-value-maps to EL. In particular, this kind of role-value-maps can express transitivity of roles.
Franz Baader: **Terminological Cycles in a Description Logic with Existential Restrictions**. In Georg Gottlob and Toby Walsh, editors, *Proceedings of the 18th International Joint Conference on Artificial Intelligence*, pages 325–330. Morgan Kaufmann, 2003.

BibTeX entry

#### Abstract:

Cyclic definitions in description logics have until now been investigated only for description logics allowing for value restrictions. Even for the most basic language FL0, which allows for conjunction and value restrictions only, deciding subsumption in the presence of terminological cycles is a PSPACE-complete problem. This paper investigates subsumption in the presence of terminological cycles for the language EL, which allows for conjunction, existential restrictions, and the top-concept. In contrast to the results for FL0, subsumption in EL remains polynomial, independent of whether we use least fixpoint semantics, greatest fixpoint semantics, or descriptive semantics.
Franz Baader: **The instance problem and the most specific concept in the description logic EL w.r.t. terminological cycles with descriptive semantics**. In *Proceedings of the 26th Annual German Conference on Artificial Intelligence, KI 2003*, volume 2821 of *Lecture Notes in Artificial Intelligence*, pages 64–78. Hamburg, Germany, Springer-Verlag, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Previously, we have investigated both standard and non-standard inferences in the presence of terminological cycles for the description logic EL, which allows for conjunctions, existential restrictions, and the top concept. The present paper is concerned with two problems left open by this previous work, namely the instance problem and the problem of computing most specific concepts w.r.t. descriptive semantics, which is the usual first-order semantics for description logics. We will show that—like subsumption—the instance problem is polynomial in this context. Similar to the case of the least common subsumer, the most specific concept w.r.t. descriptive semantics need not exist, but we are able to characterize the cases in which it exists and give a decidable sufficient condition for the existence of the most specific concept. Under this condition, it can be computed in polynomial time.
Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors: **The Description Logic Handbook: Theory, Implementation, and Applications**. Cambridge University Press, 2003.

BibTeX entry

#### Abstract:

Description Logics are a family of knowledge representation languages that have been studied extensively in Artificial Intelligence over the last two decades. They are embodied in several knowledge-based systems and are used to develop various real-life applications. The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically-oriented readers, to those with more practically-oriented interests who need a sound and modern understanding of knowledge representation systems based on Description Logics. The chapters are written by some of the most prominent researchers in the field, introducing the basic technical material before taking the reader to the current state of the subject, and including comprehensive guides to the literature. In sum, the book will serve as a unique reference for the subject, and can also be used for self-study or in conjunction with Knowledge Representation and Artificial Intelligence courses.
Franz Baader, Jan Hladik, Carsten Lutz, and Frank Wolter: **From Tableaux to Automata for Description Logics**. *Fundamenta Informaticae*, 57:1–33, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

This paper investigates the relationship between automata- and tableau-based inference procedures for description logics. To be more precise, we develop an abstract notion of what a tableau-based algorithm is, and then show, on this abstract level, how tableau-based algorithms can be converted into automata-based algorithms. In particular, this allows us to characterize a large class of tableau-based algorithms that imply an ExpTime upper-bound for reasoning in the description logics for which such an algorithm exists.
Sebastian Brandt: **Implementing Matching in ALE—First Results**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Matching problems in Description Logics are theoretically well understood, with a variety of algorithms available for different DLs. Nevertheless, no implementation of a general matching algorithm exists so far. This paper presents an implementation of an existing matching algorithm for the DL ALE and shows first results on benchmarks w.r.t. randomly generated matching problems. The observed computation times show that the implementation performs well even on relatively large matching problems.
Sebastian Brandt and Anni-Yasmin Turhan: **Computing least common subsumers for FLE+**. In *Proceedings of the 2003 International Workshop on Description Logics*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

Transitive roles are important for adequate representation of knowledge in a range of applications. In this paper we present a first algorithm to compute least common subsumers in a description logic with transitive roles.
Sebastian Brandt, Anni-Yasmin Turhan, and Ralf Küsters: **Extensions of Non-standard Inferences to Description Logics with Transitive Roles**. In Moshe Vardi and Andrei Voronkov, editors, *Proceedings of the 10th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2003)*, *Lecture Notes in Computer Science*. Springer, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Description Logics (DLs) are a family of knowledge representation formalisms used for terminological reasoning. They have a wide range of applications such as medical knowledge-bases, or the semantic web. Research on DLs has been focused on the development of sound and complete inference algorithms to decide satisfiability and subsumption for increasingly expressive DLs. Non-standard inferences are a group of relatively new inference services which provide reasoning support for the building, maintaining, and deployment of DL knowledge-bases. So far, non-standard inferences are not available for very expressive DLs. In this paper we present first results on non-standard inferences for DLs with transitive roles. As a basis, we give a structural characterization of subsumption for DLs where existential and value restrictions can be imposed on transitive roles. We propose sound and complete algorithms to compute the least common subsumer (lcs).
Nachum Dershowitz and Mitchell A. Harris: **Enumerating Satisfiable Propositional Formulae**. In *Eurocomb*, 2003.

BibTeX entry
Paper (PDF)

#### Abstract:

It is known experimentally that there is a threshold for satisfiability in 3-CNF formulas around the value 4.25 for the ratio of clauses to variables. It is also known that the threshold is sharp, but that proof does not give a value for the threshold. We use purely combinatorial techniques to count the number of satisfiable Boolean formulas given in conjunctive normal form. The intention is to provide information about the relative frequency of Boolean functions with respect to statements of a given size, and to give a closed form formula for any number of variables, literals and clauses. We describe a correspondence between the syntax of propositions and the semantics of functions using a system of equations and show how to solve such a system.
J. Hladik and U. Sattler: **A Translation of Looping Alternating Automata to Description Logics**. In *Proc. of the 19th Conference on Automated Deduction (CADE-19)*, volume 2741 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

Jan Hladik: **Reasoning about Nominals with FaCT and RACER**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

We present a translation of looping alternating two-way automata into a comparatively inexpressive description logic, which is contained in SHIQ. This enables us to perform the emptiness test for a language accepted by such an automaton using the systems FaCT and RACER. We implemented our translation and performed a test using automata which accept models for ALCIO concepts, so that we can use SHIQ systems to reason about nominals. Our empirical results show, however, that the resulting knowledge bases are hard to process for both systems.
I. Horrocks and U. Sattler: **Decidability of SHIQ with Complex Role Inclusion Axioms**. In *Proc. of the International Joint Conference on Artificial Intelligence (IJCAI-2003)*. Morgan-Kaufmann Publishers, 2003.

BibTeX entry
Paper (PDF)

O. Kutz, C. Lutz, F. Wolter, and M. Zakharyaschev: **E-connections of Description Logics**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

Recently, E-connections have been proposed as a new means for connecting knowledge representation systems. We illustrate how this connection technique can be used for combining description logics, thereby surveying various extensions of the original E-connections. For all these extensions, general transfer results for concept satisfiability are given.
C. Lutz: **Description Logics with Concrete Domains—A Survey**. In *Advances in Modal Logics Volume 4*. World Scientific Publishing Co. Pte. Ltd., 2003.

BibTeX entry
Paper (PS)

#### Abstract:

Description logics (DLs) are a family of logical formalisms that were initially designed for the representation of conceptual knowledge in artificial intelligence and are closely related to modal logics. In the last two decades, DLs have been successfully applied in a wide range of interesting application areas. In most of these applications, it is important to equip DLs with expressive means that allow one to describe "concrete qualities" of real-world objects such as their weight, temperature, and spatial extension. The standard approach is to augment description logics with so-called concrete domains, which consist of a set (say, the rational numbers) and a set of n-ary predicates with a fixed extension over this set. The "interface" between the DL and the concrete domain is then provided by a new logical constructor that has, to the best of our knowledge, no counterpart in modal logics. In this paper, we give an overview of description logics with concrete domains and summarize decidability and complexity results from the literature.
C. Lutz, C. Areces, I. Horrocks, and U. Sattler: **Keys, Nominals, and Concrete Domains**. In *Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence IJCAI-03*. Acapulco, Mexico, Morgan-Kaufmann Publishers, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

Many description logics (DLs) combine knowledge representation on an abstract, logical level with an interface to "concrete" domains such as numbers and strings. We propose to extend such DLs with key constraints that allow the expression of statements like "US citizens are uniquely identified by their social security number". Based on this idea, we introduce a number of natural description logics and present (un)decidability results and tight NExpTime complexity bounds.
C. Lutz, U. Sattler, and L. Tendera: **The Complexity of Finite Model Reasoning in Description Logics**. In *Proc. of the 19th Conference on Automated Deduction (CADE-19)*, volume 2741 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

We analyze the complexity of finite model reasoning in the description logic ALCQI, i.e. ALC augmented with qualifying number restrictions, inverse roles, and general TBoxes. It turns out that all relevant reasoning tasks such as concept satisfiability and ABox consistency are ExpTime-complete, regardless of whether the numbers in number restrictions are coded unarily or binarily. Thus, finite model reasoning with ALCQI is not harder than standard reasoning with ALCQI.
C. Lutz, U. Sattler, and L. Tendera: **Finite Model Reasoning in ALCQI is ExpTime-complete**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

We analyze the complexity of finite model reasoning in the description logic ALCQI, i.e. ALC augmented with qualifying number restrictions, inverse roles, and general TBoxes. It turns out that all relevant reasoning tasks such as concept satisfiability and ABox consistency are ExpTime-complete, regardless of whether the numbers in number restrictions are coded unarily or binarily. Thus, finite model reasoning with ALCQI is not harder than standard reasoning with ALCQI.
C. Lutz, F. Wolter, and M. Zakharyaschev: **A tableau algorithm for reasoning about concepts and similarity**. In *Proceedings of the Twelfth International Conference on Automated Reasoning with Analytic Tableaux and Related Methods TABLEAUX 2003*, *LNAI*. Rome, Italy, Springer, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

We present a tableau-based decision procedure for the fusion (independent join) of the expressive description logic ALCQO and the logic MS for reasoning about distances and similarities. The resulting "hybrid" logic allows both precise and approximate representation of and reasoning about concepts. The tableau algorithm combines the existing tableaux for the components and shows that the tableau technique can be fruitfully applied to fusions of logics with nominals, the case in which no general decidability transfer results for fusions are available.
C. Lutz, F. Wolter, and M. Zakharyaschev: **Reasoning about concepts and similarity**. In *Proceedings of the 2003 International Workshop on Description Logics (DL2003)*, *CEUR-WS*, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

In many application areas, there exist concepts that are too vague to be captured by classical DL concept definitions. Based on this observation, we combine the description logic ALCQO with the logic MS for reasoning about metric spaces, and propose to use the resulting "hybrid" logic for the definition of concepts based on similarity measures: concepts can be defined by referring to (the similarity to) prototypical instances. We sketch a tableau algorithm for our logic and present an undecidability result illustrating that it can be dangerous to allow too close an interaction between the DL and MS.
U. Sattler: **Description Logics for Ontologies**. In *Proc. of the International Conference on Conceptual Structures (ICCS 2003)*, volume 2746 of *LNAI*. Springer Verlag, 2003.

BibTeX entry
Paper (PS)
©Springer-Verlag

U. Sattler, D. Calvanese, and R. Molitor: **Relationship with other Formalisms**. In Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors, *The Description Logic Handbook: Theory, Implementation, and Applications*, pages 137–177. Cambridge University Press, 2003.

BibTeX entry

#### Abstract:

In this chapter, we are concerned with the relationship between description logics and other formalisms, regardless of whether they were designed for knowledge representation issues or not. Obviously, due to the lack of space, we cannot compare each representational formalism with DLs, thus we concentrate on those that either (1) had or have a strong influence on DLs (e.g., modal logics), (2) are closely related to description logics for historical reasons (e.g., semantic networks and structured inheritance networks), or (3) have similar expressive power (e.g., semantic data models).

## 2002

C. Areces and C. Lutz: **Concrete Domains and Nominals United**. In Carlos Areces, Patrick Blackburn, Maarten Marx, and Ulrike Sattler, editors, *Proceedings of the Fourth Workshop on Hybrid Logics (HyLo'02)*, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

While the complexity of concept satisfiability in both ALCO, the basic description logic ALC enriched with nominals, and ALC(D), the extension of ALC with concrete domains, is known to be PSpace-complete, in this article we show that the combination ALCO(D) of these two logics can have an NExpTime-hard concept satisfiability problem (depending on the concrete domain D used). The proof is by a reduction of an NExpTime-complete variant of the domino problem to ALCO(D)-concept satisfiability.
F. Baader, I. Horrocks, and U. Sattler: **Description Logics for the Semantic Web**. *KI – Künstliche Intelligenz*, 4, 2002.

BibTeX entry

#### Abstract:

The vision of a Semantic Web has recently drawn considerable attention, both from academia and industry. Description Logics are often named as one of the tools that can support the Semantic Web and thus help to make this vision a reality. In this paper, we try to sketch what Description Logics are and what they can do for the Semantic Web. It turns out that Description Logics are very useful for defining ontologies, which provide the Semantic Web with a common understanding of the basic semantic concepts used to annotate Web pages. We also argue that, without the last decade of basic research in this area, Description Logics could not play such an important role in this domain.
F. Baader and R. Küsters: **Unification in a Description Logic with Inconsistency and Transitive Closure of Roles**. In I. Horrocks and S. Tessaris, editors, *Proceedings of the 2002 International Workshop on Description Logics*, 2002. See http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-53/

BibTeX entry
Paper (PS)

#### Abstract:

Unification considers concept patterns, i.e., concept descriptions with variables, and tries to make these descriptions equivalent by replacing the variables by appropriate concept descriptions. In a previous paper, we have shown that unification in FLreg, a description logic that allows for the concept constructors top concept, concept conjunction, and value restrictions as well as the role constructors union, composition, and transitive closure, is an ExpTime-complete problem and that solvable FLreg-unification problems always have least unifiers. In the present paper, we generalize these results to a DL which extends FLreg by the bottom concept. The proof strongly depends on the existence of least unifiers in FLreg.
F. Baader, C. Lutz, H. Sturm, and F. Wolter: **Fusions of Description Logics and Abstract Description Systems**. *Journal of Artificial Intelligence Research (JAIR)*, 16:1–58, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

Fusions are a simple way of combining logics. For normal modal logics, fusions have been investigated in detail. In particular, it is known that, under certain conditions, decidability transfers from the component logics to their fusion. Though description logics are closely related to modal logics, they are not necessarily normal. In addition, ABox reasoning in description logics is not covered by the results from modal logics. In this paper, we extend the decidability transfer results from normal modal logics to a large class of description logics. To cover different description logics in a uniform way, we introduce abstract description systems, which can be seen as a common generalization of description and modal logics, and show the transfer results in this general setting.
F. Baader and C. Tinelli: **Combining Decision Procedures for Positive Theories Sharing Constructors**. In S. Tison, editor, *Proceedings of the 13th International Conference on Rewriting Techniques and Applications (RTA-02)*, volume 2378 of *Lecture Notes in Computer Science*, pages 338–352. Copenhagen, Denmark, Springer-Verlag, 2002.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper addresses the following combination problem: given two equational theories E1 and E2 whose positive theories are decidable, how can one obtain a decision procedure for the positive theory of their union. For theories over disjoint signatures, this problem was solved by Baader and Schulz in 1995. This paper is a first step towards extending this result to the case of theories sharing constructors. Since there is a close connection between positive theories and unification problems, this also extends to the non-disjoint case the work on combining decision procedures for unification modulo equational theories.
F. Baader and C. Tinelli: **Deciding the Word Problem in the Union of Equational Theories**. *Information and Computation*, 178(2):346–390, 2002.

BibTeX entry
Free reprint

#### Abstract:

The main contribution of this article is a new method for combining decision procedures for the word problem in equational theories. In contrast to previous methods, it is based on transformation rules, and also applies to theories sharing "constructors."
F. Baader and A.-Y. Turhan: **On the problem of computing small representations of least common subsumers**. In *Proceedings of the 25th German Conference on Artificial Intelligence (KI 2002)*, *Lecture Notes in Artificial Intelligence*. Aachen, Germany, Springer–Verlag, 2002.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

For Description Logics with existential restrictions, the size of the least common subsumer (lcs) of concept descriptions may grow exponentially in the size of the input descriptions. The first (negative) result presented in this paper is that it is in general not possible to express the exponentially large concept description representing the lcs in a more compact way by using an appropriate (acyclic) terminology. In practice, a second and often more severe cause of complexity was the fact that concept descriptions containing concepts defined in a terminology must first be unfolded (by replacing defined names by their definition) before the known lcs algorithms could be applied. To overcome this problem, we present a modified lcs algorithm that performs lazy unfolding, and show that this algorithm works well in practice.
S. Brandt, R. Küsters, and A.-Y. Turhan: **Approximating ALCN-Concept Descriptions**. In *Proceedings of the 2002 International Workshop on Description Logics*, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

Approximating a concept, defined in one DL, means to translate this concept to another concept, defined in a second typically less expressive DL, such that both concepts are as closely related as possible with respect to subsumption. In a previous work, we have provided an algorithm for approximating ALC-concept descriptions by ALE-concept descriptions. In the present paper, motivated by an application in chemical process engineering, we extend this result by taking number restrictions into account.
S. Brandt, R. Küsters, and A.-Y. Turhan: **Approximation and Difference in Description Logics**. In D. Fensel, F. Giunchiglia, D. McGuinness, and M.-A. Williams, editors, *Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002)*, pages 203–214. San Francisco, CA, Morgan Kaufmann, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

Approximation is a new inference service in Description Logics first mentioned by Baader, Küsters, and Molitor. Approximating a concept, defined in one Description Logic, means to translate this concept to another concept, defined in a second typically less expressive Description Logic, such that both concepts are as closely related as possible with respect to subsumption. The present paper provides the first in-depth investigation of this inference task. We prove that approximations from the Description Logic ALC to ALE always exist and propose an algorithm computing them. As a measure for the accuracy of the approximation, we introduce a syntax-oriented difference operator, which yields a concept that contains all aspects of the approximated concept that are not present in the approximation. It is also argued that a purely semantical difference operator, as introduced by Teege, is less suited for this purpose. Finally, for the logics under consideration, we propose an algorithm computing the difference.
S. Brandt and A.-Y. Turhan: **An Approach for Optimized Approximation**. In *Proceedings of the KI-2002 Workshop on Applications of Description Logics (KIDLWS'01)*, *CEUR-WS*. Aachen, Germany, RWTH Aachen, September 2002. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/

BibTeX entry
Paper (PS)

#### Abstract:

Approximation is a new inference service investigated in [BKT-KR-02]. An approximation of an ALC-concept by an ALE-concept can be computed in double exponential time. Consequently, one needs powerful optimization techniques for approximating an entire unfoldable TBox. Addressing this issue we identify a special form of ALC-concepts that can be divided into parts s.t. each part can be approximated independently.
S. Demri and U. Sattler: **Automata-Theoretic Decision Procedures for Information Logics**. *Fundamenta Informaticae*, 53(1):1–22, 2002.

BibTeX entry
Paper (PS)

T. Hinze: **Universelle Modelle und ausgewählte Algorithmen des DNA-Computing**. Technische Universität Dresden, 2002.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

This thesis examines the research field of DNA computing primarily from the perspective of computability theory. Universal as well as space-bounded universal models of DNA computing, whose DNA-based data rest on the representation of linear DNA, are investigated, classified, and applied as description systems for algorithms. With the TT6-EH system and the simulation system Sisyphus, two universal DNA computing models are introduced whose properties are oriented towards laboratory practice. The TT6-EH system is a distributed splicing system with finitely many components, characterized by a static system architecture, a minimization of the resources involved in processing, and a low level of abstraction of the model operations. The simulation system Sisyphus takes into account side effects of the molecular biological processes underlying the model operations. In addition, the model has the properties "restrictive" and "multiset-based". A practical laboratory verification was carried out using a problem instance of the NP-complete knapsack problem.
T. Hinze, U. Hatnik, and M. Sturm: **An Object Oriented Simulation of Real Occurring Molecular Biological Processes for DNA Computing and Its Experimental Verification**. In N. Jonoska and N.C. Seeman, editors, *DNA Computing. Proceedings Seventh International Workshop on DNA-Based Computers (DNA7) Tampa, FL, USA, 2001*, volume 2340 of *Series Lecture Notes in Computer Science*. Springer Verlag, 2002.

BibTeX entry
Paper (PDF)
Paper (PS)
©Springer-Verlag

#### Abstract:

We present a simulation tool for frequently used DNA operations on the molecular level including side effects based on a probabilistic approach. The specification of the considered operations is directly adapted from detailed observations of molecular biological processes in laboratory studies. Bridging the gap between formal models of DNA computing, we use process description methods from biochemistry and show the closeness of the simulation to the reality.
J. Hladik: **Implementation and Optimisation of a Tableau Algorithm for the Guarded Fragment**. In U. Egly and C. G. Fermüller, editors, *Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2002)*, volume 2381 of *Lecture Notes in Artificial Intelligence*. Springer-Verlag, 2002.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

In this paper, we present SAGA, the implementation of a tableau-based Satisfiability Algorithm for the Guarded Fragment (GF). Satisfiability for GF with finite signature is Exptime-complete and therefore theoretically intractable, but existing tableau-based systems for Exptime-complete description and modal logics perform well for many realistic knowledge bases. We implemented and evaluated several optimisations used in description logic systems, and our results show that with an efficient combination, SAGA can compete with existing highly optimised systems for description logics and first order logic.
J. Hladik: **Implementation and evaluation of a tableau algorithm for the Guarded Fragment**. In I. Horrocks and S. Tessaris, editors, *Proceedings of the 2002 international workshop on description Logics (DL 2002)*, volume 53 of *CEUR*, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

In this paper we present SAGA, an implementation of a tableau-based Satisfiability Algorithm for the Guarded Fragment (GF). Satisfiability for GF with finite signature is ExpTime-complete and therefore intractable in the worst case, but existing tableau-based systems for ExpTime-complete description and modal logics perform reasonably well for ``realistic'' knowledge bases. We implemented and evaluated several optimizations used in description logic systems, and our results show that, with an efficient combination, SAGA can compete with existing highly optimized systems for description logics.
I. Horrocks and U. Sattler: **Optimised Reasoning for SHIQ**. In *Proc. of the 15th European Conference on Artificial Intelligence*, 2002.

BibTeX entry
Paper (PDF)
Paper (PS)

O. Kupferman, U. Sattler, and M. Y. Vardi: **The Complexity of the Graded mu-Calculus**. In *Proceedings of the Conference on Automated Deduction*, volume 2392 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2002.

BibTeX entry
Paper (PS)
©Springer-Verlag

C. Lutz: **Adding Numbers to the SHIQ Description Logic—First Results**. In *Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002)*. Morgan Kaufmann, 2002. To appear

BibTeX entry
Paper (PS)

#### Abstract:

Recently, the Description Logic (DL) SHIQ has found a large number of applications. This success is due to the fact that SHIQ combines a rich expressivity with efficient reasoning, as is demonstrated by its implementation in DL systems such as FaCT and RACER. One weakness of SHIQ, however, limits its usability in several application areas: numerical knowledge such as knowledge about the age, weight, or temperature of real-world entities cannot be adequately represented. In this paper, we propose an extension of SHIQ that aims at closing this gap. The new Description Logic Q-SHIQ, which augments SHIQ by additional, "concrete domain" style concept constructors, allows one to refer to rational numbers in concept descriptions, and also to define concepts based on the comparison of numbers via predicates such as "<" and "=". We argue that this kind of expressivity is needed in many application areas such as reasoning about the semantic web. We prove reasoning with Q-SHIQ to be ExpTime-complete (thus not harder than reasoning with SHIQ) by devising an automata-based decision procedure.
C. Lutz: **Description Logics with Concrete Domains—A Survey**. In *Advances in Modal Logic 2002 (AiML 2002)*, 2002. Final version appeared in Advanced in Modal Logic Volume 4, 2003.

BibTeX entry
Paper (PS)

#### Abstract:

Description logics (DLs) are a family of logical formalisms that were initially designed for the representation of conceptual knowledge in artificial intelligence and are closely related to modal logics. In the last two decades, DLs have been successfully applied in a wide range of interesting application areas. In most of these applications, it is important to equip DLs with expressive means that allow one to describe ``concrete qualities'' of real-world objects such as their weight, temperature, and spatial extension. The standard approach is to augment description logics with so-called concrete domains, which consist of a set (say, the rational numbers) and a set of n-ary predicates with a fixed extension over this set. The ``interface'' between the DL and the concrete domain is then provided by a new logical constructor that has, to the best of our knowledge, no counterpart in modal logics. In this paper, we give an overview of description logics with concrete domains and summarize decidability and complexity results from the literature.
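
To make the concrete-domain constructor tangible, here is a small illustrative concept (our own example, not taken from the survey), written in the usual ALC(D) notation over the concrete domain of rational numbers with a unary predicate $<_{18}$ and a feature $\mathsf{age}$: a person all of whose children are younger than 18.

```latex
\mathsf{Person} \sqcap \forall \mathsf{child}.\,\exists \mathsf{age}.{<_{18}}
```

The subconcept $\exists \mathsf{age}.{<_{18}}$ is the new constructor: it requires the value of the feature $\mathsf{age}$ to satisfy the concrete-domain predicate $<_{18}$.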
C. Lutz: **Reasoning about Entity Relationship Diagrams with Complex Attribute Dependencies**. In *Proceedings of the 2002 International Workshop on Description Logics*, 2002. To appear

BibTeX entry
Paper (PS)

#### Abstract:

Entity Relationship (ER) diagrams are among the most popular formalisms for the support of database design. To aid database designers in building (extended) ER schemas, Description Logics (DLs) have been proposed and successfully used as a tool for reasoning about such schemas. In this paper, we propose the extension of ER diagrams with dependencies on attributes and show how such dependencies can be translated into DLs with concrete domains. The result is an integrated approach to reasoning with conceptual models and attribute dependencies.
C. Lutz: **PSpace Reasoning with the Description Logic ALCF(D)**. *Logic Journal of the IGPL*, 10(5):535–568, 2002.

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs), a family of formalisms for reasoning about conceptual knowledge, can be extended with concrete domains to allow an adequate representation of "concrete qualities" of real-world entities such as their height, temperature, duration, and size. In this paper, we study the complexity of reasoning with the basic DL with concrete domains ALC(D) and its extension with so-called feature agreements and disagreements ALCF(D). We show that, for both logics, the standard reasoning tasks concept satisfiability, concept subsumption, and ABox consistency are PSpace-complete if the concrete domain D satisfies some natural conditions.
C. Lutz and U. Sattler: **A Proposal for Describing Services with DLs**. In *Proceedings of the 2002 International Workshop on Description Logics*, 2002. To appear

BibTeX entry
Paper (PS)

#### Abstract:

Motivated by semantic web applications, we present a generic extension of description logics for describing actions. These actions can then be chained together to form service descriptions. A web page providing a service can be annotated with a description of this service, which can then be taken into account by agents searching for a web service. Besides defining syntax and semantics of this extension of DLs, we introduce and discuss inference problems that are useful for annotating web pages with a description of the service they provide.
C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: **A Tableau Decision Algorithm for Modalized ALC with Constant Domains**. *Studia Logica*, 72(2):199–232, 2002.

BibTeX entry
Paper (PDF)

#### Abstract:

The aim of this paper is to construct a tableau decision algorithm for the modal description logic K/ALC with constant domains. More precisely, we present a tableau procedure that is capable of deciding, given an ALC-formula x with extra modal operators (which are applied only to concepts and TBox axioms, but not to roles), whether x is satisfiable in a model with constant domains and arbitrary accessibility relations. Tableau-based algorithms have been shown to be `practical' even for logics of rather high complexity. This gives us grounds to believe that, although the satisfiability problem for K/ALC is known to be NEXPTIME-complete, by providing a tableau decision algorithm we demonstrate that highly expressive description logics with modal operators have a chance to be implementable. The paper gives a solution to an open problem of Baader and Laux.
G. Pan, U. Sattler, and M. Y. Vardi: **BDD-Based Decision Procedures for K**. In *Proceedings of the Conference on Automated Deduction*, volume 2392 of *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2002.

BibTeX entry
Paper (PS)
©Springer-Verlag

## 2001

F. Baader, S. Brandt, and R. Küsters: **Matching under Side Conditions in Description Logics**. In B. Nebel, editor, *Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, IJCAI'01*, pages 213–218. Seattle, Washington, Morgan Kaufmann, 2001.

BibTeX entry

#### Abstract:

Whereas matching in Description Logics is now relatively well-investigated, there are only very few formal results on matching under additional side conditions, though these side conditions were already present in the original paper by Borgida and McGuinness introducing matching in DLs. The present paper closes this gap for sublanguages of the DL ALN.
F. Baader, G. Brewka, and Th. Eiter, editors: **KI 2001: Advances in Artificial Intelligence, Proceedings of the Joint German/Austrian Conference on AI (KI 2001)**. Vienna, Austria, Springer–Verlag, 2001.

BibTeX entry

F. Baader and R. Küsters: **Unification in a Description Logic with Transitive Closure of Roles**. In R. Nieuwenhuis and A. Voronkov, editors, *Proceedings of the 8th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2001)*, volume 2250 of *Lecture Notes in Computer Science*, pages 217–232. Havana, Cuba, Springer-Verlag, 2001.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Unification of concept descriptions was introduced by Baader and Narendran as a tool for detecting redundancies in knowledge bases. It was shown that unification in the small description logic FL0, which allows for conjunction, value restriction, and the top concept only, is already ExpTime-complete. The present paper shows that the complexity does not increase if one additionally allows for composition, union, and transitive closure of roles. It also shows that matching (which is polynomial in FL0) is PSpace-complete in the extended description logic. These results are proved via a reduction to linear equations over regular languages, which are then solved using automata. The obtained results are also of interest in formal language theory.
F. Baader and P. Narendran: **Unification of Concept Terms in Description Logics**. *J. Symbolic Computation*, 31(3):277–305, 2001.

BibTeX entry
Free reprint

#### Abstract:

Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing one to replace certain concept names by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
F. Baader and U. Sattler: **An Overview of Tableau Algorithms for Description Logics**. *Studia Logica*, 69:5–40, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful.

Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important role in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
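
As a toy illustration of the tableau technique this survey is about, the following is a minimal satisfiability checker for plain ALC concepts without TBoxes. It is only a sketch under our own assumptions: the tuple encoding of concepts (in negation normal form) and the function name are ours, not from the paper, and it implements just the standard conjunction, disjunction, existential, and value-restriction rules.

```python
# Concepts in negation normal form (NNF), encoded as nested tuples:
#   ("atom", "A"), ("not", ("atom", "A")), ("and", C, D), ("or", C, D),
#   ("exists", "r", C), ("forall", "r", C)

def satisfiable(concepts):
    """Tableau satisfiability check for a set of ALC concepts (no TBox)."""
    label = set(concepts)
    # AND-rule: saturate the label with all conjuncts
    changed = True
    while changed:
        changed = False
        for c in list(label):
            if c[0] == "and":
                for part in c[1:]:
                    if part not in label:
                        label.add(part)
                        changed = True
    # clash check: A and not A in the same label
    for c in label:
        if c[0] == "not" and c[1] in label:
            return False
    # OR-rule: branch nondeterministically on one disjunction
    for c in label:
        if c[0] == "or":
            rest = label - {c}
            return any(satisfiable(rest | {d}) for d in c[1:])
    # exists-rule: one fresh successor per existential,
    # inheriting all matching value restrictions
    for c in label:
        if c[0] == "exists":
            succ = {c[2]} | {d[2] for d in label
                             if d[0] == "forall" and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True
```

Without terminological axioms the successor labels strictly shrink, so no blocking is needed; handling TBoxes (and the number restrictions and role constructors discussed in the article) requires the more refined techniques the survey describes.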

F. Baader and K. Schulz: **Combining Constraint Solving**. In H. Comon, C. Marché, and R. Treinen, editors, *Constraints in Computational Logics*, volume 2002 of *Lecture Notes in Computer Science*. Springer–Verlag, 2001. See http://link.springer.de/link/service/series/0558/tocs/t2002.htm

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

In many areas of Logic, Computer Science, and Artificial Intelligence, there is a need for specialized formalisms and inference mechanisms to solve domain-specific tasks. For this reason, various methods and systems have been developed that allow for an efficient and adequate treatment of such restricted problems. In most realistic applications, however, one is faced with a complex combination of different problems, which means that a system tailored to solving a single problem can only be applied if it is possible to combine it both with other specialized systems and with general purpose systems.
F. Baader and W. Snyder: **Unification Theory**. In J.A. Robinson and A. Voronkov, editors, *Handbook of Automated Reasoning*, pages 447–533. Elsevier Science Publishers, 2001. See the handbook Web pages of Andrei Voronkov (http://www.cs.man.ac.uk/~voronkov/handbook-ar/index.html) and Elsevier (http://www.elsevier.nl/locate/isbn/0444829490).

BibTeX entry
Paper (PS)
Free reprint

#### Abstract:

This is the final version of a chapter on unification theory to appear in the Handbook of Automated Reasoning. The chapter is not intended to give a complete coverage of all the results. Instead we try to cover a number of significant topics in more detail. This should give a feeling for unification research and its methodology, provide the most important references, and enable the reader to study recent research papers on the topic.
F. Baader and S. Tobies: **The Inverse Method Implements the Automata Approach for Modal Satisfiability**. In *Proceedings of the International Joint Conference on Automated Reasoning IJCAR'01*, volume 2083 of *Lecture Notes in Artificial Intelligence*, pages 92–106. Springer-Verlag, 2001.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper ties together two distinct strands in automated reasoning: the tableau- and the automata-based approach. It shows that the inverse tableau method can be viewed as an implementation of the automata approach. This is of interest to automated deduction because Voronkov recently showed that the inverse method yields a viable decision procedure for the modal logic K.
F. Baader and A.-Y. Turhan: **TBoxes do not yield a compact representation of least common subsumers**. In *Proceedings of the International Workshop in Description Logics 2001 (DL2001)*, August 2001.

BibTeX entry
Paper (PS)

#### Abstract:

For Description Logics with existential restrictions, the size of the least common subsumer (lcs) of concept descriptions may grow exponentially in the size of the input descriptions. This paper investigates whether the possibly exponentially large concept description representing the lcs can always be represented in a more compact way when using an appropriate (acyclic) TBox for defining this description. This conjecture was supported by our experience in a chemical process engineering application. Nevertheless, it turns out that, in general, TBoxes cannot always be used to obtain a polynomial size representation of the lcs.
S. Brandt and A.-Y. Turhan: **Using Non-standard Inferences in Description Logics — what does it buy me?**. In *Proceedings of the KI-2001 Workshop on Applications of Description Logics (KIDLWS'01)*, number 44 in *CEUR-WS*. Vienna, Austria, RWTH Aachen, September 2001. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-44/

BibTeX entry
Paper (PS)

#### Abstract:

In knowledge representation systems based on Description Logics, standard inference services such as consistency, subsumption, and instance are well-investigated. In contrast, non-standard inferences like most specific concept, least common subsumer, unification, and matching are missing in most systems—or exist only as ad-hoc implementations. We give an example of how these inferences can be applied successfully in the domain of process engineering. The benefit gained in our example, however, carries over to many domains where knowledge bases are managed by persons with little expertise in knowledge engineering.
V. Haarslev, R. Möller, and A.-Y. Turhan: **Exploiting Pseudo Models for TBox and ABox Reasoning in Expressive Description Logics**. In *Proceedings of the International Joint Conference on Automated Reasoning IJCAR'01*, *LNAI*. Springer Verlag, 2001.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper investigates optimization techniques and data structures exploiting the use of so-called *pseudo models*. These techniques are applied to speed up TBox and ABox reasoning for the description logics ALCNHR+ and ALC(D). The advances are demonstrated by an empirical analysis using the description logic system RACE that implements TBox and ABox reasoning for ALCNHR+.

U. Hatnik, T. Hinze, and M. Sturm: **A Probabilistic Approach to Description of Molecular Biological Processes on DNA and Their Object Oriented Simulation**. In V.V. Kluev and N.E. Mastorakis, editors, *Proceedings WSES International Conference on Simulation (SIM2001), Malta*, 2001.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

Inspired by the potential of DNA based recombination techniques, we have analyzed processes used in DNA computing at the molecular level in laboratory studies, with the aim of specifying these processes in as much detail as possible. Based on this knowledge, we have developed a simulation tool for real occurring molecular biological processes that takes side effects into account. Side effects are described by appropriate statistical parameters. The comparison of simulation results with real observations in the laboratory shows a high degree of accordance. Using the simulation tool, prognoses about resulting DNA strands and the influence of side effects on subsequent DNA operations can be obtained. The number of strand duplicates, reflecting DNA concentrations, is considered an important factor for a detailed description of the DNA computing operations on the molecular level in the simulation. This property allows one to evaluate the quantitative balance of DNA concentrations in a test tube. The simulation covers the DNA-based reactions and processes synthesis, annealing, melting, union, ligation, digestion, labeling, polymerisation, affinity purification, and gel electrophoresis.
C. Hirsch and S. Tobies: **A Tableau Algorithm for the Clique Guarded Fragment**. In F. Wolter, H. Wansing, M. de Rijke, and M. Zakharyaschev, editors, *Advances in Modal Logics Volume 3*. Stanford, CSLI Publications, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

We describe a "modal style" tableau algorithm that decides satisfiability for the clique guarded fragment. As a corollary of constructions used to prove the correctness of the algorithm, we obtain a new proof for the generalised tree model property of the clique guarded fragment.
J. Hladik: **Implementierung eines Entscheidungsverfahrens für das Bewachte Fragment der Prädikatenlogik**. RWTH Aachen, Germany, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

In this thesis we present SAGA, an optimized implementation of a tableau algorithm for the Guarded Fragment of first order predicate logic. The empirical evaluation of this program with different sets of benchmark formulae shows that backjumping and semantic branching are crucial for most formulae, and blocking is efficient even when it is not required by the logic of the corresponding formula. Compared with other systems, the performance of SAGA is similar to that of tableau algorithms for logics in a lower complexity class.
I. Horrocks and U. Sattler: **Ontology Reasoning in the SHOQ(D) Description Logic**. In *Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence*, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

Ontologies are set to play a key role in the ``Semantic Web'' by providing a source of shared and precisely defined terms that can be used in descriptions of web resources. Reasoning over such descriptions will be essential if web resources are to be more accessible to automated processes. SHOQ(D) is an expressive description logic equipped with named individuals and concrete datatypes which has almost exactly the same expressive power as the latest web ontology languages (e.g., OIL and DAML). We present sound and complete reasoning services for this logic.
C. Lutz: **Interval-based Temporal Reasoning with General TBoxes**. In Bernhard Nebel, editor, *Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence IJCAI-01*, pages 89–94. Seattle, Washington, USA, Morgan Kaufmann Publishers, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

Until now, interval-based temporal Description Logics (DLs) admitted, if at all, only TBoxes of a very restricted form, namely acyclic macro definitions. In this paper, we present a temporal DL that overcomes this deficiency and combines interval-based temporal reasoning with general TBoxes. We argue that this combination is very interesting for many application domains. An automata-based decision procedure is devised and a tight ExpTime complexity bound is obtained. Since the presented logic can be viewed as being equipped with a concrete domain, our results can be seen from a different perspective: we show that there exist interesting concrete domains for which reasoning with general TBoxes is decidable.
C. Lutz: **NExpTime-complete Description Logics with Concrete Domains**. In Rajeev Goré, Alexander Leitsch, and Tobias Nipkow, editors, *Proceedings of the International Joint Conference on Automated Reasoning*, number 2083 in *Lecture Notes in Artificial Intelligence*, pages 45–60. Siena, Italy, Springer Verlag, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

Concrete domains are an extension of Description Logics (DLs) that makes it possible to integrate reasoning about conceptual knowledge with reasoning about ``concrete properties'' of objects such as sizes, weights, and durations. It is known that reasoning with ALC(D), the basic DL admitting concrete domains, is PSpace-complete. In this paper, it is shown that this upper bound is not robust: we give three examples of seemingly harmless extensions of ALC(D)—namely acyclic TBoxes, inverse roles, and a role-forming concrete domain constructor—that make reasoning NExpTime-hard. As a corresponding upper bound, we show that reasoning with all three extensions together is in NExpTime.
C. Lutz and U. Sattler: **The Complexity of Reasoning with Boolean Modal Logics**. In Frank Wolter, Heinrich Wansing, Maarten de Rijke, and Michael Zakharyaschev, editors, *Advances in Modal Logics Volume 3*. CSLI Publications, Stanford, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

In this paper, we investigate the complexity of reasoning with various Boolean Modal Logics. The main results are that (i) adding negation of modal parameters to (multi-modal) K makes reasoning ExpTime-complete and (ii) adding atomic negation and conjunction to K even yields a NExpTime-complete logic. The latter result is put into perspective by the fact that it depends on an infinite number of modal parameters being available. If the number of modal parameters is bounded, full Boolean Modal Logic becomes ExpTime-complete.
C. Lutz, U. Sattler, and F. Wolter: **Description Logics and the Two-Variable Fragment**. In D.L. McGuinness, P.F. Patel-Schneider, C. Goble, and R. Möller, editors, *Proceedings of the 2001 International Workshop on Description Logics (DL-2001)*, pages 66–75, 2001. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/

BibTeX entry
Paper (PS)

#### Abstract:

We present a description logic L that is as expressive as the two-variable fragment of first-order logic and differs from other logics with this property in that it encompasses solely standard role- and concept-forming operators. The description logic L is obtained from ALC by adding full Boolean operators on roles, the inverse operator on roles, and an identity role. It is proved that L has the same expressive power as the two-variable fragment FO^{2} of first-order logic by presenting a translation from FO^{2}-formulae into equivalent L-concepts (and back). Additionally, we discuss an interesting complexity phenomenon: both L and FO^{2} are NExpTime-complete, and so is the restriction of FO^{2} to finitely many relation symbols; astonishingly, the restriction of L to a bounded number of role names is in ExpTime.

C. Lutz, U. Sattler, and F. Wolter: **Modal Logics and the two-variable fragment**. In *Annual Conference of the European Association for Computer Science Logic CSL'01*, *LNCS*. Paris, France, Springer Verlag, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

We introduce a modal language L which is obtained from standard modal logic by adding the Boolean operators on accessibility relations, the identity relation, and the converse of relations. It is proved that L has the same expressive power as the two-variable fragment FO2 of first-order logic, but speaks less succinctly about relational structures: if the number of relations is bounded, then L-satisfiability is ExpTime-complete but FO2 satisfiability is NExpTime-complete. We indicate that the relation between L and FO2 provides a general framework for comparing modal and temporal languages with first-order languages.
C. Lutz, H. Sturm, F. Wolter, and M. Zakharyaschev: **Tableaux for Temporal Description Logic with Constant Domain**. In Rajeev Goré, Alexander Leitsch, and Tobias Nipkow, editors, *Proceedings of the International Joint Conference on Automated Reasoning*, number 2083 in *Lecture Notes in Artificial Intelligence*, pages 121–136. Siena, Italy, Springer Verlag, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

We show how to combine the standard tableau system for the basic description logic ALC and Wolper's tableau calculus for the propositional temporal logic PTL (with the temporal operators `next-time' and `until') in order to design a terminating, sound, and complete tableau-based satisfiability-checking algorithm for the temporal description logic PTL_{ALC} interpreted in models with constant domains. We use the method of quasimodels to represent models with infinite domains, and the technique of minimal types to keep these domains constant. The combination is flexible and can be extended to more expressive description logics or even to decidable fragments of first-order temporal logics.

U. Sattler and M. Y. Vardi: **The Hybrid mu-Calculus**. In R. Goré, A. Leitsch, and T. Nipkow, editors, *Proceedings of the International Joint Conference on Automated Reasoning*, volume 2083 of *LNAI*, pages 76–91. Springer Verlag, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

In recent years, several decision procedures for ExpTime-complete modal/description/dynamic logics were implemented and proved to behave well in practice. Due to its high expressive power, the full mu-calculus (including converse programs) is one of the ``queens'' of ExpTime modal/description/dynamic logics. However, it lacks two features important in many applications: nominals to refer to named individuals, and a universal program for the internalisation of general axioms. We present an ExpTime decision procedure for the full mu-calculus extended with nominals and a universal program, thus creating a new, more expressive ``queen'' logic. The decision procedure is based on tree automata, and makes explicit the problems caused by nominals and how to overcome them. Roughly speaking, we show how to reason in a logic lacking the tree model property using techniques for logics with the tree model property. Hence the contribution of the paper is two-fold: we extend the range of ExpTime logics an implementer can choose from, and we present a technique for reasoning in the presence of nominals.
E.P. Stoschek, M. Sturm, and T. Hinze: **DNA-Computing - ein funktionales Modell im laborpraktischen Experiment**. *Informatik Forschung und Entwicklung*, 16(1):35–52, 2001.

BibTeX entry
Paper (PDF)
©Springer-Verlag

#### Abstract:

At the center of considerations on DNA computing is the question of the opportunities and limits of this new model of computation, after rapid developments in recent years have drawn attention to the topic. Besides substantial theoretical investigations into "computing in the test tube", laboratory implementations are also being pursued. At the TU Dresden, an integer knapsack problem was solved in the laboratory by means of a DNA algorithm in an interdisciplinary effort, and a variety of molecular-biological operations were analysed in the process. Using this set of operations, a universal and laboratory-oriented model of DNA computing was achieved. The techniques and methods applied are presented and evaluated. The description of the DNA algorithm shows how individual operations can be advantageously combined into sequences of operations and, together with a suitable DNA encoding of the input data, lead to the solution of the problem in the laboratory. Natural numbers were processed here for the first time. The DNA Computing working group in Dresden concentrates on problems that combine formal models of DNA computing with convincing laboratory implementations.
M. Sturm and T. Hinze: **Distributed Splicing of RE with 6 Test Tubes**. *Romanian Journal of Information Science and Technology*, 4(1-2):211–234, 2001.

BibTeX entry
Paper (PDF)
Paper (PS)

#### Abstract:

This paper introduces a functional approach to distributed splicing systems for the generation of recursively enumerable languages with 6 test tubes. The specification of this system serves both the formal mathematical and the lab-experimental aspect. The implementation of the splicing system using a functional description of laboratory operations particularly supports the latter aspect. Advantages of this approach consist in a high degree of experimental practicability as well as in independence from certain Chomsky type-0 grammar parameters.
M. Sturm and T. Hinze: **Verfahren zur Ausführung von mathematischen Operationen mittels eines DNA-Computers und DNA-Computer hierzu**. 2001. Filed as a German patent, file number 10159886.6, German Patent Office, Munich

BibTeX entry

Stephan Tobies: **PSPACE Reasoning for Graded Modal Logics**. *Journal of Logic and Computation*, 11(1):85–106, 2001.

BibTeX entry
Paper (PS)
Free reprint

#### Abstract:

We present a PSpace algorithm that decides satisfiability of the graded modal logic Gr(K_{R}) - a natural extension of propositional modal logic K_{R} by counting expressions - which plays an important role in the area of knowledge representation. The algorithm employs a tableaux approach and is the first known algorithm which meets the lower bound for the complexity of the problem. Thus, we exactly fix the complexity of the problem and refute an ExpTime-hardness conjecture. We extend the results to the logic Gr(K_{R∩⁻¹}), which augments Gr(K_{R}) with inverse relations and intersection of accessibility relations. This establishes a kind of ``theoretical benchmark'' that all algorithmic approaches can be measured against.

A.-Y. Turhan and R. Molitor: **Using lazy unfolding for the computation of least common subsumers**. In *Proceedings of the International Workshop on Description Logics 2001 (DL2001)*, August 2001.

BibTeX entry
Paper (PS)

#### Abstract:

For description logics with existential restrictions, the size of the least common subsumer (lcs) of concept descriptions may grow exponentially in the size of the concept descriptions. To reduce the size of the output descriptions and the run-time of the lcs algorithm, we present an optimized algorithm for computing the lcs in ALE using lazy unfolding. A first evaluation comparing the performance of the naive algorithm with that of the algorithm using lazy unfolding indicates a gain in both concept sizes and run-times.

## 2000

F. Baader and R. Küsters: **Matching in Description Logics with Existential Restrictions**. In A.G. Cohn, F. Giunchiglia, and B. Selman, editors, *Proceedings of the Seventh International Conference on Knowledge Representation and Reasoning (KR2000)*, pages 261–272. San Francisco, CA, Morgan Kaufmann Publishers, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Matching of concepts against patterns is a new inference task in Description Logics, which was originally motivated by applications of the CLASSIC system. Consequently, the work on this problem was until now mostly concerned with sublanguages of the CLASSIC language, which does not allow for existential restrictions. This paper extends the existing work on matching in two directions. On the one hand, the question of what are the most ``interesting'' solutions of matching problems is explored in more detail. On the other hand, for languages with existential restrictions, both the complexity of deciding the solvability of matching problems and the complexity of actually computing sets of ``interesting'' matchers are determined. The results show that existential restrictions make these computational tasks more complex. Whereas for sublanguages of CLASSIC both problems can be solved in polynomial time, this is no longer possible for languages with existential restrictions.
F. Baader, R. Küsters, and R. Molitor: **Rewriting Concepts Using Terminologies**. In A.G. Cohn, F. Giunchiglia, and B. Selman, editors, *Proceedings of the Seventh International Conference on Knowledge Representation and Reasoning (KR2000)*, pages 297–308. San Francisco, CA, Morgan Kaufmann Publishers, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

The problem of rewriting a concept given a terminology can informally be stated as follows: given a terminology T (i.e., a set of concept definitions) and a concept description C that does not contain concept names defined in T, can this description be rewritten into a related "better" description E by using (some of) the names defined in T? In this paper, we first introduce a general framework for the rewriting problem in description logics, and then concentrate on one specific instance of the framework, namely the minimal rewriting problem (where "better" means shorter, and "related" means equivalent). We investigate the complexity of the decision problem induced by the minimal rewriting problem for the languages FL0, ALN, ALE, and ALC, and then introduce an algorithm for computing (minimal) rewritings for the language ALE. (In the full paper, a similar algorithm is also developed for ALN.) Finally, we sketch other interesting instances of the framework. Our interest for the minimal rewriting problem stems from the fact that algorithms for non-standard inferences, such as computing least common subsumers and matchers, usually produce concept descriptions not containing defined names. Consequently, these descriptions are rather large and hard to read and comprehend. First experiments in a chemical process engineering application show that rewriting can reduce the size of concept descriptions obtained as least common subsumers by almost two orders of magnitude.
F. Baader, C. Lutz, H. Sturm, and F. Wolter: **Fusions of Description Logics**. In F. Baader and U. Sattler, editors, *Proceedings of the International Workshop on Description Logics 2000 (DL2000)*, number 33 in *CEUR-WS*, pages 21–30. Aachen, Germany, RWTH Aachen, August 2000. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/

BibTeX entry
Paper (PS)

#### Abstract:

One of the major topics in Description Logic (DL) research is investigating the trade-off between the expressivity of a DL and the complexity of its inference problems. The expressiveness of a DL is usually determined by the constructors available for building concepts and roles. Given two DLs, their union is the DL that allows the unrestricted use of the constructors of both DLs. There are well-known examples that show that decidability of DLs usually does not transfer to their union.

In this paper, we consider the fusion of two DLs, which is more restrictive than the union. Intuitively, in the fusion the role names are partitioned into two sets, and the constructors of the first DL can only use role names of one set, whereas the constructors of the second DL can only use role names of the other set. We show that under certain (rather weak) conditions decidability transfers from given DLs to their fusion. More precisely, the inference problems that we consider are satisfiability/subsumption of concept descriptions as well as satisfiability/subsumption w.r.t. general inclusion axioms.

These results adapt and generalize known transfer results from modal logic to DL. In order to capture the notion of a DL formally, we introduce the notion of an abstract description system and prove our results within this new formal framework.

F. Baader and R. Molitor: **Building and Structuring Description Logic Knowledge Bases Using Least Common Subsumers and Concept Analysis**. In B. Ganter and G. Mineau, editors, *Conceptual Structures: Logical, Linguistic, and Computational Issues – Proceedings of the 8th International Conference on Conceptual Structures (ICCS2000)*, volume 1867 of *Lecture Notes in Artificial Intelligence*, pages 290–303. Springer Verlag, 2000.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Given a finite set C := {c_{1}, ..., c_{n}} of description logic concepts, we are interested in computing the subsumption hierarchy of all least common subsumers of subsets of C. This hierarchy can be used to support the bottom-up construction and the structuring of description logic knowledge bases. The point is to compute this hierarchy without having to compute the least common subsumer for all subsets of C. In this paper, we show that methods from formal concept analysis developed for computing concept lattices can be employed for this purpose.

F. Baader and U. Sattler: **Tableau Algorithms for Description Logics**. In R. Dyckhoff, editor, *Proceedings of the International Conference on Automated Reasoning with Tableaux and Related Methods (Tableaux 2000)*, volume 1847 of *Lecture Notes in Artificial Intelligence*, pages 1–18. St Andrews, Scotland, UK, Springer-Verlag, 2000.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful. Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important role in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
F. Baader and C. Tinelli: **Combining Equational Theories Sharing Non-Collapse-Free Constructors**. In H. Kirchner and Ch. Ringeissen, editors, *Proceedings of the 3rd International Workshop on Frontiers of Combining Systems (FroCoS 2000)*, volume 1794 of *Lecture Notes in Computer Science*, pages 257–271. Nancy, France, Springer-Verlag, 2000.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

In this paper we extend the applicability of our combination method for decision procedures for the word problem to theories sharing non-collapse-free constructors. This extension broadens the scope of the combination procedure considerably, for example in the direction of equational theories axiomatizing the equivalence of modal formulae.
S. Brandt: **Matching under Side Conditions in Description Logics**. RWTH Aachen, Germany, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Matching in Description Logics was originally introduced by Borgida and McGuinness in order to prune aspects of concept descriptions that are irrelevant under certain circumstances. In this context, side conditions have been proposed to avoid trivial solutions to matching problems. In this work, the computational complexity of matching algorithms is discussed for four common description logics: ALN and three of its sublanguages. Three different problems are considered: matching modulo equivalence without side conditions, the approach of eliminating acyclic subsumption conditions, and the use of fixed-point algorithms for solving matching problems under subsumption conditions. As a result, we prove that matching under subsumption conditions can be solved in polynomial time in ALN and its sublanguages.
E. Franconi, F. Baader, U. Sattler, and P. Vassiliadis: **Multidimensional Data Models and Aggregation**. In M. Jarke, M. Lenzerini, Y. Vassilious, and P. Vassiliadis, editors, *Fundamentals of Data Warehousing*, pages 87–106. Springer-Verlag, 2000.

BibTeX entry

T. Hinze and M. Sturm: **Towards an in-vitro Implementation of a Universal Distributed Splicing Model for DNA Computation**. In R. Freund, editor, *Proceedings Theorietag 2000 (TT2000) Wien*, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Emphasizing a combination of recent developments in computer science with molecular bioengineering, a special distributed splicing system (TT6) is proposed. This unconventional model of computation is distinguished by the possibility of an in-vitro implementation in the laboratory as well as by a mathematically exact description. The Dresden DNA Computation Group decided to implement such a system on biohardware and optimized all relevant model parameters and components with respect to this objective.
C. Hirsch and S. Tobies: **A Tableau Algorithm for the Clique Guarded Fragment**. In *Proceedings of the Workshop Advances in Modal Logic AiML 2000*, 2000. Final version appeared in Advances in Modal Logic Volume 3, 2001.

BibTeX entry
Paper (PS)

#### Abstract:

We develop a tableau algorithm for the Clique Guarded Fragment (CGF), which we hope can serve as the basis for an efficient implementation of a decision procedure for CGF. This hope is justified by the fact that some of the most efficient implementations of modal or description logic reasoners are based on tableau calculi similar to the one for CGF presented in this paper. As a corollary of the constructions used to prove the correctness of the tableau algorithm, we give what is, in our opinion, a simpler proof of the finite model property of the Guarded Fragment (GF). An extension of our approach to CGF is part of future work. We also give a new proof of the fact that CGF and GF have the generalised tree model property.
Jan Hladik: **Implementing the n-ary Description Logic GF1-**. In *Proceedings of the International Workshop in Description Logics 2000 (DL2000)*, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

GF1- is a description logic allowing for n-ary relations, for which satisfiability is decidable in PSPACE. In this paper, the implementation and optimization of a tableau algorithm deciding GF1- are presented, and the performance is compared with that of other solvers.
I. Horrocks, U. Sattler, S. Tessaris, and S. Tobies: **How to decide Query Containment under Constraints using a Description Logic**. In Andrei Voronkov, editor, *Proceedings of the 7th International Conference on Logic for Programming and Automated Reasoning (LPAR'2000)*, number 1955 in *Lecture Notes in Artificial Intelligence*. Springer Verlag, 2000.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

We present a procedure for deciding (database) query containment under constraints. The technique is to extend the logic DLR with an Abox, and to transform query subsumption problems into DLR Abox satisfiability problems. Such problems can then be decided, via a reification transformation, using a highly optimised reasoner for the SHIQ description logic. We use a simple example to support our hypothesis that this procedure will work well with realistic problems.
I. Horrocks, U. Sattler, S. Tessaris, and S. Tobies: **How to decide Query Containment under Constraints using a Description Logic**. In *Proceedings of the 7th International Workshop on Knowledge Representation meets Databases (KRDB-2000)*, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Query containment under constraints is the problem of determining whether the result of one query is contained in the result of another query for every database satisfying a given set of constraints. This problem is of particular importance in information integration and warehousing where, in addition to the constraints derived from the source schemas and the global schema, inter-schema constraints can be used to specify relationships between objects in different schemas. A theoretical framework for tackling this problem using the DLR logic has been established, and in this paper we show how the framework can be extended to a practical decision procedure. The proposed technique is to extend DLR with an Abox (a set of assertions about named individuals and tuples), and to transform query subsumption problems into DLR Abox satisfiability problems. We then show how such problems can be decided, via a reification transformation, using a highly optimised reasoner for the SHIQ description logic.
I. Horrocks, U. Sattler, and S. Tobies: **Practical Reasoning for Very Expressive Description Logics**. *Logic Journal of the IGPL*, 8(3):239–264, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematical. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles and functional restrictions with respect to general concept inclusion axioms and role hierarchies; early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSpace. We investigate the limits of decidability for this family of DLs, showing that relaxing the constraints placed on the kinds of roles used in number restrictions leads to the undecidability of all inference problems. Finally, we describe a number of optimisation techniques that are crucial in obtaining implementations of the decision procedures, which, despite the high worst-case complexity of the problem, exhibit good performance with real-life problems.
I. Horrocks, U. Sattler, and S. Tobies: **Reasoning with Individuals for the Description Logic SHIQ**. In David McAllester, editor, *Proceedings of the 17th International Conference on Automated Deduction (CADE-17)*, number 1831 in *Lecture Notes in Computer Science*. Germany, Springer Verlag, 2000.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

While there has been a great deal of work on the development of reasoning algorithms for expressive description logics, in most cases only Tbox reasoning is considered. In this paper we present an algorithm for combined Tbox and Abox reasoning in the SHIQ description logic. This algorithm is of particular interest as it can be used to decide the problem of (database) conjunctive query containment w.r.t. a schema. Moreover, the realisation of an efficient implementation should be relatively straightforward as it can be based on an existing highly optimised implementation of the Tbox algorithm in the FaCT system.
I. Horrocks and S. Tobies: **Optimisation of Terminological Reasoning**. In *Proceedings of the International Workshop on Description Logics 2000 (DL2000)*, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

When reasoning in description, modal or temporal logics it is often useful to consider axioms representing universal truths in the domain of discourse. Reasoning with respect to an arbitrary set of axioms is hard, even for relatively inexpressive logics, and it is essential to deal with such axioms in an efficient manner if implemented systems are to be effective in real applications. This is particularly relevant to Description Logics, where subsumption reasoning with respect to a terminology is a fundamental problem. Two optimisation techniques that have proved to be particularly effective in dealing with terminologies are lazy unfolding and absorption. In this paper we seek to improve our theoretical understanding of these important techniques. We define a formal framework that allows the techniques to be precisely described, establish conditions under which they can be safely applied, and prove that, provided these conditions are respected, subsumption testing algorithms will still function correctly. These results are used to show that the procedures used in the FaCT system are correct and, moreover, to show how efficiency can be significantly improved, while still retaining the guarantee of correctness, by relaxing the safety conditions for absorption.
I. Horrocks and S. Tobies: **Reasoning with Axioms: Theory and Practice**. In A. G. Cohn, F. Giunchiglia, and B. Selman, editors, *Principles of Knowledge Representation and Reasoning: Proceedings of the Seventh International Conference (KR2000)*. San Francisco, CA, Morgan Kaufmann Publishers, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

When reasoning in description, modal or temporal logics it is often useful to consider axioms representing universal truths in the domain of discourse. Reasoning with respect to an arbitrary set of axioms is hard, even for relatively inexpressive logics, and it is essential to deal with such axioms in an efficient manner if implemented systems are to be effective in real applications. This is particularly relevant to Description Logics, where subsumption reasoning with respect to a terminology is a fundamental problem. Two optimisation techniques that have proved to be particularly effective in dealing with terminologies are lazy unfolding and absorption. In this paper we seek to improve our theoretical understanding of these important techniques. We define a formal framework that allows the techniques to be precisely described, establish conditions under which they can be safely applied, and prove that, provided these conditions are respected, subsumption testing algorithms will still function correctly. These results are used to show that the procedures used in the FaCT system are correct and, moreover, to show how efficiency can be significantly improved, while still retaining the guarantee of correctness, by relaxing the safety conditions for absorption.
C. Lutz: **NExpTime-Complete Description Logics with Concrete Domains**. In C. Pilière, editor, *Proceedings of the ESSLLI-2000 Student Session*, August 2000.

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs) are well-suited for the representation of abstract conceptual knowledge. Concrete knowledge such as knowledge about numbers, time intervals, and spatial regions can be incorporated into DLs by using so-called concrete domains. The basic Description Logic providing concrete domains is ALC(D), which was introduced by Baader and Hanschke. Reasoning with ALC(D) concepts is known to be PSpace-complete if reasoning with the concrete domain D is in PSpace. In this paper, we consider the extension of ALC(D) with acyclic TBoxes and inverse roles and examine the computational complexity of the resulting formalism. As lower bounds, we show that there exists a concrete domain P for which reasoning is in PTime such that reasoning with ALC(P) and either of the above two extensions (separately) is NExpTime-hard. This is rather surprising since acyclic TBoxes and inverse roles are known to ``usually'' not increase the complexity of reasoning. For proving the lower bound, we introduce an NExpTime-complete variant of the Post Correspondence Problem and reduce it to the two logics under consideration. A corresponding upper bound, which states that reasoning with ALC(D) and both above extensions (together) is in NExpTime if reasoning with the concrete domain D is in NP, is proved in the accompanying technical report.
C. Lutz and U. Sattler: **Mary likes all Cats**. In F. Baader and U. Sattler, editors, *Proceedings of the 2000 International Workshop on Description Logics (DL2000)*, number 33 in *CEUR-WS*, pages 213–226. Aachen, Germany, RWTH Aachen, August 2000. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/

BibTeX entry
Paper (PS)

#### Abstract:

We investigate the complexity of ALC with boolean operators on roles. Tight complexity bounds are given for all logics between ALC with role negation and ALC with full boolean operators on accessibility relations (also considering restrictions to atomic role negation). More precisely, our main results are tight bounds for (1) ALC with role negation (which turns out to be ExpTime-complete), (2) ALC with atomic role negation and role intersection (which turns out to be NExpTime-complete, just like ALC with all boolean operators on roles). Moreover, in order to demonstrate the generality of our results, we show that the automata based techniques that were employed to obtain the upper bound for (1) can be extended to obtain the same result for ALC extended with both transitive roles and negation of roles.
C. Lutz and U. Sattler: **The Complexity of Reasoning with Boolean Modal Logic**. In *Advances in Modal Logic 2000 (AiML 2000)*, 2000. Final version appeared in Advances in Modal Logic, Volume 3, 2001.

BibTeX entry

R. Molitor and C.B. Tresp: **Extending Description Logics to Vague Knowledge in Medicine**. In P. Szczepaniak, P.J.G. Lisboa, and S. Tsumoto, editors, *Fuzzy Systems in Medicine*, volume 41 of *Studies in Fuzziness and Soft Computing*, pages 617–635. Springer Verlag, 2000.

BibTeX entry

#### Abstract:

This work introduces a concept language that is an extension of classical two-valued description logics to fuzzy logic. The new language allows one to cope with vague concepts, e.g., ``more or less enlarged liver'' or ``very small kidney'', which are crucial notions in different medical application scenarios. To realize the extension to fuzzy logic, the classical logical notions of satisfiability, entailment and subsumption have to be modified appropriately. The main contributions of this paper are sound and complete methods for computing hierarchies between fuzzy concepts as well as processing queries in the new fuzzy concept language. Furthermore, we give an introduction to a concrete medical application that makes use of the fuzzy concept formalism.
U. Sattler: **Description Logics for the Representation of Aggregated Objects**. In W. Horn, editor, *Proceedings of the 14th European Conference on Artificial Intelligence*. IOS Press, Amsterdam, 2000.

BibTeX entry
Paper (PS)

E.P. Stoschek, M. Sturm, T. Hinze, et al.: **Molekularbiologisches Verfahren zur Lösung von NP-Problemen**. 2000. Deutsches Patent DE 198 53 726 A 1, IPC C12N 15/10, Deutsches Patentamt München

BibTeX entry

Stephan Tobies: **The Complexity of Reasoning with Cardinality Restrictions and Nominals in Expressive Description Logics**. *Journal of Artificial Intelligence Research*, 12:199–217, 2000.

BibTeX entry
Paper (PS)

#### Abstract:

We study the complexity of the combination of the Description Logics ALCQ and ALCQI with a terminological formalism based on cardinality restrictions on concepts. These combinations can naturally be embedded into C^{2}, the two-variable fragment of predicate logic with counting quantifiers, which yields decidability in NExpTime. We show that this approach leads to an optimal solution for ALCQI, as ALCQI with cardinality restrictions has the same complexity as C^{2} (NExpTime-complete). In contrast, we show that for ALCQ, the problem can be solved in ExpTime. This result is obtained by a reduction of reasoning with cardinality restrictions to reasoning with the (in general weaker) terminological formalism of general axioms in the presence of nominals in the language. Using the same reduction, we show that for the extension of ALCQI with nominals, reasoning with general axioms is a NExpTime-complete problem. Finally, we sharpen this result and show that already concept satisfiability for ALCQI with nominals is NExpTime-complete. Without nominals, this problem is known to be PSpace-complete.

## 1999

Edoardo Ardizzone and Mohand-Said Hacid: **A Semantic Modeling Approach for Video Retrieval by Content**. In *Proceedings of the IEEE International Conference on Multimedia Computing and Systems, Florence, Italy*, pages 158–162. IEEE Computer Society, June 1999.

BibTeX entry
Paper (PS)

#### Abstract:

A knowledge-based approach to model and retrieve video data by content is developed. Selected objects of interest in a video sequence are described and stored in a database. This database forms the object layer. On top of this layer, we define the schema layer used to capture the structured abstractions of the objects stored in the object layer. We propose two abstract languages on the basis of description logics: one for describing the contents of these layers, and the other, more expressive, for making queries. The query language provides possibilities for navigation of the schema through forward and backward traversal of links, sub-setting of attributes, and constraints on links.
A. Artale and C. Lutz: **A Correspondence between Temporal Description Logics**. In Patrick Lambrix, Alex Borgida, Maurizio Lenzerini, Ralf Möller, and Peter Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics (DL'99)*, number 22 in *CEUR-WS*, pages 145–149. Linköping, Sweden, Linköping University, July 30 – August 1 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs) are formalisms for representing and reasoning about conceptual knowledge. There exist several extensions of DLs for an appropriate integration of temporal knowledge. This paper investigates the relation between the two DLs TL-ALCF and ALCF(D). TL-ALCF is an interval-based, temporal DL for reasoning about objects whose properties vary over time. ALCF(D) is a logic for integrated reasoning about conceptual and so-called concrete knowledge. If instantiated with a ``temporal'' concrete domain, ALCF(D) is well-suited for reasoning about temporal objects, i.e., objects which have a unique temporal extension. This paper is a first attempt to clarify the relationship between these two formalisms. It is shown that satisfiability of TL-ALCF concepts can be reduced to satisfiability of ALCF(D) concepts. This makes it possible to use the available ALCF(D) tableau calculus for reasoning with TL-ALCF. Furthermore, it settles the complexity of satisfiability of TL-ALCF concepts, which was previously unknown.
F. Baader: **Logic-Based Knowledge Representation**. In M.J. Wooldridge and M. Veloso, editors, *Artificial Intelligence Today, Recent Trends and Developments*, number 1600 in *Lecture Notes in Computer Science*, pages 13–41. Springer Verlag, 1999.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

After a short analysis of the requirements that a knowledge representation language must satisfy, we introduce Description Logics, Modal Logics, and Nonmonotonic Logics as formalisms for representing terminological knowledge, time-dependent or subjective knowledge, and incomplete knowledge respectively. At the end of each section, we briefly comment on the connection to Logic Programming.
F. Baader and R. Küsters: **Matching in Description Logics with Existential Restrictions**. In P. Lambrix, A. Borgida, M. Lenzerini, R. Möller, and P. Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics 1999 (DL'99)*, number 22 in *CEUR-WS*. Sweden, Linköping University, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

BibTeX entry
Paper (PS)

#### Abstract:

Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of description logics, originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. Previous work on this problem has produced polynomial-time matching algorithms for sublanguages of the DL used in CLASSIC. Consequently, these algorithms cannot handle existential restrictions. In this paper, we consider matching in DLs allowing for existential restrictions. We describe decision procedures that test solvability of matching problems as well as algorithms for computing complete sets of matchers. Unfortunately, these algorithms are no longer polynomial-time, even for the small language EL, which allows for the top concept, conjunction and existential restrictions.
F. Baader, R. Küsters, A. Borgida, and D. McGuinness: **Matching in Description Logics**. *Journal of Logic and Computation*, 9(3):411–447, 1999.

BibTeX entry
Free reprint

#### Abstract:

Matching concepts against patterns (concepts with variables) is a relatively new operation that has been introduced in the context of concept description languages (description logics). The original goal was to help filter out unimportant aspects of complicated concepts appearing in large industrial knowledge bases. We propose a new approach to performing matching, based on a ``concept-centered'' normal form, rather than the more standard ``structural subsumption'' normal form for concepts. As a result, matching can be performed (in polynomial time) using arbitrary concept patterns of the description language ALN, thus removing restrictions from previous work. The paper also addresses the question of matching problems with additional ``side conditions'', which were motivated by practical needs.
F. Baader, R. Küsters, and R. Molitor: **Computing Least Common Subsumers in Description Logics with Existential Restrictions**. In T. Dean, editor, *Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI'99)*, pages 96–101. Morgan Kaufmann, 1999.

BibTeX entry

#### Abstract:

Computing the least common subsumer (lcs) is an inference task that can be used to support the "bottom-up" construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for universal value restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. Our approach for computing the lcs is based on an appropriate representation of concept descriptions by certain trees, and a characterization of subsumption by homomorphisms between these trees. The lcs operation then corresponds to the product operation on trees.
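The tree-based approach described in this abstract can be sketched in a few lines of code (a minimal illustration, not the authors' implementation; the data layout and function names are our own): an EL description tree is encoded as a pair of a set of concept names and a map from role names to lists of subtrees, subsumption is tested via the homomorphism characterization, and the lcs is the product of the two trees.

```python
# Sketch of lcs computation for EL via the product of description trees.
# A description tree is a pair (concept_names, {role: [subtrees]}).
# Names and encoding are illustrative, not taken from the paper.

def product(t1, t2):
    """lcs of two EL description trees = their tree product."""
    names1, succ1 = t1
    names2, succ2 = t2
    names = names1 & names2                      # keep shared concept names
    succ = {}
    for role in succ1.keys() & succ2.keys():     # roles present in both trees
        succ[role] = [product(c1, c2)
                      for c1 in succ1[role]
                      for c2 in succ2[role]]     # pairwise products of subtrees
    return (names, succ)

def subsumes(t1, t2):
    """t1 subsumes t2 iff there is a homomorphism from t1 into t2."""
    names1, succ1 = t1
    names2, succ2 = t2
    if not names1 <= names2:
        return False
    # every role successor of t1 must map to some matching successor of t2
    return all(any(subsumes(c1, c2) for c2 in succ2.get(role, []))
               for role, children in succ1.items()
               for c1 in children)

# Example: lcs of (Person AND EXISTS child.Doctor)
#              and (Person AND EXISTS child.Lawyer)
a = (frozenset({"Person"}), {"child": [(frozenset({"Doctor"}), {})]})
b = (frozenset({"Person"}), {"child": [(frozenset({"Lawyer"}), {})]})
lcs = product(a, b)   # Person AND EXISTS child.TOP
```

In the example, the lcs comes out as Person with an unlabelled child successor, i.e., Person ⊓ ∃child.⊤, and `subsumes(lcs, a)` and `subsumes(lcs, b)` both hold, as the lcs definition requires.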
F. Baader and R. Molitor: **Rewriting Concepts Using Terminologies**. In P. Lambrix, A. Borgida, M. Lenzerini, R. Möller, and P. Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics 1999 (DL'99)*, number 22 in *CEUR-WS*. Sweden, Linköping University, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

BibTeX entry
Paper (PS)

#### Abstract:

In this work we consider the inference problem of computing (minimal) rewritings of concept descriptions using defined concepts from a terminology. We introduce a general framework for this problem and instantiate it with the small description logic FL0, which provides us with conjunction and value restrictions. We show that the decision problem induced by the minimal rewriting problem is NP-complete for FL0.
F. Baader, R. Molitor, and S. Tobies: **Tractable and Decidable Fragments of Conceptual Graphs**. In W. Cyre and W. Tepfenhart, editors, *Proceedings of the Seventh International Conference on Conceptual Structures (ICCS'99)*, number 1640 in *Lecture Notes in Computer Science*, pages 480–493. Springer Verlag, 1999.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

This paper is concerned with decidability and tractability of reasoning in conceptual graphs (CGs). It is well-known that problems like validity and subsumption of general CGs are undecidable, whereas subsumption is NP-complete for simple conceptual graphs (SGs) and tractable for the fragment of SGs that are trees. On the one hand, we will employ results on decidable fragments of first-order logic to identify a natural and expressive fragment of CGs for which validity and subsumption is decidable in deterministic exponential time. On the other hand, we will extend existing work on the connection between SGs and description logics (DLs) by identifying a DL that corresponds to the class of SGs that are trees. This yields a previously unknown tractability result for the DL in question. As a by-product, we will extend the tractability results for trees to SGs that can be transformed into trees by ``cutting cycles.''
F. Baader and U. Sattler: **Expressive Number Restrictions in Description Logics**. *Journal of Logic and Computation*, 9(3):319–350, 1999.

BibTeX entry
Paper (PS)
Free reprint

#### Abstract:

Number restrictions are concept constructors that are available in almost all implemented Description Logic systems. However, they are mostly available only in a rather weak form, which considerably restricts their expressive power. On the one hand, the roles that may occur in number restrictions are usually of a very restricted type, namely atomic roles or complex roles built using either intersection or inversion. In the present paper, we increase the expressive power of Description Logics by allowing for more complex roles in number restrictions. As role constructors, we consider composition of roles (which will be present in all our logics) and intersection, union, and inversion of roles in different combinations. We will present two decidability results (for the basic logic that extends ALC by number restrictions on roles with composition, and for one extension of this logic), and three undecidability results for three other extensions of the basic logic. On the other hand, with the rather weak form of number restrictions available in implemented systems, the number of role successors of an individual can only be restricted by a fixed non-negative integer. To overcome this lack of expressiveness, we allow for variables ranging over the non-negative integers in place of the fixed numbers in number restrictions. The expressive power of this constructor is increased even further by introducing explicit quantifiers for the numerical variables. The Description Logic obtained this way turns out to have an undecidable satisfiability problem. For a restricted logic we show that concept satisfiability is decidable.
F. Baader and C. Tinelli: **Deciding the Word Problem in the Union of Equational Theories Sharing Constructors**. In P. Narendran and M. Rusinowitch, editors, *Proceedings of the 10th International Conference on Rewriting Techniques and Applications (RTA-99)*, volume 1631 of *Lecture Notes in Computer Science*, pages 175–189. Trento, Italy, Springer-Verlag, 1999.

BibTeX entry
©Springer-Verlag

#### Abstract:

The main contribution of this paper is a new method for combining decision procedures for the word problem in equational theories sharing ``constructors.'' The notion of constructor adopted in this paper has a nice algebraic definition and is more general than a related notion introduced in previous work on the combination problem.
Franz Baader and Cesare Tinelli: **Combining Equational Theories Sharing Non-Collapse-Free Constructors**. 99-13, Department of Computer Science, University of Iowa, October 1999.

BibTeX entry
Paper (PS)

#### Abstract:

In previous work, we described a method to combine decision procedures for the word problem for theories sharing constructors. One of the requirements of our combination method is that the constructors be collapse-free. This paper removes that requirement by modifying the method so that it applies to non-collapse-free constructors as well. This broadens the scope of our combination results considerably, for example in the direction of equational theories corresponding to modal logics.
A. Borgida and R. Küsters: **What's not in a name? Initial Explorations of a Structural Approach to Integrating Large Concept Knowledge-Bases**. DCS-TR-391, Rutgers University, USA, 1999.

BibTeX entry
Paper (PS)

Cyril Decleir, Mohand-Saïd Hacid, and Jacques Kouloumdjian: **A Database Approach for Modeling and Querying Video Data**. In Masaru Kitsuregawa, Leszek Maciaszek, and Mike Papazoglou, editors, *Proceedings of the 15th International Conference on Data Engineering, Sydney, Australia*, pages 6–13. IEEE Computer Society, March 1999.

BibTeX entry
Paper (PS)

#### Abstract:

Indexing video data is essential for providing content based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle to attack many of the problems. From a video applications perspective, database systems provide a nice basis for future video systems. More generally, database research will provide solutions to many video issues even if these are partial or fragmented. From a database perspective, video applications provide beautiful challenges. Next generation database systems will need to provide support for multimedia data (e.g., image, video, audio). These data types require new techniques for their management (i.e., storing, modeling, querying, etc.). Hence new solutions are significant. This paper develops a data model and a rule-based query language for video content based indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information (i.e., symbolic descriptions) of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: (1) the entities (i.e., objects) of interest in the domain of a video sequence, and (2) the video frames which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about the information represented in the model. The language has a clear declarative and operational semantics.
E. Franconi and U. Sattler: **A Data Warehouse Conceptual Data Model for Multidimensional Aggregation**. In *Workshop on Design and Management of Data Warehouses (DMDW'99)*, June 1999.

BibTeX entry
Paper (PS)

E. Franconi and U. Sattler: **A Data Warehouse Conceptual Data Model for Multidimensional Aggregation: a preliminary report**. *AI\*IA Notizie* (journal of the Italian Association for Artificial Intelligence), 1:9–21, 1999.

BibTeX entry
Paper (PS)

V. Haarslev, C. Lutz, and R. Möller: **A Description Logic with Concrete Domains and Role-forming Predicates**. *Journal of Logic and Computation*, 9(3):351–384, 1999.

BibTeX entry
Paper (PS)
Free reprint

#### Abstract:

This article presents the description logic ALCRP(D) with concrete domains and a role-forming predicate operator as its prominent aspects. We demonstrate the feasibility of ALCRP(D) for reasoning about spatial objects and their qualitative spatial relationships and provide an appropriate concrete domain for spatial objects. The general significance of ALCRP(D) is demonstrated by adding temporal reasoning to spatial and terminological reasoning using a combined concrete domain. The theory is motivated as a basis for knowledge representation and query processing in the domain of geographic information systems. In contrast to existing work in this domain, which mainly focuses either on conceptual reasoning or on reasoning about qualitative spatial relations, we integrate reasoning about spatial information with terminological reasoning.
Mohand-Saïd Hacid and Christophe Rigotti: **Representing and Reasoning on Conceptual Queries Over Image Databases**. In Zbigniew W. Ras and Andrzej Skowron, editors, *Proceedings of the Eleventh International Symposium on Methodologies for Intelligent Systems, Warsaw, Poland*, *LNCS 1609*, pages 340–348. Springer, June 1999.

BibTeX entry
Paper (PS)

#### Abstract:

The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. In this paper we develop a knowledge-based framework for modeling and retrieving image data. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) Object Layer, which provides the (conceptual) content dimension of images; and (3) Schema Layer, which contains the structured abstractions of images. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to the use of materialized views to process and optimize queries may be extremely useful. For that, we propose a formal framework for testing containment of a query in a view expressed in our query language.
I. Horrocks and U. Sattler: **A Description Logic with Transitive and Inverse Roles and Role Hierarchies**. *Journal of Logic and Computation*, 9(3):385–410, 1999.

BibTeX entry
Free reprint

#### Abstract:

The combination of transitive and inverse roles is important in a range of applications, and is crucial for the adequate representation of aggregated objects, allowing the simultaneous description of parts by means of the whole to which they belong and of wholes by means of their constituent parts. In this paper we present tableaux algorithms for deciding concept satisfiability and subsumption in Description Logics that extend ALC with both transitive and inverse roles, a role hierarchy, and functional restrictions. In contrast to earlier algorithms for similar logics, those presented here are well-suited for implementation purposes: using transitive roles and role hierarchies in place of the transitive closure of roles enables sophisticated blocking techniques to be used in place of the cut rule, a rule whose high degree of non-determinism strongly discourages its use in an implementation. As well as promising superior computational behaviour, this new approach is shown to be sufficiently powerful to allow subsumption and satisfiability with respect to a (possibly cyclic) knowledge base to be reduced to concept subsumption and satisfiability, and to support reasoning in a Description Logic that no longer has the finite model property.
Ian Horrocks, Ulrike Sattler, and Stephan Tobies: **Practical Reasoning for Description Logics with Functional Restrictions, Inverse and Transitive Roles, and Role Hierarchies**. In *Proceedings of the 1999 Workshop Methods for Modalities (M4M-1)*, 1999.

BibTeX entry
Paper (PS)

#### Abstract:

Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematical. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles, role hierarchies, and functional restrictions; early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSPACE. Finally, we investigate the limits of decidability for this family of DLs, showing that relaxing the constraints placed on the kinds of roles used in number restrictions leads to the undecidability of all inference problems.
Ian Horrocks, Ulrike Sattler, and Stephan Tobies: **Practical Reasoning for Expressive Description Logics**. In Harald Ganzinger, David McAllester, and Andrei Voronkov, editors, *Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning (LPAR'99)*, number 1705 in *Lecture Notes in Artificial Intelligence*, pages 161–180. Springer-Verlag, September 1999.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

Description Logics (DLs) are a family of knowledge representation formalisms mainly characterised by constructors to build complex concepts and roles from atomic ones. Expressive role constructors are important in many applications, but can be computationally problematical. We present an algorithm that decides satisfiability of the DL ALC extended with transitive and inverse roles, role hierarchies, and qualifying number restrictions. Early experiments indicate that this algorithm is well-suited for implementation. Additionally, we show that ALC extended with just transitive and inverse roles is still in PSPACE. Finally, we investigate the limits of decidability for this family of DLs.
Stephan Kepser and Jörn Richts: **Optimisation Techniques for Combining Constraint Solvers**. In Dov Gabbay and Maarten de Rijke, editors, *Frontiers of Combining Systems 2, Papers presented at FroCoS'98*, pages 193–210. Amsterdam, Research Studies Press/Wiley, 1999.

BibTeX entry
Paper (PS)

#### Abstract:

In recent years, techniques that had been developed for the combination of unification algorithms for equational theories were extended to the combination of constraint solvers. These techniques inherited an old deficit that was already present in the combination of equational theories and that makes them rather unsuitable for practical use: the underlying combination algorithms are highly non-deterministic. This paper is concerned with the practical problem of how to optimise the combination method of Baader and Schulz. We present an optimisation method, called the deductive method, which uses specific algorithms for the components to reach certain decisions deterministically. We also give a strategy for selecting an order of the non-deterministic decisions. Run time tests of our implementation indicate that the optimised combination method yields combined decision procedures that are efficient enough to be used in practice.
Stephan Kepser and Jörn Richts: **UniMoK: A System for Combining Equational Unification Algorithms**. In *Rewriting Techniques and Applications, Proceedings RTA-99*, volume 1631 of *Lecture Notes in Computer Science*, pages 248–251. Springer-Verlag, 1999.

BibTeX entry

R. Küsters and A. Borgida: **What's in an Attribute? Consequences for the Least Common Subsumer**. DCS-TR-404, Rutgers University, USA, 1999.

BibTeX entry
Paper (PS)

#### Abstract:

Functional relationships between objects, called ``attributes'', are of considerable importance in knowledge representation languages, including Description Logics (DLs). A study of the literature indicates that papers have made, often implicitly, different assumptions about the nature of attributes: whether they are always required to have a value, or whether they can be partial functions. The work presented here is the first explicit study of this difference for (sub-)classes of the CLASSIC DL, involving the same-as concept constructor. It is shown that although determining subsumption between concept descriptions has the same complexity (though requiring different algorithms), the story is different in the case of determining the least common subsumer (lcs). For attributes interpreted as partial functions, the lcs exists and can be computed relatively easily; even in this case our results correct and extend three previous papers about the lcs of DLs. In the case where attributes must have a value, the lcs may not exist, and even if it exists it may be of exponential size. Interestingly, it is possible to decide in polynomial time if the lcs exists.
Ralf Küsters: **What's in a name? — First Steps Towards a Structural Approach to Integrating Large Content-based Knowledge-Bases**. In S. Abiteboul, D. Florescu, A. Levy, and G. Moerkotte, editors, *Foundations for Information Integration*, *Dagstuhl-Seminar-Report 244*, 1999. ISSN 0940-1121

BibTeX entry

#### Abstract:

We address the problem of integrating two content-based knowledge bases. It is well known that even the same slice of reality can be modeled in various ways. This ranges from single morphological variants of identifiers to the reification of relationships. Hence, human intervention is required in order to set up some kind of correspondence mapping between the knowledge bases. However, modern real-life ontologies, such as the Galen medical ontology, are so large that it does not seem feasible for humans to perform this task without computer aid. We therefore aim at exploiting structural information to help find candidate equivalent concepts from one ontology to the other. To this end, we have set up a formal framework for correspondence mappings and have investigated the limitations of structural information for finding such mappings. Finally, first algorithmic and empirical results are presented.
C. Lutz: **Complexity of Terminological Reasoning Revisited**. In *Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning LPAR'99*, *Lecture Notes in Artificial Intelligence*, pages 181–200. Springer-Verlag, September 6 – 10, 1999.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

TBoxes in their various forms are key components of knowledge representation systems based on description logics (DLs) since they allow for a natural representation of terminological knowledge. Largely due to a classical result given by Nebel, complexity analyses for DLs have, until now, mostly failed to take into account the most basic form of TBoxes, so-called acyclic TBoxes. In this paper, we concentrate on DLs for which reasoning without TBoxes is PSpace-complete, and show that there exist logics for which the complexity of reasoning remains in PSpace if acyclic TBoxes are added and also logics for which the complexity increases. This demonstrates that it is necessary to take acyclic TBoxes into account for complexity analyses.
C. Lutz: **Reasoning with Concrete Domains**. In Thomas Dean, editor, *Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence IJCAI-99*, pages 90–95. Stockholm, Sweden, Morgan-Kaufmann Publishers, July 31 – August 6, 1999.

BibTeX entry
Paper (PS)

#### Abstract:

Description logics are formalisms for the representation of and reasoning about conceptual knowledge on an abstract level. Concrete domains allow the integration of description logic reasoning with reasoning about concrete objects such as numbers, time intervals, or spatial regions. The importance of this combined approach, especially for building real-world applications, is widely accepted. However, the complexity of reasoning with concrete domains has never been formally analyzed and efficient algorithms have not been developed. This paper closes the gap by providing a tight bound for the complexity of reasoning with concrete domains and presenting optimal algorithms.
C. Lutz, U. Sattler, and S. Tobies: **A Suggestion for an n-ary Description Logic**. In Patrick Lambrix, Alex Borgida, Maurizio Lenzerini, Ralf Möller, and Peter Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics*, number 22 in *CEUR-WS*, pages 81–85. Linköping, Sweden, Linköping University, July 30 – August 1, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

BibTeX entry
Paper (PS)

#### Abstract:

A restriction most Description Logics (DLs) share with most Modal Logics is their limitation to unary and binary predicates. To our knowledge, the only DLs that overcome this restriction and allow for arbitrary n-ary relations are n-ary Kandor and the very expressive DL DLR. In the field of Modal Logics, there are two generalisations that allow for n-ary predicates: Polyadic Modal Logics and the more expressive Guarded Fragment, which was shown to be ExpTime-complete and for which a resolution-based decision procedure exists. Unfortunately, when extended by operators that are standard in DLs such as number restrictions, features, or transitive roles, this logic becomes undecidable. In this paper, we present a new DL, GF1-, that was designed to meet three goals: 1. it should allow for n-ary relations; 2. ``concept'' subsumption and satisfiability should be in PSpace; and 3. it should allow for extension with number restrictions and/or transitive roles (without losing decidability).
S. Tobies: **A NExpTime-complete Description Logic Strictly Contained in C^{2}**. In J. Flum and M. Rodríguez-Artalejo, editors, *Proceedings of the Annual Conference of the European Association for Computer Science Logic (CSL-99)*, *LNCS 1683*, pages 292–306. Springer-Verlag, 1999.

BibTeX entry
Paper (PS)
Extended technical report (PS)
©Springer-Verlag

#### Abstract:

We examine the complexity and expressivity of the combination of the Description Logic ALCQI with a terminological formalism based on cardinality restrictions on concepts. This combination can naturally be embedded into C^{2}, the two variable fragment of predicate logic with counting quantifiers. We prove that ALCQI has the same complexity as C^{2} but does not reach its expressive power.

S. Tobies: **A PSpace Algorithm for Graded Modal Logic**. In H. Ganzinger, editor, *Automated Deduction – CADE-16, 16th International Conference on Automated Deduction*, *LNAI 1632*, pages 52–66. Trento, Italy, Springer-Verlag, July 7–10, 1999.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

We present a PSPACE algorithm that decides satisfiability of the graded modal logic Gr(K_{R}), a natural extension of propositional modal logic K_{R} by counting expressions, which plays an important role in the area of knowledge representation. The algorithm employs a tableaux approach and is the first known algorithm which meets the lower bound for the complexity of the problem. Thus, we exactly fix the complexity of the problem and refute an EXPTIME-hardness conjecture. This establishes a kind of ``theoretical benchmark'' against which all algorithmic approaches can be measured.

S. Tobies: **On the Complexity of Counting in Description Logics**. In P. Lambrix, A. Borgida, M. Lenzerini, R. Möller, and P. Patel-Schneider, editors, *Proceedings of the International Workshop on Description Logics 1999 (DL'99)*, number 22 in *CEUR-WS*. Sweden, Linköping University, 1999. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-22/

BibTeX entry
Paper (PS)

#### Abstract:

Many Description Logics (DLs) allow for counting expressions of various forms that are important in many applications, e.g., for reasoning with semantic data models and for applications concerned with the configuration of technical systems. We present two novel complexity results for DLs that contain counting constructs: (1) We prove that concept satisfiability for ALCQI is decidable in PSPACE even if binary coding of numbers in the input is assumed. (2) We prove that TBox consistency for ALCQI with cardinality restrictions is NEXPTIME-complete.

## 1998

C.A. Albayrak: **Die WHILE-Hierarchie für Programmschemata**. RWTH Aachen, 1998.

BibTeX entry

Can Adam Albayrak and Thomas Noll: **The WHILE Hierarchy of Program Schemes is Infinite**. In Maurice Nivat, editor, *Proceedings of Foundations of Software Science and Computation Structures*, pages 35–47. LNCS 1378, Springer, 1998.

BibTeX entry

F. Baader: **On the Complexity of Boolean Unification**. *Information Processing Letters*, 67(4):215–220, 1998.

BibTeX entry

#### Abstract:

Unification modulo the theory of Boolean algebras has been investigated by several authors. Nevertheless, the exact complexity of the decision problem for unification with constants and general unification was not known. In this research note, we show that the decision problem is Π^{p}_{2}-complete for unification with constants and PSPACE-complete for general unification. In contrast, the decision problem for elementary unification (where the terms to be unified contain only symbols of the signature of Boolean algebras) is ``only'' NP-complete.
F. Baader, A. Borgida, and D.L. McGuinness: **Matching in Description Logics: Preliminary Results**. In M.-L. Mugnier and M. Chein, editors, *Proceedings of the Sixth International Conference on Conceptual Structures (ICCS-98)*, volume 1453 of *Lecture Notes in Computer Science*, pages 15–34. Montpellier (France), Springer–Verlag, 1998.

BibTeX entry

#### Abstract:

Matching of concepts with variables (concept patterns) is a relatively new operation that has been introduced in the context of concept description languages (description logics), originally to help filter out unimportant aspects of large concepts appearing in industrial-strength knowledge bases. This paper proposes a new approach to performing matching, based on a ``concept-centered'' normal form, rather than the more standard ``structural subsumption'' normal form for concepts. As a result, matching can be performed (in polynomial time) using arbitrary concept patterns of a description language allowing for conjunction, value restriction, and atomic negation, thus removing restrictions on the form of the patterns from previous work. The paper also addresses the question of matching problems with additional ``side conditions'', which were motivated by practical experience.
F. Baader and R. Küsters: **Computing the least common subsumer and the most specific concept in the presence of cyclic ALN-concept descriptions**. In O. Herzog and A. Günter, editors, *Proceedings of the 22nd Annual German Conference on Artificial Intelligence, KI-98*, volume 1504 of *Lecture Notes in Computer Science*, pages 129–140. Bremen, Germany, Springer–Verlag, 1998.

BibTeX entry

#### Abstract:

Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can be used to support the ``bottom up'' construction of knowledge bases for KR systems based on description logic. For the description logic ALN, the msc need not always exist if one restricts the attention to acyclic concept descriptions. In this paper, we extend the notions lcs and msc to cyclic descriptions, and show how they can be computed. Our approach is based on the automata-theoretic characterizations of fixed-point semantics for cyclic terminologies developed in previous papers.
F. Baader and R. Küsters: **Least common subsumer computation w.r.t. cyclic ALN-terminologies**. In *Proceedings of the 1998 International Workshop on Description Logics (DL'98)*, 1998.

BibTeX entry
Paper (PS)

#### Abstract:

Computing least common subsumers (lcs) and most specific concepts (msc) are inference tasks that can be used to support the ``bottom up'' construction of knowledge bases for KR systems based on description logic. For the description logic *ALN*, the msc need not always exist if one restricts the attention to acyclic concept descriptions. In this paper, we extend the notions lcs and msc to cyclic descriptions, and show how they can be computed. Our approach is based on the automata-theoretic characterizations of fixed-point semantics for cyclic terminologies developed in previous papers.

F. Baader, R. Küsters, and R. Molitor: **Structural Subsumption Considered from an Automata Theoretic Point of View**. In *Proceedings of the 1998 International Workshop on Description Logics DL'98*, 1998.

BibTeX entry
Paper (PS)

#### Abstract:

This paper compares two approaches for deriving subsumption algorithms for the description logic ALN: structural subsumption and an automata-theoretic characterization of subsumption. It turns out that structural subsumption algorithms can be seen as special implementations of the automata-theoretic characterization.
F. Baader and P. Narendran: **Unification of Concept Terms in Description Logics**. In H. Prade, editor, *Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)*, pages 331–335. John Wiley & Sons Ltd, 1998.

BibTeX entry

#### Abstract:

Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing to replace certain concept names by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
F. Baader and U. Sattler: **Description Logics with Concrete Domains and Aggregation**. In H. Prade, editor, *Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98)*, pages 336–340. John Wiley & Sons Ltd, 1998.

BibTeX entry
Paper (PS)

F. Baader and K. Schulz: **Combination of Constraint Solvers for Free and Quasi-Free Structures**. *Theoretical Computer Science*, 192:107–161, 1998.

BibTeX entry
Free reprint

#### Abstract:

When combining languages for symbolic constraints, one is typically faced with the problem of how to treat ``mixed'' constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a ``free amalgamated product'' as a possible solution to the first problem. We define so-called quasi-free structures (called ``strong simply-combinable structures'' in a previous publication) as a generalization of free structures. For quasi-free structures over disjoint signatures, we describe a canonical amalgamation construction that yields the free amalgamated product. The combination techniques known from unification theory can be used to combine constraint solvers for quasi-free structures over disjoint signatures into a solver for their free amalgamated product. In addition to term algebras modulo equational theories (i.e., free algebras), the class of quasi-free structures contains many solution structures that are of interest in constraint logic programming, such as the algebra of rational trees, feature structures, and domains consisting of hereditarily finite (wellfounded or non-wellfounded) nested sets and lists.
F. Baader and K.U. Schulz: **Unification Theory**. In W. Bibel and P.H. Schmidt, editors, *Automated Deduction – A Basis for Applications, Vol. I: Foundations – Calculi and Methods*, volume 8 of *Applied Logic Series*, pages 225–263. Dordrecht, NL, Kluwer Academic Publishers, 1998.

BibTeX entry

#### Abstract:

In this chapter, we first motivate equational unification by its applications in theorem proving and term rewriting. In addition to applications that require the computation of unifiers, we will also mention constraint-based approaches, in which only solvability of unification problems (i.e., the existence of unifiers) must be tested. Then we extend the definitions known from syntactic unification (such as most general unifier) to the case of equational unification. It turns out that, for equational unification, one must be more careful when introducing these notions. In the third section, we will mention some unification results for specific equational theories. In the fourth, and central, section of this chapter, we treat the important problem of how to combine unification algorithms. This problem occurs, for example, if we have a unification algorithm that can treat the commutative symbol ``+'' and another algorithm that can treat the associative symbol ``x'', and we want to unify terms that contain both symbols. Finally, we conclude with a short section in which other interesting topics in the field of equational unification are mentioned, which could not be treated in more detail in this chapter.
F. Baader and C. Tinelli: **Deciding the Word Problem in the Union of Equational Theories**. UIUCDCS-R-98-2073, Department of Computer Science, University of Illinois at Urbana-Champaign, 1998.

BibTeX entry
Paper (PS)

#### Abstract:

The main contribution of this report is a new method for combining decision procedures for the word problem in equational theories. In contrast to previous methods, it is based on transformation rules, and also applies to theories sharing ``constructors.'' In addition, we show that—contrary to a common belief—the Nelson-Oppen combination method cannot be used to combine decision procedures for the word problem, even in the case of equational theories with disjoint signatures.
Franz Baader and Tobias Nipkow: **Term Rewriting and All That**. United Kingdom, Cambridge University Press, 1998.

BibTeX entry

#### Abstract:

This is the first English language textbook offering a unified and self-contained introduction to the field of term rewriting. It covers all the basic material (abstract reduction systems, termination, confluence, completion, and combination problems), but also some important and closely connected subjects: universal algebra, unification theory and Gröbner bases. The main algorithms are presented both informally and as programs in the functional language Standard ML (an appendix contains a quick and easy introduction to ML). Certain crucial algorithms like unification and congruence closure are covered in more depth and efficient Pascal programs are developed. The book contains many examples and over 170 exercises. This text is also an ideal reference book for professional researchers: results that have been spread over many conference and journal articles are collected together in a unified notation, detailed proofs of almost all theorems are provided, and each chapter closes with a guide to the literature.

More information (table of contents, sample programs, errata)
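The book presents its core algorithms, unification among them, as Standard ML programs. Purely as an illustration of the flavour of such a program (a sketch in Python rather than ML, with an invented term representation, not code from the book), here is the classical syntactic unification algorithm:

```python
# Sketch of syntactic (first-order) unification. Representation is invented
# for this example: a variable is a plain string, a compound term is a pair
# ("f", [args]); a constant is a compound term with no arguments.

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # Follow variable bindings until we reach a representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear in term t under subst?
    t = walk(t, subst)
    if is_var(t):
        return v == t
    return any(occurs(v, arg, subst) for arg in t[1])

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    f, args_s = s
    g, args_t = t
    if f != g or len(args_s) != len(args_t):
        return None  # symbol clash
    for a, b in zip(args_s, args_t):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Unify f(X, g(Y)) with f(g(a), g(g(a))): yields X = g(a), Y = g(a).
a = ("a", [])
mgu = unify(("f", ["X", ("g", ["Y"])]),
            ("f", [("g", [a]), ("g", [("g", [a])])]))
print(mgu)
```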

C. Decleir, M. S. Hacid, and J. Kouloumdjian: **A Generic Model for Video Content Based Retrieval**. In *Proceedings of the 1998 ACM Symposium on Applied Computing, Multimedia Track, Atlanta, GA, USA (to appear)*, 1998. Extended Abstract

BibTeX entry
Paper (PS)

#### Abstract:

This paper presents a generic data model and a rule-based query language for content based video access. The model allows user-defined attributes as well as explicit relations between objects of interest. A declarative, rule-based query language is used to infer relationships about information represented in the model.
Cyril Decleir, Mohand-Saïd Hacid, and Jacques Kouloumdjian: **Modeling and Querying Video Data: A Hybrid Approach**. In *Proceedings of the IEEE Workshop on Content-Based Access of Image & Video Libraries (CBAIVL'98), Santa Barbara, CA, USA*, pages 86–90. IEEE Computer Society, June 1998.

BibTeX entry
Paper (PS)

#### Abstract:

This paper develops a video data model and a rule-based query language for video retrieval. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: (1) the entities (i.e., objects) of interest in a video sequence, (2) generalized strata (a set of fragments), which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. The language has a clear declarative and operational semantics.
Cyril Decleir, Mohand-Saïd Hacid, and Jacques Kouloumdjian: **Modeling and Querying Video Databases**. In *Proceedings 24th EUROMICRO'98 Conference Workshop on Multimedia and Telecommunications, Vasteras, Sweden*, pages 492–498. IEEE Computer Society, August 1998.

BibTeX entry
Paper (PS)

#### Abstract:

Indexing video data is essential for providing content based access. This paper develops a data model and a rule-based query language for video content based indexing and retrieval. The data model is based on the notion of generalized strata, which can be seen as a set of intervals. Each interval can be analyzed to extract symbolic descriptions of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: (1) the entities (i.e., objects) in the domain of a video sequence, (2) video frames, called generalized strata, which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. In the video application we are interested in, we wish to construct new generalized strata from old ones. To do this, our language has an interpreted function term (i.e., constructive term) to concatenate generalized strata. This language has a clear declarative and operational semantics.
H.-W. Denker, J. Hiltner, H.-P. Hohn, D. C. Novak, B. Reusch, C. Tresp, and J. Weidemann: **Schnittbildanatomie – Interaktives klinisch-topographisches Lernprogramm**. W. de Gruyter, 1998.

BibTeX entry

I. Horrocks and U. Sattler: **A Description Logic with Transitive and Converse Roles and Role Hierarchies**. In *Proceedings of the International Workshop on Description Logics*. Povo - Trento, Italy, IRST, 1998.

BibTeX entry
Paper (PS)

R. Küsters: **Characterizing the Semantics of Terminological Cycles in ALN using Finite Automata**. In *Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98)*, pages 499–510. Morgan Kaufmann, 1998.

BibTeX entry

#### Abstract:

The representation of terminological knowledge may naturally lead to terminological cycles. In addition to descriptive semantics, the meaning of cyclic terminologies can also be captured by fixed-point semantics, namely, greatest and least fixed-point semantics. To gain a more profound understanding of these semantics and to obtain inference algorithms as well as complexity results for inconsistency, subsumption, and related inference tasks, this paper provides automata theoretic characterizations of these semantics. More precisely, the already existing results for FL_{0} are extended to the language ALN, which additionally allows for primitive negation and number restrictions. Unlike FL_{0}, the language ALN allows to express inconsistent concepts, which makes non-trivial extensions of the characterizations and algorithms necessary. Nevertheless, the complexity of reasoning does not increase when going from FL_{0} to ALN. This distinguishes ALN from the very expressive languages with fixed-point operators proposed in the literature. It will be shown, however, that cyclic ALN-terminologies are expressive enough to capture schemas in certain semantic data models.
Martin Leucker and Stephan Tobies: **Truth—A Platform for Verification of Distributed Systems**. 98-05, RWTH Aachen, May 1998.

BibTeX entry
Paper (PS)

#### Abstract:

Formal Methods are becoming more and more important for the development of hardware and software systems. Verification tools support the employment of Formal Methods. This paper gives an overview of the design and implementation of the verification tool Truth. We define and explain requirements for verification tools. Furthermore, we discuss several semantic models, specification languages and logics and their visualisation from a tool builder's perspective and show how these requirements were adopted in Truth.
U. Sattler: **Terminological knowledge representation systems in a process engineering application**. LuFG Theoretical Computer Science, RWTH-Aachen, 1998.

BibTeX entry
Paper (PS)

#### Abstract:

This work is concerned with the question of how far terminological knowledge representation systems can support the development of mathematical models of chemical processes. Terminological knowledge representation systems are based on Description Logics, a highly expressive formalism with well-defined semantics, and provide powerful inference services. These system services can be used to support the structuring of the application domain, namely the organised storage of parts of process models. However, the process systems engineering application asks for the extension of the expressive power of already existing Description Logics, particularly with transitive roles and expressive number restrictions. These extensions are introduced and investigated with respect to the computational complexity of the corresponding inference problems.
Stephan Tobies: **Design und Implementierung einer Plattform zur Verifikation verteilter Systeme**. RWTH Aachen, Germany, 1998.

BibTeX entry
Paper (PS)

#### Abstract:

Formal Methods are becoming more and more important for the development of hardware and software systems. Verification tools support the employment of Formal Methods. This thesis gives an overview of the design and implementation of the verification tool Truth.
C. Tresp and U. Tüben: **Medical Terminology Processing for a Tutoring System**. In *International Conference on Computational Intelligence and Multimedia Applications (ICCIMA98)*, February 1998.

BibTeX entry

C.B. Tresp and R. Molitor: **A Description Logic for Vague Knowledge**. In *Proceedings of the 13th biennial European Conference on Artificial Intelligence (ECAI'98)*, pages 361–365. Brighton, UK, J. Wiley and Sons, 1998.

BibTeX entry

#### Abstract:

This work introduces the concept language ALC F(M), which is an extension of ALC to many-valued logics. ALC F(M) allows to express vague concepts, e.g. more or less enlarged or very small. To realize this extension to many-valued logics, the classical notions of satisfiability and subsumption had to be modified appropriately. The main contribution of this paper is a sound and complete method for computing the degree of subsumption between two ALC F(M)-concepts.

## 1997

F. Baader: **Combination of Compatible Reduction Orderings that are Total on Ground Terms**. In G. Winskel, editor, *Proceedings of the Twelfth Annual IEEE Symposium on Logic in Computer Science (LICS-97)*, pages 2–13. Warsaw, Poland, IEEE Computer Society Press, 1997.

BibTeX entry

#### Abstract:

Reduction orderings that are compatible with an equational theory E and total on (the E-equivalence classes of) ground terms play an important role in automated deduction. This paper presents a general approach for combining such orderings: it shows how E1-compatible reduction orderings total on S1-ground terms and E2-compatible reduction orderings total on S2-ground terms can be used to construct an E-compatible reduction ordering total on S-ground terms (where E is the union of the theories E1 and E2, and S is the union of the signatures S1 and S2), provided that S1 and S2 are disjoint and some other (rather weak) restrictions are satisfied. This work was motivated by the observation that it is often easier to construct such orderings for "small" signatures and theories separately, rather than directly for their union.
F. Baader and P. Narendran: **Unification of Concept Terms**. In *Proceedings of the 11th International Workshop on Unification, UNIF-97, LIFO Technical Report 97-8*. LIFO, Université d'Orléans, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

Unification of concept terms in Description Logics can be used to determine whether a newly introduced concept may have already been defined before, possibly using other atomic names or modelling concepts on a different level of granularity. We show that unification of concept terms in the small concept description language *FL_{0}* can be reduced to unification modulo an appropriate equational theory. Using results from unification theory, we can further reduce this unification problem to a formal language problem, which can be solved (in Exptime) with the help of tree automata. It can also be shown that the problem is PSPACE-hard.
F. Baader and P. Narendran: **Unification of Concept Terms in Description Logics**. In *Proceedings of the International Workshop on Description Logics, DL'97*, pages 34–38. LRI, Université Paris-Sud, Centre d'Orsay, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

Unification of concept terms is a new kind of inference problem for Description Logics, which extends the equivalence problem by allowing to substitute certain concept names by concept terms before testing for equivalence. We show that this inference problem is of interest for applications, and present first decidability and complexity results for a small concept description language.
F. Baader and U. Sattler: **Description Logics with Aggregates and Concrete Domains**. In *Proceedings of the International Workshop on Description Logics*, 1997.

BibTeX entry
Paper (PS)

F. Baader and C. Tinelli: **A New Approach for Combining Decision Procedures for the Word Problem, and Its Connection to the Nelson-Oppen Combination Method**. In W. McCune, editor, *Proceedings of the 14th International Conference on Automated Deduction (CADE-97)*, volume 1249 of *Lecture Notes in Artificial Intelligence*, pages 19–33. Springer-Verlag, 1997.

BibTeX entry

#### Abstract:

The Nelson-Oppen combination method can be used to combine decision procedures for the validity of quantifier-free formulae in first-order theories with disjoint signatures, provided that the theories to be combined are stably infinite. We show that, even though equational theories need not satisfy this property, Nelson and Oppen's method can be applied, after some minor modifications, to combine decision procedures for the validity of quantifier-free formulae in equational theories. Unfortunately, and contrary to a common belief, the method cannot be used to combine decision procedures for the word problem. We present a method that solves this kind of combination problem. Our method is based on transformation rules and also applies to equational theories that share a finite number of constant symbols.
Franz Baader and Klaus U. Schulz: **Unification Theory – An Introduction**. CIS-Rep-97-103, Center for Language and Information Processing (CIS), Oettingenstraße 67, D-80538 Munich, Germany, January 1997.

BibTeX entry
Paper (PS)

#### Abstract:

This work is a preliminary version of the chapter on unification theory in a volume on automated deduction produced by the participants of the nationwide German research programme on automated deduction (SSP ``Deduktion'').
M. Baumeister, A. Becks, S. Sklorz, C. Tresp, and U. Tüben: **Indexing Medical Abstract Databases**. In *Proceedings of the European Workshop on Multimedia Technology in Medical Training*, September 1997.

BibTeX entry

M. Baumeister, H.-P. Hohn, S. Sklorz, and C. Tresp (editors): **Multimedia Technology in Medical Training**. Aachen, Augustinus, 1997.

BibTeX entry

M. S. Hacid, P. Marcel, and C. Rigotti: **A Rule-Based Language for Ordered Multidimensional Databases**. In *Proc. of the 5th Intl. Workshop on Deductive Database and Logic Programming (DDLP'97)*, volume 317 of *GMD-Studien*, pages 69–81, July 1997.

BibTeX entry
Paper (PS)

#### Abstract:

This paper presents a rule-based language that supports multidimensional tables. It provides a simple and declarative way to express every query computable in polynomial time on ordered tables. We define its model-theoretic semantics and develop an equivalent fixpoint theory that is a basis for the reuse of standard optimization techniques.
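The least-fixpoint semantics mentioned in the abstract can be illustrated in miniature. The following Python sketch (all names invented, and far simpler than the paper's ordered multidimensional setting) iterates the immediate-consequence operator of a set of Datalog-style rules until no new facts are derived:

```python
# Sketch of naive bottom-up Datalog evaluation: repeatedly apply all rules
# to the current database until the least fixpoint is reached. Variables
# are strings starting with '?'; facts and atoms are tuples whose first
# component is the predicate name.

def naive_eval(facts, rules):
    """Iterate the immediate-consequence operator to its least fixpoint."""
    db = set(facts)
    while True:
        new = set()
        for head, body in rules:
            for env in matches(body, db, {}):
                new.add(tuple(env.get(x, x) for x in head))
        if new <= db:          # nothing new derived: fixpoint reached
            return db
        db |= new

def matches(body, db, env):
    # Enumerate all variable bindings that satisfy every body atom.
    if not body:
        yield env
        return
    atom, rest = body[0], body[1:]
    for fact in db:
        if fact[0] != atom[0] or len(fact) != len(atom):
            continue
        e = dict(env)
        if all(bind(x, v, e) for x, v in zip(atom[1:], fact[1:])):
            yield from matches(rest, db, e)

def bind(x, v, env):
    # Bind variable x to value v, or check consistency with earlier bindings.
    if x.startswith("?"):
        if x in env:
            return env[x] == v
        env[x] = v
        return True
    return x == v

# Transitive closure:
#   path(X,Y) :- edge(X,Y).
#   path(X,Z) :- edge(X,Y), path(Y,Z).
facts = {("edge", "a", "b"), ("edge", "b", "c")}
rules = [
    (("path", "?x", "?y"), [("edge", "?x", "?y")]),
    (("path", "?x", "?z"), [("edge", "?x", "?y"), ("path", "?y", "?z")]),
]
print(sorted(t for t in naive_eval(facts, rules) if t[0] == "path"))
```

This naive strategy recomputes every rule application in each round; it is the procedure that semi-naive evaluation and the standard optimization techniques alluded to in the abstract improve upon.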
M. S. Hacid, P. Marcel, and C. Rigotti: **A rule based data manipulation language for OLAP systems**. In *Proc. of the 5th Intl. Conf. on Deductive and Object-Oriented Databases (DOOD'97)*, *LNCS*. Montreux, Switzerland, Springer, December 1997.

BibTeX entry
Paper (PS)

#### Abstract:

This paper proposes an extension of Datalog devoted to data manipulations in On-Line Analytical Processing (OLAP) systems. This language provides a declarative and concise way to specify the basic standard restructuring and summarizing operations on multidimensional cubes used in these systems. We define its model-theoretic semantics and an equivalent fixpoint semantics that leads to a naive evaluation procedure. We also illustrate its applicability to specify useful, more complex data manipulations arising in OLAP systems.
Mohand-Saïd Hacid and Ulrike Sattler: **An Object-Centered Multi-dimensional Data Model with Hierarchically Structured Dimensions**. In *Proceedings of the IEEE Knowledge and Data Engineering Workshop, Newport Beach, CA, USA*, pages 65–72. IEEE Computer Society, November 1997.

BibTeX entry

#### Abstract:

In this paper, we propose a formal framework based on description logics as a basis for both modeling multidimensional databases and understanding related reasoning problems and services. We extend a description logic with new constructors for defining operators on cubes together with a representational framework for hierarchically structured dimensions. The constructors we propose correspond in part to the operators on data cubes given in [1]. Finally, our representation of cubes naturally allows a symmetric treatment of dimensions and measures.

[1] Rakesh Agrawal, Ashish Gupta, and Sunita Sarawagi: Modeling Multidimensional Databases, Proceedings of the International Conference on Data Engineering (ICDE'97), Birmingham, UK. Also available as a research report via WWW at http://www.almaden.ibm.com/cs/people/ragrawal/pubs.html#olap
J. Hiltner, M. Jäger, E. Meyer zu Bexten, C. Tresp, and M. Fathi: **Analyse medizinischer Bilddaten mit Hilfe unscharfen Wissens**. In Bernhard Arnolds, Heinrich Müller, Dietmar Saupe, and Thomas Tolxdorff, editors, *Digitale Bildverarbeitung in der Medizin*, *Tagungsband zum 5. Freiburger Workshop (Deutschland)*, March 1997.

BibTeX entry

Claudia Krobb: **Entwicklung einer Spezialisierungshierarchie für Modellierungsschritte im objekt-orientierten Datenmodell VeDa**. RWTH Aachen, Germany, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

The object-oriented data model VeDa is designed to represent objects as well as steps relevant for the modeling of chemical plants and processes. The main objective of this thesis is to develop a relation which supports the organization of modeling steps into a hierarchy. This hierarchy facilitates the search for and the reuse of existing steps. It also helps to decide whether a step can be exchanged in a given sequence of steps without loss of executability of that sequence. The first step in the definition of this hierarchy was the development of a language for the specification of classes of modeling steps. In parallel, a formalism was developed to formulate pre- and postconditions for individual steps; these conditions characterize the effects of such steps. Next, an algorithm is presented that decides interesting properties of these pre- and postconditions, such as satisfiability. Finally, it is shown how the formalism developed is used as a basis for both the definition of the hierarchy and an algorithm to check whether a given sequence of steps is executable.
R. Küsters: **Characterizing the semantics of terminological cycles with the help of finite automata**. In *Proceedings of the International Workshop on Description Logics, DL'97*, pages 10–14. LRI, Université Paris-Sud, Centre d'Orsay, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

The representation of terminological knowledge may naturally lead to terminological cycles. In addition to descriptive semantics, the meaning of cyclic terminologies can also be captured by fixed-point semantics. To gain a more profound understanding of these semantics and to obtain inference algorithms for inconsistency, subsumption, and related inference tasks, this paper provides automata-theoretic characterizations of these semantics. The already existing results for the language FL_{0} are extended to ALN, which additionally allows for primitive negation and number-restrictions. Moreover, this work considers the relationship between certain schemata and ALN-terminologies.
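The language-based view behind these automata characterizations can be illustrated in the simplest setting: in FL_{0}, each concept determines, for every primitive concept, a language of role words under which that primitive is required, and subsumption reduces to language inclusion. The sketch below checks this directly for an *acyclic* TBox, where the languages are finite; the cyclic case, which the paper handles, needs genuine finite automata. The TBox encoding is invented for illustration.

```python
# Sketch of FL_0 subsumption as role-word language inclusion (acyclic case).
# A TBox maps each defined concept to a list of (role word, concept) pairs;
# e.g. A ≡ P ⊓ ∀r.P becomes [((), "P"), (("r",), "P")]. Encoding is illustrative.

def language(tbox, primitives, concept, prefix=()):
    """All pairs (role word, primitive concept) that `concept` requires."""
    if concept in primitives:
        return {(prefix, concept)}
    pairs = set()
    for word, sub in tbox[concept]:
        pairs |= language(tbox, primitives, sub, prefix + word)
    return pairs

def subsumes(tbox, primitives, subsumer, subsumee):
    """subsumee ⊑ subsumer iff the subsumer's language is contained in the subsumee's."""
    return language(tbox, primitives, subsumer) <= language(tbox, primitives, subsumee)

# A ≡ P ⊓ ∀r.P and B ≡ P: every instance of A is an instance of B, not vice versa.
tbox = {"A": [((), "P"), (("r",), "P")], "B": [((), "P")]}
prims = {"P"}
```

For cyclic terminologies the languages become infinite regular sets, which is exactly where the finite-automata characterization takes over.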

Ralf Küsters: **Charakterisierung der Semantik terminologischer Zyklen mit Hilfe endlicher Automaten**. RWTH Aachen, Germany, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

The representation of terminological knowledge may naturally lead to terminological cycles. In addition to descriptive semantics, the meaning of a (cyclic) terminology can also be captured by fixed-point semantics, which were first introduced by B. Nebel. These semantics were analyzed by F. Baader with the help of finite automata for the very small language FL_{0}, which allows for concept conjunctions and (universal) value-restrictions. This automata-theoretic characterization helps in deciding which semantics is preferable in a specific representation task, and in addition, it yields decision procedures and complexity results for subsumption. Since FL_{0} is not expressive enough for most practical representation problems, my diploma thesis extends FL_{0} to ALN by adding primitive negation and number-restrictions. Besides the characterizations of cyclic ALN-terminologies, this work also analyzes the relationship between ALN-terminologies and SL_{dis}-schemata (which were introduced by M. Buchheit, F. M. Donini, W. Nutt, and A. Schaerf). It turns out that SL_{dis}-schemata can be seen, w.r.t. inconsistency, validity, and subsumption, as special terminologies. Thus, inference problems involving schemata can be reduced to inference problems of the corresponding terminologies.

Ralf Molitor: **Konsistenz von Wissensbasen in Beschreibungslogiken mit Rollenoperatoren**. RWTH Aachen, Germany, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

This thesis investigates decidability problems for extensions of the description logic ALC by concept constructors on role chains. Starting from a rule-based completion procedure that decides satisfiability of ALCN(o)-concepts, we first attempt to decide the consistency of ALCN(o)-ABoxes with the help of suitable extensions of this procedure. On the one hand, this investigation yields the desired decidability result for a special class of (admissible) ALCN(o)-ABoxes. On the other hand, examples are used to discuss in detail the problems that arise (in the termination proof) for the completion procedure on inadmissible ALCN(o)-ABoxes. The decidability of the consistency problem for ALCN(o) remains open. The first decidability result exploits the level-structure property of ALCN(o). Since the extension of ALC by role-value-maps on role chains of equal length (ALC+rvm) also has the level-structure property, the satisfiability and subsumption problems for ALC+rvm are investigated next. The decidability of these problems for ALC+rvm again remains open; however, the finite-model property is established for some fragments of ALC+rvm, so that corresponding decidability results are obtained for these fragments.
Madjid Nassiri: **Berechnung einer erweiterten Subsumtionshierarchie**. RWTH Aachen, Germany, 1997.

BibTeX entry
Paper (PS)

#### Abstract:

This diploma thesis is concerned with computing an extended subsumption hierarchy with the help of attribute exploration. The computation of a simple subsumption hierarchy (classification) is commonly realized as a system service in today's KR systems. Classification simplifies the reasoning problem at run time. However, the information that can be extracted from this simple hierarchy is quite limited, since the hierarchy gives no information about the subsumption relations between conjunctions of concept names. Such conjunctions make up a large part of the composite properties. It is therefore desirable to compute an extended subsumption hierarchy that represents all conjunctions of the defined concept names. When computing such a hierarchy, one naturally wants to call the expensive subsumption algorithm as rarely as possible. The goal of this thesis was to solve this task. To this end, results from the field of formal concept analysis were employed. In this field, objects, properties (attributes), and the object-attribute relation are represented by formal contexts and formal concepts. Methods of concept analysis developed by Ganter and Wille make it possible to compute the concept hierarchy of a given context, which can be represented as a complete lattice.

By defining a suitable context for a given TBox, the connection between formal concept analysis and terminological TBoxes was established, in the hope that this would make it possible to compute an extended subsumption hierarchy in the sense defined above. In this context, the attributes correspond to the concept names defined in the TBox. Accordingly, every concept intent is a conjunction of concept names.

Ganter and Wille have given a method (attribute exploration) by which one can compute a minimal implication base for a given context, even if the context is infinite. This base then represents the entire concept lattice of the context. The base *L* computed by attribute exploration consists of a minimal number of implications that hold in the context. The set of all implications valid in the context is then exactly the set of all implications that follow from *L*. One computes this base for a given context by enumerating all so-called pseudo-intents of the context with the help of an order (the lectic order) defined on the power set of the set of all attributes.

In the context suitably defined for a TBox, this base corresponds to a minimal representation of all subsumptions between conjunctions of the defined concept names. To compute this base, Ganter's algorithm presupposes a human expert who is familiar with the application domain and can decide, for every implication between attributes, whether it holds in the context or not. In connection with TBoxes, this role is taken over by a subsumption algorithm. To this end, a well-known, optimized, rule-based satisfiability algorithm (the functional algorithm) was extended into the desired expert which, employed in Ganter's algorithm, makes the computation of the desired implication base possible.
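The statement that all valid implications are exactly those that follow from the base *L* rests on a simple closure test: an implication follows from a base iff its conclusion is contained in the closure of its premise under the base. A minimal sketch of this core subroutine of attribute exploration, with an invented encoding of implications as (premise, conclusion) set pairs:

```python
# Sketch of the implication-closure test used in attribute exploration.
# Implications are (premise, conclusion) pairs of attribute sets; the
# encoding is illustrative, not taken from the thesis.

def closure(attributes, base):
    """Smallest superset of `attributes` that respects every implication in `base`."""
    closed = set(attributes)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in base:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def follows(premise, conclusion, base):
    """Does the implication premise -> conclusion follow from `base`?"""
    return set(conclusion) <= closure(premise, base)

# With base {a -> b, b -> c}, the implication a -> c follows but c -> a does not.
base = [({"a"}, {"b"}), ({"b"}, {"c"})]
```

In the TBox setting, attributes are defined concept names and the closure of a premise set corresponds to the most specific conjunction subsuming it.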

C. Tresp: **Fuzzy Reasoning Techniques for the Management of Complex Information in Medicine**. In *42. Internationales Wissenschaftliches Kolloquium*, September 1997.

BibTeX entry

C. Tresp: **Queries in Fuzzy Deductive Databases Using Medical Information**. In *5th European Congress on Intelligent Techniques & Soft Computing (EUFIT 97)*, September 1997.

BibTeX entry

C. Tresp and S. Sklorz: **Medizinische Aus- und Weiterbildung: Mit dem Computer lernen**. *interMed - Arzt & neue Medien*, 1, 1997. Braun Fachverlage

BibTeX entry

C. Tresp and S. Sklorz: **Multimedia Technology in Medical Training**. In M. Jarke, K. Pasedach, and K. Pohl, editors, *Informatik'97: Informatik als Innovationsmotor*, *27. Jahrestagung der GI*. Aachen, Springer LNCS, 1997.

BibTeX entry

J. Weidemann, H.-P. Hohn, J. Hiltner, K. Tochtermann, C. Tresp, D. Bozinov, K. Venjakob, A. Freund, B. Reusch, and H.-W. Denker: **A Hypermedia Tutorial for Cross-Sectional Anatomy: HyperMed**. *Acta Anatomica*, 158, 1997.

BibTeX entry

## 1996

F. Baader: **A Formal Definition for the Expressive Power of Terminological Knowledge Representation Languages**. *J. of Logic and Computation*, 6(1):33–54, 1996.

BibTeX entry
Free reprint

#### Abstract:

The notions `expressive power' or `expressiveness' of knowledge representation languages (KR languages) can be found in most papers on knowledge representation; but these terms are usually just employed in an intuitive sense. The papers contain only informal descriptions of what is meant by expressiveness. There are several reasons that speak in favour of a formal definition of expressiveness: for example, if we want to show that certain expressions in one language cannot be expressed in another language, we need a strict formalism that can be used in mathematical proofs. Even though we shall only consider terminological KR languages—i.e. KR languages descending from the original system KL-ONE—in our motivation and in the examples, the definition of expressive power that will be given in this paper can be used for all KR languages with Tarski-style model-theoretic semantics. This definition will shed a new light on the tradeoff between expressiveness of a representation language and its computational tractability. There are KR languages with identical expressive power, but different complexity results for reasoning, which comes from the fact that sometimes the tradeoff lies between convenience and computational tractability. The definition of expressive power will be applied to compare various terminological KR languages known from the literature with respect to their expressiveness. This will yield examples for how to utilize the definition both in positive proofs—that is, proofs where it is shown that one language can be expressed by another language—and, more interestingly, in negative proofs—which show that a given language cannot be expressed by the other language.
F. Baader: **Combination of Compatible Reduction Orderings that are Total on Ground Terms**. In *Proceedings of the 10th International Workshop on Unification, UNIF-96, CIS-Report 96-91*, pages 97–106. CIS, Universität München, 1996.

BibTeX entry
Paper (PS)

#### Abstract:

Reduction orderings that are compatible with an equational theory *E* and total on the *E*-equivalence classes of ground terms play an important role in automated deduction. It has turned out to be rather hard to define such orderings. This paper supports the process of designing compatible total reduction orderings. It describes how total reduction orderings *>*_{1} and *>*_{2} that are compatible with *E*_{1} and *E*_{2}, respectively, can be combined to a total reduction ordering *>* that is compatible with *E*_{1} ∪ *E*_{2}, provided that the theories are over disjoint signatures and some other properties are satisfied.
F. Baader: **Using Automata Theory for Characterizing the Semantics of Terminological Cycles**. *Annals of Mathematics and Artificial Intelligence*, 18(2–4):175–219, 1996.

BibTeX entry
Free reprint

#### Abstract:

In most of the implemented terminological knowledge representation systems it is not possible to state recursive concept definitions, so-called terminological cycles. One reason is that it is not clear what kind of semantics to use for such cycles. In addition, the inference algorithms used in such systems may go astray in the presence of terminological cycles. In this paper we consider terminological cycles in a very small terminological representation language. For this language, the effect of the three types of semantics introduced by B. Nebel can be completely described with the help of finite automata. These descriptions provide for a rather intuitive understanding of terminologies with recursive definitions, and they give an insight into the essential features of the respective semantics. In addition, one obtains algorithms and complexity results for the subsumption problem and for related inference tasks. The results of this paper may help to decide what kind of semantics is most appropriate for cyclic definitions, depending on the representation task.
F. Baader: **Logik-basierte Wissensrepräsentation**. *KI*, 3/96:8–16, 1996.

BibTeX entry

#### Abstract:

After a brief look at the requirements a knowledge representation language should fulfil, we discuss description logics, modal logics, and non-monotonic logics as formalisms for representing terminological knowledge, time-dependent and subjective knowledge, and incomplete knowledge, respectively. At the end of each section, the connection to logic programming is briefly addressed.
F. Baader, M. Buchheit, and B. Hollunder: **Cardinality Restrictions on Concepts**. *Artificial Intelligence*, 88(1–2):195–213, 1996.

BibTeX entry
Free reprint

#### Abstract:

The concept description formalisms of existing description logics systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. This article argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of description logics systems that is currently gaining in interest. It shows that including such restrictions in the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualified number restrictions and of general terminological axioms.
F. Baader and W. Nutt: **Combination Problems for Commutative/Monoidal Theories: How Algebra Can Help in Equational Reasoning**. *J. Applicable Algebra in Engineering, Communication and Computing*, 7(4):309–337, 1996.

BibTeX entry

#### Abstract:

We study the class of theories for which solving unification problems is equivalent to solving systems of linear equations over a semiring. It encompasses important examples like the theories of Abelian monoids, idempotent Abelian monoids, and Abelian groups. This class has been introduced by the authors independently of each other as ``commutative theories'' (Baader) and ``monoidal theories'' (Nutt). We show that commutative theories and monoidal theories indeed define the same class (modulo a translation of the signature), and we prove that it is undecidable whether a given theory belongs to it. In the remainder of the paper we investigate combinations of commutative/monoidal theories with other theories. We show that finitary commutative/monoidal theories always satisfy the requirements for applying general methods developed for the combination of unification algorithms for disjoint equational theories. Then we study the adjunction of monoids of homomorphisms to commutative/monoidal theories. This is a special case of a non-disjoint combination, which has an algebraic counterpart in the corresponding semiring. By studying equations over this semiring, we identify a large subclass of commutative/monoidal theories that are of unification type zero. We also show, with methods from linear algebra, that unitary and finitary commutative/monoidal theories do not change their unification type when they are augmented by a finite monoid of homomorphisms, and how algorithms for the extended theory can be obtained from algorithms for the basic theory.
F. Baader and U. Sattler: **Description Logics with Symbolic Number Restrictions**. In W. Wahlster, editor, *Proceedings of the Twelfth European Conference on Artificial Intelligence (ECAI-96)*, pages 283–287. John Wiley & Sons Ltd, 1996. An extended version has appeared as Technical Report LTCS-96-03

BibTeX entry
Paper (PS)

#### Abstract:

Motivated by a chemical engineering application, we introduce an extension of the concept description language ALCN by symbolic number restrictions. This first extension turns out to have an undecidable concept satisfiability problem. For a restricted language, whose expressive power is sufficient for our application, we show that concept satisfiability is decidable.
F. Baader and U. Sattler: **Knowledge Representation in Process Engineering**. In *Proceedings of the International Workshop on Description Logics*. Cambridge (Boston), MA, U.S.A., AAAI Press/The MIT Press, 1996.

BibTeX entry
Paper (PS)

#### Abstract:

In process engineering, as in many other application domains, the domain specific knowledge is far too complex to be described entirely using description logics. Hence this knowledge is often stored using an object-oriented system, which, because of its high expressiveness, provides only weak inference services. In particular, the process engineers at RWTH Aachen have developed a frame-like language for describing process models. In this paper, we investigate how the powerful inference services provided by a DL system can support the users of this frame-based system. In addition, we consider extensions of description languages that are necessary to represent the relevant process engineering knowledge.
F. Baader and U. Sattler: **Number Restrictions on Complex Roles in Description Logics**. In *Proceedings of the Fifth International Conference on the Principles of Knowledge Representation and Reasoning (KR-96)*. Morgan Kaufmann, Los Altos, 1996. An extended version has appeared as Technical Report LTCS-96-02

BibTeX entry

#### Abstract:

Number restrictions are concept constructors that are available in almost all implemented description logic systems. However, even though there has lately been considerable effort on integrating expressive role constructors into description logics, the roles that may occur in number restrictions are usually of a very restricted type. Until now, only languages with number restrictions on atomic roles and inversion of atomic roles, or with number restrictions on intersection of atomic roles have been investigated in detail. In the present paper, we increase the expressive power of description languages by allowing for more complex roles in number restrictions. As role constructors, we consider composition of roles (which will be present in all our languages), and intersection, union and inversion of roles in different combinations. We will present one decidability result (for the basic language that extends ALC by number restrictions on roles with composition), and three undecidability results for three different extensions of the basic language.
F. Baader and K. U. Schulz: **Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures**. *J. Symbolic Computation*, 21:211–243, 1996.

BibTeX entry
Free reprint

#### Abstract:

Most of the work on the combination of unification algorithms for the union of disjoint equational theories has been restricted to algorithms that compute finite complete sets of unifiers. Thus the developed combination methods usually cannot be used to combine decision procedures, i.e., algorithms that just decide solvability of unification problems without computing unifiers. In this paper we describe a combination algorithm for decision procedures that works for arbitrary equational theories, provided that solvability of so-called unification problems with constant restrictions—a slight generalization of unification problems with constants—is decidable for these theories. As a consequence of this new method, we can, for example, show that general A-unifiability, i.e., solvability of A-unification problems with free function symbols, is decidable. Here A stands for the equational theory of one associative function symbol. Our method can also be used to combine algorithms that compute finite complete sets of unifiers. Manfred Schmidt-Schauß' combination result, the until now most general result in this direction, can be obtained as a consequence of this fact. We also obtain the new result that unification in the union of disjoint equational theories is finitary, if general unification—i.e., unification of terms with additional free function symbols—is finitary in the single theories.
Franz Baader and Klaus U. Schulz, editors: **Frontiers of Combining Systems**. Kluwer Academic Publishers, 1996.

BibTeX entry

#### Abstract:

The combination of formal systems and algorithms, the logical and algebraic background, as well as the general architecture of complex and interacting systems has recently become a very active research area. The first international workshop Frontiers of Combining Systems created a common forum for the different research activities on this topic in the fields of logic, computer science, and artificial intelligence. Its main intention was to stimulate an interdisciplinary discussion that focuses on different aspects of the combination problem. The volume contains research papers that cover the combination of logics, the combination of constraint-solving techniques and decision procedures, the combination of deductive systems, the integration of data structures into Constraint Logic Programming formalisms, and logic modelling of multi-agent systems. These problems are addressed on different conceptual levels: from the investigation of formal properties of combined systems using methods of logic and mathematics to the consideration of physical connections and communication languages relevant for the combination of software tools.

Kamel Ben-Khalifa: **Kombination freier Strukturen**. RWTH Aachen, Germany, November 1996.

BibTeX entry
Paper (PS)

#### Abstract:

The combination problem in unification theory is concerned with combining unification algorithms for equational theories over disjoint signatures into a unification algorithm that can handle unification problems over the mixed signature, i.e., unification problems in which symbols from different equational theories may occur. An extension of this problem is to consider term relations that are more general than equality modulo an equational theory. The class of Knuth-Bendix reduction orderings, which is used, e.g., in the completion of term rewriting systems, is an important example of such term relations.

To do so, one must first fix how these relations are to be interpreted on mixed terms. Two definitions exist for this purpose. The definition of Baader and Schulz is algebraic, based on a suitable combination of free structures. The definition of Kirchner and Ringeissen, in contrast, is syntactic; it uses term reduction and term abstraction. This thesis essentially shows that both definitions are equivalent. It is then shown how, starting from a modification of the syntactic definition, decision procedures for the validity of pure atomic formulas can be combined into a decision procedure for the validity of atomic formulas over the mixed signature.

J. Hiltner and C. Tresp: **Verbunddokumente: Objektmodelle und Implementierungen**. In *15. Workshop Interdisziplinäre Methoden in der Informatik*, 1996.

BibTeX entry

Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: **Die Beweisentwicklungsumgebung Omega-MKRP**. *Informatik – Forschung und Entwicklung*, 11(1):20–26, 1996. In German

BibTeX entry

#### Abstract:

The main goal of the proof development environment Omega-MKRP is to support mathematicians in one of their main activities, namely proving mathematical theorems. This support must be so comfortable that formal proofs can be generated without undue difficulties and the correctness of the generated proofs is ensured. Such a system will only succeed if the computer supported generation of proofs is less time consuming than manual generation. In order to achieve this, there are different requirements to be fulfilled, which we describe in this paper. In particular, we discuss the expressive power of the object language, the possibility to communicate abstract proof plans, the automated support in filling proof gaps, and the human-oriented presentation of proofs generated. Omega-MKRP is a synthesis of the approaches of fully automated, interactive, and plan-based theorem proving. This article gives a survey of various aspects of the system.
Martin Leucker: **Comparison of Two Semantic Approaches to Unification**. RWTH Aachen, Germany, 1996. In German

BibTeX entry
Paper (PS)

#### Abstract:

This thesis compares the two most prominent semantic approaches to unification in equational theories. We can show that unification in primal algebras is not a direct instance of unification in monoidal theories. However, it is possible to reduce unification in a given primal algebra to unification in a corresponding monoidal theory. As by-products of this work we have shown that unification in algebras is an instance of unification modulo equational theories, and we have introduced a new notion of equivalence for equational theories.
E. Meyer zu Bexten, C. Tresp, M. Jäger, M. Moser, and J. Hiltner: **Consistency Checking in Applications based on Fuzzy Rules**. In *Second International Conference on Applications of Fuzzy Systems and Soft Computing*, June 1996.

BibTeX entry

U. Sattler: **A Concept Language Extended with Different Kinds of Transitive Roles**. In G. Görz and S. Hölldobler, editors, *20. Deutsche Jahrestagung für Künstliche Intelligenz*, number 1137 in *Lecture Notes in Artificial Intelligence*. Springer Verlag, 1996.

BibTeX entry
Paper (PS)

#### Abstract:

Motivated by applications that demand an adequate representation of part-whole relations, different possibilities of representing transitive relations in terminological knowledge representation systems are investigated. A well-known concept language, ALC, is extended by three different kinds of transitive roles. It turns out that these extensions differ largely in expressiveness and computational complexity; hence this investigation gives insight into the diverse alternatives for the representation of transitive relations such as part-whole relations, family relations, or partial orders in general.
U. Sattler: **Knowledge Representation in Process Engineering**. In F. Baader, H. J. Bürckert, A. Günter, and W. Nutt, editors, *Proceedings of the Workshop on Knowledge Representation and Configuration (WRKP'96)*, *DFKI Document D-96-04*, 1996.

BibTeX entry
Paper (PS)

K. Tochtermann, C. Tresp, J. Hiltner, and A. Freund: **HyperMed: A Hypermedia System for Anatomical Education**. In *ED-Media, World Conference on Educational Multimedia and Hypermedia*, August 1996.

BibTeX entry

C. Tresp, A. Becks, R. Klinkenberg, and J. Hiltner: **Knowledge Representation in a World with Vague Concepts**. In *Intelligent Systems: A semiotic perspective*, September 1996.

BibTeX entry

C. Tresp, M. Jäger, M. Moser, J. Hiltner, and M. Fathi: **A New Method for Image Segmentation Based on Fuzzy Knowledge**. In *Int. IEEE Symposia on Intelligence and Systems*, October 1996.

BibTeX entry

## 1995

F. Baader: **Computing a Minimal Representation of the Subsumption Lattice of all Conjunctions of Concepts Defined in a Terminology**. In *Proceedings of the International Symposium on Knowledge Retrieval, Use, and Storage for Efficiency, KRUSE 95*, pages 168–178, 1995.

BibTeX entry
Paper (PS)

#### Abstract:

For a given TBox of a terminological KR system, the classification algorithm computes (a representation of) the subsumption hierarchy of all concepts introduced in the TBox. In general, this hierarchy does not contain sufficient information to derive all subsumption relationships between conjunctions of these concepts. We show how a method developed in the area of ``formal concept analysis'' for computing minimal implication bases can be used to determine a minimal representation of the subsumption hierarchy between conjunctions of concepts introduced in a TBox. To this purpose, the subsumption algorithm must be extended such that it yields (sufficient information about) a counterexample in cases where there is no subsumption relationship. For the concept language ALC, this additional requirement does not change the worst-case complexity of the subsumption algorithm. One advantage of the extended hierarchy is that it is a lattice, and not just a partial ordering.
F. Baader, M. Buchheit, M. A. Jeusfeld, and W. Nutt: **Reasoning About Structured Objects: Knowledge Representation Meets Databases**. *The Knowledge Engineering Review*, 10(1):73–76, 1995.

BibTeX entry

F. Baader and B. Hollunder: **Embedding Defaults into Terminological Representation Systems**. *J. Automated Reasoning*, 14:149–180, 1995.

BibTeX entry
Free reprint

#### Abstract:

We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are only applied to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable. We describe an algorithm for computing extensions, and show how the inference procedures of terminological systems can be modified to give optimal support to this algorithm.
F. Baader and B. Hollunder: **Priorities on Defaults with Prerequisites, and their Application in Treating Specificity in Terminological Default Logic**. *J. Automated Reasoning*, 15:41–68, 1995.

BibTeX entry
Free reprint

#### Abstract:

In a recent paper we have proposed terminological default logic as a formalism which combines both means for structured representation of classes and objects, and for default inheritance of properties. The major drawback that terminological default logic inherits from general default logic is that it does not take precedence of more specific defaults over more general ones into account. This behaviour has already been criticized in the general context of default logic, but it is all the more problematic in the terminological case where the emphasis lies on the hierarchical organization of concepts. The present paper addresses the problem of modifying terminological default logic such that more specific defaults are preferred. We assume that the specificity ordering is induced by the hierarchical organization of concepts, which means that default information is not taken into account when computing priorities. It turns out that the existing approaches for expressing priorities between defaults do not seem to be appropriate for defaults with prerequisites. Therefore we shall consider an alternative approach for dealing with prioritization in the framework of Reiter's default logic. The formalism is presented in the general setting of default logic where priorities are given by an arbitrary partial ordering on the defaults. We shall exhibit some interesting properties of the new formalism, compare it with existing approaches, and describe an algorithm for computing extensions. In the terminological case, we thus obtain an automated default reasoning procedure that takes specificity into account.
F. Baader and A. Laux: **Terminological Logics with Modal Operators**. In C. Mellish, editor, *Proceedings of the 14th International Joint Conference on Artificial Intelligence*, pages 808–814. Montréal, Canada, Morgan Kaufmann, 1995.

BibTeX entry

#### Abstract:

Terminological knowledge representation formalisms can be used to represent objective, time-independent facts about an application domain. Notions like belief, intentions, and time, which are essential for the representation of multi-agent environments, can only be expressed in a very limited way. For such notions, modal logics with possible worlds semantics provide a formally well-founded and well-investigated basis. This paper presents a framework for integrating modal operators into terminological knowledge representation languages. These operators can be used both inside concept expressions and in front of terminological and assertional axioms. We introduce syntax and semantics of the extended language, and show that satisfiability of finite sets of formulas is decidable, provided that all modal operators are interpreted in the basic logic K, and that the increasing domain assumption is used.
F. Baader and H.-J. Ohlbach: **A Multi-Dimensional Terminological Knowledge Representation Language**. *J. Applied Non-Classical Logics*, 5:153–197, 1995.

BibTeX entry

F. Baader and K.U. Schulz: **Combination Techniques and Decision Problems for Disunification**. *Theoretical Computer Science B*, 142:229–255, 1995.

BibTeX entry
Free reprint

#### Abstract:

Previous work on combination techniques considered the question of how to combine unification algorithms for disjoint equational theories E₁, …, Eₙ in order to obtain a unification algorithm for the union E₁ ∪ … ∪ Eₙ of the theories. Here we want to show that variants of this method may be used to decide solvability and ground solvability of disunification problems in E₁ ∪ … ∪ Eₙ. Our first result says that solvability of disunification problems in the free algebra of the combined theory E₁ ∪ … ∪ Eₙ is decidable if solvability of disunification problems with linear constant restrictions in the free algebras of the theories Eᵢ (i = 1, …, n) is decidable. In order to decide ground solvability (i.e., solvability in the initial algebra) of disunification problems in E₁ ∪ … ∪ Eₙ, we have to consider a new kind of subproblem for the particular theories Eᵢ, namely solvability (in the free algebra) of disunification problems with linear constant restrictions under the additional constraint that values of variables are not Eᵢ-equivalent to variables. The correspondence between ground solvability and this new kind of solvability holds (1) if one theory Eᵢ is the free theory with at least one function symbol and one constant, or (2) if the initial algebras of all theories Eᵢ are infinite. Our results can be used to show that the existential fragment of the theory of the (ground) term algebra modulo associativity of a finite number of function symbols is decidable; the same result follows for function symbols which are associative and commutative, or associative, commutative, and idempotent.
F. Baader and K.U. Schulz: **Combination of Constraint Solving Techniques: An Algebraic Point of View**. In *Proceedings of the 6th International Conference on Rewriting Techniques and Applications*, volume 914 of *Lecture Notes in Artificial Intelligence*, pages 352–366. Kaiserslautern, Germany, Springer Verlag, 1995.

BibTeX entry

#### Abstract:

In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Recently, it has turned out that the prerequisite for this method to apply—namely that unification with so-called linear constant restrictions is decidable in the single theories—is equivalent to requiring decidability of the positive fragment of the first-order theory of the equational theories. Thus, the combination method can also be seen as a tool for combining decision procedures for positive theories of free algebras defined by equational theories. The present paper uses this observation as the starting point of a more abstract, algebraic approach to formulating and solving the combination problem. Its contributions are twofold. As a new result, we describe an optimization and an extension of our combination method to the case of constraint solvers that also take relational constraints (such as ordering constraints) into account. The second contribution is a new proof method, which depends on abstract notions and results from universal algebra, as opposed to technical manipulations of terms (such as ordered rewriting, abstraction functions, etc.).
F. Baader and K.U. Schulz: **On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers**. In *Proceedings of the International Conference on Principles and Practice of Constraint Programming, CP95*, volume 976 of *Lecture Notes in Artificial Intelligence*, pages 380–397. Cassis, France, Springer Verlag, 1995.

BibTeX entry

#### Abstract:

When combining languages for symbolic constraints, one is typically faced with the problem of how to treat "mixed" constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a "free amalgamated product" as a possible solution to the first problem. Subsequently, we define so-called simply-combinable structures (SC-structures). For SC-structures over disjoint signatures, a canonical amalgamation construction exists, which for the subclass of strong SC-structures yields the free amalgamated product. The combination technique of [Baader & Schulz 1992, Baader & Schulz 1995] can be used to combine constraint solvers for (strong) SC-structures over disjoint signatures into a solver for their (free) amalgamated product. In addition to term algebras modulo equational theories, the class of SC-structures contains many solution structures that have been used in constraint logic programming, such as the algebra of rational trees, feature structures, and domains consisting of hereditarily finite (wellfounded or non-wellfounded) nested sets and lists.
Franz Baader and Can Adam Albayrak: **Termersetzungssysteme, Skript zur Vorlesung**. Pontstr. 96, D-52062 Aachen, Verlag der Augustinus Buchhandlung, 1995. ISBN 3-86073-148-3

BibTeX entry

#### Abstract:

Term rewriting systems are an important tool for the automated treatment of equational axioms, since they make it possible to compute in algebras defined by equations. They are therefore used, for example, in the areas of algebraic specification, functional programming, and automated theorem proving. These lecture notes give a detailed introduction to the central notions, methods, and results in the field of term rewriting systems. The topics covered are properties of abstract reduction systems, string rewriting systems (semi-Thue systems), basic notions of universal algebra, confluence and termination of term rewriting systems, unification, Knuth-Bendix completion, completion without failure, and term rewriting modulo equational theories.
Franz Baader and Hans Jürgen Ohlbach: **A Multi-Dimensional Terminological Knowledge Representation Language**. MPI-I-95-2-005, Max-Planck-Institut für Informatik, Saarbrücken, 1995.

BibTeX entry
Paper (PS)

#### Abstract:

An extension of the concept description language ALC used in KL-ONE-like terminological reasoning is presented. The extension includes multi-modal operators that can either stand for the usual role quantifications or for modalities such as belief, time etc. The modal operators can be used at all levels of the concept terms, and they can be used to modify both concepts and roles. This is an instance of a new kind of combination of modal logics where the modal operators of one logic may operate directly on the operators of the other logic. Different versions of this logic are investigated and various results about decidability and undecidability are presented. The main problem, however, decidability of the basic version of the logic, remains open.
François Bergeron and Ulrike Sattler: **Constructible differentially finite algebraic series in several variables**. *Theoretical Computer Science*, 144(1-2):59–66, 1995.

BibTeX entry

#### Abstract:

We extend the concept of CDF-series to the context of several variables, and show that the series solutions of first-order differential equations y′ = φ(t, y) and of functional equations y = φ(t, y), with φ CDF in two variables, are CDF-series. We also give many effective closure properties for CDF-series in several variables.
M. Fathi, C. Tresp, K. Holte, and J. Hiltner: **Development of Objective Functions for Soft Computing in Medical Applications**. In *ACM Computing Week*, February 1995.

BibTeX entry

Ulrike Sattler: **A Concept Language for an engineering application with part-whole relations**. In A. Borgida, M. Lenzerini, D. Nardi, and B. Nebel, editors, *Proceedings of the International Workshop on Description Logics*, pages 119–123, 1995.

BibTeX entry
Paper (PS)

#### Abstract:

We investigate how terminological knowledge representation systems can be used to support modeling in an engineering application. Because of the high complexity of the application, support for top-down modeling is an ambitious but useful task for TKR systems. An interesting problem to solve in this context is the handling of composite objects. To this end, not only do different part-whole relations have to be represented (some of them transitive), but also their transitivity-like interaction as well as local properties of these relations. Hence this application calls for a concept language with powerful role-forming operators. A concept language P with the expressive power to represent part-whole relations is defined, but it turns out that satisfiability of concept terms in P is undecidable. Hence it is necessary to drop some (but not many) of these demands for the benefit of decidability. Several ways to handle the high complexity of the inference algorithms for P are discussed.

## 1994

F. Baader, M. Buchheit, and B. Hollunder: **Cardinality Restrictions on Concepts**. In *Proceedings of the German AI Conference, KI'94*, volume 861 of *Lecture Notes in Computer Science*, pages 51–62. Saarbrücken (Germany), Springer–Verlag, 1994.

BibTeX entry

#### Abstract:

The concept description formalisms of existing terminological systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. This paper argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of terminological systems that is currently gaining in interest. It shows that including such restrictions into the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualifying number restrictions and of general terminological axioms.
F. Baader, E. Franconi, B. Hollunder, B. Nebel, and H.J. Profitlich: **An Empirical Analysis of Optimization Techniques for Terminological Representation Systems**. *Applied Intelligence*, 4(2):109–132, 1994.

BibTeX entry
Paper (PS)

#### Abstract:

We consider different methods of optimizing the classification process of terminological representation systems and evaluate their effect on three different types of test data. Though these techniques can probably be found in many existing systems, until now there has been no coherent description of these techniques and their impact on the performance of a system. One goal of this article is to make such a description available for future implementors of terminological systems. Building the optimizations that came off best into the KRIS system greatly enhanced its efficiency.
F. Baader and B. Hollunder: **Computing extensions of terminological default theories**. In G. Lakemeyer, editor, *Foundations of Knowledge Representation and Reasoning*, volume 810 of *Lecture Notes in Artificial Intelligence*. Springer–Verlag, 1994.

BibTeX entry

F. Baader and A. Laux: **Terminological Logics with Modal Operators**. RR-94-33, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1994.

BibTeX entry

#### Abstract:

Terminological knowledge representation formalisms can be used to represent objective, time-independent facts about an application domain. Notions like belief, intentions, and time—which are essential for the representation of multi-agent environments—can only be expressed in a very limited way. For such notions, modal logics with possible worlds semantics provide a formally well-founded and well-investigated basis.
This paper presents a framework for integrating modal operators into
terminological knowledge representation languages. These operators can be used
both inside concept expressions and in front of terminological and
assertional axioms. The main restrictions are that all modal operators are
interpreted in the basic logic *K*, and that we consider increasing domains
instead of constant domains. We introduce syntax and semantics of the extended
language, and show that satisfiability of finite sets of formulas is decidable.

F. Baader and K. Schulz: **On the Combination of Symbolic Constraints, Solution Domains, and Constraint Solvers**. 94-82, Universität München, 1994.

BibTeX entry
Paper (PS)

#### Abstract:

When combining languages for symbolic constraints, one is typically faced with the problem of how to treat "mixed" constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a "free amalgamated product" as a possible solution to the first problem. Subsequently, we define so-called simply-combinable structures (SC-structures). For SC-structures over disjoint signatures, a canonical amalgamation construction exists, which for the subclass of strong SC-structures yields the free amalgamated product. The combination technique of [Baader&Schulz92,Baader&Schulz95] can be used to combine constraint solvers for (strong) SC-structures over disjoint signatures into a solver for their (free) amalgamated product. In addition to term algebras modulo equational theories, the class of SC-structures contains many solution structures that have been used in constraint logic programming, such as the algebra of rational trees, feature structures, and domains consisting of hereditarily finite (wellfounded or non-wellfounded) nested sets and lists.
F. Baader and J.H. Siekmann: **Unification Theory**. In D.M. Gabbay, C.J. Hogger, and J.A. Robinson, editors, *Handbook of Logic in Artificial Intelligence and Logic Programming*, pages 41–125. Oxford, UK, Oxford University Press, 1994.

BibTeX entry

Franz Baader and Klaus U. Schulz: **Combination of Constraint Solving Techniques: An Algebraic Point of View**. CIS-Rep-94-75, Center for Language and Information Processing (CIS), Wagmüllerstraße 23, D-80538 Munich, Germany, July 1994.

BibTeX entry
Paper (PS)

#### Abstract:

In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Recently, it has turned out that the prerequisite for this method to apply—namely that unification with so-called linear constant restrictions is decidable in the single theories—is equivalent to requiring decidability of the positive fragment of the first-order theory of the equational theories. Thus, the combination method can also be seen as a tool for combining decision procedures for positive theories of free algebras defined by equational theories. The present paper uses this observation as the starting point of a more abstract, algebraic approach to formulating and solving the combination problem. Its contributions are twofold. As a new result, we describe an (optimized) extension of our combination method to the case of constraint solvers that also take relational constraints (such as ordering constraints) into account. The second contribution is a new proof method, which depends on abstract notions and results from universal algebra, as opposed to technical manipulations of terms (such as ordered rewriting, abstraction functions, etc.).
M. Fathi, C. Tresp, J. Hiltner, and K. Becker: **Fuzzy Set Optimization in Use of Medical MR-Image Analysis based on Evolution Strategies**. In *IEEE World Wisemen/Women Workshop (WWW), Nagoya University*, August 1994.

BibTeX entry

M. Fathi, C. Tresp, J. Hiltner, and K. Becker: **Possibilities for Evolution Strategies to optimize Fuzzy Sets in Medical Applications**. In *First Industry Academic Symposium on research for Future Supersonic and Hypersonic Vehicles*, December 1994.

BibTeX entry

Xiaorong Huang, Manfred Kerber, Michael Kohlhase, Erica Melis, Dan Nesmith, Jörn Richts, and Jörg Siekmann: **KEIM: A Toolkit for Automated Deduction**. In Alan Bundy, editor, *Automated Deduction — CADE-12*, *Proceedings of the 12th International Conference on Automated Deduction*, pages 807–810. Nancy, Springer-Verlag LNAI 814, 1994.

BibTeX entry

#### Abstract:

KEIM is a collection of software modules, written in Common Lisp with CLOS, designed to be used in the implementation of automated reasoning systems. KEIM is intended to be used by those who want to build or use deduction systems (such as resolution theorem provers) without having to write the entire framework. KEIM is also suitable for embedding a reasoning component into another Common Lisp program. It offers a range of datatypes implementing a logical language of type theory (higher order logic), in which first order logic can be easily embedded. KEIM's datatypes and algorithms include: types; terms (symbols, applications, abstractions); unification and substitutions; proofs, including resolution and natural deduction styles.
Xiaorong Huang, Manfred Kerber, Michael Kohlhase, and Jörn Richts: **Adapting Methods to Novel Tasks in Proof Planning**. In Bernhard Nebel and Leonie Dreschler-Fischer, editors, *KI-94: Advances in Artificial Intelligence*, *Proceedings of the 18th German Annual Conference on Artificial Intelligence*, pages 379–390. Saarbrücken, Germany, Springer-Verlag LNAI 861, 1994.

BibTeX entry

#### Abstract:

In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.
Xiaorong Huang, Manfred Kerber, Jörn Richts, and Arthur Sehn: **Planning Mathematical Proofs with Methods**. *Journal of Information Processing and Cybernetics, EIK*, 30(5-6):277–291, 1994.

BibTeX entry

#### Abstract:

In this article we formally describe a declarative approach for encoding plan operators in proof planning, the so-called methods. The notion of a method evolved from the much-studied concept of a tactic and was first used by Bundy. While significant deductive power has been achieved with the planning approach to automated deduction, the procedural character of the tactic part of methods hinders mechanical modification. Although the strength of a proof planning system largely depends on powerful general procedures which solve a large class of problems, mechanical or even automated modification of methods is nevertheless necessary for at least two reasons. Firstly, methods designed for a specific type of problem will never be general enough. For instance, it is very difficult to encode a general method which solves all problems a human mathematician might intuitively consider as a case of homomorphy. Secondly, the cognitive ability of adapting existing methods to suit novel situations is a fundamental part of human mathematical competence. We believe it is extremely valuable to account computationally for this kind of reasoning.
The main part of this article is devoted to a declarative language for encoding
methods, composed of a tactic and a specification. The major feature of our
approach is that the tactic part of a method is split into a declarative and a
procedural part in order to enable a tractable adaption of methods. The
applicability of a method in a planning situation is formulated in the
specification, essentially consisting of an object level formula schema and a
meta-level formula of a declarative constraint language. After setting up our
general framework, we mainly concentrate on this constraint language.
Furthermore we illustrate how our methods can be used in a STRIPS-like planning
framework. Finally we briefly illustrate the mechanical modification of
declaratively encoded methods by so-called meta-methods.

## 1993

F. Baader: **Unification in Commutative Theories, Hilbert's Basis Theorem and Gröbner Bases**. *J. ACM*, 40(3):477–503, 1993.

BibTeX entry

F. Baader, M. Buchheit, and B. Hollunder: **Cardinality Restrictions on Concepts**. RR-93-48, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1993. A short version has appeared in Proceedings of the KI'94, Springer LNCS 861

BibTeX entry

#### Abstract:

The concept description formalisms of existing terminological systems allow the user to express local cardinality restrictions on the fillers of a particular role. It is not possible, however, to introduce global restrictions on the number of instances of a given concept. The paper argues that such cardinality restrictions on concepts are of importance in applications such as configuration of technical systems, an application domain of terminological systems that is currently gaining in interest. It shows that including such restrictions into the description language leaves the important inference problems such as instance testing decidable. The algorithm combines and simplifies the ideas developed for the treatment of qualifying number restrictions and of general terminological axioms.
F. Baader, H.-J. Bürckert, B. Nebel, W. Nutt, and G. Smolka: **On the Expressivity of Feature Logics with Negation, Functional Uncertainty, and Sort Equations**. *Journal of Logic, Language and Information*, 2:1–18, 1993.

BibTeX entry
Paper (PS)

F. Baader and P. Hanschke: **Extensions of Concept Languages for a Mechanical Engineering Application**. In *Proceedings of the 16th German AI-Conference, GWAI-92*, volume 671 of *Lecture Notes in Computer Science*, pages 132–143. Bonn (Germany), Springer–Verlag, 1993.

BibTeX entry

F. Baader and B. Hollunder: **Embedding Defaults into Terminological Representation Systems**. RR-93-20, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1993.

BibTeX entry
Paper (PS)

#### Abstract:

We consider the problem of integrating Reiter's default logic into terminological representation systems. It turns out that such an integration is less straightforward than we expected, considering the fact that the terminological language is a decidable sublanguage of first-order logic. Semantically, one has the unpleasant effect that the consequences of a terminological default theory may be rather unintuitive, and may even vary with the syntactic structure of equivalent concept expressions. This is due to the unsatisfactory treatment of open defaults via Skolemization in Reiter's semantics. On the algorithmic side, we show that this treatment may lead to an undecidable default consequence relation, even though our base language is decidable, and we have only finitely many (open) defaults. Because of these problems, we then consider a restricted semantics for open defaults in our terminological default theories: default rules are only applied to individuals that are explicitly present in the knowledge base. In this semantics it is possible to compute all extensions of a finite terminological default theory, which means that this type of default reasoning is decidable.
F. Baader and B. Hollunder: **How to Prefer More Specific Defaults in Terminological Default Logic**. In *Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI-93*, pages 669–674, 1993.

BibTeX entry

F. Baader, B. Hollunder, B. Nebel, H.J. Profitlich, and E. Franconi: **An Empirical Analysis of Optimization Techniques for Terminological Representation Systems, or: Making KRIS get a move on**. RR-93-03, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1993.

BibTeX entry

F. Baader and H.-J. Ohlbach: **A Multi-Dimensional Terminological Knowledge Representation Language**. MPI-I-93-212, Max-Planck-Institut für Informatik, Saarbrücken, 1993.

BibTeX entry

F. Baader and K. Schlechta: **A Semantics for Open Normal Defaults via a Modified Preferential Approach**. RR-93-13, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1993.

BibTeX entry

F. Baader and K. Schlechta: **A Semantics for Open Normal Defaults via a Modified Preferential Approach**. In *Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning under Uncertainty, ECSQARU 93*, volume 747 of *Lecture Notes in Computer Science*, pages 9–16. Granada (Spain), Springer–Verlag, 1993.

BibTeX entry

F. Baader and K. Schulz: **Combination Techniques and Decision Problems for Disunification**. RR-93-05, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1993.

BibTeX entry

F. Baader and K. Schulz: **Combination Techniques and Decision Problems for Disunification**. In *Proceedings of the International Conference on Rewriting Techniques and Applications, RTA 93*, volume 690 of *Lecture Notes in Computer Science*, pages 301–315. Montreal (Canada), Springer–Verlag, 1993.

BibTeX entry

F. Baader, J. Siekmann, and W. Snyder, editors: **Proceedings of the Sixth International Workshop on Unification, Schloß Dagstuhl, July 29–31, 1992**. 1993.

BibTeX entry

H.-J. Ohlbach and F. Baader: **A Multi-Dimensional Terminological Knowledge Representation Language**. In *Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI-93*, pages 690–695, 1993.

BibTeX entry

## 1992

F. Baader: **Unification Theory**. RR-92-33, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1992.

BibTeX entry

F. Baader, H.-J. Bürckert, B. Hollunder, A. Laux, and W. Nutt: **Terminologische Logiken**. *KI*, 3/92:23–33, 1992.

BibTeX entry

F. Baader and P. Hanschke: **Extensions of Concept Languages for a Mechanical Engineering Application**. RR-92-36, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1992.

BibTeX entry

F. Baader and B. Hollunder: **Embedding Defaults into Terminological Representation Systems**. In *Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, KR-92*, pages 306–317, 1992.

BibTeX entry

F. Baader and B. Hollunder: **How to Prefer More Specific Defaults in Terminological Default Logic**. RR-92-58, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1992.

BibTeX entry

F. Baader, B. Hollunder, B. Nebel, H.J. Profitlich, and E. Franconi: **An Empirical Analysis of Optimization Techniques for Terminological Representation Systems, or: Making KRIS get a move on**. In *Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, KR-92*, pages 270–281, 1992.

BibTeX entry

F. Baader and K. Schulz: **General A- and AX-Unification via Optimized Combination Procedures**. 92-58, Universität München, 1992.

BibTeX entry

F. Baader and K. Schulz: **Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures**. In *Proceedings of the 11th International Conference on Automated Deduction, CADE-92*, volume 607 of *Lecture Notes in Computer Science*, pages 50–65. Saratoga Springs (USA), Springer–Verlag, 1992.

BibTeX entry

F. Baader and K.U. Schulz: **General A- and AX-Unification via Optimized Combination Procedures**. In *Proceedings of the Second International Workshop on Word Equations and Related Topics, IWWERT-91*, volume 677 of *Lecture Notes in Computer Science*, pages 23–42. Rouen (France), Springer–Verlag, 1992.

BibTeX entry

F. Baader, J. Siekmann, and W. Snyder, editors: **6th Workshop on Unification**. 1992.

BibTeX entry

Jörn Richts: **Allgemeine AC-Unifikation durch Variablenabstraktion mit Fremdtermbedingungen**. SWP–92–12, Fachbereich Informatik, Universität Kaiserslautern, Postfach 3049, D–67663 Kaiserslautern, Germany, 1992.

BibTeX entry
Paper (PS)

#### Abstract:

This work investigates general *AC*-unification, i.e., unification in the combination of free function symbols and free Abelian semigroups, whose function symbols satisfy associativity and commutativity.

The three necessary ingredients of general *AC*-unification are presented: a
combination algorithm, a procedure for elementary *AC*-unification, and
methods for solving systems of linear Diophantine equations. Starting from A. Boudet's
unification algorithm for the combination of regular and collapse-free theories,
an efficient algorithm for general *AC*-unification is developed, which imposes
conditions on the solutions of elementary *AC*-unification. These conditions are
exploited by the procedure for elementary *AC*-unification, which in turn imposes
further conditions on the solutions of the Diophantine equations. Three methods
for solving systems of linear Diophantine equations (those of E. Contejean and
H. Devie, of E. Domenjoud, and of L. Pottier) are then presented, compared, and
evaluated with respect to how efficiently they can exploit these conditions.

Finally, some problems arising from the reuse of *AC*-unifiers in
unification, such as merging, are discussed. It is shown that reusing
*AC*-unifiers for partial problems is often worse than solving the entire
problem from scratch.
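The elementary *AC*-unification step discussed in this abstract reduces to computing the minimal solutions of homogeneous linear Diophantine equations over the non-negative integers. As an illustrative sketch only (this is not the thesis's algorithm; the function name and the brute-force strategy are ours, relying on Huet's bound on the size of minimal solutions), such a basis can be enumerated as follows:

```python
from itertools import product

def minimal_solutions(a, b):
    """Enumerate the minimal non-trivial solutions of the homogeneous
    linear Diophantine equation a1*x1 + ... + an*xn = b1*y1 + ... + bm*ym
    over the non-negative integers, by bounded brute force.  Huet's bound:
    every minimal solution satisfies x_i <= max(b) and y_j <= max(a),
    so the search space is finite."""
    cands = []
    for x in product(range(max(b) + 1), repeat=len(a)):
        lhs = sum(ai * xi for ai, xi in zip(a, x))
        for y in product(range(max(a) + 1), repeat=len(b)):
            nontrivial = any(x) or any(y)
            if nontrivial and lhs == sum(bj * yj for bj, yj in zip(b, y)):
                cands.append(x + y)
    # a solution is minimal if no other solution is componentwise <= it
    def dominated(s, t):
        return t != s and all(ti <= si for ti, si in zip(t, s))
    return [s for s in cands if not any(dominated(s, t) for t in cands)]

# Example: 2*x1 = 3*y1 has the single minimal solution x1 = 3, y1 = 2.
basis = minimal_solutions([2], [3])
```

The methods of Contejean–Devie, Domenjoud, and Pottier compared in the thesis compute the same basis far more efficiently; the brute-force version above only illustrates what is being computed.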

## 1991

F. Baader: **Augmenting Concept Languages by Transitive Closure of Roles: An Alternative to Terminological Cycles**. In *Proceedings of the 12th International Joint Conference on Artificial Intelligence, IJCAI-91*, pages 446–451, 1991.

BibTeX entry

F. Baader: **Unification Theory**. In *Proceedings of the First International Workshop on Word Equations and Related Topics, IWWERT-90*, volume 572 of *Lecture Notes in Computer Science*, pages 151–170. Tübingen (Germany), Springer–Verlag, 1991.

BibTeX entry

F. Baader: **Unification in Varieties of Completely Regular Semigroups**. In *Proceedings of the First International Workshop on Word Equations and Related Topics, IWWERT-90*, volume 572 of *Lecture Notes in Computer Science*, pages 210–230. Tübingen (Germany), Springer–Verlag, 1991.

BibTeX entry

F. Baader: **Unification, Weak Unification, Upper Bound, Lower Bound and Generalization Problems**. In *Proceedings of the 4th International Conference on Rewriting Techniques and Applications, RTA 91*, volume 488 of *Lecture Notes in Computer Science*, pages 86–97. Como (Italy), Springer–Verlag, 1991.

BibTeX entry

F. Baader, H.-J. Bürckert, B. Nebel, W. Nutt, and G. Smolka: **On the Expressivity of Feature Logics with Negation, Functional Uncertainty, and Sort Equations**. RR-91-01, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1991.

BibTeX entry

F. Baader and P. Hanschke: **A Scheme for Integrating Concrete Domains into Concept Languages**. RR-91-10, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1991.

BibTeX entry
Paper (PS)

#### Abstract:

A drawback of concept languages based on KL-ONE is that all terminological knowledge has to be defined on an abstract logical level. In many applications, one would like to be able to refer to concrete domains and to predicates on these domains when defining concepts. Examples of such concrete domains are the integers, the real numbers, or non-arithmetic domains; predicates could be equality, inequality, or more complex predicates. In the present paper we propose a scheme for integrating such concrete domains into concept languages, rather than describing a particular extension by some specific concrete domain. We define a terminological and an assertional language, and consider the important inference problems such as subsumption, instantiation, and consistency. The formal semantics as well as the reasoning algorithms are given at the scheme level. In contrast to existing KL-ONE based systems, these algorithms are not only sound but also complete. They generate subtasks which have to be solved by a special-purpose reasoner for the concrete domain.
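The key idea of the scheme, a terminological reasoner that delegates subtasks to a pluggable concrete-domain reasoner, can be illustrated with a minimal sketch. All names here are ours, not the paper's; the concrete domain is modeled simply as a set of named predicates, and only a single existential restriction over a feature value is checked:

```python
class ConcreteDomain:
    """A concrete domain bundles named predicates over its carrier set.
    The terminological reasoner hands satisfiability subtasks to it."""

    def __init__(self, predicates):
        self.predicates = predicates  # name -> boolean function

    def satisfies(self, name, *args):
        return self.predicates[name](*args)

# The integers with comparison predicates, as in the paper's examples.
integers = ConcreteDomain({
    ">=": lambda x, y: x >= y,
    "=":  lambda x, y: x == y,
})

def instance_of_restriction(value, domain, pred, const):
    """Check a concrete-domain restriction of the shape
    'has a feature value v such that pred(v, const) holds'."""
    return domain.satisfies(pred, value, const)

# An individual whose age feature is 23 satisfies the restriction 'age >= 18'.
ok = instance_of_restriction(23, integers, ">=", 18)
```

The point of the scheme is that the abstract reasoner never interprets `">="` itself; any domain with decidable predicate satisfiability could be plugged in instead.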
F. Baader and P. Hanschke: **A Scheme for Integrating Concrete Domains into Concept Languages**. In *Proceedings of the 12th International Joint Conference on Artificial Intelligence, IJCAI-91*, pages 452–457, 1991.

BibTeX entry

F. Baader and B. Hollunder: **A Terminological Knowledge Representation System with Complete Inference Algorithms**. In *Proceedings of the First International Workshop on Processing Declarative Knowledge*, volume 567 of *Lecture Notes in Computer Science*, pages 67–85. Kaiserslautern (Germany), Springer–Verlag, 1991.

BibTeX entry
Paper (PS)
©Springer-Verlag

#### Abstract:

The knowledge representation system KL-ONE first appeared in 1977. Since then many systems based on the idea of KL-ONE have been built. The formal model-theoretic semantics which has been introduced for KL-ONE languages provides the means for investigating soundness and completeness of inference algorithms. It turned out that almost all implemented KL-ONE systems, such as BACK, KL-TWO, LOOM, NIKL, and SB-ONE, use sound but incomplete algorithms. Until recently, sound *and* complete algorithms for the basic reasoning facilities in these systems, such as consistency checking, subsumption checking (classification), and realization, were only known for rather trivial languages. However, in the last two years concept languages (term subsumption languages) have been thoroughly investigated. As a result of these investigations it is now possible to provide sound and complete algorithms for relatively large concept languages. In this paper we describe KRIS, an implemented prototype of a KL-ONE system in which all reasoning facilities are realized by sound and complete algorithms. This system can be used to investigate the behaviour of sound and complete algorithms in practical applications. This may shed new light on the usefulness of complete algorithms for practical applications, even if their worst-case complexity is NP or worse. KRIS provides a very expressive concept language, an assertional language, and sound and complete algorithms for reasoning. We have chosen the concept language such that it contains most of the constructs used in KL-ONE systems, with the obvious restriction that the interesting inferences such as consistency checking, subsumption checking, and realization are decidable. The assertional language is similar to the languages normally used in such systems. The reasoning component of KRIS relies on sound and complete algorithms for reasoning facilities such as consistency checking, subsumption checking, retrieval, and querying.
F. Baader and B. Hollunder: **KRIS: Knowledge Representation and Inference System, System Description**. *ACM SIGART Bulletin*, 2:8–14, 1991.

BibTeX entry

F. Baader and W. Nutt: **Adding Homomorphisms to Commutative/Monoidal Theories, or: How Algebra Can Help in Equational Unification**. In *Proceedings of the 4th International Conference on Rewriting Techniques and Applications, RTA 91*, volume 488 of *Lecture Notes in Computer Science*, pages 124–135. Como (Italy), Springer–Verlag, 1991.

BibTeX entry

F. Baader and K. Schulz: **Unification in the Union of Disjoint Equational Theories: Combining Decision Procedures**. RR-91-33, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1991.

BibTeX entry
Paper (PS)

#### Abstract:

Most of the work on the combination of unification algorithms for the union of disjoint equational theories has been restricted to algorithms which compute finite complete sets of unifiers. Thus the combination methods developed so far usually cannot be used to combine decision procedures, i.e., algorithms which just decide solvability of unification problems without computing unifiers. In this paper we describe a combination algorithm for decision procedures which works for arbitrary equational theories, provided that solvability of so-called unification problems with constant restrictions (a slight generalization of unification problems with constants) is decidable for these theories. As a consequence of this new method, we can, for example, show that general A-unifiability, i.e., solvability of A-unification problems with free function symbols, is decidable. Here A stands for the equational theory of one associative function symbol. Our method can also be used to combine algorithms which compute finite complete sets of unifiers. Manfred Schmidt-Schauß's combination result, until now the most general result in this direction, can be obtained as a consequence of this fact. We also obtain the new result that unification in the union of disjoint equational theories is finitary if general unification, i.e., unification of terms with additional free function symbols, is finitary in the single theories.
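At its core, the combination step described in this abstract nondeterministically guesses a variable identification, a theory labeling, and a linear ordering of the shared variables, and then asks each component decision procedure about its pure subproblem. The following skeleton is our own drastic simplification, not the paper's algorithm: variable identification is omitted, and the component procedures are assumed oracles that take a labeling and an ordering:

```python
from itertools import permutations, product

def combined_decision(shared_vars, solvable1, solvable2):
    """Skeleton of the nondeterministic combination step: guess a theory
    label (1 or 2) for every shared variable and a linear order on the
    variables, then ask each component decision procedure whether its
    pure subproblem is solvable under the induced constant restriction.
    solvable1/solvable2 are assumed oracles taking (labeling, order)."""
    for labels in product((1, 2), repeat=len(shared_vars)):
        labeling = dict(zip(shared_vars, labels))
        for order in permutations(shared_vars):
            if solvable1(labeling, order) and solvable2(labeling, order):
                return True  # some guess makes both subproblems solvable
    return False
```

Since the number of labelings and orderings is finite, decidability of the component problems (with constant restrictions) yields decidability of the combined problem, which is the essence of the result.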
B. Hollunder and F. Baader: **Qualifying Number Restrictions in Concept Languages**. In *Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, KR-91*, pages 335–346, 1991.

BibTeX entry

B. Hollunder and F. Baader: **Qualifying Number Restrictions in Concept Languages**. RR-91-03, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1991.

BibTeX entry

## 1990

F. Baader: **A Formal Definition for Expressive Power of Knowledge Representation Languages**. RR-90-05, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry

F. Baader: **A Formal Definition for Expressive Power of Knowledge Representation Languages**. In *Proceedings of the 9th European Conference on Artificial Intelligence, ECAI-90*, pages 53–58, 1990.

BibTeX entry

F. Baader: **Augmenting Concept Languages by Transitive Closure of Roles: An Alternative to Terminological Cycles**. RR-90-13, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry
Paper (PS)

F. Baader: **Rewrite Systems for Varieties of Semigroups**. In *Proceedings of the 10th International Conference on Automated Deduction, CADE-90*, volume 449 of *Lecture Notes in Computer Science*, pages 396–410. Kaiserslautern (Germany), Springer–Verlag, 1990.

BibTeX entry

F. Baader: **Terminological Cycles in KL-ONE-based Knowledge Representation Languages**. RR-90-01, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry

F. Baader: **Terminological Cycles in KL-ONE-based Knowledge Representation Languages**. In *Proceedings of the Eighth National Conference on Artificial Intelligence, AAAI-90*, pages 621–626, 1990.

BibTeX entry

F. Baader: **Unification in Commutative Theories, Hilbert's Basis Theorem and Gröbner Bases**. SR-90-1, Universität Kaiserslautern, 1990.

BibTeX entry

F. Baader: **Unification, Weak Unification, Upper Bound, Lower Bound and Generalization Problems**. SR-90-2, Universität Kaiserslautern, 1990.

BibTeX entry

F. Baader, H.-J. Bürckert, J. Heinsohn, J. Müller, B. Hollunder, B. Nebel, W. Nutt, and H.-J. Profitlich: **Terminological Knowledge Representation: A Proposal for a Terminological Logic**. TM-90-04, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990. Updated version, taking into account the results of a discussion at the "International Workshop on Terminological Logics," Dagstuhl, May 1991.

BibTeX entry
Paper (PS)

#### Abstract:

This paper contains a proposal for a terminological logic. The formalisms for representing knowledge as well as the needed inferences are described.
F. Baader, H.-J. Bürckert, B. Hollunder, W. Nutt, and J. Siekmann: **Concept Logic**. RR-90-10, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry
Paper (PS)

F. Baader, H.-J. Bürckert, B. Hollunder, W. Nutt, and J. Siekmann: **Concept Logic**. In *Proceedings of the Symposium on Computational Logic*, pages 177–201, 1990.

BibTeX entry

F. Baader and B. Hollunder: **KRIS: Knowledge Representation and Inference System, System Description**. TM-90-03, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry

F. Baader and W. Nutt: **Adding Homomorphisms to Commutative/Monoidal Theories, or: How Algebra Can Help in Equational Unification**. RR-90-16, Deutsches Forschungszentrum für Künstliche Intelligenz, Kaiserslautern, 1990.

BibTeX entry

## 1989

F. Baader: **Characterizations of Unification Type Zero**. In *Proceedings of the 3rd International Conference on Rewriting Techniques and Applications, RTA 89*, volume 355 of *Lecture Notes in Computer Science*, pages 2–14. Chapel Hill (USA), Springer–Verlag, 1989.

BibTeX entry

F. Baader: **Unification Properties of Commutative Theories: A Categorical Treatment**. In *Proceedings of the Conference on Category Theory and Computer Science*, volume 389 of *Lecture Notes in Computer Science*, pages 273–299. Manchester (UK), Springer–Verlag, 1989.

BibTeX entry

F. Baader: **Unification in Commutative Theories**. *J. Symbolic Computation*, 8:479–497, 1989.

BibTeX entry
Free reprint

F. Baader: **Unifikation und Reduktionssysteme für Halbgruppenvarietäten**. 8, Institut für Mathematische Maschinen und Datenverarbeitung, Universität Erlangen, 1989. Dissertation

BibTeX entry

## 1988

F. Baader: **A Note on Unification Type Zero**. *Information Processing Letters*, 27:91–93, 1988.

BibTeX entry
Free reprint

F. Baader and W. Büttner: **Unification in Commutative Idempotent Monoids**. *J. Theoretical Computer Science*, 56:345–352, 1988.

BibTeX entry
Free reprint

## 1987

F. Baader: **Unification in Varieties of Idempotent Semigroups**. *Semigroup Forum*, 36:127–145, 1987.

BibTeX entry
Free reprint

## 1986

F. Baader: **The Theory of Idempotent Semigroups is of Unification Type Zero**. *J. Automated Reasoning*, 2:283–286, 1986.

BibTeX entry
Free reprint

## 1985

F. Baader: **Die S-Varietät DS und einige Untervarietäten**. 8, Institut für Mathematische Maschinen und Datenverarbeitung, Universität Erlangen, 1985.

BibTeX entry

Back to the homepage of the Chair of Automata Theory.

Generated Mon Feb 26 18:45:00 2018.