- Implementation of a Subsumption Algorithm for the Description Logic FL0
The goal of this project is to implement an algorithm from a newly
developed approach to decide subsumption in the description logic
FL0, which allows for universal quantification, conjunction and the
top-concept. This task can be addressed either by a direct
implementation from scratch or by investigating whether and how
existing rule-based systems (e.g. Datalog+/-) can be employed to
implement this novel algorithm. Even though FL0 is strictly contained
in the more expressive description logic ALC, they share the same
(theoretical) computational complexity. Therefore, this project
includes a performance comparison between existing (non-deterministic)
tableau-based algorithms for ALC and the newly developed
(deterministic) approach.
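The newly developed algorithm is the subject of the project itself, but for the TBox-free case, FL0 subsumption classically reduces to inclusion of finite word languages over role names, which a direct implementation could use as a starting point. A minimal sketch, assuming normalized concepts encoded as maps from concept names to finite sets of role words (an illustrative encoding, not the project's prescribed one):

```python
# Minimal sketch (not the new algorithm): in the TBox-free case, an
# FL0 concept normalizes to a conjunction of value restrictions
# "forall w. A", with w a word over role names. We encode a normalized
# concept as a map from concept names to finite sets of role words
# ("" stands for the empty word, i.e. the concept name itself).

def fl0_subsumes(c, d):
    """C is subsumed by D iff, for every concept name A, every word
    attached to A in D also occurs for A in C."""
    return all(words <= c.get(a, set()) for a, words in d.items())

# C = A and forall r.A and forall r.forall s.B;  D = A and forall r.A
C = {"A": {"", "r"}, "B": {"rs"}}
D = {"A": {"", "r"}}
print(fl0_subsumes(C, D))  # True: C is more specific than D
print(fl0_subsumes(D, C))  # False
```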
Remarks: Good programming skills are expected and basic knowledge of
description logics is helpful. The project is suitable for 4 or 8 SWS
as well as for a group of students.
Tutors: Maximilian Pensel, Dr. Anni-Yasmin Turhan
- Implementation of an ontology-based stream processing system
The task is to implement a processing pipeline for event
recognition for stream data based on ontology
services. The task involves identifying a freely available
data stream (e.g. weather data) and then implementing a
database-based approach to preprocessing the data such that it
can be used as instance data for a description logic query
answering system. Furthermore, the background ontology
that captures knowledge about the domain the data comes from
needs to be crafted.
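As one possible shape of the preprocessing step, the following sketch maps raw stream records to ABox-style assertions that a query-answering system could consume. The record format, concept names, and role names are all hypothetical assumptions for illustration.

```python
# Hedged sketch of the preprocessing step: raw stream records (here,
# hypothetical weather readings) are mapped to DL instance data
# (ABox assertions). All field and predicate names are illustrative.

def record_to_assertions(record):
    """Turn one stream record into concept and role assertions."""
    ind = f"obs_{record['id']}"
    assertions = [("Observation", ind)]           # concept assertion
    if record["temp_c"] >= 30:
        assertions.append(("HeatReading", ind))   # derived concept
    assertions.append(("hasStation", ind, record["station"]))  # role
    return assertions

stream = [{"id": 1, "temp_c": 31.5, "station": "dresden"}]
for rec in stream:
    print(record_to_assertions(rec))
```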
Remark: This project is suited for students who need 8
SWS or for a group of students.
Tutor: Dr. Anni-Yasmin Turhan
- Implementing algorithms to decide (k-)piecewise testability
A regular language is piecewise testable if its minimal
DFA satisfies several structural conditions. Nowadays,
there exist two quadratic algorithms to decide whether a
minimal DFA represents a piecewise testable language. The
first one, quadratic with respect to the number of states
of the DFA, is due to A. Trahtman [1], and the second one,
quadratic with respect to the size of the input alphabet,
is due to L. Polak and O. Klima [2]. The first part of this
project is to implement both methods and compare their
performance on a suitably selected set of input DFAs. If
a regular language is piecewise testable, it is
possible to find a minimal k (less than or equal to the
depth of the minimal DFA) for which the language is
k-piecewise testable (k-PT). The algorithms to do so are
described in [3] and [4]. The second part of the project
is to implement these tests, namely the methods from [3]
for k = 0, 1, 2, 3 and the method of [4] for k >= 4.
Since the problem for k >= 4 is in general
coNP-complete, the point here is to look at the practical
limits of the algorithm. The input format for DFAs will be
specified (XML-like input).
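As a starting point for the implementation, one well-known necessary condition for piecewise testability is that the minimal DFA is partially ordered, i.e. its only cycles are self-loops; the full tests of [1] and [2] check further conditions on top of this. A minimal sketch, under an assumed dict-of-dicts DFA encoding (every state a key):

```python
# Hedged sketch of a necessary condition for piecewise testability:
# the minimal DFA must be partially ordered, i.e. the only cycles in
# its transition graph are self-loops. Encoding: delta maps each
# state to a dict {letter: target state}; every state is a key.

def is_partially_ordered(delta):
    """DFS cycle check, ignoring self-loops."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {q: WHITE for q in delta}

    def dfs(q):
        color[q] = GRAY
        for p in delta[q].values():
            if p == q:
                continue                  # self-loops are allowed
            if color[p] == GRAY:
                return False              # nontrivial cycle found
            if color[p] == WHITE and not dfs(p):
                return False
        color[q] = BLACK
        return True

    return all(color[q] != WHITE or dfs(q) for q in delta)

po_dfa = {0: {"a": 0, "b": 1}, 1: {"a": 1, "b": 1}}
cyc_dfa = {0: {"a": 1}, 1: {"a": 0}}
print(is_partially_ordered(po_dfa))   # True
print(is_partially_ordered(cyc_dfa))  # False
```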
References:
- [1] Trahtman, A.N., Piecewise and local threshold testability of DFA. FCT 2001, LNCS 2138, pp. 347-358.
- [2] Klima, O., Polak, L., Alternative automata characterization of piecewise testable languages. DLT 2013, LNCS 7907, pp. 289-300.
- [3] Masopust, T., Thomazo, M., On the Complexity of k-Piecewise Testability and the Depth of Automata, DLT 2015, LNCS 9168, pp. 364-376.
- [4] Klima, O., Kunc, M., Polak, L.: Deciding k-piecewise testability (manuscript)
Tutor: Dr. Thomas Masopust
- Implementing algorithms for piecewise testable separability
Given two regular languages (represented by NFAs),
the problem of deciding whether there exists a piecewise
testable (PT) language (a regular language of a special
form) that separates the input languages has recently been
shown to be decidable in PTIME [1,2]. (More on the
motivation can be found there, too.) However, this
algorithm cannot be used to compute a PT language that
separates the input languages (called a separator). A
simple exponential algorithm to compute a separator has
been discussed in [3], which in the first phase also
decides the existence of a separator.
The aim of the project is to implement the PTIME algorithm
and the algorithm of [3] and to compare their performance on a
realistic sample of selected examples. Here the PTIME algorithm should
be compared with the first phase of the algorithm of [3].
Then the separator should be computed using the algorithm of
[3]. This algorithm, however, supports implementations based
on minimal DFAs, NFAs, and AFAs (alternating finite automata).
All three approaches should be implemented and their
performance compared. This requires implementing not only the
structures to represent DFAs, NFAs, and AFAs, but also some of
the classical and some of the non-classical operations on
languages. It is advantageous to keep the representations as
small as possible; this is easy for DFAs, but will require
some investigation for NFAs and AFAs.
The input format for NFAs will be specified (XML-like input).
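Among the classical operations mentioned above, a product construction over NFAs is central; the sketch below uses it to test intersection emptiness, a necessary condition for a separator to exist (two languages can only be separated if they are disjoint). The NFA encoding is an illustrative assumption, not the project's prescribed XML-like format.

```python
from collections import deque

# Hedged sketch: an NFA is (initial states, final states, transitions),
# with transitions a dict mapping (state, letter) -> set of states.
# BFS over the product automaton decides intersection emptiness.

def intersect_nonempty(nfa1, nfa2):
    """True iff L(nfa1) and L(nfa2) share at least one word."""
    I1, F1, d1 = nfa1
    I2, F2, d2 = nfa2
    start = {(p, q) for p in I1 for q in I2}
    seen, todo = set(start), deque(start)
    alphabet = {a for (_, a) in d1} | {a for (_, a) in d2}
    while todo:
        p, q = todo.popleft()
        if p in F1 and q in F2:
            return True                    # common accepted word found
        for a in alphabet:
            for p2 in d1.get((p, a), ()):
                for q2 in d2.get((q, a), ()):
                    if (p2, q2) not in seen:
                        seen.add((p2, q2))
                        todo.append((p2, q2))
    return False

A = ({0}, {1}, {(0, "a"): {0}, (0, "b"): {1}})   # L(A) = a*b
B = ({0}, {0}, {(0, "a"): {0}})                  # L(B) = a*
print(intersect_nonempty(A, B))  # False: the languages are disjoint
```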
Remark: This project is suited for students who need 8
SWS or for a group of students.
References:
- [1] Czerwinski, W., Martens, W., Masopust, T.,
Efficient Separability of Regular Languages by
Subsequences and Suffixes. ICALP 2013, LNCS 7966,
pp. 150-161
(http://arxiv.org/abs/1303.0966)
- [2] Place, T., Van Rooijen, L., Zeitoun, M.,
Separating Regular Languages by Piecewise Testable and
Unambiguous Languages. MFCS 2013, pp. 729-740
(http://arxiv.org/abs/1304.6734)
- [3] Holub, S., Masopust, T., Thomazo, M.,
Alternating Towers and Piecewise Testable Separators,
2014
(http://arxiv.org/abs/1409.3943)
Tutor: Dr. Thomas Masopust
- Implementing most specific generalizations w.r.t. general TBoxes in ELGFP
This project aims at the implementation and evaluation of
methods for calculating least common subsumers and most
specific concepts in the lightweight description logic
ELGFP, i.e. EL interpreted with greatest fixpoint
semantics. In contrast to standard EL with
descriptive semantics, these most specific generalizations
always exist in ELGFP and can be computed in polynomial
time. The student should furthermore compare them with
most specific generalizations w.r.t. standard descriptive
semantics and with role-depth-bounded most specific
generalizations w.r.t. descriptive semantics; both of these
approaches have recently been implemented.
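For orientation: in plain EL without a TBox, the least common subsumer of two concepts is classically obtained as the product of their EL description trees; under greatest-fixpoint semantics the same product idea applies to possibly cyclic description graphs. A minimal sketch of the tree case, with an assumed encoding (set of concept names, {role: [subtrees]}):

```python
# Hedged sketch: LCS of two EL concepts (no TBox) as the product of
# their description trees. Encoding (an illustrative assumption):
# a concept is a pair (set_of_concept_names, {role: [subtrees]}).

def lcs(c, d):
    """Product of two EL description trees."""
    names_c, succ_c = c
    names_d, succ_d = d
    product_succ = {
        r: [lcs(s, t) for s in succ_c[r] for t in succ_d[r]]
        for r in succ_c.keys() & succ_d.keys()
    }
    return (names_c & names_d, product_succ)

# C1 = Human and exists hasChild.(Human and Rich)
# C2 = Human and exists hasChild.(Human and Tall)
C1 = ({"Human"}, {"hasChild": [({"Human", "Rich"}, {})]})
C2 = ({"Human"}, {"hasChild": [({"Human", "Tall"}, {})]})
print(lcs(C1, C2))  # Human and exists hasChild.Human
```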
Remark: Basic knowledge in description logics and good programming
skills are expected. For easier handling and integration
with existing software (e.g. ELK, Protege etc.), the program
should be built on top of the OWL API.
Tutor: Francesco Kriegel
- Formal concept analysis & propositional logic: from pseudo-intents to implicational bases
The aim of this project is to formulate a minimal theory of
implications that holds in a given set of propositional
models and furthermore entails all other implications that
hold in the given model set. This requires an isomorphism
between propositional model sets and formal contexts, in
order to adapt the existing results on implicational bases
of formal contexts. Furthermore, the student should give an
algorithm to compute the implicational base. The results can
be extended to propositional bases.
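To make the connection concrete: over a finite set of propositional models (each a set of atoms), an implication A -> B holds exactly when every model containing A also contains B, and the closure of A is the intersection of all models containing A. The sketch below implements this closure operator, which any implicational base must reproduce; the naive encoding is an illustrative assumption.

```python
# Hedged sketch: implications over a finite set of propositional
# models, each model a set of atoms. The closure of a premise A is
# the intersection of all models containing A (the full atom set if
# no model contains A).

def closure(a, models, atoms):
    """Closure of the atom set a w.r.t. the implications valid in models."""
    out = set(atoms)
    for m in models:
        if a <= m:
            out &= m
    return out

def holds(a, b, models, atoms):
    """True iff the implication a -> b holds in every model."""
    return b <= closure(a, models, atoms)

atoms = {"p", "q", "r"}
M = [{"p", "q"}, {"p", "q", "r"}, {"q"}]
print(holds({"p"}, {"q"}, M, atoms))  # True: every model with p has q
print(holds({"q"}, {"p"}, M, atoms))  # False: the model {q} lacks p
```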
Remark: Basic knowledge in propositional logic and
set theory is expected. Further knowledge in logic and order
theory may be helpful.
References:
- Bernhard Ganter & Rudolf Wille: Formal Concept Analysis
- Bernhard Ganter: Attribute Exploration with Background Knowledge
Tutor: Francesco Kriegel