9. Managing Tuple Tables¶
As explained in Section 4, a data store uses tuple tables as containers for facts – that is, triples and other kinds of data that RDFox should process. Each tuple table is identified by a name that is unique within a data store. Moreover, each tuple table has a minimal and a maximal arity, which determine the smallest and largest number of RDF resources in a fact stored in the tuple table. In most cases, the minimal and maximal arity are the same, in which case they are simply called the arity.
9.1. Types of Tuple Tables¶
RDFox supports three kinds of tuple tables.
In-memory tuple tables are the most commonly used kind of tuple table and, as the name suggests, store facts in RAM. RDFox uses in-memory tuple tables of arity three to store the triples of the default graph and the named graphs of RDF. In particular, an in-memory tuple table called `http://oxfordsemantic.tech/RDFox#DefaultTriples` is created automatically when a fresh data store is created to act as the default graph, and RDFox will create additional in-memory tuple tables for each named graph it encounters. RDFox provides ways to add and delete facts in in-memory tuple tables.

Built-in tuple tables contain some well-known facts that can be useful in various applications of RDFox. The facts in such tuple tables cannot be modified by users; rather, they are produced on the fly by RDFox as needed. They are described in more detail in Section 9.5.

Data source tuple tables provide a ‘virtual view’ over data in non-RDF data sources, such as CSV files, relational databases, or a full-text Solr index. Such tuple tables must be created explicitly by the user, and doing so requires specifying how the external data is to be transformed into a format compatible with RDF. The facts in data source tuple tables are ‘virtual’ in the sense that they are constructed automatically by RDFox based on the data in the data source — that is, there is no way to add/delete such facts directly. Finally, data source tuple tables can be of arbitrary arity — that is, such tuple tables are not limited to containing just triples. Data source tuple tables and the process of importing external data are described in detail in Section 10.
9.2. Fact Domains¶
Each fact in a tuple table is associated with one or more fact domains.

- The `EDB` fact domain contains facts that were imported explicitly by the user. The name EDB is an abbreviation of Extensional Database.
- The `IDB` fact domain contains facts that were derived using rules. The name IDB is an abbreviation of Intensional Database. This fact domain is used as the default in all operations that take a fact domain as argument.
- The `IDBrep` fact domain contains the representative facts of the IDB domain. This fact domain differs from the IDB domain only in data stores for which equality reasoning (i.e., reasoning with `owl:sameAs`) is turned on.
- The `IDBrepNoEDB` fact domain contains facts of the IDB domain that are not in the EDB domain, which are essentially facts that were derived during reasoning and were not present in the input.
A fact can belong to more than one domain. For example, facts added to the store are stored into the `EDB` domain, and during reasoning they are transferred into the `IDB` domain.
Only the `EDB` fact domain can be directly affected by users. That is, all explicitly added facts are added to the `EDB` domain, and only those facts can be deleted. It is not possible to manually delete derived facts since the meaning of such deletions is unclear.
Many RDFox operations accept a fact domain as an argument. For example, SPARQL query evaluation takes a fact domain as an argument, which determines the subset of the facts over which the query should be evaluated. Thus, if a query is evaluated with respect to the `EDB` domain, it will ‘see’ only the facts that were explicitly added to a data store, and it will ignore the facts that were derived by reasoning.
9.3. Managing and Using Tuple Tables¶
RDFox provides ways of creating and deleting tuple tables: this can be accomplished in the shell using the `tupletable` command (see Section 16.2.2.47), and the relevant APIs are described in Section 14.7. When creating a tuple table, one must specify a list of key-value parameters that determine what kind of tuple table is to be created. The parameters for data source tuple tables depend on the type of data source and are described in detail in Section 10. Moreover, the parameters for in-memory and built-in tuple tables are described in Section 9.4 and Section 9.5, respectively.
RDFox provides ways to add and delete facts in in-memory tuple tables: this can be accomplished in the shell using the `import` command (see Section 16.2.2.23), and the relevant APIs are described in Section 14.5.5.
Facts in a tuple table can be accessed during querying and reasoning. In queries, tuple tables corresponding to the default graph and the named graphs can be accessed using standard SPARQL syntax for triple patterns and the `GRAPH` operator — that is, a triple pattern outside a `GRAPH` operator will access the `http://oxfordsemantic.tech/RDFox#DefaultTriples` tuple table, and a triple pattern inside a `GRAPH :G` operator will access the in-memory tuple table with name `:G`. To access tuple tables of other types, one can use either the proprietary operator `TT` or the reserved IRI `rdfox:TT`, both of which are described in Section 5.4. Note that the default graph and the named graphs can also be accessed using the `TT` operator and the `rdfox:TT` IRI. Moreover, tuple tables can be accessed in rules using the general atom syntax described in Section 6.4.1.3. Since only in-memory tuple tables can be modified by users, any atom occurring in the head of a rule is allowed to mention only an in-memory tuple table.
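For instance, a rule whose head atom writes into the default graph is allowed, because the default graph is backed by an in-memory tuple table. The following rule is a sketch with illustrative class and property names:

```
:Manager[?x] :- :manages[?x, ?y] .
```

Here the head atom `:Manager[?x]` refers to the default graph's in-memory tuple table, so the rule can derive new facts into it; a head atom referring to a data source or built-in tuple table would be rejected.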
9.4. In-Memory Tuple Tables¶
RDFox uses in-memory tuple tables to store facts imported by the users. At present, RDFox supports only tuple tables of arity three, thus allowing the system to store only triples. An in-memory tuple table called `http://oxfordsemantic.tech/RDFox#DefaultTriples` is created automatically when a fresh data store is created to act as the default graph. Moreover, in-memory tuple tables can be created in the following three ways.

- When instructed to import data containing triples in graphs other than the default one, RDFox will automatically create a tuple table for each named graph it encounters.
- The SPARQL 1.1 Update command `CREATE GRAPH` creates an in-memory tuple table for each named graph.
- In-memory tuple tables can be created using tuple table management APIs. The main benefit of this over the above two methods is the ability to specify additional parameters, as described in the following table.
| Parameter | Default value | Description |
|---|---|---|
|  |  | Specifies that the tuple table will be used to store triples — that is, the tuple table backs the default or a named graph. This parameter must be specified when creating an in-memory tuple table. |
|  | (as in the data store) | Specifies the maximum number of triples that the new tuple table will be able to hold. The main purpose of this parameter is to reduce the amount of address space that the tuple table will use. The default value is the value of the data store parameter with the same name. |
|  | (as in the data store) | Provides a hint as to how many facts the system should expect to store initially in the tuple table. When importing large data sets, setting this parameter to be roughly equal to the number of facts to be imported can significantly improve the speed of importation. |
9.5. Built-In Tuple Tables¶
Built-in tuple tables are similar to built-in functions; however, whereas a built-in function returns just one value for a given number of arguments, a built-in tuple table can relate sets of values. Thus, facts in built-in tuple tables are not stored explicitly; rather, they are produced on the fly as query and/or rule evaluation progresses. Other than this internal detail, built-in tuple tables are used in queries and rules just like any other tuple table: they are referenced in queries using the proprietary `TT` operator or the reserved IRI `rdfox:TT` (see Section 5.4), and they are referenced in rules using general atoms (see Section 6.4.1.3). Built-in tuple tables are the only ones for which the minimal and the maximal arity are not necessarily the same.

Each built-in tuple table is identified by a well-known name, which cannot be changed. The names of all built-in tuple tables start with `http://oxfordsemantic.tech/RDFox#`, which is abbreviated in the rest of this section as `rdfox:`. For example, the `rdfox:SKOLEM` built-in tuple table is always available under that name. When a data store is created, all built-in tuple tables supported by RDFox will be created automatically. It is very unlikely that users will ever need to delete built-in tuple tables; nevertheless, for the sake of consistency, RDFox allows such tuple tables to be deleted just like any other tuple table. In case a built-in tuple table is deleted, it can be recreated using standard methods, by simply specifying the tuple table name without any parameters. (Please note that, as a consequence of this, it is not possible to create an in-memory or a data source tuple table with a name that is reserved for a built-in tuple table.)
9.5.1. rdfox:SKOLEM¶
The `rdfox:SKOLEM` tuple table can have any arity of one or more. Moreover, in each fact in this tuple table, the last resource of the fact is a blank node that is uniquely determined by all remaining arguments. This can be useful in queries and/or rules that need to create new objects. This is explained using the following example.
Example: Let us assume we are dealing with a dataset where each person is associated with zero or more companies using the `:worksFor` relationship. For example, our dataset could contain the following triples.
:Peter :worksFor :Company1 .
:Peter :worksFor :Company2 .
:Paul :worksFor :Company1 .
Now assume that we wish to attach additional information to each individual employment. For example, we might want to say that the employment of `:Peter` in `:Company1` started on a specific date. To be able to capture such data, we will ‘convert’ each `:worksFor` link to a separate instance of the `:Employment` class; then, we can attach arbitrary information to such instances. This presents us with a key challenge: for each combination of a person and company, we need to ‘invent’ a fresh object that is uniquely determined by the person and company.

This problem is solved using the `rdfox:SKOLEM` built-in tuple table. In particular, we can restructure the data using the following rule.
:Employment[?E], :employee[?E,?P], :inCompany[?E,?C] :- :worksFor[?P,?C], rdfox:SKOLEM("Employment",?P,?C,?E) .
The above rule can be understood as follows. Body atom `:worksFor[?P,?C]` selects all combinations of a person and a company that the person works for. Moreover, atom `rdfox:SKOLEM("Employment",?P,?C,?E)` contains all facts where the value of `?E` is uniquely determined by the fixed string `"Employment"`, the value of `?P`, and the value of `?C`. Thus, for each combination of `?P` and `?C`, the built-in tuple table will produce a unique value of `?E`, which is then used in the rule head to derive new triples.
How a value of `?E` is computed from the other arguments is not under application control: each value is a blank node whose name is guaranteed to be unique. However, what matters is that the value of `?E` is always the same whenever the values of all other arguments are the same. Thus, we can use the following rule to specify the start time of Peter’s employment in Company 1.
:startDate[?E,"2020-02-03"^^xsd:date] :- rdfox:SKOLEM("Employment",:Peter,:Company1,?E) .
After evaluating these rules, the following triples will be added to the data store. We use blank node names such as `_:new_1` for clarity: the actual names of new blank nodes will be much longer in practice.
_:new_1 rdf:type :Employment .
_:new_1 :employee :Peter .
_:new_1 :inCompany :Company1 .
_:new_1 :startDate "2020-02-03"^^xsd:date .
_:new_2 rdf:type :Employment .
_:new_2 :employee :Peter .
_:new_2 :inCompany :Company2 .
_:new_3 rdf:type :Employment .
_:new_3 :employee :Paul .
_:new_3 :inCompany :Company1 .
When creating fresh objects using the `rdfox:SKOLEM` built-in tuple table, it is good practice to incorporate the object type into the arguments. The above example achieved this by passing the fixed string `"Employment"` as the first argument of `rdfox:SKOLEM`. This allows us to create another, distinct blank node for each combination of a person and a company by simply varying the first argument of `rdfox:SKOLEM`.
Atoms involving the `rdfox:SKOLEM` built-in tuple table must satisfy certain binding restrictions in rules and queries. Essentially, it must be possible to evaluate a query/rule so that, once an `rdfox:SKOLEM` atom is reached, either the value of the last argument, or the values of all but the last argument, must be known. This is explained using the following example.
Example: The following query cannot be evaluated by RDFox — that is, the system will respond with a query planning error.
SELECT ?P ?C ?E WHERE { TT rdfox:SKOLEM { "Employment" ?P ?C ?E } }
This query essentially says “return all `?P`, `?C`, and `?E` where the value of `?E` is uniquely defined by `"Employment"`, `?P`, and `?C`”. The problem with this is that the values of `?P` and `?C` have not been restricted in any way, so the query would, in principle, return infinitely many answers.
To evaluate the query, one must provide values for `?P` and `?C`, or for `?E`, either explicitly as arguments or implicitly by binding the arguments in other parts of the query. Thus, both of the following queries can be successfully evaluated.
SELECT ?E WHERE { TT rdfox:SKOLEM { "Employment" :Paul :Company2 ?E } }
SELECT ?T ?C ?P WHERE { TT rdfox:SKOLEM { ?T ?C ?P _:new_1 } }
The latter query aims to unpack `_:new_1` into the values of `?T`, `?C`, and `?P` for which `_:new_1` is the uniquely generated fresh blank node. Note that such `?T`, `?C`, and `?P` may or may not exist, depending on the algorithm RDFox uses to generate blank nodes. The following is a more realistic example of blank node ‘unpacking’.
SELECT ?T ?C ?P WHERE { ?E rdf:type :Employment . TT rdfox:SKOLEM { ?T ?C ?P ?E } }
9.5.2. rdfox:SHACL¶
RDFox supports the RDF constraint validation language SHACL by means of the built-in tuple table called `rdfox:SHACL`. The tuple table has the following form.
rdfox:SHACL { DataGraph [FactDomain = rdfox:IDB] ShapesGraph S P O }
The `DataGraph` argument specifies the name of the data graph — that is, the graph whose content is to be validated. The `FactDomain` argument specifies the domain of the facts in the data graph that will be validated. This argument is optional, with default value `rdfox:IDB` and possible values `rdfox:EDB`, `rdfox:IDB`, `rdfox:IDBrep`, and `rdfox:IDBrepNoEDB`, corresponding to the respective fact domains described in Section 9.2. The `ShapesGraph` argument specifies the name of the shapes graph — that is, the graph that contains the SHACL constraints. The last three arguments receive the subject, the predicate, and the object of each triple in the validation report that results from validating the data graph with respect to the constraints in the shapes graph.
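Given this form, the optional `FactDomain` argument can also be supplied explicitly between the data graph and the shapes graph. The following query is a sketch (the graph names `:data` and `:shacl` are assumptions) that validates only the explicitly imported facts of the data graph:

```
PREFIX rdfox: <http://oxfordsemantic.tech/RDFox#>
SELECT ?s ?p ?o WHERE {
    TT rdfox:SHACL { :data rdfox:EDB :shacl ?s ?p ?o }
}
```

Omitting `rdfox:EDB` here would validate the `rdfox:IDB` domain, i.e. the facts visible after reasoning.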
Basic SHACL Validation
Example: Assume that the following data graph about employees and their employers is imported into the named graph `:data`.
@prefix sh: <http://www.w3.org/ns/shacl#>.
@prefix : <http://oxfordsemantic.tech/shacl#>.
:John a :Employee;
:worksFor :Company1.
:Jane a :Employee;
:worksFor [ a :Employer ].
Furthermore, assume that the following shapes graph, which asserts that each value of the property `:worksFor` is of type `:Employer`, is imported into the named graph `:shacl`.
@prefix sh: <http://www.w3.org/ns/shacl#>.
@prefix : <http://oxfordsemantic.tech/shacl#>.
:ClassShape
sh:targetClass :Employee ;
sh:path :worksFor ;
sh:class :Employer.
One can now query the SHACL tuple table to generate the validation report resulting from the validation of the data graph `:data` using the shapes graph `:shacl` as follows.
PREFIX : <http://oxfordsemantic.tech/shacl#>
PREFIX rdfox: <http://oxfordsemantic.tech/RDFox#>
SELECT ?s ?p ?o {
TT rdfox:SHACL { :data :shacl ?s ?p ?o }
}
The validation report should look as follows, modulo blank node names and prefix abbreviations:
_:anonymous1001 rdf:type sh:ValidationReport .
_:anonymous1001 sh:conforms false .
_:anonymous1001 sh:result _:anonymous1002 .
_:anonymous1002 rdf:type sh:ValidationResult .
_:anonymous1002 sh:focusNode :John .
_:anonymous1002 sh:sourceConstraintComponent sh:ClassConstraintComponent .
_:anonymous1002 sh:sourceShape :ClassShape .
_:anonymous1002 sh:resultPath :worksFor .
_:anonymous1002 sh:value :Company1 .
_:anonymous1002 sh:resultSeverity sh:Violation .
_:anonymous1002 sh:resultMessage "The current value node is not a member of the specified class <http://oxfordsemantic.tech/shacl#Employer>." .
Saving a Validation Report
A validation report can be saved into a named graph using the SPARQL `INSERT` update. This is illustrated in the following example.

Example: The following update saves the validation report into the named graph `:report`:
PREFIX sh: <http://www.w3.org/ns/shacl#>
PREFIX : <http://oxfordsemantic.tech/shacl#>
PREFIX rdfox: <http://oxfordsemantic.tech/RDFox#>
INSERT {
GRAPH :report { ?s ?p ?o }
}
WHERE {
TT rdfox:SHACL { :data :shacl ?s ?p ?o }
}
Rejection of Non-Conforming Updates
Certain use cases may require the content of a data store to be kept consistent with SHACL constraints at all times — that is, any updates that result in a violation of a SHACL constraint should be rejected. To achieve this behaviour in RDFox, one can query the `rdfox:SHACL` tuple table before committing a transaction as follows and, in case any violations are detected, add an instance of the `rdfox:ConstraintViolation` class to the default graph. As discussed in Section 12.2, the latter will prevent the transaction from committing. This technique is demonstrated in the following example.
Example: Consider the data and shapes graphs from the previous examples and assume the insertion of the data graph is performed using the following RDFox commands.
begin
import > :data data.ttl
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } }
# the transaction fails
commit
The `INSERT` update checks whether the SHACL constraints are satisfied, and if not, adds the value of `?report` as an instance of `rdfox:ConstraintViolation`. As discussed earlier, the constraints are not satisfied for the data in this example, so the `WHERE` part of the update will bind variable `?report` to `_:anonymous1001`; thus, the triple `_:anonymous1001 a rdfox:ConstraintViolation` will be added to the default graph, which will prevent the transaction from completing successfully.
In contrast, if we fix the data prior to committing the transaction as in the following example, the transaction will be successfully committed.
begin
import > :data data.ttl
# the following tuple makes the data in data.ttl consistent with the SHACL graph
import > :data ! :Company1 a :Employer.
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } }
# the transaction succeeds
commit
If we now attempt to remove the triple `:Company1 a :Employer` using the same approach, the transaction in question will be rejected, since the remaining data would no longer conform to the constraints in the SHACL graph.
begin
# attempting to remove a tuple that would invalidate the remainder of the data
import > :data - ! :Company1 a :Employer.
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT rdfox:SHACL { :data :shacl ?report sh:conforms false } }
# the transaction fails
commit
If we want the error message to contain additional information about the constraint violation, we can insert other triples with the `rdfox:ConstraintViolation` instance in the subject position into the default graph, for example:
begin
import > :data - ! :Company1 a :Employer.
INSERT { \
?s a rdfox:ConstraintViolation . \
?s ?p ?o \
} WHERE { \
TT rdfox:SHACL { :data :shacl ?s ?p ?o } . \
FILTER(?p IN (sh:sourceShape, sh:resultMessage, sh:value)) \
}
commit
This should produce an error message like this:
An error occurred while executing the command:
The transaction could not be committed because it would have introduced the following constraint violation:
_:anonymous1 sh:resultMessage "The current value node is not a member of the specified class <http://oxfordsemantic.tech/shacl#Employer>.";
sh:value <http://oxfordsemantic.tech/shacl#Company1>;
sh:sourceShape <http://oxfordsemantic.tech/shacl#ClassShape> .
Scope of SHACL support:

- RDFox supports SHACL Core.
- SHACL validation is available during query answering, but not in rules.
- The definitions of SHACL Subclass, SHACL Superclass, and SHACL Type rely on a limited form of taxonomical reasoning. This is not automatically performed during SHACL validation, since the desired consequences can be derived using the standard reasoning facilities of RDFox.
- `owl:imports` in shapes graphs is not supported.
- `sh:shapesGraph` in data graphs is not supported.
9.5.3. rdfox:DependencyGraph¶
The tuple table `rdfox:DependencyGraph` generates the dependency graph of a given Datalog program. The tuple table has the following form.
rdfox:DependencyGraph { NamedGraph [FactDomain = rdfox:IDB] S P O }
The `NamedGraph` argument specifies the named graph that contains the Datalog program encoded as RDF triples. The `FactDomain` argument specifies the domain of the facts in the named graph that will be analysed. This argument is optional, with default value `rdfox:IDB` and possible values `rdfox:EDB`, `rdfox:IDB`, `rdfox:IDBrep`, and `rdfox:IDBrepNoEDB`, corresponding to the respective fact domains described in Section 9.2. The last three arguments receive the subject, the predicate, and the object of each triple in the RDF encoding of the dependency graph. The arguments `NamedGraph` and `FactDomain`, if specified, must be bound at the time of evaluation, while the arguments `S`, `P`, and `O` can be either bound or unbound. The tuple table is available during query answering, but not in rules.
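Following this form, the optional `FactDomain` argument can be given explicitly between the named graph and the output variables. For example, the following sketch (the graph name `:G` is an assumption) analyses only the explicitly imported facts of the named graph:

```
PREFIX rdfox: <http://oxfordsemantic.tech/RDFox#>
SELECT ?s ?p ?o WHERE {
    TT rdfox:DependencyGraph { :G rdfox:EDB ?s ?p ?o }
}
```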
Dependency Graph
Datalog rules have to be evaluated in a specific order due to the presence of negation and aggregation. In particular, a rule can only be evaluated after all of its negated and aggregated atoms have been fully computed, i.e. the rules deriving such atoms and all the rules that they depend on have been fully evaluated. RDFox uses the dependency graph of a Datalog program to determine the evaluation order of its rules.
The dependency graph of an RDFox Datalog program encodes the dependencies between the atoms in the program. The nodes of the dependency graph are the atoms in the program, while the edges of the dependency graph determine the different types of dependencies between the atoms. (Note that RDFox uses an extension of the standard definition of a dependency graph in which the nodes of the graph are atoms rather than predicates. This is because in RDF there is typically only one predicate, i.e. the predicate for all triples in the default graph. Therefore, using the standard definition of a dependency graph, most programs with negation and aggregation would not have a valid rule evaluation order.)
There are three types of dependencies between atoms. Positive dependencies encode the dependencies of head atoms on the body atoms of the rule that are not under aggregation or negation. Negative dependencies encode the dependencies of head atoms of a rule on the body atoms that are under aggregation or negation. Finally, unification dependencies encode that two atoms match a common fact; e.g., `[?X, :r, :b]` and `[:a, :r, ?Y]` unify since they both match the triple `:a :r :b`.
Once the dependency graph of a Datalog program has been constructed, RDFox determines its strongly connected components. A component can be evaluated only if it is stratifiable, i.e. if it contains no atom that negatively depends on another atom from the same component. If all strongly connected components are stratifiable, then the whole program is stratifiable. RDFox groups the strongly connected components by strata. The first stratum contains all components that don’t depend on other components; the second stratum contains all components that depend only on components from the first stratum; and so on. The rules are then evaluated by RDFox according to the stratification of components. Rules that derive facts in the first stratum are evaluated first, rules that derive facts in the second stratum are evaluated next, and so on.
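As a small illustration of stratification (a sketch; the class and property names are illustrative), consider the following program. The second rule belongs to a lower stratum and is fully evaluated before the first rule, whose negated atom depends on it:

```
prefix : <https://oxfordsemantic.tech/RDFox/>
:TempWorker[?x] :- :worksFor[?x,?y], not :Permanent[?x] .
:Permanent[?x] :- :hasContract[?x,?c] .
```

Because `:Permanent` can be fully computed without reference to `:TempWorker`, the program is stratifiable; if the second rule instead derived `:Permanent` from `:TempWorker`, the two atoms would fall into one strongly connected component with a negative dependency, and the program would be rejected.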
RDF Encoding of Datalog Programs
To extract the dependency graph of a Datalog program, one first has to add its RDF encoding into a named graph. A Datalog program is encoded in RDF using the predicate `rdfox:rule` to specify the rules of the program and the predicate `rdfox:prefix` to specify the prefixes used in the rule definitions.
Example: Consider the following program.
prefix : <https://oxfordsemantic.tech/RDFox/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
:C[?x] :- not :A[?x], :r[?x, ?y].
This program can be encoded using the following RDF triples.
_:p1 rdfox:prefix "prefix : <https://oxfordsemantic.tech/RDFox/>".
_:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
_:r1 rdfox:rule ":C[?x] :- not :A[?x], :r[?x, ?y].".
Querying for the Dependency Graph of a Program
Once the RDF encoding of a Datalog program is in a named graph, one can simply query the tuple table `rdfox:DependencyGraph`.

Example: Let us assume that the RDF encoding of the above Datalog program has been added to the graph `:G`. To extract the dependency graph, we can simply run the following SPARQL query.
SELECT ?s ?p ?o WHERE { rdfox:TT rdfox:DependencyGraph (:G ?s ?p ?o) }
The result of this query will contain the following triples.
1) _:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
2) _:p1 rdfox:prefix "prefix : <https://oxfordsemantic.tech/RDFox/>".
3) _:r1 rdfox:rule ":C[?x] :- not :A[?x], :r[?x, ?y]." .
4) _:r1 rdfox:headAtom _:atom2 .
5) _:r1 rdfox:negativeBodyAtom _:atom0 .
6) _:r1 rdfox:positiveBodyAtom _:atom1 .
7) _:component0 rdfox:stratumIndex 1 .
8) _:component0 rdfox:stratifiable true .
9) _:atom2 rdfox:component _:component0 .
10) _:atom2 rdfox:atom "[*, rdf:type, :C]" .
11) _:atom2 rdfox:dependsPositivelyOn _:atom1 .
12) _:atom2 rdfox:dependsNegativelyOn _:atom0 .
13) _:component1 rdfox:stratumIndex 0 .
14) _:component1 rdfox:stratifiable true .
15) _:atom0 rdfox:component _:component1 .
16) _:atom0 rdfox:atom "[*, rdf:type, :A]" .
17) _:component2 rdfox:stratumIndex 0 .
18) _:component2 rdfox:stratifiable true .
19) _:atom1 rdfox:component _:component2 .
20) _:atom1 rdfox:atom "[*, :r, *]" .
The triples 1-3 encode the input program. The triples 4-6 establish the link between the rule and its atoms. The remaining triples describe the strongly connected components of the dependency graph of the program. There are three components, one for each of the three atoms in the program. The components of the body atoms `[*, :r, *]` and `[*, rdf:type, :A]` are in the first stratum (index 0), since they don’t depend on other components. The component for the head atom `[*, rdf:type, :C]` is in the second stratum (stratum 1), since it depends on the components in stratum 0. The result set also encodes that atom `[*, rdf:type, :C]` depends negatively on atom `[*, rdf:type, :A]` and that it depends positively on the atom `[*, :r, *]`. All three components are stratifiable.
Example: We now give an example of a program that is not stratifiable and therefore cannot be evaluated by RDFox.
prefix : <https://oxfordsemantic.tech/RDFox/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
:B[?x] :- :A[?x].
:C[?x] :- :B[?x], not :A[?x].
:A[?x] :- :C[?x].
Now assume that the following encoding has been added to the named graph `:G`.
_:p1 rdfox:prefix "prefix : <https://oxfordsemantic.tech/RDFox/>".
_:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
_:r1 rdfox:rule ":B[?x] :- :A[?x].".
_:r2 rdfox:rule ":C[?x] :- :B[?x], not :A[?x].".
_:r3 rdfox:rule ":A[?x] :- :C[?x].".
Querying the tuple table `rdfox:DependencyGraph` as before will result in the following triples.
1) _:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>" .
2) _:p1 rdfox:prefix "prefix : <https://oxfordsemantic.tech/RDFox/>" .
3) _:r3 rdfox:rule ":A[?x] :- :C[?x]." .
4) _:r3 rdfox:headAtom _:atom1 .
5) _:r3 rdfox:positiveBodyAtom _:atom0 .
6) _:r2 rdfox:rule ":C[?x] :- :B[?x], not :A[?x]." .
7) _:r2 rdfox:headAtom _:atom0 .
8) _:r2 rdfox:positiveBodyAtom _:atom2 .
9) _:r2 rdfox:negativeBodyAtom _:atom1 .
10) _:r1 rdfox:rule ":B[?x] :- :A[?x]." .
11) _:r1 rdfox:headAtom _:atom2 .
12) _:r1 rdfox:positiveBodyAtom _:atom1 .
13) _:component0 rdfox:stratumIndex 0 .
14) _:component0 rdfox:stratifiable false .
15) _:atom1 rdfox:component _:component0 .
16) _:atom1 rdfox:atom "[*, rdf:type, :A]" .
17) _:atom1 rdfox:dependsPositivelyOn _:atom0 .
18) _:atom0 rdfox:component _:component0 .
19) _:atom0 rdfox:atom "[*, rdf:type, :C]" .
20) _:atom0 rdfox:dependsNegativelyOn _:atom1 .
21) _:atom0 rdfox:dependsPositivelyOn _:atom2 .
22) _:atom2 rdfox:component _:component0 .
23) _:atom2 rdfox:atom "[*, rdf:type, :B]" .
24) _:atom2 rdfox:dependsPositivelyOn _:atom1 .
As before, the first two blocks of triples encode the input program and the relationships between rules and their head and body atoms (1-12). The remaining triples describe the dependency graph of the program. In this example, all atoms in the program depend on each other in a recursive fashion. As a result, the dependency graph has exactly one strongly connected component, which contains all the atoms in the program. Since the atom `[*, rdf:type, :C]` negatively depends on another atom from the same component (i.e. `[*, rdf:type, :A]`), the component is not stratifiable. Therefore, the program as a whole has no valid rule evaluation order and will thus be rejected by RDFox.
RDF Vocabulary for Dependency Graph Encoding
The following table describes the vocabulary used in the RDF encoding of Datalog programs and their dependency graphs.
| Predicate | Description | Example |
|---|---|---|
| rdfox:prefix | Specifies a prefix mapping. |  |
| rdfox:rule | Specifies a rule. |  |
| rdfox:atom | Specifies an atom. |  |
| rdfox:headAtom | Links a rule with a head atom. |  |
| rdfox:positiveBodyAtom | Links a rule with a positive body atom. |  |
| rdfox:negativeBodyAtom | Links a rule with a negative body atom. |  |
| rdfox:component | Links an atom with its strongly connected component in the dependency graph. |  |
| rdfox:stratumIndex | Links a strongly connected component with its stratum index. |  |
| rdfox:stratifiable | Specifies whether a strongly connected component is stratifiable. |  |
| rdfox:dependsPositivelyOn | Specifies a positive dependency between two atoms. |  |
| rdfox:dependsNegativelyOn | Specifies a negative dependency between two atoms. |  |
| rdfox:unifiesWith | Specifies that two atoms unify. |  |