6. Tuple Tables¶
As explained in Section 4, a data store uses tuple tables as containers for facts – that is, triples and other kinds of data that RDFox® should process. Each tuple table is identified by a name that is unique within a data store. Moreover, each tuple table has a minimal and a maximal arity, which determine the smallest and the largest number of RDF resources in a fact stored in the tuple table. In most cases, the minimal and maximal arity coincide, in which case they are referred to simply as the arity.
6.1. Types of Tuple Tables¶
RDFox supports three kinds of tuple tables.
In-memory tuple tables are the most commonly used kind of tuple table and, as the name suggests, they store facts in RAM. The RDF dataset of a data store is represented using the in-memory tuple tables DefaultTriples and Quads. Tuple table DefaultTriples has arity three and contains the triples of the default graph, and tuple table Quads has arity four and contains the triples of every named graph. RDFox provides ways to add and delete facts in in-memory tuple tables. See Section 5.2 for more detail on how RDFox stores RDF datasets.

Built-in tuple tables contain facts that can be useful in various applications of RDFox. Their content is determined by RDFox and cannot be modified by users. They are described in more detail in Section 6.5.
Data source tuple tables provide a ‘virtual view’ over data in non-RDF data sources, such as CSV files, relational databases, or a full-text Apache Solr index. Such tuple tables must be created explicitly by the user, and doing so requires specifying how the external data is to be transformed into a format compatible with RDF. The facts in data source tuple tables are ‘virtual’ in the sense that they are constructed automatically by RDFox based on the data in the data source — that is, there is no way to add/delete such facts directly. Finally, data source tuple tables can be of arbitrary arity — that is, such tuple tables are not limited to containing just triples. Data source tuple tables and the process of importing external data are described in detail in Section 7.
6.2. Fact Domains¶
Each fact in a tuple table is associated with one or more fact domains.
The explicit fact domain contains facts that were imported explicitly by the user.

The derived fact domain contains facts that were not imported explicitly by the user, but were derived by a rule.

The all fact domain contains all facts — that is, all is the union of explicit and derived.
A fact can belong to more than one domain. For example, facts added to the
store are stored in the explicit domain, and during reasoning they are
also incorporated into the all domain.
Only the explicit
fact domain can be directly affected by users. That is,
all explicitly added facts are added to the explicit
domain, and only those
facts can be deleted. It is not possible to manually delete derived facts since
the meaning of such deletions is unclear.
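For instance, using the import command (see Section 6.3), an explicitly added fact can later be retracted, whereas facts derived by rules cannot be removed in this way. The following shell sketch assumes the prefix : is defined and uses the inline-import syntax that appears in the SHACL examples later in this section; the triple itself is illustrative.

# adds the fact to the explicit domain
import ! :Peter :worksFor :Company1 .
# removes it from the explicit domain again
import - ! :Peter :worksFor :Company1 .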
Many RDFox operations accept a fact domain as an argument. For example, SPARQL
query evaluation takes a fact domain as an argument, which determines what
subset of the facts the query should be evaluated over. Thus, if a query is
evaluated with respect to the explicit
domain, it will ‘see’ only the facts
that were explicitly added to a data store, and it will ignore the facts that
were derived by reasoning.
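For instance, in the shell the fact domain used for query evaluation might be switched as sketched below; the variable name query.fact-domain is an assumption and should be checked against the shell documentation.

# evaluate a query over explicit facts only (assumed shell variable name)
set query.fact-domain explicit
SELECT ?S ?P ?O WHERE { ?S ?P ?O }
# evaluate the same query over explicit and derived facts
set query.fact-domain all
SELECT ?S ?P ?O WHERE { ?S ?P ?O }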
6.3. Managing and Using Tuple Tables¶
RDFox provides ways of creating and deleting tuple tables: this can be
accomplished in the shell using the tupletable
command (see
Section 15.2.49) and the corresponding APIs described in
Section 16.10. When creating a tuple table, one must specify a
list of key-value parameters that determine what kind of tuple table is to be
created. The parameters for data source tuple tables depend on the type of data
source and are described in detail in Section 7. Moreover, the
parameters for in-memory and built-in tuple tables are described in
Section 6.4 and Section 6.5,
respectively.
Facts in tuple tables can be accessed during querying and reasoning. In
queries, tuple tables DefaultTriples
and Quads
can be accessed using
standard SPARQL syntax for querying RDF datasets. Access to arbitrary
tuple tables in the data store is established using the proprietary SPARQL
operator TT
and the reserved IRI rdfox:TT
, both of which are described
in Section 9.4. In rules, tuple tables DefaultTriples
and Quads
can be accessed using the dedicated syntax for default graph
atoms (see Section 10.4.1.1) and named graph atoms (see
Section 10.4.1.2), respectively. Access to arbitrary tuple tables is
established using the general atom syntax described in
Section 10.4.1.3.
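For instance, the following two queries both enumerate the contents of all named graphs: the first uses standard SPARQL syntax, while the second is a sketch that accesses the Quads tuple table directly using the TT operator of Section 9.4; the argument order of Quads (graph name first) is an assumption.

SELECT ?G ?S ?P ?O WHERE { GRAPH ?G { ?S ?P ?O } }

SELECT ?G ?S ?P ?O WHERE { TT Quads { ?G ?S ?P ?O } }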
RDFox provides different ways of updating the content of in-memory tuple
tables. The content of tuple tables DefaultTriples
and Quads
can be
updated by adding or removing RDF data using the shell command import
(see
Section 15.2.23) and the corresponding APIs described in
Section 16.8. Similarly, one can update the content of
arbitrary in-memory tuple tables by importing facts using the Datalog general
atom syntax (see Section 10.4.1.3). Another way of updating the content
of in-memory tuple tables is to use the SPARQL Update Language. One can use
standard syntax to update the tuple tables DefaultTriples
and Quads
,
and the operator TT and the reserved IRI rdfox:TT (see
Section 9.4) to update the content of arbitrary in-memory
tuple tables. The final way of updating the content of in-memory tuple tables
is via reasoning. For example, adding an OWL ontology to the default graph or
to a named graph will derive facts in DefaultTriples
and Quads
,
respectively. Similarly, adding a set of Datalog rules to the data store will
update the tuple tables referenced in the rules' head atoms.
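For instance, since standard SPARQL updates can be used on DefaultTriples and Quads, an update such as the following adds a triple to a named graph and thereby updates the Quads tuple table; the graph and resource names are illustrative, and the prefix : is assumed to be defined.

INSERT DATA { GRAPH :g1 { :alice :knows :bob } }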
6.4. In-Memory Tuple Tables¶
RDFox uses in-memory tuple tables to store facts imported by users. The
in-memory tuple tables DefaultTriples and Quads are created automatically
when a fresh data store is created, and they represent the default RDF
dataset of the data store. Moreover, in-memory tuple tables can be created and
deleted using the tuple table management APIs. During table creation, the user
can specify the following parameters.
Parameter | Default value | Description
---|---|---
type | — | Specifies the type of the in-memory tuple table. The available options are …
 | (as in the data store) | Specifies the maximum number of tuples that the new tuple table will be able to hold. The main purpose of this parameter is to reduce the amount of address space that the tuple table will use. The default value is the value of the data store parameter with the same name.
 | (as in the data store) | Provides a hint as to how many facts the system should expect to store initially in the tuple table. When importing large data sets, setting this parameter to be roughly equal to the number of facts to be imported could improve the speed of importation.
6.5. Built-In Tuple Tables¶
Every data store contains a fixed set of built-in tuple tables whose content is
determined by RDFox. Built-in tuple tables differ from other tuple tables in
that they do not necessarily store facts explicitly, they may have a variable
arity, they may not be allowed in rules, and they may impose restrictions on
what positions need to be fixed when accessed. As with other tuple tables,
built-in tuple tables are referenced in queries using the proprietary TT
operator or the reserved IRI rdfox:TT
(see
Section 9.4). Similarly, when allowed in rules, built-in
tuple tables are referenced using general atoms (see Section 10.4.1.3).
Each built-in tuple table is identified by a fixed name, which cannot be changed. When a data store is created, all built-in tuple tables supported by RDFox are created automatically. As with other tuple tables, RDFox allows built-in tuple tables to be deleted. Once deleted, a built-in table can be recreated as outlined in Section 6.3 by specifying the tuple table name without any parameters. Note that the names of built-in tuple tables cannot be used for the creation of other types of tuple tables.
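For instance, assuming the tupletable command provides delete and create subcommands (see Section 15.2.49), the SKOLEM built-in tuple table described below could be deleted and then recreated by specifying its name alone.

tupletable delete SKOLEM
tupletable create SKOLEM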
6.5.1. SKOLEM¶
The SKOLEM
tuple table can have any arity of one or more. Moreover, in each
fact in this tuple table, the last resource of the fact is a blank node that is
uniquely determined by all remaining arguments. This can be useful in queries
and/or rules that need to create new objects. This is explained using the
following example.
Example: Let us assume we are dealing with a dataset where each person
is associated with zero or more companies using the :worksFor
relationship. For example, our dataset could contain the following triples.
:Peter :worksFor :Company1 .
:Peter :worksFor :Company2 .
:Paul :worksFor :Company1 .
Now assume that we wish to attach additional information to each individual
employment. For example, we might want to say that the employment of
:Peter
in :Company1
started on a specific date. To be able to
capture such data, we will ‘convert’ each :worksFor
link to a separate
instance of the :Employment
class; then, we can attach arbitrary
information to such instances. This presents us with a key challenge: for
each combination of a person and company, we need to ‘invent’ a fresh
object that is uniquely determined by the person and company.
This problem is solved using the SKOLEM
built-in tuple table. In
particular, we can restructure the data using the following rule.
:Employment[?E],
:employee[?E,?P],
:inCompany[?E,?C] :-
:worksFor[?P,?C],
SKOLEM("Employment",?P,?C,?E) .
The above rule can be understood as follows. Body atom :worksFor[?P,?C]
selects all combinations of a person and a company that the person works
for. Moreover, atom SKOLEM("Employment",?P,?C,?E)
contains all
facts where the value of ?E
is uniquely determined by the fixed string
"Employment"
, the value of ?P
, and the value of ?C
. Thus, for
each combination of ?P
and ?C
, the built-in tuple table will
produce a unique value of ?E
, which is then used in the rule head to
derive new triples.
How a value of ?E
is computed from the other arguments is not under
application control: each value is a blank node whose name is guaranteed to
be unique. However, what matters is that the value of ?E
is always the
same whenever the values of all other arguments are the same. Thus, we can
use the following rule to specify the start time of Peter’s employment in
Company 1.
:startDate[?E,"2020-02-03"^^xsd:date] :- SKOLEM("Employment",:Peter,:Company1,?E) .
After evaluating these rules, the following triples will be added to the
data store. We use blank node names such as _:new_1
for clarity: the
actual names of new blank nodes will be much longer in practice.
_:new_1 rdf:type :Employment .
_:new_1 :employee :Peter .
_:new_1 :inCompany :Company1 .
_:new_1 :startDate "2020-02-03"^^xsd:date .
_:new_2 rdf:type :Employment .
_:new_2 :employee :Peter .
_:new_2 :inCompany :Company2 .
_:new_3 rdf:type :Employment .
_:new_3 :employee :Paul .
_:new_3 :inCompany :Company1 .
When creating fresh objects using the SKOLEM
built-in tuple table, it
is good practice to incorporate the object type into the arguments. The above
example achieved this by passing a fixed string "Employment"
as the first
argument of SKOLEM
. This allows us to create another, distinct blank
node for each combination of a person and a company by simply varying the first
argument of SKOLEM
.
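For instance, the following variant of the earlier rule differs only in the first argument of SKOLEM, and it therefore produces a second, distinct blank node for each combination of a person and a company; the names :Assignment, :assignee, and :atCompany are illustrative.

:Assignment[?A],
:assignee[?A,?P],
:atCompany[?A,?C] :-
:worksFor[?P,?C],
SKOLEM("Assignment",?P,?C,?A) .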
Atoms involving the SKOLEM
built-in tuple table must satisfy certain
binding restrictions in rules and queries. Essentially, it must be possible
to evaluate a query/rule so that, once a SKOLEM atom is reached,
either the value of the last argument or the values of all but the last
argument are known. This is explained using the following example.
Example: The following query cannot be evaluated by RDFox — that is, the system will respond with a query planning error.
SELECT ?P ?C ?E WHERE {
TT SKOLEM { "Employment" ?P ?C ?E }
}
This query essentially says “return all ?P
, ?C
, and ?E
where
the value of ?E
is uniquely defined by "Employment"
, ?P
, and
?C
”. The problem with this is that the values of ?P
and ?C
have
not been restricted in any way, so the query should, in principle, return
infinitely many answers.
To evaluate the query, one must provide the values of ?P and ?C, or
the value of ?E, either explicitly as arguments or implicitly by binding the
arguments in other parts of the query. Thus, both of the following queries
can be successfully evaluated.
SELECT ?E WHERE {
TT SKOLEM { "Employment" :Paul :Company2 ?E }
}
SELECT ?T ?C ?P WHERE {
BIND (_:new_1 as ?E)
TT SKOLEM { ?T ?P ?C ?E }
}
The latter query aims to unpack _:new_1
into the values of ?T
,
?C
, and ?P
for which _:new_1
is the uniquely generated fresh
blank node. Note that such ?T
, ?C
, and ?P
may or may not exist,
depending on the algorithm RDFox uses to generate blank nodes. The
following is a more realistic example of blank node ‘unpacking’.
SELECT ?T ?C ?P WHERE {
?E rdf:type :Employment .
TT SKOLEM { ?T ?P ?C ?E }
}
6.5.2. SHACL¶
SHACL constraint validation in RDFox can be performed using the following tuple tables.
SHACL { DataGraph [FactDomain = rdfox:all] ShapesGraph S P O }
SHACL_NN { DataGraph [FactDomain = rdfox:all] ShapesGraph S P O }
SHACL_ND { DataGraph [FactDomain = rdfox:all] S P O }
SHACL_DN { [FactDomain = rdfox:all] ShapesGraph S P O }
SHACL_DD { [FactDomain = rdfox:all] S P O }
The tables differ in whether the validated graph (i.e. the SHACL data graph) and the graph storing the
constraints (i.e. the SHACL shapes graph) are named graphs or the default
graph. In particular, tuple tables SHACL
, SHACL_NN
and SHACL_ND
validate the content of the named graph DataGraph
, while tuple tables
SHACL_DN
and SHACL_DD
validate the content of the default graph.
Similarly, tuple tables SHACL
, SHACL_NN
and SHACL_DN
perform
validation using the SHACL shapes stored in the named graph ShapesGraph
,
while tuple tables SHACL_ND
and SHACL_DD
perform validation using the
shapes stored in the default graph.
In all variants, the FactDomain
argument specifies the domain of the facts
in the data graph that will be validated. This argument is optional with
default value rdfox:all
and possible values rdfox:explicit
,
rdfox:derived
, and rdfox:all
, corresponding to the respective fact
domain values described in Section 6.2. The last three arguments
receive the subject, the predicate and the object of each triple in the
validation report that
results from validating the data graph with respect to the constraints in the
shapes graph.
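For instance, following the signatures above, a query such as the one below should validate only the explicitly imported facts of a data graph :data against a shapes graph :shacl (both introduced in the example that follows) by supplying the optional FactDomain argument; the exact syntax for passing optional arguments should be checked against Section 9.4.

SELECT ?s ?p ?o {
    TT SHACL { :data rdfox:explicit :shacl ?s ?p ?o }
}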
Basic SHACL Validation
Example: Assume that the following data graph about employees and
their employers is imported into the named graph :data
.
@prefix sh: <http://www.w3.org/ns/shacl#>.
@prefix : <https://rdfox.com/examples/shacl#>.
:John a :Employee;
:worksFor :Company1.
:Jane a :Employee;
:worksFor [ a :Employer ].
Furthermore, assume that the following shapes graph, which asserts that
each value of the property :worksFor
is of type :Employer
, is
imported into the named graph :shacl
.
@prefix sh: <http://www.w3.org/ns/shacl#>.
@prefix : <https://rdfox.com/examples/shacl#>.
:ClassShape
sh:targetClass :Employee ;
sh:path :worksFor ;
sh:class :Employer.
One can now query the SHACL tuple table to generate the validation
report resulting from the validation of the data graph :data
using the
shapes graph :shacl
as follows.
PREFIX : <https://rdfox.com/examples/shacl#>
SELECT ?s ?p ?o {
TT SHACL { :data :shacl ?s ?p ?o }
}
The validation report should look as follows, modulo blank node names and prefix abbreviations:
_:anonymous1001 rdf:type sh:ValidationReport .
_:anonymous1001 sh:conforms false .
_:anonymous1001 sh:result _:anonymous1002 .
_:anonymous1002 rdf:type sh:ValidationResult .
_:anonymous1002 sh:focusNode :John .
_:anonymous1002 sh:sourceConstraintComponent sh:ClassConstraintComponent .
_:anonymous1002 sh:sourceShape :ClassShape .
_:anonymous1002 sh:resultPath :worksFor .
_:anonymous1002 sh:value :Company1 .
_:anonymous1002 sh:resultSeverity sh:Violation .
_:anonymous1002 sh:resultMessage "The current value node is not a member of the specified class <https://rdfox.com/examples/shacl#Employer>." .
Saving a Validation Report
A validation report can be saved into a named graph using the INSERT
update
of SPARQL. This is illustrated in the following example.
Example: The following update saves the validation report into the
named graph :report
:
PREFIX sh: <http://www.w3.org/ns/shacl#>
PREFIX : <https://rdfox.com/examples/shacl#>
INSERT {
GRAPH :report { ?s ?p ?o }
}
WHERE {
TT SHACL { :data :shacl ?s ?p ?o }
}
Rejection of Non-Conforming Updates
Certain use cases may require the content of a data store to be kept consistent
with SHACL constraints at all times — that is, any updates that result in a
violation of a SHACL constraint should be rejected. To achieve this behavior
in RDFox, one can query the SHACL tuple table before committing a
transaction and, in case any violations are detected, add an instance of the
rdfox:ConstraintViolation class to the default graph. As discussed in
Section 11.5, the latter will prevent the transaction from committing. This
technique is demonstrated in the following example.
Example: Consider the data and shape graphs from the previous examples and assume the insertion of the data graph is performed using the following RDFox commands.
begin
import > :data data.ttl
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT SHACL { :data :shacl ?report sh:conforms false } }
# the transaction fails
commit
The INSERT
update checks whether the SHACL constraints are satisfied,
and if not, adds the value of ?report
as an instance of
rdfox:ConstraintViolation
. As discussed earlier, the constraints are
not satisfied for the data in this example, so the WHERE
part of the
update will bind variable ?report
to _:anonymous1001
; thus, triple
_:anonymous1001 a rdfox:ConstraintViolation
will be added to the default graph,
which will prevent the transaction from completing successfully.
In contrast, if we fix the data prior to committing the transaction as in the following example, the transaction will be successfully committed.
begin
import > :data data.ttl
# the following tuple makes the data in data.ttl consistent with the SHACL graph
import > :data ! :Company1 a :Employer.
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT SHACL { :data :shacl ?report sh:conforms false } }
# the transaction succeeds
commit
If we now attempt to remove the triple :Company1 a :Employer
using the
same approach, the transaction in question will be rejected, since the
remaining data would no longer conform with the constraints in the SHACL
graph.
begin
# attempting to remove a tuple that would invalidate the remainder of the data
import > :data - ! :Company1 a :Employer.
INSERT { ?report a rdfox:ConstraintViolation } \
WHERE { TT SHACL { :data :shacl ?report sh:conforms false } }
# the transaction fails
commit
If we want the error message to contain additional information about the constraint violation, we can insert other triples with the rdfox:ConstraintViolation instance in the subject position into the default graph, for example:
begin
import > :data - ! :Company1 a :Employer.
INSERT { \
?s a rdfox:ConstraintViolation . \
?s ?p ?o \
} WHERE { \
TT SHACL { :data :shacl ?s ?p ?o} . \
FILTER(?p IN (sh:sourceShape, sh:resultMessage, sh:value)) \
}
commit
This should produce an error message like this:
An error occurred while executing the command:
The transaction could not be committed because it would have introduced the following constraint violation:
_:anonymous1 sh:resultMessage "The current value node is not a member of the specified class <https://rdfox.com/examples/shacl#Employer>.";
sh:value <https://rdfox.com/examples/shacl#Company1>;
sh:sourceShape <https://rdfox.com/examples/shacl#ClassShape> .
Scope of SHACL support:
RDFox supports SHACL Core.
SHACL validation is available during query answering, but not in rules.
The definitions of SHACL Subclass, SHACL Superclass, and SHACL Type rely on a limited form of taxonomical reasoning. This is not automatically performed during SHACL validation, since the desired consequences can be derived using the standard reasoning facilities of RDFox.
owl:imports in shapes graphs is not supported.
sh:shapesGraph in data graphs is not supported.
6.5.3. DependencyGraph¶
The dependency graph of a Datalog program can be inspected using the following tuple tables.
DependencyGraph { NamedGraph [FactDomain = rdfox:all] S P O }
DependencyGraph_N { NamedGraph [FactDomain = rdfox:all] S P O }
DependencyGraph_D { [FactDomain = rdfox:all] S P O }
The tuple tables work with RDF-encoded Datalog programs stored as RDF graphs.
Tuple tables DependencyGraph
and DependencyGraph_N
read the encoded
program from the named graph NamedGraph
, while the tuple table
DependencyGraph_D
reads the encoded program from the default graph. The RDF
encoding of Datalog programs is defined below. The FactDomain
argument
specifies the domain of the facts in the named graph that will be analyzed.
This argument is optional with default value rdfox:all
, and possible values
rdfox:explicit
, rdfox:derived
, and rdfox:all
, corresponding to the
respective fact domain values described in Section 6.2. The last
three arguments receive the subject, the predicate and the object of each
triple in the RDF encoding of the dependency graph. The arguments NamedGraph
and FactDomain, if specified, should be bound at the time of evaluation, while
the arguments S
, P
, and O
can be either bound or unbound. The tuple tables are available during query
answering, but not in rules.
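For instance, by analogy with the TT operator syntax used elsewhere in this section, the dependency graph of a program encoded in a named graph :G could be retrieved with the following sketch; an equivalent query using the reserved IRI rdfox:TT is shown later in this section.

SELECT ?S ?P ?O WHERE { TT DependencyGraph { :G ?S ?P ?O } }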
Dependency Graph
Datalog rules have to be evaluated in a specific order due to the presence of negation and aggregation. In particular, a rule can only be evaluated after all of its negated and aggregated atoms have been fully computed, i.e. the rules deriving such atoms and all the rules that they depend on have been fully evaluated. RDFox uses the dependency graph of a Datalog program to determine the evaluation order of its rules.
The dependency graph of an RDFox Datalog program encodes the dependencies between the atoms in the program. The nodes of the dependency graph are the atoms in the program, while the edges of the dependency graph determine the different types of dependencies between the atoms. (Note that RDFox uses an extension of the standard definition of a dependency graph in which the nodes of the graph are atoms rather than predicates. This is because in RDF there is typically only one predicate, i.e. the predicate for all triples in the default graph. Therefore, using the standard definition of a dependency graph, most programs with negation and aggregation would not have a valid rule evaluation order.)
There are three types of dependencies between atoms. Positive dependencies
encode the dependencies of head atoms on the body atoms of the rule that are
not under aggregation or negation. Negative dependencies encode the
dependencies of head atoms of a rule on the body atoms that are under
aggregation or negation. Finally, unification dependencies encode that two
atoms match a common fact, e.g. [?X, :r, :b]
and [:a, :r, ?Y]
unify
since they both match the triple :a :r :b
.
Once the dependency graph of a Datalog program has been constructed, RDFox determines its strongly connected components. A component can be evaluated only if it is stratifiable, i.e. if it contains no atom that negatively depends on another atom from the same component. If all strongly connected components are stratifiable, then the whole program is stratifiable. RDFox groups the strongly connected components by strata. The first stratum contains all components that don’t depend on other components; the second stratum contains all components that depend only on components from the first stratum; and so on. The rules are then evaluated by RDFox according to the stratification of components. Rules that derive facts in the first stratum are evaluated first, rules that derive facts in the second stratum are evaluated next, and so on.
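As a small illustration of these notions, consider the following hedged program sketch, in which the predicate names are illustrative. The atom for :B occurs under negation in the second rule, so the component of :B is placed in a lower stratum, and the first rule is evaluated to completion before the second rule is evaluated.

:B[?x] :- :A[?x].
:C[?x] :- :D[?x], not :B[?x].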
RDF Encoding of Datalog Programs
To extract the dependency graph of a Datalog program, one first has to add
its RDF encoding into a named graph. The RDF encoding of a Datalog program is
done using the predicate rdfox:rule
to specify the rules of the program and
the predicate rdfox:prefix
to specify the prefixes used in the rule
definitions.
Example: Consider the following program.
prefix : <https://rdfox.com/examples/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
:C[?x] :- not :A[?x], :r[?x, ?y].
This program can be encoded using the following RDF triples.
_:p1 rdfox:prefix "prefix : <https://rdfox.com/examples/>".
_:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
_:r1 rdfox:rule ":C[?x] :- not :A[?x], :r[?x, ?y].".
Querying for the Dependency Graph of a Program
Once the RDF encoding of a Datalog program is in a named graph, one can simply
query the tuple table DependencyGraph
.
Example: Let us assume that the RDF encoding of the above Datalog
program has been added to the graph :G. To extract the dependency graph,
we can simply run the following SPARQL query.
SELECT ?s ?p ?o WHERE { (:G ?s ?p ?o) rdfox:TT "DependencyGraph" }
The result of this query will contain the following triples.
1) _:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
2) _:p1 rdfox:prefix "prefix : <https://rdfox.com/examples/>".
3) _:r1 rdfox:rule ":C[?x] :- not :A[?x], :r[?x, ?y]." .
4) _:r1 rdfox:headAtom _:atom2 .
5) _:r1 rdfox:negativeBodyAtom _:atom0 .
6) _:r1 rdfox:positiveBodyAtom _:atom1 .
7) _:component0 rdfox:stratumIndex 1 .
8) _:component0 rdfox:stratifiable true .
9) _:atom2 rdfox:component _:component0 .
10) _:atom2 rdfox:atom "[*, rdf:type, :C]" .
11) _:atom2 rdfox:dependsPositivelyOn _:atom1 .
12) _:atom2 rdfox:dependsNegativelyOn _:atom0 .
13) _:component1 rdfox:stratumIndex 0 .
14) _:component1 rdfox:stratifiable true .
15) _:atom0 rdfox:component _:component1 .
16) _:atom0 rdfox:atom "[*, rdf:type, :A]" .
17) _:component2 rdfox:stratumIndex 0 .
18) _:component2 rdfox:stratifiable true .
19) _:atom1 rdfox:component _:component2 .
20) _:atom1 rdfox:atom "[*, :r, *]" .
The triples 1-3 encode the input program. The triples 4-6 establish the
link between the rule and its atoms. The remaining triples describe the
strongly connected components of the dependency graph of the program. There
are three components, one for each of the three atoms in the program. The
components of the body atoms [*, :r, *] and [*, rdf:type, :A] are
in the first stratum (index 0), since they don't depend on other components.
The component for the head atom [*, rdf:type, :C]
is in the second
stratum (stratum 1), since it depends on the components in stratum 0. The
result set also encodes that atom [*, rdf:type, :C]
depends negatively
on atom [*, rdf:type, :A]
and that it depends positively on the atom
[*, :r, *]
. All three components are stratifiable.
Example: We now give an example of a program that is not stratifiable and therefore cannot be evaluated by RDFox.
prefix : <https://rdfox.com/examples/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
:B[?x] :- :A[?x].
:C[?x] :- :B[?x], not :A[?x].
:A[?x] :- :C[?x].
Now assume that the following encoding has been added to the named graph
:G
.
_:p1 rdfox:prefix "prefix : <https://rdfox.com/examples/>".
_:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>".
_:r1 rdfox:rule ":B[?x] :- :A[?x].".
_:r2 rdfox:rule ":C[?x] :- :B[?x], not :A[?x].".
_:r3 rdfox:rule ":A[?x] :- :C[?x].".
Querying the tuple table DependencyGraph
as before will result in
the following triples.
1) _:p2 rdfox:prefix "prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>" .
2) _:p1 rdfox:prefix "prefix : <https://rdfox.com/examples/>" .
3) _:r3 rdfox:rule ":A[?x] :- :C[?x]." .
4) _:r3 rdfox:headAtom _:atom1 .
5) _:r3 rdfox:positiveBodyAtom _:atom0 .
6) _:r2 rdfox:rule ":C[?x] :- :B[?x], not :A[?x]." .
7) _:r2 rdfox:headAtom _:atom0 .
8) _:r2 rdfox:positiveBodyAtom _:atom2 .
9) _:r2 rdfox:negativeBodyAtom _:atom1 .
10) _:r1 rdfox:rule ":B[?x] :- :A[?x]." .
11) _:r1 rdfox:headAtom _:atom2 .
12) _:r1 rdfox:positiveBodyAtom _:atom1 .
13) _:component0 rdfox:stratumIndex 0 .
14) _:component0 rdfox:stratifiable false .
15) _:atom1 rdfox:component _:component0 .
16) _:atom1 rdfox:atom "[*, rdf:type, :A]" .
17) _:atom1 rdfox:dependsPositivelyOn _:atom0 .
18) _:atom0 rdfox:component _:component0 .
19) _:atom0 rdfox:atom "[*, rdf:type, :C]" .
20) _:atom0 rdfox:dependsNegativelyOn _:atom1 .
21) _:atom0 rdfox:dependsPositivelyOn _:atom2 .
22) _:atom2 rdfox:component _:component0 .
23) _:atom2 rdfox:atom "[*, rdf:type, :B]" .
24) _:atom2 rdfox:dependsPositivelyOn _:atom1 .
As before, the first two blocks of triples encode the input program and the
relationships between rules and their head and body atoms (1-12). The
remaining triples describe the dependency graph of the program. In this
example, all atoms in the program depend on each other in a recursive
fashion. As a result, the dependency graph has exactly one strongly
connected component, which contains all the atoms in the program. Since the
atom [*, rdf:type, :C]
negatively depends on another atom from the same
component (i.e. [*, rdf:type, :A]
), the component is not stratifiable.
Therefore, the program as a whole has no valid rule evaluation order and
will thus be rejected by RDFox.
RDF Vocabulary for Dependency Graph Encoding
The following table describes the vocabulary used in the RDF encoding of Datalog programs and their dependency graphs.
Predicate | Description | Example
---|---|---
rdfox:prefix | Specifies a prefix mapping. | _:p1 rdfox:prefix "prefix : <https://rdfox.com/examples/>" .
rdfox:rule | Specifies a rule. | _:r1 rdfox:rule ":C[?x] :- not :A[?x], :r[?x, ?y]." .
rdfox:atom | Specifies an atom. | _:atom2 rdfox:atom "[*, rdf:type, :C]" .
rdfox:headAtom | Links a rule with a head atom. | _:r1 rdfox:headAtom _:atom2 .
rdfox:positiveBodyAtom | Links a rule with a positive body atom. | _:r1 rdfox:positiveBodyAtom _:atom1 .
rdfox:negativeBodyAtom | Links a rule with a negative body atom. | _:r1 rdfox:negativeBodyAtom _:atom0 .
rdfox:component | Links an atom with its strongly connected component in the dependency graph. | _:atom2 rdfox:component _:component0 .
rdfox:stratumIndex | Links a strongly connected component with its stratum index. | _:component0 rdfox:stratumIndex 1 .
rdfox:stratifiable | Specifies whether a strongly connected component is stratifiable. | _:component0 rdfox:stratifiable true .
rdfox:dependsPositivelyOn | Specifies a positive dependency between two atoms. | _:atom2 rdfox:dependsPositivelyOn _:atom1 .
rdfox:dependsNegativelyOn | Specifies a negative dependency between two atoms. | _:atom2 rdfox:dependsNegativelyOn _:atom0 .
rdfox:unifiesWith | Specifies that two atoms unify. |