
5. Data Stores

As explained in Section 4, a data store encapsulates a unit of logically related information. Many applications will store all of their related data in a single data store, although some applications may use more than one. It is important to keep in mind that queries and rules can operate on only one data store; thus, all information that should be queried or reasoned over as one unit should be loaded into the same data store.

In addition, a data store serves as a container for several kinds of objects:

  • tuple tables are data store components that store facts (see Section 6);

  • data sources can be registered with a data store to access external, non-RDF data (see Section 7);

  • OWL axioms and Datalog rules are used to specify rules of inference that are to be applied to the data loaded into the data store (see Section 10);

  • a dictionary keeps track of all RDF resources (i.e., IRIs, blank nodes, and literals) occurring in the facts in the data store; and

  • statistics modules summarize the data loaded into a data store in a way that helps query planning.

The behavior of a data store can be customized using various parameters, which are listed in Section 5.2.

5.1. Operations on Data Stores

The following list summarizes the operations on data stores available in the shell or via one of the available APIs.

  • A data store can be created on a server. To create a data store, one must specify the data store name and zero or more parameters expressed as key-value pairs. When a data store is created, the in-memory tuple tables DefaultTriples and Quads are created automatically. They are used to store the triples of the default graph and the named graphs of the data store’s RDF dataset, respectively. A newly created data store will also contain all supported built-in tuple tables (see Section 6.5), but it will not contain any axioms, user-defined rules, or facts, and no data sources will be registered.

  • A data store can be deleted from the server. RDFox allows a data store to be deleted only if there are no active connections to it.

  • A data store can be saved to and subsequently loaded from a binary file. The file obtained in this way contains all data store content; thus, when a data store is loaded from a file, it is restored to exactly the same state as before saving. RDFox supports the following binary formats.

    • The ‘standard’ format stores the data in a way that is more resilient to changes in the RDFox implementation. This format should be used in most cases.

    • The ‘raw’ format stores the data in exactly the same way as the data is stored in RAM. This format allows one to reconstruct the state of a data store exactly and is therefore useful when reporting bugs, but it is more likely to change between RDFox releases.
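
For illustration, the following shell commands sketch how a data store might be created with explicit parameters and later deleted. The data store name example is arbitrary, and the exact command syntax may differ slightly between RDFox versions.

  # Create a data store named 'example', passing parameters as key-value pairs.
  dstore create example type parallel-ww equality noUNA

  # Delete the data store again; this succeeds only if no connections to it are active.
  dstore delete example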

5.2. Data Store Parameters

The behavior of a data store is determined by a number of options encoded as key-value pairs. These options are specified when the data store is created and cannot subsequently be changed.

5.2.1. auto-update-statistics

The auto-update-statistics option governs how RDFox manages statistics about the data loaded into the system. RDFox uses these statistics during query planning in order to identify an efficient plan, so query performance may be suboptimal if the statistics are not up to date. The allowed values are as follows.

  • off: Statistics are never updated automatically, but they can be updated manually using the stats update command or via one of the available APIs.

  • balanced: The cost of updating the statistics is balanced against the possibility of using outdated statistics. This is the default.

  • eager: Statistics are updated after each operation that has the potential to invalidate the statistics (e.g., importing data).
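
For example, with automatic updates switched off, statistics can be refreshed explicitly before running complex queries. The sketch below assumes a data store named example and approximate shell syntax.

  # Create a data store whose statistics are updated only on request, and make it active.
  dstore create example auto-update-statistics off
  active example

  # ... import data and rules ...

  # Refresh the statistics manually so that query planning uses up-to-date information.
  stats update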

5.2.2. equality

The equality option determines how RDFox deals with the semantics of equality, which is encoded using the owl:sameAs property. This option has the following values.

  • off: There is no special handling of equality and the owl:sameAs property is treated as just another property. This is the default if the equality option is not specified.

  • noUNA: The owl:sameAs property is treated as equality, and the Unique Name Assumption is not used — that is, deriving an equality between two IRIs does not result in a contradiction. This is the treatment of equality in OWL 2 DL.

  • UNA: The owl:sameAs property is treated as equality, but interpreted under UNA — that is, deriving an equality between two IRIs results in a contradiction, and only equalities between an IRI and a blank node, or between two blank nodes, are allowed. Thus, if a triple of the form <IRI₁, owl:sameAs, IRI₂> is derived, RDFox detects a clash and derives <IRI₁, rdf:type, owl:Nothing> and <IRI₂, rdf:type, owl:Nothing>.

  • chase: The owl:sameAs property is treated as equality with UNA, and furthermore no reflexivity axioms are derived. A data store initialized with this option does not support incremental reasoning. This option is intended to simulate the “chase” procedure commonly used in database research.

In all equality modes (i.e., all modes other than off), distinct RDF literals (e.g., strings, numbers, dates) are assumed to refer to distinct objects, and so deriving an equality between two distinct literals results in a contradiction.

Note: RDFox will reject rules that use negation-as-failure or aggregation in all equality modes other than off.
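
The following sketch illustrates the UNA mode in the shell; the data store name example and the file name data.ttl are placeholders, and the syntax is approximate.

  # Create a data store in which owl:sameAs is treated as equality under UNA.
  dstore create example equality UNA
  active example

  # If the imported data (or reasoning over it) produces a triple of the form
  # :a owl:sameAs :b where both :a and :b are IRIs, RDFox detects a clash and
  # derives :a rdf:type owl:Nothing and :b rdf:type owl:Nothing.
  import data.ttl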

5.2.3. import.invalid-literal-policy

The import.invalid-literal-policy option governs how RDFox handles invalid literals during import.

  • error: Invalid literals in the input are treated as errors, and so files containing such literals cannot be imported. This is the default.

  • as-string: Invalid literals are converted to string literals during import. Moreover, for each invalid literal, a warning is emitted alerting the user to the fact that the value was converted.

  • as-string-silent: Invalid literals are converted to string literals during import, but without emitting a warning.

Note that this option applies only to data importation, and not to DELETE/INSERT updates or queries.

5.2.4. import.rename-user-blank-nodes

If the import.rename-user-blank-nodes option is set to true, then user-defined blank nodes imported from distinct files are renamed apart during the importation process; hence, importing several files implements a merge of the corresponding RDF graphs as defined by the RDF specification. There is no way to control how blank nodes are renamed, which can be problematic in some applications. Because of that, the default value of this option is false, since this ensures that the data is imported ‘as is’. Regardless of the state of this option, autogenerated blank nodes (i.e., blank nodes obtained by expanding [] or (...) in Turtle files) are always renamed apart.
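
Both import.* options are fixed when the data store is created. The following sketch combines them; the store and file names are placeholders, and the command syntax is approximate.

  # Convert invalid literals to strings (emitting a warning for each one) and
  # rename user-defined blank nodes apart during import.
  dstore create example import.invalid-literal-policy as-string import.rename-user-blank-nodes true
  active example

  # Blank nodes with the same label in the two files are kept distinct after import.
  import first.ttl
  import second.ttl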

5.2.5. init-resource-capacity

The value of the init-resource-capacity option is an integer that is used as a hint to the data store specifying the number of resources that the store will contain. This hint is used to initialize certain data structures to the sizes that ensure faster importation of data. The actual number of resources that a data store can contain is not limited by this option: RDFox will resize the data structures as needed if this hint is exceeded.

5.2.6. init-tuple-capacity

The value of the init-tuple-capacity option is an integer that is used as a hint to the data store specifying the number of tuples that the store will contain. This hint is used to initialize certain data structures to the sizes that ensure faster importation of data. The actual number of tuples that a data store can contain is not limited by this option: RDFox will resize the data structures as needed if this hint is exceeded.
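
When the approximate size of the data is known in advance, both hints can be supplied at creation time, as in the following sketch (the numbers are purely illustrative and the syntax is approximate).

  # Hint that the store will hold roughly 10 million resources and 100 million tuples.
  dstore create example init-resource-capacity 10000000 init-tuple-capacity 100000000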

5.2.7. max-data-pool-size

The value of the max-data-pool-size option is an integer that determines the maximum number of bytes that RDFox can use to store resource values (e.g., IRIs and strings). Specifying this option can significantly reduce the amount of virtual memory that RDFox uses per data store.

5.2.8. max-resource-capacity

The value of the max-resource-capacity option is an integer that determines the maximum number of resources that can be stored in the data store. Specifying this option can significantly reduce the amount of virtual memory that RDFox uses per data store.

5.2.9. max-tuple-capacity

The value of the max-tuple-capacity option is an integer that determines the maximum number of tuples that can be stored by the in-memory tuple tables of a data store. Specifying this option can significantly reduce the amount of virtual memory that RDFox uses per data store.
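
The three max-* options can be combined to bound the virtual memory reserved for a data store, as in the following sketch (the limits shown are arbitrary examples and the syntax is approximate).

  # Allow at most 50 million resources, 500 million tuples,
  # and 2 GB of storage for resource values.
  dstore create example max-resource-capacity 50000000 max-tuple-capacity 500000000 max-data-pool-size 2000000000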

5.2.10. persist-ds

The persist-ds option controls how RDFox persists data contained in a data store. The option can be set to:

  • off. The content of the data store will reside in memory only and will be discarded when RDFox exits.

  • file. The content of the data store will be automatically and incrementally saved to a file within the server directory. This option can be selected only if the server parameter persist-ds is also set to file.

  • file-sequence. The content of the data store will be automatically and incrementally saved to a sequence of files within the server directory. This option can be selected only if the server parameter persist-ds is also set to file-sequence.

If the persist-ds option is not specified for a data store, it will use the value of the persist-ds option specified for the server. Please refer to Section 13.2 for more information on how to configure persistence in RDFox.
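
For example, assuming the server itself has been configured with persist-ds set to file (see Section 13.2), a persistent data store could be created roughly as follows; the command syntax is a sketch.

  # Persist this data store incrementally to a file in the server directory.
  dstore create example persist-ds file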

5.2.11. swrl-negation-as-failure

The swrl-negation-as-failure option determines how RDFox treats ObjectComplementOf class expressions in SWRL rules.

  • off. SWRL rules are interpreted under the open-world assumption and SWRL rules featuring ObjectComplementOf are rejected. This is the default value.

  • on. SWRL rules are interpreted under the closed-world assumption, as described in Section 10.7.3.
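
For instance, a data store that interprets ObjectComplementOf in SWRL rules under the closed-world assumption might be created as follows (a sketch with an arbitrary store name and approximate syntax).

  # Enable negation-as-failure treatment of ObjectComplementOf in SWRL rules.
  dstore create example swrl-negation-as-failure on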

5.2.12. type

The type option determines the storage scheme used by the data store. The value determines the maximum capacity of a data store (i.e., the maximum number of resources and/or facts), its memory footprint, the speed with which it can answer certain types of queries, and whether a data store can be used concurrently. The following data store types are currently supported:

  • sequential

  • parallel-nn (default)

  • parallel-nw

  • parallel-ww

A data store can be either sequential (the sequential type) or parallel (the parallel-* types). A sequential data store supports only single-threaded access, whereas a parallel data store can run tasks such as materialization in parallel on multiple threads.

In the suffixes nn, nw, and ww, the first character determines whether the system uses 32-bit (n for narrow) or 64-bit (w for wide) unsigned integers for representing resource IDs, and the second character determines whether the system uses 32-bit (n) or 64-bit (w) unsigned integers for representing triple IDs. Thus, an nw store can contain at most 4 × 10⁹ resources and at most 1.8 × 10¹⁹ triples.
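
For example, a store intended for very large data sets might be created with wide resource and tuple IDs, as in the following sketch (arbitrary store name, approximate syntax).

  # A parallel store with 64-bit resource IDs and 64-bit tuple IDs.
  dstore create example type parallel-ww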

5.2.13. quad-table-type

The quad-table-type parameter determines the type of the Quads table, which is used to store the triples of named graphs. The available values are quad-table-lg, the default, and quad-table-sg. The quad-table-lg type uses indexing suitable for the typical case in which named graphs contain a non-trivial number of facts. In contrast, the quad-table-sg type uses indexing suitable for the rarer case in which each graph consists of only a few triples.
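
For example, a data store expected to hold many small named graphs might be created as follows (a sketch with an arbitrary store name and approximate syntax).

  # Use the indexing scheme suited to graphs containing only a few triples each.
  dstore create example quad-table-type quad-table-sg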