[Lds] DNB RDF dumps loaded into a triplestore
konstantin.baierer at bib.uni-mannheim.de
Tue Feb 2 11:06:06 CET 2016
On 02.02.2016 at 09:52, Thomas Gängler wrote:
> Hi all,
> is there anyone out there who has successfully loaded the DNB RDF dumps
> into a triplestore? If yes, how was your experience? For example,
> which triplestore did you use? Which version of the triplestore?
> Which operating system? Which version of the DNB RDF dump? Which
> serialisation of the DNB RDF dump?
> Thanks a lot in advance for all your help.
Joachim Neubert deployed the GND dumps into a Fuseki endpoint in 2014.
It is available online, has a lot of examples, and works.
The dumps are quite big, so the more memory available to the
triplestore, the better. Speed, memory, and disk usage vary with the
depth of indexing; full indexing of all S,P,O,G permutations is the
worst case. 32+ GB of RAM and a large SSD are a good start, but I'd
still recommend ingesting the data in chunks. What I did for the GND was:
* Download the Turtle version from  (~1GB)
* Convert it to N-Triples with rapper  (~12GB)
* Split it into files with 1M statements
* Load them one by one into an Apache Jena TDB triplestore with
tdbloader. This can probably be sped up with tdbloader2data and
tdbloader2index
* Run Fuseki on the TDB triplestore
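The steps above can be sketched as a small shell pipeline. The file
names, chunk prefix, and TDB directory are assumptions, and it assumes
rapper (from raptor2-utils) and Apache Jena are on the PATH:

```shell
# Hypothetical file names and paths; requires rapper and Apache Jena.
rapper -i turtle -o ntriples GND.ttl > GND.nt   # Turtle -> N-Triples (one triple per line)
split -l 1000000 -d GND.nt gnd-chunk-           # chunks of 1M statements each
for f in gnd-chunk-*; do
    tdbloader --loc /data/tdb "$f"              # load each chunk into the TDB store
done
fuseki-server --loc /data/tdb /gnd              # serve the TDB store via Fuseki
```

Splitting first means a failed load only costs you one chunk, not the
whole 12 GB file.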
Once the data is loaded and indexed, SPARQL queries are fast.
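As an illustration of the kind of query that becomes fast once the TDB
indexes are built, here is a sketch against the GND ontology; treat the
prefix URI and property names as assumptions rather than verified
against the current dump:

```sparql
# Sketch only: gnd: prefix and property names per the GND ontology,
# not verified against a specific dump version.
PREFIX gnd: <https://d-nb.info/standards/elementset/gnd#>
SELECT ?person ?name WHERE {
  ?person a gnd:DifferentiatedPerson ;
          gnd:preferredNameForThePerson ?name .
} LIMIT 10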
For trivial, static queries, it can be faster to just search the
N-Triples data with some command-line magic. Not as elegant or fast as
SPARQL, but there is no infrastructure to set up and the memory
requirements are lower. I did that for extracting the DDC concordances
from the GND, due to memory and disk limitations on that particular
machine.
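A sketch of that command-line approach: because N-Triples puts exactly
one triple per line, plain grep and awk go a long way. The file name
and the example GND URI are hypothetical:

```shell
# Hypothetical file name and entity URI; one triple per line in N-Triples.
# All triples about a given subject:
grep '^<https://d-nb.info/gnd/118540238>' GND.nt
# Count the distinct predicates in the dump:
awk '{ print $2 }' GND.nt | sort -u | wc -l
```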
Best of luck,
Abteilung Digitale Bibliotheksdienste / Projekt InFoLiS II
Email: konstantin.baierer at bib.uni-mannheim.de