News from the LODUM headquarters and all things Linked Open Data.

Our former LODUM team member Tomi Kauppinen and his team at Aalto University have developed a nice vocabulary visualizer. It allows you to compare the use of vocabularies on any given number of SPARQL endpoints, showing both the overlap and the number of instances of the different classes used.

For most of the LODUM team members, the SPARQL by example tutorial provided by Cambridge Semantics was immensely useful when we wrote our first SPARQL queries. So, why not provide a similar tutorial, focusing on the LODUM data?

We hope that the following list of sample queries makes the learning curve a little less steep for developers who are new to LODUM and/or querying RDF data in general. Each query can be executed and modified by clicking on the title, which will take you to our SPARQL editor, pre-filled with the query. Just edit and customize the examples and see what happens. Happy querying!

Find URI of Person

prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person WHERE {
?person foaf:name "Trame, Johannes"^^xsd:string.
?person rdf:type foaf:Person.
}

Search for a Person by Keyword

prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?person WHERE {
?person foaf:name ?name;
a foaf:Person.
FILTER regex(?name,"kuhn","i").
}

The predicate a is a short form for rdf:type. The line FILTER regex(?name,"kuhn","i") selects only those results in which the string variable ?name contains the string "kuhn". The flag "i" makes the regular expression case-insensitive. Omitting this flag creates a case-sensitive filter, for example FILTER regex(?name,"Kuhn").

Find all Publications (Including URI, Title and Year) of a Person, Sorted by Year

prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix dct: <http://purl.org/dc/terms/>
prefix bibo: <http://purl.org/ontology/bibo/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?pub ?title ?year WHERE {
"Kuhn, Werner"^^xsd:string ^foaf:name ?person.
?pub bibo:producer ?person;
dct:title ?title;
dct:issued ?year.
} ORDER BY ?year

A ^-symbol in front of the predicate turns around its direction, i.e. the syntax ?a ^foaf:name ?b. is equivalent to ?b foaf:name ?a. The ORDER BY ?x command orders the results of the query by the variable ?x. ORDER BY DESC(?x) would sort the results in descending order.

Find Distinct Names of All Co-Authors of a Person

prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix bibo: <http://purl.org/ontology/bibo/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?coauthorname WHERE {
?person foaf:name "Kuhn, Werner"^^xsd:string.
?pub bibo:producer ?person,?coauthor.
?coauthor foaf:name ?coauthorname.
} ORDER BY ?coauthorname

Using DISTINCT has the effect that each result in the output appears only once, i.e. there are no repetitions in the output, even if the pattern can be found more than once.

Find All Distinct Names of Coauthors of a Person (Excluding That Person’s Own Name)

prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix bibo: <http://purl.org/ontology/bibo/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix lodum:<http://vocab.lodum.de/helper/>
SELECT DISTINCT ?coauthorname WHERE {
?person foaf:name "Kuhn, Werner"^^xsd:string;
lodum:personID ?person_id.
?pub bibo:producer ?person,?coauthor.
?coauthor foaf:name ?coauthorname;
lodum:personID ?coauthor_id.
FILTER(?person_id!=?coauthor_id).
}

Each person stored in the LODUM triple store, i.e. each entity of type foaf:Person, is assigned a unique ID number (inherited from the university’s CRIS research database). The condition FILTER(?person_id!=?coauthor_id) selects only those graph patterns where ?person and ?coauthor have different IDs.

Find Title and Year of Coauthors’ Publications (Limited to 100 Results, Ordered by Issued Year)

prefix xsd: <http://www.w3.org/2001/XMLSchema#> 
prefix dct: <http://purl.org/dc/terms/>
prefix bibo: <http://purl.org/ontology/bibo/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix lodum:<http://vocab.lodum.de/helper/>
SELECT DISTINCT ?title ?publication ?year WHERE {
?person foaf:name "Kuhn, Werner"^^xsd:string; lodum:personID ?pID.
?pub bibo:producer ?person,?coauthor .
?coauthor foaf:name ?coauthorname; lodum:personID ?cID.
FILTER (?cID!=?pID).
?pubco bibo:producer ?coauthor;
dct:title ?title;
dct:issued ?year.
BIND(?pubco AS ?publication).
} ORDER BY DESC(?year) LIMIT 100

The function BIND(?pubco AS ?publication) creates a variable ?publication which is assigned the same value as ?pubco. LIMIT 100 limits the output to the first 100 results.

Find All RDF Triples That Contain a Particular Person as Subject or Object

prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix lodum:<http://vocab.lodum.de/helper/>
SELECT DISTINCT ?name (?p AS ?property) (?q AS ?isPropertyOf) (?v AS ?value) WHERE {
?x a foaf:Person; lodum:personID "9325"; foaf:name ?name.
{?x ?p ?v.}UNION{?v ?q ?x.}
}

Instead of using the BIND function, variables can also be renamed in the SELECT clause, before the WHERE statement. This can be useful to avoid long variable names in the body of the query. Combining two graph patterns with {...}UNION{...} returns all results matching either of the two patterns.

Find Names of Distinct Organizations (and Their Websites) Concerned with Economics

prefix foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT (?c AS ?organization) (?n AS ?name) (?h AS ?homepage) WHERE{
?c a foaf:Organization; foaf:name ?n.
OPTIONAL{?c foaf:homepage ?h.}
{FILTER regex(?n, "wirtschaft", "i").}UNION{FILTER regex(?n, "econom", "i").}UNION
{FILTER regex(?n, "ökonom", "i").}
} ORDER BY ?name

Graph patterns enclosed in the brackets of OPTIONAL{...} do not have to be fulfilled by all the results of the query (i.e., they are optional – hence the name).

Find Distinct English Names of all Departments

prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix lodum:<http://vocab.lodum.de/helper/>
SELECT DISTINCT (?c AS ?department) WHERE{
?c ^foaf:name/a lodum:Department.
FILTER langmatches(lang(?c),"EN").
} ORDER BY ?department

The syntax ?c ^foaf:name/a lodum:Department. is a short form of ?c ^foaf:name ?x. ?x a lodum:Department.
The second line FILTER langmatches(lang(?c),"EN"). selects those results where ?c has the language “EN”.

Find the Names of All Departments Including Links to Their Data Sites

prefix foaf: <http://xmlns.com/foaf/0.1/>
PREFIX lodum:<http://vocab.lodum.de/helper/>
sEleCt DiStiNct (?e as $department_en) (?d as ?department_de) ($x AS ?link) where{
?x a lodum:Department; foaf:name $e,$d.
FIltER langmatches(lang(?e),"En").
fiLTer langmatches(lang($d),"dE").
} order by ?department_en

This query shows that much of the SPARQL syntax is not case sensitive, and also that for defining variables the symbol $ can be used instead of ?.

Find All Entities Belonging to the University Located at ‘Schlossplatz’ (plus Websites & Addresses)

prefix foaf: <http://xmlns.com/foaf/0.1/> 
prefix vcard:<http://www.w3.org/2006/vcard/ns#>
SELECT DISTINCT ?thing (?h as ?homepage) ?adress WHERE{
?x vcard:adr ?adress;
foaf:name ?thing.
{?x foaf:homepage ?h.}
UNION
{MINUS{?x foaf:homepage ?g.}
BIND(?x as ?h).}
FILTER regex(?adress, "Schlossplatz").
} ORDER BY ?thing

Using MINUS{...} specifies a graph pattern which the results are not allowed to fulfill.
This query displays the homepage if it exists; otherwise, it shows the data site.

Find All Canteens (Mensa, Bistro) Run by the Uni Münster

prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix vcard:<http://www.w3.org/2006/vcard/ns#>
prefix db_ont: <http://dbpedia.org/ontology/>
prefix lodum: <http://vocab.lodum.de/helper/>
SELECT DISTINCT (?c as ?canteen) (?a as ?adress) (?l as ?location_map) WHERE {
?x a db_ont:Restaurant;
foaf:name ?c;
vcard:adr ?a;
lodum:building ?l.
}

Find the Canteen Offering for the Next Days

prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix db_ont: <http://dbpedia.org/ontology/>
prefix gr: <http://purl.org/goodrelations/v1#>
SELECT DISTINCT ?canteen ?food ?start ?end ?min_price ?max_price WHERE {
?o a gr:Offering; ^gr:offers ?res.
?res a db_ont:Restaurant; foaf:name ?canteen.
?o gr:name ?food;
gr:availabilityStarts ?start;
gr:availabilityEnds ?end;
gr:hasPriceSpecification ?price.
?price gr:hasCurrency ?cur; gr:hasMinCurrencyValue ?min; gr:hasMaxCurrencyValue ?max.
bind(concat(str(?min)," ", ?cur) as ?min_price).
bind(concat(str(?max)," ", ?cur) as ?max_price).
filter(?end >= now() ).
} ORDER BY ?start ?canteen

The expression concat(.., .., ..) concatenates multiple strings into a single string; non-string arguments such as numeric price values must first be converted with str().
The function now() returns the current date and time.

Of course, there are always several ways to get to the same results. So if you find any cool queries or discover a more efficient version of a query, please drop us a line.

For those interested in getting started with GeoSPARQL (spatial SPARQL queries), here is a new tutorial on how to set up the Parliament triple store with GeoSPARQL support.

This tutorial was written and successfully tested using Parliament version 2.7.4, running on Ubuntu Linux 12.04 64-bit, Windows Server 2008 R2 64-bit, and Windows 7 Home Premium 64-bit.

Note: Before you start, make sure that your Java platform matches the Parliament package you downloaded. For example, if you’re using a 32-bit Java Runtime Environment on a 64-bit operating system, you must use the 32-bit version of Parliament. In other words, Java 32-bit + Parliament 64-bit = trouble! Once you have the appropriate Java installed, don’t forget to properly set the JAVA_HOME variable.

Download Parliament, uncompress it to a folder, and follow the instructions below for your operating system.

Ubuntu Linux 12.04 64-bit

For the sake of this sample, let’s assume you downloaded Parliament for Ubuntu Linux 64-bit and uncompressed the ZIP file into the path /home/jones/parliament/. To start Parliament, execute /home/jones/parliament/StartParliament.sh in a terminal. After you’ve done so, you should see a message like this.


Windows Server 2008 64-bit

For the sake of this sample, let’s assume you downloaded and uncompressed the Windows 64-bit Parliament files into the directory c:\parliament. Afterwards, you must run C:\parliament\RedistributablePackages\msvc-10.0-sp1\vcredist_x64.exe, which installs the Microsoft Visual C++ runtime (needed only on Windows).

Once you’re done with the vcredist_x64.exe installation, you just need to start Parliament via the file C:\parliament\StartParliament.bat in the command prompt. After you execute it, you should see a message like this.


Parliament starts on port 8080 by default, so you can access it in your browser at http://localhost:8080/parliament/. If another service is already running on this port, you can change it by editing the Jetty configuration file at c:\parliament\conf\jetty.xml (or /home/jones/parliament/conf/jetty.xml on Linux) and adjusting the following line:

<Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>

Now that you’ve got Parliament up and running, there is one last step before you can run your first GeoSPARQL query, namely creating the spatial indexes. Go to http://localhost:8080/parliament/indexes.jsp (assuming you didn’t change the port or path) and click the link “Create All”. This enables Parliament to create spatial indexes for new incoming triples.

After you’ve done so, you’re ready to start geo-querying! To test your Parliament, insert the following triple using the GUI at http://localhost:8080/parliament/insert.jsp: paste the triple into the “Text Insert” field and press “Insert Data”.

<http://www.lanuv.nrw.de/osiris/geometries/adce33b2-42c6-469a-8aac-27860d51df0b> <http://www.opengis.net/ont/geosparql#asWKT> "<http://www.opengis.net/def/crs/OGC/1.3/CRS84> POINT (8.46035239692792 51.48661096320327)"^^<http://www.opengis.net/ont/sf#wktLiteral> .

The following message should appear:

HTTP OK: 200

Insert operation successful. 1 statement added.

With your first RDF statement added, it is time to test a spatial query! Go to http://localhost:8080/parliament/query.jsp and execute the following GeoSPARQL Query:

PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX sf: <http://www.opengis.net/ont/sf#>

SELECT ?bGeom ?bWKT WHERE {
?bGeom geo:asWKT ?bWKT .
FILTER (geof:sfWithin(?bWKT, "Polygon((7 50, 7 53, 9 53, 9 50, 7 50))"^^sf:wktLiteral))
}

After running this query, you should see the following result:
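If you want to sanity-check the spatial result outside the triple store, the same point-in-polygon test can be reproduced with the shapely Python package (assumed to be installed; it is not part of Parliament):

```python
from shapely.geometry import Point
from shapely.wkt import loads

# The point from the inserted triple and the polygon from the query filter
point = Point(8.46035239692792, 51.48661096320327)
polygon = loads("POLYGON((7 50, 7 53, 9 53, 9 50, 7 50))")
print(point.within(polygon))  # True
```

The coordinates (roughly 8.46° E, 51.49° N) fall inside the 7–9° E / 50–53° N bounding polygon, which is why the triple matches the geof:sfWithin filter.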


To shut down Parliament, type exit in the terminal. By no means use the famous Ctrl+C to quit the server, since Parliament’s indexes are quite sensitive and can easily become corrupted.

And this is it, enjoy GeoSPARQL!

This tutorial was originally published at the Linked Open Data section of the NASA World Wind SDK Tutorial.

We are looking for a highly motivated Master’s student who would like to develop a Geonames geocoding tool and API for linked data in LIFE. Even though library and other data is being published as linked open data in LIFE, the spatial reference of this data is currently implicit, in the form of strings. In LIFE, resources should be searchable by places in a gazetteer, such as Geonames.org, as linked open data (LOD). The Master’s thesis should address the requirements for such a tool and an API reusable in the library context, discuss and analyse available solutions, and develop an easy-to-use geocoder.

Linked Data for eScience Services (LIFE) is a two-year project funded by the German Research Foundation, jointly carried out by the Semantic Interoperability Lab (MUSIL) at the Institute for Geoinformatics (http://ifgi.uni-muenster.de) and the University Library (http://ulb.uni-muenster.de) at the University of Münster. The overall goal of LIFE is to facilitate sharing of research data and thus improve interdisciplinary collaboration in science and education. The approach addresses all kinds of resources, ranging from articles and books through maps to raw data.

Interested students should contact Simon Scheider.

We’re involved in this year’s edition of the Linked Science workshop again, which has been accepted for ISWC 2013 in Sydney, Australia, in October. This year, we will focus on Supporting Reproducibility, Scientific Investigations and Experiments. The workshop page (which will be continually updated until the event) and the initial call for papers can be found here.

Admittedly, it’s been a while since you last heard from us. But this does not mean that LODUM has been asleep. In fact, we have been quite active over the last months. In this post, we will give you a quick overview of the things we have been up to.

LIFE project

Last fall, we got funding from the German Research Foundation for a proposal we had put together in collaboration with our library. The project deals with Linked Data for eScience Services (LIFE) and will be running for two years. It is the first externally funded project in the context of LODUM. You can get all the details about it on the project website.

New team members

One of the great things about the LIFE project is that it allowed us to significantly expand our team. We have a very good mix of staff at different levels of their careers, from post-docs to undergraduate students. This will allow the team members to grow into their roles in the project and hopefully lead to a sustainable solution for our team, where we always have experienced team members on board to seamlessly include new staff.

CampusPlan app

Strictly speaking, the university’s CampusPlan app is not a LODUM activity (even though it has been developed by some of our team members). The big news for us is that it is entirely driven by LODUM data: everything shown in the app comes from our SPARQL endpoint. You can download the app for iOS or Android, or simply visit app.uni-muenster.de with your mobile device for a test drive.


Developing an app like this is a considerable effort – so why not share the code, so that others can learn from what we have done, and maybe tell us how to get better at what we are doing? We always had the feeling that at least some of the things we do might be of value for other developers. To collect all the code we are producing in one place, we have set up an organization account on GitHub. Besides the CampusPlan app, we already have some other repositories on github.com/lodum, with more to come in the next months.


Linked Data has become an integral part of ifgi’s course curriculum over the last semester. We have taught classes on Linked Data Engineering and Linked Science, the Linked Open Data (R)Evolution, and embraced Linked Data as a core technology in the Spatio-Temporal Information in Society course. The outcome of this course – a biographic thesaurus for the region of Westphalia – is just getting the finishing touches and will be presented here soon. This line of courses continues with a course on Linked Citizen Science, which is about to start next week.

Events and publications

At the International Semantic Web Conference in Boston, we held the second workshop on Linked Science, which had some very interesting submissions and discussions again. The proposal for the next workshop at ISWC 2013 in Sydney is currently under review. Of the numerous publications with LODUM participation, I would like to highlight the Semantic Web Journal special issue on Linked Data for Science and Education that was published earlier this year. It gives a nice overview of the current research activities around the use of Linked Data in research and teaching environments.

A fresh look for lodum.de

If you have been to our website before, you may have noticed that we were running parts of it on Tumblr. While this was fine to kickstart LODUM, we were increasingly frustrated by the limitations, so we built an entirely new website. It runs WordPress in the back, with our own theme based on Bootstrap. This is also the basis for our data pages, which will be using this new layout soon.

With all those new activities, there will be a lot more to report in the coming months. We will keep you posted here on the blog – promised!

Our colleagues at the spatio-temporal modelling lab offer an MSc thesis on “A Linked Open Data portal for annotating statistical datasets”:

The statistical datasets currently available in the Web often lack important information such as

  • descriptions where the spatial coordinates can be found,
  • which spatial coordinate system is used,
  • whether the data represents objects or a continuous phenomenon in space and time (fields),
  • what the observation window for a point pattern variable is.

In order to enable automated integration of this information in statistical software and hence to ensure meaningful analysis, the missing information should be queried from data providers or users and needs to be made accessible in the Web in a structured way.

The student will work on a Website that allows users to upload links to datasets available in the Web and add descriptions to these datasets. Useful description items need to be identified and existing methods for annotating spatio-temporal datasets need to be analysed. The dataset descriptions will be made accessible in the Web as Linked Open Data. To illustrate usage of the descriptions in statistical software, the SPARQL R package will be used to retrieve the descriptions and automatically import the annotated dataset in R.

Required Skills: Interest in spatial statistical analysis. Knowledge of common spatio-temporal data formats and Web technologies such as JavaScript and/or PHP. Knowledge of Linked Data technologies is an advantage, but can also be acquired during the thesis development.

Contact: Christoph Stasch, Simon Scheider, Edzer Pebesma.

In case you didn’t know why Open Access matters, here’s the whole story. Nicely illustrated by Jorge Cham of PhD Comics fame.

Slides from the presentation of our spatial@linkedscience paper at GIScience 2012 in Columbus, OH:

Carsten Keßler, Krzysztof Janowicz and Tomi Kauppinen (2012) spatial@linkedscience – Exploring the Research Field of GIScience with Linked Data. In Ningchuan Xiao, Mei-Po Kwan, Michael F. Goodchild, and Shashi Shekhar: Geographic Information Science. 7th International Conference, GIScience 2012, Columbus, OH, USA, September 18-21, 2012. Proceedings. Springer Lecture Notes in Computer Science Volume 7478: 102–115. DOI:10.1007/978-3-642-33024-7_8

Recently, we linked a number of bibliographic resources (~28k) as well as some organizations to the North Rhine-Westphalian Library Service Center’s (hbz) LOD dataset. The big deal is that the HBZ stores data about all available copies (exemplars) of bibliographic resources, such as their signature and the organization owning the copy. Consequently, by linking to the HBZ dataset we can now easily integrate further data on the fly and display, for example, all locations and signatures of existing copies on a map. Some examples:

In order to match the bibliographic resources, we initially used ISBNs and DOIs. We are still investigating matching frameworks like SILK or LIMES and trying to define more sophisticated linkage rules (titles, authors, time, space, …).