1. Understanding the Data Model

All information in Linked Data is structured in RDF. The main objective of using RDF in a database is to express data as triples, which record relationships between pieces of data from which further information can be inferred. Each of these relationships is known as a statement, built from a subject, a predicate and an object.

For example:

<subject> <predicate> <object> .

<http://data.europeana.eu/item/2021604/C2D27CB79870761BE291A3FACAB963F62D7CA39B> <http://purl.org/dc/terms/creator> "Picasso" .

<http://collection.britishmuseum.org/id/object/YCA62958> rdf:type ecrm:E22_Man-Made_Object .

To see this in context, let’s go to the British Museum’s SPARQL endpoint (http://collection.britishmuseum.org/sparql) and explore some of these data elements and their relationships.

• 1.1. Paste the text below in the endpoint’s text box and click Submit.

  SELECT ?sub ?pred ?obj
  WHERE { ?sub ?pred ?obj . }
  LIMIT 10

This displays a table containing the different triples used to define elements of specific items in the collection. It is simply a list showing (up to 10 results) triples from across the database.

As you can see from the example above, there are different elements that help define a particular object. Let’s try to do the same with some Wikidata content, starting with a basic search to identify things made by Picasso.

• 1.2. Paste the text below in Wikidata’s (https://query.wikidata.org) endpoint’s text box and click on the blue triangle to submit.

SELECT ?thing ?thingLabel WHERE {
  ?thing wdt:P170 wd:Q5593.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE]". }
}

You will notice that in Wikidata we can hover over a specific data element to see what it is that we are describing. Here, made by is represented by wdt:P170, a property that identifies a creator. In this case we have chosen Picasso, who has his own identifier, wd:Q5593.

At the bottom of the page we can now see all the things created by Picasso. We can also visualise this as a graph.

• 1.3. Click on the eye (table view) icon at the bottom of the page and change the view to Graph. This will visualise these 20 results as graph bubbles. Each bubble represents a thing created by Picasso.


Let’s explore these items a bit further. After clicking on one of the objects (things) we can see that there are other Linked Data elements attached to it: instance of – Painting, genre – Portrait and creator – Pablo Picasso. This way we can visualise the diverse data elements that, when linked, produce the semantic relationships that make the collection description more accurate.
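The relationships we just saw on the bubbles can also be folded back into the query itself. As a sketch (wdt:P170 comes from the query above; wdt:P31 instance of and wdt:P136 genre are standard Wikidata properties; the share-link URL format is an assumption about the Query Service), a few lines of JavaScript can assemble an extended query:

```javascript
// Sketch: extend the Picasso query with the two properties seen on the
// graph bubbles: instance of (wdt:P31) and genre (wdt:P136).
var query =
  "SELECT ?thing ?type ?genre WHERE {\n" +
  "  ?thing wdt:P170 wd:Q5593 .\n" +              // created by Picasso
  "  ?thing wdt:P31 ?type .\n" +                  // e.g. painting
  "  OPTIONAL { ?thing wdt:P136 ?genre . }\n" +   // e.g. portrait
  "}";

// The Wikidata Query Service loads a URL-encoded query placed after the #:
var url = "https://query.wikidata.org/#" + encodeURIComponent(query);
console.log(url);
```

Opening that URL loads the extended query in the same editor we used above.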

Each of these data elements further describes the statement; that is, each one shows us a particular subject, predicate and object relationship contained in that node.

Other nodes contain more than one statement and are depicted as little flowers. These flowers contain two or more circles, showing us that there are further statements describing the object.

Now let’s see what data elements are used to describe Picasso as a creator.

• Click on Pablo Picasso once to hook it to the bubble.

• Once hooked, click on that same Pablo Picasso node again to expand it.

We are now able to see all the different Linked Data elements attached to Picasso that help the computer (and us) understand what this object is and make inferences about it.

Each Linked Data system has its own data model (ontology) that provides the system with a specific interpretation of the different entities. This way, neither the organisations (people) nor the knowledge systems (computers) get confused about what an entity means.

Let’s visualise all the elements that describe what a creator is.

• Copy the link below to your browser to open this visualisation of the subclasses of creator.


From these results we can see that a creator is a person, which in turn is an agent and an individual.

There is a wide range of data models, or ontologies, that specialise in describing very specific fields. The Europeana Data Model in particular has been created by combining all these different ontologies. You can click on any of the links to access their websites and find out more about them.

Prefix Namespace URI Description
cc https://creativecommons.org/ns Creative Commons
dc http://purl.org/dc/elements/1.1/ Dublin Core
dcterms http://purl.org/dc/terms/ Dublin Core Metadata Initiative (DCMI) Metadata Terms
edm http://www.europeana.eu/schemas/edm/ Europeana Data Model
foaf http://xmlns.com/foaf/0.1/ FOAF (Friend of a Friend) Vocabulary
ore http://www.openarchives.org/ore/terms/ Open Archives Initiative Object Reuse and Exchange
owl http://www.w3.org/2002/07/owl# OWL Web Ontology Language
rdaGr2 http://rdvocab.info/ElementsGr2/ RDA Group 2 elements
rdf http://www.w3.org/1999/02/22-rdf-syntax-ns# Resource Description Framework
skos http://www.w3.org/2004/02/skos/core# Simple Knowledge Organization System
wgs84 http://www.w3.org/2003/01/geo/wgs84_pos# WGS84 Geo Positioning

Europeana has combined these diverse ontologies to better describe the collections from Galleries, Libraries, Archives and Museums (GLAM). In this case Europeana uses Dublin Core to describe creator; its URI is http://purl.org/dc/elements/1.1/creator. Let’s open it in a visualiser to see how Dublin Core in particular describes what a creator has to be.

• Copy the link below to your browser to open this visualisation of the subclasses of creator in Dublin Core.


After loading the visualisation, you should see the diverse terms used in Dublin Core. To find creator faster, type it into the search box at the bottom of the window; this will highlight its location.

The strict textual description of creator in Dublin Core is: “An entity primarily responsible for making the resource.”, with the comment “Examples of a Creator include a person, an organisation, or a service.”. Therefore, a creator does not necessarily need to be a person. In this case creator is a sub-class of Agent, an entity that has the capability of acting (performing a task).

Finally, let’s explore DBpedia, find Pablo Picasso, and try to analyse who influenced him and whom he influenced.

• Copy the link below to your browser to open this visualisation of Pablo Picasso in DBpedia.


What we see now is Pablo Picasso centred on our visualisation page.

• Click on the star of the Picasso node, then select Find content in visible nodes and type creator to display Pablo Picasso – type | Creator. This will open a new window for the Creator node; click on it to add it.

After adding the Creator node, you should see both nodes, Picasso and Creator, joined by the predicate type.

We can see that creator is a property that links out from Picasso. Now let’s click on Links In and then on Influenced by (42 links). This will expand the panel; let’s add Joan Miró by clicking on it.

Let’s carry on finding whom he influenced and who influenced him…

With this kind of visualisation, we can use Linked Data elements to find some interesting relationships. Nevertheless, it can be very time-consuming, and it is difficult to analyse all the different nodes available. For this reason, we can use SPARQL or other query methods to explore more complex relationships and data inferences.
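The influence relationships we just explored by clicking can also be fetched programmatically. As a sketch (assuming DBpedia’s public SPARQL endpoint at https://dbpedia.org/sparql and its dbo:influencedBy property; the helper name is ours), the request URL can be built like this:

```javascript
// Sketch: build a SPARQL request URL asking DBpedia who influenced Picasso.
function buildSparqlUrl(endpoint, query) {
  // SPARQL endpoints accept the query as a URL-encoded "query" parameter;
  // asking for JSON makes the results easy to consume from JavaScript.
  return endpoint +
    "?query=" + encodeURIComponent(query) +
    "&format=" + encodeURIComponent("application/sparql-results+json");
}

var influencesQuery =
  "PREFIX dbo: <http://dbpedia.org/ontology/>\n" +
  "SELECT ?influencer WHERE {\n" +
  "  <http://dbpedia.org/resource/Pablo_Picasso> dbo:influencedBy ?influencer .\n" +
  "}";

var url = buildSparqlUrl("https://dbpedia.org/sparql", influencesQuery);
console.log(url);
```

Opening the resulting URL in a browser (or fetching it with JQuery’s $.getJSON) returns the same relationships we explored visually, ready for further processing.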



CSDS – Exploring Semantic Web Collections


The Semantic Web focuses on producing a Web of linked data. The technologies used in the Semantic Web enable people to produce vocabularies and define specific rules for how such vocabularies are meant to be used. These rules then help the computer make inferences about how the data is linked, producing more accurate datasets. A growing number of Cultural Heritage organisations are now adopting linked data standards as the main infrastructure to support their collections. There is a wide range of data models or vocabularies, such as CIDOC CRM, Dublin Core and the Europeana Data Model (EDM). In this workshop we will focus on exploring collections in the Europeana repository through the EDM. Europeana currently holds millions of records from over 4,000 heritage organisations such as galleries, libraries, archives and museums (GLAM), among others. There are many challenges in how such collections are meant to be explored, such as the use of the SPARQL query language. This workshop will introduce participants to the Semantic Web and how to query such Linked Data collections. Finally, this session will use Europeana’s API to produce query interfaces to explore those collections, using the data model and producing some visualisations.


10.30: Welcome and Introductions

11.00-11.45: Introduction to the Semantic Web. New Challenges in Cultural Heritage

11.45-12.15: Coffee Break

12.15: Understanding Data Models (open taster lesson)

Querying the Semantic Web (Heritage)

The Europeana Data Model through SPARQL

1.30-2.30: Lunch

2.30: Querying through APIs (Europeana)

Advanced API Queries (bonus lesson)

* Interaction Development

Developing Interfaces for Exploring Information through APIs

* Further Visualising the Data

Introduction to JQuery UI

Implementing JQuery UI

4.00-4.30: Coffee Break

4.30-5.30: Interaction and Exploration (Tangible User Interfaces) — Potential

Final Output TUIO//Europeana

08. Introduction to TUIO

09. TUIO First Query

10. Final Build



Most of the work we will produce uses Open Source tools, and it does not require many computing resources.

To test some API calls you can download Postman.


We will be working with data from Europeana. Make sure you register for an API Key here:


You will need to work with some HTML, CSS and JavaScript (JQuery), so a good scripting/programming text editor can be handy. On macOS I use TextMate (https://macromates.com/); on Windows or Linux many people use Sublime Text (https://www.sublimetext.com/).

Finally, we are going to build some quick tangible queries using reacTIVision (http://reactivision.sourceforge.net/), so you will need a webcam. If your computer already has one, that will work fine; otherwise, any USB webcam will work as well. reacTIVision also provides an emulator that we can use to prototype interfaces.

We are also going to be testing some of these tools on the Web in a non-standard way. For the sake of the experiment, you will need to install a legacy version of Firefox, because Firefox has since disabled the option of installing the plugins that enable us to connect to reacTIVision. Please go to https://ftp.mozilla.org/pub/firefox/releases/, scroll down to Dir 48.0.1 (Firefox 48) and install that version on your Mac or PC. Finally, make sure you have access to the Web on the computer you are going to use.

Masterclass – Multimodal Engagements with Cultural Heritage

The Research Institute in the Humanities at Maynooth University has organised a Masterclass on ‘Multimodal Engagements with Cultural Heritage‘. This 3-day Masterclass is designed to introduce participants to methods of producing and re-using cultural heritage. Over three days, participants will learn how to convert physical objects to digital form and back to physical through 3D printing techniques, and then re-embed some of those physical objects with digital information.


I will be leading the second part, where we will design a Tangible User Interface to query data from Europeana. We will be using physical objects, embedding them with interactive properties to perform queries on Europeana’s repositories.


The interactive system that we are going to build combines a wide range of technologies, such as TUIO/reacTIVision, to connect physical objects to the computer and use them to produce queries through Europeana’s API using JQuery and JavaScript.

TUIO Table

You can see the interface working on my YouTube channel.


Learning Objectives

Day 1. The Semantic Web and Linked Data

The first part of the Masterclass will introduce basic concepts of how current Web technologies such as the Semantic Web are being used to enhance the quality of the information in cultural heritage organisations.


Part 1. Foundations of Semantic Web and the Europeana Data Model

Part 2. Europeana API

Day 2. Tangible Interaction on the Web

Europeana’s data is very complex and extensive. Once we understand how the data model uses the different semantic concepts to conceptualise the information, those semantic relationships and data fields can be used to query and visualise it according to our needs.


Part 3. Visualising Europeana Data

Extending Visualisations with JQuery UI

Part 4. Tangible Interaction on the Web


Through this Masterclass, we will also work with participatory design principles to explore what particular behaviours users might have when approaching this type of content. The main objective of this second part of the Masterclass is to re-think how we can interact with Cultural Heritage on the Web and how those interactions might take place.









Europeana TUIO – Final Build!

We have been building the web application using a wide range of services and libraries. We used JQuery and JQuery UI to change the way the different HTML objects react and look on the interface. We used the TUIO protocol to translate the data sent from the sensor (in this case a webcam) into data JavaScript can use. To build the library of actions, we used nptTuioClient and its plugin, attaching several functions that decide what the interface should do whenever a fiducial enters, moves or leaves the active area.
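As an illustration of that enter/move/leave pattern (the handler names and state object below are hypothetical stand-ins, not npTuioClient’s actual API), the action library boils down to three functions keyed on the fiducial id:

```javascript
// Hypothetical sketch of the enter/move/leave actions described above.
// The real plugin delivers the events; these handlers only show the shape
// of the state kept per fiducial.
var activeFiducials = {};

function onFiducialEnter(id, x, y, angle) {
  // A marker entered the active area: start tracking it,
  // e.g. show the search box for the search fiducial.
  activeFiducials[id] = { x: x, y: y, angle: angle };
}

function onFiducialMove(id, x, y, angle) {
  // The marker moved or rotated: update its state,
  // e.g. move the search box to follow it.
  if (activeFiducials[id]) {
    activeFiducials[id] = { x: x, y: y, angle: angle };
  }
}

function onFiducialLeave(id) {
  // The marker left the active area: stop tracking it,
  // e.g. hide the search box and clear the query.
  delete activeFiducials[id];
}
```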

The current object used to search attaches the query syntax to the API call to Europeana; by rotating it, we can change the term of the particular dataField that we want to reference.
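That rotation-to-term mapping can be sketched as a pure function. The field names and function below are illustrative (our own, not part of Europeana’s API): the full turn is divided into equal sectors, one per data field.

```javascript
// Illustrative sketch: map a fiducial's rotation angle (radians) to one of
// the data fields the search object can reference.
var dataFields = ["title", "creator", "subject", "year"]; // hypothetical field list

function angleToField(angle, fields) {
  var TWO_PI = 2 * Math.PI;
  // Normalise any angle (including negative ones) into [0, 2π).
  var normalised = ((angle % TWO_PI) + TWO_PI) % TWO_PI;
  // One equal sector of the turn per field.
  var index = Math.floor(normalised / (TWO_PI / fields.length));
  return fields[index];
}
```

With four fields, rotating the pyfo a quarter turn moves the query on to the next dataField.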

The list of fiducials being used is:


Using the printed fiducials.

Since we have been using the emulator, we haven’t had the chance to see how the interface reacts when we use the printed markers. Once we try it, we will finalise the fiducials (or pyfos) and build finished isometric shapes as paper volumes, making them graspable and manipulable.

To do this, we need to use the reacTIVision toolkit’s fiducial tracker instead of the emulator. The link can be found here:


Depending on your operating system, you will have to download the bundle and use the reacTIVision application inside.

In addition, reacTIVision has a large number of fiducials, and each one has its own id. Here is the link to the PDF file.



Running reacTIVision

When you open reacTIVision, the software should automatically recognise the video device currently available on your computer. Nevertheless, you might have more than one device plugged in and may want to point it at a specific device.

• 1. Open reacTIVision.

If your camera is detected and you can see the display, skip to the next section.


If your camera was not detected, reacTIVision will give you a notification and close.


To fix this, follow these steps:

• 1.a. Open the reacTIVision folder and then the calibration folder. Inside it, run the list_devices application. Once opened, it will give you a list of the different devices and the number that identifies each one. REMEMBER THIS NUMBER!

In this example, the camera to be used is number 2 (USB Camera).


• 1.b. Close the list_devices app by clicking the OK button.

• 1.c. Go back to the reacTIVision folder, right-click the reacTIVision app and select Show Package Contents.


This will open a new file browser window.

• 1.d. Open the Resources folder, and then the file camera.xml.

• 1.e. In camera.xml, change <camera id="Number"> to whatever number your camera was listed with. In this example it is 2.


• 1.f. Save the file, close the file explorer and open the reacTIVision application once again.

Viewing the final result

You should see the objects on the interface reacting in the same way as they did when we used the emulator.


If you move the pyfo and the #mySearch box moves to the opposite side, you can press i to flip the x or y axis of reacTIVision so that it matches your browser’s orientation.


• 2. Press to see the different options.

If the camera is having trouble with the light or detecting the markers, we can open the camera options to fix it.

• 3. Press to open the camera options and change the calibration.



Here is the final prototype working! Don’t forget to share and promote Tangible User Interfaces! 🙂