====== Querying the Semantic Web with SPARQL ======

^ Last verification: | 20180914 |
^ Tools required for this lab: | -- |

===== Before the lab =====

Video [minimum!]:
  * [[https://www.youtube.com/watch?v=FvGndkpa4K0|SPARQL in 11 minutes]]

Reading:
  * [[https://supportcenter.cambridgesemantics.com/sparql-by-example/slides.html|SPARQL by Example]]
  * {{:pl:dydaktyka:semweb:sparql-cheat-sheet.pdf|SPARQL by Example: the Cheat Sheet}} (from http://www.slideshare.net/LeeFeigenbaum/sparql-cheat-sheet)
  * [[#if_you_want_to_know_more|If you want to know more...]]

===== Lab instructions =====

During this lab we will use two services to execute SPARQL queries:
  - [[http://tw.rpi.edu/endpoint/sparql.html|SPARQLer]] (a general-purpose SPARQL query processor) will be used for querying RDF files.
  - [[http://yasgui.org/|YASGUI]] (Yet Another SPARQL GUI) will be used for querying SPARQL Endpoints (it has a more powerful editor, but it cannot be used against plain RDF files :-()

==== - Introduction [5 minutes] ====

  - What can we do with our RDF models? In this section some "magic" will happen on the [[wp>Periodic_table|Periodic Table]] saved in [[http://www.daml.org/2003/01/periodictable/PeriodicTable.owl|RDF]]!
  - Open **[[http://tw.rpi.edu/endpoint/sparql.html|SPARQLer]]**.
  - Paste ''http://www.daml.org/2003/01/periodictable/PeriodicTable.owl'' into the "Target graph URI (or use FROM in the query)" field and select the ''text output'' option.
    * There is also a backup (if the original URI cannot be resolved): ''http://krzysztof.kutt.pl/didactics/semweb/PeriodicTable.owl''
  - Run the following two queries (paste the code into the text field and click ''Get Results''):

<code sparql>
PREFIX table: <http://www.daml.org/2003/01/periodictable/PeriodicTable.owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?element ?name
WHERE {
  ?element table:group ?group .
  ?group table:name "Noble gas"^^xsd:string .
  ?element table:name ?name .
}
</code>

<code sparql>
PREFIX table: <http://www.daml.org/2003/01/periodictable/PeriodicTable.owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
CONSTRUCT { ?element rdfs:label ?name . }
WHERE {
  ?element table:group ?group .
  ?group table:name "Noble gas"^^xsd:string .
  ?element table:name ?name .
}
</code>

    * Both queries run on the same dataset.
    * Both queries extract the same data: the list of all elements in the [[wp>Noble_gas|Noble gases]] group, with their names.
    * Analyze the queries and the results: how do they differ?
  - 8-) What do ''SELECT'' queries do?
  - 8-) What do ''CONSTRUCT'' queries do?

==== - SPARQL = Pattern matching [10 minutes] ====

  * General idea: **SPARQL is an RDF graph pattern matching system.**
    * E.g., there is a triple saved in RDF: '':Hydrogen :standardState :gas .''
    * Now we can simply replace part of the triple with a question word (with a question mark at the start) and we get simple queries, e.g.:
      * //Query:// '':Hydrogen :standardState **?what** .'' \\ //Answer:// '':gas''
      * //Query:// ''**?what** :standardState :gas .'' \\ //Answer:// '':Hydrogen''
      * //Query:// '':Hydrogen **?what** :gas .'' \\ //Answer:// '':standardState''
  - Now, let's run some more queries against the Periodic Table. Prepare the following ones:
    * __elements which have a name and a symbol defined__
    * elements which have a name and a symbol defined __and are placed in the ''period_7'' period__
    * elements which have a name and a symbol defined and are placed in the ''period_7'' period __and have an OPTIONAL color__ (some of them do not have a color!)
    * elements which have a name and a symbol defined and are placed in the ''period_7'' period and have an OPTIONAL color, __sorted by name, descending__
  - 8-) Put the constructed queries in the report.
  * **Hints:**
    * The [[http://www.w3.org/TR/sparql11-query/#optionals|SPARQL 1.1 documentation]] may be useful for specifying optional values
    * [[http://xmlns.com/foaf/spec/|FOAF Vocabulary Specification]]

==== - Constraints: FILTER [10 minutes] ====

  * After matching an RDF graph pattern, it is also possible to put constraints on the rows that will be excluded from or included in the results. This is achieved using the FILTER construct. Let's try it now on the Periodic Table.
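  * As a minimal sketch of the FILTER syntax (run against the Periodic Table file in SPARQLer), the following query keeps only the elements whose name ends in "ium"; the ''"ium$"'' regular expression is just an illustration, not a solution to the tasks below:

<code sparql>
PREFIX table: <http://www.daml.org/2003/01/periodictable/PeriodicTable.owl#>
SELECT ?name
WHERE {
  ?element table:name ?name .
  # keep only the rows where the name matches the regular expression
  FILTER (regex(str(?name), "ium$"))
}
</code>

  * The same FILTER clause also accepts ordinary comparison operators (e.g. ''FILTER (?number > 10)'') and boolean combinations built with ''&&'' and ''||''.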
  * Prepare and execute queries to retrieve:
    * elements whose name starts with 's'
    * elements whose ''atomicNumber'' contains the digit '2'
    * elements whose name starts with 's' **or** whose ''atomicNumber'' contains the digit '2'; make the search case-insensitive
  * **Hints:**
    * The SPARQL 1.1 documentation parts about [[http://www.w3.org/TR/sparql11-query/#termConstraint|constraints]] and [[http://www.w3.org/TR/sparql11-query/#alternatives|alternatives]] may be useful
  * 8-) Put the queries in the report.

==== - SPARQL Endpoint [20 minutes] ====

  * SPARQL queries may be asked against an RDF file, as we did in the previous sections. But it is also possible to use a special-purpose web service called a SPARQL Endpoint. It wraps a data set and provides a service that responds to the SPARQL protocol, giving access to the data.
  * Many SPARQL Endpoints are available today, providing information about a variety of subjects. In this section we will use the [[http://dbpedia.org/|DBpedia]] SPARQL Endpoint at **''http://dbpedia.org/sparql''**.
  - DBpedia is a dump of Wikipedia annotated using RDF. So, like Wikipedia, DBpedia should contain some information about Poland. What can we do? \\ We don't know what URI Poland has in DBpedia, but we know the name Poland, and from the previous lab we know the ''rdfs:label'' property. Maybe this will help us? Let's try!
  - Open **[[http://yasgui.org/|YASGUI]]**.
  - What do we know so far? There should be some URI (''?country'') that probably has the relation ''rdfs:label'' with the object ''"Polska"@pl''. This can be easily translated into a SPARQL WHERE clause: ''?country rdfs:label "Polska"@pl .''
  - To execute this query properly, enter the ''http://dbpedia.org/sparql'' URI in the dropdown list at the top.
  - Then, specify the query:

<code sparql>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?country
WHERE {
  ?country rdfs:label "Polska"@pl .
}
</code>

  - Success! There is something that has the ''rdfs:label'' ''"Polska"@pl''! \\ 8-) Now expand this query to find information about the population of Poland and put the final query in the report.
    * Hint: the result should look like this:

<code>
--------------
| population |
==============
| 38483957   |
--------------
</code>

  - 8-) Prepare a query that returns a list of the 10 countries in Europe with the biggest population. Put the query in the report.

==== - Aggregation [15 minutes] ====

  * SPARQL provides grouping and aggregation mechanisms known from SQL:
    * grouping: GROUP BY
    * aggregation: COUNT, SUM, MIN, MAX, AVG, GROUP_CONCAT, and SAMPLE
    * filters on groups: HAVING
    * See the [[http://www.w3.org/TR/sparql11-query/#aggregates|SPARQL 1.1 documentation]] for a wider description.
  - Poland is divided into 16 voivodeships (PL: województwo), and then into 380 counties (PL: powiat). In this task, we will examine this more closely.
  - Prepare a query (in [[http://yasgui.org/|YASGUI]], against DBpedia) which returns the list of voivodeships and the number of counties inside them. The list should consist only of voivodeships with 7 or more counties and should be ordered by the number of counties.
  - The results should look like this:

<code>
------------------------------------------------
| voivodeship                       | counties |
================================================
| "Masovian Voivodeship"@en         | 15       |
| "Greater Poland Voivodeship"@en   | 12       |
| "Lesser Poland Voivodeship"@en    | 11       |
| "Podkarpackie Voivodeship"@en     | 10       |
| "Pomeranian Voivodeship"@en       | 9        |
| "Warmian-Masurian Voivodeship"@en | 9        |
| "West Pomeranian Voivodeship"@en  | 9        |
| "Opole Voivodeship"@en            | 8        |
------------------------------------------------
</code>

or in Polish:

<code>
--------------------------------------------------
| wojewodztwo                          | powiaty |
==================================================
| "Województwo mazowieckie"@pl         | 15      |
| "Województwo wielkopolskie"@pl       | 12      |
| "Województwo małopolskie"@pl         | 11      |
| "Województwo podkarpackie"@pl        | 10      |
| "Województwo pomorskie"@pl           | 9       |
| "Województwo warmińsko-mazurskie"@pl | 9       |
| "Województwo zachodniopomorskie"@pl  | 9       |
| "Województwo opolskie"@pl            | 8       |
--------------------------------------------------
</code>

  - **Hint** -- useful URIs:
    * county: ''http://dbpedia.org/resource/Powiat''
    * voivodeship: ''http://dbpedia.org/resource/Voivodeships_of_Poland''
  - 8-) Put the query in the report.

==== - SPARQL as rule language [10 minutes] ====

  * So far, we have seen that the answers to SPARQL queries can take the form of a table. In this section we will take a look at CONSTRUCT queries, whose answers take the form of an RDF graph. You have already seen one such example in the [[#introduction_5_minutes|Introduction]].
  * CONSTRUCT queries provide a way to introduce "rules" into RDF datasets:
  - Let's go back to the [[.:lab-rdfmodel2|The Bold and the Beautiful/The Game of Thrones]] model you prepared previously. You probably had the problem of deciding which relations should be placed in the RDF file: ''is_father_of'' or ''is_child_of'', or maybe both of them?
  - CONSTRUCT queries make this simpler. You can put just one of them in the initial data set; let's assume it was ''is_father_of''. Now, you can execute a CONSTRUCT query that creates the inverse relation (the ''bb:'' namespace URI below is a placeholder -- use the namespace of your own model):

<code sparql>
PREFIX bb: <http://example.org/bb#>
CONSTRUCT { ?child bb:is_child_of ?father . }
WHERE { ?father bb:is_father_of ?child }
</code>

  - Or maybe an ''is_uncle_of'' relation will be useful? No problem!

<code sparql>
PREFIX bb: <http://example.org/bb#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
CONSTRUCT { ?uncle bb:is_uncle_of ?child . }
WHERE {
  ?uncle bb:is_sibling_of ?parent ;
         a bb:Man .
  ?child bb:is_child_of ?parent
}
</code>

  - You don't have ''is_sibling_of''; instead you have ''is_sister_of'' and ''is_brother_of''. Simply prepare a query (or queries) that creates ''is_sibling_of'' for you.
    * 8-) Put this query in the report.
    * **Note:** this query has to be executed in [[http://tw.rpi.edu/endpoint/sparql.html|SPARQLer]] (not in YASGUI)
    * **Note 2:** your dataset has to be available online as a **RAW file**, e.g. if you have the file on Dropbox, you have to change the ''www.dropbox.com'' part of the shared link to ''dl.dropboxusercontent.com''
  * OK, we created some new RDF triples using a CONSTRUCT query. What now?
Depending on your plans, you can:
  * add these triples back to the original dataset, or
  * create a new dataset (e.g. save the results in an RDF file),
  * and then simply execute queries against this new knowledge.
  * 8-) Which 3 rules would be useful in your [[.:lab-rdfmodel2|model from the previous lab]]? Put 3 CONSTRUCT queries in the report.

==== - ASK and DESCRIBE queries [10 minutes] ====

SPARQL also provides two more query types: ASK and DESCRIBE.
  * **ASK queries** simply provide a Yes/No answer and no information about the matched triples (in the case of a "Yes" answer).
    * E.g.: Is there anything with the name "aluminium" in this data set?

<code sparql>
PREFIX table: <http://www.daml.org/2003/01/periodictable/PeriodicTable.owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
ASK { ?element table:name "aluminium"^^xsd:string . }
</code>

    If you run this query against the Periodic Table, the answer will be yes.
    * 8-) Prepare a query that checks something interesting in your [[.:lab-rdfmodel2|model]] :)
      * If you have no idea what you could check, you can simply prepare a query that checks whether there is anything that is a ''MusicCD'' and was published by ''Warner Music Group'' (if you don't have such classes in your library, use an analogous class that you have).
  * **DESCRIBE queries** return all knowledge associated with the given subject URI(s).
    * The simplest DESCRIBE query consists only of the keyword and the URI to be described: ''DESCRIBE <uri>'' (execute it against the ''http://www.daml.org/2003/01/periodictable/PeriodicTable.owl'' file, substituting the URI of any element)
    * It is also possible to select the URI(s) from the data set using constraints defined in a WHERE clause. Read about it in the [[http://www.w3.org/TR/sparql11-query/#describe|SPARQL 1.1 documentation]].
    * 8-) Prepare a query that describes all ''foaf:Person'' items from your [[.:lab-rdfmodel2|model]] (if you don't have a ''foaf:Person'' class in your library, use an analogous class that you have).
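The DESCRIBE-with-WHERE variant mentioned above can be sketched as follows (against the Periodic Table file; it reuses the ''table:name'' pattern from the ASK example and returns every triple about the matching element):

<code sparql>
PREFIX table: <http://www.daml.org/2003/01/periodictable/PeriodicTable.owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
# describe the resource(s) selected by the WHERE pattern
DESCRIBE ?element
WHERE { ?element table:name "aluminium"^^xsd:string . }
</code>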
==== - "Negation" under Open World Assumption [5 minutes] ====

  * RDF vs SQL:
    * RDF: Open World Assumption
    * SQL: Closed World Assumption
  * Let's imagine that we are preparing a query about all the living actors who played in [[wp>Return of the Jedi|Star Wars Episode VI: Return of the Jedi]].
  * A scheme of this query in SPARQL:

<code sparql>
SELECT ?actor
WHERE {
  ?actor :playedIn :ReturnOfTheJedi .
  FILTER NOT EXISTS { ?actor :diedOn ?deathdate . }
}
</code>

  * An idea scheme of this query in SQL:

<code sql>
SELECT actor_name FROM movies
WHERE title = "Return of the Jedi"
AND NOT EXISTS (SELECT * FROM deaths WHERE movies.actor_name = deaths.name);
</code>

  * These queries look the same, but they are different!
  * 8-) What are the Open World Assumption (OWA) and the Closed World Assumption (CWA)?
  * 8-) What is the difference between these two queries (refer to the knowledge of OWA and CWA)?

==== - Wikipedia, DBpedia, Wikidata ====

If you are interested in querying the huge amount of data available in Wikipedia, there are two projects you may be interested in:
  * [[https://dbpedia.org/|DBpedia]] -- an attempt to extract data from Wikipedia infoboxes and links (using developed parsers)
  * [[https://www.wikidata.org/|Wikidata]] -- an attempt to create an RDF base from scratch by the community (using the provided GUI)

They overlap in part, but are independent of each other and have different uses. For you, a student of the Semantic Web Technologies course, it does not matter much: they are simply large knowledge bases with which you can do a lot of things.\\ If you want to dive into this data, you can start with a **[[https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples|big set of SPARQL queries against Wikidata]]**.

===== Control questions =====

  * How do we create SPARQL queries?
  * What are the four SPARQL query types and how do they differ? What is the form of the result in each of them?
  * What is a SPARQL Endpoint?

===== If you want to know more...
=====

SPARQL:
  * [[http://www.w3.org/TR/sparql11-query/|SPARQL 1.1 Query Language]]
  * [[http://www.w3.org/TR/sparql11-overview/|SPARQL 1.1 Overview]]
  * [[http://www.cambridgesemantics.com/semantic-university/learn-sparql|Learn SPARQL @Cambridge Semantics]]
  * You can combine results from many SPARQL Endpoints in one query -- see [[https://www.w3.org/TR/sparql11-federated-query/|SPARQL Federated Query]] for more information.

Sample queries in SPARQL:
  * [[http://chem-bla-ics.blogspot.com/2018/09/wikidata-query-service-recipe.html|Wikidata Query Service recipe: qualifiers and the Greek alphabet]]
  * **[[https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples|Big set of SPARQL queries against Wikidata]]**
  * [[pl:dydaktyka:semweb:2014:projects:loddemo|Linked Open Data - demo]] (//in Polish//)
  * [[https://semantic-web.com/2015/09/29/sparql-analytics-proves-boxers-live-dangerously/|SPARQL analytics proves boxers live dangerously]]
  * [[http://www.snee.com/bobdc.blog/2017/11/sparql-queries-of-beatles-reco.html|SPARQL queries of Beatles recording sessions]]

Tools:
  * [[http://tw.rpi.edu/endpoint/sparql.html|SPARQLer]] -- a general-purpose tool for executing SPARQL queries
  * [[http://sparql.org/query-validator.html|SPARQLer Query Validator]]
  * [[http://yasgui.org/|YASGUI]] -- an online visual tool for querying SPARQL Endpoints
  * [[http://www.ldodds.com/projects/twinkle/|Twinkle: A SPARQL Query Tool]]
  * [[http://jena.apache.org/tutorials/sparql.html|Apache Jena -- SPARQL]]
  * [[https://en.wikipedia.org/wiki/GeoSPARQL|GeoSPARQL]] -- a standard for representing and querying geospatial data using RDF

Open Data Sets:
  * [[http://news.ycombinator.com/item?id=1493768|List of open/public databases]]

DB2RDF (RDF and Relational Databases):
  * [[http://esw.w3.org/RdfAndSql|RDFandSQL]]
  * [[http://www.w3.org/wiki/ConverterToRdf#SQL|ConvertToRDF -- SQL]]