

Reasoning on the Semantic Web

Before the lab

Software:

  • Protege Ontology Editor
  • Snoggle - A Graphical, SWRL-based Ontology Mapper

Introduction

Lab instructions

1 OWL Reasoning - Class Inference

  • In Description Logics, on which the OWL language is based, there are the following reasoning tasks for classes (terminology, TBox):
    1. Class subsumption (structuring the knowledge base)
    2. Class equivalence (Are two classes in fact the same class?)
    3. Class disjointness (Can classes A and B have common members?)
    4. Class consistency (Can class A have any members, i.e. is it satisfiable?)
  1. Model the following ontology axioms in Protege:
    Class: bus_driver
        EquivalentTo: 
            person
            that drives some bus
    
    Class: driver
        EquivalentTo: 
            person
            that drives some vehicle
    
    Class: bus
        SubClassOf: 
            vehicle


    Hints:

    1. To build the first axiom:
      1. Create appropriate classes

      2. and ObjectProperties:

      3. Define the BusDriver class with Equivalent classes button
    2. Build the second axiom analogously:

    3. Build the third axiom using the Superclasses button:

  2. Start the reasoner (Reasoner → start reasoner) and observe the inferred class hierarchy. What conclusions have been drawn?

    1. 8-) Include the modified ontology in the report.
    2. 8-) Explain the inferred relations and conclusions.
  3. Analyze the reasoning examples given here, in the „Class Inferences” section.
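The entailment the reasoner should find here (every bus_driver is a driver) can be illustrated with a plain-Python sketch. This is not a DL reasoner - it only evaluates the class definitions over one finite interpretation, with hypothetical individuals:

```python
# One finite interpretation of the axioms from the exercise.
# Hypothetical individuals: alice drives a bus, bob drives a car.
drives = {"alice": {"bus1"}, "bob": {"car1"}, "carol": set()}
persons = {"alice", "bob", "carol"}
buses = {"bus1"}
vehicles = buses | {"car1"}          # bus SubClassOf vehicle

# Class: bus_driver EquivalentTo person that drives some bus
def is_bus_driver(x):
    return x in persons and any(v in buses for v in drives[x])

# Class: driver EquivalentTo person that drives some vehicle
def is_driver(x):
    return x in persons and any(v in vehicles for v in drives[x])

# Since buses is a subset of vehicles, every bus_driver is a driver.
assert all(is_driver(x) for x in persons if is_bus_driver(x))
print(sorted(x for x in persons if is_driver(x)))   # ['alice', 'bob']
```

A real reasoner derives the subsumption bus_driver ⊑ driver from the axioms alone, for all possible interpretations, not just the one sampled here.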

2 OWL Reasoning - Instance Inferences

  • In Description Logics, on which the OWL language is based, there are the following reasoning tasks for instances (world description, ABox):
    1. Class membership (Is instance a a member of class C?)
    2. Instance retrieval (Find all (known) individuals belonging to a given class.)
  • and for the whole knowledge base (ABox + TBox):
    1. Global consistency of the knowledge base (Is the knowledge base meaningful?)
  1. Download the pre-prepared ontology and load it in Protege (NB: the ontology classes and those on screenshots may differ a little).
  2. Model the following ontology axioms in Protege:
    Individual: Daily_Mirror
        Types: 
            owl:Thing
    
    Individual: Q123ABC
        Types: 
            van,
            white_thing
    
    Individual: Mick
        Types: 
            male
        Facts: 
            reads  Daily_Mirror,
            drives  Q123ABC
    
    Class: white_van_man
        EquivalentTo: 
            man
            that drives some (van
            and white_thing)
        SubClassOf: 
            reads only tabloid


    For those not familiar with UK culture, White Van Man is a stereotype used to describe a particular kind of driver. Wikipedia provides an entry with some additional information and references.
    Hints:

    1. Create appropriate instances in Individuals tab
    2. If an instance has two types, create it only once and then add the second type:
    3. Add object properties to connect individuals:
    4. Define the class:
  3. Start the reasoner (Reasoner → start reasoner) and observe the inferred class hierarchy. What conclusions have been drawn?
    1. 8-) Include the modified ontology in the report.
    2. 8-) Explain the inferred relations and conclusions.
  4. Analyze the reasoning examples given here, in the „Instance Inferences” section.
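The kind of instance inference involved can be sketched by hand in plain Python. This is an illustration only, not a reasoner; the individuals and class names follow the exercise, and treating Mick as a man is an assumption (in the pre-prepared ontology this follows from his being a male person):

```python
# Hand-applied inference over the individuals from the exercise.
types = {"Daily_Mirror": {"owl:Thing"},
         "Q123ABC": {"van", "white_thing"},
         "Mick": {"male", "man"}}        # assumption: male person -> man
drives = {"Mick": "Q123ABC"}
reads  = {"Mick": "Daily_Mirror"}

# white_van_man EquivalentTo: man that drives some (van and white_thing)
if ("man" in types["Mick"]
        and {"van", "white_thing"} <= types[drives["Mick"]]):
    types["Mick"].add("white_van_man")

# white_van_man SubClassOf: reads only tabloid
# -> everything a white_van_man reads must be a tabloid
if "white_van_man" in types["Mick"]:
    types[reads["Mick"]].add("tabloid")

print(sorted(types["Daily_Mirror"]))
```

Note how the universal restriction (reads only tabloid) lets the reasoner conclude something new about the object of the property, not about Mick himself.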

3 OWL Reasoning - Reasoners

DL reasoners may be integrated with other tools - as in the case of Protege presented before - or run independently via various interfaces. Popular DL reasoners include FaCT++, Pellet, HermiT, RacerPro and many others.

We will use the Pellet reasoner.

  1. On the charon server, enter the /usr/local/pellet/ directory.
  2. Run pellet.sh help to get familiar with the available commands.
  3. Try pellet.sh consistency <ontology>, where <ontology> is:
    1. your ontology from the previous lab,
    2. the people+pets.owl ontology provided with Pellet in the examples/data/ directory,
      and observe the results.
  4. Try pellet.sh classify <ontology> with the two ontologies mentioned above and observe the results.
    1. 8-) What are the results? Write them down or provide a screenshot of the answer in the report.

4 Forward-chaining vs. Backward-chaining Inference

A quick reminder

As given in Dr Manfred Kerber's Introduction to AI course: 1)

  • Forward chaining or data-driven inference works from an initial state: by looking at the premises of the rules (IF-part), it performs the actions (THEN-part), possibly updating the knowledge base or working memory. This continues until no more rules can be applied or some cycle limit is reached, e.g.
    www.cs.bham.ac.uk_mmk_teaching_ai_figures_forward-rules.jpg
    • Problem with forward chaining: many rules may be applicable. The whole process is not directed towards a goal.
  • Backward chaining or goal-driven inference works towards a final state: it looks at the working memory to see if the goal is already there. If not, it looks at the actions (THEN-parts) of rules that would establish the goal, and sets up subgoals for achieving the premises of those rules (IF-part). This continues until some rule can be applied, which is then applied to achieve the goal state.
    www.cs.bham.ac.uk_mmk_teaching_ai_figures_backward-rules.jpg
    • Advantage of backward chaining: search is directed
    • Disadvantage of backward chaining: goal has to be known
  • Forward or backward reasoning? - four major factors:
    1. Are there more possible start states or goal states? Move from the smaller set of states towards the larger one.
    2. Does the program have to justify its reasoning? Prefer the direction that corresponds more closely to the way users think.
    3. What kind of event triggers problem-solving? If it is the arrival of a new fact, forward chaining makes sense. If it is a query to which a response is required, backward chaining is more natural.
    4. In which direction is the branching factor greatest? Go in the direction with the lower branching factor:
      1. backward chaining better:
        www.cs.bham.ac.uk_mmk_teaching_ai_figures_forward-branching.jpg
      2. forward chaining better:
        www.cs.bham.ac.uk_mmk_teaching_ai_figures_backward-branching.jpg
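The two strategies above can be sketched in a few lines of Python over propositional Horn rules (the atoms and rules here are made up purely for illustration):

```python
# Rules as (premises, conclusion) pairs over propositional atoms.
RULES = [({"a", "b"}, "c"), ({"c"}, "d"), ({"d", "e"}, "f")]

def forward_chain(facts):
    """Data-driven: fire every applicable rule until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, seen=frozenset()):
    """Goal-driven: the goal holds if it is a known fact, or if it is
    the conclusion of a rule whose premises can all be established."""
    if goal in facts:
        return True
    if goal in seen:              # avoid cyclic subgoals
        return False
    return any(conclusion == goal and
               all(backward_chain(p, facts, seen | {goal})
                   for p in premises)
               for premises, conclusion in RULES)

print(sorted(forward_chain({"a", "b"})))   # ['a', 'b', 'c', 'd']
print(backward_chain("d", {"a", "b"}))     # True
```

Forward chaining derives everything derivable (c and d, but not f, which needs e); backward chaining only explores the subgoals needed to answer the query - which is exactly the directedness trade-off described above.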

How about the Semantic Web?

As given in Semantic Web Programming:

  • Choosing the right inference method is often a matter of assessing requirements and constraints and determining which method works best for a given application.
  • Most frameworks provide forward-chaining knowledge bases, because:
    • they are easier to implement,
    • the additional work during insertion and removal operations is acceptable,
    • forward chaining prioritizes query performance over insertion and removal performance - and in most applications queries are the most common operation.
  • Backward chaining is often desirable when ontologies are volatile or when KB modifications (incl. statement removal) are frequent.
    • Backward chaining may be necessary when working with distributed reasoning systems with no centralized KB → there is no place to store new entailments, so queries must be expanded and distributed using a backward-chaining approach.

5 Semantic Web Rule Languages

Motivation for using rules beyond ontologies

Rules are a popular and intuitive knowledge representation method. Some things are impossible to express with ontologies, or can be expressed only in a very complicated manner.

Some reasons for rules on the Semantic Web are the following:

  1. No support for Property composition in OWL 1 (this is partially solved in OWL 2 RL)
    1. e.g. it was not possible to express relations as: hasUncle(?nephew , ?uncle) ← hasParent (?nephew , ?parent ) ∧ hasBrother (?parent , ?uncle)
  2. Use of built-ins
    1. Built-ins allow for common transformations of data (e.g. mathematical operations, conditional checks, datatype/unit conversions)
  3. Ontological Mediation
    1. Mapping resources between different ontologies
  4. Limiting Assumptions
    1. Rules can be used to limit OWL's open world assumption or to support the unique name assumption. Both the closed world assumption (CWA) and the unique name assumption (UNA) are often needed in various applications.
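The hasUncle rule from point 1 can be evaluated by simple forward chaining over binary facts, e.g. in Python (a toy family; all names are hypothetical):

```python
# Facts as sets of binary tuples (subject, object).
has_parent  = {("tom", "anna")}
has_brother = {("anna", "john")}

# hasUncle(?n, ?u) <- hasParent(?n, ?p) ∧ hasBrother(?p, ?u)
# The shared variable ?p is realized by joining on the parent.
has_uncle = {(n, u)
             for (n, p) in has_parent
             for (p2, u) in has_brother
             if p == p2}

print(has_uncle)   # {('tom', 'john')}
```

This join over a shared variable is exactly the property composition that OWL 1 could not express and that SWRL rules provide.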

Semantic Web Rule Language (SWRL)

SWRL is a rule language based on OWL, using a subset of RuleML rules modeled on Horn clauses. SWRL is undecidable in the general case; various subsets of SWRL have been defined to regain decidability.

  1. Download and analyze the family.swrl ontology.
  2. Open it in Protege.
  3. Enable the rule view (Window → Views → Ontology views → Rules) and read the SWRL rules.
  4. 8-) Propose an application that would use this or similar set of rules. Describe the system in a few sentences.

6 Mapping ontologies with SWRL Rules

Rules can also be used to align two or more ontologies. If you want to make your data compatible with ontologies used worldwide (and in this way enable external tools to use your data), you may sometimes need to map the concepts of your ontology to an external one.

An example tool supporting ontology mapping is Snoggle which is „a graphical, SWRL-based ontology mapper to assist in the task of OWL ontology alignment. It allows users to visualize ontologies and then draw mappings from one to another on a graphical canvas. Users draw mappings as they see them in their head, and then Snoggle turns these mappings into SWRL/RDF or SWRL/XML for use in a knowledge base.”

Here only a small example of using Snoggle will be presented:

  1. Run Snoggle.
     java -jar /usr/local/snoggle/snoggle.jar
  2. Load two ontologies (people+pets and foaf):
    1. Ontology → To Ontology → Load Ontology: http://xmlns.com/foaf/spec/index.rdf
  3. State that anyone who is an ns0:person is also a foaf:Person
    1. Drag the ns0:person item and drop it in the center canvas in the body.
    2. Drag the foaf:Person item and drop it in the center canvas in the head.
    3. Click and drag the small square at the center of the ns0:person element to the center of the foaf:Person element.
  4. Create a more complex rule: anyone who is a ns0:elderly that ns0:has_pet var1 (the Snoggle convention for referencing variables) is a foaf:Person who has a foaf:interest in var1, has the foaf:nick PetLady [1a] (you can change the variable name [1b]), and has foaf:gender „Female” (a string) [2].
  5. See the generated SWRL rules (File → Preview).
  6. 8-) Save the file and put it in the report archive.
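For reference, the simple mapping from step 3 should correspond, in human-readable SWRL syntax, to a rule of roughly this form (assuming ns0 is the prefix Snoggle assigns to the people+pets ontology):

```
ns0:person(?x) → foaf:Person(?x)
```

The more complex rule from step 4 has the same shape, with several atoms conjoined in the body and in the head.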

If you want to know more

1)
See also:
  • Rules (from an Intelligent Systems course)
pl/dydaktyka/semweb/reasoning.1411378953.txt.gz · last modified: 2019/06/27 15:55 (external edit)