

Probabilistic Programming --- Medical Cases

This class covers use cases of Bayesian methods in the medical domain. The first part of the class is based on the article “Local computations with probabilities on graphical structures and their application to expert systems” by Steffen L. Lauritzen and David J. Spiegelhalter. The second part is inspired by “An intercausal cancellation model for Bayesian-network engineering” by S. P. Woudenberg, L. C. van der Gaag, and C. M. Rademaker, published in the International Journal of Approximate Reasoning.

Medical Diagnosis

In this section we will follow a simplified medical diagnosis use case, as defined in the following quote from the article:

“Shortness-of-breath (dyspnoea) may be due to tuberculosis, lung cancer or bronchitis, or none of them, or more than one of them. A recent visit to Asia increases the chances of tuberculosis, while smoking is known to be a risk factor for both lung cancer and bronchitis. The results of a single chest X-ray do not discriminate between lung cancer and tuberculosis, as neither does the presence or absence of dyspnoea.”

Structure

The first task of the “knowledge engineer” is to find a structure of a Bayesian network which fits the story. There exist automatic tools to learn the structure from examples, but in this case the structure should be clear enough to create the network by hand.

Assignments

  1. Draw (on paper?) a Bayesian network describing the story from the previous section.
  2. Write the corresponding ProbLog program (a minimal sketch of the encoding style is shown after this list):
    1. there is no need for first-order logic here
    2. use arbitrary probabilities
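
To illustrate the expected encoding, here is a minimal, partial sketch covering only a fragment of the network; the probabilities are arbitrary and the atom names (smoker, lung_cancer, bronchitis) are just a suggestion:

0.3::smoker.
0.1::lung_cancer :- smoker.     % smoking raises the chance of lung cancer
0.01::lung_cancer :- \+smoker.  % small chance without smoking
0.25::bronchitis :- smoker.
0.05::bronchitis :- \+smoker.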

Probabilities

The problem with the Bayesian model you've just created is that it doesn't provide any useful information, mostly because of the arbitrary prior probabilities you've used. Reality is rather harsh: often you don't have access to any realistic priors (one of the arguments of the critics of Bayesian methods). In this section we will try to make up for that and make the network useful.

Learning

The simplest way to have realistic priors is to not have any priors at all :) In other words: we assume we know nothing about the probabilities. In ProbLog you can state this fact by using the t(_) annotation, e.g.

t(_)::smoker.

This says that you know nothing about the probability of the patient being a smoker.

Now that we have admitted our lack of knowledge, we can start learning! In ProbLog, learning can be done either with the command line tool:

problog lfi

or in the on-line editor by simply choosing Learning from the list.

In both cases you have to provide some learning examples, which consist simply of evidence atoms, with examples separated by a dashed line; e.g. two different patients can be described as:

evidence(smoker).
evidence(\+visitedAsia).
evidence(\+tuberculosis).
evidence(\+lung_cancer).
evidence(\+dyspnea).
evidence(\+xray_positive).
----------------
evidence(\+xray_positive).
evidence(tuberculosis).
evidence(visitedAsia).
evidence(\+lung_cancer).
evidence(dyspnea).
evidence(\+smoker).
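
Assuming the model is saved as model.pl and the examples above as evidence.pl (the file names are just an assumption), the command line invocation could look like the line below; check problog lfi --help for the exact options:

problog lfi model.pl evidence.pl -O learned_model.pl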

The learning should result in a new model with the learned probabilities.

If you receive an “Inconsistent Evidence” error, include leak probabilities in the model. A leak probability states that a random variable can take a value without any particular reason. E.g. below we state that the variable var can't be true if no reason is true (a leak probability of 0.0); to actually resolve inconsistent evidence you would use a small positive value instead.

t(_)::var :- reason1.  % var may be caused by reason1
t(_)::var :- reason2.  % ... or by reason2
0.0::var.              % leak probability: var never holds without any reason

The command line tool can add leak probabilities automatically (check problog lfi --help).

Assignments
  1. replace all probabilities in your model with t(_)
  2. put some random learning data in the on-line IDE and check the results of the learning
  3. can you think of any possible difficulties with this approach?
    1. try to use it in the on-line IDE
    2. try to use it offline (via the command line)
      1. you may have to ask the teacher to install ProbLog for you
      2. you may have to limit the number of iterations the learning takes
    3. what are the learned probabilities?
    4. what is the probability that a smoker with a positive X-ray has lung cancer? (a query sketch is shown after this list)
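
The last question can be answered with a conditional query. A minimal sketch, assuming the atom names used in the evidence examples above:

evidence(smoker, true).
evidence(xray_positive, true).
query(lung_cancer).

ProbLog then reports the conditional probability P(lung_cancer | smoker, xray_positive).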

Learning + Priors

The previous section was neat, but reality is harsh and it's really difficult to get good learning data on 10 000 patients. Therefore we need to compromise and combine priors with learning. To do that in ProbLog it's enough to put a probability inside the t/1 annotation; e.g. to say that 50% of the society smokes, but you are not sure of this, you would write:

t(0.50)::smoker.

Now, let's say we have found some wise books that say:

  • 30% of the population smokes
  • 0.01% of the population has been in Asia
  • dyspnea is mostly caused by asthma and by causes other than TB, lung cancer, or bronchitis
  • smoking has greater impact on lung cancer than bronchitis
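
One possible encoding of this knowledge (the concrete values in the last three lines are assumptions; only the relative ordering comes from the books):

t(0.3)::smoker.                % 30% of the population smokes
t(0.0001)::visitedAsia.        % 0.01% of the population has been in Asia
t(0.7)::dyspnea.               % high leak: dyspnea mostly has causes outside the model
t(0.2)::lung_cancer :- smoker. % smoking impacts lung cancer more...
t(0.1)::bronchitis :- smoker.  % ...than bronchitis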

Assignments

  1. introduce the knowledge from the wise books in your model
  2. download data of 100 random patients and do the learning
    1. do you see any problem with the learned model?
    2. fix the problem ;) (one possible fix is sketched below)
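
One issue you may run into (an assumption about the data, not a certainty): with only 100 patients, very rare events such as visitedAsia may never occur in the sample, so the learned probability can collapse to 0. Since ProbLog's LFI only re-estimates probabilities annotated with t/1, one possible fix is to keep such rare priors fixed instead of learnable:

0.0001::visitedAsia. % fixed prior; without a t/1 annotation it will not be re-estimated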

Sampling

Now, if you want to generate learning data yourself, you can do it via so-called sampling of the model. To do that you have to add queries for every variable you want to sample and then use one of:

  • problog sample
  • select Sample in the on-line IDE
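
For example, to sample complete patient records, add a query for every variable to the model (atom names as in the earlier sections):

query(smoker).
query(visitedAsia).
query(tuberculosis).
query(lung_cancer).
query(dyspnea).
query(xray_positive).

and then run something like the line below (the -N flag for the number of samples is an assumption here; check problog sample --help):

problog sample model.pl -N 100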

Assignments

  1. generate 100 random patients and feed them to the learning
    1. it may be easier to use the command line tool ;)

Predicting Treatment Effects

Every doctor has to prescribe some kind of treatment. Our task is to provide them with tools that could help predict the treatment's effects. Here is a new story:

A doctor can prescribe two different medications for patients with primary type-1 osteoporosis. The risk of bone fracture associated with osteoporosis can be reduced to some small extent by treatment. Two common treatments for osteoporosis are calcium supplementation and medication with bisphosphonates. However, concurrent intake of both medications fully cancels out the effect of the bisphosphonates, while the effect of the calcium supplementation is cancelled out only partially.
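
A minimal sketch of how this story could be encoded in ProbLog (the atom names and all probability values are assumptions, not taken from the article):

t(_)::calcium.          % patient receives calcium supplementation
t(_)::bisphosphonates.  % patient receives bisphosphonates

% each treatment alone reduces the fracture risk a little; taken together,
% the bisphosphonate effect is fully cancelled and the calcium effect only partially
0.30::fracture :- \+calcium, \+bisphosphonates. % baseline risk
0.25::fracture :- calcium, \+bisphosphonates.   % small reduction
0.25::fracture :- \+calcium, bisphosphonates.   % small reduction
0.28::fracture :- calcium, bisphosphonates.     % only part of the calcium effect remains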
