**Knowledge-dependent frequentist probabilities**

This is going to be a (relatively) geeky post which I tried to make understandable for lay people.

Given the important role that epistemological assumptions play in debates between theists and atheists, I deemed it necessary to first write a groundwork upon which more interesting discussions (about the existence of God, the historicity of Jesus, miracles, the paranormal…) will rest.

## Bayesianism and degrees of belief

In other posts I explained why I am skeptical about the Bayesian interpretation of probabilities as degrees of belief. I see no need to adjust the intensity of our belief in string theory (which is a subjective feeling) in order to do good science or to avoid irrationality.

Many Bayesians complain that if we don’t consider subjective probabilities, a great number of fields such as economics, biology, geography or even history would collapse.

This is a strong pragmatic argument for Bayesianism that I hear over and over again.

## Central limit theorem and frequencies

I don’t think this is warranted, for I believe that the incredible successes brought about by probabilistic calculations concern events which are **(in principle)** repeatable and therefore open to a frequentist interpretation of the related likelihoods.

According to the knowledge-dependent interpretation of frequentism I rely on, the probability of an event is its frequency if the **known circumstances** were to be *repeated* an infinite number of times.

Let us consider an ideal die which is thrown in a perfectly random way. Obviously we can only find approximations of this situation in the real world, but a computer simulation can reasonably do the job.

In the following graphics, I plotted the results for five series of trials.

The frequentist probability of the event “the die shows 3” is defined as

$$P(3) = \lim_{n \to \infty} \frac{n_3}{n},$$

that is, the limit of the frequency of “3” (the count $n_3$ divided by the number of trials $n$) as the number of trials tends to infinity.

This is a mathematical abstraction which never exists in the real world, but from roughly the 6000th trial onward the frequency is already a very good approximation of the probability: the law of large numbers guarantees that the frequency converges to the probability, and the central limit theorem tells us how quickly the fluctuations shrink.
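This convergence can be checked with a short simulation. Here is a minimal sketch in Python; the function name and the seed are my own choices for illustration, not something from the post:

```python
import random

def dice_frequency(n_trials, face=3, seed=42):
    """Throw a fair six-sided die n_trials times and return
    the relative frequency of `face` (here the face "3")."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials) if rng.randint(1, 6) == face)
    return hits / n_trials

# The frequency approaches the probability 1/6 ≈ 0.1667 as n_trials grows.
for n in (60, 600, 6000, 60000):
    print(n, dice_frequency(n))
```

Running this for increasing numbers of trials shows the frequency settling ever closer to 1/6, exactly as the definition above requires.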

Actually my **knowledge-dependent frequentist interpretation** allows me to consider the probability of unique events which have not yet occurred.

For example, a Bayesian wrote that *“the advantage of this view over the frequency interpretation is that it can deal with cases where there is no relative frequency to draw on: for example, Gigerenzer mentions the first ever heart transplant patient who was given a 70% chance of survival by the surgeon. Under the frequency interpretation that statement made no sense, because there had never actually been any similar operations by then.“*

I think there are many confusions going on here.

Let us call K the total knowledge of the physician, which might include the different bodily features of the patient, the state of his organs, and the risks of the novel procedure.

The frequentist probability would be defined as the ratio of surviving patients divided by the total number of patients undergoing the operation, if the **known circumstances** underlying K were to be repeated a very great (actually infinite) number of times.

Granted, for many people this does not seem as intuitive as the previous example with the die.

And obviously there existed no frequency the physician could have used to directly approximate the probability.

Nevertheless, this frequentist interpretation is by no means absurd.

The physician could very well have used Bayes’s theorem to approximate the probability while using only other frequentist probabilities as inputs, such as the probability that the body reacting in a certain way would be followed by death, or the probability that introducing a device into some organs could have lethal consequences.
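As a toy illustration of how such a combination could work, here is a sketch in Python; every number below is invented for the example, and none comes from the actual heart transplant case:

```python
# Hypothetical illustration: combining frequentist component probabilities
# with Bayes' theorem to estimate P(survival) for a novel operation.
# All numbers are made up for the sketch.

p_reaction_given_death = 0.8     # P(adverse reaction | death), from past surgeries
p_reaction_given_survival = 0.2  # P(adverse reaction | survival), from past surgeries
prior_survival = 0.5             # prior probability of survival before any observation

# Bayes' theorem: P(survival | adverse reaction observed)
numerator = p_reaction_given_survival * prior_survival
denominator = numerator + p_reaction_given_death * (1 - prior_survival)
posterior_survival = numerator / denominator

print(posterior_survival)  # 0.2 with these invented inputs
```

The point is only that each input can itself be read as a frequentist probability (a frequency over repeated known circumstances), so the final estimate inherits that interpretation.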

Another example is the estimation of the probability that it is going to rain tomorrow morning as you wake up.

While the situation you are confronted with might very well be unique in the whole history of mankind, the probability is well defined by the frequency of rain if all the circumstances you know of were to be repeated an extremely high number of times.

Given this extended, knowledge-dependent variant of frequentism, the probabilities of single events are meaningful, and many fields considered Bayesian (such as economic simulations, history or evolutionary biology) could just as well be interpreted according to this version of frequentism.

It has a great advantage: it allows us to bypass completely subjective degrees of belief and to focus on an objective concept of probability.

Now, some Bayesians could come and tell me that it is possible that the frequentist probabilities of the survival of the first heart transplant patient, or of the weather, **do not exist**: in other words, if the known circumstances were to be repeated an infinite number of times, the frequency would keep oscillating instead of converging to a fixed value (such as 1/6 for the die).

This is a fair objection, but such a situation would not only show that the frequentist probability does not exist but that the Bayesian interpretation is meaningless as well.

It seems utterly nonsensical to my mind to say that **every rational agent** ought to have a degree of belief of (say) 0.45 or 0.87 if the frequency of the event (given all known circumstances) would keep fluctuating between 0.01 and 0.99.

For in this case the event is completely unpredictable, and it seems entirely misguided to associate a probability with it.
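One can make this objection concrete with a toy sequence whose running frequency never settles down: blocks of successes and failures whose lengths grow geometrically. This construction is my own illustration, not an example from the post:

```python
def oscillating_frequency(n_trials):
    """Build a 0/1 sequence from alternating blocks of 1s and 0s whose
    lengths grow geometrically (1, 3, 9, 27, ...), so the running
    relative frequency of 1s oscillates forever instead of converging."""
    outcomes = []
    block, value = 1, 1
    while len(outcomes) < n_trials:
        outcomes.extend([value] * block)
        block *= 3
        value = 1 - value
    outcomes = outcomes[:n_trials]

    freqs, total = [], 0
    for i, x in enumerate(outcomes, 1):
        total += x
        freqs.append(total / i)
    return freqs

freqs = oscillating_frequency(10000)
# The running frequency keeps swinging roughly between 0.25 and 0.75,
# so no limiting value (and hence no frequentist probability) exists.
```

For such a process, neither a frequency limit nor, I would argue, a rational degree of belief is well defined.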

Another related problem is that in such a situation a degree of belief could be nothing more than a pure mind state **with no relation to the objective world** whatsoever.

As professor Jon Williamson wrote:

“*Since Bayesian methods for estimating physical probabilities depend on a given prior probability function, and it is precisely the prior that is in question here, this leaves classical (frequentist) estimation methods—in particular confidence interval estimation methods—as the natural candidate for determining physical probabilities. Hence the Bayesian needs the frequentist for calibration.*”

But if this frequentist probability does not exist, the Bayesian has absolutely no way to relate his degree of belief to reality since no prior can be defined and evaluated.

Fortunately, the incredible success of the mathematical treatment of uncertain phenomena (in biology, evolution, geology, history, economics and politics, to name only a few) shows that we are justified in believing in the meaningfulness of the probability of the underlying events, even if they might be quite unique.

In this way, I believe that many examples Bayesians use to argue for the indispensability of their subjectivist probabilistic concept ultimately fail because the same cases could have been handled using the frequentist concept I have outlined here.

However this still leaves out an important aspect: what are we to do about theories such as universal gravitation, string theory or the existence of a multiverse?

It is obvious no frequentist interpretation of their truth can be given.

Does that mean that without Bayesianism we would have no way to evaluate the relative merits of such competing models in these situations?

Fortunately no, but this will be the topic of a future post.

At the moment I would hate to kill the suspense :-)

Homepage of Lotharlorraine: link here

(List of topics and posts)

My other controversial blog: Shards of Magonia (link here)


Tags: Bayes, Bayesianism, frequentism, knowledge, priors, Probabilities, theorem, unique events

### 8 responses

### Trackbacks / Pingbacks

- The great duel: Ken Ham versus Bill Nye | lotharlorraine - February 8, 2014


Erm… that’s what they did. Frequency distributions are commonly used for priors and background evidence in Bayesian approaches; that doesn’t make them any less Bayesian.

So, as an evolutionary biologist, I should just keep doing what I always did and merely stop calling it a Bayesian approach and instead call it a frequentist approach? That sounds very Humpty-Dumpty-like to me…

I agree with Andy.

I’m not sure any Bayesian I’ve met does not understand degrees of belief as in some way frequentist: in all the possible situations where I could have this information, in what proportion of them would this fact be true. Isn’t that a reasonable definition of a degree of belief?

Of course there is a continuum of how reasonable such a belief estimate is, which is based on our ability to understand or intuit the underlying process giving rise to both evidence and the fact-in-question. But again, would a competent Bayesian disagree that some probability estimates are better founded than others, and when dealing with those that are very tenuous, the errors in the calculation can easily swamp any data?

Clearly folks like Carrier don’t understand the latter, but I don’t think phrasing in terms of ‘if you could repeat the situation many times with the known information, how many times would X happen’, helps. Carrier in his book explicitly gives a frequentist interpretation of what he’s doing. So I think he’d be fine with what you’ve written, he’d still make his estimates, and still not notice that the errors were so large his results are meaningless.

Thanks Ian!

“I’m not sure any Bayesian I’ve met does not understand degrees of belief as in some way frequentist: in all the possible situations where I could have this information, in what proportion of them would this fact be true. Isn’t that a reasonable definition of a degree of belief? ”

This is an interesting interpretation, but I doubt that all Bayesians agree with it.

Someone based his presentation of Bayes’s theorem on this very concept (possible worlds).

The problem is that there are certain abstract propositions such as “Numbers are real”, “God exists”, “We live in an infinite multiverse” which cannot obviously be expressed in that way.

To my mind, the greatest obstacle in the project of Dr. Richard Carrier is the extremely gross approximations which he will necessarily have to employ (sadly enough without being aware of them).

Each historical event is unique, but many can be regrouped within categories and used to compute statistics; I have absolutely no problem with that.

The biggest hurdle is that historical events which are singular (such as the origin of a new religion or very rapid political changes) stand alone in their category, and there is no reasonable way to approximate them as being similar to other events for estimating frequencies.

Do you think this is similar to your own concern, or something related but different?

There is no doubt that Carrier is convinced he possesses a ground-breaking and revolutionary approach, and that so few historians agree with him because they are blinded by their own prejudices and not because they think that his method is not workable in practice.

I think it would be great if his work were to receive a serious academic criticism. Until now most reviews of his book have been written by his fans, who celebrate him as the new Messiah and seem to be completely impervious to a critical examination of his ideas.

Otherwise, do you believe that people computing the probability of an Artificial Intelligence Apocalypse (the so-called singularity) fall prey to the same kind of problems you and I went into?

Cheers.

That there are plenty of historians who say that this approach is not workable in practice is news to me (and I’m also surprised at this fast reception, given that Proving History was only published half a year ago or so…). Could you name a few?

Also, two reading recommendations:

http://freethoughtblogs.com/carrier/archives/3923

http://freethoughtblogs.com/carrier/archives/3666

I know of very emotional responses to Carrier by Biblical historians (most often either secular or liberal Christians).

They reacted very strongly to his grandiose claims, but I find their tone clearly unfortunate.

R. Joseph Hoffmann published one interesting critique:

http://rjosephhoffmann.wordpress.com/2011/06/06/%CF%80-ness-envy-the-irrelevance-of-bayess-theorem/

Unlike many of his detractors, I agree with Carrier that historical events can be associated with a probability or likelihood, and according to Ian, Carrier himself would have few objections against my own frequentist approach.

http://irrco.wordpress.com/2012/09/08/a-mathematical-review-of-proving-history-by-richard-carrier/

But as I explained in my comment you quoted above, I believe that estimating the priors of the probabilities he is interested in cannot be done without resorting to extremely gross approximations or arbitrary assumptions.

And as Ian pointed out, even small uncertainties can produce a large variation in the output of the Bayesian calculation:

http://irrco.wordpress.com/2012/10/11/the-effect-of-error-in-bayess-theorem/

Maybe you can ask him for more details, he is (most likely) more competent than I am for this subject.

Cheers.

“Aviezer Tucker”? I don’t know this author.

I agree that historical reasoning is probabilistic in its very nature, but it is begging the question to call it “Bayesian”. The knowledge-dependent frequentist approach I have outlined here can perfectly account for the historical process as well; there is no need to consider subjective degrees of belief.

Now I am not sure that Tucker believes (like Carrier does) that one can practically compute such likelihoods. Maybe all he was saying is that the historical process is probabilistic, while accepting that it is impossible to calculate the values in most practical situations.

I should read his book but I have a very long list on my shelf :-)

By the way, I don’t like going to Carrier’s blog due to the arrogant tone he all too often uses towards his opponents, theists and atheists alike.

Again, there is no difference whatsoever between what you are proposing and what people already do in practice. You are mixing up methodology with philosophical interpretations.