# Are miracles improbable natural events?

German version: Sind Wunder unwahrscheinliche Naturereignisse?

Stefan Hartmann is one of the most prominent scholars working in the philosophy of probability.

In an interview for the University of Munich, he discussed a well-known faith story from the Old Testament in order to illustrate some concepts in a provocative way.

*****

Interviewer: Let us start at the very beginning of the Old Testament. In the Book of Genesis, God reveals to the hundred-year-old Abraham that he will become a father. Why should Abraham believe this?

Hartmann: If we receive a new piece of information and wonder how to integrate it into our belief system, we start by analysing it according to different criteria.
Three of them are especially important: the initial plausibility of the new information, its coherence with what we already believe, and the reliability of the source.
These factors often point in the same direction, but sometimes there are tensions, as in this example.
We are dealing with a highly reliable source, namely God, who always speaks the truth.
However, the information itself is very implausible: hundred-year-old people don’t have children. And it is incoherent: becoming a father at the age of one hundred doesn’t fit our belief system.
Now we have to weigh up all these considerations and decide whether or not we should admit this information into our belief system. When God speaks, we have no choice but to do so. But if anyone else came up with this information, we would presumably not do so, because the lack of coherence and plausibility would be overwhelming.
The problem for epistemology is how to weigh these three factors against each other.

*****

It must be clearly emphasised that neither the interviewer nor Hartmann believes in the historicity of this story about God and Abraham. It is only used as an illustration of epistemological (i.e. knowledge-related) problems.

As a progressive Christian, I consider that this written tradition appeared rather late, so that its historical foundations are uncertain.

Still, from the standpoint of the philosophy of religion it remains a vital text, and it lies at the very core of the “leap of faith” of the Danish philosopher Søren Kierkegaard.

For that reason, I want to examine Hartmann’s interpretation, for I believe it illustrates a widespread misunderstanding among modern intellectuals.

I am concerned with the following sentences, which I have underscored:

However, the information itself is very implausible: hundred-year-old people don’t have children. And it is incoherent: becoming a father at the age of one hundred doesn’t fit our belief system.

According to Hartmann’s explanation, it sounds as if the Lord had told Abraham: “Soon you’ll have a child in a wholly natural way.”

And in that case, I can see why there would be a logical conflict.

But this isn’t what we find in the original narrative:

Background knowledge: hundred-year-old people don’t have children in a natural way.

New information: a mighty supernatural being promised Abraham that he would become a father through a miracle.

Put that way, there is no longer any obvious logical tension.

The “father of faith” can only conclude from his prior experience (and that of countless other people) that such an event would be extremely unlikely under purely natural circumstances.

This says nothing about God’s ability to bring about the promised son in another way.

Interestingly enough, one could say the same thing about advanced aliens making the same assertion.

The utter natural implausibility of such a birth is no argument at all against the possibility that superior beings might be able to bring it about.

## Did ancient people believe in miracles because they didn’t understand natural processes?

A closely related misconception is the idea that religious people of the past believed in miracles because their knowledge of the laws of nature was extremely limited.

As C.S. Lewis pointed out, it is misleading to say that the first Christians believed in the virgin birth of Jesus because they didn’t know how pregnancy works.

On the contrary, they were well aware of these facts and viewed this event as God’s intervention for that very reason.

Saint Joseph would never have considered repudiating his fiancée if he hadn’t known that a pregnancy without prior sexual intercourse goes against the laws of nature.

Although Professor Hartmann is doubtless an extremely intelligent person, I think he missed the main point.

Are we open to the existence of a God whose actions do not always correspond to the regular patterns of nature, and whose purposes might not always be understood by human reason?

But as the progressive Evangelical theologian Randal Rauser has argued, I think that the true epistemological and moral conflict only begins when, many years later, God demands that Abraham sacrifice his son, a command which overthrows our deepest moral intuitions.

Like the German philosopher Immanuel Kant before him, Rauser strongly doubts that such a command is compatible with God’s perfection.

# The crazy bookmaker and the Cult of probability

## A Critique of the Dutch Book Argument

Many neutral observers agree that we are witnessing the formation of a new religion among hopelessly nerdy people.

I’m thinking, of course, of what has been called hardcore Bayesianism: the epistemology according to which every proposition (“Tomorrow it’ll rain”, “String theory is the true description of the world”, “There is no god”, etc.) has a probability which can and should be computed under almost every conceivable circumstance.

In a previous post I briefly explained the two main theories of probability, frequentism and Bayesianism. In another post, I laid out my own alternative view, called “knowledge-dependent frequentism”, which attempts to keep the objectivity of frequentism while taking into account the limited knowledge of the agent. An application to the theory of evolution can be found here.

It is not uncommon to hear Bayesians describe their view of probability as a life-saving truth you cannot live without, or, a bit more modestly, as THE “key to the universe“.

While trying to win new converts, they often put it as if it were all about accepting Bayes’ theorem, whose truth is certain since it has been mathematically proven. This is a tactic I’ve seen Richard Carrier employ repeatedly.

I wrote this post as a reply, to show that frequentists accept Bayes’ theorem as well, and that the dispute isn’t about its mathematical demonstration but about whether one accepts that, for every proposition, there exists a rational degree of belief behaving like a probability.
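To make the point concrete: Bayes’ theorem itself is mere arithmetic that frequentists and Bayesians alike accept. Here is a minimal sketch in Python; the disease and test numbers are invented purely for illustration.

```python
# Bayes' theorem applied to hypothetical numbers: a disease with 1%
# prevalence, a test with 95% sensitivity and a 2% false-positive rate.
# Nothing in this calculation depends on one's interpretation of probability.

def posterior(prior, likelihood, false_positive_rate):
    """P(H|E) = P(E|H)*P(H) / [P(E|H)*P(H) + P(E|~H)*P(~H)]."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

p = posterior(prior=0.01, likelihood=0.95, false_positive_rate=0.02)
print(round(p, 3))  # probability of disease given a positive test: 0.324
```

A frequentist reads these numbers as long-run frequencies in a large population; a Bayesian reads them as degrees of belief. The dispute is over the interpretation, not over the theorem.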

## Establishing the necessity of probabilistic coherence

One very popular argument for establishing this is the “Dutch Book Argument” (DBA). I think it is no exaggeration to say that many committed Bayesians venerate it with almost the same devotion a Conservative Evangelical feels towards the doctrine of Biblical inerrancy.

Put forward by Ramsey and de Finetti, it defines a very specific betting game whose participants are threatened with a sure loss (“being Dutch-booked”) if their betting rates do not satisfy the basic axioms of probability, the so-called Kolmogorov axioms (I hope my non-geeky readers will forgive me one day for becoming so shamelessly boring…):

1) the probability of an event is always a non-negative real number

2) the probability of the event regrouping all possibilities (the sure event) is equal to 1

3) the probability of the union of disjoint events is equal to the sum of their individual probabilities
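For a finite set of mutually exclusive and exhaustive outcomes, these three axioms boil down to a check one can write in a few lines of Python (a sketch of my own, not part of the DBA itself):

```python
# Check whether a probability assignment over disjoint, exhaustive
# outcomes satisfies the Kolmogorov axioms listed above.

def is_coherent(probs):
    """probs maps each elementary outcome to its assigned probability."""
    nonnegative = all(p >= 0 for p in probs.values())    # axiom 1
    total_is_one = abs(sum(probs.values()) - 1) < 1e-9   # axioms 2 and 3
    return nonnegative and total_is_one

print(is_coherent({"snow": 0.65, "no snow": 0.35}))  # True
print(is_coherent({"snow": 0.65, "no snow": 0.70}))  # False
```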

The betting game upon which the DBA rests is defined as follows. (You can skip this more technical part; its comprehension isn’t necessary for following the basic thrust of my criticism of the DBA.)

## A not very wise wager

Let us consider an event E upon which it must be wagered.

The bookmaker fixes a sum of money S (say 100 €) that a person R (the Receiver) will get from a person G (the Giver) if E comes true. In exchange, R has to pay p*S to G beforehand, where p is the betting rate announced by the bettor.

The bookmaker himself decides who is going to be R and who is going to be G.

Holding fast to these rules, one can demonstrate that a clever bookmaker can set things up so that any bettor whose rates do not respect the laws of probability will lose money regardless of the outcome of the event.

Let us consider, for example, a bettor who wagers on the propositions

1) “Tomorrow it will snow” with P1 = 0.65, and on

2) “Tomorrow it will not snow” with P2 = 0.70.

P1 and P2 violate the laws of probability, because the sum of the probabilities of these two mutually exclusive and exhaustive events should be 1 instead of 1.35.

In this case, the bookmaker would choose to be G and first collect P1*S + P2*S = 100*1.35 = 135 € from his bettor R. Afterwards, he wins in both cases:

– It snows. He must give 100 € to R because of 1). The bookmaker’s gain is 135 € – 100 € = 35 €.

– It doesn’t snow. He must give 100 € to R because of 2). The bookmaker’s gain is also 135 € – 100 € = 35 €.

Let us now consider the same example where this time the bettor comes up with P1 = 0.20 and P2 = 0.30, whose sum is well below 1.

The bookmaker would choose to be R, paying 0.20*100 = 20 € for the snow bet and 0.30*100 = 30 € for the no-snow bet. Again, he wins in both cases:

– It snows. The bettor must give 100 € to R (the bookmaker) because of 1). The bookmaker’s gain is –30 – 20 + 100 = 50 €.

– It does not snow. The bettor must give 100 € to R (the bookmaker) because of 2). The bookmaker’s gain is also –30 – 20 + 100 = 50 €.

In both cases, having P1 and P2 fulfil the probability axioms would have been both a necessary and sufficient condition for avoiding the sure loss.

The same demonstration can be generalized to all other basic axioms of probabilities.
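The two numerical cases above can be condensed into a few lines of Python. This is only a sketch of the scenario just described; since exactly one of the two exclusive bets pays out whatever the weather does, the bookmaker’s guaranteed gain doesn’t even depend on the outcome:

```python
S = 100  # stake in €

def bookmaker_gain(p1, p2):
    """Bookmaker's guaranteed gain against betting rates p1 ('snow')
    and p2 ('no snow'). Exactly one of the two bets pays S in either
    outcome, so the gain is outcome-independent."""
    if p1 + p2 > 1:
        # Bookmaker plays G: collects (p1 + p2)*S up front, pays S back.
        gain = (p1 + p2) * S - S
    else:
        # Bookmaker plays R: pays (p1 + p2)*S up front, collects S back.
        gain = S - (p1 + p2) * S
    return round(gain, 2)

print(bookmaker_gain(0.65, 0.70))  # 35.0 — sure profit, rates sum to 1.35
print(bookmaker_gain(0.20, 0.30))  # 50.0 — sure profit, rates sum to 0.50
print(bookmaker_gain(0.65, 0.35))  # 0.0 — coherent rates leave no Dutch book
```

Only rates summing to exactly 1 leave the bookmaker with nothing to exploit, which is the whole point of the argument.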

## The thrust of the argument and its shortcomings

The Dutch Book Argument can be formulated as follows:

1) It is irrational to get involved in a bet you are bound to lose.

2) One can set up a betting game such that, for every proposition, you are doomed to lose if the sums you set do not satisfy the rules of probability; otherwise you are safe.

3) Thus you would be irrational if the amounts you set broke the rules of probability.

4) The amounts you set are identical to your psychological degrees of belief.

5) Hence you would be irrational if your psychological degrees of belief did not behave like probabilities.

Now I could bet any amount you wish that there are demonstrably countless flaws in this reasoning.

### I’m not wagering

One unmentioned premise of this purely pragmatic argument is that the agent is willing to wager in the first place. In the large majority of situations, where there is no opportunity for him to do so, he wouldn’t be irrational if his degrees of belief were non-probabilistic, because there would be no monetary stakes whatsoever.

Moreover, a great number of human beings refuse on principle ever to bet, and would of course face no such threat of “sure loss”.

Since it is a thought experiment, one could of course modify it in such a way that:

“If you don’t agree to participate, I’ll take you to Guatemala, where you’ll be water-boarded until you give in.”

But in my eyes and those of many observers, this would make the argument look incredibly silly and convoluted.

### I don’t care about money

Premise 1) is far from being airtight.

Let us suppose you’re a billionaire who happens to enjoy betting moderate amounts of money for various psychological reasons. Let us further assume your sums do not respect the axioms of probability, and as a consequence you lose 300 €, that is, 0.00003% of your wealth, while enjoying the whole game. One must use an extraordinarily question-begging notion of rationality to call you “irrational” in such a situation.

### Degrees of belief and actions

It is simply not true that our betting amounts HAVE to be identical, or even closely related, to our psychological degrees of belief.

Suppose a lunatic bookie threatens to kill my children if I refuse to engage in a series of bets concerning insignificant political events in Chinese provinces I had never heard of before.

Being in a situation of total ignorance, my psychological degrees of belief are undefined and keep fluctuating in my brain. But since I want to avoid a sure loss, I make up amounts that behave like probabilities and will prevent me from getting “Dutch-booked”, i.e. amounts having nothing to do with my psychology.

So I avoid the sure loss even though my psychological states never behaved like probabilities at any moment.

### Propositions whose truth we’ll never discover

There are countless things we will never know (at least if atheism is true, as most Bayesians assume).

Let us consider the proposition “There exists an unreachable parallel universe which is fundamentally governed by a rotation between string theory and loop quantum gravity”, and many related assertions.

Let us suppose I ask a Bayesian friend: “Why am I irrational if my corresponding degrees of belief do not fulfill the basic rules of probability?”

The best thing he could answer me (based on the DBA) would be:

“Imagine we NOW had to set odds on each of these propositions. It is true we’ll never learn anything about them during our earthly life. But imagine my atheism was wrong: there is a hell, we are both stuck in it, and the devil DEMANDS that we abide by the sums we set back then.

You’re irrational because the non-probabilistic degrees of belief you’re holding right now mean you’ll get Dutch-booked by me in hell, in front of the malevolent laughter of fiery demons.”

Now, I have no doubt this might be a good joke for impressing a geeky girl who is not too picky (which is truly an extraordinarily unlikely combination).

But it is incredibly hard to take this as a serious philosophical argument, to say the least.

## A more modest Bayesianism is probably required

To their credit, many more moderate Bayesians have started backing away from the alleged strength and scope of the DBA, stating instead that:

“First of all, pretty much no serious Bayesian that I know of uses the Dutch book argument to justify probability. Things like the Savage axioms are much more popular, and much more realistic. Therefore, the scheme does not in any way rest on whether or not you find the Dutch book scenario reasonable. These days you should think of it as an easily digestible demonstration that simple operational decision making principles can lead to the axioms of probability rather than thinking of it as the final story. It is certainly easier to understand than Savage, and an important part of it, namely the “sure thing principle”, does survive in more sophisticated approaches.”

Given that the Savage axioms rely heavily on risk assessment, they are bound to concern events that my own knowledge-dependent frequentism can handle perfectly well, and I don’t see how they could justify the existence and probabilistic nature of degrees of belief having no connection with our current concerns (such as the evolutionary path through which a small sub-species of dinosaurs evolved countless years ago).

To conclude, I think there is a gigantic gap between:

– the fragility of the arguments for radical Bayesianism and its serious problems, such as magically turning utter ignorance into specific knowledge,

and

– the boldness, self-righteousness and terrible arrogance of its most ardent defenders.

I am myself not a typical old-school frequentist and do find valuable elements in Bayesian epistemology, but I find it extremely unpleasant to debate with disagreeable folks who are much more interested in winning an argument than in humbly improving human epistemology.

Thematic list of ALL posts on this blog (regularly updated)

My other blog on Unidentified Aerial Phenomena (UAP)

# On the probability of evolution

In the following post, I won’t try to calculate specific values but rather to explicate my own knowledge-dependent frequentist probabilities using particular examples.

The great evolutionary biologist Stephen Jay Gould was famous for his view that evolution follows utterly unpredictable paths, so that the emergence of any given species can be viewed as a “cosmic accident”.

He wrote:

We are glorious accidents of an unpredictable process with no drive to complexity, not the expected results of evolutionary principles that yearn to produce a creature capable of understanding the mode of its own necessary construction.

“We are here because one odd group of fishes had a peculiar fin anatomy that could transform into legs for terrestrial creatures; because the earth never froze entirely during an ice age; because a small and tenuous species, arising in Africa a quarter of a million years ago, has managed, so far, to survive by hook and by crook. We may yearn for a ‘higher answer’– but none exists”

“Homo sapiens [are] a tiny twig on an improbable branch of a contingent limb on a fortunate tree.”

Dr. Stephen Jay Gould, the late Harvard paleontologist, crystallized the question in his book “Wonderful Life”. What would happen, he asked, if the tape of the history of life were rewound and replayed? For many, including Dr. Gould, the answer was clear. He wrote that “any replay of the tape would lead evolution down a pathway radically different from the road actually taken.”

You’re welcome to complement my list by adding other quotations. 🙂

## Evolution of man

So, according to Stephen Jay Gould, the probability that human life would evolve on our planet was extremely low, because countless other outcomes were possible as well.

Here, I’m interested in what this probability p(Homo) means ontologically.

### Bayesian interpretation

For a Bayesian, p(Homo) means the degree of belief we should have that a young planet having exactly the same features as ours back then would harbor a complex evolution leading to our species.

Many Bayesians like to model their degrees of belief as betting amounts, but in this situation that seems rather awkward, since none of them would still be alive when the outcome of the wager becomes known.

Let us consider (for the sake of the argument) an infinite space, which necessarily contains an infinite number of planets perfectly identical to our earth (according to the law of large numbers).

According to traditional frequentism, the probability p(Homo) that a planet identical to our world would produce mankind is given as the limit of the ratio of primitive earths having brought about humans to the total number of planets identical to ours, for a large enough (actually endless) number of samples:

p(Homo) ≈ f(Homo) = N(Homo) / N(Primitive_Earths)
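We obviously cannot sample primitive earths, but the limiting-frequency idea itself is easy to illustrate with a toy Monte Carlo simulation. The value P_TRUE below is a purely hypothetical chance, chosen only to show the ratio f(Homo) settling down as the sample grows:

```python
import random

random.seed(42)
P_TRUE = 0.3  # hypothetical chance that a primitive earth yields humans
n_homo = 0    # N(Homo): planets on which humans evolved

# Track f(Homo) = N(Homo) / N(Primitive_Earths) as the sample grows.
for n_earths in range(1, 100_001):
    n_homo += random.random() < P_TRUE
    if n_earths in (100, 10_000, 100_000):
        print(n_earths, n_homo / n_earths)
```

The running ratio wanders at first and then stabilizes near the underlying value; traditional frequentism identifies p(Homo) with that limit.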

### Knowledge-dependent frequentism

According to my own version of frequentism, the planets considered in the definition of the probability do not have to be identical to our earth, but only to ALL PAST characteristics of our earth we’re aware of.

Let PrimiEarths’ be the name of such planets back then.

The probability of the evolution of human life would be defined as the limit p’(Homo) of

f’(Homo) = N’(Homo) / N(PrimiEarths’)

whereby N(PrimiEarths’) is the number of primitive planets in our hypothetical endless universe possessing all the features we are aware of on our own planet back then, and N’(Homo) is the number of such planets where human beings evolved.

It is my contention that if this quantity exists (that is, if the ratio converges to a fixed value as the size of the sample is enlarged), all Bayesians would adopt p’(Homo) as their own degree of belief.

But what if there were no such convergence? In other words, what if, as one considered more and more planets N(PrimiEarths’), f’(Homo) kept fluctuating between 0 and 1 without homing in on a fixed value?

If that were the case, it would mean that the phenomenon “human life evolving on a planet gathering the features we know” is completely unpredictable, and therefore cannot be associated with a Bayesian degree of belief either; such a degree of belief would mean nothing more than a purely subjective psychological state.
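Such non-convergence is nothing exotic mathematically. A contrived outcome sequence, built from blocks of identical outcomes whose length doubles each time, has a running frequency that oscillates forever between 1/3 and roughly 2/3 and thus defines no limit. This is a toy illustration of my own, not a model of evolution:

```python
def freq_after_block(k):
    """Relative frequency of 'success' once block k is complete.
    Block i has length 2**i; even-numbered blocks are all successes,
    odd-numbered blocks are all failures."""
    successes = total = 0
    for i in range(k + 1):
        size = 2 ** i
        total += size
        if i % 2 == 0:
            successes += size
    return successes / total

for k in range(1, 9):
    print(k, round(freq_after_block(k), 3))
```

Each odd-numbered block drags the frequency back down to exactly 1/3, while each even-numbered block pushes it back up toward 2/3: the ratio fluctuates indefinitely instead of homing in on a value.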

## Evolution of birds

I want to further illustrate the viability of my probabilistic ontology by considering another evolutionary event, namely the appearance of the first birds.

Let us define D as “Dinosaurs were the forefathers of all modern birds”, a view which has apparently become mainstream over the last few decades.

For a Bayesian, p(D) is the degree of belief about this event every rational agent ought to have.

Since this is a unique event of the past, many Bayesians keep arguing that it can’t be grasped by frequentism and can only be studied by adopting a Bayesian epistemology.

It is my contention that this can be avoided by resorting to my Knowledge-Dependent Frequentism (KDF).

Let us define N(Earths’) as the number of planets possessing all the features we are aware of on our modern earth (including, of course, the countless birds crowding the sky and the numerous fossils found underground).

Let us define N(Dino’) as the number of these planets where all birds originated from dinosaurs.

According to my frequentism, f(D) = N(Dino’) / N(Earths’), and p(D) is the limit of f(D) as the sample is increasingly enlarged.

If p(D) is high, this means that on most earth-like planets containing birds, the ancestors of those birds were gruesome reptilians.

But if p(D) is low (say 0.05), it means that among 100 planets having exactly the known features of our earth, the birds of only 5 would descend from the grand dragons of Jurassic Park.

Again, what would happen if p(D) didn’t exist because f(D) doesn’t converge as the sample is increased?

This would mean that, given our current knowledge, bird evolution is an entirely unpredictable phenomenon, for which there can be no objective degree of belief that every rational agent ought to share.

## A physical probability dependent on one’s knowledge

Throughout this post, my goal has been to argue for an alternative view of probability which combines the strengths of traditional frequentism and Bayesianism.

Like Frequentism, it is a physical or objective view of probability which isn’t defined in terms of the psychological or neurological state of the agent.

But like Bayesianism, it takes into account the fact that the knowledge of a real agent is always limited, and includes that limitation in the definition of the probability.

To my mind, Knowledge-Dependent Frequentism (KDF) seems promising in that it allows one to handle the probabilities of single events while upholding a solid connection to the objectivity of the real world.

In future posts I’ll start applying this concept to the probabilistic investigation of historical problems, as Dr. Richard Carrier is currently doing.


# Did Jesus think he was God?

The outstanding liberal Biblical scholar James McGrath wrote a thought-provoking post on this very topic.

I mentioned a few posts about Bart Ehrman’s recent book yesterday, and there are already a couple more. Larry Hurtado offered some amendments to his post, in light of feedback from Bart Ehrman himself. And Ken Schenck blogged about chapter 3 and whether Jesus thought he was God. In it he writes:

I think we can safely assume that, in his public persona, Jesus did not go around telling everyone he was the Messiah, let alone God.

But one must then ask whether there is good reason to regard the process that follows, in which Jesus comes to be viewed as the second person of the Trinity, as a legitimate or necessary one.

Schenck also criticizes Ehrman for giving voice to older formulations of scholarly views, as though things had not moved on.

The only people who think that Jesus was viewed as a divine figure from the beginning are some very conservative Christians on the one hand, and mythicists on the other. That in itself is telling.

I’d be very interested to see further exploration of the idea that, in talking about the “son of man,” Jesus was alluding to a future figure other than himself, and that it was only his followers who merged the two, coming up with the notion of a “return” of Jesus. It is a viewpoint that was proposed and then set aside decades ago, and I don’t personally feel like either case has been explored to the fullest extent possible. Scholarship on the Parables of Enoch has shifted since those earlier discussions occurred, and the possibility that that work could have influenced Jesus can no longer be dismissed.

But either way, we are dealing with the expectations of a human being, either regarding his own future exaltation, or the arrival of another figure. We simply do not find in Paul or in our earliest Gospels a depiction of Jesus as one who thought he was God.

Here was my response to that:

Well, I’m not really a Conservative Christian (since I reject a fixed Canon and find some forms of pan-en-theism philosophically interesting), but I do believe that Jesus was more than a mere prophet. Along with N.T. Wright, I think He viewed Himself as the new temple embodying God’s presence on earth.

I once defended the validity of C.S. Lewis’ trilemma, provided Jesus viewed himself as God.

I’m well aware that Jesus’ divine sayings in John’s gospel are theological creations.

But there is something curious going on here.

Many critical scholars think that the historical Jesus falsely predicted the end of the world in the Gospel of Matthew:

“Truly I say to you, this generation will not pass away until all these things take place.” Matthew 24:34

But if one accepts this, why could we not also accept the following saying,

“Jerusalem, Jerusalem, who kills the prophets and stones those who are sent to her! How often I wanted to gather your children together, the way a hen gathers her chicks under her wings, and you were unwilling. Behold, your house is being left to you desolate!…” Matthew 23:37-38

which is located just a few verses before Matthew 24:34? It seems rather arbitrary to accept the one while rejecting the other.

This verse is intriguing in many respects.

In it, Jesus implies his divinity without stating it explicitly. If it were a theological creation, like those in John’s Gospel, it seems strange that Matthew did not make this point much more often and more clearly elsewhere, if such was his agenda.

What’s more, the presence of Matthew 24:34 (provided it was a false prophecy) has some interesting consequences for the dating and intention of the author.

1) Let us suppose that Matthew made up the whole end of his Gospel out of theological wishful thinking, in order to prove that Christ is the divine Messiah.

If that is the case, it seems extremely unlikely he would write this one or two generations AFTER Jesus had perished.
This fact strongly militates for dating Matthew’s gospel as a pretty early writing.

2) Let us now suppose that Matthew wrote his Gospel long after Jesus’s generation had passed away.
He would certainly not have invented a saying in which his Messiah made a false prediction.
It appears much more natural to assume he reported a historical saying of Jesus as it was, because he deeply cared about truth, however embarrassing it might prove to be.

And if that is the case, we have good grounds for thinking he did not make up Matthew 23:37 either.

I’m not saying that what I have presented here is an air-tight case; it just seems the most natural way to go about this.

I think that historical events possess objective probabilities; geekily minded readers might be interested in my own approach.

To which James replied:

“Thanks for making this interesting argument! How would you respond to the suggestion that Jesus there might be speaking as other prophets had, addressing people in the first person as though God were speaking, but without believing his own identity to be that of God’s? I think that might also fit the related saying, “I will destroy this temple, and in three days rebuild it.””

Lotharson:

“That’s an interesting reply, James! Of course I cannot rule this out.

Still, in the verses before Jesus uses the third person for talking about God:

“And anyone who swears by the temple swears by it and by the one who dwells in it. 22 And anyone who swears by heaven swears by God’s throne and by the one who sits on it.”

and verse 36, “Truly I tell you, all this will come on this generation”, is a typical saying which Jesus attributes to himself.

And so it seems to me more natural that Jesus would have said something like:

For truly God says: “Jerusalem, Jerusalem, you who kill the prophets and stone those sent to you, how often I have longed to gather your children together, as a hen gathers her chicks under her wings, and you were not willing…”

James:

“Well, the same sort of switching back and forth between first person of God and the first person of the prophet is found in other prophetic literature, so I don’t see that as a problem. Of course, it doesn’t demonstrate that that is the best way to account for the phenomenon, but I definitely think it is one interpretative option that needs to be considered.”

I mentioned our conversation because I think it is a nice example of how one can disagree about a topic without being disagreeable towards one another.

Would not the world be in a much better state if everyone strove for this ideal?


# Invisible burden of proof

The progressive Evangelical apologist Randal Rauser has just written a fascinating post about the way professional Skeptics systematically deny claims they deem extraordinary.

I’ve talked about God and the burden of proof in the past. (See, for example, “God’s existence: where does the burden of proof lie?” and “Atheist, meet Burden of Proof. Burden of Proof, meet Atheist.”) Today we’ll return to the question beginning with a humorous cartoon.

This cartoon appears to be doing several things. But the point I want to focus on is a particular assumption about the nature of burden of proof. The assumption seems to be this:

Burden of Proof Assumption (BoPA): The person who makes a positive existential claim (i.e. who makes a claim that some thing exists) has a burden of proof to provide evidence to sustain that positive existential claim.

### Two Types of Burden of Proof

Admittedly, it isn’t entirely clear how exactly BoPA is to be understood. So far as I can see, there are two immediate interpretations, which we can call the strong and weak interpretations. According to the strong interpretation, BoPA claims that assent to a positive existential claim is only rational if it is based on evidence. In other words, for a person to believe rationally that anything at all exists, one must have evidence for that claim. I call this a “strong” interpretation because it proposes a very high evidential demand on rational belief.

The “weak” interpretation of BoPA refrains from extending the evidential demand to every positive existential claim a person accepts. Instead, it restricts it to every positive existential claim a person proposes to another person.

To illustrate the difference, let’s call the stickmen in the cartoon Jones and Chan. Jones claims he has the baseball, and Chan is enquiring into his evidence for believing this. A strong interpretation of BoPA would render the issue like this: for Jones to be rational in believing that he has a baseball (i.e. that a baseball exists in his possession), Jones must have evidence of this claim.

A weak interpretation of BoPA shifts the focus away from Jones’ internal rationality for believing he has a baseball and on to the rationality that Chan has for accepting Jones’ claim. According to this reading, Chan cannot rationally accept Jones’ testimony unless Jones can provide evidence for it, irrespective of whether Jones himself is rational to believe the claim.

So it seems to me that the cartoon is ambiguous between the weak and strong claims. Moreover, it is clear that each claim carries different epistemological issues in its train.

### Does a theist have a special burden of proof?

Regardless, let’s set that aside and focus in on the core claim shared by both the weak and strong interpretations which is stated above in BoPA. In the cartoon a leap is made from belief about baseballs to belief about religious doctrines. The assumption is thus that BoPA is a claim that extends to any positive existential claim.

I have two reasons for rejecting BoPA as stated. First, there are innumerable examples where rational people recognize that it is not the acceptance of an existential claim which requires evidence. Indeed, in many cases the opposite is the case: it is the denial of an existential claim which requires evidence.

Consider, for example, belief in a physical world which exists external to and independent of human minds. This view (often called “realism”) makes a positive existential claim above and beyond the alternative of idealism. (Idealism is the view that only minds and their experiences exist.) Regardless, when presented with the two positions of realism and idealism, the vast majority of people will recognize that if there is a burden of proof in this question, it is borne by the idealist who denies a positive existential claim.

Second, BoPA runs afoul of the fact that one person’s existential denial is another person’s existential affirmation. The idealist may deny the existence of a world external to the mind. But by doing so, the idealist affirms the existence of a wholly mental world. So while the idealist may seem at first blush to be making a mere denial, from another perspective she is making a positive existential claim.

With that in mind, think about the famous mid-twentieth century debate between Father Copleston (Christian theist) and Lord Russell (atheist) on the existence of God. Copleston defended a cosmological argument according to which God was invoked to explain the origin of the universe. Russell retorted: “I should say that the universe is just there, and that’s all.” With that claim, Russell is not simply denying a positive existential claim (i.e. “God exists”), but he is also making a positive existential claim not made by Copleston (i.e. “the universe is just there, and that’s all”).

In conclusion, the atheist makes novel positive existential claims as surely as the theist. And so it  follows that if the latter has a burden to defend her positive existential claim that God does exist, then the former has an equal burden to defend her positive existential claim that the universe is just there and that’s all.

Here was my response.
This is another of your excellent posts, Randal!

Unlike most Evangelical apologists, you’re a true philosopher of religion and don’t seem to be ideology-driven like John Loftus (for instance) obviously is. This makes it always a delight to read your new insights.

I think that when one is confronted with an uncertain claim, there are three possible attitudes:

1) believing it (beyond any reasonable doubt)
2) believing its negation (beyond the shadow of a doubt)
3) not knowing what to think.

Most professional Skeptics assume that if your opponent cannot prove his position (1), then he or she is automatically wrong (2), thereby utterly disregarding option 3).

All these stances can be moderated by probabilities, but since I believe that only events have probabilities, I don’t think one can apply probabilistic reasoning to God’s existence or to the reality of moral values.

While assessing a worldview, my method consists of comparing its predictions with the data of the real world. And if it makes no prediction at all (such as Deism), agnosticism is the most reasonable position unless you can develop cogent reasons for favoring another worldview.

Anyway, the complexity of reality and the tremendous influence of cultural and personal presuppositions on our perception of it make it very unlikely that we can know the truth with rational warrant, and should force us to adopt a profound intellectual humility.

This is why I define faith as HOPE in the face of insufficient evidence.
I believe we have normal, decent (albeit not extraordinary) evidence for the existence of transcendent beings. These clues would be deemed conclusive in mundane domains of inquiry such as drug trafficking or military espionage.
But many people consider the existence of a realm (or beings) out of the ordinary to be extremely unlikely to begin with.
This is why debates between true believers and hardcore deniers tend to be extraordinarily counter-productive and loveless.

The evidence is the same, but Skeptics consider a coincidence of hallucinations, illusions and radar malfunctions to be astronomically more plausible than visitors from another planet, universe, realm, or something else completely unknown.

In the future, I’ll argue that there really are a SMALL number of UFOs out there (if you stick to the definition “UNKNOWN Flying Objects” instead of taking it to mean starships populated by gray aliens).

Of course, the same thing can be said about (a small number of) genuine miracles.

# A mathematical proof of Ockham’s razor?


Ockham’s razor is a principle often used to dismiss out of hand alleged phenomena deemed to be too complex. In the philosophy of religion, it is often invoked for arguing that God’s existence is extremely unlikely to begin with owing to his alleged incredible complexity. A geeky brain is desperately needed before entering this sinister realm.

In an earlier post I dealt with some of the most popular justifications for the razor and made the following distinction:

Methodological Razor: if theory A and theory B do the same job of describing all known facts C, it is preferable to use the simplest theory for the next investigations.

Epistemological Razor: if theory A and theory B do the same job of describing all known facts C, the simplest theory is ALWAYS more likely.

Like last time, I won’t address the validity of the Methodological Razor (MR), which might be a useful tool in many situations.

My attention will be focused on the epistemological blade and its alleged mathematical grounding.


## Example: prior probabilities of models having discrete variables

### Presentation of the problem

We consider five functions that predict an output Y (e.g. the velocity of a particle in an agitated test tube) which depends on an input X (e.g. the rotation speed).

Those five functions themselves depend on a given number of unknown parameters $latex a_i$.

$latex f1(a1)[X]$
$latex f2(a1,a2)[X]$
$latex f3(a1,a2,a3)[X]$
$latex f4(a1,a2,a3,a4)[X]$
$latex f5(a1,a2,a3,a4,a5)[X]$

To make the discussion somewhat more accessible to lay people, we shall suppose that the $latex a_i$ can only take on five discrete values: {1,2,3,4,5}
Let us suppose that an experiment was performed.
For x = 200 rpm (rotations per minute), the measured velocity of the particle was y = 0.123 m/s.

Suppose now that for each function fi there is only one set of precise parameter values that allows it to reproduce the measurement E.
For example
f1(2)[200 rpm]= f2(1,3)[200 rpm]= f3(5,2,1)[200 rpm]=f4(2,1,4,5)[200 rpm]=f5(3,5,1,3,2)[200 rpm]= 0.123 m/s.

Now we want to evaluate the strength of the different models.
How are we to proceed?

Many scientists (including myself) would say that the five functions fit the data perfectly and that we would need further experiments to discriminate between them.


### The objective Bayesian approach

Objective Bayesians would have a radically different approach.
They believe that every proposition (“The grass is greener in England than in Switzerland”, “Within twenty years, healthcare in Britain will no longer be free”, “The general theory of relativity is true”…) is associated with a unique, precise degree of belief that every rational agent knowing the same facts ought to have.

They further assert that degrees of belief ought to obey the laws of probability using diverse “proofs” such as the Dutch Book Argument (but see my critical analysis of it here).

Consequently, if at time t0 we believe that model M has a probability p(M) of being true, and if at a later time t1 we obtain a new measurement E, the probability of M should be updated according to Bayes’ theorem:

$latex p(M|E) = \frac{p(E|M)\,p(M)}{p(E|M)\,p(M)+p(E|\overline{M})\,p(\overline{M})}$

p(M|E) is called the posterior, p(M) the prior, and p(E|M) the likelihood of the experimental values given the truth of model M; the denominator p(E|M)p(M) + p(E|non M)p(non M) is the total probability p(E) of the measurement.
A Bayesian framework can be extremely fruitful if the prior p(M) is itself based on other experiments.
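This updating scheme can be sketched in a few lines of Python; the prior and likelihood values below are purely illustrative and are not taken from any concrete experiment.

```python
# Minimal sketch of a Bayesian update of a model M against its negation.
# All numerical values are illustrative.

def bayes_update(prior_m, lik_e_given_m, lik_e_given_not_m):
    """Return the posterior p(M|E) from the prior p(M) and the two likelihoods."""
    total_prob_e = lik_e_given_m * prior_m + lik_e_given_not_m * (1 - prior_m)
    return lik_e_given_m * prior_m / total_prob_e

# A prior based on earlier experiments, updated on a new measurement E:
posterior = bayes_update(prior_m=0.5, lik_e_given_m=0.8, lik_e_given_not_m=0.2)
print(posterior)  # 0.8: the measurement supports M
```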

But at the very beginning of the chain of probability calculations, when assigning p(M), we are in a situation of “complete ignorance”, to use the phrase of philosopher of science John Norton.

Now back to our problem.

An objective Bayesian would apply Bayes’ theorem and conclude that the probability of a model fi is given by:

p(fi|E) = p(fi)*p(E|fi)/p(E), where p(E) = p(E|f1)*p(f1) + … + p(E|f5)*p(f5) is the total probability of the measurement.

Objective Bayesians apply the principle of indifference, according to which in utterly unknown situations every rational agent assigns the same probability to each possibility.

As a consequence, we get p(f1)=p(f2)=…=p(f5)=0.2

p(E|fi) is trickier to compute. It is the probability that E would be produced if fi were true. Applying the principle of indifference a second time, this time to the parameter values, each of the 5^i possible combinations of (a1, …, ai) is equally likely, and by assumption only one of them reproduces the measurement. Hence p(E|fi) = 5^(-i): the likelihood shrinks as the number of parameters grows, and the ratio O(i,j) = p(E|fi)/p(E|fj) = 5^(j-i) automatically favors the simpler model.

For this reason O(i,j) is usually referred to as an Ockham factor, because it penalizes the likelihood of complex models. If you are interested in the case of models with continuous real parameters, you can take a look at this publication. The sticking point of the whole demonstration is its heavy reliance on the principle of indifference.
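The discrete example can be checked numerically. The sketch below assumes, as stated above, that each parameter takes one of 5 values, that exactly one combination of values reproduces the measurement E, and that the principle of indifference is applied both to the five models and to the parameter values.

```python
from fractions import Fraction

N_VALUES = 5          # each parameter a_i can take 5 discrete values
MODELS = range(1, 6)  # model f_i has i free parameters

# Indifference over models: uniform prior p(f_i) = 1/5.
prior = {i: Fraction(1, 5) for i in MODELS}
# Indifference over parameter values: only 1 of the 5**i combinations fits E.
likelihood = {i: Fraction(1, N_VALUES**i) for i in MODELS}

total_prob_e = sum(prior[i] * likelihood[i] for i in MODELS)
posterior = {i: prior[i] * likelihood[i] / total_prob_e for i in MODELS}

for i in MODELS:
    print(i, float(posterior[i]))  # the simplest model dominates

# Ockham factor between f1 and f2: the one-parameter model is 5 times as likely.
print(likelihood[1] / likelihood[2])  # 5
```

With these assumptions the one-parameter model f1 ends up with a posterior of about 0.8, purely as an artifact of the double application of indifference.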

## The trouble with the principle of indifference

I already argued against the principle of indifference in an older post. Here I will repeat and reformulate my criticism.

### Turning ignorance into knowledge

The principle of indifference is not only unproven but also often leads to absurd consequences. Let us suppose that I want to know the probability of certain coins landing odd. After having carried out 10000 trials, I find that the relative frequency converges towards a given value, which was 0.35, 0.43, 0.72 and 0.93 for the last four coins I investigated. Let us now suppose that I find a new coin which I’ll never have the opportunity to test more than once. According to the principle of indifference, before having even started the trial, I should think something like this:

Since I know absolutely nothing about this coin, I know (or at least consider it extremely plausible) that it is as likely to land odd as even.

I think this is magical thinking in its purest form. I am not alone in that assessment.

The great philosopher of science Wesley Salmon (who was himself a Bayesian) wrote what follows: “Knowledge of probabilities is concrete knowledge about occurrences; otherwise it is useless for prediction and action. According to the principle of indifference, this kind of knowledge can result immediately from our ignorance of reasons to regard one occurrence as more probable than another. This is epistemological magic. Of course, there are ways of transforming ignorance into knowledge – by further investigation and the accumulation of more information. It is the same with all ‘magic’: to get the rabbit out of the hat you first have to put him in. The principle of indifference tries to perform ‘real magic’.”

Objective Bayesians often use the following syllogism for grounding the principle of indifference.

1) If we have no reason for favoring one of the outcomes over the others, we should assign the same probability to each of them

2) In an utterly unknown situation, we have no reason for favoring one of the outcomes

3) Thus all of them have the same probability.

The problem is that (in a situation of utter ignorance) we have not only no reason for favoring one of the outcomes, but also no grounds for thinking that they are equally probable.

The necessary condition in proposition 1) is obviously not sufficient.

This absurdity (and other paradoxes) led philosopher of science John Norton to conclude:

“The epistemic state of complete ignorance is not a probability distribution.”

The Dempster–Shafer theory of evidence offers us an elegant way to express indifference while avoiding absurdities and self-contradictions. According to it, a conviction is not represented by a single probability (a real value between 0 and 1) but by an uncertainty interval [belief(h); 1 – belief(non h)], belief(h) and belief(non h) being the degrees of trust one has in the hypothesis h and in its negation.

For an unknown coin, indifference according to this epistemology entails belief(odd) = belief(even) = 0, leading to the probability interval [0; 1].
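This interval representation is easy to make concrete. The small class below is only an illustrative sketch, not the API of an actual Dempster–Shafer library:

```python
# Illustrative sketch of Dempster-Shafer style uncertainty intervals.
# A conviction about h is the interval [belief(h), 1 - belief(not h)].

class BeliefInterval:
    def __init__(self, belief_h, belief_not_h):
        # The two degrees of trust may not sum to more than 1.
        assert 0 <= belief_h <= 1 and 0 <= belief_not_h <= 1
        assert belief_h + belief_not_h <= 1
        self.lower = belief_h           # belief committed to h
        self.upper = 1 - belief_not_h   # plausibility of h

    def interval(self):
        return (self.lower, self.upper)

# A thoroughly tested fair coin: the interval collapses to a point.
tested_coin = BeliefInterval(belief_h=0.5, belief_not_h=0.5)
print(tested_coin.interval())   # (0.5, 0.5)

# A completely unknown coin: indifference as total ignorance.
unknown_coin = BeliefInterval(belief_h=0.0, belief_not_h=0.0)
print(unknown_coin.interval())  # (0.0, 1.0)
```

Unlike the principle of indifference, this representation distinguishes the well-tested fair coin ([0.5; 0.5]) from the coin we know nothing about ([0; 1]).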

### Non-existing prior probabilities

Philosophically speaking, it is controversial to speak of the probability of a theory before any observation has been taken into account. The great philosopher of evolutionary biology Elliott Sober has a nice way to put it: “Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”

It is hard to see how prior probabilities of theories can be something more than just subjective brain states.

## Conclusion

The alleged mathematical demonstration of Ockham’s razor rests on extremely shaky ground because:

1) it relies on the principle of indifference, which is not only unproven but also leads to absurd and unreliable results

2) it assumes that a model already has a probability before any observation.

Philosophically this is very questionable. Now if you are aware of other justifications for Ockham’s razor, I would be very glad if you were to mention them.

# John Loftus, probabilities and the Outsider Test of Faith

John Loftus is a former fundamentalist who has become an outspoken opponent of Christianity, which he desires to debunk.

He has created what he calls the “Outsider Test of Faith” (OTF), which he describes as follows:

“This whole inside/outside perspective is quite a dilemma and prompts me to propose and argue on behalf of the OTF, the result of which makes the presumption of skepticism the preferred stance when approaching any religious faith, especially one’s own. The outsider test is simply a challenge to test one’s own religious faith with the presumption of skepticism, as an outsider. It calls upon believers to “Test or examine your religious beliefs as if you were outsiders with the same presumption of skepticism you use to test or examine other religious beliefs.” Its presumption is that when examining any set of religious beliefs skepticism is warranted, since the odds are good that the particular set of religious beliefs you have adopted is wrong.”

But why are the odds very low (instead of unknown) to begin with? His reasoning seems to be as follows:

1) Before we start our investigation, we should consider each religion to possess the same likelihood.

2) Thus if there are (say) N = 70000 religions, the prior probability of a particular religion being true is p(R)/70000, p(R) being the total probability that some religious worldview is true.

(I could not find a text where Loftus explicitly says this, but it seems to be what he means. I did, however, find one of the supporters of the OTF taking that line of reasoning.)

## Objective Bayesianism and the principle of indifference

This is actually a straightforward application of the principle of indifference followed by objective Bayesians:

In completely unknown situations, every rational agent should assign the same probability to all outcomes or theories he is aware of.

While this principle can seem pretty intuitive to many people, it is highly problematic.

In the prestigious Stanford Encyclopedia of Philosophy, one can read in the article about Bayesian epistemology:

“it is generally agreed by both objectivists and subjectivists that ignorance alone cannot be the basis for assigning prior probabilities.”

To illustrate the problem,  I concocted the following story.

Once upon a time, King Lothar of Lorraine had 1000 treasures he wanted to share with his people. He had 50000 red balls and 50000 white balls at his disposal in which to hide them.

Frederic the Knight (the hero of my trilingual Christmas tale) has to choose one of those balls in the hope of getting one of the “golden wonders”.

On Monday, Lothar distributes his treasures in a perfectly random fashion.
Frederic knows that the probability of finding the treasure in a red or in a white ball is the same: p(r) = p(w) = 0.5

On Tuesday, the great king puts 10% of the treasures within red balls and 90% within white ones.

Frederic knows that the probabilities are p(r) = 0.10 and p(w) = 0.90.

On Wednesday, the sovereign lord of Lorraine puts 67% of the treasures in red balls and 33% in white ones.

Frederic knows that the probabilities are p(r) = 0.67 and p(w) = 0.33.

On Thursday, Frederic does not know what the wise king did with his treasures. He could have distributed them in the same way he did on one of the previous days, but he could also have chosen a completely different method.

Therefore Frederic does not know the probabilities: p(r) = ? and p(w) = ?

According to the principle of indifference, Frederic would be irrational in admitting his ignorance: he ought to believe that p(r) = 0.5 and p(w) = 0.5, on the grounds that it is an unknown situation.

This is an extremely strong claim, and I could not find in the literature any hint as to why Frederic would be irrational in accepting his ignorance of the probabilities.

Actually, I believe that quite the contrary is the case.

If the principle of indifference were true, Fred should reason like this:

“I know that on Monday my Lord mixed the treasures randomly, so that p(r) = p(w) = 0.5.
I know that on Tuesday He distributed 10% in the red ones and 90% in the white ones, so that p(r) = 0.10 and p(w) = 0.90.
I know that on Wednesday He distributed 67% in the red ones and 33% in the white ones, so that p(r) = 0.67 and p(w) = 0.33.
AND
I know absolutely nothing about what He did on Thursday, therefore I know that the probabilities are p(r) = p(w) = 0.5, exactly like on Monday.”

Now I think that this reasoning seems intuitively silly and even absurd to many people. There is simply no way to transform utter ignorance into specific knowledge.

### Degrees of belief of a rational agent

More moderate Bayesians will probably agree with me that it is misguided to speak of a knowledge of the probabilities in the fourth case. Nevertheless they might insist that Frederic should have the same confidence that the treasure is in a white ball as in a red one.

I’m afraid this changes nothing about the problem. On Monday Frederic has a perfect warrant for feeling the same confidence.
How can he have the same confidence on Thursday if he knows absolutely nothing about the distribution?

So Frederic would be perfectly rational in believing that he does not know the probabilities p(r) = ? and p(w) = ?

Likewise, an alien having just landed on earth would be perfectly rational not to know the initial likelihood of the religions:
p(Christianity) = ?     p(Islam) = ?     p(Mormonism) = ? and so on and so forth.

But there is an additional problem here.

The proposition “religion x is the true one” is not related to any event, and non-Bayesian (and moderate Bayesian) philosophers doubt that it is warranted to speak of probabilities in such a situation.

Either x is true or false, and this cannot be related to any kind of frequency.

The great philosopher of science Elliott Sober (who is sympathetic to Bayesian epistemology) wrote this about the probability of a theory BEFORE any data has been taken into account:

“Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”

He rightly reminds us at the beginning of his article that “it is not inevitable that all propositions should have probabilities. That depends on what one means by probability, a point to which I’ll return. The claim that all propositions have probabilities is a philosophical doctrine, not a theorem of mathematics.”

So, it would be perfectly warranted for the alien either to confess his ignorance of the prior likelihoods of the various religions or perhaps even to consider that these prior probabilities do not exist, as Elliott Sober did with the theory of gravitation.

In future posts, I will lay out a non-Bayesian way to evaluate the goodness of a theory which depends only on the set of all known facts and does not assume the existence of a prior probability before any data has been considered.

As we shall see, many of the probabilistic challenges of Dr. Richard Carrier against Christianity largely dissolve if one drops the assertion that all propositions have objective prior probabilities.

To conclude, I think I have shown in this post that the probabilistic defense of the Outsider Test of Faith is unsound and depends on very questionable assumptions.

I have not, however, shown at all that the OTF is flawed, for it might very well be successfully defended on pragmatic grounds. This will be the topic of future conversations.

# Knowledge-dependent frequentist probabilities

This is going to be a (relatively) geeky post which I tried to make understandable for lay people.

Given the important role that epistemological assumptions play in debates between theists and atheists, I deemed it necessary to first write a groundwork upon which more interesting discussions (about the existence of God, the historicity of Jesus, miracles, the paranormal…) will rest.

## Bayesianism, Degrees of belief

In other posts I explained why I am skeptical about the Bayesian interpretation of probabilities as degrees of belief. I see no need to adjust the intensity of our belief in string theory (which is a subjective feeling) in order to do good science or to avoid irrationality.

Many Bayesians complain that if we don’t consider subjective probabilities, a great number of fields such as economics, biology, geography or even history would collapse.
This is a strong pragmatic ground for being a Bayesian which I hear over and over again.

## Central limit theorem and frequencies

I don’t think this is warranted, for I believe that the incredible successes brought about by probabilistic calculations concern events which are (in principle) repeatable and therefore open to a frequentist interpretation of the related probabilities.

According to the knowledge-dependent interpretation of frequentism I rely on, the probability of an event is its frequency if the known circumstances were to be repeated an infinite number of times.

Let us consider an ideal dice which is thrown in a perfectly random way. Obviously we can only find approximations of this situation in the real world, but a computer simulation can reasonably do the job.

In the following graphics, I plotted the results for five series of trials.

The frequentist probability of the event is defined as

$latex p(3) = \lim_{n \to \infty} \frac{n_3}{n}$,

that is, the limit of the frequency of “3” (n_3 being the number of trials producing a “3” among n trials) as the number of trials goes to infinity.

This limit is a mathematical abstraction which never exists in the real world, but from the 6000th trial onward the frequency is already a very good approximation of the probability: the law of large numbers guarantees the convergence, and the central limit theorem quantifies how fast it happens.
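The convergence is easy to reproduce. The short simulation below throws an ideal dice with Python’s standard pseudo-random generator and prints the running frequency of “3” at a few checkpoints:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

N_TRIALS = 100_000
count_three = 0
for n in range(1, N_TRIALS + 1):
    if random.randint(1, 6) == 3:  # one throw of an ideal six-sided dice
        count_three += 1
    if n in (100, 1_000, 10_000, 100_000):
        print(n, count_three / n)  # running frequency of "3"

frequency = count_three / N_TRIALS
# By the law of large numbers the frequency approaches p = 1/6 ≈ 0.1667.
```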

Actually my knowledge-dependent frequentist interpretation allows me to consider the probability of unique events which have not yet occurred.

For example, a Bayesian wrote that “the advantage of this view over the frequency interpretation is that it can deal with cases where there is no relative frequency to draw on: for example, Gigerenzer mentions the first ever heart transplant patient who was given a 70% chance of survival by the surgeon. Under the frequency interpretation that statement made no sense, because there had never actually been any similar operations by then.“

I think there are many confusions going on here.
Let us call K the total knowledge of the physician which might include the different bodily features of the patient, the state of his organs and the hazard of the novel procedure.

The frequentist probability would be defined as the number of surviving patients divided by the total number of patients undergoing the operation, if the known circumstances underlying K were to be repeated a very great (actually infinite) number of times. Granted, for many people this does not seem as intuitive as the previous example with the dice.
And it is obvious there existed for the physician no frequency he could have used to directly approximate the probability.
Nevertheless, this frequentist interpretation is by no means absurd.

The physician could very well have used Bayes’ theorem to approximate the probability while using only other frequentist probabilities, such as the probability that a certain bodily reaction would be followed by death, or the probability that introducing a device into some organs could have lethal consequences.
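Here is a sketch of what such a computation could look like. The conditional probabilities below are invented for illustration and are not actual surgical statistics:

```python
# Hypothetical decomposition of the survival probability of the first
# heart-transplant patient into other frequentist probabilities.
# All numbers are invented for illustration.

p_rejection = 0.2                 # frequency of acute rejection in comparable cases
p_survive_given_rejection = 0.25
p_survive_given_no_rejection = 0.8125

# Law of total probability over the two scenarios:
p_survive = (p_survive_given_rejection * p_rejection
             + p_survive_given_no_rejection * (1 - p_rejection))
print(p_survive)  # ≈ 0.7, i.e. the surgeon's "70% chance of survival"
```

Each ingredient is a frequency over repeatable circumstances, so the single-case estimate inherits a frequentist meaning.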

Another example is the estimation of the probability that it is going to rain tomorrow morning when you wake up.

While the situation you are confronted with might very well be unique in the whole history of mankind, the probability is well defined by the frequency of rain if all the circumstances you know of were to be repeated an extremely high number of times.

Given this extended, knowledge-dependent variant of frequentism, the probabilities of single events are meaningful, and many fields considered as Bayesian (such as economic simulations, history or evolutionary biology) could just as well be interpreted according to this version of frequentism.

It has a great advantage: it allows us to bypass completely subjective degrees of belief and to focus on an objective concept of probability.

Now, some Bayesians could come up and tell me that it is possible that the frequentist probabilities of the survival of the first heart transplant patient or of the weather do not exist: in other words, if the known circumstances were to be repeated an infinite number of times, the frequency would keep oscillating instead of converging to a fixed value (such as 1/6 for the dice).

This is a fair objection, but such a situation would not only show that the frequentist probability does not exist but that the Bayesian interpretation is meaningless as well.

It seems utterly nonsensical to my mind to say that every rational agent ought to have a degree of belief of (say) 0.45 or 0.87 if the frequency of the event (given all known circumstances) were to keep fluctuating between 0.01 and 0.99.
For in this case the event is completely unpredictable and it seems entirely misguided to associate a probability to it.

Another related problem is that in such a situation a degree of belief could be nothing more than a pure mind state with no relation to the objective world whatsoever.

As professor Jon Williamson wrote:
“Since Bayesian methods for estimating physical probabilities depend on a given prior probability function, and it is precisely the prior that is in question here, this leaves classical (frequentist) estimation methods—in particular confidence interval estimation methods—as the natural candidate for determining physical probabilities. Hence the Bayesian needs the frequentist for calibration.”

But if this frequentist probability does not exist, the Bayesian has absolutely no way to relate his degree of  belief to reality since no prior can be defined and evaluated.

Fortunately, the incredible success of the mathematical treatment of uncertain phenomena (in biology, evolution, geology, history, economics and politics, to name only a few) shows that we are justified in believing in the meaningfulness of the probability of the underlying events, even if they might be quite unique.

In this way, I believe that many examples Bayesians use to argue for the indispensability of their subjectivist probabilistic concept ultimately fail because the same cases could have been handled using the frequentist concept I have outlined here.

However, this still leaves out an important aspect: what are we to do about theories such as universal gravitation, string theory or the existence of a multiverse?
It is obvious no frequentist interpretation of their truth can be given.
Does that mean that without Bayesianism we would have no way to evaluate the relative merits of such competing models in these situations?
Fortunately no, but this will be the topic of a future post.
At the moment I would hate to kill the suspense 🙂

# A mathematical proof of Bayesianism?

This is going to be another boring post (at least for most people who are not nerds).

However, before approaching interesting questions such as the existence of God, morality and history, a sound epistemology (theory of knowledge) must already be in place. During most (heated) debates between theists and atheists, people tend to take for granted many epistemological principles which are very questionable.

This is why I spend a certain amount of my time exploring such questions, as a groundwork for more applied discussions.

I highly recommend that all my readers first read my two other posts on the concept of probability before reading what follows.

Bayesianism is a theory of knowledge according to which our degrees of belief in theories are well defined probabilities taking on values between 0 and 1.

According to this view, saying that string theory has a probability of 0.2 to be true is as meaningful as saying that a normal dice randomly thrown has a probability of 1/6 to produce a “3”.

Bayesians like asserting over and over again that it is mathematically proven that we ought to compute the likelihood of all our beliefs according to the laws of probability, first and foremost Bayes’ formula:

$latex p(A|B) = \frac{p(B|A)\,p(A)}{p(B)}$

Here I want to debunk this popular assertion. Bayes’ theorem can be mathematically proven for frequentist probabilities, but there is no such proof that ALL our degrees of belief behave that way.

Let us consider (as an example) the American population (360 million people) and two features a person might have.

CE (Conservative Evangelical): the individual believes that the Bible contains no error.

FH (Fag Hating): the individual passionately hates gay people.

Let us suppose that 30% of Americans are CE and that 5.8% of Americans hate homosexuals.

The frequencies are f(CE) = 0.30 and f(FH) = 0.058

Let us now consider a random event: you meet an American by chance.
What is the probability that you meet a CE person and what is the probability that you meet a FH individual?
According to a frequentist interpretation, the probability equals the frequency of meeting such kinds of persons given a very great (actually infinite) number of encounters.
From this it naturally follows that p(CE) = f(CE) = 0.30 and p(FH) = f(FH) = 0.058
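This frequentist reading of p(CE) = 0.30 can be illustrated with a quick simulation (a toy sketch; the 30% figure is just the value assumed above):

```python
import random

random.seed(42)

# Simulate a large number of random encounters, each with a 30% chance
# of being a CE person, and watch the relative frequency of CE
# encounters approach the single-encounter probability 0.30.
trials = 100_000
ce_count = sum(random.random() < 0.30 for _ in range(trials))
print(ce_count / trials)  # close to 0.30
```

The more encounters we simulate, the closer the relative frequency gets to the underlying proportion.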

Let us now introduce the concept of conditional probability: if you meet a Conservative Evangelical, what is the probability that he hates faggots, p(FH|CE)? (The symbol | stands for "given".)

If you meet a fag-hating person, what is the probability that he believes in Biblical inerrancy p(CE|FH)?

To answer these questions (thereby proving Bayes theorem) it is necessary to get back to our consideration of frequencies.

Let us suppose that 10% of all Conservative Evangelicals and 4% of people who are not CE hate faggots: f(FH|CE) = 0.1 and f(FH|⌐CE) = 0.04. The symbol ⌐ stands for the negation (denial) of a proposition.

The proportion of Americans who are both conservative Evangelicals and fag-haters is f(FH∩CE) = f(FH|CE)*f(CE) = 0.1*0.3 = 0.03.

The proportion of Americans who are NOT conservative Evangelicals but fag-haters is f(FH∩⌐CE) = f(FH|⌐CE)*f(⌐CE) = 0.04*0.7 = 0.028.

Logically the frequency of fag-haters in the whole American population is equal to the sum of the two proportions:

f(FH) = f(FH∩CE) + f(FH∩⌐CE) = 0.03 + 0.028 = 0.058
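This computation (splitting the fag-haters into those who are CE and those who are not, then adding the two proportions) can be checked in a few lines of code, using the values assumed in this post:

```python
# The values assumed in this post.
f_FH_given_CE = 0.10     # fag-haters among Conservative Evangelicals
f_FH_given_notCE = 0.04  # fag-haters among the rest
f_CE = 0.30              # Conservative Evangelicals in the population

# Law of total probability: add the two disjoint proportions.
f_FH = f_FH_given_CE * f_CE + f_FH_given_notCE * (1 - f_CE)
print(round(f_FH, 3))  # 0.058
```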

But what if we are interested in knowing the probability that a person is a conservative Evangelical IF that person hates queers, p(CE|FH)?

This corresponds to the frequency (proportion) of Conservative Evangelicals among Fag-Haters: f(CE|FH).

We know that f(FH∩CE) = f(CE∩FH) = f(CE|FH)*f(FH)

Thus f(CE|FH) = f(FH∩CE) / f(FH)

Given a frequentist interpretation of probability, this entails that

p(CE|FH) = p(FH|CE) * p(CE) / p(FH)

which is of course Bayes theorem. We have mathematically proven it in this particular case, but the rigorous mathematical demonstration would be pretty much the same for any events expressible as frequencies.

If you meet an American who hates gays, the probability that he is a Conservative Evangelical is 51.72% (given the validity of my starting values above).
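Plugging the numbers assumed above into Bayes theorem takes only a couple of lines:

```python
# The values assumed in this post.
f_FH_given_CE = 0.10
f_CE = 0.30
f_FH = 0.058

# Bayes theorem: f(CE|FH) = f(FH|CE) * f(CE) / f(FH)
f_CE_given_FH = f_FH_given_CE * f_CE / f_FH
print(round(f_CE_given_FH, 4))  # 0.5172
```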

But let us now consider the Bayesian interpretation of probability (our degree of confidence in a theory) in a context having nothing to do with frequencies.

Let S be “String theory is true“ and UEP “an Undead Elementary Particle has been detected during an experiment at the LHC“.

In that context, the probabilities correspond to our confidence in the truth of theories and hypotheses.

We have no compelling grounds for thinking that

p(S|UEP) = p(UEP|S) * p(S) / p(UEP)

that is to say, no compelling grounds for thinking that this is the way our brains actually work, or that they ought to work that way in order to strive for truth.

The mathematical demonstration used to prove Bayes theorem relies on relative frequencies and cannot be employed in a context where propositions (such as S and UEP) cannot be understood as frequencies.
Treating ALL our degrees of belief as probabilities is a philosophical decision, not an inevitable result of mathematics.

I hope that I have not been too boring for lay people.

Now I have a homework for you: what is the probability that Homeschooling Parents would like to employ my post as an introduction to probability interpretation, given that they live in the Bible Belt, p(HP|BB)?

# On the ontology of the objective Bayesian probability interpretation

Warning: this post is going to analyse mathematical concepts and will most likely cause intense headaches to non-mathematical brains.

At the beginning I wanted to make it understandable for lay people before I realized I am not the right man for such a huge task.

I considered it necessary to write it since Bayesian considerations play a very important role in many scientific and philosophical fields, including metaphysical problems such as the existence of God.

Basically, objective Bayesianism is a theory of knowledge according to which probabilities are degrees of belief (and vice versa) whose values can be objectively identified by every rational agent possessing the same information.

It stands in opposition to frequentism, which stipulates that the probability of an event is identical with its relative frequency over a great (ideally infinite) number of trials.

I illustrated how this plays out in a previous post.

The name of the philosophy stems from Bayes theorem, which stipulates that

P(A|B) = P(B|A) * P(A) / P(B)

where P(A|B) is the probability of an event A given an event B, P(B|A) the probability of the event B given the event A, and P(A) and P(B) the total probabilities of the events A and B, respectively.

At that point, it is important to realize that the Bayesian identification of these probabilities with degrees of belief in the hypotheses A and B is a philosophical decision and not a mathematical result, as many Bayesians seem to believe.

Bayes theorem is used to update the probability of the theory A as new data (the truth of B) come in. Unless one believes in an infinite regress, there are going to be basic probabilities, called priors, which cannot themselves be deduced from earlier probabilities or likelihoods.
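To make the updating step concrete, here is a minimal sketch; the hypothesis and all the numbers are purely illustrative and not taken from any real case:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: return p(H|E) from the prior p(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers only: a prior of 0.2 for some hypothesis H,
# updated twice on evidence that is three times as likely if H is true.
# Each posterior becomes the prior for the next piece of evidence.
p = 0.2
for _ in range(2):
    p = update(p, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(round(p, 3))  # 0.692
```

Note that the whole chain still hangs on the initial 0.2, which is exactly the prior the machinery cannot supply by itself.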

Here I want to go into two closely related problems of Bayesian epistemology, namely those of the ontological nature of these probabilities and the values one objectively assigns to them.

Let us suppose that I toss a coin. My degree of belief (1/2) that it will land on heads is a subjective brain state which may (or should) be related to a frequency of action if betting money is involved.

But let us now consider the young Isaac Newton who was considering his newly developed theory of universal gravitation. What value should his degree of belief have taken on BEFORE he had begun to consider the first data of the real world?

“Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”

Frequentism provides us with well-defined probabilities in many situations. The probability of a coin coming down heads is identical with the frequency of this event if I were to repeat the toss an infinite number of times, and the law of large numbers guarantees that one gets an increasingly better approximation of this quantity as the number of trials grows.
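This convergence of relative frequencies is easy to illustrate with simulated coin tosses (a toy sketch):

```python
import random

random.seed(0)

# Toss a simulated fair coin many times; the relative frequency of
# heads gets closer and closer to the single-toss probability 0.5.
tosses = [random.random() < 0.5 for _ in range(100_000)]
for n in (100, 10_000, 100_000):
    print(n, sum(tosses[:n]) / n)
```

No analogous repetition is available for "the theory of universal gravitation is true", which is precisely the problem raised below.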

But what does it mean to say that the probability of the theory of universal gravitation is 2%, 5% or 15%?

And once one has come up with a definition one thinks to be valid, what is the objective value for the probability prior to any observation being taken into account?

I could not find any answer in the Bayesian papers I have read so far; these questions are apparently best ignored. But to my mind they are very important if you claim to be building a theory of knowledge based on probabilities.

Next episode: a mathematical proof of Bayesianism?

Thematic list of ALL posts on this blog (regularly updated)

My other blog on Unidentified Aerial Phenomena (UAP)