Warning: this post is going to analyse mathematical concepts and will most likely cause intense headaches to non-mathematical brains.
I initially wanted to make it understandable for lay people, before realizing I was not the right man for such a huge task.
I considered it necessary to write because Bayesian considerations play a very important role in many scientific and philosophical fields, including metaphysical problems such as the existence of God.
Basically, objective Bayesianism is a theory of knowledge according to which probabilities are degrees of belief (and vice-versa) whose values can be objectively identified by every rational agent possessing the same information.
It stands in opposition to frequentism, which holds that the probability of an event is identical to its frequency over a great (nearly infinite) number of trials.
I illustrated how this plays out in a previous post.
The name of the philosophy stems from Bayes' theorem, which states that

P(A|B) = P(B|A) · P(A) / P(B)

where P(A|B) is the probability of an event A given an event B, P(B|A) the probability of the event B given the event A, and P(A) and P(B) the total probabilities of the events A and B, respectively.
At that point, it is important to realize that the Bayesian identification of these probabilities with degrees of belief in the hypotheses A and B is a philosophical decision and not a mathematical result, as many Bayesians seem to believe.
Bayes' theorem is used to update the probability of the theory A as new data (the truth of B) come in. Unless one believes in an infinite regress, there are going to be basic probabilities called priors which cannot themselves be deduced from earlier probabilities or likelihoods.
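To make this updating process concrete, here is a minimal sketch (my own hypothetical example, not from the post): we start from a prior of 1/2 that a coin is fair, where the rival hypothesis is a trick coin that always lands heads, and apply Bayes' theorem once per observed head.

```python
# Hypothetical illustration of Bayesian updating.
# Hypothesis A: the coin is fair (P(heads|A) = 0.5).
# Alternative not-A: a trick coin that always lands heads (P(heads|not-A) = 1.0).

def bayes_update(prior_A, p_B_given_A, p_B_given_notA):
    """Return P(A|B) via Bayes' theorem: P(A|B) = P(B|A)P(A)/P(B)."""
    # Total probability of the evidence B, summed over both hypotheses.
    p_B = p_B_given_A * prior_A + p_B_given_notA * (1 - prior_A)
    return p_B_given_A * prior_A / p_B

posterior = 0.5                 # prior degree of belief that the coin is fair
for toss in range(5):           # observe heads five times in a row
    posterior = bayes_update(posterior, 0.5, 1.0)
    print(f"after head {toss + 1}: P(fair) = {posterior:.4f}")
```

Each observed head lowers the belief that the coin is fair (to 1/3, 1/5, 1/9, 1/17, 1/33, ...). Note that the whole computation only gets off the ground once the prior of 1/2 has been chosen, which is precisely the point at issue below.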
Here I want to go into two closely related problems of Bayesian epistemology, namely those of the ontological nature of these probabilities and the values one objectively assigns to them.
Let us suppose I toss a coin into the air. My degree of belief (1/2) that it will land on heads is a subjective brain state, which may (or should) be related to a frequency of action if betting money is involved.
But let us now consider the young Isaac Newton pondering his newly developed theory of universal gravitation. What value should his degree of belief have taken BEFORE he had begun to consider the first data from the real world?
The great philosopher of science Elliott Sober wrote this about this particular situation:
“Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”
Frequentism provides us with well-defined probabilities in many situations. The probability of a coin coming down heads is identical to the frequency of this event if I were to repeat the toss an infinite number of times, and the law of large numbers guarantees that one gets an increasingly better approximation of this quantity as the number of trials grows.
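The frequentist picture can be sketched in a few lines of simulation (a toy illustration of the convergence just described, with an arbitrary seed): the relative frequency of heads settles down toward 1/2 as the number of tosses grows.

```python
# Toy illustration of the law of large numbers for a fair coin:
# the observed frequency of heads approaches the probability 0.5.
import random

random.seed(0)  # arbitrary seed, for reproducibility
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses: frequency of heads = {heads / n:.4f}")
```

Nothing analogous is available for a scientific theory: there is no ensemble of repeated trials over which the "frequency" of universal gravitation being true could be counted.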
But what does it mean to say that the probability of the theory of universal gravitation is 2%, 5% or 15%?
And once one has come up with a definition one thinks to be valid, what is the objective value for the probability prior to any observation being taken into account?
I could not find any answer in the Bayesian papers I have read so far; these questions are apparently best ignored. But to my mind they are very important if you claim to be building a theory of knowledge based on probabilities.