A mathematical proof of Ockham’s razor?

Ockham’s razor is a principle often used to dismiss out of hand alleged phenomena deemed too complex. In the philosophy of religion, it is often invoked to argue that God’s existence is extremely unlikely to begin with, owing to his alleged incredible complexity. A geeky brain is desperately required before entering this sinister realm.

In an earlier post I dealt with some of the most popular justifications for the razor and made the following distinction:

Methodological Razor: if theory A and theory B do the same job of describing all known facts C, it is preferable to use the simplest theory for the next investigations.

Epistemological Razor: if theory A and theory B do the same job of describing all known facts C, the simplest theory is ALWAYS more likely.

As last time, I won’t address the validity of the Methodological Razor (MR), which might be a useful tool in many situations.

My attention will be focused on the epistemological blade and its alleged mathematical grounding.

Example: prior probabilities of models having discrete variables

To illustrate how this is supposed to work, I constructed the following example. Let us consider the result Y of a random experiment depending on a measured random variable X. We are searching for a good model (i.e. a function f(X)) such that the distance d = Y - f(X) is minimized with respect to the constant parameters appearing in f. Let us consider the following functions: f1(X, a1), f2(X, a1, a2), f3(X, a1, a2, a3) and f4(X, a1, a2, a3, a4), which are the only possible models aiming at representing the relation between Y and X. Let n1 = 1, n2 = 2, n3 = 3 and n4 = 4 be their numbers of parameters. In what follows, I will neutrally describe how objective Bayesians justify Ockham’s razor in that situation.

The objective Bayesian reasoning

Objective Bayesians apply the principle of indifference, according to which in utterly unknown situations every rational agent assigns the same probability to each possibility.

Let pi_{total} = p(fi) be the probability that the function fi is the correct description of reality. It follows from the above assumption that p1_{total} = p2_{total} = p3_{total} = p4_{total} = p = \frac{1}{4}, owing to the additivity of probabilities.

Let us consider that each constant coefficient ai can only take on five discrete values: 1, 2, 3, 4 and 5. Let us call p1, p2, p3 and p4 the probabilities that one of the four models is right with very specific values of its coefficients (a1, a2, a3, a4). By applying once again the principle of indifference, one gets:

p1(1) = p1(2) = p1(3) = p1(4) = p1(5) = \frac{1}{5} p1_{total} = 5^{-n1}p

In the case of the second function, which depends on two parameters, there are 5*5 possible doublets of values: (1,1), (1,2), ..., (3,4), ..., (5,5). From indifference, it follows that

p2(1,1) = p2(1,2) = ... = p2(3,4) = ... = p2(5,5) = \frac{1}{25} p2_{total} = 5^{-n2}p

There are 5*5*5 possible values for f3.

Indifference entails that

p3(1,1,1) = p3(1,1,2) = ... = p3(3,2,4) = ... = p3(5,5,5) = \frac{1}{125} p3_{total} = 5^{-n3}p

f4 is characterized by four parameters, so that a similar procedure leads to

p4(1,1,1,1) = p4(1,1,1,2) = ... = p4(3,2,1,4) = ... = p4(5,5,5,5) = \frac{1}{625} p4_{total} = 5^{-n4}p

Let us now consider four wannabe solutions to the parameter identification problem: S1 = {a1}, S2 = {b1, b2}, S3 = {c1, c2, c3} and S4 = {d1, d2, d3, d4}, each member being an integer between 1 and 5. The prior probabilities of these solutions are equal to the quantities we have just calculated above. Thus

p(S1) = 5^{-n1}p, p(S2) = 5^{-n2}p, p(S3) = 5^{-n3}p, p(S4) = 5^{-n4}p

From this, it follows that

O(i,j) = \frac{p(Si)}{p(Sj)} = 5^{nj - ni}

If one compares the first and the second model, O(1,2) = 5^{2-1} = 5, which means that the fit with the first model is (a priori) 5 times as likely as that with the second one.

Likewise, O(1,3) = 25 and O(1,4) = 125, showing that the first model is (a priori) 25 and 125 times more likely than the third and fourth model, respectively. If the four models fit the data with the same quality (in that, for example, fi(X, ai) is perfectly identical to Y), Bayes’ theorem will preserve these ratios in the computation of the posterior probabilities.

In other words, all things being equal, the simplest model f1(X,a1) is five times more likely than f2(X,a1,a2), 25 times more likely than f3(X,a1,a2,a3) and 125 times more likely than f4(X,a1,a2,a3,a4) because the others contain a greater number of parameters.
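The bookkeeping behind these numbers can be reproduced in a few lines of Python. This is only a sketch of the objective-Bayesian reasoning as described above (four models, five allowed values per parameter); the names are mine:

```python
from fractions import Fraction

# Four candidate models, distinguished only by their number of parameters.
n = {"f1": 1, "f2": 2, "f3": 3, "f4": 4}

# Principle of indifference across models: each model gets prior p = 1/4.
p_model = Fraction(1, 4)

# Indifference across the 5**n_i parameter combinations of model i:
# the prior of any single fully specified solution S_i is 5**(-n_i) * p.
prior = {name: p_model * Fraction(1, 5**ni) for name, ni in n.items()}

# Ockham factor O(i, j) = p(S_i) / p(S_j) = 5**(n_j - n_i).
def ockham_factor(i, j):
    return prior[i] / prior[j]

print(ockham_factor("f1", "f2"))  # 5
print(ockham_factor("f1", "f3"))  # 25
print(ockham_factor("f1", "f4"))  # 125
```

Since equally good fits leave these ratios untouched by Bayes’ theorem, the same factors carry over to the posteriors.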

For this reason O(i,j) is usually referred to as an Ockham factor, because it penalizes the likelihood of complex models. If you are interested in the case of models with continuous real parameters, you can take a look at this publication. The sticking point of the whole demonstration is its heavy reliance on the principle of indifference.

The trouble with the principle of indifference

I already argued against the principle of indifference in an older post. Here I will repeat and reformulate my criticism.

Turning ignorance into knowledge

The principle of indifference is not only unproven but also often leads to absurd consequences. Let us suppose that I want to know the probability of certain coins landing odd. After having carried out 10000 trials, I find that the relative frequency tends to converge towards a given value, which was 0.35, 0.43, 0.72 and 0.93 for the last four coins I investigated. Let us now suppose that I find a new coin I’ll never have the opportunity to test more than once. According to the principle of indifference, before having even started the trial, I should think something like this:

Since I know absolutely nothing about this coin, I know (or consider it extremely plausible) that it is as likely to land odd as even.

I think this is magical thinking in its purest form. I am not alone in that assessment.

The great philosopher of science Wesley Salmon (who was himself a Bayesian) wrote the following: “Knowledge of probabilities is concrete knowledge about occurrences; otherwise it is useless for prediction and action. According to the principle of indifference, this kind of knowledge can result immediately from our ignorance of reasons to regard one occurrence as more probable than another. This is epistemological magic. Of course, there are ways of transforming ignorance into knowledge – by further investigation and the accumulation of more information. It is the same with all ‘magic’: to get the rabbit out of the hat you first have to put him in. The principle of indifference tries to perform ‘real magic’.”

Objective Bayesians often use the following syllogism for grounding the principle of indifference.

1) If we have no reason for favoring one of the outcomes, we should assign the same probability to each of them.

2) In an utterly unknown situation, we have no reason for favoring one of the outcomes

3) Thus all of them have the same probability.

The problem is that (in a situation of utter ignorance) we have not only no reason for favoring one of the outcomes, but also no grounds for thinking that they are equally probable.

The necessary condition in proposition 1) is obviously not sufficient.

This absurdity (and other paradoxes) led philosopher of science John Norton to conclude:

“The epistemic state of complete ignorance is not a probability distribution.”

The Dempster-Shafer theory of evidence offers us an elegant way to express indifference while avoiding absurdities and self-contradictions. According to it, a conviction is not represented by a probability (a real value between 0 and 1) but by an uncertainty interval [ belief(h) ; 1 – belief(non h) ], where belief(h) and belief(non h) are the degrees of trust one has in the hypothesis h and in its negation.

For an unknown coin, indifference according to this epistemology would entail  belief(odd) = belief(even) = 0, leading to the probability interval [0 ; 1].
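This representation is easy to sketch in code. The helper below is a minimal illustration of the interval just described (the function name and the numbers in the second example are my own invention, not part of any Dempster-Shafer library):

```python
def uncertainty_interval(belief_h, belief_not_h):
    """Dempster-Shafer style interval [belief(h), 1 - belief(not h)].

    The lower bound is the support committed to h; the upper bound is
    the plausibility of h (everything not committed against it).
    """
    assert 0 <= belief_h and 0 <= belief_not_h
    assert belief_h + belief_not_h <= 1
    return (belief_h, 1 - belief_not_h)

# Complete ignorance about an unknown coin: no support for either side,
# so indifference yields the maximally non-committal interval.
print(uncertainty_interval(0, 0))      # (0, 1)

# After gathering evidence, e.g. belief(odd) = 0.6, belief(even) = 0.3:
print(uncertainty_interval(0.6, 0.3))  # ≈ (0.6, 0.7)
```

The point is that ignorance is expressed as the whole interval [0 ; 1], not as the sharp value 0.5 that the principle of indifference would impose.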

Non-existing prior probabilities

Philosophically speaking, it is controversial to speak of the probability of a theory before any observation has been taken into account. The great philosopher of evolutionary biology Elliott Sober has a nice way to put it: “Newton’s universal law of gravitation, when suitably supplemented with plausible background assumptions, can be said to confer probabilities on observations. But what does it mean to say that the law has a probability in the light of those observations? More puzzling still is the idea that it has a probability before any observations are taken into account. If God chose the laws of nature by drawing slips of paper from an urn, it would make sense to say that Newton’s law has an objective prior. But no one believes this process model, and nothing similar seems remotely plausible.”

It is hard to see how prior probabilities of theories can be something more than just subjective brain states.


The alleged mathematical demonstration of Ockham’s razor rests on extremely shaky ground because:

1) it relies on the principle of indifference, which is not only unproven but also leads to absurd and unreliable results

2) it assumes that a model already has a probability before any observation.

Philosophically this is very questionable. Now if you are aware of other justifications for Ockham’s razor, I would be very glad if you were to mention them.

Do extraordinary claims demand extraordinary evidence?

German version: Erfordern außergewöhnliche Behauptungen außergewöhnliche Beweise?


Answering such a question proves much more difficult than many people like to think.

The famous British skeptic of parapsychology Richard Wiseman was once asked why he rejected extrasensory perception (ESP) and specifically remote viewing. His answer was very revealing:

“I agree that by the standards of any other area of science that remote viewing is proven, but begs the question: do we need higher standards of evidence when we study the paranormal? I think we do.

“If I said that there is a red car outside my house, you would probably believe me.

“But if I said that a UFO had just landed, you’d probably want a lot more evidence.

“Because remote viewing is such an outlandish claim that will revolutionize the world, we need overwhelming evidence before we draw any conclusions. Right now we don’t have that evidence.”

Such an approach to anomalous phenomena is often backed up by the legendary Bayes’ theorem, according to which one updates the probability of a theory’s truth by incorporating the information conveyed by new facts.
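To make the skeptic’s reasoning concrete, here is a small Bayesian sketch. All the numbers (witness reliability, priors) are invented for illustration; the point is only that the same testimony which settles a mundane claim barely moves an extraordinary one:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) for hypothesis H and evidence E."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# A witness who is right 95% of the time reports each claim.
reliability, error = 0.95, 0.05

# Mundane claim ("a red car is parked outside"): a generous prior of 0.5.
print(posterior(0.5, reliability, error))   # ≈ 0.95

# Extraordinary claim ("a UFO just landed"): a tiny prior of one in a million.
print(posterior(1e-6, reliability, error))  # ≈ 1.9e-5
```

On this picture the extraordinary claim stays wildly improbable after the very same testimony, which is exactly what Wiseman’s demand for “overwhelming evidence” amounts to.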

I’m going to reserve a critical examination of the related philosophy, Bayesianism, for future conversations.

In the second book (in chronological order) of the Narnia series, “The Lion, the Witch and the Wardrobe”, the famous writer C.S. Lewis completely rejected this method. The young Lucy came into Narnia, a parallel world, after having hidden in a wardrobe. Back in the house, she ran to her siblings, who utterly denied the reality of her experience.

Worried that their little sister kept holding fast to the truth of her incredible story, they sought out Professor Kirke, who rebuked them for not trusting Lucy. After they retorted that her claim was extraordinary, he replied:

“Logic!” said the Professor half to himself. “Why don’t they teach logic at these schools? There are only three possibilities. Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn’t tell lies and it is obvious that she is not mad. For the moment then and unless any further evidence turns up, we must assume that she is telling the truth.”

That is to say, for the old wise professor, normal evidence was sufficient for vindicating the wild claim of the little girl.

At this point, I am kind of confused about both principles.

On the one hand, it is clear one should always take our background knowledge into account before evaluating a new hypothesis or theory.

On the other hand, if a set of facts is sufficient to prove an ordinary claim, I don’t see why a similar set of facts should fail to prove an extraordinary conclusion.

Let us now look at some concrete examples of well-known phenomena which were rejected in the past due to their alleged extraordinariness. Saying in hindsight that they weren’t extraordinary after all would be all too easy, for this was the way they were perceived by the scientists of the time.

The existence of meteorites was once thought to be an outlandish claim and the normal evidence was explained away in terms of purely terrestrial phenomena or witness hallucinations.

In 1923 the German scientist Alfred Wegener found normal evidence for continental drift, but since he failed to present a workable mechanism, his theory was ignored and even ridiculed for decades.

The same could be said about ball lightning, which was often dismissed as stemming from illusions or hallucinations experienced by the witnesses.


Nowadays a similar phenomenon can be observed for the small proportion of flying objects which are truly unidentified.

If extraordinary claims demand extraordinary evidence, then UFOs (in the present) do not exist, and continental drift, meteorites and ball lightning did not (in the past).

But if one demands only normal evidence, a strong case can be made that some UFOs (according to the original definition as “unidentified”) really exist. I am going to explain this in future posts.

We will also explore together the possibility that there really exists normal evidence for the resurrection of Jesus of Nazareth.



Thematic list of ALL posts on this blog (regularly updated)

My other blog on Unidentified Aerial Phenomena (UAP)





Does the progress of science vindicate naturalism?

German version: Weisen die Fortschritte der Naturwissenschaft auf die Wahrheit des Naturalismus hin?



At the Secular Outpost on Patheos, the insightful atheist and naturalist philosopher Jeffery Jay Lowder wrote an interesting post criticizing theistic explanations.

One sentence at the end of the text caught my attention:

“At this point, the naturalist can hardly be blamed for comparing the track record of naturalistic explanations to that of theistic explanations and sticking with naturalistic explanations.”
The problem with that comparison is that it amounts to a kind of black-and-white thinking.

He opposes naturalism (A) against everything incompatible with naturalism (B) and states that if on average science is much more consistent with A than with B, then A must be true.

But this is a false dichotomy. Let us consider different theoretical supernatural models, whose existence as ideas is independent of the first time they came up in a human mind.

B1: Spiritism: everything we see around us is caused by invisible forces

B2: Intervention Theism: there are some automatic processes but God has to intervene all the time to fix things

B3: Lazy Theism: many things work automatically according to the laws created by God but he has to intervene for important things like the creation of new species

B4: Evolutionary Theism: God created the laws of nature in such a way that he can work in the universe without violating them.

B5: Deism: God just created our universe and doesn’t care about it anymore; he has been an absent landlord from the very beginning.

B6: Panentheism (Philip Clayton): there are strongly emergent properties and phenomena which cannot be reduced to the sum of their parts. God is the greatest being this strong emergence can possibly produce.

If one adopts an epistemological version of Occam’s razor (which I don’t, as I explain here), it is clear that B1 and B2 have been constantly pushed back as science progressed, and since Darwin’s time the same thing has been occurring for B3, despite all the efforts of ID creationists to show the contrary.

Now, many atheists (though not necessarily Jeff himself) reason like this: on average, B has been constantly shoved away by the advances of science, which is completely compatible with A; so since B4 and B5 belong to B, they must also be much less likely to be true than A.

But that’s a clear example of a fallacious reasoning.

If you want to show that B5 is much less likely than A, you have to DIRECTLY compare them.

And the extraordinary success of science in finding natural and logical explanations would have been a prediction of B5 (and even B4) three thousand years ago.

Therefore you cannot use the success of natural explanations to favor A over B4, B5, B6 because the three models predicted the same things.

All you can do is appeal to the epistemological razor of Occam: when two theories explain the same data equally well, the simplest one is always the most likely one.

But as I’ve explained, nobody has been able to prove this without begging the question and smuggling assumptions about the actual simplicity of the cosmos into the argumentation.







On the burden of proof of the atheist

German version: von der Beweislast des Atheisten


Paul Copan wrote a great article several years ago showing that both theists and atheists have a burden of proof regarding the truth of their claims:


I’ve given additional reasons to think so on my blog under the category “Parsimony”: https://lotharlorraine.wordpress.com/category/parsimony/

If you’re debating an atheist friend, don’t forget that aspect.









Does the absence of evidence mean evidence of absence?

German version: Ist das Fehlen von Beweisen der Beweis des Fehlens?

Let us consider the problem of the existence of God.

There are basically three possibilities which might be nuanced by probabilities:

  1. I know God exists (Theism)
  2. I know God does not exist (Atheism)
  3. I don’t know if God exists or not (Agnosticism)

For many people today, if we have neither evidence for nor against God’s existence, we should not only reject 1), but also 3) and be atheists.

Quite a few folks would justify that by saying that the absence of evidence is evidence of absence (a principle which will be referred to as PA).

Flying spaghetti monsters and invisible pink unicorns

They often illustrate this by quoting the infamous invisible pink unicorn (which might be lying on the ground beside you!)


Although it is very seldom well articulated, the reasoning seems to go as follows:

  1. it is certain that the pink invisible unicorn doesn’t exist
  2. if it is certain, it has to have a justification
  3. PA is the only possible justification
  4. therefore PA must be true.

This is the only way I can make sense of the manner Skeptics use such kinds of prowling monsters in public debates.

The first thing which strikes me is that it is completely absurd and hopelessly circular.

We don’t know if PA is true and want to prove it. Now we want to base our proof of PA on our certainty that there is no pink invisible unicorn. But we can only know there is no such beast if PA is true!

But PA faces a far more serious problem: in many situations it leads to quite absurd results…

Let us suppose, for example, that I’ve invented a time-travel machine and fly with it to ancient Greece.

There I meet an Epicurean philosopher who fervently believes in PA. During the course of our discussion, I explain to him in great detail what a kangaroo looks like.


Amused, he glances at me and tells me: “since I have no evidence such a creature exists, I can be almost certain it is not real.“

Would he be justified to hold this belief?

Back to the present: I have no evidence that there is a bear-like intelligent being scratching his head at the boundary of the Milky Way. Can I conclude there is no such being?

The absence of evidence is only evidence of absence if one would expect such evidence to be out there.
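This condition has a precise Bayesian form: failing to observe E lowers P(H) only to the extent that H made E likely in the first place. The sketch below illustrates this with invented numbers (the function name is mine):

```python
def update_on_no_evidence(prior, p_e_given_h, p_e_given_not_h):
    """P(H | not E): how failing to observe evidence E changes belief in H."""
    num = prior * (1 - p_e_given_h)
    return num / (num + (1 - prior) * (1 - p_e_given_not_h))

prior = 0.5

# If H strongly predicts E (P(E|H) = 0.99), not seeing E is damning.
print(update_on_no_evidence(prior, 0.99, 0.10))  # ≈ 0.011

# If H barely predicts E (P(E|H) = 0.02), not seeing E changes little.
print(update_on_no_evidence(prior, 0.02, 0.01))  # ≈ 0.497
```

The Epicurean philosopher’s mistake is of the second kind: the hypothesis “kangaroos exist somewhere on Earth” gave him almost no reason to expect kangaroo sightings in Greece, so his lack of sightings barely counts against it.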

But once we’ve rejected PA, what are we to do with our best invisible friend and her single pink horn?


The ground for our disbelief shouldn’t be PA, but the self-contradictory nature of the proposition.

I’m completely open to the existence of a pink unicorn somewhere in the multiverse, or of a creature invisible for our eyes, but not of a being having both features.


Ockham’s razor, the Origin of the Universe and the Search for an Airtight Argument

I’m really grateful to Jonathan Pearce for the time he took to read my essay and criticize it.

From his writing, it wasn’t clear, however, if he was defending the Methodological Razor (MR) or the Epistemological Razor (ER), as I’ve defined the terms in my initial post.


Instead he chose to use the general term (OR).

At the end of his response, he wrote

This methodological approach is all that is necessary

If so, there would be no issue here, since I’ve nothing against MR, which cannot be used as an argument showing God’s existence to be unlikely.

Therefore, I’m going to suppose Jon refers to ER henceforth, since this is what is relevant for our disagreement concerning theism, atheism and agnosticism.

First Jonathan is right that I wasn’t careful enough concerning the Kalam Cosmological Argument (KCA):

  1. Everything that begins to exist has a cause

  2. The universe began to exist

  3. Therefore, the universe has a cause

It is obviously true that many cosmologists believing in Loop Quantum Cosmology, some forms of Multiverses, Big Crunch and so on and so forth are going to deny the truth of premise 2).

I wholeheartedly agree that Evangelical apologists like William Lane Craig pick and choose the data they wish, but the same can be said of atheist apologists.

My point was that many honest atheists (like Jeffrey Jay Lowder) who are quite open to the truth of 2) find that 1) is fallacious due to reasons very relevant for the present discussion.

Jonathan mentioned philosopher Kevin T. Kelly, who doesn’t merely prove OR to be mathematically justified, but also emphasizes that OR can ONLY be used as a tool for ensuring convergence towards truth (MR).

If theories A and B account equally well for the same data, and A is simpler than B, we cannot say that it’s more probable that A is true, but JUST that it is better to methodologically ASSUME that A is true to converge towards the TRUE theory X, which might be much more complex than B itself.

Professor Kelly says that assuming that (all other things being equal) the simpler theory is always the most likely “smacks of wishful thinking.”

Jon wrote that:

The problem is, OR seems to be an inductive argument used pragmatically, and so extending it in the way in which LS does to “ALWAYS” be the case appears to be extrapolating an inductive conclusion to a deductive premise.

The problem is that you need an ER to argue against the truth of theism; an MR won’t do the job. And the inductive justification of OR is only valid for a limited domain, as I’m going to explain below.

In fact, ironically, this is a move which the KCA does with both opening premises of its arguments!

Actually, popular versions of the KCA make the very same mistake as proponents of ER by assuming certain premises are true far beyond the domain where they can be empirically, inductively justified.

The main thrust of the argument appears to be that to justify OR one needs an independent, non circular way of doing it.  But this is similar to the critique of reason. One cannot justify reason without appealing to reason. True, but we do all the time, and we use it because reason works. On balance, if we can show OR to be pragmatically useful and successful, then it is at least a good rule of thumb.

This is very revealing and interesting. Jonathan is apparently ready to accept the existence of basic beliefs which are pragmatically justified. He would certainly have no problem accepting the existence of intuitive moral truths. But what about the existence of feelings and thoughts DIFFERENT from the material world they are about?

That said, I’m skeptical that we ought to believe in OR (or even MR) to live consistently; there are other postulated principles which do the job quite well.

In fact, to draw further analogies with the KCA, this is precisely the move done with ex nihilo nihil fit, out of nothing nothing comes. This is something people like Craig claim is one of the most basic philosophical truths or intuitions. Yet it is merely an inductive observation (an erroneous one at that). One must differentiate between such approaches, between intrinsic, analytic conclusions, and inductively derived synthetic truths.

I largely agree with that; as Jeffrey Jay Lowder pointed out, such a belief is only valid within the realm where it has been inductively arrived at. But ER faces the very same problem.

Next, it must be clear I didn’t mention the existence of multiverses as something violating the Epistemological or even Methodological Razor.

I used multiverses to show one cannot prove ER inductively with its absolute claims due to the fact all our observations are limited to our present universe.

Finally, Jon wrote:

This methodological approach is all that is necessary, in my view. I am not sure of the application of the epistemological approach. I think it IS an inductive, scientific tool which is more probably right given past successes and so can be applied to anything concerning the natural sciences and philosophy. It is not necessarily true, I would agree (intuitively, without having studied OR too much).

Like the principle “from nothing, nothing comes”, I see no reason to think OR can be applied to completely unknown situations, like simulated realities.

Furthermore, as other users have pointed out, it’s true I poorly expressed myself. I meant that all things being equal, history has vindicated more complex theories.

Maybe I was wrong, and the examples of continental drift and ball lightning weren’t directly applicable to OR, but to the claim “Extraordinary claims demand extraordinary evidence” as I’m going to explain in a future post.

To put it in a nutshell, if Jonathan thinks he can use ER (or even OR in general) to undermine theism, he ought to prove it is always valid. If he denies this universality, he has to explain what the exceptions are, and why some metaphysical questions could not be such exceptions.

That said, there exist other arguments against theism (like the problem of evil, religious confusion, physical confusion and so on), but they aren’t very popular among non-philosophers.









Deconstructing the Popular Use of Occam’s Razor


Occam’s Razor (OR) seems to lie at the very core of the worldviews of naturalism and materialism. It takes only a little imagination to realize that the pair would completely collapse if the razor were taken away.

Also called the principle of parsimony, it exists in two forms: a methodological form and an epistemological form.

Methodological Razor: if theory A and theory B do the same job of describing all known facts C, it is preferable to use the simplest theory for the next investigations.

Epistemological Razor: if theory A and theory B do the same job of describing all known facts C, the simplest theory is ALWAYS more likely.

Here, I won’t address the validity of the Methodological Razor (MR), which might be a useful tool in many situations.

I am much more interested in evaluating the Epistemological Razor (ER), since it is in this form that it almost always plays an overwhelming role in philosophy, theology and the study of anomalous phenomena.

Nowadays, the most popular argument for atheism looks like this:

  1. It is possible (at least a priori) to explain all facts of the Cosmos as satisfactorily with nature alone as with God

  2. ER: if theory A and theory B do the same job of describing all known facts C, the simplest theory is ALWAYS more likely

  3. God is much more complex than nature

  4. Therefore, nature alone is much more likely to be responsible for reality than God

Of course, since neither God nor nature can explain their own existence, ER stipulates that the existence of nature as a brute fact is much more probable than the existence of God as a brute fact.

ER is employed in a huge variety of contexts by proponents of diverse worldviews. This is the main reason why most scientists believe that UFOs cannot be something otherworldly.

Despite the voluminous literature related to ER, it comes as a surprise that only a few publications deal with its justification. And contrary to the expectations of its most enthusiastic proponents, such a demonstration proves a formidable task due to its universal claim to always hold true.

In this entry, I’ll show why I’m under the impression that nobody has been able to prove ER without begging the question in one way or the other.

One common way to argue is by using a reductio ad absurdum.

Let us consider the following realistic conversation I could have with a UFO denier.

Skeptical Manitoo: „I was really shocked to learn you believe all this nonsense about flying saucers!“

Lothar’s son: „Actually, this isn’t quite true. I do believe most of them can be traced back to natural or human causes. I’m just undecided about a small minority of them. I consider it possible that something otherworldly might be going on…“

Skeptical Manitoo:„What??? How dare you utter such lunacy before having drunk your third beer? The UFO hypothesis is the most complex one, therefore it is also the most unlikely one!“

Lothar’s son: „And how the hell do you know that, all other things being equal, simpler explanations are always more probable?“

Skeptical Manitoo: „And how do you know otherwise that the traces on the field stem from some wild living things rather than from elves?“ he replied bitterly.

At this point, the skeptic expects me to recognize that this is indeed silly, AND that the only way to avoid this madness is to believe ER, so that I’ll end up agreeing with him.

But this is only a pragmatic argument; it has no bearing on the truth of ER whatsoever.

What if I stay stubborn:

Lothar’s son: „I believe your elfic intervention is also within the realm of possibilities, even if it is more complex.“

Skeptical Manitoo: „What? And would you also tolerate the presence of a Flying Spaghetti Monster which caused the rain shower that fell on us earlier?“

Lothar’s son: „Of course!“

Skeptical Manitoo: „What? And do you also believe in a flying Dick Cheney who threw bombs upon the civilian population in Iraq?“

At this level of insanity, I might very well be tempted to nod in order to escape the ordeal.

But it is important to realize that this whole discussion only shows, at best, a pragmatic MR to be valid.

If there is no INDEPENDENT ground for rejecting the crazy situations my imaginative friend has mentioned, anti-realism seems to be true, which means we can never have any kind of knowledge.

To justify the Epistemological Razor, one clearly needs non-circular arguments which might come from pure philosophical considerations or experimental inferences.

A very commonly used one is the alleged inexorable progress of science towards the simplest explanations.

There are many problems with this argument. The history of science is full of examples of complex theories which were wrongly dismissed because of their lack of parsimony, though the future vindicated them in the most triumphant way. Continental drift and the reality of ball lightning are only two examples on a long list.

But let us suppose for the sake of the argument that during OUR ENTIRE history, the simplest theories always proved to be the most likely.

Would this show that ER, as I’ve defined it above, is true? Not at all.

All this would prove is that we live in a universe (or perhaps even ONLY in a region of a universe) where things are as simple as possible.

But modern science seems to indicate that there exists a gigantic (perhaps even infinite) number of parallel universes out there. And as Max Tegmark pointed out, these are not limited to those resulting from chaotic cosmic inflation and string theory, but include quantum universes (Everett’s theory) and perhaps even mathematical universes as well. Simulated universes can certainly be added to this list.

So ultimately the justification of Occam’s razor would look like this:

  1. in our universe, the simplest explanations are always the most likely to be true
  2. if this is true in our universe, it is also probably true in the other 10000000000000000000000000000000000…… universes we know very little of
  3. therefore, in the entire reality, the simplest explanations are always the most likely to be true.

I hope that most of my readers will realize that premise 2) is an extraordinary claim, an extrapolation based on nothing more than wishful thinking.

I know there have been many elegant attempts to ground ER on Bayesian considerations. Like the philosopher Kevin T. Kelly, I believe all of them are hopelessly circular because they smuggle simplicity into their definition of reality.

I’d be glad to learn from my readers if they know ways to justify ER which don’t presuppose the existence of a simple multiverse in the first place.

Finally, I want to point out a further problem one faces when using ER against the existence of God.

The Kalam cosmological argument (named after a tradition of Muslim scholastic theology) tries to establish the existence of a transcendence as follows:

  1. Everything that begins to exist has a cause
  2. The universe began to exist
  3. Therefore, the universe has a cause

Due to the overwhelming experimental and theoretical success of the Big Bang theory, atheist apologists can no longer deny premise 2).

Consequently, they typically deny premise 1), arguing like Jeffrey Jay Lowder that it is not always true.
Lowder agrees it would be absurd to believe that something in our universe could pop into existence, because all our experience allows us to INDUCTIVELY conclude that this never occurs. But he also emphasizes that this inference is only valid for things taking place WITHIN our universe, not outside of it.
Since the grounds for believing in 1) are limited to our experience in this universe, we’ve no warrant to assert it is generally true.

But this is exactly my point about Occam’s razor or the principle of parsimony.

It might (or not) be true it holds in our universe, but this gives us absolutely no justification for believing it can be applied to transcendental realities (or to rule them out).

So, this was admittedly a very long post, and I hope to receive lots of positive and negative feedback!


