After half an hour's worth of research, I have found that there are two competing views in scientific methodology, traceable back to two different views on inference. Long story short: if you compute P( Data | Model ), you are a frequentist. If you compute P( Model | Data ), you may proudly label yourself a Bayesian.
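To make the distinction concrete, here is a minimal coin-flip sketch. All the numbers (7 heads in 10 tosses) and the uniform prior are my own illustrative assumptions, not anything canonical:

```python
import math

# Hypothetical data: 7 heads out of 10 tosses of a coin with unknown bias theta.
heads, tosses = 7, 10
tails = tosses - heads

# Frequentist: compute P( Data | Model ), the likelihood of the observed
# data for a fixed parameter theta; it is maximized at the MLE heads/tosses.
def likelihood(theta):
    return math.comb(tosses, heads) * theta**heads * (1 - theta)**tails

mle = heads / tosses
print(f"MLE theta = {mle}, likelihood at MLE = {likelihood(mle):.4f}")

# Bayesian: compute P( Model | Data ). With a Beta(a, b) prior on theta,
# the posterior is again a Beta, its parameters updated by the counts.
a, b = 1, 1                       # uniform prior -- an assumption
a_post, b_post = a + heads, b + tails
post_mean = a_post / (a_post + b_post)
print(f"Posterior: Beta({a_post}, {b_post}), posterior mean = {post_mean:.4f}")
```

The punchline: the frequentist answer is a single point estimate plus a likelihood value, while the Bayesian answer is a whole distribution over theta, whose shape depends on the prior you chose.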

Okay, but seriously: is this an actual scientific issue? Apparently, yes. (Google: 265K hits for “frequentist versus bayesian”.) Most of the links lead to blog posts whose authors argue for their favorite toy model with examples of varying weirdness. Frequentists give obvious toy examples; Bayesians reply by taking those examples and showing that, with advanced probability-fu, one can get better results (if the example behaves well), and then a debate on the philosophical implications of ~~iteratees~~ inference begins…

Don’t get me wrong: I rather like the Bayesian point of view, since computing the probability/likelihood of the model seems more natural for scientific reasoning, which is about testing models, not data. But in the end, you still have to justify the __statistical__ model used in inference (as opposed to: “yo dawg, I heard you like probability distributions, so I took the Beta distribution: