Against Falsifiability


Nearly two years after the publication of Farewell to Reality, the debate about ‘fairy-tale’ physics rages on. Highly speculative and arguably non-scientific papers continue to be published on aspects of superstring theory and the multiverse. Peter Woit recently drew attention on his blog to a Templeton Foundation grant of almost $900,000 to Stanford University theorists Leonard Susskind, Andrei Linde and Stephen Shenker, on the subject of ‘Inflation, the Multiverse and Holography’. I think we can agree that’s a lot of money.

Now, I didn’t expect Farewell to change anything – its purpose was simply to raise awareness of some of the problems with contemporary theoretical physics and to encourage debate. However, I confess to being a little disappointed to see that arguments against ‘fairy-tale’ physics still tend to be based on Austrian philosopher Karl Popper’s criterion of falsifiability, which holds that a theory is not scientific unless it makes predictions that can in principle be falsified.

Many practising theorists and quite a few experimentalists of my acquaintance are very dismissive of the philosophy of science, with one notable Nobel Prize-winner once declaring that:1

‘This is not to deny all value to philosophy, much of which has nothing to do with science. I do not even mean to deny all value to the philosophy of science, which at its best seems to me a pleasing gloss on the history and discoveries of science. But we should not expect it to provide today’s scientists with any useful guidance about how to go about their work or about what they are likely to find.’

I think it’s now time for those who would like to count themselves as opponents of the ‘fairy-tale’ physics tendency to get their arguments straight. And this means embracing some of the things that philosophers have learned in the time since Popper’s The Logic of Scientific Discovery, first published (in German) in 1934. After all, we all know how much notice we would take of an argument about the structure of matter based on the idea of the neutron (discovered in 1932), with no acknowledgement of anything that has been learned since. We would simply dismiss it.

A good place to start is Alan Chalmers’ What Is This Thing Called Science? This is hardly up-to-date (the third edition was published in 1999), but it might at least help us to get past Popper’s criterion once and for all. I’d encourage interested scientists to read chapters 5-7 of this book. Chapter 7 in particular provides a range of arguments against falsificationism:2

‘An embarrassing historical fact for falsificationists is that if their methodology had been strictly adhered to by scientists then those theories generally regarded as being among the best examples of scientific theories would never have been developed because they would have been rejected in their infancy.’

By continuing to argue that contemporary theories should in principle be falsifiable, opponents simply play into the hands of theorists of the ‘fairy-tale’ tendency. These theorists can counter that it’s easy to show that science really doesn’t work this way, so why should they be obliged to develop theories that are falsifiable? As we’ve seen, the door is then opened to all kinds of notions of ‘intra-theoretical progress’ and ‘theory-confirmed theory’ characteristic of a post-empirical science (see my article The Evidence Crisis).

Not convinced? Here’s an example from planetary astronomy. The planet Uranus was discovered by William Herschel in 1781. When Newtonian mechanics was used to predict its orbit, the prediction was found to disagree with the orbit actually observed. What happened? Was this disagreement between theory and observation taken to falsify the basis of the calculations, and hence the entire structure of Newtonian mechanics? No, it wasn’t.

Theories are built out of abstract mathematical concepts, such as point-particles or gravitating bodies treated as though all their mass is concentrated at their centres, and so on. If we think about how Newton’s laws are actually applied to practical situations, such as the calculation of planetary orbits, then we are forced to admit that no application is possible without a whole series of so-called auxiliary assumptions or hypotheses. And, when faced with potentially falsifying data, the tendency of most scientists is not to throw out an entire theoretical structure (especially one that has stood the test of time), but to tinker with the auxiliary assumptions. This is the basis of the Duhem-Quine thesis, named for French physical chemist and part-time philosopher Pierre Duhem and American philosopher Willard Quine.
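To see where these auxiliary assumptions live, here is a minimal sketch – a standard textbook form of the perturbation equation, not the historical calculation itself – of the Newtonian prediction for the acceleration of Uranus in sun-centred coordinates:

\[ \ddot{\mathbf{r}}_U \;=\; -\frac{GM_\odot}{r_U^3}\,\mathbf{r}_U \;+\; \sum_i G m_i \left( \frac{\mathbf{r}_i - \mathbf{r}_U}{\left|\mathbf{r}_i - \mathbf{r}_U\right|^3} \;-\; \frac{\mathbf{r}_i}{r_i^3} \right) \]

The first term is the sun’s inverse-square attraction. The sum collects the perturbations from the other planets (the second term in the brackets appears because the coordinates are centred on a sun that is itself being pulled about). Everything about that sum – which bodies appear in it, their masses, their treatment as point masses – is an auxiliary hypothesis. A mismatch with observation can always be absorbed by adjusting or adding a term, rather than by abandoning the inverse-square law itself.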

And this is what happened in this case. The auxiliary assumption that was challenged was the (unstated) one that the solar system consists of just seven planets. British astronomer John Couch Adams and French mathematician Urbain Le Verrier independently proposed that this assumption be abandoned in favour of an as yet unobserved eighth planet perturbing the orbit of Uranus. In 1846 the German astronomer Johann Galle discovered the new planet, subsequently called Neptune, less than one degree from its predicted position.

This does not necessarily mean that a theory can never be falsified, but it does mean that falsifiability is not a robust criterion for a scientific method. Emboldened by his success, in 1859 Le Verrier challenged the same auxiliary assumption in attempting to solve the problem of the anomalous advance of the perihelion of Mercury. He proposed another as yet unobserved planet – which he called Vulcan – orbiting between the sun and Mercury itself.

No such planet could be found. When confronted by potentially falsifying data, either the theory itself or at least one of the auxiliary assumptions required to apply it must be modified – but the observation or experiment does not tell us which. In fact, in this case it was Newton’s theory of universal gravitation that was at fault: the anomaly was eventually accounted for in 1915 by Einstein’s general theory of relativity.
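To put a number on it – this is the standard result, quoted here for context – general relativity predicts an additional advance of the perihelion per orbit of

\[ \Delta\varpi \;=\; \frac{6\pi G M_\odot}{a\,(1-e^2)\,c^2} \]

where \(a\) is the semi-major axis of the orbit and \(e\) its eccentricity. For Mercury (\(a \approx 5.79 \times 10^{10}\) m, \(e \approx 0.206\)) this works out at about \(5 \times 10^{-7}\) radians per orbit, or roughly 43 seconds of arc per century – in close agreement with the observed residual that Vulcan had been invented to explain.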

So, falsifiability cannot provide a sufficiently robust criterion for defining ‘science’. And yet history shows that some theories have indeed been falsified and that others have been at least temporarily ‘verified’, in the sense that they have passed all the tests that have been thrown at them so far.

In Farewell to Reality I argue that the most important defining criterion is therefore the testability of the theory. Whether we seek to verify it or falsify it, and irrespective of what we actually do with the theory once we know the test results, to qualify as a scientific theory it should in principle be testable against empirical facts.

The demand for ‘testability’ in the sense that I’m using this term should not be interpreted as a demand for an immediate yes-no, right-wrong evaluation. Theories take time to develop properly, and may even be perceived to fail if subjected to tests before their concepts, limitations and rules of application are fully understood. Think of ‘testability’ instead as more of a professional judgement than a simple one-time evaluation.

But then, the ‘fairy-tale’ physicists will argue, how can we tell whether a novel theoretical structure has the potential to yield predictions that can be tested? For sure, it would be a lot easier if this were all black and white. But I honestly don’t think it’s all that complicated. A theory which, despite considerable effort, shows absolutely no promise of progressing towards testability should not be regarded as a scientific theory. A theory that continually fails repeated tests is a failed theory.

The problem with much contemporary theoretical physics of the ‘fairy-tale’ kind is that theorists have built structures with just too much ‘wriggle room’. Few make predictions of any kind, but those that do – even in a very loose sense – are crammed with so many unknown and adjustable parameters, and rest on so many assumptions (visible and hidden), that testing predictions against empirical data is an exercise in futility. The theorists simply change a few parameters or assumptions and shift the prediction out of reach. Such predictions are like liquid mercury: no matter how hard you try, you can never nail them down.

1 Steven Weinberg, Dreams of a Final Theory, Vintage, London, 1993, p. 133.

2 A.F. Chalmers, What Is This Thing Called Science? Third edition, Hackett Publishing Co., Indianapolis, IN, 1999, p. 91.