Status Anxiety: All ‘Theories’ Are Not the Same


Much of the recent debate about string theory and the scientific method derives from the misuse or misinterpretation of the word ‘theory’. Scientific theorizing follows a logical progression from idea to hypothesis to theory, and I argue that a fully-fledged scientific theory must be grounded in empirical data. In the absence of empirical data, non-empirical arguments may suffice to choose between rival hypotheses, but they cannot ‘confirm’ theories. A lack of clarity on the status of the ‘string hypothesis’ in many popular presentations has created the misleading impression that this is regarded as a valid and accepted scientific theory, threatening to undermine public trust in science and scientists. A ‘Munich Declaration’, developed by participants at the recent conference ‘Why Trust a Theory’, is proposed as a potential way forward.

The conference ‘Why Trust a Theory: Reconsidering Scientific Methodology in Light of Modern Physics’ was held in Munich last December, organised locally by theorist-turned-philosopher Richard Dawid. It was prompted by a strongly-worded article by cosmologist George Ellis and astronomer Joe Silk published in the journal Nature a year before, which called on the scientific community to ‘defend the integrity of physics’; to take up arms in nothing less than a ‘battle for the heart and soul of physics’ and avoid ‘potential damage to public confidence in science’.1

The conference was attended by physicists, philosophers, a few science journalists and interested onlookers. Reports subsequently appeared in a number of popular science periodicals and blogs, and my own piece on the controversy was published in New Scientist in February, co-written with the magazine’s features editor, Dan Cossins (see here).2 Philosopher Massimo Pigliucci maintained a live blog from the conference (see his daily posts here, here and here). Other reflections well worth consulting include those by Sabine Hossenfelder (here), Natalie Wolchover (here), and Peter Woit (here, here and here). Videos of the presentations and panel discussions are now available online (here).

As this event retreats into the mists of recent history, I thought it might be helpful to reflect a little more deeply on what I saw and heard (and made notes on). Readers of Farewell to Reality will know that I was not an unbiased observer.

For my money, this debate is far from over. By accident or (more likely) design, the conference appeared to me to be a fairly well-orchestrated defence of string theory led by Nobel laureate David Gross, one of the architects of the standard model of particle physics. To be fair, there were a number of naysayers invited to participate, some more vocal than others. Some things were admitted but much was denied. There was no attempt to address what I regard to be the fundamental problem: the misrepresentation (or in some cases deliberate mis-selling) to the general public of string theory as something more than it actually is, a speculative series of hypotheses without empirical foundation. There was no discussion at all of the possible fall-out; of the long-term consequences for public trust and confidence in science and scientists.

There was no progress that I could see, and certainly no ‘Munich Declaration’ of the kind that Pigliucci called for in his presentation. If a follow-up conference or publication is planned, I’m not aware of it. There was, however, a concerted effort to deflect attention to something that everyone at the conference could broadly agree on: theories based on the idea of a multiverse raise serious concerns (Gross called for an informal vote on this towards the end). This was possible only because there were no multiverse theory proponents present to argue the contrary – Joe Polchinski was scheduled to present a robust defence of string theory and the multiverse but cancelled due to ill-health. Gross delivered Polchinski’s presentation, and made a reasonably fair-minded job of it, though it’s always difficult to present views that you do not share, especially Polchinski’s claim that the multiverse is 94% likely to exist.

The Trouble with ‘Theory’

So, what’s all this really about?

I think this is all about the word ‘theory’. We all use this word rather loosely, and even those scientists who should really know better are no different. I have a theory about Donald Trump’s outrageous success in the race for the Republican nomination. I had a theory about whether or not Jon Snow would return from the dead (a theory confirmed in the most recent episode of Game of Thrones). I have a theory about Rey’s parentage. We can all agree that these are ‘just theories’, and consequently many among the wider public unfamiliar with the subtleties of the scientific method tend to wonder what all the fuss is about. Why all this squabbling over what is, after all, ‘just a theory’?

Of course, to scientists, successful theories are much more than this. They tell us something deeply meaningful about the nature of our physical reality. Theories such as Isaac Newton’s system of mechanics, Charles Darwin’s theory of evolution, Albert Einstein’s special and general theories of relativity, and quantum theory are broadly accepted as contingently ‘true’ representations of reality and form the foundations on which we seek to understand how the universe came to be and how we come to find ourselves here, able to theorize about it. Much of what we take for granted in our complex, Western scientific-technical culture depends on the reliable application of a number of scientific theories. We have good, solid reasons to believe in them.

In a recent New York Times online article, cell biologist Kenneth R. Miller explained that a scientific theory ‘… doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.’

I’m sure we can all violently agree with this, but we also have to face up to the really rather awkward situation we find ourselves in. The theories we know and love (and have come to depend on) combine to form the structures that scientists use to describe the properties and behaviour of elementary particles (in the ‘standard model’ of particle physics) and the large-scale behaviour of the universe (in the ‘standard model’ of big bang cosmology). These structures are simply unprecedented in their scope and accuracy. They are truly marvelous intellectual achievements.

But there’s a problem. These structures are also full of explanatory gaps – there are things they just can’t describe adequately. The standard model of big bang cosmology needs dark matter but this is nowhere to be found in the standard model of particle physics. We also need dark energy – the energy of ‘empty’ spacetime – but quantum theory predicts that the density of dark energy should be about 10^120 times larger than it is. Following the discovery of the Higgs boson at CERN we have some confidence that we understand where mass comes from, but we have no way of determining the masses of the elementary particles from first principles.

And, of course, the reason we have two ‘standard models’ and not one is that the basis for the big bang theory is general relativity, yet it is quantum theory (in the form of quantum field theory) that is used to describe elementary particles. General relativity and quantum theory are venerable, tried-and-trusted structures, but when we try to put them together we find they really don’t get along and the whole thing starts to fall apart.

Here’s the rub. We know our current theories are inadequate. We even understand why they’re inadequate. But there’s no huge flashing, illuminated sign saying ‘this way to the solution to all the problems’.

And, just to make matters much worse, we’re now dealing with a science in maturity, which means that the time (and the cost) required to determine new empirical facts about elementary particles and the wider universe has lengthened beyond the career-span (and, indeed, the life) of a typical theorist. Peter Higgs published a paper detailing what came to be known as the ‘Higgs mechanism’ in 1964, when he was 35. On 4 July 2012, he sat in CERN’s main auditorium listening to the announcement that the boson named for him had been discovered. He was 83. He said: ‘It’s really an incredible thing that it’s happened in my lifetime’.3

Einstein used his newly-formulated field equations of general relativity to predict the existence of gravitational waves in 1916. They were finally detected 99 years later by the LIGO gravitational wave observatory, in September 2015. Now, there was already plenty of empirical evidence in support of general relativity and nobody has doubted its essential correctness for quite some time. But Einstein didn’t live to see his gravitational wave prediction confirmed: he died in 1955.

Why is this a problem? Well, imagine you’re an ambitious young theorist, working at a prestigious academic institution. What do you do? Throw your arms in the air and declare that you simply don’t know? Wait patiently for 50 years until the next big empirical fact comes along? No, you speculate about how the problems with current theories might be fixed.

The Logical Progression

So, how does this work? I don’t think it’s all that difficult to construct a process for the way theory gets done. In the old days, when there was plenty of unexplained data lying around, scientists would try to deduce a theory using already familiar concepts, and see not only whether it ‘fit’ the existing data but also whether it predicted some new and preferably unusual facts that could be sought. In situations where data is hard to come by, scientists might instead try introducing some new concepts and develop a theory which provides a better explanation or rationalization of the existing data by plugging one or more of the explanatory gaps. This sometimes involves a complete re-interpretation of existing theoretical concepts – we thought this meant that, only to find that it means something else entirely.

Either way, we start with an idea. Maybe we should try to describe elementary particles not in terms of physically unrealistic ‘point particles’, with all their mass concentrated to an infinitesimally small point, but as ‘strings’ or filaments of energy. Maybe we can resolve some of the problems of the standard model of particle physics by assuming that there exists a fundamental symmetry – a ‘supersymmetry’ – between matter particles (such as quarks and electrons) and the particles that carry forces between them (such as photons).

Theorists start with such ideas, and run with them. They build elaborate mathematical structures that they then call ‘theories’ – supersymmetry theory, string theory, and so on.

But, hang on. According to Miller, quoted above: ‘A theory is a system of explanations that ties together a whole bunch of [empirical] facts.’ Well, this is tricky. All the known facts are more-or-less explained by our existing theories. It’s the gaps we’re trying to fill. For reasons I will try to explore below, these theories are unable to make hard-and-fast predictions – they aren’t yet tied to any facts, nor do they provide any new interpretations of the facts we already have.

Consequently, I prefer to think of these as hypotheses. This is a perfectly natural, logical progression. We wrap the new idea around a mathematical structure, but until we can do something useful with it in relation to empirical facts, it’s a scientific hypothesis, not a theory. Theories that cannot (yet) establish a foundation in empirical data simply do not – should not – have the same status as theories that can.

Let me try to illustrate this with a couple of examples. In 1905 Einstein published a speculative paper suggesting that light could be considered to consist of indivisible ‘particles’ or packets of energy called quanta. This was Einstein’s light-quantum hypothesis. He took the idea – light radiation as quanta – and wrapped a structure around it to form a hypothesis. He then used the hypothesis to predict the results of experiments on something called the photoelectric effect. These predictions were borne out about ten years later (and Einstein was awarded the 1921 Nobel Prize in Physics for his efforts). The theory that resulted (from this and many, many other developments in experiment and theory) is, of course, quantum theory.
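The prediction, in its standard modern form (Einstein’s own notation was different), is strikingly simple: the maximum kinetic energy of an electron ejected from a metal surface rises linearly with the frequency of the incident light, not with its intensity. With h as Planck’s constant, ν the light frequency and W the ‘work function’ of the metal:

```latex
E_{\mathrm{max}} = h\nu - W
```

It was this linear relationship, with slope h, that Robert Millikan’s experiments confirmed.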

In 1915 Einstein published details of his general theory of relativity. He had started with the idea – which came to him in a moment of inspiration whilst working at the Swiss Patent Office in Bern – that a man falling freely would not feel his own weight (meaning gravity and acceleration are equivalent). He wrapped this idea in a mathematical structure in which spacetime curves in response to mass-energy, and mass-energy moves in response to the curvature of spacetime.

So, why was this advertised as a ‘theory’, rather than a ‘hypothesis’? Well, I’ve already said that scientists use the term ‘theory’ rather loosely, and sometimes ‘hypothesis’ and ‘theory’ are used interchangeably. But in this case I’d point to the simple fact that in a report to the Prussian Academy of Sciences on 18 November 1915, Einstein used the general theory to predict the advance in the perihelion of Mercury, a pre-existing fact that couldn’t be explained by Newton’s theory of universal gravitation. The result, an additional contribution of 43 arc-seconds per century, was well within the range of astronomical observations of the time, 45 ± 5 arc-seconds per century.
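For readers who like to see where such numbers come from, the advance per orbit can be written in its standard textbook form (not the route Einstein took in 1915), in terms of the Sun’s mass M, Mercury’s semi-major axis a and orbital eccentricity e:

```latex
\Delta\phi = \frac{6\pi G M}{c^{2} a (1 - e^{2})} \approx 5.0 \times 10^{-7} \ \text{radians per orbit}
```

Mercury completes roughly 415 orbits per century, which multiplies up to about 43 arc-seconds per century.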

Einstein’s hypothesis did indeed plug an explanatory gap. It also accommodates Newton’s theory of universal gravitation as a first approximation (and is therefore consistent with everything that can be done with Newton’s gravity). Einstein made the finishing touches to his theory and presented it to the Prussian Academy on 25 November. He published a comprehensive review in the journal Annalen der Physik in March 1916.

From this I think we can argue that a scientific hypothesis becomes a fully-fledged scientific theory only when it establishes some foundations in empirical data.

This is all well and good. But in a period where theory must somehow progress in the absence of data, how do theorists choose between rival ideas and hypotheses? In the race to solve all the problems, how do they figure out which horse to back?

String Theory and the Scientific Method

Dawid explored the logic and reasoning behind the string theorists’ motivations in his book String Theory and the Scientific Method, which was published in 2013.4 It was this book that, in part, prompted Ellis and Silk to publish their provocative article. Dawid’s book is based on something called the ‘Bayesian approach’, named for eighteenth-century statistician, philosopher and Presbyterian minister Thomas Bayes. Bayes’ approach was given its modern formulation by Pierre-Simon, marquis de Laplace in 1812.

There is nothing particularly contentious about this. When faced with a choice between two actions, a rational person will choose the action that maximizes some expected utility. But, given that the world is a complex and sometimes unpredictable place, how do any of us know which action will achieve this? Should I keep my money in my bank account or invest in the stock market? In Bayesian decision theory, I put probabilities on the possible outcomes of each action and choose the action with the highest expected utility. I don’t necessarily calculate these probabilities: I might look at bank interest rates and study the stock market and try to form an objective view, or I might run with a largely subjective opinion about these different choices taking into account my appetite for risk.5
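To see the decision-theoretic logic at work, here is a toy sketch in code – the probabilities and payoffs are entirely invented for illustration, and nothing here comes from Maher’s book:

```python
# Toy Bayesian decision sketch: choose between actions by expected utility.
# All probabilities and payoffs below are invented purely for illustration.

actions = {
    "bank":   [(1.00, 1.02)],                # (probability, payoff): a safe 2% return
    "stocks": [(0.60, 1.10), (0.40, 0.85)],  # 60% chance of +10%, 40% chance of -15%
}

def expected_utility(outcomes):
    """Probability-weighted sum of payoffs for one action."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.3f}")

print("rational choice:", max(actions, key=lambda a: expected_utility(actions[a])))
```

On these made-up numbers the cautious choice wins; a different appetite for risk amounts to assigning different utilities to the same outcomes.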

We can extend this kind of logic to making choices between different scientific hypotheses. Suppose there is a hypothesis (call this h) in which you have come to develop some belief that it might be a ‘true’ or valid representation of some aspect of reality. Assign it a ‘probability of being valid’ given by P(h), a measure of your degree of belief in it, called a ‘prior probability’. Now look at it again in the light of some evidence (e). The ‘probability of being valid in the light of the evidence’ is P(h|e), called a ‘posterior probability’.

We’re now faced with a very simple question. Is P(h|e) larger or smaller than P(h)? In other words, does the evidence confirm or support h – P(h|e) is greater than P(h) – or does it disconfirm or undermine it – P(h|e) is less than P(h) – or is it neutral – P(h|e) equals P(h)? Yes, it’s that simple.6 And don’t be fooled, this is just a tool and, like all tools, it can be mis-used (there’s a good discussion on this by Scientific American journalist John Horgan here).
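A minimal sketch of the update itself – again with invented numbers – makes the mechanics plain:

```python
# Bayes' theorem: P(h|e) = P(e|h) * P(h) / P(e), where
# P(e) = P(e|h) * P(h) + P(e|~h) * (1 - P(h)).
# The three input probabilities below are invented for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(h|e) from the prior P(h) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

p_h = 0.2  # prior degree of belief in hypothesis h
print(f"P(h)   = {p_h:.2f}")
print(f"P(h|e) = {posterior(p_h, 0.9, 0.3):.2f}")  # e strongly expected under h: confirms
print(f"P(h|e) = {posterior(p_h, 0.3, 0.9):.2f}")  # e unexpected under h: disconfirms
```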

Because we’re now talking about confirming or disconfirming hypotheses, this approach is often referred to as Bayesian confirmation theory. Again, scientists don’t necessarily calculate these probabilities, but they will certainly form an instinct for them that may be somewhat objective but which history tells us is likely to be largely subjective. When the evidence is derived from empirical data there is in principle no real issue. This is the way we choose between different scientific hypotheses, it is how hypotheses become theories, and it is how theories are tested.

But we’re in a situation where there are no relevant empirical data. So, instead, Dawid considers forms of non-empirical evidence. There’s a slight problem here, because the application of Bayesian confirmation theory to this situation leads to some rather contentious phraseology, such as ‘non-empirical confirmation’, and ‘theoretically-confirmed theory’. Dawid learned directly from some participants at the Munich conference that they felt that the use of the word ‘confirmation’ in this context was inappropriate. His approach has led to accusations that he’s intent on developing a kind of ‘post-empirical science’, accusations which he strenuously denies.

I actually think this is all perfectly satisfactory if we’re prepared to acknowledge that we’re only talking about ways to choose between different scientific hypotheses, rather than ways to ‘confirm theories’. In such a situation there is absolutely no issue – this is what scientists have always done. Dawid disagrees, though his argument is subtle. He claims this use of Bayesian confirmation theory is about much more than just making choices between rival hypotheses: it is about understanding the relative probability that a hypothesis or theory is viable (not necessarily that it is ‘true’).7 I’m not at all sure I see a difference here – any rational choice is surely going to be based on judgements about the relative merits and explanatory potential (call this viability) of the various options available. The issue, as it always is with arguments based on the Bayesian approach, is this: what is the basis for these judgements?

So, when making choices between rival scientific hypotheses, what passes as non-empirical evidence? In his book Dawid identified three arguments.

The No Alternatives Argument (NAA) – This is basically an admission (or, more likely, an assumption) that there are, in fact, no rivals: the hypothesis in question is the ‘only game in town’. Many string theorists appear to believe this, despite the existence of genuine rivals. String theorists appear to have backed away from the claim that this approach represents a viable candidate for a ‘theory of everything’, but extol its virtues as a candidate theory of quantum gravity. Yet even on this more modest view, string theory is not the only game in town. From an analysis of monthly publications on quantum gravity in the late 1990s, science historian Helge Kragh observed that 56% of these were about string theory, and 20% were about a rival hypothesis called loop quantum gravity (the balance were concerned with other approaches).8

The Argument of Unexpected Explanatory Coherence (UEA) – This applies when the mathematical structure of the hypothesis produces some unlooked-for relationships with other hypotheses or established theories. Early in the development of string theory it was found that the theory predicts the existence of a hypothetical boson with a spin quantum number of 2, precisely what is expected for the graviton, the hypothetical carrier of a quantum gravitational force. Various string theory structures have also been shown to exhibit a number of mathematical ‘dualities’, the most famous being the AdS/CFT correspondence, connecting a string theory formulated in anti-de Sitter space with conformal field theory, a class of structures that includes the quantum field theories of the standard model of particle physics. The correspondence establishes these structures as what Gross referred to at the Munich conference as ‘kissing cousins’.

The Meta-Inductive Argument (MIA) – This involves essentially ‘borrowing’ instances of the empirical confirmation of existing theories (such as the discovery of the Higgs boson at CERN) to bolster belief in the overall direction of development of a new hypothesis. Given the AdS/CFT correspondence, failure to find the Higgs would have been a tremendous blow not only for the current standard model of particle physics, but would also have completely undermined the string theory programme.

It’s important to note that at no point does Dawid suggest that this approach is valid for determining if a hypothesis (or, for that matter, a theory) is ‘true’. He is rather exploring ways of judging the value of a hypothesis and developing a sense of trust in it for situations in which empirical evidence is non-existent or very hard to come by. Such trust must include at least the promise of empirical confirmation which, he argues, remains as the final arbiter ‘in the background’.

Lost in Parameter Space

So, what is the prospect for a few string theory predictions, sufficient for us to accept that it has promise, that it is capable of changing status from a scientific hypothesis to a fully-fledged scientific theory, anytime soon? Readers seeking a few insights on the difficulties of such ‘string phenomenology’ might do well to watch the presentations by Gordon Kane and Fernando Quevedo at the Munich conference.

Kane, for one, was quite clear. As far as he is concerned, there is no issue in using string/M-theory to predict the masses not only of the Higgs boson (ahead of the discovery of this particle at CERN in 2012) but also of some of the lighter supersymmetric particles (or ‘sparticles’), anticipated by the assumption of supersymmetry. Surely, this is exactly what we need? Isn’t this a perfect example of the use of a scientific hypothesis (h) to make predictions which can then be judged in the light of empirical evidence (e)?

Before we get too carried away, let’s look more closely at what’s involved.

M-theory was the trigger for the second ‘string theory revolution’ in 1995. String theorists were confronted with five different versions of supersymmetric string theory – called Type I, Type IIA, Type IIB and two versions of ‘heterotic’ superstring theory. Each version requires 10 dimensions, the three familiar dimensions of space plus time, plus six extra spatial dimensions that are rolled up and hidden (or ‘compactified’) at scales of the order of the Planck length, about 1.6 billionths of a trillionth of a trillionth (1.6 × 10^-33) centimetres. Mathematical physicist Ed Witten made a bold conjecture. Maybe these five structures are all variants of one over-arching 11-dimensional structure, which he called M-theory. At the time he refused to be drawn on the significance of ‘M’.
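For reference, the Planck length is the scale you get by combining the three fundamental constants ħ, G and c:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-33} \ \text{cm}
```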

This remains a conjecture. There is today no single structure we can point to and call ‘M-theory’. At the Munich conference, Gross admitted: ‘We thought string theory was a theory, but we can’t write down the equations, so string theory is a framework’. Applying M-theory really means going back to one of the variants, or what Quevedo calls a ‘corner’ of M-theory.

Having fixed on this, the theorists must then choose how to ‘compactify’ the six extra spatial dimensions that the structure demands. These compactification schemes are generally called Calabi-Yau spaces or manifolds. Slight problem. In 2003 theorists Shamit Kachru, Renata Kallosh, Andrei Linde and Sandip Trivedi worked out the number of different Calabi-Yau shapes that are theoretically possible. This number is determined by the number of ‘holes’ each shape can possess, up to a theoretical maximum of about five hundred. There are ten different possible configurations for each hole. This gives a maximum of 10^500 different possible Calabi-Yau shapes, with no basis on which to choose the ‘right’ one, the one relevant to the description of our universe.
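The counting behind that headline number is brutally simple: roughly ten independent configurations for each of up to about five hundred holes multiply together:

```latex
N \sim \underbrace{10 \times 10 \times \cdots \times 10}_{\sim 500\ \text{holes}} = 10^{500}
```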

This isn’t just a question of mathematical semantics. Different Calabi-Yau manifolds yield very different particles, forces and physical laws. In other words, they describe other potential universes, leading to the (outrageous) assumption that this ‘landscape’ of abstract mathematical shapes describes a multiverse. In a wonderful example of circular logic, multiverse theorists argue that we shouldn’t really be surprised to find ourselves in a universe with a Calabi-Yau manifold finely-tuned to yield a universe that can support the evolution of intelligent life forms.

Having selected a compactification scheme, the theorists must now make some assumptions about the dynamics of the system they’re modeling, in terms of the relationships between potential and kinetic energy. Kane and his colleagues adopted some ‘generic’ functions to do this.

It doesn’t really matter whether you follow the mathematical logic or not. What I hope you get from this is some sense of the arbitrariness of the procedure. There’s just way too much freedom and flexibility, there are too many choices, too many parameters. It’s almost impossible to get it nailed down. This means that any predictions are virtually worthless. Kane and his colleagues predicted the mass of the gluino – the supersymmetric partner of the gluon, which binds quarks together inside protons and neutrons. They got a value of 1.5 TeV, plus or minus 10-15 percent, and suggested that it should be found at the LHC during the summer of 2016.9 I confess I’m not waiting with breath bated.

The real problem with a prediction like this is that, in the event the gluino is not found this summer, nothing is proven, either way. Arguably, the absence of evidence, which we could think of as (-e), serves to disconfirm the hypothesis, as P(h|-e) would be less than P(h). But such is the freedom and flexibility of choice in the structure that the theorists can simply choose differently, change their assumptions, and shift the prediction to a higher mass range. Some theorists have been playing this kind of game for decades.
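In Bayesian terms the disconfirmation is easy to make precise (writing ¬e for the absence of the evidence, the ‘-e’ above): if the hypothesis makes the evidence more likely than it would otherwise be, then failing to observe that evidence must drag the posterior down:

```latex
P(h \mid \neg e) = \frac{P(\neg e \mid h)}{P(\neg e)}\, P(h)
                 = \frac{1 - P(e \mid h)}{1 - P(e)}\, P(h) < P(h)
\quad \text{whenever } P(e \mid h) > P(e)
```

The trouble, of course, is that shifting the predicted mass range amounts to quietly replacing h with a new hypothesis, and starting the bookkeeping afresh.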

There was a fair amount of discussion following Kane’s presentation at the Munich conference. Gross didn’t think this work provided a credible test of M-theory. At best, this kind of approach involves making choices which help to constrain the model sufficiently to provide a ‘fit’ to pre-existing facts (such as the mass of the Higgs boson). But it doesn’t provide valid predictions.

Whilst string theorists remain firmly optimistic about their likely chance of success, I don’t really see things changing anytime soon. It’s hard not to be reminded of Richard Feynman’s famous observation: ‘String theorists don’t make predictions, they make excuses’.10

Cognitive Bias and the Social Normalisation of Deviance

So, the string theory programme will continue to be justified by non-empirical means, for the foreseeable future. There are two consequences. Firstly, if you buy the logical progression I outlined above, then in the absence of empirical data a scientific hypothesis cannot progress to the status of a fully-fledged scientific theory. Secondly, whilst the vacuum created by this absence can be partly filled by rational, but non-empirical, considerations, we must accept that a whole bunch of irrational, emotional considerations get dragged in as well. This kind of fundamental physics becomes inevitably more political.

Choosing which horse to back in the race to solve the problems of the standard model means making judgements about the relative promise of different hypotheses. Dawid’s three arguments are, of course, entirely rational and logical, but such judgements also inevitably depend on rather more irrational considerations, derived from the perceptions, culture and value-sets of the scientific community’s leaders, and the way in which they exercise power. Those with recognised authority tend to set the standards by which the value of research efforts is judged, and rewarded. They can greatly influence funding decisions and the competition for the best minds at the world’s most prestigious academic institutions.

The danger is that research programmes become self-perpetuating, even when they appear to have failed to deliver on whatever promise they held.

Carlo Rovelli, one of the founders of loop quantum gravity, believes that this is what has happened with string theory. He argues that the last thing we need from philosophers is an attempt to legitimise a failed string theory programme on the basis that some theorists hold chairs at prestigious institutions. ‘A theory is interesting when it teaches us something new about the real world,’ he says, ‘not when it becomes a house of cards that delivers nothing but university positions’.

At the conference, Rovelli accused Dawid of confusing validation (that word ‘confirmation’, again) with the preliminary appraisal of hypotheses (much as I have outlined here), thereby undermining the perception of science, eroding confidence in the scientific method, and misleading young scientists into joining a sterile programme.

It’s probably fair to say that whilst the interest has undoubtedly waned since the second ‘string revolution’ 20 years ago, it is nevertheless still relatively high. It’s quite difficult to tell how many string theorists there might be working in academic institutions around the world but a figure of 1,500 is often cited. As Kragh has pointed out, there are more string theorists today than there were physicists a hundred years ago.11

Of course, science is a human endeavour and has always had its sociological dimensions, but with empirical data ‘in the background’ these dimensions tend to come to the fore. Dawid admits: ‘That’s just the way it is’.

Mathematical physicist Peter Woit believes there is nothing particularly new or unusual about any of this, in principle. Professional mathematicians do not work with empirical data yet they avoid the descent into messy politics by maintaining a very rigorous approach to their work, rigour that Woit simply can’t find in contemporary theoretical physics. ‘These are very different cultures,’ he says. ‘It is very easy to fool yourself with mathematics, so mathematicians must make their arguments clear and unambiguous. The string theorists fail to do this adequately.’ Indeed, history is littered with examples of highly compelling – and utterly convincing – mathematical structures that nevertheless proved inadequate as descriptions of nature.

Woit is concerned that, despite Dawid’s best intentions, the demand for empirical proof may be pushed so far into the background that it becomes essentially invisible. The theory community is then even more susceptible to what Hossenfelder called ‘cognitive biases’ at the Munich conference. This is a notion introduced by psychologists Amos Tversky and Daniel Kahneman in 1972, in which ways of thinking and processing mental inputs become distorted and irrational (or, at least, less rational compared with some standard that we might call ‘good judgement’) and so become much less objective. There’s a long list of cognitive biases (see here), but I think the most pertinent is ‘confirmation bias’ (that word again), a tendency to focus your attention only on arguments (or data, if they exist) that support or confirm your preconceptions.

At its most extreme we get what sociologist Diane Vaughan calls the ‘social normalisation of deviance’. This is arguably at its most visible in instances of corporate or political wrong-doing, for example in post-mortem examinations of the 2008 financial crisis, or in the soul-searching following the vehicle emissions scandal, or the recent decision of the Brazilian parliament to begin impeachment proceedings against President Dilma Rousseff. In each case we might be tempted to ask: ‘What were they thinking?’ I suspect they were mostly thinking: ‘Don’t worry. It’s okay – everyone is doing that’.

With the demand for empirical evidence pushed firmly into the background, it becomes perfectly normal to ignore the demand completely, to lose all respect for it and, arguably, to abandon the scientific method. ‘There’s a younger generation of theorists now quite comfortable with the lack of connection to empirical data,’ warns Woit.

A young, ambitious and impatient theorist doesn’t want to wait 50 years to discover if they’re on the right track. They’ll draw encouragement from the values and behaviours of the community of which they are a part, saying to themselves: ‘Don’t worry. It’s okay – everyone does that’. They’ll happily overlook that science is meant to be all about empirical evidence, publish papers on their new ‘braneworld scenario’, write best-selling popular books, and then wait in hope to be awarded a $3 million Breakthrough Prize.

The theorist Mikhail Shifman has called them a ‘lost generation’.12

Now, this is unprecedented, says Woit: ‘Physics has traditionally been the hardest of the hard sciences – we’ve never had to worry about any of this before.’

So, What Happens Next?

Ellis does not repent his strongly-worded article with Silk. He’s somewhat discomfited by the fact that the debate became rather personalised, but feels strongly that the present situation in fundamental theoretical physics cannot continue unchallenged. He argues that a relatively small community of theorists should not be allowed to undermine a methodology that applies across the whole of science. If we lose sight of the need to anchor science with empirical data – for whatever reasons, intended or not – we gift creationists, homeopaths, the ‘anti-vaxxer’ movement and many other pseudoscience practitioners a whole new line of argument to exploit.

Pigliucci echoed these sentiments at the Munich conference, referencing a post about the Ellis-Silk paper which appeared on Uncommon Descent, a blog serving the intelligent design community, which said: ‘If physicists want to join the many and various advocates of self-expression who do not depend on rigorous examination of evidence to validate their assertions, that is a choice physicists make… Just remember, no one did that to you. You did it to yourselves’.

Pigliucci closed his presentation with a suggestion that the organisers of the conference might want to consider framing a ‘Munich Declaration’ of some sort. He did not suggest what such a Declaration might contain, but urged that it should be respectful and professional, concern itself with the nature of scientific theorizing and the relationship between theory and evidence, openly acknowledge the sociological dimension, and recognize that it will carry consequences for the public understanding of, and trust in, science and scientists.

I wholly support this idea. I can’t for a minute imagine that it will be easy to secure agreement on the wording of such a Declaration from the conference participants, but I think it would be well worth the effort. The Declaration could then be published in Nature.

If we could get this far, I’d like to see some acknowledgement in the Declaration of the process in which scientific theories are developed – from idea, to hypothesis, to theory – and the roles played by non-empirical evidence (in evaluating ideas and hypotheses) and empirical evidence (in progressing hypotheses to fully-fledged theories). I don’t expect common usage of the word ‘theory’ to change anytime soon, but when theorists write blog posts and popular books, make television documentaries and talk to science journalists, I’d like the Declaration to encourage them to make a much more concerted effort to clarify the status of the theory under discussion.

 

1 George Ellis and Joe Silk, Nature, 516 (2014) p. 321.

2 Jim Baggott and Daniel Cossins, New Scientist, 27 February 2016, p. 39.

3 See Jim Baggott, Higgs: The Invention and Discovery of the ‘God Particle’, Oxford University Press, 2012, p. 219.

4 Richard Dawid, String Theory and the Scientific Method, Cambridge University Press, 2013.

5 See, for example, Patrick Maher, Betting on Theories, Cambridge University Press, 1993.

6 See, for example, Colin Howson and Peter Urbach, Scientific Reasoning: The Bayesian Approach, Open Court, 1989.

7 Richard Dawid, personal communication, 4 May 2016.

8 Helge Kragh, Higher Speculations, Oxford University Press, 2011, p. 316.

9 Sebastian A.R. Ellis, Gordon Kane and Bob Zheng, arXiv: 1408.1961v1 [hep-ph], 8 August 2014.

10 Richard Feynman, quoted by Lawrence Krauss, Isaac Asimov Memorial Panel Debate, Hayden Planetarium, American Museum of Natural History, New York, 13 February 2001. Quoted in Peter Woit, Not Even Wrong, Vintage, 2007, p. 180.

11 Helge Kragh, Op. cit., p. 303.

12 M. Shifman, arXiv: 1211.0004v2 [physics.pop-ph], 14 November 2012, p. 12.