Last week, Limerent Emeritus drew my attention to a thought-provoking article in the Washington Post about how some self-help books “express great confidence in theories of the brain that are still in their unproven infancy”.
It’s a worthwhile read, and I agree with the general premise. There is a similar phenomenon in the exercise and health space, where gurus find a single study listed on PubMed and use it to extrapolate a Universal Truth about how to build muscle or lose fat.
Given that I blog about a lightly researched psychological phenomenon from a neuroscience perspective, I feel some need to explain myself. How confident can I really be about applying neuroscience to a contentious and understudied human experience, given that there isn’t much direct research in the area?
To answer that, we need to understand a bit about the nature of the Scientific Method (an Ideal), the scientific community (a bunch of people nominally practicing science), and the scientific literature (a body of literally tens of millions of papers published by the scientific community).
1. The Scientific Method
A lot could be written about the development of the scientific method, and its strengths and limitations, but for the sake of space, we can boil it down to its essence.
The Scientific Method is an approach to discovery of knowledge that seeks to neutralise sources of bias, and the cognitive limitations of humans, by rigorously testing ideas through experimentation.
The common formulation is this: a scientist has an idea about how some aspect of reality works, and formalises this idea into a “hypothesis” that can be tested. Then, the scientist devises experiments that would test the hypothesis by disproving it. If the experiments fail to disprove the hypothesis, then the scientist refines it, designs additional experiments, and continues to try to prove it wrong. If a hypothesis survives this endless assault of stress-testing, then it will eventually become accepted as provisionally correct. At this point, it might be promoted to a Theory.
To give an example – let’s say you had the idea that limerence is caused by attachment disorders in childhood. To test this scientifically, you would assert the hypothesis “people who experience limerence are more likely to have had childhood attachment problems than people without limerence”. You would then devise an experiment. You could, for example, ask a large number of limerents to take a survey on childhood attachment experiences, then repeat this with a similarly large number of non-limerents, and compare the two populations. If the hypothesis is incorrect, there would be no statistically significant difference between limerents and non-limerents.
To do this well, you would also need to think carefully about controlling for confounding variables, and refine the experimental design to anticipate ways your results might not be sound, but that’s the general principle.
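To make the comparison concrete, here is a minimal sketch of how such a two-group analysis might look. Everything in it is an illustrative assumption: the survey scores are invented, the sample sizes and score scale are arbitrary, and the choice of a simple t-test is just one way such data could be analysed, not a prescription.

```python
# Minimal sketch of the two-group comparison described above.
# All numbers here are invented for illustration; a real study would use
# actual survey responses and a carefully planned analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical attachment-difficulty scores (higher = more childhood problems).
limerents = rng.normal(loc=5.5, scale=2.0, size=200)
non_limerents = rng.normal(loc=5.0, scale=2.0, size=200)

# Welch's two-sample t-test: is the difference between group means
# larger than we would expect by chance?
t_stat, p_value = stats.ttest_ind(limerents, non_limerents, equal_var=False)

print(f"mean score, limerents:     {limerents.mean():.2f}")
print(f"mean score, non-limerents: {non_limerents.mean():.2f}")
print(f"p-value: {p_value:.4f}")

# A p-value above the chosen threshold (conventionally 0.05) means the data
# fail to show a significant difference, and the hypothesis is not supported.
```

Whether a simple t-test is even the right tool is itself a methodological choice a reviewer might challenge, which is rather the point.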
As an approach to discovering truth, the scientific method is wildly successful. It takes emotion, prejudice, politics, and cognitive biases out of the process of knowledge generation. The most amazing discoveries in the history of science have usually been unexpected – someone was trying to test their favourite hypothesis and proved themselves wrong.
2. The scientific community
In reality, scientists don’t work in isolation. There are teams all over the world, all trying to advance knowledge, all trying to test ideas and make a breakthrough discovery. If one group proposes a new hypothesis, all the other groups working in the same field are highly motivated to prove them wrong. That adds a lot of competition and ego to the mix, but as long as the principles of research integrity are abided by, then it accelerates progress.
Scientific knowledge mostly accumulates in small steps. Specific studies are carried out to test hypotheses, and the results are published in papers that summarise a set of linked experiments. The main value of a paper lies in the experimental results, but there is also a discussion section where the authors are granted some licence to speculate about the wider implications of their discovery. This is where people advance their pet theories about the field.
When a field is healthy, there is a lot of activity at the leading edge, and groups are rapidly publishing new discoveries in new papers and trying to build a body of evidence to explain how their little corner of the world works. There is cut-and-thrust, point-and-counterpoint, but in a spirit of constructive competition and a common purpose to get closer to the truth.
Knowledge at this leading edge is all uncertain. Dr Smith’s lab publishes a paper showing that gene X is highly active in disease Y and that silencing the gene slows disease progression. Dr Jones’s lab then publishes a follow-up paper showing that the silencing experiment was flawed, and what’s more, that overactivation of gene X does not make the disease worse.
It takes time for all this cut-and-thrust to settle into a consensus. A lot of work is needed, testing the ideas from multiple angles and in multiple labs, to reach the point where a hypothesis is provisionally accepted.
This is where the Washington Post article comes in. It rightly points out that some gurus cherry-pick a single paper from the chaos of this leading edge of discovery and present it as settled science.
3. The scientific literature
Scientific papers are published in specialist academic journals. There are thousands of them, even just in the biomedical sciences, and more are founded each year.
To be recognised as credible, a biomedical journal has to be listed in PubMed, a database that indexes the biomedical literature. These journals must include “peer review” as part of the publication process, where a new paper is sent out to independent experts in the field for critique before publication. Typically, the reviewers are asked to confirm the methodological soundness of the paper, but also to judge whether it is of sufficient interest to warrant publication.
Scientists operate under the belief that peer review is a venerable institution, intended to prevent junk science from being published. In actual fact, it was only widely adopted in the 1970s, and while it may well help maintain rigour, it has just as many shortcomings. Nevertheless, peer review is now accepted as a minimal requirement for a journal to be taken seriously.
Behind the chaos of the cutting edge experimental papers, there is a second tier of publications known as reviews. These papers summarise an active field of research and try to make sense of the fragmentary evidence base and unify it into a coherent narrative. Review articles are very valuable for non-specialists to understand the state of the art in an unfamiliar field.
Finally, a third tier of scientific publication is the textbook. Generally speaking, textbooks present ideas that have emerged from the primary experimental literature, passed through the secondary review literature, and become widely accepted as sound enough to be taught to University students as the basic facts of the field. If an idea is in the textbooks it might still be wrong, but there is a large body of evidence to support it.
4. How things go wrong
So far, I’ve presented the public-facing version of the scientific process.
Unfortunately, there are many points of failure.
First, and most importantly, scientists are human. Although the scientific method is extremely powerful, it has to be followed properly for it to work. When people’s salaries, research money, reputation, and status depend on publishing exciting new discoveries, it is easy to see how corruption can creep in.
The mundane end of this spectrum is designing experiments to try to prove your favourite hypothesis right (rather than wrong), dropping data that weaken your claim, “massaging” your statistical tests to get a positive outcome, or ignoring your competitors’ contradictory papers. The extreme end of the spectrum is directly falsifying or fabricating data – a practice that is increasingly common.
Money also corrupts the enterprise. An obvious example is the pharmaceutical industry funding trials of their own potential products, but the publication industry is itself a massive money spinner. Those thousands of journals are published by companies that generate tens of billions of dollars, often with astonishing profit margins.
Another serious problem is gatekeeping. Given the huge number of journals, no individual could conceivably keep up with the full literature. No recruitment committee for a University could feasibly read and understand all the publications of their roster of job candidates. Inevitably, proxy measures of publication quality are used.
There is a hierarchy in the status of scientific journals. Some, like Nature and Science, are positioned at the top, where only the most important, impactful, and wide-ranging discoveries are published. Getting a paper into Nature can make a scientific career. At the bottom there are “predatory” journals that charge a publication fee and therefore are highly motivated to accept almost any paper they receive.
Couple this status hierarchy with peer review, and you get a de facto system of “peer veto”, where reviewers act as gatekeepers to the career spoils that come with the most coveted publication slots.
Like any other human endeavour, systems that should be neutral end up becoming distorted by ego, vanity, money and resentment.
But, we just keep soldiering on, hoping that the rot from the edges does not seep too deeply into the core.
5. Back to limerence
OK, so after that long dissection of the business of science, what does this all mean for relating scientific discoveries to everyday life?
The first thing is to agree with the concerns of the Washington Post article that there is obvious danger (and opportunism) in extrapolating from studies at the cutting edge of experimental research to human behaviour.
Kirsten Martin’s article particularly focuses on fMRI studies, and these are, indeed, especially prone to misleading media splashes. There is a running joke in the wider neuroscience community that fMRI is the “new phrenology”, as blobby regions of increased blood flow get overinterpreted into proof that a given brain region controls a specific behaviour.
A weakness of the WaPo article, though, is that it overlooks the fact that any attempt to explain any aspect of human behaviour is necessarily going to be speculative. If you are writing about a specific human experience – limerence, procrastination, infidelity, grief, or anxiety – there is almost zero chance that there will be a settled, textbook-tier body of evidence obtained from human studies to draw from.
Good popular science attempts to present an argument in the following way: start with a foundation of textbook-tier knowledge. Add detail by summarising the state of the art from (multiple) review articles. Add spice by including some particularly interesting or provocative studies at the cutting edge.
That’s what I try to do with limerence. I concede that I rarely provide any citations – because this is still more of a blog than an academic site – but I hope that my process is clear. Reward, arousal, and bonding are textbook-tier phenomena that are pretty uncontroversial. The latest addiction research on wanting versus liking comes from the secondary literature and is still an active area of debate and refinement. Specific claims about serotonin wander into the primary experimental literature, and so are more fluid and contentious.
Like so many things in life, balance is the key. Don’t accept any claim simply because there is a published paper that supports it, but don’t discard all science-based self-help because it’s building arguments from a complex literature that won’t be settled for decades (if ever).
tl;dr: be cautious and use your judgement.