In 1747, James Lind, a physician in the British Royal Navy, was stationed aboard HMS Salisbury. Back then, long sea voyages had an unwanted passenger: scurvy. Today we know it is caused by vitamin C deficiency, but at the time no one knew its cause. Many people died, despite the most varied remedies, from fresh vegetables to elixirs of sulfuric acid, alcohol, sugar and spices.
On one such crossing, facing an outbreak of the disease, Lind picked 12 crew members with similar symptoms and housed them in the same quarters, giving them the same diet, except for one detail. The patients were divided into pairs and each pair received a different prescription: cider, drops of the aforementioned elixir, two spoonfuls of vinegar, half a cup of seawater, a laxative, and, for the last pair, two oranges and one lemon a day. Two weeks later, only this last pair had improved. This was one of the first recorded controlled experiments.
Now, what makes an investigation an experiment? The manipulation of the object of study. If Lind had allowed the patients to eat whatever they wanted instead of prescribing each pair’s diet, he would have conducted an observational study.
In an observational study, scientists observe, as the name says, without intervening. For example, if you wanted to find out whether coffee is good for headaches, you could compare a group of people who drink coffee every day with another group who never drink coffee, and see which group reports headaches more often: a cross-sectional study. Alternatively, you could follow the same group of people for several years, monitoring how much coffee they drink and how often they have headaches over time: a cohort study.
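The cross-sectional comparison above boils down to comparing how often each group reports the symptom. A minimal sketch in Python, with entirely invented yes/no reports (nothing here comes from a real dataset):

```python
# Hypothetical cross-sectional data: True means the person reported headaches.
coffee_drinkers = [True, False, False, True, False]  # invented answers
non_drinkers = [True, True, False, True, True]       # invented answers

def headache_rate(group):
    """Fraction of the group that reported headaches."""
    return sum(group) / len(group)

print(f"coffee drinkers: {headache_rate(coffee_drinkers):.0%}")  # 40%
print(f"non-drinkers:    {headache_rate(non_drinkers):.0%}")     # 80%
```

Such a comparison shows only an association; as the rest of the text argues, it cannot by itself tell you which way the causal arrow points.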
You can imagine the history of scientific studies as an endless debate with a skeptic. You notice that many people who suffer from headaches drink coffee and soon their symptoms go away. “Coffee cures headaches,” you say. The skeptic replies: “Maybe the pain goes away on its own; it has nothing to do with coffee.” So you carefully collect notes on many other people, some of whom drink coffee when they have a headache and some of whom do not, and you notice that those who drank coffee reported that the pain went away.
With more confidence, you go back to the skeptic and once again declare that coffee cures headaches. The skeptic replies: “It may be that the people who drink coffee are precisely the ones who already know coffee works for their headaches; perhaps their pain is caffeine withdrawal. That doesn’t mean coffee is effective for all headaches.”
With your spirits somewhat dampened, you decide to run a study with experimental manipulation, so that any relief can more easily be attributed to the coffee, which comes close to what Lind did on the ship. You take two people with a headache and toss a coin to decide who will drink coffee and who will not. Half an hour later, only the one who drank coffee no longer has a headache.
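The coin toss is simply random assignment to treatment and control groups. A minimal sketch of that step (the names and the `randomize` helper are illustrative, not part of the text):

```python
import random

def randomize(participants, seed=None):
    """Split participants at random into treatment and control,
    like tossing a coin for each assignment."""
    rng = random.Random(seed)  # seed only to make the example reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Two people with a headache; the coin decides who gets coffee.
coffee_group, control_group = randomize(["Ana", "Bruno"], seed=7)
print("drinks coffee:", coffee_group)
print("no coffee:   ", control_group)
```

Randomizing the assignment, rather than letting people choose, is what blocks the skeptic’s objection that coffee drinkers differ from non-drinkers in some other relevant way.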
Energized, you claim with even more conviction that coffee cures headaches. The skeptic immediately replies that a one-person control group is unimpressive; it may just be a coincidence. Devastated, and by this point with a headache yourself, you buy a coffee, because you will have to spend the night recruiting more people with these symptoms…
Of course, science isn’t quite that simple, but this imaginary debate with a skeptic captures the essence of the logic behind a good study. It is this logic that allows us to extract the answers we want with confidence; in this case, whether or not coffee cures headaches. You are always adding controls to handle objections someone might raise.
Collectively, scientists act as skeptics of one another, always looking for alternative explanations for each finding. Experimental manipulation weakens one of those alternative explanations: that the result is mere correlation, a coincidence, and does not imply causation.
Of course, a study on its own is not definitive proof of anything, be it experimental or observational. Science is built on consensus, with evidence accumulating across many independent studies. In some areas of research, it is common to carry out systematic reviews and meta-analyses, studies that attempt to summarize all the evidence accumulated on a particular question.
Nor does this mean that experimental studies are, by their nature, superior to observational ones. Reliability is a characteristic of the individual study, not of the type of study. The advantage experiments have in determining causation is not a good reason to spare a particular experiment from criticism, nor to rank any experimental study above any observational one. The conclusion, in the 1950s, that smoking causes lung cancer, for example, was based on observational studies.
It is interesting to realize that there is no ready-made scientific method. Scientists’ idea of what makes a good study has evolved over time, whether because new statistical techniques emerged, new ways of manipulating study objects appeared, or even new types of study were invented, as when Lind carried out his experimental intervention on the ship and solved the scurvy problem on board.
Kleber Neves is a neuroscientist and director of science at the Serrapilheira Institute.
Subscribe to the Serrapilheira Institute newsletter to follow more news from the institute and from the Ciência Fundamental blog.