Tuesday, June 21, 2016

Scientific Studies and Journalism


You've probably seen the headlines: Latest study shows eggs are good for you. Latest study shows eggs are bad for you. Cell phones might cause cancer. Scientists discover a way to make plastic out of pollution. These headlines are confusing because it seems like the scientific community can't make up its mind about anything, publishing contradictory information every other day. And many of the touted miracles and breakthroughs never show up in the real world.

This is a symptom of terrible journalism... journalism so bad that many of the writers of those articles should be fired. While readers stumble around, confused and distrustful of science with all its seemingly abundant contradictions, the writers are busy chasing clicks and eyeballs. It frustrates me that people who read these articles can't see past them or ask basic, relevant questions, but I don't really blame them. Science is complicated. If it weren't, we could all be scientists. People don't have the time to look at the specifics of every study, and even if they wanted to, they often have to pay for access. Not only do they need a background in the particular field being studied, they also need to understand how studies work in general. And that's before considering the dogmas people hold about many subjects, and plain irrationality. Boy, those things are in no short supply.

Let's explore the many ways words and sentences can trick you into believing things that are not true when it comes to medicine.

'The World Health Organization has categorized processed meats as a group 1 carcinogen, the same category as tobacco.'

One has to ask: what exactly does 'group 1 carcinogen' even mean? It's natural to assume the worst. The WHO uses the classification system of the IARC (International Agency for Research on Cancer). There is a difference between 'hazard' and 'risk'. Hazard describes how strong the evidence is that something can cause cancer at all, while risk describes how likely it is to actually give you cancer, and how much exposure it takes. The IARC categories for cancer hazard are, roughly: definite, probable, possible, don't know, and probably not. The fact that processed meats and tobacco are both considered definitely carcinogenic does not tell us how much of an impact those things actually have; it just tells us the link between those things and cancer is believed to be strong. Also in the 'definite' category are things like alcohol, sunlight, birth-control pills, and Chinese-style salted fish. Nobody recommends never stepping outside. There isn't a worldwide ban on birth-control pills. It takes a lot of sun over time, or a ridiculous number of birth-control pills, to get cancer. It doesn't take much tobacco to seriously damage your health.

Another problem with the bacon hysteria concerns the purported 18% risk of cancer from eating processed meats like bacon. An 18% risk of cancer from how much bacon? It turns out that eating two slices of bacon a day, every day, carries an 18% relative risk increase for colorectal cancer. The 18% figure headlines like to stick in your face is relative risk. The chance of getting colorectal cancer over your lifetime is about 5%. An 18% relative increase brings that up to 0.05 × 1.18, or a 5.9% lifetime chance of colorectal cancer. That is an absolute risk increase of about 0.9%. In other words, the risk is very small. That's not to say people should go heavy on the bacon, of course. Cancer is not the only ailment a person can have; obesity and high blood pressure are problems too. It's probably not a good idea to eat a lot of processed meat all the time.
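The arithmetic above is simple enough to sanity-check yourself. Here's a small sketch of the relative-to-absolute conversion, using the same numbers as the bacon example (the ~5% baseline and 18% figure come from the discussion above; everything else is just arithmetic):

```python
# Convert a headline's relative risk increase into an absolute one.
baseline = 0.05            # ~5% lifetime risk of colorectal cancer
relative_increase = 0.18   # the headline's 18% relative risk increase

new_risk = baseline * (1 + relative_increase)   # risk after the increase
increase = baseline * relative_increase         # absolute risk increase

print(f"New lifetime risk: {new_risk:.1%}")     # 5.9%
print(f"Absolute increase: {increase:.1%}")     # 0.9%
```

Whenever a headline screams about an X% increased risk, the first question to ask is: X% of what baseline?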

'Study finds video games increase aggression'.

What is 'aggression'? When I lose in a video game, I get angry. Being angry makes me aggressive. I also get angry at people who can't drive, and at people who can't read a study before jumping to conclusions. Also relevant is the fact that the latest study does not nullify all of the older studies, and a positive study doesn't 'cancel out' a negative one. The specifics of the studies matter. Even if all of the studies hold up to scrutiny, the objective viewpoint is to treat the evidence as what it is: contradictory data in a debatable field of study. I understand that people don't like nuance or uncertainty, but that is reality much of the time.

'Latest study finds that mice and rats exposed to cell phone radiation have an increased risk of brain and heart cancers.'

If you have been keeping up with the latest news, you probably know which study I am referencing here. This is the study done by the US government and published earlier this year, in which about 2,000 rats and mice were subjected to signals modulated to the CDMA and GSM standards at 900 and 1900 MHz. After two years, the researchers reported a 'low incidence' of brain and heart cancers. (The full data has not yet been released.) Not surprisingly, people who already believed cell phones cause cancer jumped on this study without actually reading it.

To state the obvious first: mice and rats are not humans. Results from mouse studies cannot be ported directly over to humans; otherwise nobody would ever run human trials. But let's assume for the sake of argument that rat results are directly comparable to humans. By the results of this study, female humans are immune to cell phone radiation, whereas males need to be careful: you will get more brain and heart cancers, but you will also live longer. (But only if you use the cell phone to talk to your friends for nine hours a day to get the full benefit!)

The animals in the study also did not behave normally. The cancers the irradiated animals developed are typical of older animals of that species, and since the controls (the ones that got no radiation) died unusually early, they might have developed the same cancers had they lived long enough. This study also wasn't peer reviewed. Well, technically, picking out the peers you want to review your study is 'peer review' in the sense that those people are your peers and they reviewed it, but that is not what peer review typically means in science. When the study was posted to a pre-publication site, it got hammered.

Also worth noting is that the cell phone has real value. What absolute risk of cancer one is willing to tolerate for a given activity varies from person to person. But if you are really worried about non-ionizing radiation, you should probably be more scared of the giant nuclear reactor people are exposed to all the time, which causes cancers that kill about 10,000 people per year in the US alone. It emits both ionizing and non-ionizing radiation. You meet it every time you go outside. It's called the sun.

'Study finds an 87% increased risk of autism for babies whose mothers took anti-depressants during pregnancy.'

As you might guess, that 87% figure is a relative risk increase. The baseline chance of having an autistic child is about 1%, which makes the absolute risk increase a bit less than 1%. There are other things to consider, too. Maybe the anti-depressants aren't the cause; maybe it's the depression itself, or the things depressed mothers have to go through. Maybe abstaining from anti-depressants causes harm elsewhere (like self-harm). Everybody has heard the phrase 'correlation is not causation', yet most people don't seem to believe it; they certainly don't act like they do. Sometimes looking at a link between two things is not enough to get the full story. The world is more complicated than that.

There are two types of studies: observational and experimental. Cohort and case-control studies are observational: they look at some group of people in an attempt to find correlations. A randomized controlled trial (RCT), on the other hand, is an experimental study. Both have strengths and weaknesses. RCTs are expensive and have smaller sample sizes than observational studies. If people are studying a rare phenomenon, it is hard to gather a large enough group for an RCT to show anything; if the thing studied takes a long time to show results, it is extremely time-consuming to follow people over the years. Some RCTs are also unethical to perform on humans. Still, RCTs are the gold standard of research today. Participants are assigned randomly, with one group given a control (for example, a placebo) and the other given the real pill. RCTs are not prone to many of the confounding factors of cohort studies. If we compare the life expectancy of people with hypertension on and off heart medication, we might not be controlling for factors like race, age, ethnicity, sex, or socioeconomic status. People who take medication might simply be sicker to begin with, or more willing to exercise and eat right.
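To make the confounding problem concrete, here is a toy simulation (entirely made up, not modeled on any real study): a drug that halves each patient's risk of death, but that is prescribed mostly to the sickest patients. A naive observational comparison concludes the drug is killing people.

```python
# Toy confounding demo: severity of illness drives both treatment
# and death, so the treated group looks worse despite a helpful drug.
import random

random.seed(0)

def simulate(n=100_000):
    deaths = {"treated": [0, 0], "untreated": [0, 0]}  # [deaths, total]
    for _ in range(n):
        sick = random.random() < 0.5                   # the confounder
        treated = random.random() < (0.9 if sick else 0.1)
        base_death = 0.40 if sick else 0.05            # sicker die more
        p_death = base_death * (0.5 if treated else 1.0)  # drug halves risk
        key = "treated" if treated else "untreated"
        deaths[key][0] += random.random() < p_death
        deaths[key][1] += 1
    return {k: d / t for k, (d, t) in deaths.items()}

rates = simulate()
# The treated group's death rate comes out roughly double the
# untreated group's, even though the drug helps every individual.
print(rates)
```

Randomizing who gets the drug, as an RCT does, breaks the link between sickness and treatment, which is exactly why RCTs dodge this trap.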

There are many possible confounding factors for a study, and even when a study is done correctly, the conclusions one draws from it may be incorrect. We have to consider sample size, correlation versus causation, and whether the study is directly applicable to humans; mouse studies are not the same thing as human studies. My point about sample size might seem like useless ranting, but it is a huge problem when it comes to studies on diet. The sample size of many diet studies is downright shameful. For example, one of the studies commonly quoted by the anti-artificial-sweetener crowd looked at people and their gut bacteria. It turns out the study took 7 people and dosed them with the FDA's maximum allowed level of saccharin for six days straight. These are the kinds of studies we are dealing with: groups of fewer than 10 people, often with little control. A study is only as good as how well it's done, and a meta-analysis (an analysis of many studies) is only as good as the studies included in it.

Science is very complicated, and it takes a decent background in a particular field to make heads or tails of a study by oneself. I totally understand why people fall for a bacon scare headline. But many people I have met over the years seem to believe they are experts on diet and exercise. When I challenge their beliefs and invite them to sit down with me and look at the studies relevant to our debate, they always turn it down. Many people are underqualified and overconfident, which is classic Dunning-Kruger. They want to have their worldview reinforced, not challenged.

Most of the blame goes to the science communicators: the people writing the headlines and articles online and in print. These people are paid to report factually correct information and to help readers form informed opinions about various issues. The burden of crafting a headline that doesn't mislead the person who reads only headlines falls on the journalist. Unfortunately, it seems like many of them are either intentionally misleading or hopelessly incompetent.

I was on a subreddit called 'Futurology' a couple of months ago. It's a subreddit full of new and exciting headlines about the latest scientific 'breakthroughs'. One of the threads was about solar panels that generate electricity from falling raindrops. The headline sounded promising until you start to crunch the numbers on the efficiency penalty of such a panel. The number of raindrops required for it to produce as much electricity as a typical solar panel would be about 2 trillion drops per square meter, continuously. One commenter posted that future solar panels are almost guaranteed to be the type that can generate electricity from raindrops. To which I responded:
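You can run a rough version of this back-of-envelope check yourself. All the inputs below are my own assumptions (a 1 mm-radius drop at a typical terminal velocity, a 200 W/m² panel, and a wildly generous 100% conversion efficiency), not figures from the thread:

```python
# How many raindrops per second would 1 m^2 need to match a typical
# photovoltaic panel, even converting ALL of each drop's kinetic
# energy into electricity? (All inputs are rough assumptions.)
import math

drop_radius = 1e-3                                  # 1 mm radius (assumed)
drop_volume = (4 / 3) * math.pi * drop_radius**3    # m^3
drop_mass = 1000 * drop_volume                      # kg, water at 1000 kg/m^3
terminal_velocity = 6.5                             # m/s, typical for this size

energy_per_drop = 0.5 * drop_mass * terminal_velocity**2  # joules

panel_output = 200                                  # W/m^2, typical PV in sun

drops_per_second = panel_output / energy_per_drop
print(f"~{drops_per_second:,.0f} drops per second per square meter")
```

With these assumptions it comes out to millions of drops per second per square meter, continuously, which is no rainstorm on Earth. Tweak the drop size and the order of magnitude barely budges.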

Will and might are two very different things. Many things start out inefficient and end up inefficient. Science is a graveyard of dead ideas. People remember the hits and forget the misses. All that has to happen for this predicted future to never happen is for people to find a better way of getting power that doesn't involve raindrops. I'm not in the business of predicting the future for a good reason. Plenty of sensationalist headlines with little real world benefits to show for it just makes people doubt science.

Many, many things are interesting and promising lines of research. But we should always be candid about the obstacles scientists currently face and how far they are from achieving what they want. Just saying 'future solar panels will be like this, this, and this' means either the person writing the headline is pointing to something so obvious as to render the headline useless, or they are being sure of things they cannot be sure about.


Finally, I leave you with a segment from John Oliver about scientific studies. (Note: this is not a random video I threw in to make my post look snazzier; it's actually educational and entertaining.) Now if you'll excuse me, I have to go argue with more people about artificial sweeteners, GMOs, coffee, and soda.







Credits:
raygirl.deviantart.com