by Ryan Sain
I used to work on an 8,000-acre dryland wheat farm. Harvest time was full of separating the wheat from the chaff – riding around in a dusty, hot, and noisy Gleaner. Interpreting science is much the same. The problem is that there isn’t a machine to do it – and you can’t just pick up the internet and shake it in the wind to clear out the nasties. This is where all those years of training kick in – from validity (did they measure what they intended to?) to reliability (would they get the same result if they ran the study again?) to sample issues (do the people studied accurately represent the group they’re supposed to?). The good news is that as a good consumer of science, you don’t need to spend 12 years in the classroom – you just gotta use some smarts. Good ol’ everyday smarts – like fixing-a-broken-shovel-on-an-80’-cultivator-in-the-middle-of-a-field smarts.
The world is full of great science. It’s also full of complete junk science (i.e., pseudoscience). Telling the difference between the extremes isn’t super hard – but sorting out everything in the middle is. Our current culture tends to make the problem even worse. For example, go on Facebook for 30 seconds and you’re bound to see some extreme claims (e.g., from chemtrails to curing all your woes by following the newest diet fad). These ideas spread quickly and get far more attention than the mundane ones (e.g., eating a balanced diet has been shown yet again to help you maintain a healthy life). It’s no wonder people walk around in tinfoil hats dodging blivits. How do you start up the combine and separate the junk from the keepers?
The first thing you need to do is put on your skeptic’s hat. If it sounds outrageous, it probably is. Don’t hit share, please. Second, read a bit and ask, “what did they measure?” If a study is talking about diet success but they measured your OPINION of how much you liked the diet, I would be skeptical of the results. Why? Because they should have measured how well it worked! Then ask: did they make a comparison between two or more groups? So what if everyone in one group was highly successful on a certain diet – did they compare two groups of people using two different diets? If not, it’s hard to know whether the findings are actually because of the diet or something else (i.e., it’s only a correlation). Third, who did they study? Did they study 49 freshmen at Montana State, or 4,300 randomly selected people from across 18 counties in Washington? The latter is surely a better sample of people in general. Fourth, do your own work! Go look some of this stuff up on your favorite search engine (e.g., scholar.google.com) – you’ll quickly get an idea of whether it comes from universities or from paid-for sites run by some business promoting its own snake oil.
One last point before you go off consuming all this science – the word correlation. “CO-RELATION” – two things relating. It doesn’t mean causation. At all. Not even a bit. Just because two things measured at the same time seem to be related to each other doesn’t mean one is causing the other. My favorite example is ice cream sales and crime rate. When one goes up, so does the other (no joke) – but is one causing the other? No way. They are only correlated (most likely both rise with warm weather). The key, then, is to read very carefully – because a HUGE amount of science is only correlational (which is a great start along the long, arduous path toward what we may someday be able to call “cause”). Wear socks much? Well, so do serial killers. Unless someone runs an EXPERIMENT comparing one condition to another (say, comparing the effects of two diets while holding all else equal), we can’t even begin to think about cause. Don’t take the bait. Use your noodle and be critical. You’re smart.
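If you like to tinker, here’s a quick Python sketch of the ice cream and crime example. All the numbers are made up (hypothetical monthly temperatures, with both series just following temperature plus a little unrelated noise) – the point is only to show how two things driven by a common third factor can look tightly related even though neither causes the other:

```python
# Toy illustration (made-up numbers): monthly temperature drives both
# ice cream sales and crime counts, so the two correlate strongly
# even though neither one causes the other.
import statistics

temps = [30, 35, 45, 55, 65, 75, 85, 90, 80, 65, 50, 35]  # hypothetical monthly temps (F)
# Each series rises with temperature, plus a bit of unrelated noise.
ice_cream = [t * 2.0 + n for t, n in zip(temps, [5, -3, 4, 1, -2, 6, 0, 3, -4, 2, 1, -1])]
crime = [t * 1.5 + n for t, n in zip(temps, [2, 4, -3, 5, 0, -2, 6, 1, 3, -4, 2, 0])]

def pearson(xs, ys):
    """Pearson correlation: covariance scaled by the two standard deviations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream, crime)
print(f"correlation between ice cream sales and crime: {r:.2f}")  # close to 1
```

The correlation comes out near 1 – not because ice cream makes people commit crimes, but because warm weather pushes both numbers up. That hidden third factor is exactly what an experiment is designed to rule out.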