Monday, September 12, 2005

Bad Science Reporting


I've railed here before on the piss-poor job most journalists do when it comes to reporting anything scientific or mathematical. Very few journalists have any science or math background, and many seem proud of that. The net effect is that their readers are badly served by scientifically illiterate reporting.

Ben Goldacre, writing in the UK's Guardian, does a much better job than I have of demonstrating and explaining the problem in this column. Here's a taste:
And last, in our brief taxonomy, is the media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data. Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months and several bits of evidence, not single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).

Last month there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.

Scientists never said that tenuous small new findings were important headline news - journalists did.

But enough on what they choose to cover. What's wrong with the coverage itself? The problems here all stem from one central theme: there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.

Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.

Goldacre also looks at the press's use of "speaking from authority" instead of speaking from the facts, which sets up the "competing authority" story you so often see. Never mind that the facts are clear; the "authorities" differ, and that's the story.

It's not a long article and it's very good. Well worth your time.
