On Thursday 3rd March, the Media Standards Trust, BBC College of Journalism and science training for journalists programme at the Royal Statistical Society hosted an afternoon of workshops on data and news sourcing. One of these was on ‘expert sources in science and health’.
The internet has the potential to transform science journalism, bringing with it opportunities to link to and reference the plethora of sources – journals, journalists, bloggers and scientists – now online. But have these opportunities been taken? What changes has the net brought to science journalism and sourcing? And how have our perceptions of ‘expertise’ changed?
Our panel of experts – a broadsheet science editor, the director of the Science Media Centre, a science blogger, and a scientist (and writer) – were well-placed to answer. (Fully sourced and linked where possible, of course.)
Mark Henderson, science editor of The Times, asked who counts as an expert, and how we identify them. ‘Expert’ is often too general a term to be of any use to the reader – ‘health expert’, for example, gives no clue about a person’s actual expertise, when you should aim to give as many clues as possible (their research background, their organisation, their personal homepage and so on). Henderson distinguished three categories of experts:
- Scientists with genuine, and recent, experience within the field – the most important group. They may have been involved in the research under discussion.
- People who run things – chief scientists, including those of government departments, people running research trusts, or people controlling the funding.
- Scientists who are representative in some way, e.g. those speaking for the Royal Society or British Medical Association.
Henderson was less keen on ‘talking head’ scientists, cultural commentators talking about areas outside of their expertise. Ten years ago these were the main actors in science stories, but no longer. There was also the danger of ‘equality of expertise’, a ‘phoney balance’ where an organisation (such as the Family Research Council in the US) would be rolled out against a Nobel Laureate. Science is about balance of evidence, not balance of viewpoints.
Asked by chair Kevin Marsh whether newsroom pressures could hinder his role, Henderson replied that he was fortunate with his editors at The Times. He said the role of a specialist correspondent is that of ‘first bullshit filter’: the job is as much about what you keep out as what you keep in. When the Raelians claimed to have cloned a human (with no evidence), it was clear the story would have to be covered – The Times put it in the front-page basement slot, reserved for light-hearted stories. Ben Goldacre said he hadn’t realised that The Times did that – how would readers know? Henderson agreed that was an important point, especially online, and that it was important to write caveats towards the start of the story. He said Fiona MacRae at the Daily Mail was particularly good at this.
Fiona Fox, director of the Science Media Centre (SMC), described the role of the SMC, which was to make the best experts available to the media. When setting up the SMC, she had spoken to someone at the BBC’s General News Service who said that if a story about GM food broke, they would interview Friends of the Earth or Greenpeace because they would drop everything to make themselves available. Looking at the coverage of GM in 1999, the SMC found that the media did not go to the scientists developing the technology or the best plant scientists, but rather MPs, Monsanto and some ‘talking head’ scientists. The SMC did not see science in terms of ‘pro’ or ‘anti’ viewpoints; its criteria were around science and quality. What the SMC did wasn’t churnalism – it didn’t have a corporate or institutional message to push. In response to Kevin Marsh, who asked if the SMC was simply playing the media game rather than trying to break out of it, Fox said the SMC wasn’t set up to improve journalism, but had been set up by the scientific community because scientists hadn’t engaged in the MMR and GM debates.
Ben Goldacre, doctor, broadcaster and author of the Bad Science column and book, said ‘you’re dead to me’ if you don’t reference primary sources. Doing so keeps things honest and provides information. Recent reports on ‘Asian paedophile gangs’ had mentioned a report by the Jill Dando Institute – none of the articles linked to the report, which it transpired was unfinished (and the Institute thought they had been misrepresented). Goldacre had corresponded with the BBC about linking to primary sources – they had raised objections, which he countered:
- Many academic papers are subscription-only: this was an important issue. Linking would still keep you honest by anchoring the article to the research; the abstract was always free and still had useful information; and most publicly-funded work would have a free full-text version available.
- Journal URLs can change: this had been fixed through the Digital Object Identifier system (dx.doi.org).
- The academic article may not have been published when the story was filed: putting the DOI on the press release could get around this, or you could remember to add a link when it became available.
- Academic journals should be clearer – it was wrong that commentariat discussion of important papers could happen before a paper was published and available.
- The primary source for articles was often the press release, not the journal article: many journalists didn’t want to be honest about this. A piece in the New England Journal of Medicine had been covered completely differently in the US and the UK, because where American journalists had based their (more balanced) pieces on the original article, UK headlines such as ‘Prostate cancer screening could cut deaths by 20%’ had been based on a cancer charity press release.
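The DOI system mentioned above works because each paper carries a permanent identifier that a central resolver redirects to the publisher’s current page, so a link built from the DOI survives journal site reorganisations. As a minimal sketch (the function name and the example DOI are illustrative, not from the panel):

```python
def doi_link(doi: str) -> str:
    """Build a stable resolver URL for a paper from its DOI.

    The dx.doi.org resolver redirects the reader to wherever the
    publisher currently hosts the article, which is why a DOI-based
    link keeps working even when journal URLs change.
    """
    return f"https://dx.doi.org/{doi.strip()}"

# An illustrative DOI, as it might appear on a press release:
print(doi_link("10.1000/example123"))
# → https://dx.doi.org/10.1000/example123
```

This is all a news site’s CMS would need to turn the DOI printed on a press release into a durable link to the primary source.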
If journalists were straighter with the public, we would all be better off. Health and science sourcing was just a small corner of the wider issue of referencing, transparency and news architecture. In October 2010, the BBC started linking where possible.
Mark Henderson agreed that the biggest barrier to transparency was links not being readily available – of the big journals, only Nature was clear on DOI and most press releases didn’t have one. Another problem was that the journalist often wasn’t the person putting the links into the article.
Goldacre said people changed their behaviour because of the articles they read. When he was a junior doctor, patients would bring in press cuttings and he would try, without success, to find the original paper. He wanted to talk to the patient about the research, rather than treating them by the edict of the Daily Mail – people might die, and he might get struck off.
Ed Yong, science writer and blogger at Not Exactly Rocket Science, made the point that links to the original articles might not be for everyone. But the ideological commitment to transparency, and allowing ‘watchdog’ writers and scientists to find the article, were good things. Linking widely, rather than just to original articles, was also good – a Northwestern University report had found that science bloggers linked to more diverse sources than either political bloggers or traditional science writers. Yong made two main points:
- The blogger could be a viable source in themselves: if the writer was an expert, that was an expert opinion. Bloggers could also crowdsource sources – Twitter was like a Rolodex of thousands of people you could ping at the same time, each with their own Rolodexes.
- The assumption that the story ends with publication is false – stories continue. Yong cited the example of a story he had written on a gynandromorphic chicken, which led to a chicken farmer getting in touch and collaborating with the biologist who had written the original report. This modern form of post-publication peer review could also be seen in The Guardian’s story tracker on Nasa’s ‘arsenic bacteria’.
Questions from the audience then raised two main issues: the sort of audience interested in science reporting; and whether journalists were now involved in curation at the expense of investigation.
On the first issue, Fiona Fox was worried that the excellent science journalism in the blogosphere wasn’t reaching a mass audience. Mark Henderson thought it was a strength of ‘traditional’ media that people could come across a science story they might not have been looking for, but it was a problem that those people reading science blogs or story-trackers would be those already interested in science. Ben Goldacre reiterated this point: the key divide wasn’t between journalism and blogosphere, but between those interested in science and those not.
On the second issue, Ed Yong and James Randerson (science news editor at The Guardian) thought the role of the science journalist was moving towards curation, especially since there was some great science journalism in the blogosphere (Yong mentioned Wired’s Superbug blog). Connie St Louis (director of the Science Journalism MA at City University) was worried about focusing on curation rather than investigative science journalism – scientists could appear as a priesthood who weren’t challenged enough about their mistakes. Ben Goldacre agreed there was a real problem around the scientist as a priest, and Fiona Fox was concerned at the lack of original, investigative science journalism – BBC Radio 4’s Analysis programme, for instance, had made few shows on science. Mark Henderson thought it was false to say that investigative journalism was the most valuable form of science journalism – explanatory journalism (as in The Times’ Eureka supplement) was also important. And the people best placed to investigate the findings of scientists were often other scientists. Ed Yong distinguished between aggregation (simply collecting things in one place) and curation, which was editing and presenting information as journalism had always done.
Ultimately, Ben Goldacre concluded, good science and good science journalism were about the same thing: showing your working.