Hi there! This is TITLE-ABS-KEY(“science journalism”), a newsletter about science journalism research. Basically, if my main newsletter in Russian were the respectable and polite Steven Grant, with a ‘v’ and all, then this section would be Marc Spector. (Not saying there will eventually be a section in Spanish, though.)
As a science journalism instructor, I need to keep up with the research out there to know which of my hard-earned observations about the craft may actually be evidence-based. The best way to gamify reading a bunch of papers and making notes, I have found, is to turn it into a regular activity with some visible output. So, a newsletter section!
When I was in the science news business, I loved covering all those meerkat studies detailing how complex the lives of these social animals are. (I’m so delighted, by the way, that meerkats are listed as Least Concern by IUCN — as are capybaras! For once, the animals I love aren’t in grave danger.)
That’s why when I thought of this idea for the English-language section, my mind immediately went to meerkats providing snarky commentary on all that fancy research. That’s how you can picture me when you’re reading these emails. Let’s go!
Today’s paper: Slater MH, Scholfield ER and Moore JC (2021) Reporting on Science as an Ongoing Process (or Not). Front. Commun. 5:535474. doi: 10.3389/fcomm.2020.535474
Why this paper: frankly, it’s early September, which means Science Journalism 101 for me, so this looked about right.
Abstract: Efforts to cultivate scientific literacy in the public are often aimed at enabling people to make more informed decisions — both in their own lives (e.g., personal health, sustainable practices, &c.) and in the public sphere. Implicit in such efforts is the cultivation of some measure of trust of science. To what extent does science reporting in mainstream newspapers contribute to these goals? Is what is reported likely to improve the public's understanding of science as a process for generating reliable knowledge? What are its likely effects on public trust of science? In this paper, we describe a content analysis of 163 instances of science reporting in three prominent newspapers from three years in the last decade. The dominant focus, we found, was on particular outcomes of cutting-edge science; it was comparatively rare for articles to attend to the methodology or the social–institutional processes by which particular results come about. At best, we argue that this represents a missed opportunity.
In this first email, we’ll set the rules for my comments. All quotes from the paper will look like this. All other emphasis will be mine, and I’ll do my best to think of useful links and resources and not just be pointlessly sarcastic as we go along. Onwards!
It is widely acknowledged that many Americans do not consistently see science as an authoritative source of information about matters of social importance (Funk and Goo, 2015).
Well, folks, as a science reporter I used to get metaphorically smacked across the face with a newspaper every time I wrote that something is widely acknowledged. Kidding, of course: my mentors were the kindest, most generous people in the field, but I always felt that getting smacked with an actual newspaper would have been less embarrassing in this case.
So, whenever I see that something is widely acknowledged, I just can’t resist going to check the citation. And, well, the 2015 Pew Research Center report on what the US public does and doesn’t know about science seems to contain no mention whatsoever of Americans’ views on science as a source of information about matters of social importance. (It does contain a really curious discussion of whether you should design your science literacy quizzes with a ‘don’t know’ option, to discourage guessing and thus acting on incomplete information.)
Isn’t this just the best start for this newsletter?
Anyway, let’s keep reading. For instance, despite the robust scientific consensus on the dangers of anthropogenic climate change or the safety of childhood vaccines or GMOs, significant public dissent remains (stifling action and leading to health crises in the former two cases).
Oof, so, okay, significant public dissent on climate change, GMOs, and childhood vaccination is an instance of people not seeing science as an authoritative source of information on these matters. The intro goes on to mention low scientific literacy, scientists being bad at comms, and ultimately science journalists as an understudied factor here – what it doesn’t mention, however, is competing sources of (dis)information with vested interests. Because of course all those other sources don’t matter at all: science, simply by its nature, is the source here, and if only people were made sufficiently aware of that, they would agree and instantly stop dissenting. Because the marketplace of ideas is a meritocracy!
Okay, now that I’ve stopped giggling, let’s go on. Journalists, including science journalists, recognize their importance in informing the public, and, at first glance, the different kinds of science journalism – news, historical or thematic overviews, people-centered reporting, etc. – are “all to the good” in that they’d presumably help develop the public’s understanding of science and thus (again, presumably) bolster appropriate levels of public trust of science.
Uh, two quick points. First, you’d better be deliberately setting up that ‘presumably’ to knock it down later in the paper, because the road from understanding to trust is nowhere near as straightforward as that sentence suggests. Second, I saw what you did there using the word ‘science,’ which can mean both the organized knowledge about the world that scientists have mined (or maybe a less extractive verb here), processed and presented – and science as an enterprise, i.e. the people and institutions and the links between them. And let me tell you, as a profession, we went through a whole conversation amongst ourselves about whether we’re cheerleading for science and decided, largely, that we’re fine with occasionally doing it for the former (the knowledge) but not for the latter (the enterprise).
[Narrator: yes, they knocked it down a few sentences later, but, astonishingly, the question of whether science journalists should affirmatively aim at the cultivation of trust of science when doing so might be seen as (or amount to) turning a blind eye to problems within science such as misconduct, questionable research practices, replication failures, and the like was… set aside.]
This is where we get to the purpose of this article, which is offering some general reasons for thinking that some [contributions of science journalism] can be expected not only to fail to improve important aspects of scientific literacy but that they may, in fact, damage public trust of science.
I have to admit we’re now within smelling distance of ‘journalism is actually bad for the public,’ a sentiment that is somewhat hard to interrogate neutrally for someone who could not use her byline in recent stories until she left Russia. I will also try to avoid No-True-Scotsman-ing here (“but that’s not akshually journalism at all!”) and instead genuinely listen: so, which contributions of science journalism may damage public trust of science?
To find out, the authors analyzed a random sample of 163 articles about science from the New York Times, the Washington Post, and USA Today. They looked at whether and to what extent these articles provided insight into the scientific process or enterprise (the social structure of science, its nature as an ongoing process), and, for those that did provide such insight, which genres they belonged to and whether the emphasis was positive (e.g. self-correction) or negative (e.g. conflict). The sample included op-eds and obituaries (because scientists’ obits are a rich source of context around their work and careers; I approve of this choice).
The results? Science stories in the sample generally reported only the outcome, with no attention to method or process (outcome, method, and process being the additive codes for story focus in the analysis). Process articles overwhelmingly emphasized conflict as opposed to self-correction. Of all disciplines, medical/health stories did best on both method and process. This may be due to the commonality of mention of clinical trials in articles on medicine and the relative ease of describing methods involved in medical research compared to those used in other sciences,
the authors write. Pssst, it may also be due to the concerted efforts and hard work of my health journalist colleagues, who tend to take that Spider-Man quote about power and responsibility very seriously. (If any rules of journalism are in fact written not just in sweat and tears but also in blood, health journalism is definitely up there.)
Now, to the conclusions. We start with the discretization of science, i.e. how the news grinder chops science up into ‘latest breakthrough’ bits and works against the ‘long view’ of science, which is better suited to convey the significance of science for the broader public. True, and a great many journalism and scicomm students have heard me drone on about how this is an ethical issue for a journalist/PIO. They say that if you only have a hammer, everything can feel like a nail; but once you’ve realized you aren’t actually surrounded by nails, don’t you have a responsibility to stop hitting stuff?
This discretization is indeed a risk, especially because scientists and the public treat ‘latest findings’ wildly differently, which should maybe warrant a special label, kind of like those native-ad markings that we generally agree are mandatory: “Caution, this story is based on a study that virtually no one has been able to read properly yet, let alone replicate.” Again, health journalists are all over this; a few of my former students who are pursuing careers in health journalism have been telling their own students to stop writing ‘science news about medicine’ based on single studies.
More broadly, confusion and mistrust do spill over, and yes, that totally happens where it really shouldn’t (making a mental note to see whether climate science deniers are on the record anywhere talking about the reproducibility crisis in psychology). And yes, thank you to the authors for entertaining the notion that more nuanced coverage of the scientific process would not necessarily result in greater trust of science, as per the sausage principle.
Now, to be clear, our claim is not that science does, as a whole, deserve our uncritical trust (let alone uncritical deference).
Also thanks, I guess?... And further research is needed to see whether the lack of constructive coverage of scientific conflicts (i.e. not always framing them as self-correction) is in fact a ‘straightforward representation’ of messy science. Eh, but I’ll take it.
To what extent is the outcome-only focus of science journalism inevitable?
Now we’re talking! After listing all sorts of trouble with journalism these days, the authors see reason for optimism in the fact that journalists have largely been doing better with regard to objectivity, fairness, and (false) balance. But then we end with consuming discrete science news being likened to eating ice cream for dinner, and with science journalists having a significant role to play in a balanced epistemic diet.
I have to say I am surprised that the “publish or perish”, positive-results-biased publishing culture in vast segments of academia did not feature in this discussion of discretization and ‘breakthrough science journalism’ in any way. I am prepared to explore the idea that this culture may actually be reinforced by journalism to some extent, but there’s no way it’s shaped entirely by science reporters. After all, scientists aren’t salami-publishing (and their PIOs aren’t spamming me all at once) because science journalists need more news stories.
That’s it! If you enjoyed this issue, let me know. If you also have opinions or would like to suggest a paper for me to read in one of the next issues, you can leave a comment or just respond to the email.
Cheers! 👩‍🔬