Hi there! This is TITLE-ABS-KEY(“science journalism”), a newsletter about science journalism research. In the previous issue, I explored how perceived intentions of a source affect the reader’s classification of a given story as true or false. It was a paper with a really cool experimental design, and I enjoyed unpacking it.
This time, I continue my journey of reading slightly less introspective papers. Also, I was pretty much hooked by the title of this study, which you’ll see in a moment. Onwards!
Today’s paper: Batchelor, J. (2023). Just another clickbait title: A corpus-driven investigation of negative attitudes toward science on Reddit. Public Understanding of Science. DOI: 10.1177/09636625221146453.
Why: I mean, it’s a genuinely clever title for a research paper. I also want/dread to learn more about how Reddit users feel about science journalism.
Abstract: The public understanding of science has produced a large body of research about general attitudes toward science. However, most studies of science attitudes have been carried out via surveys or in experimental conditions, and few make use of the growing contexts of online science communication to investigate attitudes without researcher intervention. This study adopted corpus-based discourse analysis to investigate the negative attitudes held toward science by users of the social media website Reddit, specifically the forum r/science. A large corpus of comments made to r/science was collected and mined for keywords. Analysis of keywords identified several sources of negative attitudes, such as claims that scientists can be corruptible, poor communicators, and misleading. Research methodologies were negatively evaluated on the basis of small sample sizes. Other commenters negatively evaluated social science research, especially psychology, as being pseudoscientific, and several commenters described science journalism as untrustworthy or sensationalized.
Occasionally I come across an academic paper that somehow projects… confidence? It’s not really about the quality of research or the robustness of conclusions (wouldn’t be able to assess those myself for most of the papers I read, and that’s fine as it’s not my job). And it’s not about the references or length or even a clever title (although the latter I do appreciate).
I hate to be so handwavy, but it’s really a vibe. You’re reading the paper and you can tell that, if the author(s) were a comic book character, writing this exact text on this exact subject would be their story purpose – across the Multiverse. They love the research questions and the methods, and there’s nothing else they would rather do. (I want to stress that this is the impression these papers give – I sure hope people really feel this great about their projects, but sadly, that’s not a given.)
These cases are pretty rare. One I still remember vividly was a 2018 study of sawfly larvae and how they defensively vomit on their enemies. It started out as a deceptively simple story – until I learned that not only did the group give their larvae different diets of Scots pine needles, but “repeatedly harassing larvae to produce a chemical defence” meant gently poking them with a tiny glass tube until they threw up (and the scientists could collect the vomit with that tube).
Even recalling all of this for the newsletter brings a big smile to my face. I read that article and wondered if the lead author started their career in academia hoping that, one day, they would get paid to repeatedly harass sawfly larvae FOR SCIENCE. In our messy and largely soul-crushing world, it’s a delightful thought.
This was my very long way of saying that the r/science paper I’ve picked for this issue also projects this calm and joyful confidence (perhaps not at larvae-poking levels). When I was done reading, I was genuinely happy the author got to do what he clearly loved and scored some professional points.
What was that thing he loved? Well, it’s a corpus-based analysis of negative attitudes towards science on Reddit’s main science hub. So, instead of asking people what they thought of and how they felt about science, the author looked at what people felt comfortable saying about science in public, but with a reasonable degree of anonymity.
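For the curious, “mining for keywords” in corpus linguistics usually means comparing word frequencies in the target corpus against a reference corpus and keeping the statistical standouts. Here’s a minimal sketch using Dunning’s log-likelihood keyness, a standard choice for this kind of analysis (the toy corpora are my own invention, standing in for the r/science data; the paper’s actual pipeline isn’t reproduced here):

```python
# Toy sketch of corpus keyness scoring (Dunning log-likelihood).
# The corpora below are made-up stand-ins, not the study's data.
import math
from collections import Counter

def keyness(target_tokens, reference_tokens):
    """Score each word by how distinctive it is of the target corpus."""
    t, r = Counter(target_tokens), Counter(reference_tokens)
    n1, n2 = len(target_tokens), len(reference_tokens)
    scores = {}
    for word in set(t) | set(r):
        a, b = t[word], r[word]          # observed counts in each corpus
        e1 = n1 * (a + b) / (n1 + n2)    # expected count in target
        e2 = n2 * (a + b) / (n1 + n2)    # expected count in reference
        ll = 0.0
        if a:
            ll += a * math.log(a / e1)
        if b:
            ll += b * math.log(b / e2)
        scores[word] = 2 * ll
    return sorted(scores.items(), key=lambda kv: -kv[1])

target = "science is broken science is corrupt hype hype hype".split()
reference = "science is great science is useful discovery".split()
top = keyness(target, reference)  # "hype" scores highest: frequent in
                                  # the target, absent from the reference
```

Words that score high are overrepresented in the target corpus relative to the baseline, which is roughly how a word like ‘clickbait’ would surface as a keyword worth a closer qualitative look.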
It’s a neat workaround for social desirability bias in surveys, where people might feel bad for choosing the clearly “wrong” answer (do you hate science? I bet you club baby seals too.) The paper points to this problem: “findings often showed consistently positive responses, a trend which has continued in more recent surveys”, even though it’s abundantly clear it’s not all sunshine and rainbows between science and the rest of society. And these surveys can’t really help you shine a light into the darker corners.
Another problem with surveys is that most of us do not walk around with fully formed opinions about The Science that we’ve properly reflected on and can consistently present in a survey, in part because, on a layperson level, there’s no such thing as The Science to think of and feel about. There’s medicine (in your cabinet), and engineering (in your house), and electronics (in your iPhone), and horticulture (in your garden), and maybe astronomy (if you are blessed with a dark sky). You get the idea. So someone can both conceivably tell you they looooove science in a survey and not vaccinate their children – because to them, vaccines are not science, naturally. And is that survey response really useful to you?
These problems can and should be addressed in survey design, but I don’t think you can get rid of them entirely. Choosing instead to derive sentiment from “wild” stated opinions, i.e. not ones produced in an experimental setting, has the added benefit of scale: in this study, the author amassed 177,296 comments across three years basically without leaving his desk. A tough ask for a survey!
Of course, the flip side of this is also clear. Here’s my fairly messy take on how humans work: in general, we love to talk and we enjoy having opinions, including public opinions (cue this very newsletter.) And yet we still need some momentum to share those opinions with other humans, because the stakes for us as social beings are quite high. Think back to that joke you really liked in your head that did not land well at all, for reasons you still can’t comprehend because I mean it was so funny –
…Right. What I’m saying is, of all attitudes towards science, Reddit comments are a selection of the more powerful ones, and we know, from experience and from research, that negative bias is very real. So focusing on unsolicited opinions is not a flawless approach at all.
The author dodges this bullet, in a way, by reiterating the qualitative nature of the study: “there was no attempt to quantify the frequency of any attitude/evaluation”. In other words, we refrain from any judgment on how representative these comments are of wider sentiment.
Okay I guess, but I’m still worried about the part of the design where the paper goes from “professional” science to “popular” science, i.e. where the author specifically looks at attitudes towards science journalism. My admittedly 100% gut feeling tells me that the following difference between keywords –
When reviewing keywords like ‘journalism,’ ‘news,’ and ‘layman,’ it became clear that this discourse involved negative evaluation in a way that other similar keywords, like ‘journal(s),’ ‘university,’ and ‘NEJM,’ did not.
– may be less about how people feel about researchers and journalists and more about how strong those feelings are. The imaginary bar for negativity towards journalists is lower than the one for scientists, I’d say, so more comments attributing flaws to messengers rather than messages make it through the filter. It’s kind of like ‘you suck at math/girls suck at math’ but for scientists and journalists.
That was a complicated thought, and I’m not sure I’ve baked it thoroughly enough. In any case, I still found this analysis of negative attitudes quite useful. The way the author does it: after identifying evaluative language and extracting attitudes from it, he presents each attitude with one or two examples – that is, verbatim Reddit user comments that illustrate it.
Here’s my top five, with my comments:
Science used to just be science, but these days science is politics and can’t be trusted [. . .] now you have to question every scientific finding to see what political bias was used in forming the question [. . .]
Oh you sweet summer child. I can almost hear Those Were the Days in the background.
Bad science journalism has been around as long as science has been. It’s not some new “activist streak,” and you can still trust scientists (be careful with science journalists). [prof sci]
I just enjoy how these two comments go neatly together I guess.
I’m honestly slightly pissed that these kind of press releases/articles never indicate how much time and effort went into the development of these treatments. The article makes it sound so easy and fast, as if they just came up with this treatment on the fly, then quickly grew a few cells, put them on a heart and done. [. . .] [pop sci]
Now this is here because I read it and suddenly thought back to thousands of news stories I had written where I casually said things like (I’m paraphrasing) “the most common bananas are at risk of dying out, so researchers created a variety resistant to fusarium wilt” – as if that wasn’t a culmination of decades of study and, quite likely, an absolute pain in the ass to do. Hm.
It’s weird how Reddit only trusts the social sciences when it affirms their bias. Any other time you’d have hordes of people talking about sample size, cultural differences, researcher bias, etc. [pop sci]
*whispering* it’s not just Reddit.
How long until we see some breathless headlines about a study on sensationalist journalism eroding public trust in science. [pop sci]
I will make it my life’s mission to find that study and write about it in this newsletter, with an appropriately breathless headline!
In his conclusions, the author also presents three higher-level themes he picked up on in studying user attitudes:
negative attitudes toward popular science and sensationalization (more research is needed on the views science consumers hold toward popular science and mediators such as journalists, and I’m totally here for that research);
the challenges of communicating social science research (social sciences are basically the red car of scicomm scrutiny – dunked on much more often, and everyone sort of loves it? – except this paper cites evidence that journalists are tougher on it than on natural sciences);
the criticism of research methodology and sample sizes (I was moderately surprised it was indiscriminate worship of sample size and not of p-value, which was more common when I came of age).
Even though I don’t think I’ve gained a lot of unexpected insights about negative attitudes towards science from this paper, this was still fun. That’s the vibe for you. I wonder whether scientists can get the same feeling from papers and, if yes, whether those would be the same papers. Could I craft a breathless headline for that story?
That’s it! If you enjoyed this issue, let me know. If you also have opinions on science journalism research or would like to suggest a paper for me to read in one of the next issues, you can leave a comment or just respond to the email.
Cheers! 👩‍🔬