This is a lightly edited version of a keynote presentation I gave on 26 October 2022 at the Massachusetts Science Education Leadership Association meeting in Marlborough, MA. Thanks again to Liz Baker for inviting me, and the whole group for being such wonderful hosts.
I really like the theme of this conference, “developing intelligent consumers of science.” I’m not normally a big fan of the term “consumers,” but in this case I think it makes sense.
We live in an environment that is awash in information about science, and the pieces of it we pay attention to, or consume, define our understanding of the world.
Unfortunately, a tremendous amount of that information is bad. Looking up scientific topics on the internet could lead someone to accurate, well-explained summaries, or into a morass of misinformation and disinformation. Worse, depending on who’s searching and how they’re doing it, the bad information may be more prominent and more persuasive than the accurate information.
The internet is simultaneously the greatest reference library ever created, and a virulently toxic dump of stupidity and lies.
All of this leads to a phenomenon you’ve probably seen: someone curious about a topic researches it online, and emerges thinking they know a lot, when in fact they have been horribly misinformed.
This may seem kind of funny sometimes, but make no mistake: it is an existential threat to humanity.
If that sounds a bit over the top, think about it. We live in the most powerful nation in the world, which ostensibly operates as a representative democracy, and the citizens who determine our collective course of action are often being immersed in persuasive falsehoods about science.
I’m not talking about the rhetoric of debates over policies and values; that’s a feature of a healthy democracy, not a bug. It’s fine for people to have wildly divergent opinions about our national priorities, and to favor different strategies for addressing problems. But when that rhetoric starts altering and misrepresenting the underlying facts - the science - then we become a people who can’t even agree on the fundamental nature of reality.
To put it another way, people are entitled to their own opinions, not their own facts. But the modern science news environment often presents facts as negotiable. If the facts don’t agree with your ideology, you can search up alternative ones, and the internet will oblige with entire bodies of credible-looking content supporting them.
In a world facing a thicket of serious, interlocking problems, many of which we can’t solve or even understand without science, bad information is leading us toward disaster.
Think about the content your students find when they go online looking for information about health, or climate change, or vaccines. And remember that when given the choice between uncomfortable truths and comforting lies, human nature will drive most of us to click on the latter. The search algorithms that drive our modern media system will then gladly offer up more of the same, which leads us straight down a rabbit hole of delusion.
Multiply that by hundreds of millions of people, and we have a population ripe for the picking by carnival barkers, who will gleefully fleece them as the world burns.
We need to fix this. But how?
I think it helps to take a step back and see how we got here. In the process, we’re going to see that the current science news landscape was the product of a series of decisions, each of which seemed to make sense at the time. Very few of the people involved were trying to be evil, though there might be some exceptions.
I’d like to make it clear at the outset, though, that I’m not going to be pining for the good old days. The business of science news is radically different today from the way it was just a couple of decades ago, but both the old system and the new one had advantages and disadvantages.
Party like it’s 1996
Let’s go back 26 years, to 1996. Some of you probably weren’t around then, and those of you who were may not have paid much attention to how the news business worked, but that was the year before I became a science journalist. Most of what I’m about to talk about are things I witnessed firsthand from inside the business.
Like Gaul, the media business in 1996 was divided into three parts: there was print, there was radio, and there was television. For both legal and technical reasons, the three did not overlap. The New York Times did not run a TV studio, and NBC didn’t publish newspapers. However, all three media shared very similar market constraints that shaped the way they covered the news.
A newspaper had a tremendous amount of capital tied up in printing presses, delivery trucks, and a distribution network, all built to print huge quantities of text onto paper and deliver thousands of copies of that document to every household in the vicinity. Just about every town and city across the country had a local paper, which also worked with national papers such as the New York Times, the Chicago Tribune, and the Washington Post to distribute their blocks of text. A local paper’s distribution system typically extended to no more than a 50-mile radius from the city it served, usually less. Anything farther was too far to drive all those papers.
TV and radio stations sank their money into expensive studios, transmitters, and towers. Like the newspapers, they operated on a local basis in each area, and were also part of national networks that distributed content across the country. But each local station had a limited radius, again typically less than 50 miles or so from the transmitter. That was just a byproduct of the laws of physics and the propagation of VHF radio signals.
Those purely physical and logistical constraints exerted a powerful moderating force on the industry. If you had to keep an entire printing plant or a hundred-thousand-watt transmitter running, you needed to cater to as many people as possible in your 50-mile radius. That meant your content had to have the broadest possible appeal. You aimed for the middle. Content that was extremely specialized or way outside the mainstream didn’t gain much traction in this system.
At the same time, the structure of the local outlets and national networks served as a unifying force. Whether you lived in New York or Natchez, when you turned on the TV after dinner, you saw one of three or four channels, all covering the same national news stories. Your local news was next, and everyone in town got the same information.
The biggest disadvantage of this system was that it lacked diversity, both in the newsrooms and the content. Not only did most of the people creating the news look the same, but the stories also tended to hew to well-known topics and formats, and of course they sometimes avoided issues that might have been important to cover.
In addition, some types of news were chronic money-losers in this system. Science news and investigative journalism were especially bad at turning profits. Everyone would watch or read about the building collapse downtown, but then they wanted something lighter, maybe a few sports scores or lifestyle stories. A lengthy investigation of corruption in the government, or a story about some newly discovered species, was a lot less tempting for most people. So fewer advertisers wanted to be in those sections.
Newspapers, however, had access to a cash cow that TV and radio lacked. At the same time, most newspaper publishers thought of themselves as fulfilling a sacred obligation to inform the public - yes, really. As a result, print media handled most of the science and investigative coverage. They could afford to fund some loss leaders that they felt were worthwhile.
What was that cash cow? Most people outside the news business still don’t know this part of the story. I’ll let you all think about it for a moment while I digress to discuss a completely unrelated development.
Desperately seeking cash flow
We’re still in 1996. That year, a computer programmer in San Francisco named Craig Newmark put a brutally simple page on the newfangled World Wide Web. He called it Craigslist. It was just a site for classified ads. There was a little back-end script that let people fill in the information for their ad and post it. Very simple. In short order, the site became very popular, then expanded to other cities across the US, and then around the world.
And it gutted the newspapers’ cash cow.
The classifieds were those little tiny ads people would pay a few bucks apiece to run in the local paper. Motorcycle for sale. Roommate wanted. Free kittens. They were cheap individually, but there were thousands of them in every paper, every day, and they were one of the primary sources of revenue supporting the whole newspaper publishing industry.
The technology of the web was so much more efficient at handling that need that Craigslist could run its entire global operation with only a couple dozen employees. In the process, they inadvertently put thousands out of work and drastically reshaped the media landscape.
Around the same time, newspapers, and radio and TV stations, were putting up their own web pages and figuring out what they could do with this technology. Suddenly the broadcasters were effectively producing digital newspapers. Once some bandwidth and server issues got sorted out, the newspapers also started producing video and audio stories. The old boundaries were gone. So that seemed kind of promising.
In theory, the web should also have saved news outlets a lot of money: no more need for printing presses, transmitter towers, and so forth. In practice, it wasn’t that easy. Everyone still had to keep those older distribution systems while also going online.
Initially, the thinking was that the web site would bring in money with ads, but then advertisers realized that online ads weren’t nearly as effective as the ones they ran in the old media. Online ad prices started low, and then plummeted. In fact, per-ad prices have been on a more or less continuous decline since the Web began. It turns out that the business model of “we’ll pay for the web site with advertising” just doesn’t work.
The web quickly got way too big to keep track of manually, so search engines sprang up to help people find whatever they were looking for. Google eventually won, of course, with their business model of serving little ads alongside people’s search results.
Remember what I said about online ads being cheap? That’s only a problem if you don’t deliver enough of them. By delivering billions of ads a day, constantly tailored to the specific terms someone is searching for, Google quickly became one of the biggest companies on Earth. Along the way, they’ve refined their ad-serving algorithms relentlessly, so they could track as much information as possible about each user. Then they could serve ever more specific ads in the exact ways and times that were most likely to get that specific user to click on them.
Facebook and other social media sites then came along, and took this a step further. They selectively move user-generated content around, all while assembling detailed dossiers and maps of everyone’s social connections and interests. They then serve even more carefully tailored ads based on that information. Some commentators have called this “surveillance capitalism,” but what stock markets have mostly called it is highly profitable.
Where does science news fit into this landscape? Well, anywhere, really. The old media outlets for science information were very limited, but there’s endless space for it online. Anyone can now publish anything to everyone, at little or no cost. So there are blogs, podcasts, Facebook groups, YouTube channels, and so on, dedicated to every aspect of science. A lot of scientific journals are also making primary publications available for free now, so anyone can read the original papers behind whatever story they’re interested in.
Meanwhile, the traditional gatekeepers are gone. There’s no editor of the internet. Which is really annoying if, like me, you suffer from Proofreader’s Eye.
More problematically, those moderating and unifying influences that stemmed from the constraints on the old media system have been turned upside down. Everyone’s news feed is now customized. Your science news isn’t the same as mine, and both of our feeds are probably driven, at least to some extent, by computer algorithms that even their creators don’t fully understand.
Fragmenting and radicalizing
Old media was moderating and unifying, but it lacked diversity. New media is all about diversity, but it tends to be radicalizing and fragmenting.
When I say “radicalizing,” I don’t just mean it can lead people to embrace dangerous viewpoints, though that certainly can happen. I mean it can also lead people to become extremely interested in very narrow subjects.
Sometimes that can be an unhealthy distraction. If it’s managed properly, though, it can be a force for good. I co-host a weekly podcast about viruses, where some hard-core virus nerds sit around for two hours talking shop. As in, detailed, experiment-by-experiment analyses of primary research reports, and highly technical discussions with other virologists who come on as guests.
We don’t aim for the middle.
We do try to pause and explain some of the thornier bits of jargon, but we don’t dumb anything down. It’s the kind of content that never would’ve worked on traditional radio or television; there’s no city in the world where you’d be able to find enough dedicated virus geeks in a 50-mile radius to support such a transmission.
But online, it doesn’t matter that our relatively small audience is scattered around the world. And while a lot of our listeners are indeed scientists, a lot of them aren’t. We get emails from people in all walks of life, including truck drivers, lawyers, and of course teachers. Serious science discussion is available to everyone now, which is great.
Unfortunately, that kind of narrowcasting - in contrast to broadcasting - works just as well for serving misinformation and disinformation. Some guy railing about conspiracy theories would’ve been ignored in the old media system. Now he can have his own YouTube channel that the algorithms will serve to anyone who paused a few seconds too long on the wrong segment of the search graph.
The systems that are built to serve exactly the right advertisement to exactly the right person at the exact moment when they’re most vulnerable to that sales pitch have the unintended side effect of also serving exactly the wrong piece of propaganda to exactly the wrong person at exactly the worst moment, when they’re most likely to be duped.
Simply by trying to make web advertising profitable, tech companies have created the perfect brainwashing machine.
When this system misleads people on scientific issues, it radicalizes them against reality. Repeat that process enough times with enough people - and the internet is very, very good at scaling up - and we get a plurality of citizens believing absurdities as they enter the voting booth. This, as I said, is an existential threat to humanity.
So where do we go from here? There’s no way to go back to the old media system, and we shouldn’t want to do that anyway. Nor should we expect tech companies to deal with the problem on their own. In fact, the next generation of algorithmically-driven content services, on sites like TikTok and Facebook’s new Metaverse, are mainly focusing on strip-mining even more data from users’ behavior to serve even more thoroughly customized content and ads.
Regulators and lawmakers have started to weigh in on some of the more egregious misuses of these platforms, but the law moves a lot slower than the technology, so I wouldn’t hold out much hope for that approach.
Instead, we need to focus on teaching people how to use the new media system to get accurate information about science. There is a tremendous amount of good science reporting online, in all of the media forms: text, audio, and video. The key, I think, is to teach people to recognize it when they see it, and to be able to spot the hallmarks of misinformation and disinformation when they inevitably show up.
You, as science educators, need to be at the forefront of this effort, but you don’t need to do it alone. In many respects, the most important skill people need to evaluate science information online is the same skill they need to evaluate literature, history, and many other subjects: critical thinking.
It’s a skill that’s inseparable from science, which is fundamentally a process for learning truths about the natural world. Well-managed skepticism is a crucial component of that process. It won’t always give us the answers we want, but if we use it rigorously, it will give us the ones we need.
If you can teach students those things - teach them to be intelligent consumers of science - my science journalism colleagues and I will continue trying to produce intelligent content for them to consume. I hope we find each other online.
Thank you for your time.