Fighting Misinformation With a Combination of Strategies

Monday, September 18, 2023   

By M. Mitchell Waldrop for the Proceedings of the National Academy of Sciences.
Broadcast version by Eric Tegethoff for Washington News Service reporting for the Pulitzer Center-Public News Service Collaboration.


The problem has gone by many names since the 2016 US presidential election first brought it to widespread attention: from "fake news" to "infodemic" during the COVID-19 pandemic to "misinformation" in a slew of academic studies. Regardless, this toxic mix of confusion, misdirection, manipulation, and outright lies now threatens both public health and our collective faith in democratic elections, not to mention any common understanding of truth.

When it comes to solutions, though, "that's a very hard problem," says Jevin West, a biologist-turned-information scientist at the University of Washington in Seattle and co-creator of the widely used data literacy course "Calling Bullshit" (1). It's especially hard when you consider that much of the fight against misinformation has to take place on social media platforms, such as Facebook, YouTube, and X (formerly known as Twitter), which are in private hands.

Typically, these companies attack the problem by fact-checking the most widely shared posts. But it's extremely difficult to police billions of posts per day when they're up against bad actors who've become skilled at high-tech fakery of all sorts. The toolkit ranges from software "bots" that can generate millions of fake user profiles and phony news posts to the latest artificial intelligence (AI)-powered chatbots that can fool people into thinking they're talking to another human.

And, of course, the platforms have to do their policing in today's culture-war environment, where any attempt to fact-check or moderate the discussion is instantly condemned as a plot to impose some ideological agenda.

The good news is that there has been an enormous outpouring of research into the misinformation problem in recent years. "It's a very fast-moving space," says New York University psychologist Jay Van Bavel. "Every week or two, there's another big paper." And, as researchers understand more about why people believe and share misinformation, they've begun to come up with next-generation solutions that the platforms could deploy at scale, and that don't tell people what to think (3, 4). Examples include crowdsourced accuracy ratings, nudges that encourage users to think more about accuracy before they share something, and even a kind of mental inoculation against common deception techniques.

Thus far, no one's claiming to have a complete solution. But researchers have begun to experiment with combinations of these techniques, allowing the interventions to reinforce each other and become more effective than any one of them alone. And in that work, we can begin to glimpse the outlines of a scalable, evidence-based "public health" system that could keep the misinformation plague from running rampant.

Your Brain on Social Media

At first blush, one might wonder, "Why bother policing?" Why not just let everyone see everything so they can make up their own minds?

A common retort: No one has the time, expertise, or stomach to sort through an endless flow of toxicity, harassment, and nonsense. If a platform doesn't make some effort to moderate the discussion (as virtually all the reputable ones still do), reasonable users will leave.

But another piece of the answer is that today's social media environment is rarely a neutral forum for debate. Often, the platforms actively reward the spread of misinformation, so strongly that some users routinely share posts that actually conflict with their political leanings.

In one recent laboratory experiment, for example, self-described habitual users of social media were found to share as much as eight times more misinformation than the self-described casual users (5). According to team leader Gizem Ceylan, a postdoc at the Yale School of Management, the experiment (which included four studies in all) then took a closer look at these habitual users and found that all the "likes," "reshares," and comments their posts were getting seemed to give them a rush of emotional validation akin to a high-sugar snack. And their craving for these psychic rewards had long since taught them to share the kinds of posts that got the strongest response, which were probably not carefully fact-checked articles from The New York Times. Instead, they had learned to post items that are surprising, emotionally charged, eye-catching, or zingers against people they and their friends despise: "content that exactly traces the characteristics of false information," Ceylan says, and that other researchers have shown will spread further and faster than truth (6).

Social media platforms do attempt to thwart at least some of the grifters, conspiracy theorists, and demagogues who supply those oh-so-clickable fakes. According to Indiana University computer scientist Filippo Menczer, who has spent the past decade developing tools to detect and map the activity of bots and other online deception technologies, big platforms such as Facebook, YouTube, and (pre-Elon Musk) Twitter have routinely disrupted hundreds of the secretive but highly coordinated "information operations" that are often used by politicians, advocacy groups, and foreign actors to sow discord and misinformation.

Unfortunately, Menczer adds, the platforms haven't been nearly as tough with "superspreaders," who aren't even trying to hide. "A lot of harm and a lot of misinformation is spread not by unknown bots, but by well-known, even verified users," he explains. Often, these are influencers, elites, politicians, or celebrities who are making money, pushing a political agenda, crusading for a cause, or opposing vaccines. "And platforms are very reluctant to take them down because these people have lots of followers, which means they generate a lot of engagement and the platforms make a lot of ad money from them," Menczer says.

Instead, most of the big social media platforms rely heavily on fact-checking, which has become a global industry complete with professional standards for nonpartisanship, as well as for transparency about sources, methodology, and funding. Facebook, for example, will send its most popular and widely shared posts to be evaluated by one of its 90 international fact-checking partners. Posts that these firms judge to be false or misleading are then labeled as such and downgraded in Facebook's news feed, so that fewer users see them.

But fact-checking has its limits and must be done carefully, Menczer notes. If you don't stress what's true instead of repeating the false claim, he says, "then people will remember the claim instead of that it was false," and efforts will backfire. Still, independent researchers have confirmed that carefully crafted debunking labels have a real and lasting effect (7). "Just putting that 'not true' label on something is a profound thing," says David Rand, a cognitive scientist at MIT in Cambridge, MA. "It really reduces belief and sharing of that post." In one recent fact-checking experiment carried out with users in Argentina, Nigeria, South Africa, and the United Kingdom, this belief reduction amounted to at least 0.59 points on a 5-point scale, even after 2 weeks or more (8).

Even so, there aren't enough professional fact-checkers in the world to vet billions of posts per day. And even if there were, notes Gordon Pennycook, a psychologist at the University of Regina in Saskatchewan, Canada, "you're always putting a Band-Aid on things: You can only debunk something once it's already spread and out there."

Cognitive Inoculation

To limit that spread, some researchers are trying to help users recognize misinformation on their own. At a minimum, says Sander van der Linden, a psychologist at the University of Cambridge in the United Kingdom, "we need to empower people to spot manipulation as it happens, so that they can make genuinely informed decisions" about what to share, what to buy, or even what to believe.

Although digital literacy courses like the University of Washington's "Calling Bullshit" do this well, van der Linden says, he and his colleagues have been working on faster, more broadly applicable techniques inspired by decades of research into a psychological form of inoculation.

Also known as preemptive debunking, or "prebunking," this cognitive inoculation works like a real vaccine, he says: "You expose people to a weakened dose of a falsehood, or the techniques that are used to produce falsehoods, so that people can build up mental antibodies to neutralize misinformation in the future" (9). And because it's impossible to prebunk every lie coming out about, say, climate change or immigration, and since the (reputable) social media platforms are loath to act as arbiters of truth, the best strategy, he argues, is to go for a broad-spectrum vaccine: prebunking that focuses on manipulation techniques. We've known about the bad guys' favorite tricks since the time of Aristotle, van der Linden says. Fake experts, false dilemmas, ad hominem attacks, phony tugs at the heartstrings, fearmongering against convenient scapegoats: "these are things that both the left and right can agree are misleading."

In 2019, van der Linden and Cambridge psychologist Jon Roozenbeek put prebunking to the test with Bad News, a role-playing game in which players were introduced to these manipulation techniques and then were awarded points for crafting a convincing misinformation campaign and attracting followers (10). Before-and-after tests showed that the game gave players a 20 to 25% boost in their ability to recognize real-life misinformation, van der Linden says, with the effect lasting for at least 7 to 10 days. Once their experiments were done, he says, the team made the game free and publicly available online, "and it went viral on Reddit. There were so many users coming to it that we had to buy more cloud space."

When the pandemic hit, the team launched a similar game, Go Viral, designed to inoculate players against COVID-19 misinformation specifically, van der Linden says. Backed by the UK government, the World Health Organization, and the United Nations, the game received more than 200 million views on social media, according to the UK government. As gratifying as that response was, though, the games took 5 to 20 minutes to complete (an eternity on social media), and once they were out of the lab, they mostly attracted people who were already open to their message.

So Roozenbeek, van der Linden, and others devised an intervention with wider reach (11). Working with Google, they produced a series of 2-minute videos that use humor and pop-culture references to illustrate six of the most common manipulation techniques. In one example, they depict the sort of false dichotomy often used to push people toward extremism. In a clip from Star Wars: Episode III: Revenge of the Sith, Anakin says, "If you're not with me, then you're my enemy!" Obi-Wan responds that "only a Sith deals in absolutes!" The video's narrator concludes by intoning, "Nobody wants to be a Sith."

After extensive refinement in the lab, the videos were field-tested by dropping them into time slots normally used for ads on Google's YouTube platform, van der Linden says. This campaign went out to millions of real YouTube users, he says, "and we were able to randomly select about 30% who would either be exposed to our treatment video or to a control video about freezer burn." Then, in a follow-up about 18 hours later, the people who had seen the treatment video were 5 to 10% better than the controls at recognizing that manipulation technique in a test headline.

That isn't a transformative change. But it's good enough that YouTube has implemented the videos widely in parts of Europe and elsewhere. And while it's true that the effect fades over time, van der Linden says, the team has confirmed that giving people a "booster shot" in the form of a short reminder video can keep the cognitive immunity going for at least 3 months (the longest time period they've tested).

Think More, Share Less

Still, inoculation will only have a limited impact on the misinformation problem by itself. If nothing else, Ceylan says, "it's putting a lot of responsibility on the individual," as opposed to the social media algorithms that deliver psychic rewards when users share misinformation. "I don't think user behavior will really change unless we have structural changes in that reward mechanism," she says.

This is why another strand of misinformation research tries to nudge social media users into focusing less on engagement when they share something, and more on accuracy. Or, as West puts it on a sticker he gives his students: "Think More, Share Less."

When you ask users to stop and think, as Rand and Pennycook did in 2019, they are surprisingly good at judging a post's accuracy, regardless of political party affiliation. Says Rand: "If you ask 15 to 20 random Facebook users, 'How accurate does this seem?' and then average their responses, they do a good job of reproducing what a professional fact checker would come up with" (12). This is consistent with the extensive literature on the wisdom of crowds, Rand says, and suggests that crowdsourcing could be a reliable way for platforms to label inaccurate content and overcome the scaling challenge to professional fact-checking.
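
To make the idea concrete, here is a minimal Python sketch of how such crowd ratings might be aggregated. The 1-to-5 scale echoes the kind used in these studies, but the function names, panel-size check, and flagging threshold are illustrative assumptions, not the researchers' actual pipeline.

# Illustrative sketch of crowdsourced accuracy scoring; the scale, panel size,
# and threshold are assumptions for demonstration, not the studies' methods.
from statistics import mean

def crowd_accuracy_score(ratings):
    """Average a small panel of lay ratings (1 = 'not at all accurate',
    5 = 'very accurate') into one score for a post."""
    if len(ratings) < 15:  # the experiments used panels of roughly 15-20 raters
        raise ValueError("need a larger panel for a stable estimate")
    return mean(ratings)

def flag_for_review(ratings, threshold=2.5):
    """Flag a post for downranking or professional fact-checking when the
    crowd's average accuracy rating falls below the chosen threshold."""
    return crowd_accuracy_score(ratings) < threshold

# Example: 16 lay raters judge one headline on the 1-5 scale.
panel = [1, 2, 2, 1, 3, 2, 2, 1, 2, 3, 2, 1, 2, 2, 3, 2]
print(flag_for_review(panel))  # True: the crowd rates this headline as inaccurate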

In 2021, Rand, Pennycook, and their colleagues extended this line of thinking to nudges they call "accuracy prompts" (13). Again, Rand says, the idea is consistent with previous work in psychology on the priming effect: "If you can just get people to think about accuracy, then they're less likely to share bad content." And it's not even hard, he says: All you need is a pop-up window asking users to, say, rate the accuracy of a random headline or state whether accuracy is important to them, anything that makes them think about truth.
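
A rough sketch of what such a nudge could look like in code, with a hypothetical headline pool and prompt wording rather than any platform's real implementation:

# Hypothetical accuracy-prompt nudge: before a share goes through, ask the user
# to rate the accuracy of one random, nonpolitical headline. The rating itself
# is discarded; the point is simply to get the user thinking about accuracy.
import random

NEUTRAL_HEADLINES = [
    "City council approves new bike lanes downtown",
    "Local museum extends weekend opening hours",
]

def accuracy_prompt():
    headline = random.choice(NEUTRAL_HEADLINES)
    input(f'How accurate is this headline, from 1 to 5?\n  "{headline}"\n> ')

def share_post(post_text):
    accuracy_prompt()                 # nudge first ...
    print(f"Shared: {post_text}")     # ... then complete the share as usual

share_post("You won't believe what they're hiding...")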

The group verified their lab results with a real-world trial that asked more than 5,300 users of (pre-Musk) Twitter to rate the accuracy of a single nonpolitical headline. In the 24 hours that followed, tweets shared by these users showed a definite skew toward sources that independent fact-checkers had already rated as more reliable, such as CNN and The New York Times, and away from those deemed less reliable, such as Breitbart and The Daily Caller. Subsequent experiments have confirmed that accuracy prompts reduce bad-content sharing by roughly 10%: not dramatic, Rand says, but worthwhile, and easy for platforms to implement routinely (14).

At the Hospital del Mar Research Institute in Barcelona, meanwhile, neuroscientist Clara Pretus has joined with NYU's Van Bavel and others on preliminary tests of a more direct nudge toward accuracy: giving users a button to tag posts as "misleading" (15). In a lab experiment with some 2,500 US and UK users, Pretus says, 25% reported that a high misleading count would make them less likely to share a post, with a critical caveat. "This probably works really well for the general population," she says, but could easily backfire with users at the extreme ends of the political spectrum.

In an earlier study of more than 1,600 social media users in the United States and Spain (16), Pretus says, she and her coauthors found that these highly politicized individuals were some of the most prolific sharers of misinformation. "It's an identity thing," she says. "They not only use it as a way to campaign for their cause, but to signal that they are part of their group." But this also makes them the most profoundly mistrustful of nonbelievers. "So an important aspect of our current intervention is that users only see 'misleading' tags from other people in their group," she says. And with that restriction, she says, the tags seem to be quite effective. She and her colleagues are now working on the next step: testing the technique with real users in a real social media environment.
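
A minimal sketch of that in-group design, using made-up data structures and group labels: each "misleading" tag records the tagger's group, and a viewer's count includes only tags from their own group.

# Illustrative only: show a viewer just the 'misleading' tags left by members
# of their own group, per the in-group design described above.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    misleading_tags: list = field(default_factory=list)  # (user_id, group) pairs

def visible_misleading_count(post, viewer_group):
    """Count only tags from users who share the viewer's group identity."""
    return sum(1 for _, group in post.misleading_tags if group == viewer_group)

post = Post("Dubious claim about the other side...")
post.misleading_tags += [("u1", "group_a"), ("u2", "group_a"), ("u3", "group_b")]

print(visible_misleading_count(post, "group_a"))  # 2 tags visible to group_a members
print(visible_misleading_count(post, "group_b"))  # 1 tag visible to group_b members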

Defense in Depth

It's widely agreed that none of these interventions by itself is a silver bullet against misinformation. The way forward is to find methods of combining debunking, prebunking, accuracy nudges, and more.

One preliminary example can be found in unpublished research cofunded by Google, van der Linden says: It suggests that when the inoculation videos are alternated with accuracy nudges, the effects of both interventions are enhanced. In another example, West recently joined with colleagues at the University of Washington to develop a computer simulation that treats the spread of misinformation as a kind of contagion and that's calibrated with real-world data on 10.5 million misinformation tweets during the 2020 election. "It's a very simple model," West says. "But it does a great job of capturing the dynamics that we see in the real data," such as surges of activity when a post goes viral. From there, he and his colleagues used the simulation to model the effect of potential interventions such as fact-checking, accuracy nudges, or outright suspension of the most active superspreaders, and found that, when used in combination, these interventions could reinforce one another and cut the total volume of misinformation by 50% or more (17).
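
The published model is far richer, but a toy contagion-style simulation in the same spirit, with entirely made-up parameters, can illustrate how interventions that look modest on their own compound when combined:

# Toy contagion-style simulation, in the spirit of the model described above.
# Every number here is a made-up illustration, not the published calibration.
import random

def simulate(n_users=2000, steps=10, p_share=0.05,
             nudge_factor=1.0, suspend_top=0):
    """Total shares of one false post under a given mix of interventions."""
    random.seed(0)                                    # same network for every run
    followers = [random.randint(5, 200) for _ in range(n_users)]
    # 'Suspend' the accounts with the most followers (the superspreaders).
    for i in sorted(range(n_users), key=lambda i: -followers[i])[:suspend_top]:
        followers[i] = 0
    sharers = set(random.sample(range(n_users), 20))  # initial seed accounts
    total = 0
    for _ in range(steps):
        next_sharers = set()
        for u in sharers:
            # Accuracy nudges and debunking labels scale down the share rate.
            n_shares = sum(random.random() < p_share * nudge_factor
                           for _ in range(followers[u]))
            total += n_shares
            next_sharers.update(random.sample(range(n_users), n_shares))
        sharers = next_sharers
    return total

print(simulate())                                   # no intervention
print(simulate(nudge_factor=0.7, suspend_top=50))   # nudges + suspensions combined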

But this coordinated defense is not something that individuals can do by themselves, West says: The need for some sort of coordinated action from governments, companies, and other institutions has long since become unavoidable.

That's especially urgent, given the startling advances in generative AI. This category of algorithms includes incredibly lifelike "deepfakes" (images, videos, and audio files that are almost impossible to tell from the real thing) as well as equally realistic text and conversations powered by the likes of ChatGPT. "AI is now democratizing the creation of propaganda," van der Linden says. "Anyone with access to the internet can have ChatGPT automate a campaign to deceive people at scale," he says, and even do mass microtargeting by feeding readily available browsing data to the AI and telling it to tailor a manipulative message for each individual.

The big social media platforms are moving cautiously, reluctant to do anything that might hurt their engagement numbers (and thus their advertising income), Ceylan says. But the reputable ones still have a strong incentive to keep things under control, she points out. According to her research, most people say they're hesitant to engage with or reshare posts and that it's increasingly hard to know which information is true or false. "If we can create an environment where people can trust that the information they see is going to be accurate," Ceylan says, "I think it's going to impact the engagement positively rather than negatively."


M. Mitchell Waldrop wrote this article for the Proceedings of the National Academy of Sciences.

