Evidence-Based Toolbox Against False Information

A team of researchers at the Max Planck Institute for Human Development, spearheaded by senior research scientist Anastasia Kozyreva, developed an evidence-based toolbox of individual-level interventions against false information based on a thorough review of 81 scientific papers. Their findings, detailed in a paper published on 13 May 2024 in Nature Human Behaviour, provide a conceptual overview of nine types of interventions and the empirical evidence supporting each one.

The Nine Individual-Level Interventions Against False Information: Definitions, Scope, and Examples

1. Accuracy Prompts

This intervention directs the attention of the audience to the concept of accuracy. It is grounded in the limited-attention utility model, which suggests that the limited cognitive resources of individuals can result in errors in decision-making. The audience, especially social media users, should therefore be reminded of the importance of accuracy.

Accuracy prompts can help address the prevalent habit of sharing false or misleading content online. On social media platforms, for example, users can be shown periodic prompts asking them to evaluate the accuracy of a headline, as in the sketch below. The goal is for people to think about the accuracy of content before sharing it online.
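The logic behind such a prompt can be illustrated with a short sketch. This is a minimal, hypothetical example: the hook names, the 10 percent prompt rate, and the sample headlines are assumptions made for illustration, not the design of any actual platform.

```python
# A minimal sketch of an accuracy-prompt flow. The hooks `on_share_attempt`
# and `show_prompt`, the prompt rate, and the sample headlines are all
# hypothetical and for illustration only.
import random

PROMPT_RATE = 0.1  # show the nudge on roughly 10% of share attempts

SAMPLE_HEADLINES = [
    "Scientists confirm coffee cures all known diseases",
    "City council approves new public transit budget",
]

def show_prompt(headline: str) -> None:
    # A real platform would render a modal; here we simply print the nudge.
    print("To the best of your knowledge, is this headline accurate?")
    print(f'  "{headline}"')

def complete_share(user_id: str, content_url: str) -> None:
    print(f"{user_id} shared {content_url}")

def on_share_attempt(user_id: str, content_url: str) -> None:
    # Occasionally shift the user's attention to accuracy before the share
    # completes; the share itself is not blocked.
    if random.random() < PROMPT_RATE:
        show_prompt(random.choice(SAMPLE_HEADLINES))
    complete_share(user_id, content_url)

on_share_attempt("user42", "https://example.com/article")
```

Note that the prompt does not block the share; consistent with the intervention's design, it only redirects attention toward accuracy.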

It is worth mentioning that this intervention has high scalability but is still considered a short-term solution. It does not improve the ability of people to evaluate or recognize false information; it merely redirects attention, and it therefore requires some underlying capacity to judge whether information is accurate.

2. Debunking and Rebuttals

Theories of persuasion and models of belief change collectively suggest that directly attacking false information through debunking and rebuttals is an effective individual-level intervention. It is considered ideal for addressing false information that can be fact-checked or countered with verifiable facts and authoritative sources.

Debunking involves offering corrective information. Rebuttals center on presenting accurate facts about an inaccurately covered topic. Both are implemented by stating the truth, warning about imminent exposure to false information, pointing out the misinformation and explaining why it is wrong, and reinforcing the truth by offering a correct explanation.

The targeted outcomes of this intervention are belief calibration and competence in detecting and resisting false information. Nevertheless, while it can be used for the general public, it has limited scalability. It is also reactive and topic-specific, there is no guarantee that it will convince people, and it requires the attention of the target audience.

3. Friction

Several social media platforms, such as Facebook and X, have added warning prompts or reminders that appear whenever users share or repost linked news articles without reading the content. This intervention is called friction. It involves introducing small obstacles to slow down the viral spread of false information and encourage people to pause and rethink before they share.
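A minimal sketch of how such a friction check might work is shown below, assuming the platform tracks which links each user has opened. The function and variable names are hypothetical, not any platform's actual API.

```python
# A minimal sketch of a friction check, assuming the platform tracks which
# links each user has opened. `opened_links` and the function names are
# hypothetical, not any platform's actual API.
def share_with_friction(user_id: str, url: str,
                        opened_links: set[tuple[str, str]]) -> str:
    if (user_id, url) not in opened_links:
        # Small obstacle: interrupt the share and prompt the user to pause.
        return "PROMPT: You haven't opened this article. Read it before sharing?"
    return f"SHARED: {user_id} -> {url}"

opened = {("user42", "https://example.com/read-article")}
print(share_with_friction("user42", "https://example.com/unread-article", opened))
print(share_with_friction("user42", "https://example.com/read-article", opened))
```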

Researcher Jo Bates introduced the concept of data friction in her 2017 paper examining the social and political forces that impede and enable the circulation of research data. This is the conceptual basis of using friction as an intervention against false information. The goal is to encourage pausing and to address the prevalent habit of sharing content without checking it.

One of the advantages of this intervention is that it is easy to implement. It also has a high degree of scalability. However, on the downside, similar techniques can be used to restrict freedom of choice and communication on the internet. Remember that its conceptual basis originated from an investigation of obstacles to research data dissemination.

4. Inoculation

This individual-level intervention is akin to a medical vaccine designed to strengthen the resistance of an individual against false information. It is a preemptive intervention that exposes people to a weakened form of the common strategies used to manipulate people or to create and spread false information, thereby building up their ability to resist those strategies.

Social psychologist William J. McGuire developed Inoculation Theory in 1961 to explain how attitudes and beliefs change, how they can be kept consistent despite efforts to change them, and how an attitude or belief can be made resistant to persuasion or influence. The theory posits that receivers can develop resistance through exposure to weakened counterarguments.

As an intervention, inoculation is considered helpful in combating false information that relies on manipulation strategies. Examples include conspiracy theories, fake news, and political propaganda. Implementation varies: there are simple-to-follow forms, but the more effective ones involve experimentation, lengthier lectures, and gamification.

5. Lateral Reading and Verification

Professional fact-checkers use lateral reading to evaluate online sources. Additional verification strategies, such as tracing and evaluating the original context of the information, are used to complement lateral reading. Together they form a range of methods and techniques for assessing the credibility, accuracy, and reliability of information, specific messages, or content.

The Digital Inquiry Group at Stanford University, led by psychologist Sam Wineburg, developed a curriculum called Civic Online Reasoning to equip students with the skills to discern credible information from false information and to participate effectively in civic discourse in the digital age. This curriculum is the basis of lateral reading and verification strategies.

Note that teaching lateral reading and verification generally involves classroom-based instruction or other formal and structured avenues. A more practical application would involve designing and placing pop-up messages or prompts that encourage the audience to cross-check information. This intervention helps increase information and digital literacy.

6. Media-Literacy Tips

Another individual-level intervention against false information centers on giving people a list of strategies for identifying false and misleading information or for critically evaluating content and the information it contains. This applies to the newsfeeds of social media platforms. It reminds the audience to evaluate and reconsider content before engaging with it.

Platforms like Facebook offer tips for spotting false news, and X has taken a similar approach. TikTok has published and maintains its Digital Literacy Hub, which provides a set of tools that can help users identify and even report inappropriate content. These social media platforms have also regularly rolled out information and digital literacy campaigns.

An approach that involves regularly providing media-literacy tips can help not only in building information and digital literacy but also in improving online and social media skills. It can be easy to implement, but more effective approaches would involve ensuring that the audience or actual learners understand the tips and can apply them.

7. Social Norms

Leveraging social information or peer influence can encourage people not to believe, endorse, or share false information. This is called the social norms intervention. It is based on the Social Norms Theory, or Theory of Normative Conduct, first introduced in a 1990 paper by psychologists Robert B. Cialdini, Raymond R. Reno, and Carl A. Kallgren and later expanded.

The Theory of Normative Conduct attempts to describe how individuals implicitly manage multiple behavioral expectations at once. It explains how social norms influence the behaviors or actions of individuals by distinguishing between descriptive and injunctive norms, and it asserts that norms are influential only when they are made salient.

An obvious example of this individual-level intervention against false information would involve some form of virtue signaling, such as emphasizing the importance of fact-checking content and stating that sharing false information is generally considered inappropriate and harmful. This is easy to implement, but its effect depends on social distance or group affiliation.

8. Warning and Fact-Checking Labels

This intervention involves placing warning labels on content to explicitly alert the audience to the likelihood of being misled, or attaching fact-checking labels or ratings from professional fact-checkers to indicate the trustworthiness of the content and the information it contains. It is based on various models of belief change, such as theories of persuasion.

Placing labels is generally the prerogative of platforms such as social media companies and publishers, although it can also be mandated by regulatory bodies. Facebook used to add warning labels to content deemed false or misleading, but it changed this policy in 2025. X is known for its Community Notes, which add context to a particular post or label it as false or misleading.

This intervention is considered easy to implement, as demonstrated by Facebook and X. It is also scalable and can be used alongside automated fact-checking algorithms or through community participation, as in the sketch below. However, because there are no regulations mandating its use, placing warning labels or trustworthiness ratings depends on the inclination of each platform.
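The sketch below illustrates one way labels might be attached to content based on fact-check verdicts. The record type and the verdict vocabulary are assumptions made for illustration; actual platforms define their own taxonomies and label wording.

```python
# A minimal sketch of attaching warning labels based on fact-check verdicts.
# The `FactCheck` record and the verdict vocabulary are assumptions made for
# illustration; real platforms use their own taxonomies.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheck:
    url: str
    verdict: str  # e.g. "false", "misleading", "unverified", "accurate"

LABELS = {
    "false": "Warning: disputed by independent fact-checkers.",
    "misleading": "Warning: this post may be missing context.",
    "unverified": "Note: this claim has not yet been fact-checked.",
}

def label_for(check: FactCheck) -> Optional[str]:
    # Content rated accurate gets no label; everything else gets a notice.
    return LABELS.get(check.verdict)

print(label_for(FactCheck("https://example.com/post", "misleading")))
print(label_for(FactCheck("https://example.com/story", "accurate")))
```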

9. Source-Credibility Labels

Another intervention similar to warning and trustworthiness labels is the source-credibility label. This involves adding credibility ratings to news articles or to an entire publication to help users identify reliable sources of information and separate them from unreliable ones, such as publishers notorious for false information, biased news, or misleading content.

Because it is similar to placing warning labels or trustworthiness ratings, this intervention is also based on various models of belief change, such as theories of persuasion. The journalism and technology company NewsGuard, for instance, labels news websites with a trustworthiness rating ranging from 0 to 100 based on nine journalistic criteria.

Adding source-credibility labels can help separate credible sources from misleading ones. It can also encourage reputable media organizations to tap third-party fact-checkers or rating systems to further improve their reputation. This intervention can also be automated: tools can be deployed to assess content and cross-check it with other sources.
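As a rough illustration, the sketch below maps a 0-to-100 credibility score to a label, loosely inspired by rating systems such as NewsGuard's. The threshold of 60 and the label wording are assumptions, not NewsGuard's actual methodology.

```python
# A minimal sketch of a source-credibility label keyed to a 0-100 score,
# loosely inspired by rating systems such as NewsGuard's. The threshold of
# 60 and the label wording are illustrative assumptions, not NewsGuard's
# actual methodology.
def credibility_label(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 60:
        return f"Generally credible source ({score}/100)"
    return f"Proceed with caution: low-credibility source ({score}/100)"

print(credibility_label(87))
print(credibility_label(42))
```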

FURTHER READINGS AND REFERENCES

  • Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K. H., Lewandowsky, S., Hertwig, R., Ali, A., Bak-Coleman, J., Barzilai, S., Basol, M., Berinsky, A. J., Betsch, C., Cook, J., Fazio, L. K., Geers, M., Guess, A. M., Huang, H., Larreguy, H., Maertens, R., … Wineburg, S. 2024. "Toolbox of Individual-Level Interventions Against Online Misinformation." Nature Human Behaviour. 8(6): 1044-1052. Springer Science and Business Media LLC. DOI: 10.1038/s41562-024-01881-0