NewsGuard’s new white paper, “Fighting Misinformation with Journalism, not Algorithms,” outlines independent research on the effect of using human-curated news reliability ratings to mitigate false news. Some of this research has been conducted by leading academic institutions and other top scholars using NewsGuard’s Reliability Ratings dataset.
As researchers, media experts, and policy analysts redouble their efforts to assess the impact of misinformation and test effective ways to counter its spread online, NewsGuard’s human-powered ratings of the news and information sources that account for 95% of engagement online have become an essential benchmark. Since NewsGuard’s founding in 2018, researchers at 25 academic, nonprofit, and corporate institutions have licensed NewsGuard’s data for their work.
“NewsGuard’s detailed and nuanced rating system is an indispensable asset for anyone interested in obtaining quality information on the Internet,” said Stephan Lewandowsky, Professor of Cognitive Science at the University of Bristol.
The white paper describes the findings of independent research assessing the impact on news consumers when they have access to ratings of the trustworthiness of news sources at the website, or domain, level. Researchers have tested NewsGuard’s approach of providing source reliability ratings alongside online news content, showing that it reduces the sharing of, and trust in, content from unreliable sources.
For example, 2017 research from Stephan Lewandowsky, a psychologist at the University of Bristol; John Cook, a researcher at the Center for Climate Change Communication at George Mason University; and Ullrich Ecker, a cognitive psychologist at the University of Western Australia, found that warnings about misinformation are more effective when delivered before the misinformation is encountered rather than after. In other words, it is more effective to “prebunk” a false claim by instantly alerting a user that the claim has been published by a source known to publish falsehoods than it is to “debunk” a false claim with a fact check that occurs, by definition, after the fact — and that may not reach the individuals who saw the false claims in the first place.
After launching the browser extension in 2018, NewsGuard and the Knight Foundation commissioned a Gallup survey to assess how the tool worked when installed on personal computers. The study found that 63% of respondents said they would be less likely to share news stories from red-rated (generally unreliable) websites, and 56% said they would be more likely to share news from green-rated (generally reliable) websites. Ninety percent of respondents generally agreed with the ratings, and they trusted the ratings more because NewsGuard ratings are done by “trained journalists with varied backgrounds.” And 83% of respondents wanted social media platforms and search engines to integrate NewsGuard ratings and reviews into their products.
Additionally, the paper describes a new approach to fighting online harms: having platforms offer “middleware” solutions such as NewsGuard that enhance the user experience via third-party integrations — an idea first proposed earlier this year by the Stanford Working Group on Platform Scale, led by political scientist Francis Fukuyama.
The white paper also explains how NewsGuard data is used for research measuring how misinformation spreads and how social media companies and others could take effective steps to counter misinformation. For example, a November 2021 report published by researchers at New York University, “Understanding Engagement with U.S. (Mis)Information News Sources on Facebook,” used NewsGuard data to generate a list of news publishers’ official Facebook pages and categorize them based on reliability scores and political leaning.
In another example, The Carter Center’s paper, “The Big Lie and Big Tech,” catalogued how “repeat offenders” — sources found by NewsGuard to repeatedly publish false content — thrived on Facebook in the run-up to and aftermath of the 2020 U.S. election.
NewsGuard’s Reliability Ratings also continue to power the University of Michigan’s “Iffy Quotient,” an ongoing project by the Center for Social Media Responsibility. Researchers at the University of Michigan use NewsGuard data in the Iffy Quotient to measure whether social media companies are making progress in reducing the spread of unreliable news on their platforms.
“In order for us to produce a metric that allows meaningful comparisons over time, we depend on having lists of reliable and unreliable sites that are updated as new sites become popular,” said Paul Resnick, Director of the Center for Social Media Responsibility at the University of Michigan School of Information. “That’s one of the reasons that NewsGuard has been a great resource for us.”
Other institutions that have licensed access to NewsGuard’s data for their research include:
- Avaaz
- Dartmouth College
- George Washington University
- The German Marshall Fund of the U.S.
- IMT School For Advanced Studies Lucca
- Los Alamos National Labs
- New York University
- The University of Pennsylvania
To read these ideas and more in NewsGuard’s white paper, click here.
To read more about NewsGuard’s work with researchers, including some of the published works mentioned here, click here.
Posted on: Wednesday 15 December 2021