
Misinformation Hub

In a hurry? Check out our podcast summary.

What is the difference between misinformation and disinformation?

The terms “disinformation” and “misinformation” both refer to the spread of false information, but they differ significantly in intent. Disinformation refers to false information that is deliberately created and disseminated to deceive and mislead the audience. The creators of disinformation are aware that the information is false but spread it to achieve a specific objective, which could be to manipulate public opinion, influence political processes, or cause confusion. 

For example, a fabricated news story might be created and shared falsely claiming that a candidate was involved in illegal activities. The purpose of such a story would be to damage the candidate’s reputation and influence voters’ perceptions, with the creators fully aware that the accusations are baseless.

Misinformation, on the other hand, involves the sharing of false information without the intent to deceive. Often, the person disseminating misinformation believes it to be true. It can spread due to misunderstandings, mistakes, or a lack of knowledge about the facts. In many cases, it is done with the best of intentions.

An example would be when a person shares a social media post claiming that a certain vitamin supplement can cure a disease. The individual sharing the information may genuinely believe this claim and think they are helping others, even though there is no scientific evidence to support it.

The reality is that both can have devastating effects. While determining intent may be important in a court of law, the impact of either one is ultimately where the danger lies. Well-intentioned people have caused a great deal of harm. Both types can be mitigated through strategies that remind us to slow down, think, and evaluate what we plan to share before doing so. One can lump both under the “misinformation” umbrella as a catch-all term for the spread of false information, which is what we’ll do henceforth.

What are the types of misinformation?  

False information comes in many shapes and forms. According to Dr. Claire Wardle of First Draft, we can divide it into seven categories:

  1. Fabricated Content – False content created and designed to deceive or do harm. An example of this would be the creation of entirely false articles claiming that a celebrity has died. These are completely fabricated stories with no basis in fact, designed to generate clicks and revenue through sensationalism, or sometimes to manipulate stock prices or public perception.
  2. Manipulated Content – Cases in which truthful information or imagery is altered in a way intended to deceive or to harm. For example, a genuine photograph of a politician might be digitally altered to make the person appear in a compromising situation or in poor health. This manipulation of truthful imagery is designed to create a false narrative, often to damage reputations or influence public opinion.
  3. Imposter Content – This is when news is produced through impersonated sources. That is, the agent makes it look like it is from a reliable or reputable source. During the 2016 U.S. presidential election, there were instances where fake news websites mimicked the design of legitimate news outlets like ABC News to spread false stories. One infamous example was a fake story about a protester being paid to demonstrate at Trump rallies, purportedly published by ABC News.
  4. Misleading Content – When a piece of media uses information to unfairly frame an issue or an individual. For example, during the COVID-19 pandemic, some posts on social media claimed that drinking hot water every 15 minutes would prevent infection by the coronavirus. This used real recommendations about hydration but framed them misleadingly to suggest they could prevent COVID-19, creating a false sense of security.
  5. False context – When truthful content is shared with false contextual information. As an example, a photo showing a large crowd might be shared with the claim that it represents a protest against current governmental policies. However, the photo could actually be from an entirely different event or time, such as a music festival from several years earlier. This false context can dramatically change the perceived meaning of the image, turning a simple gathering into a political act when that was not the case.
  6. False connection – This happens when a headline or a visual does not support the content it links to. For example, a news article might carry a sensational headline like “Shocking Discovery in Local Government,” but the actual content discusses minor bureaucratic errors with no real “shocking” elements. This creates a false connection between the headline and the article content, misleading readers about the significance of the information.
  7. Satire or Parody – Most often harmless pieces of media designed to make light of serious topics or authority figures. Yet, to highlight the absurdity of a topic, satire can exaggerate or remove context for comedic effect, leading to misinformation. For example, “The Onion,” a well-known satirical news site, publishes articles on a wide range of topics with a humorous and often absurd twist. While clearly intended as satire, such content can sometimes be shared as real news by those unaware of its satirical nature, leading to confusion or misinterpretation. This can also happen when people share clips from comedy programs like “The Daily Show” as news.

Resources

  1. A short, informative podcast from Ed Melick on why you should care about dis/misinformation
  2. Listen to engaging conversations on the topic with our experts Jason Thacker and Andrew DeBerry.
  3. In this engaging TED talk, Dr. Claire Wardle discusses the real-life consequences of misinformation online. https://www.npr.org/2020/03/20/818299094/claire-wardle-why-do-we-fall-for-misinformation
  4. How to prebunk? Here is a guide on how to get ahead of misinformation by preempting it with well-reasoned facts that address the issue at hand: https://firstdraftnews.org/articles/a-guide-to-prebunking-a-promising-way-to-inoculate-against-misinformation/
  5. Verify it – here are some tips on how to verify whether information is accurate: https://counterhate.com/blog/how-to-navigate-online-disinformation-and-propaganda-and-practicing-information-resilience/
  6. Report it – this link shows how you can report misinformation when you spot it on social media: https://counterhate.com/blog/how-to-report-misinformation-on-social-media/
  7. Who is most susceptible to spreading misinformation? An informative study that homes in on the traits of populations most vulnerable to spreading misinformation: https://psycnet.apa.org/record/2021-96824-001
  8. Tools to stop and mitigate misinformation online https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html
  9. Fact-checking sites https://en.wikipedia.org/wiki/List_of_fact-checking_websites
  10. Submit rumors to check whether a claim is false https://www.snopes.com/
  11. Tools to check deepfake videos https://scanner.deepware.ai/ https://deepfakedetector.ai/ https://weverify.eu/tools/deepfake-detector/
  12. Resources for detecting audio fakes https://medium.com/@deepmedia/a-comprehensive-guide-to-detecting-voice-cloning-825f2738c00e

Go deep with our paper, which covers the technical aspects of misinformation along with interfaith reflections on its insidious impact.

Why is this such a daunting problem? 

Looking over the examples of misinformation above, it is not difficult to see why this is such a pervasive problem. Most likely, all of us have at some point inadvertently passed on misinformation to our networks on social media, by email, or in conversation. In a world that bombards us with noise, the human brain cannot possibly discern truth from falsehood every time.

With that said, this is not a new problem. As noted author Jason Thacker affirms, misinformation is another version of the age-old problem of propaganda. Empowered by advancing technological tools, agents of chaos often spread misinformation with a goal in mind. Whether it is to influence the outcome of an election, sway public opinion for or against a cause, or confirm and reinforce a worldview, misinformation is just one of the many new forms of propaganda. As in the past, propaganda succeeds when it gives people a satisfying part to play: someone to be, to love, and to hate. And as in the past, regulation is much needed but certainly not sufficient.

Even so, the pervasive spread of misinformation brings with it another daunting challenge: the “liar’s dividend,” a phenomenon in which the existence of synthetic media and the capability to create realistic yet false content enable malicious actors to sow doubt about the authenticity of real information. The “liar’s dividend” fundamentally alters the dynamics of trust and truth in society. As synthetic media technology becomes more accessible and its products more indistinguishable from reality, the potential for misuse increases, complicating efforts to maintain an informed public and a healthy democratic discourse.

Technological augmentation allows disinformation campaigns to exploit sophisticated algorithms that analyze vast amounts of personal data to identify individuals most likely to be influenced. By targeting these vulnerable groups with precision, misinformation not only finds a receptive audience but also fosters environments where it can flourish and mutate. As a result, falsehoods are not only spread more efficiently but are also more likely to be reinforced through echo chambers where contradictory information is scarce. Thus, the integration of AI technologies in the dissemination of misinformation introduces a significant amplification in the speed and reach of these campaigns.

What can you do about it?

As the magnitude of this problem becomes evident, the temptation is to move into fatalistic despair. Yet, there are things that citizens like you and me can do. Ultimately, the spread of misinformation is highly dependent on the human factor. Even a small group of informed citizens can do a lot to mitigate its spread among their networks. This is especially true in faith communities that already share strong relational bonds among their members.

We suggest five rules of thumb you can follow to mitigate the spread of misinformation:

  1. Verify Before Sharing: Always check the credibility of the information before sharing it on social media or other platforms. Use fact-checking websites like Snopes, FactCheck.org, or credible news sources to verify the accuracy of claims, especially if they seem sensational or unlikely.
  2. Check the Source: Evaluate the reliability of the source providing the information. Look for signs of credibility such as established media outlets with strong journalistic standards. Be wary of unknown websites or platforms that frequently post sensational or extreme content without factual backing.
  3. Read Beyond Headlines: Headlines can be misleading or crafted to attract attention. Always read the full article before sharing to understand the context and ensure the content actually supports the headline. This helps avoid spreading articles that may not accurately represent the facts or that are based on out-of-context quotes.
  4. Educate Yourself on Media Literacy: Improve your ability to analyze and evaluate media critically. Understanding how media is created and the motivations behind it can help you discern between legitimate news and misleading or false information. There are many online resources and courses available that teach media literacy.
  5. Report Misinformation When You Encounter It: Once you have determined that a piece of media is false, pass on your discovery to others. Share what you found with links to fact-checking or debunking sites. Furthermore, you can report it directly to social media platforms so they can block or remove fake content.

One could summarize all five rules into one simple piece of advice:

Before sharing anything, slow down and consider whether it is worth sharing. When in doubt, don’t share it at all.

Why should people of faith care?

AI and Faith affirms the crucial role faith communities play in informing the development of AI. Hence, it is no surprise that we call on faith communities to set the standard for truth-telling in mitigating the spread of misinformation. This is especially important since recent research on COVID-19 misinformation showed a consistent association between religious affiliation and political misinformation. This is not to say that faith communities are more vulnerable to misinformation per se, but it does show evidence of a checkered past.

There are at least three reasons why people of faith should deeply care about this topic. First, faith often dictates what is considered morally acceptable, influencing how individuals and communities respond to the challenges posed by AI-driven misinformation. Religious teachings and doctrines can offer a moral compass, guiding followers on discerning truth from falsehood. This is a crucial capacity in the era of AI, where distinguishing between authentic and fabricated information is increasingly challenging. This moral guidance can empower individuals to stop and critically assess information before passing it on, thereby mitigating the impact of misinformation and disinformation.

Second, the discussion around misinformation touches upon deep existential and philosophical questions that are traditionally in the domain of faith, such as the nature of truth, the ethics of creation, and the responsibility towards the created. As AI technologies advance, these questions become increasingly pertinent, pushing religious thinkers and scholars to engage in the conversation. Their contributions can provide valuable insights into how humanity should navigate the moral dilemmas posed by AI, including those related to misinformation and disinformation.

Third, faith influences the narratives and ideologies susceptible to manipulation by AI-driven misinformation campaigns. Propaganda agents can leverage beliefs deeply rooted in religious convictions to sow division, radicalize individuals, or mobilize groups toward certain political actions. Understanding the contexts in which such manipulation arises is essential for developing strategies to counteract misinformation that exploits religious sentiments. It also highlights the need for a nuanced approach to AI governance that respects religious diversity while safeguarding against the abuse of technology to manipulate faith-based communities.

Hence, we invite you to read our paper, which provides both a technical overview of the topic and perspectives from multiple faiths. We believe that mitigation starts with deep reflection followed by action. As we ground ourselves in the spiritual nature of truth, we are better prepared to take a stand against misinformation.

This page was created through a collaboration of AI and Faith experts Elias Kruger, Dr. Mark Graves, Marcus Schwarting, Ed Melick and Haley Griese. We invite your feedback and additions to this pressing topic by reaching out to us here.
