We prize knowledge but everybody lies. So says Psalm 12:2 in the Hebrew Bible. Current neuroscience research and data analytics agree.
Paradoxically, we live today with more knowledge and more falsehoods more closely intertwined than ever before. Thanks to digitization of the world’s knowledge and AI-powered search capability to access it, we can instantly summon the most arcane facts imaginable. Yet, also as never before, AI-powered software and social media networks allow ordinary people to create highly convincing fakery and falsehoods and circulate them globally.
The result is a world in which it often seems that objective truth is neither expected nor valued, trust is tribal, accountability a quaint concept, and our native ability to discern truth from falsehood rapidly diminishing. Former truth-telling systems like professional journalism and scientific expertise are undercut, anyone can be a convincing propagandist, and everyone has to learn new ways to tell fact from fiction.
How do we better live with this paradox of greater knowledge and more convincing falsehood? First we need to understand the landscape of true and false statements so we can distinguish between them. For future consideration is the ethical question of where we draw the line between acceptable and unacceptable statements for purposes of societal flourishing and living with integrity.
A Field Guide to the Real and Virtual Landscape of Truth and Lies
Here is my attempt at a “field guide” to recognize basic types of truthful statements, lies and blended statements in between in the real and the digital world, starting from the truthful side of the landscape.
Objective, verifiably true facts must surely hold down the “most truthful” end of the true to false spectrum.
Some would question whether these kinds of facts even exist. Post-modernists working in humanities like history, literature, philosophy and the social sciences say “no” – that there are as many truths as there are lenses for perceiving truth. But hard scientists, and people who design and make things in the real world like engineers, most likely say “yes”. For purposes of our field guide, let’s go with them.
How do we verify such truths in the real world? Through careful and reliable observation, recording of data, comparisons from multiple sources, and often through accumulated knowledge over time (expertise). Scientists have a multi-step scientific process formalizing these steps. Courts decide what is true through an adversary judicial process. In everyday commerce and society, we have traditionally sorted through claims in advertising and public relations, looked to professional journalists to investigate and report what really happened, and very often relied on the knowledge of people we know and trust. None of these methods is infallible, but all have generally stood the test of time, until now.
In the digital world of AI, vast pools of data and powerful analytical tools create tantalizing possibilities for understanding how the world works in ways we would not even have previously thought to hypothesize. Computer-based modelling is supplementing and replacing real world cause-and-effect experimentation by virtually testing and retesting hypotheses millions of times, the results of which can then be related back to the real world. By sheer repetition of outcomes, correlation may become as good a basis for verification as proof of actual causation, or so it is claimed.
Subjective eyewitness accounts are one step down from objectively verified true facts. A person describes what he or she claims to have seen. More and more, psychology and neuroscience tell us that such subjective observations are less reliable than “objectively verifiable facts”, owing to the limitations of human perception, memory, bias, care, and other frailties that even the most conscientious witness cannot control. But eyewitness accounts are still highly valued, especially if they can withstand careful challenges and are supported by corroborating evidence.
AI augments such human perception with always-on government and private-business surveillance, and with everyday consumer surveillance through the cell phones we all carry. Pictures don’t lie – or so we believed, until new technology enabled visual images and oral statements to be convincingly created and altered.
Truthiness is defined by the Merriam-Webster dictionary as “a truthful or seemingly truthful quality that is claimed for something not because of supporting facts or evidence but because of a feeling that it is true or a desire for it to be true.” Here’s the best everyday test: listen for any sentence that starts with “I feel like . . .” – which these days is very often!
AI has empowered “truthiness” by offering everyone a platform to speak their own truth, in direct contrast to the dictum attributed to various 20th century politicians that “everyone is entitled to their own opinion, but not to their own facts.”
Which brings us to Opinions. Many people would say opinions do not belong in a field guide to truth because opinions are interpretations, not statements of facts. But opinions nevertheless can be powerful shapers of other people’s understanding of actual facts, depending on how much trust, experience, wisdom and common sense they accord to the opiner. For instance, law courts allow juries to decide what factually happened on the basis of opinions offered by qualified experts derived from admissible evidence of underlying facts.
These days, social media enables and encourages everyone to form and share their opinions, whether they have any special expertise or have considered any underlying facts. As this happens and is recorded on social media by the millions and billions, it alters perceptions of truth both by changing “I think” to “I feel” as noted above, and by discouraging consideration of evidence due to short message templates and shorter attention spans.
Beliefs are a subset of opinions – let’s call them “considered opinions”. I include them in this field guide because at their best they are partially based on underlying verifiable facts, and reflect a coherent system of thought that often has been subjected to long term study and constant challenge. Take evolution, for example, which remains a theory – though obviously a very widely accepted one, especially in the scientific world — resting on millions and millions of scientific observations.
Organized religions often constitute a strong category of “beliefs” because they require a degree of consistency and rigor and were formulated or maintained by organized communities based on texts written and venerated over a long period of time. Devout adherents accord great “truth” to their faith beliefs even if unverifiable because they explain both the spiritual and the material world.
In the AI world, social media appears to have accelerated people’s desire and ability to formulate their own customized statement of beliefs. In my hometown, Seattle, organized religious beliefs overlap with secular catechisms like the “In this house . . .” placards that have appeared in many of my neighbors’ yards.
At the same time, adherents to organized religion can and do use social media to connect and encourage consideration of their own time-tested beliefs, while offering real community to augment the digital online version.
The category Omissions takes our field guide into deception territory. Omissions are the failure to make a statement necessary to avoid creating a false impression or assumption. Legally, omissions only become deceptive when there is a duty to speak up in a particular context. For example, a financial disclosure document that fails to disclose a material fact is regarded as untruthful even though no false statement is affirmatively made. In many Christian prayers for forgiveness, sins of omission are called out right alongside sins of commission. This is because Christian doctrine teaches the duty to be honest and to live as a servant to others.
Natural language processing, an area of early and extraordinary success for AI, presents an example of an omission issue in the AI world. The classic test for AI success – the Turing Test – asks whether a human can tell they are interacting with a machine rather than another human. Now that bots are approaching this capability, even incorporating human-like verbal tics (“um” and “uh”) to create a successful, frictionless interface, chatbot designers must ask whether they are deceiving customers by failing to disclose that they are dealing with a bot rather than a human.
Next up are “White Lies”, surely the largest category of untruths. These are low grade falsehoods we tell ourselves and others for reasons better (e.g., to avoid hurting someone’s feelings) or worse (e.g., to excuse our failures). They often get a moral pass because they seem to produce some benefit and usually little harm.
Shankar Vedantam, host of the popular NPR show and podcast The Hidden Brain, recently co-authored a whole book exploring the idea that we are evolutionarily wired to deceive ourselves and others all the time, for our own good and that of the broader society. In Useful Delusions: The Power and Paradox of the Self-Deceiving Brain, he argues for the functional value of such self-deception “to help you survive, to forage for opportunities, to get along with mates and friends, to raise offspring to adulthood, and to avoid feelings of existential despair.”
Since White Lies are social grease, they are a mainstay of social media discourse in the digital world.
Nearer the far end of the false spectrum are Objectively Verifiable Lies – statements that can be shown to be false by the same strategies used to verify objective truths. Many reasons for telling such lies exist: “magical thinking,” which relieves the cognitive dissonance between a belief and the true facts that undercut it; allegiance to a particular group; or the powerful feeling of possessing special “insider knowledge.” Likely even more common is laziness, or simply a lack of care for what is actually true or false.
Here again the mores and incentives of social media have undermined a former duty to learn and tell the truth. It used to be that we laughed over “urban legends.” Now we have to deal with QAnon true believers at our family dinner tables. This is one reason we now have a host of overt “fact checker” columns by professional journalists and grassroots organizations working to restore truth to our social and political discourse.
On the other hand, data pools and analytics can also be an AI-powered discerner of truth and revealer of deception. In his book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are, New York Times data columnist Seth Stephens-Davidowitz shows how Google searches reveal people’s real beliefs and interests better than personalized surveys.
Intentional Falsehoods hold down the far false end of the spectrum because they are made with the affirmative intention to mislead. Up to this point, it has been possible to excuse falsehoods as minor and well-intentioned, or as careless, irresponsible, and lazy. But lies made with the intent to mislead constitute outright “fraud.”
In the digital world, scary forms of such fraud are emerging: AR- and VR-based false images and videos of real people, and vast, highly targeted synthetic media posts of false information intended to influence elections, sow social discord, and steal data, identities and money.
Further Work: Drawing Lines of Acceptable Behavior
A “field guide” is only descriptive – a way to see the landscape. It doesn’t tell you how to navigate it. On this spectrum, where truth ends and deception begins depends on the world view and ethical model you apply. So does the line between acceptable and unacceptable behavior.
Compare these two approaches, one secular and one faith-based, to see how this can work.
Shankar Vedantam, author of Useful Delusions, is passionate about reason. In my field guide, it appears from his book that he would draw the line between truth and “self-deception” immediately below “objectively verifiable facts”, making any kind of belief system a part of the world of self-deception. Indeed, the last chapter of Useful Delusions is about religion. He entitles it “The Grand Delusion” and discusses in it such scholarly explanations for religious belief as “terror management”, group cohesion and solidarity, and other beneficial societal motivations.
At the same time, Vedantam recognizes there is virtue in self-deception, including religion, and serves up in his Epilogue the largely unanswered question: “When should we fight self-deception and when – and how much – should we embrace it?” It seems Vedantam would draw the line between acceptable and unacceptable deception well down in falsehood territory because ultimately his interest is in functionality. “Remember this,” he writes. “If self-deception is functional, then it will endure, regardless of all the best sellers that criticize it. Life, like evolution and natural selection, ultimately doesn’t care about what is true. It cares about what works.” (192) And “what works” may include religion, even if it is delusional: “if the stories have resonance and power, does it really matter if they are true? Why put the emphasis on the truth or falsity of the stories, rather than on what the stories do for us?” (202)
Andy Stanley, popular pastor of the huge North Point megachurch in Atlanta, Georgia, thinks Christianity works too, but at face value. In his recent book Better Decisions, Fewer Regrets: 5 Questions to Help You Determine Your Next Move, Rev. Stanley’s first question, and the one most basic to everything that follows, is “Am I being honest with myself? Really?” Because Stanley’s world view includes an omniscient, sovereign, loving God who claims to be the source of all truth, he believes it is not only possible but essential to practice self-honesty, not self-deception. If you have no platform for determining what is “true,” you are unmoored in life and will eventually flounder. But wielding truth must also be moderated by the command to “love your neighbor,” leaving ample room for benevolent outcomes of the type Vedantam chronicles.
My own faith beliefs dictate that we can and must do better than a standard for truth based on “what works.” That’s just dubious utilitarianism. But even that low bar can find value in faith beliefs, as Vedantam suggests, as we seek to navigate the shifting landscape of truth and lies in the digital age.