
Generative AI and the (De)generation of Truth

Saudi Arabia has recently introduced AI-powered, automated fatwa (religious ruling)-on-demand robots capable of generating religious advice in no fewer than 11 languages. While the underlying technology is unclear, I would not be surprised if it takes advantage of Large Language Models such as the ones behind ChatGPT. It also made me wonder about the potential pitfalls of using Generative AI as a way of guiding people to the “truth.” Which, and whose, “truth” are we talking about?

Generative AI tools such as ChatGPT and Midjourney are wonderful time-saving machines. They provide users with a colourful palette of instruments in a single interface to assist with anything from the creation of stock images, illustrations, logos, and product pitches to academic book summaries (disclaimer: I’ve tried it, and it’s not there yet). Recently, Midjourney has been in the spotlight for enabling the creation of alternative realities. Some of these are seemingly innocent, such as reimagining the Harry Potter universe as if designed by Balenciaga or picturing the pope in a puffer jacket. Others, however, carry a much heavier political significance, such as the images depicting a fictional arrest of former US president Donald Trump.

These fictional outputs gripped the Internet for a particular reason: such images have strong symbolic power. Take well-known figures to whom people attach strong emotions (the pope, a former president, characters with a huge fan base) and associate them with equally strong symbolism: a trendy white coat, the violence of an arrest, or the emaciated aesthetics of a luxury brand. There you have it: the perfect recipe for internet virality. And here lies the thin line between truth and fiction.

Social psychology studies such as Solomon Asch’s conformity experiments have shown that the more people assert a statement as truth, the more likely individuals are to succumb to the group’s opinion – even when they know the group is wrong. Several times, the internet has shocked us with surreal, seemingly impossible events: videos and images that turned out, in the end, to be true (remember the shirtless, horned man in the US Capitol?). I first clicked on the fictional Trump arrest pictures because my brain judged that, however spectacularly violent the images were, they were plausible. Since then, ChatGPT and Midjourney have taken “measures” to prevent the generation of content that could be used to sow confusion with malicious intent: the former now replies that it cannot impersonate public figures, and the latter has banned the “arrested” prompt. However, these Band-Aid policies are no long-term fix; people always find workarounds (ever heard of adversarial prompting?). Can we think of countermeasures such as obligatory watermarks and metadata, in addition to creators’ full disclosure? Can we design a reverse-engineering algorithm that figures out whether a particular piece of content was artificially generated? These questions remain open.
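To make the metadata idea concrete, here is a minimal sketch of what a provenance check could look like. It assumes an image still carries honest metadata (a big assumption, since metadata is trivially stripped) and simply looks for generator fingerprints that some tools are known to embed, such as the “parameters” text chunk written by popular Stable Diffusion front ends or an EXIF “Software” field. The file name and the list of suspect keys are illustrative placeholders, not a reliable detector.

# Illustrative only: inspect an image's embedded metadata for signs that it
# was produced by a generative model. Assumes the metadata has not been
# stripped, which in practice it very often is.
from PIL import Image, ExifTags

# Keys that some generators are known to write into image metadata
# (e.g. the "parameters" chunk used by common Stable Diffusion front ends).
# This set is a guess, not an exhaustive registry.
SUSPECT_KEYS = {"parameters", "prompt", "Software", "generator"}

def provenance_hints(path: str) -> list[str]:
    hints = []
    img = Image.open(path)

    # 1. PNG text chunks and other format-level metadata end up in img.info.
    for key, value in img.info.items():
        if key in SUSPECT_KEYS:
            hints.append(f"metadata key '{key}': {str(value)[:80]}")

    # 2. EXIF fields such as 'Software' sometimes name the generating tool.
    exif = img.getexif()
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, str(tag_id))
        if tag == "Software":
            hints.append(f"EXIF Software: {value}")

    return hints

if __name__ == "__main__":
    # "suspect.png" is a placeholder path for whatever image you want to check.
    for hint in provenance_hints("suspect.png") or ["no provenance hints found"]:
        print(hint)

Serious provenance schemes such as C2PA go further and cryptographically sign this kind of metadata, precisely because plain text chunks can be removed or forged in seconds.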

At the time of writing, it is fair to say that current Generative AI models are far from perfect, and eagle-eyed users can still spot the AI’s “signature”. Midjourney struggles with hands and faces, and ChatGPT has its own writing style: ask it how to make the perfect morning toast and it will return a bullet-point list in the style of WikiHow: 1. Find some bread. 2. Think about slicing it. Ask it to summarize a book and you will soon learn that every single book in the world is “thought-provoking” and “insightful”. But the technology catches up rapidly. Did you hear about the AI-generated picture that duped the judges and won a category at the 2023 Sony World Photography Awards? The deception was deliberate: the author wanted to send a shockwave through the art world. If professional judges were fooled, what about us mere mortals?

Returning to this piece’s opening example of fatwa-generating robots, can we imagine the effect of governments and religious institutions appropriating Generative AI tools? Many authoritarian regimes already have armies of developers, government-vetted media, and the financial means to spread influence. Now imagine regimes – each with its own idiosyncratic vision of “truth” – delegating issues as complex as religious rulings to algorithms and distributing them to millions of believers across the globe. Events such as the 2023 Sony World Photography Awards remind us that information is only as powerful as our willingness to believe it and to spread it: can we still take our feeds at face value? It is a matter of shared responsibility: from creators to end users, we are all stewards of truth in our social circles. If we want a future rooted in truth, perhaps it is not the software we should fear so much as our ability (or lack thereof) to think critically.
