AI, Facebook and Faith

Over the past few weeks, two blockbuster stories have competed for public attention. President Trump continues to generate, by any ordinary standards, an astonishing amount of headline coverage. There’s the ongoing Mueller/Russia investigation, of course. Now there’s the separate SDNY investigation of Michael Cohen (and Trump). We also have the merry-go-round of who’s in and who’s out at the White House, the list of women paid hush money for supposedly not having sex with the President, a reignited feud between Trump and James Comey due to Comey’s new book, etc., etc. Michael Wolff is so yesterday’s news.

Yet all of that, arguably, took a back seat to the story that has transfixed both the U.S. and the U.K. — the Facebook/Cambridge Analytica scandal, including CEO Mark Zuckerberg’s live-streamed attempt at damage control before House and Senate lawmakers. All this because two of the most consequential, and closely contested, political events of this generation — the Brexit vote and the election of Donald Trump — were potentially influenced by Russian hackers using Facebook (more or less) as Facebook’s own business model intends.

More than a little press coverage ensued. Just a few days after Zuckerberg’s testimony, in fact, Google listed over 4.4 million news stories in response to a “facebook zuckerberg congress” search. Which raises an obvious question — is there anything still to be said? Permit me to suggest that, despite those 4.4 million stories, the answer is yes.

First off, the role of AI in the Facebook scandal has gone almost entirely unnoticed and unreported. Mostly, when the press writes stories about the potential dangers of AI, it’s writing about the near-to-distant future. It’s generally ‘around-the-bend’ sort of stuff. In other words, stuff we should get ready to worry about…eventually.

Yet the essential vulnerability exploited by Russian hackers — Facebook’s ability to create individual psychological profiles of its 2.2 billion users and then deliver custom-tailored content to each of them — is entirely the result of AI that’s been operational at Facebook for several years. And it’s that AI-powered ability to know and exploit the psychic vulnerabilities of its users that Facebook has been selling to advertisers all along. In fact, such individually-targeted narrow-casting is both possible and persuasive entirely because of AI.

But there’s another aspect to the story that has gotten even less coverage. I suspect, in fact, that it has been written about a grand total of…exactly zero times. And that is the essential role of people of faith in the discussion about whether and how to employ AI’s immense powers.

In prepared remarks submitted prior to his Congressional testimony, Mark Zuckerberg said:

“Facebook is an idealistic and optimistic company. For most of our existence, we focused on all the good that connecting people can bring . . . But it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well. That goes for fake news, foreign interference in elections, and hate speech.”

It’s worth noting how disingenuous one can be while sounding contrite. Facebook likes to present, for public consumption, the idea that its mission is to (altruistically) connect more and more people in the furtherance of more and more community. It may even believe, sort of, in that mission. But Facebook’s business model is something else entirely. As a business, Facebook’s ultimate aim is to discover and catalog the psychic vulnerabilities of every one of its users — and then deliver that knowledge to anyone with something to sell (politicians included). Behind the ‘world peace’ facade, Facebook is entirely about monitoring and psychic manipulation for profit — what has rightly been called surveillance capitalism.

But I digress. Both in his prepared statement, and in accompanying interviews with members of the press, Zuckerberg made clear that for the first ten or so years of Facebook he was naively optimistic about human nature. His deep, unquestioned assumption was that if he built tools for connecting people, good would necessarily ensue (Facebook’s business model notwithstanding). Only recently, in his telling, did he come to realize that some people might use his tools for harm rather than good. The idea that Facebook could be weaponized dawned slowly and late.

This is a recurring story. Scientists and business people regularly bring new technological capabilities to humankind. Much of the time, these technological breakthroughs seem full of promise. But we eventually discover “unintended consequences” — the technologies can be, and often are, used for real harm as well.

Characteristically, this comes as a rude awakening for inventors. They had been so optimistic about the great good their invention would bring to the world. That their technology ends up being used harmfully, exploitatively, comes as a real surprise.

But it does not surprise people of faith. The world’s religions have been dealing with the issue of evil, and with the fallenness of humankind, for thousands of years. We know full well that humans are incredibly valuable, wonderfully capable and creative — and intrinsically flawed. As Aleksandr Solzhenitsyn pointedly observed: “The line dividing good and evil cuts through the heart of every human being.”

None of this has convinced people of faith that new technologies are necessarily bad. Far from it. But it does make us aware of their potential for abuse — an awareness that is there from day one, not simply after ten years of dangerous naivete. As a result, we are often those who argue for carefulness and deliberation when it comes to the deployment of new technologies.

It turns out, Facebook could have used a little more of that caution and thoughtfulness (and humility) when it was in its ‘move fast and break things’ phase. So could the entire field of AI.

Tim has been a longtime business leader in commercial real estate and, more recently, a speaker and author in numerous venues on the integration of faith and work and on better models for responsible business. Tim is a Red Sox fan from Boston, where he graduated from Harvard.

