Overview
Susan Linn, a psychologist and expert on the impact of media on children, has written a book whose title says it all [1]. Her main, well-argued thesis is that big business, and in particular big tech corporations, have created computer technologies and marketing strategies that harm children by luring them to digital devices from birth, hooking them, and training them to grow up to be uncritical consumers (pp. 207-208). She nicely catalogs the most flagrant addictive practices of big tech (pp. 46, 73). Especially relevant to the AI&Faith community are her explicit reflections on AI, so I focus my review on the points she makes about AI.
Linn notes that AI and human intelligence are distinct in important ways (p. 55). One difference she points to lies in how algorithms are constructed: they are built on what users like, or on what companies have learned about users. By reinforcing only what users already find appealing, algorithms readily lead users down rabbit holes in which they never get opportunities for free, creative play. Instead, users are in effect programmed to keep doing what they have been doing, or to be exposed only to tailored advertisements that serve the interests of the business (pp. 136-137).
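To make this feedback loop concrete, here is a minimal sketch, not drawn from Linn's book, of how a naive engagement-maximizing recommender narrows what a user sees: each click raises the weight of the clicked topic, so the pool of recommendations contracts toward whatever the user has already engaged with. The catalog, topics, and behavior here are all hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical catalog: each item belongs to one topic.
CATALOG = {f"video_{i}": topic
           for i, topic in enumerate(["crafts", "math", "cartoons", "toys"] * 25)}

def recommend(scores, k=5):
    """Pick k items, weighting each by how much its topic has been clicked."""
    weights = [1.0 + scores[topic] for topic in CATALOG.values()]
    return random.choices(list(CATALOG), weights=weights, k=k)

# Simulate a child who clicks "cartoons" whenever one is offered.
scores = defaultdict(float)
for _ in range(20):
    shown = recommend(scores)
    for video in shown:
        if CATALOG[video] == "cartoons":
            scores[CATALOG[video]] += 1.0  # reinforcement: a click boosts the topic

print(dict(scores))  # after a few rounds, "cartoons" dominates the weights
```

Even this toy loop exhibits the dynamic Linn describes: nothing the system reinforces is ever chosen by the child, yet within a handful of rounds the child is shown almost nothing else.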
Linn then observes that digital technologies not only interfere with learning by cutting down on opportunities to expand horizons beyond what the child or the programmer knows, but also interfere with family and social relationships, which are themselves learning opportunities for a developing mind (pp. 137, 142). But she is not all doom and gloom. She notes a 2020 study led by Sandra Calvert which concluded that children interacting with a trusted and well-known screen character like Dora the Explorer learned math skills faster than kids in a control group [2]. Further study of how and why familiarity with such trusted on-screen characters might aid learning in other subjects, including adult learning, seems warranted for AI research, especially as that research further enhances the ability of an AI to engage human beings. But Linn is quick to note that the role a screen character might play in learning remains problematic if the technology is created to market a product or to deepen the user's dependence on the technology itself (pp. 142-143).
In Linn’s view, digital technology is a tool that preserves the status quo and reflects the worldview and aims of its programmers. This is precisely the problem for me and others with religious and ethical agendas. We should expect that a racist and secular society will develop algorithms that reflect, and even promote, those values. (In previous columns for the Newsletter, I have pointed out AI’s inability to date to produce reactions and behaviors shaped by the brain chemicals dopamine and oxytocin. Lacking the emotions those chemicals make possible, most operations of an AI are inherently secular.) Linn points out several examples of racism in various search engines which indicate that they are programmed in ways that reflect racial biases (pp. 146-147). She reports on a 2013 study by Latanya Sweeney [3], which showed that ads suggesting “a person has been arrested” were more likely to appear in searches for names associated with Black Americans. Linn also notes that when Alexa is asked about “African-American boys,” it reports that “many are struggling readers/learners.” Indeed, entire books have been written on this subject (see, for example, Safiya Noble’s Algorithms of Oppression [4]).
In any case, Linn’s book presses an important question upon us all, perhaps leading us to agree with her and NYU AI researcher Meredith Broussard that “Computers are excellent at doing math, but math is not a social system. And algorithmic systems repeatedly fail at making social decisions.” To be sure, algorithms can be tweaked to correct for racial (as well as secular) bias. But OpenAI’s leadership is correct: the apps have biases. We have a lot of monitoring work to do, even as the latest AI agents are rolled out, mass-produced, and marketed. If you are inclined to think the issues raised in this review too alarmist, read the data and observations in Linn’s book (and take another look at the June 2021 article by Khari Johnson in Wired [5]). There is enough in this work to provoke concern, and then to spur a commitment to finding solutions.
Acknowledgments
A big thanks to Dr. Mark Ellingsen for taking the time to read this work and write this thoughtful review. Thanks to Emily Wenger and Marcus Schwarting for proofreading, editing, and publishing this work.
References
1. Susan Linn. Who’s Raising the Kids?: Big Tech, Big Business, and the Lives of Children. The New Press, 2022.
2. Sandra L. Calvert, Marisa M. Putnam, Naomi R. Aguiar, Rebecca M. Ryan, Charlotte A. Wright, Yi Hui, Angella Liu, and Evan Barba. “Young children’s mathematical learning from intelligent characters.” Child Development 91, no. 5 (2020): 1491-1508.
3. Latanya Sweeney. “Discrimination in online ad delivery.” Communications of the ACM 56, no. 5 (2013): 44-54.
4. Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
5. Khari Johnson. “The Efforts to Make Text-Based AI Less Racist and Terrible.” Wired, June 2021.