Book Review

A Review of Toby Walsh’s Machines Behaving Badly: The Morality of AI

Overview

Toby Walsh, a renowned AI researcher and professor at the University of New South Wales in Sydney, Australia, takes a critical look at the promises and pitfalls of AI. He argues that we need to be more mindful of the potential consequences of its rapid advancement. The book covers a wide range of topics, from the impact of automation on employment to the misuse of AI by malicious actors.

Of note, Toby Walsh has coauthored over 500 scientific articles on various topics in AI, including automated reasoning, constraint programming, and machine learning. He has also served as an editor for several academic journals in AI and has received numerous awards for his research contributions, including the 2015 Humboldt Research Award and the NSW Premier’s 2016 Prize for Excellence in Engineering and Information and Communications Technology.

One of the book’s strengths is that Walsh writes in a way that is accessible to both experts and non-experts. He explains complex concepts clearly and uses real-world examples to illustrate his points. He also offers practical solutions for addressing the potential dangers of AI, such as developing ethical guidelines and implementing regulations. On the question of responsibility, Walsh suggests that the book could instead have been titled “The People Making Machines Behave Badly.”

Overall, Machines Behaving Badly: The Morality of AI is a thought-provoking and engaging read that raises essential questions about the role of AI in society. Walsh explores the many ways AI could go wrong, including autonomous weapons, fake news, and the potential for AI to replace human jobs, and he provides a comprehensive overview of the current state of the technology: its history and development, its various applications, and the latest advances in machine learning and deep learning. His insights and analysis are informative and timely, and his balanced approach makes the book engaging and intellectually stimulating. It is a must-read for anyone interested in the future of technology, its impact on humanity, and the consequences of unforeseen or unintended outcomes.

Walsh recommends several guidelines to ensure AI is developed and used ethically. He argues that the algorithms used in AI systems should be transparent and open to scrutiny, with clear lines of responsibility, to ensure they are free from bias and other ethical concerns. He also calls for safety features in AI systems, including fail-safes and other mechanisms to prevent unintended consequences. Finally, Walsh believes collaboration between industry and academia should involve joint research projects and the sharing of best practices and guidelines.

“We cannot yet build machines that match the intelligence of a two-year-old.” – Toby Walsh.

As big tech companies deploy AI for profit rather than societal good and governments develop AI for power and control, Walsh’s most exciting ideas concern whether machines can operate in moral ways. One of the most significant and fascinating experiments in this area is the Moral Machine project run by MIT’s Media Lab, a digital platform that has crowdsourced the moral choices of 40 million users. For example, it asks participants how a self-driving car should decide whom to protect in an unavoidable crash. Walsh is skeptical about the applicability of such neat moral decisions and whether they could ever become part of a machine’s coded logic. We often say one thing and do another. Sometimes we do things we know we shouldn’t, like ordering ice cream while on a diet. Walsh also notes that moral crowdsourcing depends on the choices of a self-selecting group of Internet users, who do not reflect the diversity of different societies and cultures. He concludes that moral decisions made by machines cannot be the blurred average of what people tend to do. Morality changes: democratic societies no longer deny women the vote or enslave people, as they once did, whereas AI consistently recycles previous decisions and reinforces currently held biases.

“We cannot today build moral machines… And there are many reasons why I suspect we will never be able to do so.” – Toby Walsh.

In his history of AI, Walsh explains how those with the skills required to build AI programs form a tiny part of the world’s population. These neurodivergent individuals think about AI technology and its usage differently from the general population, which contributes to the prominence of techno-libertarianism: the idea that technology should not be restrained by regulations or any kind of control, and that only the ingenuity of its innovators should limit it. The use of AI by governments for mass surveillance through facial recognition and biometric technology, coupled with the People’s Republic of China’s social credit system, has intensified the human propensity for authoritarianism. The development of AI by big tech for the purpose of surveillance capitalism, effectively ending the right to privacy, deflates Walsh’s optimism that humans can use the technology ethically. Meanwhile, Turkey, Russia, the United States, and Australia are fueling an arms race for dominance in lethal autonomous weapons. Walsh provides further pervasive examples of how algorithms influence financial markets as well as the decisions of health insurance providers, judges, and law enforcement. The pairing of AI technologists with the energy sector has greatly increased oil production and accelerated the extraction and pollution of other natural resources at environmentally catastrophic rates.

In terms of trusting AI in the future, Walsh first considers the need for AI to encompass a range of desirable characteristics, such as “explainability, auditability, robustness, correctness, fairness, respect for privacy, and transparency.” Second, he suggests that regulation is needed to rein in the abuses of current and future AI systems. Finally, he argues that we must educate the public on how to use emerging technology. These arguments are convincing because such education could give the community a better understanding of how AI powers their digital platforms, minimize the risks of their data being exploited, and explain why their informed consent is necessary. While the author considers the many ways AI can go wrong and emphasizes the need for further technical, regulatory, and educational advances, he provides several counterpoints to balance this perspective: there is still hope that engineers, scientists, governments, and communities can collaborate and use AI for the overall benefit of society. However, therein lies the limitation of “Machines Behaving Badly”: no one can predict how humanity will ultimately overcome the many ethical challenges of an automated society. This may prove a concerning prospect for the future of humanity, especially when it comes to using AI superiority for war, power, and control.

 


Ron Roth is a lead vocalist, philosopher, writer, ethicist, and Board Member at the Unitarian Universalist Fellowship of Boca Raton, Florida, and a 20-year veteran of designing and architecting software systems.
