
The Good News about Technology Ethics

When I was a graduate student in philosophy in the 1970s, I had the ambition to go on the job market with two specializations, philosophy of physics and technology ethics. That proved to be a mistake when, for the first and last time, I gave a technology ethics job talk, focusing on ethical issues surrounding nuclear power. It was not that the audience was hostile; they just didn’t understand how what I was doing counted as philosophy. Another problem back then was that, for the most part, tech sector corporate leaders, engineers, and government officials weren’t talking with the small number of academics working on ethics and technology, the prevailing attitude being that any talk of ethics was a threat, that “ethics is just a way of telling us what we can’t do.” There were a few prominent, thoughtful, and persuasive voices raising concerns, especially in the area of environmental ethics, such as Rachel Carson and Barry Commoner. And the public interest group Science for the People was very active in raising ethical questions about a wide range of issues, including technology in war (think of Agent Orange and napalm). But leaders in industry, government, and the academy weren’t listening.

The good news is that, today, the situation is remarkably different. The change started in the academy, with the establishment of a few programs, centers, and institutes devoted to science and values at places like Penn State (1969), Stanford (1971), and Virginia Tech (1975), and the launch of the journal Science, Technology, and Human Values (1976). It is noteworthy that these early initiatives came as much from scientists and engineers as from philosophers and others in the humanities. Notre Dame’s Reilly Center for Science, Technology, and Values was founded in 1985. The number of philosophers working on technology ethics slowly began to expand, but virtually nowhere would a primary focus on technology ethics be a safe route to tenure and promotion. If one made one’s mark elsewhere, then one’s colleagues might tolerate work on technology ethics as, at best, a kind of extracurricular hobby. And, still, industry and government largely turned a deaf ear to what the academic “nattering nabobs of negativism” were saying and writing (to repurpose Spiro Agnew’s classic expression).

Today, however, technology ethics has, at last, broken through and become part, albeit still a small part, of the philosophical mainstream. Indeed, it would not be an exaggeration to say that, within just the past few years, the field has acquired a kind of cachet. I am surprised and pleased to see how many of my colleagues who never before made technology ethics part of their professional portfolio are now doing so, especially in the classroom. It’s the hip and cool thing to do. New endowed chairs and well-funded programs, centers, and institutes are popping up from Beijing, Vienna, Delft, Oxford, and Cambridge to MIT, Arizona State, and Stanford. There is actually something of a recruiting war going on now for the very top people in the field.

It was just this past September that Notre Dame launched its own Notre Dame Technology Ethics Center (NDTEC), with major financial backing from the university and private donors, including a soon-to-be-announced major gift from a very well-known, leading technology firm. We plan to make as many as fifteen new hires associated with NDTEC, including perhaps five or six in philosophy.

That NDTEC will be supported partly by a large endowment from a major technology firm is a sign of another happy turn of events, for corporate support for work in technology ethics is materializing in a number of forms. In some places, as at Oxford and Notre Dame, it supports university-based centers and institutes. In other places, as at IBM, it takes the form of recruiting an AI Ethics Global Leader from academia, the computer scientist Francesca Rossi. Even Google sought to establish an AI ethics advisory board, the Advanced Technology External Advisory Council, but, sadly, that quickly failed thanks to internal and external controversy over the membership of the council. Nonetheless, the effort shows that Google’s leadership recognizes the need to engage with ethical issues. And it’s not just industry that is on board; government is joining as well. For example, it was eight years ago that the Defense Advanced Research Projects Agency (DARPA) commissioned the National Research Council and the National Academy of Engineering to do a two-year study and report on the role of ethics in weapons research and development. And just within the last year we have seen the release of the European Commission’s new Ethics Guidelines for Trustworthy Artificial Intelligence and the Chinese government’s Beijing AI Principles.

Of course, these are all still just first steps toward full engagement with ethical questions about AI and other emerging technologies. Especially important, in my opinion, and a focus of my current research, lecturing, and writing, is thinking together with partners in industry and government about how to develop effective structures for integrating a sophisticated, sincere, and distributed engagement with ethics into the everyday workflow of technology corporations and relevant regulatory bodies, as opposed to having carping philosophers chastise industry or having industry fly in a consulting ethics specialist from time to time. The goal is to encourage the development of ethical competence and ethical engagement as a deeply embedded part of corporate culture, part of the business plan, as it were.


Don Howard, Ph.D.

is Professor of Philosophy, a Fellow of the University of Notre Dame’s Reilly Center for Science, Technology, and Values, and an Affiliate of the newly formed interdisciplinary Notre Dame Technology Ethics Center. Professor Howard has been writing and teaching about the ethics of science and technology for over three decades. Among his current research interests are ethical and legal issues in cyberconflict and cybersecurity, as well as the ethics of autonomous systems. Professor Howard earned his Ph.D. from Boston University.
