When I was a graduate student in philosophy in the 1970s, I had the ambition to go on the job market with two specializations, philosophy of physics and technology ethics. That proved to be a mistake when, for the first and last time, I gave a technology ethics job talk, focusing on ethical issues surrounding nuclear power. It was not that the audience was hostile; they just didn’t understand how what I was doing counted as philosophy. Another problem back then was that, for the most part, tech sector corporate leaders, engineers, and government officials weren’t talking with the small number of academics working on ethics and technology, the prevailing attitude being that any talk of ethics was a threat, that “ethics is just a way of telling us what we can’t do.” There were a few prominent, thoughtful, and persuasive voices raising concerns, especially in the area of environmental ethics, such as Rachel Carson and Barry Commoner. And the public interest group Science for the People was very active in raising ethical questions about a wide range of issues, including technology in war – think of Agent Orange and napalm. But leaders in industry, government, and the academy weren’t listening.
The good news is that, today, the situation is remarkably different. The change started in the academy, with the establishment of a few programs, centers, and institutes devoted to science and values at places like Penn State (1969), Stanford (1971), and Virginia Tech (1975), and the launch of the journal Science, Technology, and Human Values (1976). It is noteworthy that these early initiatives came as much from scientists and engineers as from philosophers and others in the humanities. Notre Dame’s Reilly Center for Science, Technology, and Values was founded in 1985. The number of philosophers working on technology ethics slowly began to expand, but virtually nowhere would a primary focus on technology ethics be a safe route to tenure and promotion. If one made one’s mark elsewhere, then one’s colleagues might tolerate work on technology ethics as, at best, a kind of extracurricular hobby. And, still, industry and government largely turned a deaf ear to what the academic “nattering nabobs of negativism” were saying and writing (to repurpose Spiro Agnew’s classic expression).
Today, however, technology ethics has, at last, broken through and become part, albeit still a small part, of the philosophical mainstream. Indeed, it would not be an exaggeration to say that, within just the past few years, the field has acquired a kind of cachet. I am surprised and pleased to see how many of my colleagues who never before made technology ethics part of their professional portfolio are now doing so, especially in the classroom. It’s the hip and cool thing to do. New endowed chairs and well-funded programs, centers, and institutes are popping up from Beijing, Vienna, Delft, Oxford, and Cambridge to MIT, Arizona State, and Stanford. There is actually something of a recruiting war going on now for the very top people in the field.
Just this past September, Notre Dame launched its own Notre Dame Technology Ethics Center (NDTEC) with major financial backing from the university and private donors, including a soon-to-be-announced major gift from a very well-known, leading technology firm. We plan to make as many as fifteen new hires associated with NDTEC, including perhaps five or six in philosophy.
That NDTEC will be supported partly by a large endowment from a major technology firm is a sign of another happy turn of events, for corporate support for work in technology ethics is materializing in a number of forms. In some places, as at Oxford and Notre Dame, it goes to support university-based centers and institutes. In others, as at IBM, it takes the form of recruiting from academia an AI Ethics Global Leader, the computer scientist Francesca Rossi. Even Google sought to establish an AI ethics advisory board, the Advanced Technology External Advisory Council, but, sadly, that quickly failed thanks to internal and external controversy over the membership of the council. Nonetheless, the effort shows that Google’s leadership recognizes the need to engage with ethical issues. And it’s not just industry that is on board. Government is joining as well. For example, it was eight years ago that the Defense Advanced Research Projects Agency (DARPA) commissioned the National Research Council and the National Academy of Engineering to do a two-year study and report on the role of ethics in weapons research and development. And just within the last year we have seen the release of the European Commission’s new Ethics Guidelines for Trustworthy Artificial Intelligence and the Chinese government’s Beijing AI Principles.
Of course, these are all still just first steps toward full engagement with ethical questions about AI and other emerging technologies. Especially important, in my opinion, and a focus of my current research, lecturing, and writing, is thinking together with partners in industry and government about how to develop effective structures for integrating a sophisticated, sincere, and distributed engagement with ethics into the everyday workflow of technology corporations and relevant regulatory bodies. That stands in contrast to having carping philosophers chastise industry, or having industry fly in a consulting ethics specialist from time to time. The goal is to encourage the development of ethical competence and ethical engagement as a deeply embedded part of corporate culture, part of the business plan, as it were.