Brad Smith, president of Microsoft, has written an important book, maybe even a watershed book. He did so with the help of Carol Ann Browne, the company’s senior director of communications. Leaders in both tech and government will be talking about the book, and grappling with it, for a very long time. And it often surprises — not because its opinions are startling, but because hearing them from a tech leader is so uncommon.
Start with the title: Tools and Weapons. Haven’t we all come over the last several years to realize that any tech innovation will eventually be weaponized? That even the best-intentioned breakthroughs have unintended consequences? That those intent on harm will, sooner or later, find ways to turn good tech into bad? And yet, name another tech leader who candidly admits to the damage done by technology and pointedly says tech companies must do more, much more, to protect the public from harm.
Moreover, he’s willing to get specific. Early in the book, he describes how Microsoft (and others) were caught off guard and chagrined to learn — via Edward Snowden’s leaks — that the NSA was siphoning a great deal of information from their (supposedly secure) data center transmissions. Extensive encryption followed — and continues to this day to be a point of friction between tech and government.
He is also willing to call out others: “Facebook had not designed its services as a platform for foreign governments to use to disrupt democracy, but neither had it put in place measures that could prevent or even recognize such activity” (emphasis added). That giant blind spot gave us Cambridge Analytica and the entire Russian disinformation campaign, which, together, Smith labels digital tech’s Three Mile Island. He notes that the US nuclear industry never recovered from that disaster, and suggests that tech could face a similar fate if it doesn’t soon mend its ways.
Changes for Tech
What would that look like? Smith argues “for a cultural change across the tech sector.” He is pointedly critical of Silicon Valley’s fixation on “growth at all costs” — what Reid Hoffman calls “blitzscaling” and what Mark Zuckerberg had in mind with his “move fast and break things” motto. He is equally critical of the “almost theological belief that new technology will be entirely beneficial.” Instead, he notes that “even the best technologies have unintended consequences . . . And this is before the new technology is misused for harmful ends, as it inevitably will be.”
Smith argues, therefore, for raising the bar on what is expected from tech companies, especially from those whose effects are dramatic and pervasive. “When your technology changes the world, you bear a responsibility to help address the world that you have helped create.” He then adds, “it is more than possible for companies to succeed while doing more to address their societal responsibilities.” (Maybe not coincidentally, as of the publication of Smith’s book, Microsoft is the only company in the world valued at more than one trillion dollars.)
But Smith also makes clear that self-regulation is insufficient. Probably his biggest, and certainly his most surprising, message is that tech needs government regulation. Government, he says, must provide the guardrails within which the tech industry can operate for society’s benefit.
Interestingly, he makes very clear that much of how he, and Microsoft, now views government regulation was shaped by the company’s near-death experience two decades back. In the late 1990s the Department of Justice waged a multi-year battle to break up Microsoft. The company survived intact, barely, but it was changed profoundly in the process. It went from a company that prized bare-knuckled, winner-take-all competition to one that, more often, aims to cooperate and collaborate instead. In particular, it learned that government has a legitimate role in protecting society from the dangers and excesses of business, tech very much included. If you will, Microsoft grew up.
Now, Smith says:
Tech leaders may be chosen by boards of directors selected by shareholders, but they are not chosen by the public. Democratic countries should not cede the future to leaders the public did not elect. All of this makes it important for governments to take a more active and assertive approach to regulating digital technology.
Smith helpfully points out that there are many markets in which government regulation has created a healthier dynamic for consumers and producers alike. The auto industry, for example, spent decades resisting calls for regulation, but today there is broad appreciation that laws have played an essential role in ensuring ubiquitous seat belts and air bags and greater fuel efficiency.
Smith notes the same can be said with respect to other industries, including aviation, food, and pharmaceuticals. Still, Smith is careful not to let tech off the hook: “The need for government leadership does not absolve technology companies of our own ethical responsibilities.”
Smith then says the tech arena most in need of self-regulation and government action is artificial intelligence. He points out that AI is unlike singular inventions from the past such as the automobile, the telephone, or even the personal computer. Instead, it behaves more like electricity — increasingly powering the tools and devices that run almost every aspect of society and our lives.
Interestingly, Smith takes issue with a key theme from another important AI book, AI Superpowers, by Kai-Fu Lee (see our review here). Kai-Fu’s book argues that because AI success goes to those with the most data, China and the U.S. will necessarily dominate our AI-based future — each struggling mightily for supremacy. Smith acknowledges that risk, but suggests a different possible outcome — one in which companies adopt an “open data” approach similar to the “open source” model that has come to dominate much of software. In fact, he makes a compelling argument for the upsides of “open data,” noting that it is increasingly the approach taken by prominent academic and medical research institutions.
Smith also describes the process by which Microsoft has developed a set of six guiding principles for their work on AI. The first five of these ethics touchstones are Fairness (the bias problem); the importance of Reliability and Safety; the need for strong Privacy and Security; Inclusiveness; and Transparency.
The last and most important of these principles is Accountability. About which he poses this compelling question: “Will the world create a future in which computers remain accountable to people and in which the people who design these machines remain accountable to everyone else? This may be one of the defining questions of our generation.”
Yes, and Amen.