
Book Review: Genius Makers, An Accessible Account of How We Got Deep Learning

Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World, by New York Times technology reporter Cade Metz, is a highly entertaining and enjoyable account of the long backstory behind the rise of the modern deep learning approaches (neural networks) that power today's AI algorithms.  The starting point is the early days of AI research, which were heavily influenced by long-time MIT professor and Turing Award winner Marvin Minsky.  The book starts here because of Minsky's key role in publishing, with Seymour Papert, the highly influential 1969 book Perceptrons, which discredited the work of Frank Rosenblatt, now widely considered the father of deep learning for his conception of the first neural network, the perceptron.  Minsky's prominent dismissal was quite influential in discouraging neural network research throughout the 1970s.

Of course, research in neural networks would re-emerge, this time with improved software, hardware, and data.  More than that, the re-emergence would feature two of the modern leaders of AI and deep learning: Geoffrey Hinton and Yann LeCun.  Hinton and LeCun, both recent Turing Award winners themselves along with Yoshua Bengio, feature prominently in the book as they developed neural network algorithms, mathematical infrastructure, and application technologies long before it was clear that deep learning would power modern AI.  Hinton, for instance, helped develop and popularize the backpropagation algorithm, which, for those who remember their calculus, is essentially the chain rule adapted to training neural networks.  LeCun's early successes came in computer vision and optical character recognition while at Bell Laboratories.  Interestingly, neural networks experienced a revival in the late 1980s and early 1990s that again faded due to a lack of success in real-world applications.
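To make the chain-rule idea concrete, here is a minimal sketch of backpropagation for a toy two-layer network with one weight per layer.  Everything in it (the network, the weights, the single data point) is invented for illustration and is not from the book or from Hinton's work.

```python
import math

# Toy two-layer network: y_hat = w2 * tanh(w1 * x), squared-error loss.
# Backpropagation is just the chain rule applied layer by layer,
# from the loss back to each weight.

x, y = 2.0, 1.0      # a single (input, target) training example
w1, w2 = 0.5, -0.3   # the network's two weights
lr = 0.1             # learning rate for gradient descent

for step in range(200):
    # Forward pass: compute the prediction and the loss.
    h = math.tanh(w1 * x)
    y_hat = w2 * h
    loss = 0.5 * (y_hat - y) ** 2

    # Backward pass: chain rule through each layer.
    dL_dyhat = y_hat - y                 # d(loss)/d(y_hat)
    dL_dw2 = dL_dyhat * h                # chain through y_hat = w2 * h
    dL_dh = dL_dyhat * w2
    dL_dw1 = dL_dh * (1 - h ** 2) * x    # tanh'(u) = 1 - tanh(u)**2

    # Update each weight down its gradient.
    w1 -= lr * dL_dw1
    w2 -= lr * dL_dw2

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.5f}")
```

Modern frameworks automate exactly this bookkeeping across millions of weights, which is what makes training today's deep networks practical.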

Then came 2012: the year deep learning revolutionized AI and machine learning.  A good portion of the book focuses on the past decade because of the seminal work of Krizhevsky, Sutskever, and Hinton, whose AlexNet deep neural network won the 2012 ImageNet challenge with a performance that surpassed anything neural networks had accomplished to that point.  More than that, AlexNet outperformed all the traditional methods and was quickly improved in subsequent years to reach better-than-human performance on the benchmark.  AlexNet began the AI revolution we are currently in.  And the driver of this success wasn't simply the neural network; it was the abundance of data provided by ImageNet that powered this transformative success and unlocked the true potential of neural networks.

In fact, it became quite clear that what was needed to power deep learning was an extraordinary amount of data.  Once the magnitude of data required to train neural networks was understood, deep learning-powered AI quickly swept the community, transforming speech recognition, computer vision, and systems like AlphaGo, the first computer program to beat the world's leading players at the game of Go.  All of this is nicely detailed in the book.  Interestingly, even if Minsky had not quashed neural network research in the early 1970s, it is not clear that much success could have been achieved given the state of hardware, software, and data in that era.  After all, the 1990s also witnessed fading interest in neural networks simply because the computers and data sets of the day could not power deep learning.  By 2012 we finally had enough data to train deep learning algorithms, and this was the critical revelation for the broader community.

The 2012 breakthrough also opened a Pandora's box for AI, quickly raising serious concerns about the ethical use of the technology, the biases encoded in algorithms, data privacy (or the lack thereof), deepfakes, the lack of diversity in tech, and the spoofing of recognition algorithms.  Alarms were also raised about the possibility of such algorithms achieving superintelligence and potentially threatening human existence.

Despite these various concerns, it was quite clear that AI was a critical enabling technology for the future.  This ushered in an AI arms race among corporate behemoths: Google, Facebook, Microsoft, Amazon, Apple, Baidu, and others.  All of them competed for leading talent and paid extraordinary sums in AI salaries and acquisitions.  Metz does a very nice job of detailing the diversity of these issues and how corporations, their employees, and public tech figures have navigated the turbulent waters of AI technologies that have fundamentally shifted societal norms.  Indeed, congressional hearings over the past five years have repeatedly called corporate leaders to account for their companies' behavior, even though there is little legislation governing the use of data for training AI algorithms.

In the end, Metz highlights the very human side (and the exceptional cleverness) of those who developed AI to its current state.  More than that, the book highlights the incredibly difficult challenges and unforeseen problems that have emerged over the last decade as the technology has advanced at a pace well beyond what anybody could have anticipated.  He also does a nice job of threading through the book the concept of artificial general intelligence (AGI), a holy grail of AI, and what the field's pioneers think about the prospect.  It is quite striking that the authors of the famous Krizhevsky, Sutskever, and Hinton paper that kicked off the deep learning craze in 2012 hold divergent opinions: Krizhevsky, who the book suggests believes that deep learning is nothing like AGI, just a very complex curve-fitting procedure; and Sutskever, who the book suggests believes that deep learning algorithms form the basis of AGI.

This, after all, is the book's biggest meta-question: Is there any real intelligence in AI, or are these just algorithms that, given sufficiently large data sets, allow us to make accurate models?  It remains an open question.  In such a rapidly progressing field, perhaps a second edition of this book will have more to say on this pressing and compelling issue.  What is clear is that the AI arms race will continue unabated for the foreseeable future, driving technologies in new directions that will be both exciting and potentially nefarious.  Another decade of these developments should be quite interesting indeed.


Nathan Kutz, Ph.D., is a Founding Expert of AI and Faith, the Robert Bolles and Yasuko Endo Professor in the Department of Applied Mathematics at the University of Washington in Seattle, and author of the book Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data (Oxford Univ. Press, 2013). He is also the co-director of UW's new AI Institute for Dynamic Systems, recently created with a major grant from the National Science Foundation.
