For several years, Jeff Bezos has convened an invitation-only program for C-level technology and business peers on the subjects he finds most interesting: Machine learning, Automation, Robotics, and Space (MARS). Our Founding Member Linda Ranz attended the 2019 MARS conference in March in her role as chief of staff to Amazon Senior Vice President for Devices and Services David Limp.
For the first time this year, however, Amazon followed MARS two months later with re:MARS – a review of many of the same subjects for a broader audience, featuring senior Amazon executives and partners.
AI and Faith Board Chair David Brenner attended and came away with impressions relevant to some of AI and Faith’s focus areas:
This conference was primarily about the world of Amazon products, with only a single panel expressly on the ethics of AI. But a running subtheme throughout was diversity, responsibility, and the need to build customer trust. Over and over, Amazon senior executives voiced these values in the midst of touting Amazon’s remarkable ability to transform everything it needs for its own business into valuable products for others. A cynic would view such language as PR for the many millennial tech workers in the audience whom Amazon presumably wants to employ, but these speakers impressively showed they know how to at least talk the walk of ethical AI. The takeaway: Why doesn’t Amazon talk this way more at home?
Amazon Web Services vigorously pitched its array of 180 software components as a breakthrough opportunity for any business to get into the data analytics game without needing specialized data engineers (https://youtu.be/wa8DU-Sui8Q). All that is required is data, ordinary software coding skills, and Amazon's "plug and play" components; up to a point, Amazon can even supply the data sets. The apparent lesson here: data analytics tools are becoming standard business necessities, readily accessible to any company that does not want to be left behind, which makes the stakes for appropriate guardrails even higher.
Sophisticated modeling capability is becoming the test bed and classroom for much of machine learning, allowing development at vastly greater scale than the real world permits and suggesting solutions sooner than we might otherwise expect. Two examples:
1) Real driverless test cars are constantly "mapping" San Francisco streets in real time, and that data becomes the basis for city models in which virtual driverless cars "encounter" and learn from driving situations in numbers that are orders of magnitude beyond the interactions of the real cars.
2) "Soft grasp" is the holy grail of robotic handling. Several years ago, Google created a robotic "arm farm" in which arms grasp an ever-increasing array of objects 24/7, seeking to learn from rote experience. A UC Berkeley lab, however, has made faster progress by virtually modeling the same thing, a kind of "Monte Carlo simulation" of grasping.
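To make the "Monte Carlo simulation" analogy concrete, here is a minimal, purely illustrative Python sketch. Everything in it is our own invention, not the Berkeley lab's actual method: a toy "physics" model in which a noisy gripper either matches an object's width or misses, repeated many thousands of times to estimate a grasp's success probability, the way a virtual arm farm would.

```python
import random

def simulate_grasp(grip_width, object_width, tolerance=0.01, noise=0.02,
                   rng=random):
    """Crude stand-in for a physics simulator: a virtual grasp 'succeeds'
    when the gripper width lands close enough to the (noisily sensed)
    object width. Widths are in meters; the model is purely illustrative."""
    perceived = object_width + rng.gauss(0, noise)  # simulated sensor noise
    return abs(grip_width - perceived) <= tolerance + noise

def estimate_success_rate(grip_width, object_width, trials=10_000, seed=0):
    """Monte Carlo estimate: run many noisy virtual grasps and average
    the outcomes to approximate the grasp's true success probability."""
    rng = random.Random(seed)  # seeded for reproducible estimates
    successes = sum(
        simulate_grasp(grip_width, object_width, rng=rng)
        for _ in range(trials)
    )
    return successes / trials
```

A well-matched grasp (`estimate_success_rate(0.05, 0.05)`) scores near its true success probability, while a badly mismatched one (`estimate_success_rate(0.25, 0.05)`) scores near zero; the point of the technique is that millions of such virtual trials cost almost nothing compared with real robot-hours.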
No matter how many videos you see of Boston Dynamics' robots leaping, somersaulting, and moving at speed through difficult terrain, in person they create an instant impression of organic beings. The sinuous movements of Boston Dynamics' four-legged military carrier robot (https://youtu.be/Ve9kWX_KXus) immediately trigger the impression of a deer picking its way through the woods, even though the machine is unadorned metal armature. Add to this the presentation by MIT Media Lab researcher Kate Darling on animal empathy as the pathway to human/robot trust, which was the talk of the conference (http://www.katedarling.org/). The takeaway: our many millennia of hunter/gatherer experience in identifying creatures by their movements leads us to intuitively assign life-form status to inorganic metal mechanisms, with ensuing empathy but also great potential for category confusion.