Reflecting on My Time at the Allen Institute for AI

Over the past nine months I have been one of the Entrepreneurs in Residence at the Allen Institute for Artificial Intelligence (AI2), one of the world’s leading AI research institutes. I had the wonderful opportunity to engage with and learn from some of the top minds in the world of AI, as well as to explore real-world applications of that cutting-edge technology. I learned an enormous amount throughout my time at AI2 and saw firsthand what it looks like to use AI for the Common Good.


The institute was founded by Paul Allen alongside its founding CEO Oren Etzioni with the mission of using AI for the Common Good, and thus a great deal of the work done at the institute is published and shared with the world. AI2 has also been a leader in the social use of AI, and within that mission has focused much effort on helping out during the COVID-19 outbreak. Semantic Scholar, one of the core teams at AI2, has used its AI to help doctors and researchers identify the right studies, papers, and other materials they need to fast-track our global response. Further, Oren has been a prominent voice both locally and nationally on the power of AI and responsible AI, and has even proposed a leading framework for the regulation of AI.


My fascination with AI, however, started long before my time at AI2. I grew up in Israel where in the mid-nineties my dad was a civil engineer, working on power grids, utilizing what we would now call AI. This was a rarefied enough pedigree that in the late nineties my family moved to Spokane, Washington, a place none of us had ever heard of, as my dad joined a startup using AI to design modern power grids. But my interest in AI goes back even before this move across the world.


I distinctly remember a conversation I had with my dad when I was pretty young. It was the first time I had heard of the concept of artificial intelligence and I remember asking my dad, when would normal people like me, aka not super engineer types, get to play with this AI thing?


My dad was not the first person to say this, but he was the first person to say it to me: “One day we will all have AI assistants in our pockets.” That idea sparked a lifelong fascination in me; specifically, with how we ordinary folk could utilize this technology in our own lives.


In 2012, I founded Utrip and got my first opportunity to utilize AI in a real-world application. At Utrip we used several types of AI within our platform. We had an algorithm (really a set of algorithms) called UtripAI which took preference information from travelers to help them plan highly personalized trips to their dream destinations. UtripAI not only took preferences and desires into account but also geography, diversity, seasonality, and more.


We also had a system called SnowGlobe, which used Machine Learning (a subfield of AI) and Natural Language Processing (an area within machine learning) to read reviews, stories, itineraries, descriptions, and more to build a deep understanding of a point of interest. Using NLP, we learned what types of travelers were attracted to one place versus another, what days of the week were best to visit, when the new menu specials came out, and even what night the famous noodle maker was cooking.
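To illustrate the kind of signal NLP can pull out of review text, here is a minimal Python sketch (my own toy example, not SnowGlobe’s actual code; the reviews and place names are made up): it surfaces the words that distinguish one point of interest’s reviews from everyone else’s, a crude first step toward learning what attracts travelers to a place.

```python
from collections import Counter
import re

def tokenize(text):
    # Lowercase and split into word-like tokens.
    return re.findall(r"[a-z']+", text.lower())

def distinctive_terms(reviews_for_place, reviews_for_others, top_n=3):
    """Rank terms by how much more often they appear in this place's
    reviews than in everyone else's (a crude relevance signal)."""
    here = Counter(t for r in reviews_for_place for t in tokenize(r))
    there = Counter(t for r in reviews_for_others for t in tokenize(r))
    # Down-weight terms that are common everywhere; skip very short words.
    scores = {t: c / (1 + there[t]) for t, c in here.items() if len(t) > 3}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

noodle_bar = ["the famous noodle maker cooks on Thursdays",
              "handmade noodles, go on Thursday night"]
museum = ["quiet galleries, great for families",
          "the new exhibit opens in spring"]

print(distinctive_terms(noodle_bar, museum))  # → ['famous', 'noodle', 'maker']
```

A real system would use far richer models, but even crude term statistics begin to reveal why travelers talk about a place.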


Two things I learned about AI from both Utrip and AI2 are that AI is not a ‘thing’ and that AI is just a tool. More than a “thing”, AI is a broad area of study spanning numerous academic fields and real-world applications. Relatedly, because AI is just a tool it cannot in itself be good or evil, wicked or kind. Like every other tool created by mankind, it can be used for good as well as for other purposes. Like other tools, it can combat bias and it can perpetuate it. It can make our world a better place in some ways and worse in others: think medical breakthrough versus mass censorship. It’s all dependent on how we use it.


Another important lesson reinforced by my time at AI2 is that since AI is such a wide field, half the battle for AI practitioners is identifying the right kind of AI for a given situation. At Utrip, we saw that sometimes simple solutions like Collaborative Filtering (the recommendation AI you see on Netflix or Amazon when it says ‘people who bought this also bought that’) were actually the most effective. Each kind of AI has its own strengths and weaknesses that need to be examined from all sides as it is being selected.
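The simplest form of “people who bought this also bought that” can be sketched in a few lines. This is a toy illustration of item-to-item collaborative filtering via co-purchase counts (not Utrip’s, Netflix’s, or Amazon’s actual system; the basket data is invented):

```python
from collections import defaultdict
from itertools import combinations

def also_bought(baskets):
    """Count how often each pair of items appears in the same basket."""
    co = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def recommend(co, item, top_n=2):
    # Items most often bought alongside `item`, most frequent first.
    ranked = sorted(co[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:top_n]]

baskets = [
    ["guidebook", "phrasebook", "adapter"],
    ["guidebook", "adapter"],
    ["guidebook", "phrasebook"],
]
print(recommend(also_bought(baskets), "guidebook"))
```

Part of collaborative filtering’s appeal is exactly this simplicity: it needs no understanding of the items themselves, only of who chose them together.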


During my residency, data bias was always on my mind. It was stressed that fighting bias in the data we use is critical for both ethical and practical reasons. We must always remember that when we use biased data we train a biased machine. It is up to us to design machines that live up to our values. Similarly, when thinking about the regulation of AI, as with all tools, we must place responsibility directly in the hands of the operator. The ‘AI ate my homework’ excuse will not be acceptable.


One more critical thing I learned from Oren is the importance of AI auditability. AI is often a black box, meaning that it can become so complex that the operator cannot always tell exactly why the AI made a certain decision. Rather than AI ‘transparency’, which has legal, intellectual, and technical challenges, we must insist on AI auditability to ensure that its work product meets the highest ethical and practical standards.


When thinking about deploying AI I always ask myself this question: which parts of this process are humans more naturally apt to succeed at, and in which parts do computers have the edge? To use Utrip as an example, we believed that humans have a strong edge when it comes to communication, judgment, and creativity, but AI was the hands-down winner at looking across all 42 stops on your Parisian adventure and managing the geographic, traffic, and scheduling conflicts that arise.
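That geographic piece of the problem is something a computer handles effortlessly. As a toy sketch (not UtripAI itself; the coordinates are approximate and the simple nearest-neighbor heuristic with straight-line distance is only reasonable at city scale), here is how stops might be ordered so a traveler isn’t zig-zagging across town:

```python
import math

def nearest_neighbor_route(stops, start):
    """Greedy ordering: always walk to the closest unvisited stop."""
    remaining = dict(stops)
    route = [start]
    here = remaining.pop(start)
    while remaining:
        # Euclidean distance over (lat, lon) — fine as a toy at city scale.
        nxt = min(remaining, key=lambda s: math.dist(here, remaining[s]))
        route.append(nxt)
        here = remaining.pop(nxt)
    return route

stops = {
    "Louvre": (48.861, 2.336),
    "Eiffel Tower": (48.858, 2.294),
    "Notre-Dame": (48.853, 2.350),
    "Arc de Triomphe": (48.874, 2.295),
}
print(nearest_neighbor_route(stops, "Louvre"))
```

With four stops a human can eyeball the answer; with 42 stops plus opening hours and traffic, the machine wins every time.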


Lastly, as I think about AI and Faith a few things come to mind. Humans have spent thousands of years thinking about moral codes and ethics. As AI continues to mature, develop, and envelop the world, we should not start those key discussions pertaining to ethics and morality from step one, but rather be informed by the wisdom of our ancestors.


When I look at my own faith, Judaism, I take so many great lessons that pertain to technology. The concept of the Shabbat, the Sabbath, is as old as civilization. It is an understanding that we humans need ritual and a cadence to our week. That a good life, an examined life, requires some down time to reflect and recharge. As AI and digital technology continue to expand into every aspect of life, I hope we don’t lose the gift of Shabbat.


Another key lesson from Judaism is that of Pikuach Nefesh, the preservation of a soul. It is the basic idea that saving and caring for a human life transcends all else. I’ve read that the Hippocratic Oath draws some of its history from this concept: that above all else we must protect and care for people. This too is a lesson that we should embed in our future technology as it evolves.


Finally, I deeply value the Jewish virtue of learning, of study, of debate and question-asking. The core book of Jewish law, the Talmud, is itself a debate between ancient scholars who asked themselves how to live a good life and how to create a more just society. As AI takes over many day-to-day tasks, we must not abandon our own thinking, debating, and reasoning skills. AI should lift us up and enable greater thought and discussion, as opposed to making life so easy that it robs us of Avoda, the Jewish word that reflects both work and study.


Because in the end, AI is just a tool. What matters is what we do with it.
