Recap of Stanford Digital Economy Lab Online Conference: AI and The Future of Work, October 27, 2020

Editor’s Note:  We are pleased to synopsize important symposia and conferences, especially those featuring our Founding Members and others with expertise on the subject of the conference. This conference is notable for the high-level former and current government, business, and philanthropy leaders on its panels.  Lew recently served as Co-Manager of the Future of Work Project for Washington State’s Workforce Training and Education Coordinating Board and brings that perspective to this useful summary.

Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) hosted an online conference Tuesday, October 27, 2020.  The link to the four-hour recording is here and is well worth watching or reviewing.

Erik Brynjolfsson, a leader in future of work research, and new leader of Stanford’s Digital Economy Lab, moderated the event.  Here are the agenda and speakers.  Stanford University President Marc Tessier-Lavigne gave opening remarks discussing HAI and how the world will look very different post-COVID.  Stanford has been actively researching health and societal impacts of COVID.  President Tessier-Lavigne sees the accelerated application of knowledge and the great move online as the major trends that intersect with AI.  We are at the beginning of an AI revolution that will impact how we live, work, play and more. There are both opportunities and risks.  Stanford wants HAI to keep humans at the forefront of how AI will be used to benefit humanity.

Prof. Brynjolfsson discussed AI, machine learning, and a wide range of data and statistics on the economy, jobs, wages and more, making the point that technological advancement has been, at least up until about 2000, widely spread and has created benefits for large segments of the global population.  He also made the point that there is no economic law guaranteeing widespread benefits from productivity gains and technological adoption.  He showed a chart of how productivity and wages have diverged significantly in the last twenty years.

This was followed by remarks by Reid Hoffman, LinkedIn founder, and James Manyika of McKinsey, discussing the 4th industrial revolution and the “new equilibrium”.  Both are characterized by accelerating technology adoption: people must adapt faster, within a few years, and jobs and careers are disrupted on a shorter time frame than in previous generations.

It’s not jobs that will get automated; it’s tasks. Every job has some automation potential; the question is how much, and how people need to re-learn skills. Hoffman believes that if you want manufacturing and related jobs to grow in the U.S., you have to speed toward automation and robots to compete.  Because of this pace of change, a broader and more intentional conversation is needed about ensuring that the benefits of AI and automation are widespread.

James Manyika stated that it makes more sense to think about tasks and what needs to get done as opposed to whole jobs.  All jobs have rote tasks and parts that require social interaction and emotional intelligence, something machines are not good at.  Essentially, Manyika said the future of work is “jobs lost, jobs gained, jobs changed.”  The discussion ranged across the topics of job disruption, new job creation, digitization of the economy and continued pace of change.

Gillian Tett of the Financial Times moderated the next panel on policy challenges and solutions.  Condoleezza Rice, former Secretary of State, and Mary Kay Henry, President of SEIU, were among the panelists. Both agreed on the need for better systems approaches to workforce development. Rice was particularly critical of WIOA and the panoply of federal programs.  Discussion ensued about the lack of use of AI in government, except by the military.  There was agreement that the private sector was way ahead in AI use and adoption.

Secretary Rice put in context the issue of the role of government in how AI is developed.  She pointed out how differently China imposes AI as a method of control of its people compared to democratic governments.  She also pointed out how different the U.S. is from Europe when it comes to government involvement in the economy and business.  She did recommend that government, business, workers and academia work more closely together to develop system-wide solutions.

On the topic of Universal Basic Income (UBI), Ms. Henry stated that wages were the bigger issue, not UBI, while Rice pointed out that federal and state budgets were not able to support current spending, much less any UBI-type programs.  Both agreed that policymakers do not understand the implications of AI, the new economy, or the rise of non-traditional employment, where a new “floor” is needed to support workers of all types.  Ms. Henry said she wants to protect workers, not jobs.  Home care will grow, so there will be continued demand and a need for AI and “cobots” to handle it.  There are many privacy concerns.  How advanced technology is introduced matters a great deal in terms of worker adoption.

Henry believes many low wage workers are very capable and willing to give up rote tasks and help build or program robots and cobots.

As with the previous discussion, this one ranged across a number of topics, particularly around ethical use of AI and “surveillance tech” used in the workplace and in public. There was an effort to put AI’s development in context with other disruptors, noting that, as with those earlier technologies, it has taken years for AI to be ready for widespread use.  The panel agreed the media has overhyped the fear side of AI and done more to spread fear than good information.

Next up was a conversation between one of Stanford HAI’s directors, Fei-Fei Li, and Gov. Gina Raimondo of Rhode Island.  The governor said her state is being intentional about using AI and automation in state government.  One application is to help job seekers find training or jobs.  She believes laws need to be strengthened to protect workers around how advanced tech is used in and by government.  Rhode Island is accelerating STEM subjects in K-12 and promoting them to girls and disadvantaged populations.  Governor Raimondo emphasized that inequity is both exposed and exacerbated by COVID-19 and how important it is to use advanced technology to help reduce inequity.  Gov. Raimondo was particularly passionate that the public school system serve all children and is proud of how many more low-income students of color are graduating from high school and going on to post-secondary education.  She also sees the need to use data more effectively and is frustrated at how data sharing remains siloed and difficult.

Oussama Khatib and Rana el Kaliouby provided discussions and demonstrations of AI-based research applications, particularly with advanced robotics outside of manufacturing.  Haptics and simpler programming of robots were featured.  Human-machine collaboration was the primary message in this segment, showing the capabilities of robots in situations that are hazardous or inaccessible to humans.

The last segment was a conversation between former Google CEO Eric Schmidt and Erik Brynjolfsson.  Schmidt is particularly excited about AI’s application to science, such as biology and chemistry.  He believes AI is “additive” to current processes.  He sees a lot of promise in using AI to detect financial fraud.  Schmidt sees the pandemic accelerating the digital economy: 3D manufacturing, the internet of things, and digital commerce will all come faster and will both be disruptive and create opportunity.

Schmidt said government structures are too hierarchical to get things done, too slow to adopt advanced technology, lacking in a tech- and data-savvy workforce, and unable to respond quickly to shocks.

Schmidt believes tech regulation is a difficult topic.  He views the China-U.S. split as not a good thing and regulations like GDPR as “tricky”.

Overall, the conference was extremely well done.  The speakers, especially the well-known names, all delivered thoughtful remarks, perspectives and comments.  All the “right” concerns were expressed about AI’s capabilities, dangers, ethics, usage and more.

My takeaways: Industry will continue to move forward with adoption of AI, automation, robots and more.  Some will do it ethically and responsibly.  Some will involve their workforce and train them for emerging new jobs and some will not.

The role of government and public policy is critical to guard against the abuses that are sure to occur in the coming data and digital evolution and revolution.  What is obvious are the shortcomings in the skills of government employees and the shortsightedness of policymakers.  Academia, non-profits and groups like AI and Faith will have to step into the gap to ensure AI and other advanced technology is developed properly and deployed for the benefit of many.
