
Synthetic Media: What Is It and Why Does It Matter?

Most people have never heard of synthetic media, and those who have mostly know very little about it. When asked, ‘Have you interacted with synthetic media?’ most people will say ‘no’ or ‘I don’t think so’. If this is true, then why should everyday people care about synthetic media?

The reason is that very soon the age-old adage ‘seeing is believing’ will no longer hold true.

This will require all of us to adapt, to develop new skills, and to grapple with critically important questions of ethics and morality. Currently, there are many more questions about synthetic media than answers. This means that now is the perfect time to study up, ask the critical questions, and install some guardrails, before it’s too late, for a world where synthetic media will be omnipresent.

In order to understand synthetic media, one must begin by defining it. For our purposes, it is useful to define synthetic media broadly. Thus, the definition encompasses a whole family of capabilities, including synthetic text (these days often generated using GPT-3 technology), synthetic imagery (the earliest arrival in this category), ‘deepfake’ videos, and the like. Additionally, when thinking about synthetic media, we must include the creation of synthetic (fake) people, personas, social accounts, and other representations of “individuals” who do not and have never existed.

Test yourself: one of the images below is a picture of a real person, and the other is a synthetic face of a person who has never existed. How confident are you in your ability to tell them apart?

Answer: The synthetic face is the one on the right.

Under this broad definition, anyone who uses social media has likely already engaged with synthetic media, whether she knows it or not. This could be through engagement with a ‘bot’, aka a “synthetic agent/user”, and in limited cases with a bot sophisticated enough to autonomously create content, such as answers to specific questions in a real-world situation. One anodyne example that many have already encountered is an auto-generated citation at the end of an article or academic paper.

Utilizing this elementary understanding of what synthetic media is, the obvious follow-up question is ‘when’? When will synthetic media become part of our day-to-day lives? When do we need to start putting our guard up?

This question is difficult to answer due to the rapidly improving quality and growing prevalence of this technology. But “now” is probably as good an answer as any. One illustrative datum is the fact that synthetic media had become so prevalent by 2017 that the FBI formalized a unit whose core focus is synthetic media and fake personas. The idea that “now” is the time was further solidified in 2020 by the launch of GPT-3, an autonomous language model (AI tool) that can predict, write, and converse in natural language based on the publicly available content it has read in the past.
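The core idea behind a language model like GPT-3 is predicting what text comes next from what came before. GPT-3 does this with a neural network trained on a vast swath of internet text; as a toy illustration only (the tiny corpus and code below are invented for this sketch, not part of GPT-3), the same principle can be shown with a simple bigram model that predicts the most common next word:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: a bigram model built from a
# tiny made-up corpus. Real models like GPT-3 use neural networks with
# billions of parameters, but the core task is the same -- predict the
# next word from the words that came before.
corpus = (
    "synthetic media will change how we read "
    "synthetic media will change how we write "
    "synthetic media can fool the eye"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("synthetic"))  # -> media
print(predict_next("will"))       # -> change
```

Scale that counting idea up by many orders of magnitude, and you get software that can produce whole paragraphs of plausible, human-sounding prose.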

It is hard to explain just how freakishly capable, even while imperfect, this technology is. Below is an example of a synthetic conversation with a fake Sam Harris: Bobby, a real human, is chatting with a Sam Harris-like bot that has been trained by searching Google for Sam Harris content and reading it. The conversation is not flawless, but compared to our best technological capabilities of just a year ago, the exponential progress is truly astonishing.

At this pace of development, in mere months these synthetic capabilities will be so strong that even the most discerning readers will struggle to tell synthetic media from the real thing. If you need more proof of how close this technology is to fooling the average person, just search for ‘deepfake videos’ and you will see how easily you can be fooled into believing that what you are seeing and hearing is authentic and created by a real human being.

At this point in most conversations, folks have the understandable desire to stop this whole thing in its tracks by regulating synthetic media or outright banning its creation. Unfortunately, as with any technology, that goal is not achievable. As with all knowledge, once it is known there is no way to put the toothpaste back in the tube. The reality is that synthetic media works, it’s here to stay, and we as individuals and as a society will have to adapt and innovate to live in a world where, eventually, more synthetic content will exist than content created by people.

The other important truth to remember is that as with any technology, synthetic media will be used for good, for profit, for fun, and for nefarious reasons. A few examples:

  • Synthetic media will usher in a new era of translation and, by doing so, will expand access to knowledge and entertainment in a truly global way. Every word and sound will be perfectly localized to every language, dialect, and local slang preference; this will include sign language and other forms of communication, making media truly inclusive of everyone for the first time.
  • Synthetic media will transform and elevate communications by enabling truly personalized marketing and mass communications. This goes way beyond what we currently call personalization, where AI is used to select the best piece of preexisting content to display to an individual user. Synthetic media is a quantum leap in our abilities, enabling the creation of an infinite amount of content specifically curated to each individual’s preferences and motivational drivers.
  • Synthetic media technology, with time, will eliminate communication barriers for people with disabilities and injuries that impact their communication abilities and will bring independence to people currently reliant on human aides.
  • Synthetic media will enable historians and academics to publish further papers on an endless variety of topics. This will ensure that we preserve every culture, historical event, and discovery, regardless of the size of the community working in that field or how well funded that cause or country is.
  • Synthetic code (software) will soon come into existence. With that ability we will have software that can write its own brand-new software without the need for human involvement. This will set off an exponential growth curve for the creation of new software, AI, and digital beings, which will be a topic for scholarly and philosophical thought for decades to come.
  • Synthetic media will also be the greatest engine of rumor, propaganda, misinformation, and lies that humanity has ever experienced. If you think that people today have poor fake news and fake content detection skills, you’ve seen nothing yet!

There is nearly an endless array of possibilities that will keep innovators engaged for years to come. For us and for society, the critical questions are unrelated to use cases but rather to ethical behavior, moral responsibility, and education for the masses.

For starters, we will have to answer basic questions relating to the rules of the road. For example, should content created by AI be required to disclose that it indeed was created synthetically? If “yes” – disclosure should be required – then the next even more challenging question is, should the AI’s objectives and biases also be disclosed? And if the answer to this second question is also “yes”, then why do we not require the same disclosure of human content creators, many of whom are paid to communicate a specific position or to persuade?

Another set of questions relates to rules governing different domains within society. For example, “should synthetic media be utilized in advertising” is a much easier question to answer than “should it be permissible within political campaigns”. Taking this in a different direction we could ask: is it moral for a company to create synthetic agents (fake people) who converse with biological people online? Is it moral for these synthetic agents to ‘date’ real people? Is it a good idea for lonely people to utilize synthetic agents to cure their loneliness and by doing so be even further removed from human society?

As we think about these difficult questions we can benefit from recalling the lessons taught throughout history and that come from our traditions of faith. Virtues such as honesty, integrity, accountability, respect for human life, and others will be critical as we think about the future. We must consider both laws and societal moral standards that build on this wisdom and utilize these age-old virtues to build societies of trust and opportunity for all. Some guardrails we might deploy include:

  • Labeling – it must be incumbent upon the creator and owner of said technology to label synthetic media as such.
  • Transparency – if a synthetic media algorithm has bias, perspective, or an unstated goal or objective, that should be made evident to the user.
  • Demasking – the true owner or controller of said technology should be easily identifiable.
  • Detection Tech – we must create technology that proactively highlights for the regular user when they are engaging with synthetic media.
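The first three guardrails above all amount to attaching verifiable disclosure metadata to a piece of content. As a minimal sketch only (the field names and schema here are hypothetical, not an existing standard such as a real provenance specification), a machine-readable disclosure label might look like this:

```python
# Hypothetical disclosure label for a piece of synthetic media.
# Field names are invented for illustration; they map to the guardrails:
# Labeling, Transparency, and Demasking.
REQUIRED_FIELDS = {"is_synthetic", "generator", "controller", "stated_objective"}

def missing_disclosures(label: dict) -> list:
    """Return the disclosure fields a label lacks (empty if compliant)."""
    return sorted(REQUIRED_FIELDS - label.keys())

label = {
    "is_synthetic": True,             # Labeling: content marked as synthetic
    "generator": "ExampleGen v1",     # Transparency: which system produced it
    "controller": "Example Corp",     # Demasking: who owns/controls the system
    "stated_objective": "marketing",  # Transparency: the content's purpose/bias
}

print(missing_disclosures(label))               # -> []
print(missing_disclosures({"is_synthetic": True}))
```

Detection tech is then the complement: software on the consumer's side that checks for such a label (or flags content statistically) and warns the user when disclosures are missing.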

In the novel field of synthetic media there are considerably more questions than answers, but there are a few things we can be pretty certain of. First, synthetic media, content, and personas are real, and you have probably already encountered them. Second, there is no way to put the genie back in the bottle. Third, regardless of how good or prevalent synthetic media is today, by our next June Newsletter it will be vastly more effective and pervasive. Last, like every invention created by man, synthetic media will be used to make people’s lives better and more fulfilling. It will be used for profit, and it will also be used by the worst players on earth to deceive and take advantage of unwitting individuals. It is up to all of us to ensure that as a society we create the rules of the road and tools to help ordinary people spot the ‘synthetic’ before they act on its recommendations.

Gilad Berenstein

is currently exploring a number of AI opportunities and areas of interest after a recent stint as an Entrepreneur-in-Residence at the Allen Institute for Artificial Intelligence (AI2) in Seattle. He was previously the Founder and CEO of Seattle-based travel personalization startup Utrip, which utilized AI and human experts to help travelers plan highly personalized trips. Gilad grew up in Israel and moved to Washington in the late ’90s with his family when his dad joined a startup. Gilad is a graduate of the UW Foster School of Business, where he obtained both his undergraduate and master’s degrees. Gilad is passionate about travel, technology, food, innovation, and history and is a congregant at Temple De Hirsch Sinai in Seattle.
