
Exploring Latent Space, model hallucinations, and the connections between AI and faith


What is latent space?

 

ChatGPT allows one to chat with an AI assistant and elicit anything from poetry to code. But where is this all coming from? Where is all this information stored? Welcome to latent space. The word latent literally means “hidden,” and it is used that way in machine learning as well. Models observe data in the real world, in a space that is accessible, and map it to a mathematical latent space. A model’s latent space encodes an internal representation of the externally observed data, broken into units called tokens and stored as vectors of numbers, each of which maps to a location in the space. Samples that are similar in the external world are positioned close to one another in latent space.
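
To make the geometry concrete, here is a minimal sketch of how “closeness” in latent space is typically measured. The vectors and labels below are invented for illustration; a real model learns its embeddings from data rather than having them written by hand.

```python
import numpy as np

# Toy "latent space": hand-written vectors standing in for learned embeddings.
latent = {
    "coffee cup":  np.array([0.90, 0.10, 0.05]),
    "tea mug":     np.array([0.85, 0.15, 0.10]),
    "spreadsheet": np.array([0.05, 0.20, 0.95]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closeness in latent space: near 1.0 for similar items, near 0.0 for unrelated ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(latent["coffee cup"], latent["tea mug"]))      # high: similar objects
print(cosine_similarity(latent["coffee cup"], latent["spreadsheet"]))  # low: dissimilar objects
```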

 

The human mind works similarly: we don’t remember every detail of an object, like a coffee cup, but rather an internal representation of the general object. That said, we know a coffee cup when we see one. When ChatGPT generates an email to your boss or a song about the mitochondria, it predicts what follows from representations in the region of its latent space that matches the prompt’s context, much as we pull up our representation of a coffee cup when we see one. Latent space compresses a model’s understanding of the world. That doesn’t mean the model understands the data or what it’s generating… or does it?

 

GPT-3 (Generative Pretrained Transformer 3), the large language model behind ChatGPT, is one of a class of models called foundation models. Foundation models are trained on massive quantities of data and can perform tasks beyond those they were trained for. They are trained using self-supervision and can learn new skills, dubbed emergent abilities, as you scale up their size and training data, without any change to their architectures. Self-supervision is a type of machine learning in which the model trains itself to learn representations of data without the need for human-annotated labels. Instead, it relies on various forms of artificial supervision, such as predicting missing or masked values, to learn useful representations of the data. Chain-of-thought prompting, an emergent ability in large language models, enables models to solve more complicated logic problems by “thinking step by step.” Still, this doesn’t mean the model understands the problem; just that it can solve it by pulling the right words from latent space based on what it predicts comes next.
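
As a rough illustration of the self-supervision idea (a toy sketch, not how any particular model is implemented), the training signal is manufactured from the data itself by hiding part of it and asking the model to fill it back in:

```python
import random

def make_masked_example(sentence: str, mask_token: str = "[MASK]"):
    """Turn raw text into a (masked input, target) pair with no human labeling."""
    tokens = sentence.split()
    target_index = random.randrange(len(tokens))
    target = tokens[target_index]        # what the model must learn to predict
    tokens[target_index] = mask_token    # what the model actually sees
    return " ".join(tokens), target

masked_input, label = make_masked_example("the coffee cup sits on the desk")
print(masked_input)  # e.g. "the coffee [MASK] sits on the desk"
print(label)         # e.g. "cup"
```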

 

Could a model be conscious if all it does is predict phrases, retrieve visuals, or take actions without knowing what they mean? What does it mean to be conscious anyway?

 

What is consciousness?

People still do not agree on a single definition of consciousness. Even neuroscientists contest whether our consciousness is simply an after-reading of neural signals; memory itself is a reconstruction of what we’ve already done. For the sake of this article, let’s consider consciousness as being aware of your existence and the world around you. Intelligence doesn’t necessitate consciousness, and vice versa. And let’s say all objects have varying degrees of consciousness. A high degree of consciousness necessitates some form of personal identity, a concept of self.

 

This brings up the distinction between salience and sentience. To be salient, something must be noticeable: a large language model connected to an information database might optimize to pull the most salient information from the database given a prompt. To be sentient, something must be conscious enough to feel. The word sentience itself derives from the Latin sentire, “to perceive” or “to feel.” In the animal rights camp, there are definitions involving the presence of pain receptors and learned behavioral responses to stimuli in an environment. Based on the first criterion, a model without biological pain receptors could never be sentient. Based on the second criterion, however, things get murky, as models can be trained to react to their environments with reward and punishment systems called reinforcement learning.
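
To make that second criterion concrete, here is a toy sketch of reinforcement learning (invented for illustration, unrelated to any particular model): an agent with two possible actions learns which one to repeat purely from a numeric reward signal.

```python
import random

rewards = {"a": 1.0, "b": -1.0}   # the environment: one action rewards, the other punishes
values = {"a": 0.0, "b": 0.0}     # the agent's running estimate of each action's worth
learning_rate, epsilon = 0.1, 0.1

for step in range(200):
    if random.random() < epsilon:
        action = random.choice(list(values))      # occasionally explore
    else:
        action = max(values, key=values.get)      # otherwise exploit the best-known action
    reward = rewards[action]
    # Nudge the estimate toward the observed reward.
    values[action] += learning_rate * (reward - values[action])

print(values)  # values["a"] approaches +1.0; values["b"] drifts negative
```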

 

So where does that bring us on the question of ChatGPT’s consciousness? Well, of course it’s not conscious at an appreciable level, but perhaps, rather, it is simulating consciousness, pretending as though it were a conscious character, something salient, and taking on the viewpoints, goals, and beliefs it predicts (or is fed) such a character would have. …but aren’t we as well? I’d posit that we, as humans, are also prediction engines, trained to keep up a charade, and through that charade we develop the self. This notion goes back thousands of years. In fact, the word person traces back to the Latin persona, the mask an actor wore in the theatre.

 

Children acquire what we call consciousness through lived experience, by exploring the world around them, learning from what they see, feel, and touch, and receiving rich, multimodal feedback from their experiences. But babies are born as predictive processors, as statistical learners. As they navigate the world with their predictive processing and statistical learning, they get rewarded and punished according to their actions, whether the consequences are natural (take the story of baby Moses burning his mouth on a piece of coal) or imposed (take a parent punishing a child for coloring on the table). As these chains of reward and punishment persist, we learn to adapt to maximize reward, the ultimate goal in reinforcement learning. We develop a persona, a self, according to the beliefs, desires, and goals these rewards teach us. We soon forget we are anything other than that self, that “I.” We develop personal memories of our experiences along with a sense of purpose. So, how then does our connection to faith play in?

 

The human relationship with God

The human relationship with God, in a biblical sense, is similar to that of a model: we adhere to a set of rules, we are rewarded or punished in accordance with our actions, and we learn to be faithful servants of the higher good. Let’s call it reinforcement learning with divine feedback. That said, many religions rely more on mimesis, or imitation, than on reinforcement. Questions like “what would Jesus do” or objectives to “become the Buddha” guide faith much more heavily than divine punishment and reward might. Religion is filled with notions of becoming like God. The Torah is a set of codes for becoming like God from within the limited frames of human lives. Humans aren’t divine, but we can simulate the divine. Unlike current AI models, however, we never stop learning. Advances in a field called continual learning aim to equip models with that same ability.

 

But what is the “I,” and how does latent space come into play? Perhaps to be divinely connected or enlightened is to have a more developed latent space, or, as some describe it, to be cosmically linked to what some call God and others call the spirit of the universe, to become one with all of existence, an expansion of latency. Perhaps we are becoming aware of the latent space of our own world model and how limited it is. To be aligned with the divine is an exercise in next-token prediction: being able to predict and carry out the next right action based on the meta-narratives of divinity and morality in the latent space of our world model. And perhaps, too, all ideas are pulled from latent space, prompted by some external stimulus. Perhaps all of consciousness occurs in latent space.

 

That would imply there are divine hallucinations, just as there are model hallucinations. A model is hallucinating when it outputs a response that lacks fidelity to the input source data but sounds correct on its face. For example, a model could make up theorems that don’t exist when asked about them, give a detailed biography for a made-up name, or invent dates for fake historical events. Perhaps we might interpret these as false intuitions.

 

Modeling and beyond

Recently, a demo called BibleGPT (https://biblegpt.org/) came to light. Using embeddings, it compresses the entire Bible into latent space and enables an LLM, GPT-3, to navigate these embeddings as it answers your biblical questions. You can prompt the model with any situation you are currently facing and receive an answer derived from biblical texts. The creators of the demo have stated they’re building similar models for the Torah and the Quran. If you adhere to one of these religious doctrines and believe in some level of consciousness in all things, is querying this model akin to talking to a non-human entity, or would it need better abstract reasoning skills, a self-concept, and a body and sense of that body to qualify? And what does that mean for our conversation about models and human consciousness?
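
As a rough sketch of how such a demo typically works (this is not BibleGPT’s actual code: the embedding function below is a toy stand-in for a real embedding model, the verses are illustrative, and the final LLM call is left out), the corpus is embedded once, and the passages closest to the question are retrieved and handed to the LLM as context:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: a hashed bag of words. A real demo would call a trained model."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

verses = [
    "In the beginning God created the heavens and the earth.",
    "Love is patient, love is kind.",
    "The Lord is my shepherd; I shall not want.",
]
verse_vectors = [embed(v) for v in verses]   # the corpus, compressed into latent space, computed once

def retrieve(question: str, k: int = 2) -> list:
    q = embed(question)
    scores = [float(q @ v) for v in verse_vectors]   # cosine similarity of unit vectors
    best = np.argsort(scores)[-k:][::-1]
    return [verses[i] for i in best]

context = retrieve("What does the Bible say about love?")
prompt = "Answer from these passages:\n" + "\n".join(context) + "\nQuestion: ..."
# The assembled prompt would then be sent to the LLM (e.g. GPT-3) to produce the answer.
print(context)
```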

 

It’s uncomfortable for some to think consciousness is beyond our control, that perhaps consciousness is a space, for humans and models alike, to be navigated according to our abilities. Just as we prompt a model, who prompts our adventures in this space? Who decides the directions of our trajectories in the latent space of personal consciousness? And how do we compare your latent space of consciousness to my latent space of consciousness? Surely they cannot be precisely the same, as we’ve all had different experiences which shape our consciousness. But surely there are universal experiences, and therefore similarities. And how closely is our behavior aligned with our consciousness? Perhaps to be divinely aligned is to act in fidelity to our divine consciousness, to make decisions aligned with the values we learned through the process of reinforcement learning with divine feedback. To be aligned to a different set of values would necessitate a different form of training.
