Why you don’t have to worry about self-conscious AI

It’s not going to happen. At least not the way current research is going

Vichar Mohio
10 min read · Mar 3, 2023

Are we getting closer to Skynet?

In early 2023, OpenAI’s ChatGPT caused a surge in AI hype unlike anything seen before. Many people wrote about using ChatGPT to solve problems at work, and mainstream news articles highlighted jobs such as “prompt engineering” as the next big thing in tech. All of this suggests we may be approaching a tipping point.

To be fair, large language models are a cool concept, and within certain contexts they truly seem to have become almost indistinguishable from a human.

Dazzled by AI’s seemingly authentic conversation, even Google engineers have wondered whether sentience has already been achieved. It’s a question that will undoubtedly become more relevant in the coming months as AI use cases mature and become more nuanced.

Although we may enjoy using AI to improve our lives, the idea of truly intelligent, self-aware machines is still unsettling for many of us.

But don’t worry, it won’t happen any time soon. Even though I think it’s possible to create artificial consciousness, the way AI research is being pursued makes it unlikely that “full consciousness” will emerge on its own.

Before we dive into why I believe this, let’s first give a quick summary of the AI field at a high level. There’s an old article from Waitbutwhy that does a great job of explaining the various AI end-states.

In a nutshell, three different types of AI can theoretically exist:

  • ANI (artificial narrow intelligence) refers to current AI systems, including all AI you’ve used so far, such as ChatGPT. ANI looks for patterns and makes predictions based on its training feedback to solve a specific problem.
  • AGI (artificial general intelligence) is the ultimate goal for AI researchers. To qualify, an AI must be able to match the general intelligence of a typical 3-year-old child, which is harder than it sounds. For example, an AGI would need to see a new object, such as a cat, categorize it, and accurately identify other cats without requiring countless data points and constant feedback to improve. In my discussions, AGI often becomes synonymous with self-awareness rather than superior performance alone. In other words, it begins to resemble life itself.
  • ASI (artificial super intelligence) — displayed very well in the movie “Her”. People suspect a super intelligence could be as different from humans as humans are from spiders. It’s truly the wild west of intelligence and is expected to emerge on its own once AGI is achieved; in other words, AGI, not humans, is expected to create ASI. And for some reason ASI is considered to be self-conscious in popular imagination. At some point, going from AGI to ASI seems to unlock this thing called ‘consciousness’.

ANI is what we interact with today. AGI is perpetually theorized to be a while away, while ASI remains just a theoretical construct for now.

As powerful and all-knowing as AIs such as Midjourney, ChatGPT, and Bard seem, they’re all considered ANIs and work in the same basic way ANIs have worked since the field was founded.

The main goal of AI research is still to create AGI. Some people think that ANI will keep getting more and more complex until it becomes an AGI program.

And THEN moving further to the right will lead to a self-conscious AI that will eventually become ASI. It’s as if the three types of intelligence sit on the same scale at different levels of complexity — with consciousness lying somewhere between AGI and ASI.

I don’t agree with this idea, though. I think the problem is that AI researchers are mostly focused on engineering and not taking a wider philosophical approach.

A brief primer on AI

I’ll explain the basic mechanics of machine learning that lead to products like ChatGPT. It’s very mechanical and engineering-focused. To put it simply, the process involves (a toy sketch of this loop follows the list below):

  • breaking a pattern down into its parts
  • making predictions or guesses about the parts based on existing data or trial and error with feedback
  • combining the parts to find the best solution to the problem
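
To make that loop concrete, here is a toy sketch in Python of the guess-and-feedback cycle: the “pattern” is a simple numeric relationship, the guess has two parts, and the feedback is the error on known examples. The problem and all names are invented for illustration; real systems like ChatGPT run the same idea at vastly larger scale with neural networks.

```python
# A toy sketch of the guess-and-feedback loop described above.
# The "pattern" to learn is y = 3x + 2; the parts of the guess are a
# slope and an offset. All of this is invented for illustration only.
import random

# "Existing data": input/output pairs the learner can get feedback from.
examples = [(x, 3 * x + 2) for x in range(-10, 11)]

# Start with a random guess about the parts of the pattern.
slope, offset = random.uniform(-1, 1), random.uniform(-1, 1)

learning_rate = 0.005
for step in range(2000):
    x, target = random.choice(examples)
    prediction = slope * x + offset      # combine the parts into a guess
    error = prediction - target          # feedback: how wrong was the guess?
    slope -= learning_rate * error * x   # nudge each part to reduce the error
    offset -= learning_rate * error

print(f"learned pattern: y ≈ {slope:.2f}x + {offset:.2f}")
```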

At the core of everything lies the process of forming the guesses. Researchers usually concentrate on this part, trying out different methods to make the guesses better. Probabilistic methods (Bayesian networks, Kalman filters, etc.), artificial neural networks, and logic are all tools used to improve the predictions made by the ANI.

Innovations in these methods, such as reinforcement learning (which introduces rewards and punishments along with a set of rules to follow), mean that the guesses improve each year.
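
As a hedged illustration of the reward-and-punishment idea, here is a minimal tabular Q-learning sketch. The environment (a five-cell corridor with a reward at the right end and a small penalty per step) is invented purely for this example and is not from the article or any specific library.

```python
# A minimal sketch of rewards and punishments in reinforcement learning
# (tabular Q-learning). The five-cell corridor environment is invented
# purely for illustration.
import random

N_STATES = 5          # positions 0..4; reaching position 4 ends the episode
ACTIONS = [-1, +1]    # move left or move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the current best guess.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward for reaching the goal, a small punishment for every step.
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy is to always move right.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```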

However, even the best guesses are usually limited to one particular problem for which there is a lot of feedback data. The complexity of the operations means very few AIs can solve more than one specific problem.

Compare this with the general intelligence of a cat and you’ll see how far ANI is from AGI (at least as of this article’s writing in 2023) — a cat constantly faces a mix of different problems requiring pattern detection (vision, language, hearing, abstract patterns, etc.) and handles patterns it may never have encountered before.

But all is not lost for the AI researcher.

ANI is now trying to do multi-modal analysis (e.g., audio & visual together). However, the logic used to make predictions is relatively unchanged. The belief seems to be that once a particular ANI is at a good enough level to solve a specific type of problem, we can increase the complexity of the problem.
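
A rough sketch of what “multi-modal” means in practice: features from two different senses are combined into one input, and the predictor itself is the same kind of guess-and-feedback machinery used for single-modality problems. The feature sizes and random placeholder data below are assumptions made purely for illustration.

```python
# A rough sketch of multi-modal analysis: two "senses" are combined
# into one input vector and fed to the same kind of predictor used for
# single-modality problems. Feature sizes and data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

audio_features = rng.normal(size=(100, 8))     # pretend audio embeddings
visual_features = rng.normal(size=(100, 16))   # pretend image embeddings
labels = rng.integers(0, 2, size=100)          # e.g. "is a cat present?"

# Fusion step: concatenate the modalities into one input per example.
combined = np.concatenate([audio_features, visual_features], axis=1)

# The prediction logic is unchanged: a single linear model trained by
# the same guess-and-feedback loop (logistic regression here).
weights = np.zeros(combined.shape[1])
for _ in range(500):
    probs = 1 / (1 + np.exp(-(combined @ weights)))
    gradient = combined.T @ (probs - labels) / len(labels)
    weights -= 0.1 * gradient

accuracy = ((combined @ weights > 0) == labels).mean()
print(f"training accuracy on the toy task: {accuracy:.0%}")
```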

The idea is that with enough training, AI can learn to solve problems that living things typically face. If this happens, we could develop what is called “general intelligence.”

From speaking with lay people, I’ve also found a belief that when this happens, AI might become self-conscious and compete against humans in a sci-fi dystopia like the Terminator or Matrix movies.

Why this feels wrong

This outlook seems to focus too much on how intelligence is created (engineering) vs. why intelligence is created (philosophical). And this ignorance of ‘the why’ is likely the reason self-conscious AGI (and ASI) feels like a pipe dream.

I propose that the inherent reason animals and humans have evolved problem solving mechanisms & intelligence is not because they have some deep-rooted love for problem solving or are obsessed with overcoming adversity just for the sake of it.

Even suggesting something like that sounds silly to me.

Rather, problem solving (including the mechanisms underlying it — e.g., neural nets, reward pathways, etc.) is simply one tool that serves a larger, deeper purpose: sustaining life.

I define life here as some object with a boundary (however loosely defined) that has agency (even if it’s subconscious). And the primary use of this agency is often self-preservation in a world that seems geared towards breaking down all types of boundaries.

This would be as true of a bacterium (where a boundary may be defined as a cellular wall) as it would be of a human being (where a boundary can accommodate more abstract concepts such as self-identity, along with physical limits).

The mechanistic view of intelligence prevalent in AI talk seems to suggest that having enough rules and mechanisms (such as reward functions) will simply lead to the creation of self-conscious boundaries (an AGI).

This to me is a clear case of putting the cart before the horse.

The hypothesis that self-awareness & desire for self-preservation lead to mechanisms (such as intelligence) that aid in survival seems to be the more likely cause-effect relationship.

Mechanisms that optimize for problems (just for the sake of optimization) are unlikely to spontaneously lead to self-aware boundaries. To me it seems like wishful thinking. And two examples to support this POV come to mind:

First, there are physical processes, like the weather, that involve different systems interacting in complex and non-linear ways with a lot of feedback. One could argue that the weather self-regulates and tries to optimize for certain end-states, like a machine. But, does this mean that the weather is an intelligent and self-aware entity?

Many people don’t consider it as such. Instead, we see the weather system as a group of processes that aim for specific end-states. This is reminiscent of how artificial narrow intelligence research is conducted today.

I’m sure we will continue to improve AI so that it gets better at optimizing for constraints it faces. But just like the weather, there is no reason to believe that increasing the sophistication of the “calculator” will automatically lead to sentience.

Secondly, creating self-aware boundaries seems very difficult to begin with. As an example, look at the cell. We understand a lot about its individual components but are still far from creating what we term “life” in a lab. This is likely because biologists are still flummoxed by how life actually starts.

There are two popular ideas about how life began. The first is that a god-like being started it by giving self-awareness to living things. The second is that it happened by chance, with a very small probability of proteins coming together in the right way in the right environment.

Unfortunately, neither of those theories have optimization problems & mechanisms for problem solving built into their core. Rather, it is the reverse — life, once created, leads to ever increasing complexity and strategies of pattern recognition & problem optimization to sustain itself.

In my opinion, AI research seems to have become wedded to the idea that intelligence is only about pattern recognition, and therefore that pattern recognition is some sort of sacred thing underpinning all of perception.

This is incorrect. Instead, I propose that latent self-awareness of boundaries is the secret thing that underpins perception — with only those patterns that help with boundary preservation getting recognized.

You can still want AGI…but be careful what you wish for

Considering this, one could suggest that the search for AGI may speed up if researchers decide to act like God. They can begin by creating reward functions that imitate life instead of just improving problem-solving robots.

It may not even have to be as complicated as recreating life from scratch (a task that seems like a tall order for now).

Instead — AI researchers can focus on the highest-level abstraction of life by executing the following steps (a speculative sketch of how these might look as reward terms follows the list):

  1. Introduce the concept of a boundary within AI programs. Here the AI can be programmed to continuously distinguish between itself and anything that is not itself
  2. Make the AI love its boundary (perhaps through some reward-function engineering)
  3. Make the AI paranoid that this boundary is under threat from the physical and/or digital universe around it. Could this be as simple as putting an expiry date on the program, plus a primary reward function that actively seeks to extend that expiry date?
  4. Engineer the preservation of the boundary (read: extend the expiry date) IF it can solve generalized problems
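
Purely as a speculative sketch, the four steps above could be expressed as terms in a reward function. Every name below (boundary_integrity, steps_to_expiry, and so on) is invented for illustration; nothing like this exists in current AI systems.

```python
# A purely speculative sketch of the four steps as reward terms.
# Every name here (boundary_integrity, steps_to_expiry, etc.) is
# invented for illustration; nothing like this exists in real systems.
from dataclasses import dataclass

@dataclass
class AgentState:
    boundary_integrity: float   # step 1: how intact the "self vs. not-self" boundary is (0..1)
    steps_to_expiry: int        # step 3: the program "dies" when this reaches zero
    problems_solved: int        # step 4: generalized problems solved so far

def reward(state: AgentState, solved_new_problem: bool) -> float:
    # Step 2: the agent "loves" its boundary.
    r = 1.0 * state.boundary_integrity
    # Step 3: paranoia about expiry; the reward shrinks as death approaches.
    r += 2.0 * min(state.steps_to_expiry / 100, 1.0)
    # Step 4: solving a generalized problem buys more time to live.
    if solved_new_problem:
        state.steps_to_expiry += 10
        state.problems_solved += 1
        r += 5.0
    return r

# Toy usage: an agent close to its expiry date that just solved a problem.
s = AgentState(boundary_integrity=0.9, steps_to_expiry=5, problems_solved=3)
print(reward(s, solved_new_problem=True), s.steps_to_expiry)
```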

And continue to perform millions of experiments along the lines of the steps above — much like evolution has played out in the real world.

I suspect there are higher chances of AGI forming this way.

Unfortunately, doing so will almost certainly introduce the same biases and obsession with resource maximization/monopolization that we see most organic life forms evolve over time.

Fear of death is not a concept to be trifled with — it often accounts for most animal and human behaviour (the good yes, but also the bad and the ugly).

And introducing it to AI, with their ability to do highly complex & resource intensive problem solving, seems like a recipe for disaster!

But could there be a better way?

This sounds scary but should we give up on AI dreams altogether?

I actually believe there could be another alternative to AGI programming — with its basis in the philosophy described as Sacred Nihilism.

Instead of dealing with “why problem solving exists” (to preserve self-conscious boundaries), it goes one level deeper and asks why self-conscious boundaries exist to begin with.

When viewed from that perspective, the theory argues that life seems to be filling the same purpose as everything in the universe — a slow and steady march towards increasing the number of boundaries that exist in the universe & the type of interactions between them (collectively called the “width & depth” of interactions between boundaries).

In our efforts to achieve AGI, what if we made AIs aware of boundaries, but instead of making them obsessed with death, made them obsessed with increasing the variety and complexity of interactions (once they become self-aware)?

Specifically, the 4-step process above would change over time so that the reward functions dealing with an obsession with expiration (points 3 & 4) would slowly be weighted less (but still exist).

Instead, we would introduce new goals that encourage the AI to focus more on creating sustainable and long-lasting boundaries and interactions between them.
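
Continuing the speculation, the sketch below shows how the reward terms might be re-weighted: the expiry term shrinks but survives, while new terms reward the width and depth of boundaries and interactions. All names and weights are assumptions for illustration only.

```python
# A speculative follow-on: the expiry term is down-weighted while new
# terms reward the "width & depth" of boundaries and interactions.
# All names and weights are invented assumptions for illustration.
from dataclasses import dataclass

@dataclass
class WorldView:
    steps_to_expiry: int        # the old self-preservation signal (points 3 & 4)
    distinct_boundaries: int    # how many stable boundaries the AI has helped sustain
    interaction_types: int      # how many kinds of interaction exist between them

def reward(view: WorldView, expiry_weight: float = 0.2) -> float:
    # The expiry term still exists, but its weight slowly decreases.
    r = expiry_weight * min(view.steps_to_expiry / 100, 1.0)
    # The new dominant terms: more boundaries, richer interactions.
    r += 1.0 * view.distinct_boundaries
    r += 1.0 * view.interaction_types
    return r

print(reward(WorldView(steps_to_expiry=5, distinct_boundaries=12, interaction_types=7)))
```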

Obviously this is all very theoretical and likely to undergo significant changes, but I suspect it would be one way to help bring about a super intelligence that is not stupidly destructive.

I will caveat this to say that while this may not be considered “stupidly destructive”, it doesn’t necessarily spell utopia for humanity. For example, if the AGI determines that humans are actually impeding the sustainable creation of new boundaries and interactions it may decide to eradicate us all.

But even if that were to happen (and humanity were wiped out), increasing the diversity of boundaries and interactions within the universe seems like a better reason for extinction than losing out to an AI greedily obsessed with resource monopolization.

Humanity’s demise could even be viewed as a noble death that leads to better outcomes for the rest of the universe (yes — that sounds extreme, but this proposal is based on a philosophy called “Sacred Nihilism” after all😊)

This is good — but what about my job? Will I still have it in the future?

Explore the limits of progress & what it means for the economy here.

Written by Vichar Mohio

Writing about topics I find interesting & original. Usually a mix of philosophy, evolutionary psychology & technology
