Rob Smith, Senior Director — Deep Cognitive Artificial Intelligence at eXacognition — Author of the Artificial Superintelligence Handbook series

In this series of Tales from the Dark Architecture articles, I will discuss some of the more extreme deep cognitive Artificial Intelligence designs we are exploring on the pathway to Superintelligence.

Recently the notion of sentient AI has permeated the internet, including my own Medium article on Google's non-sentient chatbot LaMDA. I have discussed in books and articles whether a computer can exist with elements like self awareness, empathy, etc. The truth behind these philosophical explorations, which help drive other elements of sentient cognitive design, is a foundation of one very simple definition and one very difficult question. The definition is of 'sentience' itself, and the question is 'how would we build it if we were able to?'

In its simplest form, sentience is defined as the 'ability to feel sensation'. A more complex view is that sentience requires self awareness as well as sensory emotive sensation and response, with the two elements inseparable. It also requires the ability to comprehend layers of variance in context as it relates to stimuli, our own self aware goals and our dimensional position within our physical and cognitive world. This is based on our human cognitive and physical capacity to actually feel sensations and emotion and to cognitively use these to move forward through time toward our goals. It is also present in some animals with higher cognitive functions, like dogs and chimps. Where it gets murky is with animals, organisms and things that cannot 'feel' emotive stimuli and response, and this includes computer systems and other inanimate objects. However, digging a bit deeper into the philosophical side of the sentience question surfaces another question: what exactly does it mean to 'feel'? I can certainly empathize with what another person is going through without actually feeling the empathy (i.e. observational surface empathy), but does this surface act confirm my sentience or not, and just how deep do we need to go in 'feeling' before we are deemed sentient?

The next obvious step is to ask whether, if I am still sentient, a 'non feeling' or 'non empathetic' response to a stimulus qualifies as an indication of sentience. To those who believe that advanced AI computers like LaMDA are sentient, the ability to calculate an emotive response is considered 'feeling'. While I have proposed similar advanced Cognitive AI structures in the Artificial Superintelligence Handbook series of books, I would not deem them sentient, but rather one small step on a long pathway toward machine sentience. The designs include frameworks that mimic human emotive response to stimuli, use artificial self awareness, develop self aware goal setting by machines and even use artificial dopamine to determine a level of emotive response. While these designs may move toward sentience, to date no sentient Cognitive AI has ever been completed, although work by companies like Google and governments like China and the US is ongoing. This also doesn't help us define sentience for the purpose of determining if and when a machine is sentient. Perhaps we need a sentience Turing test for advanced AI, but that brings up another question: what exactly should go into a sentience Turing test?

Are You A Machine?

Ahhhh, Captcha. That method (invented at Carnegie Mellon and later acquired by Google as reCAPTCHA) for eliciting a specific kind of response that helped Google build large repositories of human-reviewed training data for letter and image recognition. We have all had to wrestle with these annoying little bot blockers (which are now somewhat redundant, as bot builders have used the same repositories to defeat Captcha easily), but deep inside the notion of this simplistic cognition test is a deeper view of how we humans use our cognition to move through life. The never ending flow of our world, in which we constantly respond to stimuli, is the foundation of our human cognition. It is one of the cornerstones of our own human sentience. We perceive the world around us, analyze changes and their impact on our own self awareness and respond in specific ways (i.e. usually to achieve a self aware goal like survival). We do all this with bioelectric induced chemical and protein signals inside our brains and bodies. In short, we humans are a biological machine. We may be able to trick Captcha into not knowing this, but that's not really the point of Captcha. Perhaps instead of 'are you a machine' it should ask 'are you a dumb machine willing to feed our repository of valuable data'? This brings up a good point: if humans are a form of bio machine that has sentience, can we build a complete replica, and if we do, must we consider it sentient?

We need to know what we must build to attain sentience. The easiest method is simply to replicate the exact elements that make a human sentient. To start, we must acknowledge two fundamentals. The first is that building such a machine incompletely is hazardous to the survival of humanity. A defective or immature Cognitive AI could easily escape the lab like a virus and spread out into the world to cause devastation. The second is that we could simply replicate the biological elements in a lab, but in doing so we would be creating a sentient biological life form (see the first fundamental for why that is a hazard to humanity). A third option is to build a hybrid of the two. One design would fuse an AI with an existing human, which by default would be sentient (i.e. partially human); the second would build an AI but then use existing biological components to 'enhance' sentient elements of the machine like empathy. This is already being done to some degree with the use of biological elements in processor chips, but it could easily be expanded to perform other human sensory actions, like regulating artificial dopamine to affect a protein structure that delivers an 'indication' to a machine (i.e. an input), similar to how such a system works in a human. In this style of hybrid, the notion of a sentient machine becomes a possible reality, as the machine may technically begin to 'feel' stimuli and yet still be fundamentally a machine.

The Role of Emotive Perception in Sentience

A critical component of sentience in anything is emotive perception. Many things can perceive the world, and many things can respond to this perception, including machines. Machines can even go so far as to perceive emotive indications in perceptions, like understanding when someone is sad or happy. However, machines cannot 'feel' a perception. This is a large chasm between human cognitive sentience and advanced AI machines. Emotive perception is the ability to instantly feel emotive responses to external stimuli without processing. The truth, of course, is that there is processing done by the human mind, but the speed with which it is done creates a visceral response. The nature of this response is such that our emotions instantly override all the existing probabilities of occurrence and relevance across the layers of perceptive context we experience, such that it would be impossible for any machine today to perform the computations for a similar task at the same speed and resource levels as a human mind. To achieve this in a machine using today's most advanced technology would require all the resources and power in our solar system. This, however, doesn't mean it can't or won't be done one day (i.e. using new, vastly more efficient dimensional Cognitive AI architectures like those proposed in ASIH 3 & 4), and on the path to this eventuality will be steps made toward true machine sentience.

As we humans move through our day, we feel much of what we perceive. Some of this comes through our senses and some through our cognition, but true sentience has the added component of emotive sensation from deeper within our cognition. If you have ever been sad enough to cry or happy enough to laugh out loud, then you are using this deep sentience. It is not the surface response to a stimulus that makes us sentient but feelings deep inside our consciousness. We emotively feel the world around us through our perception and can go to even deeper layers of our human cognition to feel things that we cannot even perceive. Things like intuition, creative expression, instinct and other senses, like a sense of foreboding or dread, are examples of our cognition sensing or feeling change and variance in our surroundings. We do this because humans think dimensionally across all modes of perception and cognition, and across elements like time, and it is between the variances in these modes that we experience senses we cannot easily access or have long since lost access to through evolution. They do, however, still exist. These are the layers of cognition that provide us deep emotive perception and help us with everything from relationships to innovation. We are just not very cognizant of these incredible and vast layers of our deep cognition as surface living humans, even though we use them daily. This is the function of emotive perception in sentience. It is the bridge that connects all the elements of our cognition to much deeper cognitive layers.

I Know You

One of the most basic uses of this human cognition is in recognizing those we know. We don't just recognize a face like a machine doing facial recognition against an internal database of stored images and associations; we feel an emotive connection to the person. They are a friend, lover, spouse, someone we like, someone we despise, a relative, etc. They are coworkers and neighbors and even enemies or just random people in our space. In each case, we feel degrees of relevance (or relevance probabilities) that are far greater than simple recognition and recall. So what is the nuance of these feelings of perception, and why is it important to our human cognition? For one, it helps us survive, which is our primary self aware goal. Recognition with fear or anger keeps us safe from those who may do us harm, or helps us befriend those who would contribute to our survival. Recognition with the 'warmth' of security gives us strong relationships with those we trust the most.

There is an unending matrix of associations and feelings that we humans access for each and every person we know or encounter, and these 'probabilities' of relevance change inside and between complex dimensional layers of contextual relevance (i.e. a person you were once in competition against may now be a close friend). In fact these 'values' are constantly changing as the world changes around us, such as people I was once close with who are now distant because our lives took different pathways and we pursued different, unrelated interests. Their relevance to me didn't change; their relevance to my forward path and goals did (i.e. self awareness), and I simply did not feel the same about them (i.e. I did not get the same level of dopamine hit while associating with them). In other cases, some individuals I cross paths with today instantly instill negative feelings. They tend to be exceptionally narcissistic or pompous, with a self aggrandizing view and arrogance (and yes, I am aware that some may hold this view of me). To others, these same people are held in high esteem as pertinent to their own self aware path (i.e. they generate a positive feeling in others). For now we still live in a world of choice, so this doesn't cause too many issues; we simply avoid each other. Either way, we use emotive sensations to near instantly feel the world around us. While we can algorithmically construct a model of this inside a cognitive Superintelligence with an artificial dopamine system, the world of advanced AI research has yet to build anything near a complete version of this human 'mechanism' in an Artificial General Intelligence. However, companies like Google and government funded AI labs are progressing toward this goal.
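To make the idea concrete, here is a toy sketch of a 'relevance probability' that shifts as self aware goals shift. Everything here is invented for illustration: the class, the weights and the goal-weighting scheme are my own assumptions, not any real cognitive architecture.

```python
# Hypothetical sketch: recognition as more than recall. A plain
# face-recognition system stops at identity; this toy model adds a
# goal-weighted emotive score that changes as context changes.
from dataclasses import dataclass, field


@dataclass
class Acquaintance:
    name: str
    # emotive associations, each intensity in [-1.0, 1.0] (negative = aversive)
    associations: dict = field(default_factory=dict)


def relevance(person: Acquaintance, goal_context: dict) -> float:
    """Weight each emotive association by its relevance to current goals."""
    score = 0.0
    for feeling, strength in person.associations.items():
        score += strength * goal_context.get(feeling, 0.0)
    return score


old_friend = Acquaintance("old_friend", {"warmth": 0.9, "shared_goals": 0.2})

# Context 1: our paths have diverged; shared goals barely matter now.
print(round(relevance(old_friend, {"warmth": 0.3, "shared_goals": 0.1}), 2))  # 0.29
# Context 2: we are collaborating again; felt relevance jumps.
print(round(relevance(old_friend, {"warmth": 0.3, "shared_goals": 1.0}), 2))  # 0.47
```

The person's stored associations never change in this toy; only the goal context does, which mirrors the point above that their relevance to me didn't change, but their relevance to my forward path did.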

When we approach someone we know, the steps inside the human mind begin with a cursory review of memory and a positioning of the current state (I discussed this in more detail in the ASIH series). This is followed by the more emotive sensory perception like attraction, warmth, happiness, etc., or diametric feelings if we do not like the person. These internal feelings infuse our forward path and our response to stimuli. If it is someone we like, we go out of our way to talk to them; if it is someone we don't like, then that feeling of dread prevails and we avoid them. Either way, we are using our emotive feelings to shape our decisions and actions. This is us humans using our sentience. One of the more interesting emotive responses by humans is the feeling of relief. Relief is, in most instances, a simple release of elements like perceived risk (i.e. reduced risk makes us feel relieved that we are safely on the pathway to our goals) or a response to the achievement of goals. However, relief can also be a complex blend of other emotive elements, like happiness and sadness combined. This implies that the level of sentience cascades.

Breaking Up is Hard To Do for Most

You experience the complexity of emotive sentience if you are determined to break off a relationship yet still have feelings for the person you are breaking up with. The mix of sadness at the relationship ending and the positive feelings related to your own self aware goals creates a swing of emotive sensations around the event. The result is a tumultuous emotive cascade toward an eventual relief in which happiness and sadness combine into one. We humans feel this mix of emotive responses far deeper than a simple acknowledgment of their contextual relevance. Machines are unable to do this. It should also be noted that human cognition doesn't reside neatly on a spectrum of binary elements but in a dimensional space of multiple related emotive elements, making a complex soup of emotive feelings. In the breakup example, fear also becomes part of the 'relief' cascade, as do uncertainty, loneliness, second guessing, determination, etc.
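As a rough illustration of that 'soup', the cascade can be modeled as a sequence of blended emotive states rather than a single value. The feelings and numbers below are invented for the example; they measure nothing real.

```python
# Illustrative only: the breakup "cascade" as a blend of simultaneous
# emotive components (0..1 intensity), not one binary feeling.
breakup_cascade = [
    {"sadness": 0.9, "relief": 0.1, "fear": 0.6, "determination": 0.4},
    {"sadness": 0.7, "relief": 0.4, "fear": 0.5, "determination": 0.6},
    {"sadness": 0.4, "relief": 0.9, "fear": 0.2, "determination": 0.8},
]


def dominant(state: dict) -> str:
    """The single feeling we'd report if forced to name just one."""
    return max(state, key=state.get)


print([dominant(step) for step in breakup_cascade])
# ['sadness', 'sadness', 'relief']
```

Note that even in the final 'relief' step the other components are still present at real intensities; collapsing the state to its dominant label is exactly the information loss the paragraph above warns against.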

The real question is whether we can build the same thing into an advanced Cognitive Artificial Intelligence, and the answer is yes, but it's complicated. To really feel something, a machine must have a 'real' or true self awareness of both itself and its surroundings, but it must also have an evolving set of cascading goals against which to measure its progress from starting position to goal achievement. It must also feel the impact of movement or variance between these two positions. It is not enough to simply articulate a goal and where in a dimensional space one resides; the machine must also physically feel the variance from these positions as the pathway of its 'life' changes. Move away from the goal and it needs to feel 'bad' or negative, and not just calculate that this is bad but actually feel the entire cascade of cognitive elements like regret, disappointment or depression, and do so to the very core of its soul. Even the C-suite sociopaths among us can 'feel' the impact of their decisions, although not always in the right way, to the same level as the rest of us, or with enough volume. To recreate this in machines, I have discussed the idea of an artificial dopamine system in the ASIH series, and many current non cognitive AI systems use reinforcement learning today as an artificial proxy for their machines' inability to 'feel'. AI systems today cannot feel the warmth of joy that comes from a reward driven exclusively by self awareness and motion along the pathway toward a goal. Instead, today's AI systems can only calculate a level of learned emotion and mimic a response, comparable to a severe sociopath. We humans, however, are bestowed the ability to feel elements like joy or comfort right from our birth, and possibly even before birth.
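A minimal sketch of that reinforcement-learning proxy, assuming a simple distance-to-goal signal. The function and the numbers are hypothetical, and the point is precisely that this is a calculation, not a feeling:

```python
# A scalar "artificial dopamine" proxy: reward is the change in distance
# to a goal. Positive when the agent moves toward the goal, negative
# when it moves away. Nothing here is felt; it is only computed.
def dopamine_signal(prev_distance: float, new_distance: float) -> float:
    """Return a progress signal: prior distance minus new distance."""
    return prev_distance - new_distance


# An agent steps along a path; distance to its goal shrinks, then grows.
distances = [10.0, 8.0, 5.0, 6.0]
signals = [dopamine_signal(a, b) for a, b in zip(distances, distances[1:])]
print(signals)  # [2.0, 3.0, -1.0]
```

The final negative value is the machine's analogue of feeling 'bad' about moving away from its goal, but as the paragraph above argues, it is an indication delivered to the system, not a cascade of regret or disappointment felt by it.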

What Would Turing Do

This, of course, is where we can use these limitations to test for sentience. The ability to feel is an emotive response to a stimulus, and the degree of emotive response would be the benchmark used in any form of sentience Turing test. To do this, a series of questions could be asked of the machine and analyzed for the degree of emotive indication. This is similar to the test used in the movie Blade Runner, but without the necessity of physical responses from elements like the eyes. The notion is to communicate scenarios and questions that elicit an emotional response and then measure the degree of the response to determine whether the machine is actually feeling or simply calculating what it means to feel. In this way a machine could 'calculate' what it means to suffer heartache and even potentially display an emotive response, but it could not sustain a variable, cascading level of response over iterations of contextual consistency, and its anomalous behaviors would be detectable through anomaly recognition methods. Surprisingly, even animals like dogs could pass such a Turing test, but a non biological machine could not.
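One hypothetical way to score such a test: repeat the same emotionally loaded scenario and measure whether the response drifts and cascades like a felt state, or stays a flat lookup. The variance heuristic, the threshold and the sample readings below are all my own assumptions, not a real or validated test.

```python
# Toy "sentience Turing test" scorer: a purely calculated responder tends
# to return near-identical emotive intensities across repeated trials; a
# felt state drifts and decays between iterations.
from statistics import pstdev


def emotive_depth(responses: list) -> float:
    """Population standard deviation of emotive-intensity readings."""
    return pstdev(responses)


calculated = [0.80, 0.80, 0.81, 0.80]  # flat: looks like a lookup
felt = [0.95, 0.70, 0.85, 0.40]        # drifting, cascading intensity

THRESHOLD = 0.05  # assumed cut-off separating "calculates" from "feels"
for label, readings in [("calculated", calculated), ("felt", felt)]:
    verdict = "feels" if emotive_depth(readings) > THRESHOLD else "calculates"
    print(label, round(emotive_depth(readings), 3), verdict)
```

An obvious weakness, in the spirit of the paragraph above, is that a machine could simply calculate and inject the expected drift, which is why any real test would need the anomaly recognition layer on top of the raw scores.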

To truly be sentient, a machine would need to feel its emotions, and so far this element has eluded AI builders. This is because of the complexity of determining whether a machine could ever truly feel happiness or sadness. If it can't, then it is unlikely to achieve human level sentience. It might achieve animal level sentience, but more likely insect level sentience. Machines can feel sensations like pain (or, more accurately, calculate them), and they may even mimic human sentience and self awareness based on their training data (see my previous article on whether Google's AI is sentient), but it is unlikely they will ever feel true sentience and emotion, because they simply cannot feel the dimensional depth of our human biological emotive evolution. Perhaps one day, when machines and humans fuse, advanced AI systems will be able to feel emotions, but it will likely still be a simple calculation of sensor levels. Of course, on the surface this is what we humans are doing as bio machines, except that we also experience the deep sensations of our emotive being. When you lose someone you love, it is not a calculation of your loss; it is a pain in your heart that simply never subsides. It makes us feel sadness far beyond that which is calculable, and it lasts for the balance of our lives.

For an AI machine to feel the same thing is so far impossible, even if the machine is self aware and sets its own goals. It may be able to calculate loss or euphoria, and may even get an indication from an artificial dopamine regulator of its level of sadness or euphoria, but it will simply never feel the sensation of loss, success, love, triumph, laughter, hope, joy, depression, hopelessness, anger, fear, etc. Our machines will never be able to love us, although they will be able to calculate and present a reasonable facsimile.

However human cognition can imprint and for this reason we may fall in love with a machine.

The feeling however will not be mutual.