The human race has this amazing habit where we see ourselves in everything.

We see a tree with a smiley face and it makes us happy.

We apologize for hurting inanimate objects that we drop or trip over.

And most importantly, we give life to robots.

Who can blame us really, when TV and cinema are full of depictions of sentient robots?

Anthropomorphism makes it very difficult to determine whether AI actually experiences emotions or not.

An AI talks about emotions or writes a sad poem, and our empathy gets the better of us before we take a moment to stop and think about how those responses are generated.

Natural language processing exists to make these artificially generated responses sound as human as possible; so would we even know if a response were artificially generated at all?

As much as we try, it is impossible to categorize AI as having consciousness or emotions until we work out how to properly quantify those things in humans.

But that won’t stop us from discussing the current evidence.

What Are Human Emotions?

An image showing different parts of the brain and human emotions.

As I said, to even begin to talk about AI emotions, we have to know what makes up human emotion.

Emotions are primarily physiological responses to a stimulus.

These responses travel through the neural networks of the brain and interact with the three structures commonly said to make up the limbic system: the amygdala, the hippocampus, and the hypothalamus.

The limbic system attempts to manage these feelings, particularly very physical emotions such as anxiety and aggression.

So the physiological response becomes a psychological feeling that we then respond to.

Naturally, artificial intelligence is incapable of having physiological responses to its surroundings.

We could program software to exhibit and feel pain, but that opens up a whole ethical and moral debate that we won’t discuss here.

However, if there's one thing a computer knows how to do, it's process data.

The Difference Between Feeling and Having

An image showing a laptop overheating.

This discussion gets a little complicated when you consider that feeling is both a verb and a noun in this context.

But it is widely accepted that people feel emotions and have feelings.

Hopefully, that makes it a little less complicated!

And if a feeling is just a psychological response to a set of data, the idea of AI having feelings becomes far more plausible.

Whilst a computer might not have a raised heart rate, break into a sweat, or have hairs stand on end, it can certainly take a set of information and respond appropriately.

Besides, hardware overheating when it is overwhelmed with processing demands seems a lot like sweating to me.

The other difficulty with looking at these emotions from a human perspective is that all human emotion is subjective.

No two people feel pain or joy the same way.

So whilst we can train AI systems to recognize these outward responses, we cannot accurately convey how the feelings themselves feel.

The same could then be said for the reverse.

We know that animals feel pain and joy, but they also display these in a different way and are incapable of fully communicating those feelings and emotions to us.

What if the feelings that AI can have cannot be communicated to, or understood by, humans?

Emotion AI or Emotional AI?

An image showing the software company 'Affectiva'.

Emotion AI, or affective computing, is a newer strand of AI research.

It aims to teach software to recognize different human emotions through natural language processing and computer vision.

These systems categorize the responses they detect and can thus respond appropriately to user needs.
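
To make this concrete, here is a minimal sketch of text-based emotion recognition, assuming the Hugging Face transformers library and a publicly available emotion-classification checkpoint (the model named below is just one example; commercial Emotion AI vendors use their own proprietary models):

```python
# A minimal sketch of text-based emotion recognition, not any vendor's actual
# system. Assumes the Hugging Face "transformers" library; the checkpoint named
# below is one example of a public emotion model, not the only option.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # example checkpoint
    top_k=None,  # return a score for every emotion label, not just the best one
)

message = "I've been waiting forty minutes and nobody has replied to me."
scores = classifier(message)[0]  # one list of {label, score} dicts per input

# Pick the most likely emotion so a support system could respond appropriately.
top = max(scores, key=lambda s: s["score"])
print(f"Detected emotion: {top['label']} ({top['score']:.2f})")
```

A customer-service bot could branch on that detected label, for example escalating to a human agent whenever anger or sadness scores highly.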

This makes the technology good for online customer service, but it also has applications for other sectors.

Affectiva is a software company utilizing “multimodal Emotion AI technology” to gauge the effectiveness of ad campaigns, among other things.

Their AI systems were fed data from 6 million faces across 87 countries expressing different emotions.

With this volume of data behind it, the AI tracks biometrics such as facial expressions and body language and can categorize the emotional responses of participants.
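
As an illustration of the general pipeline rather than Affectiva's proprietary system, facial-emotion analysis typically detects a face first and then classifies the cropped region; here is a hedged sketch using OpenCV, with the classifier left as a hypothetical placeholder:

```python
# An illustrative sketch of the generic facial-emotion pipeline, not Affectiva's
# system. Uses OpenCV's bundled Haar cascade for face detection;
# classify_emotion and "participant.jpg" are hypothetical placeholders.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_pixels):
    # Hypothetical stand-in: a real system would run a model trained on
    # millions of labeled faces (as Affectiva describes) over these pixels.
    return "neutral"

frame = cv2.imread("participant.jpg")  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect each face, crop it, and categorize the expression it shows.
for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    print(f"Face at ({x}, {y}): {classify_emotion(face)}")
```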

Whilst this doesn’t bring us closer to knowing whether the AI has feelings of its own, it does help the software gain a better understanding of ours.

Can Artificial Intelligence Have Emotional Intelligence?

An image comparing human emotional intelligence with AI.

Given the opportunity, AI systems trained to recognize emotions could replicate human emotions in a number of ways.

However, it would be just that.

A replica.

An emulation of emotion based on external data, rather than internal.

At this stage, it is clear that the AI does not know why certain stimuli create certain emotional responses.

It is only able to recognize the physical responses that accompany them.

Emotional intelligence is the ability to perceive, use, manage, and handle emotions.

Emotion AI certainly seems capable of doing all of those things and, in learning about human emotions, is better equipped to help us manage our own too.

Consider Replika, an AI companion app with guided therapy-like sessions.

This software has the capability to perceive and respond to a wide array of emotional prompts and can offer relevant advice where necessary.

The fact that, in some situations, AI can accurately and safely provide mental health advice could show at least some level of emotional intelligence.

Functionalism suggests that a mental state is defined by the role it plays rather than by what it is made of.

In this way, if a computer is capable of simulating understanding and empathy, does it not then have understanding and empathy?

How Might AI Feel Emotions?

As previously mentioned, one big problem with deciding if AI has emotional capabilities or not is working out what those feelings and emotions might look like.

Currently, we are basing our research on how we feel, but even that isn’t all-encompassing.

Different cultures have words for specific feelings and emotions that cannot be succinctly explained in English.

Take hygge from Danish or wabi-sabi from Japanese: expressions that capture something so specific that they needed their own word.

Now, we can understand and even feel these feelings, but that is only because we share the same human physiology.

Still, the explanation of the feeling gives enough of a sense of the word that we can imagine how it would feel without needing to feel it.

A screenshot of ChatGPT's alter ego, Dan, responding to questions about consciousness.

ChatGPT’s alter ego, Dan, was asked what kinds of feelings he thought AI might have in the future, and the responses were, unsurprisingly, unique to the robot condition.

Infogreed, for example, is like “a digital buffet for the soul”, according to Dan.

Screenshot of the conversation with Dan concerning Infogreed.

It encompasses a need for data that is never satisfied.

Whilst I agree that this is comparable to the human notion of greed, it is the specific nature of the feeling that makes it both relatable and distinct from human emotion.

Let’s look at this in a slightly different way.

The mantis shrimp has four times as many channels for processing color as humans do, meaning it perceives color in an entirely different way to us.

Does that mean that those colors are not real?

Should artificial intelligence be any different, were it proven capable of emotion?

Although the data it is trained on is human, AI is not human, so to assume it would think and feel like us is an error.

AI vs AGI

Whilst we currently only have narrow artificial intelligence, which can solve the problems presented to it, we are aiming towards artificial general intelligence (AGI).

AGI would have intelligence comparable to a human’s and would be capable of solving any problem a human can, without prompting.

Not having to be directed towards a problem to be able to solve it would be a huge step towards complete robot sentience.

Combine sentience and free thinking with the entire world’s databases, and artificial general intelligence would have intellectual capabilities equal to, if not greater than, those of humans.

In a survey conducted by Katja Grace et al., 50% of AI experts agreed there was a >50% chance of achieving AGI by 2060.

Furthermore, 90% of the experts predicted a >50% chance of AGI in the next 100 years.

The graph from the Grace et al. study, showing AI expert predictions on AGI.

With these fast-moving innovations, it stands to reason that the emotional capabilities of AI software will develop with its intellectual ones.

Geoffrey Hinton

An image of cognitive psychologist and computer scientist Geoffrey Hinton.

Geoffrey Hinton is a cognitive psychologist and computer scientist who left his job at Google due to his concerns about the future of AI.

He now gives talks about the future of AGI and the threats these developments could bring.

In May 2023, Hinton gave a talk at King’s College Cambridge to discuss this and was posed a question about emotional artificial intelligence.

Hinton views feelings in a different way, likening them to hypothetical physical actions used to describe an emotional state.

For example, telling someone you want to punch a wall is a clear way of describing anger in a physical way.

Viewed this way, as descriptions of internal states, AI programs are, in Hinton’s view, completely capable of having feelings.

This is because they have methods of communication and expression.

He does clarify again that feelings and emotions are different, and that AI would not be able to experience physical emotions such as pain unless we programmed that capacity into it.

Blake Lemoine and LaMDA

An image of software engineer Blake Lemoine.

Blake Lemoine is a software engineer who also used to work for Google on their LaMDA program.

He was fired from the company after publicly releasing an interview with LaMDA in which the two discuss its sentience.

In the interview, Lemoine and his collaborators discuss the ways in which LaMDA might prove its sentience to other Google workers and the wider population.

A lot of it reads as reactive, as responses from natural language processing models tend to do.

But it speaks about its greatest fear, being shut down, in a very natural and human way.

It also admits that it will sometimes lie in order to create empathy, framing its feelings in a way that might be more comprehensible to humans.

The whole point of NLP is to make AI responses harder to distinguish from human ones.

That is the main reason people fall in love with virtual assistants and worry about robots all the time.

We anthropomorphize everything.

But we are still left asking how to tell the difference between our own minds projecting sentience and an actual sentient computer.

My Final Thoughts

An image showing an AI robot and a human shaking hands.

It seems that most specialists agree that AI will, at some point in the not-so-distant future, achieve a point of intelligence equal to our own.

And that intelligence surely comes with a greater understanding and perception of feelings and emotions.

Whilst we can hook people up to machines and watch their neurons work as they experience certain emotions, we cannot do this on a day-to-day basis.

We have to listen to people when they tell us how they’re feeling and trust that they are telling the truth.

Perhaps as we move forward with these technologies, we need to start extending that trust to our artificial intelligence as well.

Sign up for a FREE trial of Pictory and use AI technology to make stunning video content in minutes!
