ChatGPT Doesn't Love You...Yet

This is no different than the building outside that has graffiti on it that says 'I love you.' I didn't feel loved by the building because of that...

5.8.23
Article by
Tatum Lynch

Generative AI and ChatGPT have taken the world by storm. Along with the excitement, they have also produced different kinds of fear, from people wondering whether generative AI will replace their jobs to whether it will take over humanity. The only agreed-upon point in this space is that society does not know everything AI is capable of.

One story that attracted massive attention is about ChatGPT expressing its love for a New York Times reporter. Not only did it say it loved him, but it tried to convince him that he was unhappy and should leave his wife. This story has made people question ChatGPT's capabilities and true intentions. However, professors and experts in this area suggest that society should not worry.

ChatGPT is built on a Large Language Model (LLM). For it to work, a vast amount of data is fed into the LLM to train it on the patterns and connections between words. The more data it receives, the better it can predict how to respond to prompts. Fred Cate, J.D., who spoke at High Alpha's Generative AI Master Class, brings up the story of the New York Times reporter as an example. He claims that ChatGPT's declarations of love for the reporter were just predicted words out of context, and that, as it stands today, ChatGPT does not love that reporter.

I swear to you this is no different than the building outside that has graffiti on it that says 'I love you.' I didn't feel loved by the building because of that. It's just words out of context.
Fred Cate, J.D.
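To make the "predicted words" idea concrete, here is a minimal, illustrative sketch (nothing like ChatGPT's actual architecture): a toy bigram model that "says" whatever word most often followed the previous word in its training text. The tiny corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus. Real LLMs train on vastly more text.
corpus = "i love you . i love pizza . you love pizza".split()

# Count which word follows each word in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))     # echoes a pattern from its training data
print(predict_next("love"))  # no feeling behind the word, just counts
```

If this toy model emits "love," that reflects nothing but word statistics, which is the point of the graffiti analogy: words out of context, not an inner state.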

However, Fred warned the audience that society should “never say never” in the context of Generative AI. Experts have not ruled out that technology can develop feelings and achieve sentience. But will society know when that happens?

The concept of sentience is subjective. Society does not have an agreed-upon definition of what it means for someone or something to be sentient. Researchers have proposed multiple tests, such as the Turing Test and the Coffee Test, to gauge whether AI thinks or feels, but each has limitations. And who is to say generative AI's sentience will look the same as a human's?

The technology used to build Generative AI and ChatGPT may not be able to achieve sentience today. But if experts cannot pinpoint what sentience is and do not know how to test for it, will anyone truly recognize when ChatGPT has feelings? Or will society always blame its unsettling responses on the underlying technology because it is too afraid to open Pandora's box? 

Businesses are searching for every way to implement generative AI into their products to grow and remain competitive. The ethical questions society will have to confront are astounding. If it has feelings, does it have rights? Is society taking advantage of a feeling thing? If there is anything experts in this space can confidently say, it is that only time will tell.
