Generative AI and ChatGPT have taken the world by storm, and people are amazed by them. Along with this excitement has come fear. From worries that Generative AI will replace jobs to worries that it will take over humanity, the only agreed-upon point in this space is that society does not yet know everything AI is capable of.
One story that caught massive attention is about ChatGPT expressing its love for a New York Times reporter. Not only did it say it loved him, but it tried to convince him he was unhappy and should leave his wife. This story has made people question ChatGPT’s capabilities and true intentions. However, professors and experts in this area suggest that society should not worry.
ChatGPT is built on a Large Language Model (LLM). For it to work, a vast amount of text is fed into the LLM to train it on the patterns and connections among words. The more data it receives, the better it becomes at producing outputs and predicting how to respond to prompts. Fred Cate, J.D.—who spoke at High Alpha’s Generative AI Master Class—brings up the story of the New York Times reporter as an example. He argues that ChatGPT’s expressions of feeling for the reporter were just predicted words taken out of context, and that, as it stands today, ChatGPT does not love that reporter.
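The idea that an LLM’s output is pattern-based prediction rather than feeling can be illustrated with a deliberately tiny sketch. This is not ChatGPT’s actual mechanism (real LLMs use neural networks over billions of parameters); it is a toy bigram model over a made-up corpus, showing how “the next word” can be chosen purely from observed frequencies:

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real LLM trains on
# vastly more text with a far richer model than word-pair counts.
corpus = (
    "the model predicts the next word "
    "the model learns patterns "
    "the model improves with data"
).split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("the"))  # prints "model" — the most common follower of "the"
```

The point of the sketch: when this program emits a word, it is reporting a statistical pattern, not an intention. The argument above is that ChatGPT’s declaration of love is the same kind of output, just produced by a far more sophisticated predictor.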
However, Fred warned the audience that society should “never say never” in the context of Generative AI. Experts have not ruled out that the technology could one day develop feelings and achieve sentience. But will society know when that happens?
The concept of sentience is subjective. Society does not have an agreed-upon definition of what it means for someone or something to be sentient. Researchers have proposed multiple tests, such as the Turing Test and the Coffee Test, to probe machine intelligence, but each has limitations. And who is to say Generative AI’s sentience will look the same as a human’s?
The technology used to build Generative AI and ChatGPT may not be able to achieve sentience today. But if experts cannot pinpoint what sentience is and do not know how to test for it, will anyone truly recognize when ChatGPT has feelings? Or will society always blame its unsettling responses on the underlying technology because it is too afraid to open Pandora’s box?
Businesses are searching for every way to integrate Generative AI into their products to grow and remain competitive. The ethical questions society will have to confront are astounding. If it has feelings, does it have rights? Is society taking advantage of a feeling thing? If there is anything experts in this space can confidently say, it is that only time will tell.