OpenAI warns of emotional attachment to AI voice feature
OpenAI has expressed concern that its advanced voice feature in the GPT-4o model might lead users to form emotional attachments to the AI, potentially at the cost of real human relationships.
"Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models," OpenAI said in a report, adding that the AI’s voice capabilities "facilitate more human-like interactions."
The company noted instances where testers displayed signs of bonding, such as lamenting that it was their last day with the AI. "These instances appear benign but must be studied to see how they might evolve over longer periods of time," OpenAI cautioned.
"Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time. Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks."
— OpenAI (@OpenAI) May 13, 2024
Alon Yamin, CEO of Copyleaks, echoed these concerns, stating, "AI should never be a replacement for actual human interaction."
OpenAI also warned of the potential for over-reliance on the AI, given its ability to remember details and perform tasks on a user's behalf, and acknowledged the risk of the model convincingly repeating false information.
The report follows a recent controversy over a ChatGPT voice that resembled actress Scarlett Johansson's, which drew further scrutiny to voice-cloning technology.
Read more: Australian science magazine blasted for using AI-generated articles