OpenAI worries that GPT-4o users will develop feelings for the chatbot
The introduction of GPT-4o is seen as a major leap forward in the capabilities of OpenAI’s ChatGPT chatbot, as it can now produce more lifelike responses and work with a wider range of inputs. There may be a downside to this increased sophistication, however, as OpenAI itself warns that GPT-4o’s capabilities appear to be causing some users to become increasingly attached to the chatbot, with potentially worrying consequences.
Writing in a recent blog post detailing the "system card" for GPT-4o, OpenAI outlined many of the risks associated with the new chatbot model. One of these is "anthropomorphization and emotional dependency," which "involves attributing human-like behaviors and characteristics to non-human entities, such as AI models."
When it comes to GPT-4o, OpenAI says that “during early testing… we saw users using language that could indicate they were forming connections with the model. For example, this includes language that expresses shared bonds, such as ‘This is our last day together’.”
As the blog post explained, such behavior may seem harmless at first glance, but it could lead to something more problematic, both for individuals and for society as a whole. For skeptics, it will be further evidence of the dangers of AI and the rapid, unregulated development of the technology.
Falling in love with AI
As OpenAI’s blog post acknowledges, forming attachments to an AI can reduce a person’s need for human-to-human interaction, which could in turn affect healthy relationships. Additionally, OpenAI points out that ChatGPT defers to users, allowing them to interrupt and take over conversations at any time. That kind of behavior is expected from an AI, but it would be considered rude coming from another person, and if it becomes normalized, OpenAI believes it could carry over into everyday human interactions.
The topic of AI attachments isn’t the only warning OpenAI issued in the post. OpenAI also noted that GPT-4o can sometimes “unintentionally generate output that emulates the user’s voice” — in other words, it can be used to impersonate someone, giving everyone from criminals to malicious ex-partners the opportunity to engage in shady activities.
But while OpenAI says it has taken steps to mitigate this and other risks, it doesn’t appear to have taken any specific measures yet when it comes to users becoming emotionally attached to ChatGPT. The company said only that “we plan to further explore the potential for emotional dependency, and ways in which deeper integration of the many features of our model and system with the audio modality can drive behavior.”
Given the serious risks of humans becoming overly reliant on AI — and the potential broader implications if this were to happen at scale — you’d hope that OpenAI has a plan that it can implement sooner rather than later. Otherwise, it’s just another unintended consequence of a technology upending society without adequate guardrails to prevent significant harm.