OpenAI's head of model behavior and policy, Joanne Jang, has written a blog post on X about human-AI relationships, laying out a number of well-considered ideas about the subject and how OpenAI approaches the issues around it. As AI models get better at imitating life and carrying on a conversation, people have begun to treat AI chatbots as if they were people too. It makes sense that OpenAI would want to make clear it is aware of this and is factoring it into its plans.
But the thoughtful, nuanced approach, including designing models that feel helpful and friendly but not conscious, is missing something crucial. No matter how clear and careful Jang tries to be, people forming emotional connections with AI is not an occasional outlier event or a future hypothetical; it is happening now, and it seems to be happening a lot.
OpenAI may have been caught off guard, as CEO Sam Altman has noted that the company is surprised by how many people anthropomorphize AI and how deeply users claim to connect with the models. He has even acknowledged the emotional pull and the potential risks. That is why Jang's post exists.
She makes it clear that OpenAI is building models to serve people and that it prioritizes the emotional side of that equation. The company is investigating how and why people form emotional attachments to AI and what that means for shaping future models. She draws a pointed distinction between ontological consciousness, as in the actual consciousness humans have, and perceived consciousness, or whether the model seems conscious to users. Perceived consciousness is what matters now, because that is what affects the people interacting with the AI. The company is trying to thread a behavioral needle, making the AI warm and helpful without pretending it has feelings or a soul.
Nevertheless, the clinical yet compassionate language could not hide a glaring omission. It felt like watching someone carefully set out a wet floor sign and brag about plans for waterproof buildings a week after a flood left the floor knee-deep in water.
The blog post's elegant framing and cautious optimism, and its focus on responsible model creation grounded in research and long-term cultural conditioning, sidestep the messy reality of how people are already forming deep connections with AI chatbots, including ChatGPT. Many people don't just talk to ChatGPT as if it were software, but as if it were a person. Some even claim to have fallen in love with an AI companion, or to use one to replace human connection entirely.
AI intimacy
There are Reddit threads, Medium essays, and viral videos of people whispering sweet nothings to their favorite chatbot. It can be funny, or sad, or even unsettling, but what it is not is theoretical. Legal cases over whether AI chatbots contributed to suicides are already underway, and more than one person has reported relying on an AI to the point that it has become harder to form real relationships.
OpenAI does note that constant, judgment-free attention from a model can feel like companionship. And the company admits that shaping a chatbot's tone and personality can influence how emotionally alive it feels, with rising engagement from users drawn into these relationships. But the tone of the piece is too breezy and academic to acknowledge the potential scale of the problem.
Because with the AI intimacy toothpaste already out of the tube, this is a matter of real-world behavior and how the companies behind the AI shape that behavior now, not just in the future. Ideally, they would already have systems for detecting dependency. If someone spends hours a day with ChatGPT, talking to it as if it were their partner, the system should be able to gently flag that behavior and suggest a break.
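To make that idea concrete, here is a minimal, purely illustrative sketch of what such a dependency check might look like. Everything in it, the `SessionStats` fields, the thresholds, and the nudge message, is hypothetical and not anything OpenAI has described.

```python
from dataclasses import dataclass

# Hypothetical per-user usage summary; the fields and thresholds below are
# placeholders to illustrate the idea, not anything OpenAI has published.
@dataclass
class SessionStats:
    hours_today: float          # total conversation time in the last 24 hours
    days_active_streak: int     # consecutive days with long sessions
    partner_like_messages: int  # messages classified as romantic/partner-style talk

def looks_dependent(stats: SessionStats) -> bool:
    """Naive heuristic: heavy daily use combined with partner-style talk."""
    heavy_use = stats.hours_today >= 3 and stats.days_active_streak >= 7
    intimate_tone = stats.partner_like_messages >= 20
    return heavy_use and intimate_tone

def maybe_suggest_break(stats: SessionStats) -> str | None:
    """Return a gentle nudge rather than blocking the user outright."""
    if looks_dependent(stats):
        return ("You've been chatting with me a lot lately. "
                "Remember I'm an AI, not a person. It might be worth taking a break.")
    return None

# Example: long daily sessions plus a very personal tone earn a gentle nudge.
print(maybe_suggest_break(SessionStats(hours_today=4.5,
                                       days_active_streak=10,
                                       partner_like_messages=35)))
```

The point of the sketch is the shape of the intervention: detect a pattern, then nudge softly, rather than lock anyone out.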
And romantic connections need some hard limits. Not an outright ban, which would be foolish and probably counterproductive, but firm rules that any AI engaging in romantic role play must remind people that they are talking to a bot, one that is not really alive or aware. People are masters of projection, and a model does not have to be flirty for a user to fall for it, of course, but any hint of the conversation drifting in that direction should trigger those protocols, and they should be extra strict when children are involved.
The same applies to AI models as a whole. The occasional reminder from ChatGPT that says, "Hey, I'm not a real person," may feel awkward, but in some cases it is demonstrably necessary, and a good prophylactic in general. It isn't the users' fault; people anthropomorphize everything. Putting googly eyes on Roombas and giving our vehicles names and personalities is seen as nothing more than a little quirky. It is not surprising that a tool as responsive and verbal as ChatGPT could feel like a friend, a therapist, or even a partner. The point is that companies like OpenAI have a responsibility to anticipate this and design for it, and should have from the start.
You could argue that adding all these guardrails ruins the fun, that people should be allowed to use AI however they want, and that artificial companionship can be a balm for loneliness. And that is true, in moderate doses. But playgrounds have fences and roller coasters have seatbelts for a reason. AI that can simulate and provoke emotions without safety controls in place is simply negligent.
I am glad OpenAI is thinking about this; I just wish it had done so sooner, or with more urgency now. AI product design should reflect the reality that people are already in relationships with AI, and those relationships need more than thoughtful essays to stay healthy.