Would you buy your child a ChatGPT-powered Barbie? I'm queasy at the prospect of a real-life Small Soldiers scenario
Mattel is partnering with OpenAI to build AI-powered toys, which could lead to great fun, but it also sounds like the premise for a million stories about things going wrong.
To be clear, I don't think AI will end the world. I've used ChatGPT in a million ways, including as an assistant for parenting tasks. Among other things, AI has helped me brainstorm bedtime stories and design coloring books. But that's me using it as a tool, not opening it up for direct interaction with children.
The official announcement is, of course, very optimistic. Mattel says it is bringing the "magic of AI" to playtime and promises age-appropriate, safe, and creative experiences for children. OpenAI says it is pleased to help bring ChatGPT to these toys, and both companies seem to be framing this as a step forward for playtime and child development.
But I can't help thinking about how ChatGPT conversations can spiral into bizarre conspiracy theories, except now it's a Barbie doll talking to an eight-year-old. Or a G.I. Joe veering from positive messages about "knowing is half the battle" to pitching cryptocurrency mining because some six-year-old heard the word "blockchain" somewhere and thought it sounded like a cool weapon for the toy.
As you may have noticed from the image above, my first thought was of the movie Small Soldiers, the corny 1998 classic about a toy company executive who decides to save money by installing military-grade AI chips in action figures, leading to toys waging guerrilla warfare in the suburbs. It was satire, and not a bad one. But however exaggerated that outcome may be, it's hard not to see the glimmer of chaotic potential in putting generative AI into the toys children spend their days with.
I get the appeal of AI in a toy, I do. Barbie could be more than just a doll you dress up; she could be a curious, smart conversationalist who explains space missions or plays a dozen different roles. Or you could have a Hot Wheels car that comments on the track you built for it. I can even imagine AI in Uno acting as a card-game companion that actually teaches younger children strategy and sportsmanship.
But I still don't think generative AI models like ChatGPT should be put in the hands of children. They can be scaled back in the name of safety, but at a certain point it stops being AI at all and becomes simply a fairly robust set of pre-planned responses without AI's flexibility. That flexibility is the problem: it brings the weirdness, hallucinations, and moments of unintended inappropriateness that adults can brush off but children may absorb.
Playing with AI
Mattel has been at this a long time and generally knows what it's doing with its products. It is certainly not in its interest to let its toys go even a little haywire. The company says it will build safety and privacy into every AI interaction, and it promises to focus on appropriate experiences. But "appropriate" is a very slippery word in AI, especially when it comes to language models trained on the internet.
ChatGPT, however, is not a closed-loop system built for toys. It was not specifically designed for young children. And even if you constrain it with guidelines, filters, and special voice modules, it is still built on a model that learns and imitates. There is also the deeper question: what kind of relationship do we want children to have with these toys?
There is a big difference between playing with a doll and imagining its side of the conversation, and forming a bond with a toy that responds on its own. I don't expect a doll to go full Chucky or M3GAN, but when we blur the line between playmate and program, the results can be hard to predict.
I use ChatGPT with my son the same way I use scissors or glue: a tool for his entertainment that I control. I am the gatekeeper, and AI built into a toy is hard to gatekeep that way. The doll talks. The car answers. The toy just keeps going, and children may not notice when something is off because they don't have the experience to know better.
If Barbie's AI glitches, if G.I. Joe suddenly slips into dark military metaphors, if a Hot Wheels car says something bizarre, a parent may not even know until it has already been said and absorbed. If we are not comfortable letting these toys run unsupervised, they are not ready.
This is not about banning AI from childhood. It is about knowing the difference between what is useful and what is too risky. I want AI in the toy world to be carefully constrained, the way a TV show aimed at toddlers is carefully designed to be appropriate. Those shows don't (or almost never) go off script, but the power of AI lies in writing its own script.
I may sound too harsh about this, and goodness knows there have been other tech toy panics. Furbies were creepy. Talking Elmo had glitches. Talking Barbies once had sexist lines about math class being tough. All problems that could be solved, except perhaps the Furbies. I do think AI has potential in toys, but I'll stay skeptical until I see how well Mattel and OpenAI navigate the narrow path between not really using AI at all and giving the AI so much freedom that it becomes a bad virtual friend for your child.