AI can now clone your personality in just two hours – and that’s a dream for deepfake scammers
- New study trained AI models based on answers given in a two-hour interview
- AI could replicate participants’ answers with 85% accuracy
- Future studies may use agents instead of people
You may think your personality is unique, but all an AI model needs is a two-hour interview to create a virtual replica of your attitude and behavior. That is according to a new paper published by researchers at Stanford and Google DeepMind.
What are simulation agents?
Simulation agents are described in the paper as generative AI models that can accurately simulate a person’s behavior “across a range of social, political, or informational contexts.”
The study asked 1,052 participants to complete a two-hour interview covering a wide range of topics, from their personal life stories to their views on contemporary social issues. Their answers were recorded, and the transcripts were used to train generative AI models – or “simulation agents” – for each individual.
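To make that idea concrete, here is a minimal sketch of how such an agent could work: a general-purpose chat model conditioned on a participant’s interview transcript and then asked to answer as that person. The model name, prompt wording, and helper function are illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal sketch of a "simulation agent": condition a general-purpose LLM
# on a participant's interview transcript, then ask it survey questions.
# Model name, prompt wording, and file names are illustrative assumptions,
# not the study's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulate_answer(transcript: str, question: str) -> str:
    """Ask the model to answer a survey question as the interviewed person."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; an assumption here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are role-playing the person interviewed below. "
                    "Answer questions exactly as they would.\n\n"
                    "--- INTERVIEW TRANSCRIPT ---\n" + transcript
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage with a two-hour interview transcript on disk:
# answer = simulate_answer(open("participant_042.txt").read(),
#                          "Do you favor or oppose stricter gun laws?")
```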
To test how well these agents could imitate their human counterparts, both were asked to complete a series of tasks, including personality tests and games. Participants were also asked to repeat their own answers fourteen days later, which gave the researchers a baseline for how consistent people are with themselves. Measured against that baseline, the AI agents were able to reproduce their counterparts’ responses with 85% accuracy.
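That 85% figure is easier to unpack with a toy calculation: the agent’s raw agreement with a participant’s original answers is measured against the participant’s own two-week self-consistency. The numbers below are invented purely for illustration.

```python
# Toy illustration of baseline-normalized accuracy (all numbers invented).

# Raw agreement: fraction of items where the agent matched the
# participant's original answers.
agent_agreement = 0.69

# Self-consistency: fraction of items the participant answered the
# same way when retested two weeks later.
self_consistency = 0.81

# Normalizing by the human baseline yields the headline-style figure.
normalized_accuracy = agent_agreement / self_consistency
print(f"{normalized_accuracy:.0%}")  # -> 85%
```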
Furthermore, the simulation agents proved similarly effective when asked to predict how participants would behave in five social science experiments.
While your personality may seem like an intangible or unquantifiable thing, this research shows that it is possible to distill your value structure from a relatively small amount of information by capturing qualitative answers to a fixed set of questions. Based on this data, AI models can convincingly imitate your personality – at least, in a controlled, test-based environment. And that could make deepfakes even more dangerous.
Double agent
The research was led by Joon Sung Park, a PhD student at Stanford. The idea behind creating these simulation agents is to give social science researchers more freedom in conducting studies. By creating digital replicas that behave like the real people they are based on, scientists can conduct studies without the expense of involving thousands of human participants each time.
They may also be able to run experiments that would be unethical to conduct with real human participants. Speaking to MIT Technology Review, John Horton, an associate professor of information technology at the MIT Sloan School of Management, said the paper shows a way you can use real people to generate personas “that can then be used programmatically/in-simulation in ways you couldn’t with real people.”
Whether the research participants are morally comfortable with this is one question. More worrying to many will be the potential for simulation agents to become something more nefarious in the future. In that same MIT Technology Review story, Park predicted that one day “there could be a bunch of little ‘yous’ running around and actually making the decisions you would have made yourself.”
For many, this will set off dystopian alarm bells. The idea of digital replicas opens up a world of security, privacy, and identity theft concerns. It doesn’t take much imagination to envision a world where scammers – who already use AI to imitate the voices of loved ones – could build deepfakes of people’s personalities to impersonate them online.
This is especially concerning when you consider that the AI simulation agents in the study were created using just two hours of interview data – far less than the amount of information currently required by companies such as Tavus, which creates digital twins based on a wealth of user data.