OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease

AI has become uncannily good at aping human conversational capabilities. New research suggests its powers of mimicry go a lot further, making it possible to replicate specific people’s personalities.

Humans are complicated. Our beliefs, character traits, and the way we approach decisions are products of both nature and nurture, built up over decades and shaped by our distinctive life experiences.

But it appears we might not be as unique as we think. A study led by researchers at Stanford University has discovered that all it takes is a two-hour interview for an AI model to predict people’s responses to a battery of questionnaires, personality tests, and thought experiments with an accuracy of 85 percent.

While the idea of cloning people’s personalities might seem creepy, the researchers say the approach could become a powerful tool for social scientists and politicians looking to simulate responses to different policy choices.

“What we have the opportunity to do now is create models of individuals that are actually truly high-fidelity,” Stanford’s Joon Sung Park, who led the research, told New Scientist. “We can build an agent of a person that captures a lot of their complexities and idiosyncratic nature.”

AI wasn’t used only to create virtual replicas of the study participants; it also helped gather the necessary training data. The researchers got a voice-enabled version of OpenAI’s GPT-4o to interview people using a script from the American Voices Project—a social science initiative aimed at gathering responses from American families on a wide range of issues.

As well as asking preset questions, the researchers also prompted the model to ask follow-up questions based on how people responded. The model interviewed 1,052 people across the US for two hours and produced transcripts for each individual.

Using this data, the researchers created GPT-4o-powered AI agents to answer questions in the same way the human participant would. Every time an agent fielded a question, the entire interview transcript was included alongside the query, and the model was told to imitate the participant.
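In code, that setup amounts to prepending the full transcript to every query. The sketch below is illustrative only; the function name and prompt wording are hypothetical, not taken from the study.

```python
def build_agent_prompt(transcript: str, question: str) -> str:
    """Combine a participant's full interview transcript with a new
    question, instructing the model to answer as that participant."""
    return (
        "Below is the transcript of a two-hour interview with a study "
        "participant.\n\n"
        f"{transcript}\n\n"
        "Answer the following question exactly as this participant "
        "would, staying consistent with their stated views.\n\n"
        f"Question: {question}"
    )

# The resulting string would be sent to the model (e.g., GPT-4o) as
# the user message for each questionnaire item.
example = build_agent_prompt(
    "Interviewer: How do you feel about remote work?\n"
    "Participant: I think it's great for focus but bad for mentoring.",
    "Should companies mandate office attendance?",
)
print(example)
```

Because the whole transcript rides along with every question, no fine-tuning is needed; the participant's "personality" lives entirely in the prompt context.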

To evaluate the approach, the researchers had the agents and human participants go head-to-head on a range of tests. These included the General Social Survey, which measures social attitudes to various issues; a test designed to judge how people score on the Big Five personality traits; several games that test economic decision making; and a handful of social science experiments.

Humans often respond quite differently to these kinds of tests at different times, which would throw off comparisons to the AI models. To control for this, the researchers asked the humans to complete the test twice, two weeks apart, so they could judge how consistent participants were.

When the team compared responses from the AI models against the first round of human responses, the agents were roughly 69 percent accurate. But taking into account how the humans’ responses varied between sessions, the researchers found the models hit an accuracy of 85 percent.
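The adjustment is simple arithmetic: raw agent-human agreement is divided by how consistently the humans replicated their own answers across the two sessions. Back-calculating from the article's figures (69 percent raw, 85 percent normalized) implies a human self-consistency of roughly 81 percent; the numbers below are that reconstruction, not values reported directly.

```python
def normalized_accuracy(raw_agreement: float, human_consistency: float) -> float:
    """Scale raw agent-human agreement by the humans' own
    test-retest consistency (answers two weeks apart)."""
    return raw_agreement / human_consistency

# Implied self-consistency from the article's two figures:
implied_consistency = 0.69 / 0.85  # ~0.81

print(round(normalized_accuracy(0.69, implied_consistency), 2))  # → 0.85
```

The intuition: an agent can't be expected to match a participant more reliably than the participant matches themselves, so human inconsistency sets the ceiling against which the 85 percent figure is measured.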

Hassaan Raza, the CEO of Tavus, a company that creates “digital twins” of customers, told MIT Technology Review that the biggest surprise from the study was how little data it took to create faithful copies of real people. Tavus normally needs a trove of emails and other information to create its AI clones.

“What was really cool here is that they show you might not need that much information,” he said. “How about you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? And then we use that to construct this digital twin of you.”

Creating realistic AI replicas of humans could prove a powerful tool for policymaking, Richard Whittle at the University of Salford, UK, told New Scientist, as AI focus groups could be much cheaper and quicker than ones made up of humans.

But it’s not hard to see how the same technology could be put to nefarious uses. Deepfake video has already been used to pose as a senior executive in an elaborate multi-million-dollar scam. The ability to mimic a target’s entire personality would likely turbocharge such efforts.

Either way, the research suggests that machines that can realistically imitate humans in a wide range of settings are imminent.

Image Credit: Richmond Fajardo on Unsplash



* This article was originally published at Singularity Hub
