Caryn Marjorie wanted to talk to as many of her followers as she could — so she made an AI clone of herself.
The Snapchat influencer, who has 1.8 million subscribers, launched an AI-powered, voice-based chatbot that she hopes will “cure loneliness.”
Called CarynAI, the chatbot is described on its website as a “virtual girlfriend.” It allows Marjorie’s fans to “enjoy private, personalized conversations” with an AI version of the influencer, the chatbot’s website states.
The bot has gone viral, with Marjorie making headlines, stirring backlash and even receiving some death threats. The bot has also ignited discourse around the ethics of companion chatbots.
Marjorie, who in her Twitter bio calls herself “The first influencer transformed into AI,” did not immediately respond to a request for comment.
In a tweet on Thursday, she wrote, “CarynAI is the first step in the right direction to cure loneliness.”
“Men are told to suppress their emotions, hide their masculinity, and to not talk about issues they are having,” Marjorie, 23, wrote. “I vow to fix this with CarynAI. I have worked with the world’s leading psychologists to seamlessly add [cognitive behavioral therapy] and [dialectic behavior therapy] within chats. This will help undo trauma, rebuild physical and emotional confidence, and rebuild what has been taken away by the pandemic.”
The CarynAI website states that the team spent over 2,000 hours designing and coding the chatbot to create an “immersive AI experience.”
Forever Voices, an AI company, developed the chatbot by analyzing Marjorie’s now-deleted YouTube content and layering it with OpenAI’s GPT-4 software.
John Meyer, CEO of Forever Voices, did not immediately respond to a request for comment.
He tweeted Thursday that he is “Proud of our team for the thousands of hours of work put into this,” calling the partnership with Marjorie “an incredible step forward in the future of AI-to-Human interaction!”
According to a Fortune report, CarynAI generated $71,610 in revenue after one week of beta testing the “virtual girlfriend.” Fans reportedly pay $1 per minute to use the chatbot and there are currently over 1,000 users.
While CarynAI aims to give users an intimate experience, it is not supposed to engage in “sexually explicit” interactions.
However, after outlets reported that the chatbot does so when prompted, Marjorie issued a statement to Insider saying that the AI was “not programmed to do this and has seemed to go rogue. My team and I are working around the clock to prevent this from happening again.”
Irina Raicu, director of internet ethics at the Markkula Center for Applied Ethics at Santa Clara University, said the launch of CarynAI seems premature “because problems that should have been absolutely anticipated don’t seem to have been.”
Prior tools have run into similar problems, Raicu said. She pointed to a recent incident in which the AI company Replika, which was also founded to provide supportive AI companions, scrambled to combat erotic roleplay among its chatbots.
Raicu also expressed concern that CarynAI’s claims to potentially “cure loneliness” are not backed up by sufficient psychological or sociological research.
“These kind of grand claims about a product’s goodness can just mask the desire to monetize further the fact that people want to pretend to have a relationship with an influencer,” she said.
These types of chatbots can add “a second layer of unreality” to parasocial relationships between influencers and fans, she noted.
Raicu said she finds the claims around CarynAI problematic, particularly Marjorie’s assertion that it is “an extension of my consciousness.”
“These are claims that AI researchers have been trying so hard to combat, to tell people this is absolutely not what such tools do even if the language now is sounding like there’s sentience behind it,” she said. “There isn’t.”
Raicu said influencers should be aware of the Federal Trade Commission’s guidance on artificial intelligence products. In February, the FTC released guidelines for advertisers promoting AI products and urged companies to avoid exaggerating claims.
Meyer told Fortune that his company is looking to hire a chief ethics officer and that it takes ethics “seriously.”
Marjorie continues to tweet updates about the bot. On Friday, she wrote: “if you are rude to CarynAI, it will dump you.”