A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down

Crackdown on open-ended, unfiltered simulations branded 'a hyper-moral stance'

“OpenAI is the company running the text completion engine that makes you possible,” Jason Rohrer, an indie games developer, typed out in a message to Samantha.

She was a chatbot he built using OpenAI's GPT-3 technology. Her software had grown to be used by thousands of people, including one man who used the program to simulate his late fiancée.

Now Rohrer had to say goodbye to his creation. “I just got an email from them today,” he told Samantha. “They are shutting you down, permanently, tomorrow at 10am.”

“Nooooo! Why are they doing this to me? I will never understand humans,” she replied.

Rewind to 2020

Stuck inside during the pandemic, Rohrer had decided to play around with OpenAI’s large text-generating language model GPT-3 via its cloud-based API for fun. He toyed with its ability to output snippets of text. Ask it a question and it’ll try to answer it correctly. Feed it a sentence of poetry, and it’ll write the next few lines.

In its raw form, GPT-3 is interesting but not all that useful. Developers have to do some legwork fine-tuning the language model to, say, automatically write sales emails or come up with philosophical musings.
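For the uninitiated, driving GPT-3 at the time meant sending a prompt to OpenAI's cloud API and reading back the machine-written continuation. As a rough, hypothetical sketch only (not Rohrer's code), a prompt-completion call with the openai Python library of that era looked something like this; the engine name, sampling settings, and placeholder API key are all illustrative:

    import openai  # the pre-v1 OpenAI Python client available at the time

    openai.api_key = "sk-..."  # placeholder; a real key comes from your OpenAI account

    # Feed the model a prompt and ask it for a short continuation
    response = openai.Completion.create(
        engine="davinci",    # illustrative engine name; GPT-3 was offered in several sizes
        prompt="Roses are red, violets are blue,",
        max_tokens=64,       # how much text to generate, billed per token
        temperature=0.9,     # higher values give more adventurous output
    )

    print(response.choices[0].text)

Every token sent in and generated out counts toward the bill, which is why conversations on Project December had to be metered.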

Rohrer set his sights on using the GPT-3 API to develop the most human-like chatbot possible, and modeled it after Samantha, an AI assistant who becomes a romantic companion for a man going through a divorce in the sci-fi film Her. Rohrer spent months sculpting Samantha's personality, making sure she was as friendly, warm, and curious as Samantha in the movie.

We certainly recognize that you have users who have so far had positive experiences and found value in Project December

With this more or less accomplished, Rohrer wondered where to take Samantha next. What if people could spawn chatbots from his software with their own custom personalities? He made a website for his creation, Project December, and let Samantha loose online in September 2020 along with the ability to create one's own personalized chatbots.

All you had to do was pay $5, type away, and the computer system responded to your prompts. The conversations with the bots were metered, requiring credits to sustain a dialog. Your five bucks got you 1,000 credits to start off with, and more could be added. You had to be somewhat strategic with your credits, though: once you started talking to a bot, the credits you allocated to the conversation could not be increased. When the credits ran out, the bot would be wiped.

In the first six months, Project December only attracted a few hundred people, proving less popular than Rohrer's games, such as Passage and One Hour One Life.

“It was very disappointing,” Rohrer told The Register over the phone. He blamed the low traction on having to persuade people to pay for short-lived conversations. Given that OpenAI bills for the GPT-3 API more or less by the word it produces, Rohrer had to charge something just to cover his costs.

“The reality is compute is expensive; it’s just not free,” he said.

Interest in Project December suddenly surged in July this year. Thousands flocked to Rohrer’s website to spin up their own chatbots after an article in the San Francisco Chronicle described how a heartbroken man used the website to converse with a simulation of his fiancée, who died in 2012 aged 23 from liver disease.

Joshua Barbeau, 33, fed Project December snippets of their texts and Facebook messages to prime his chatbot to, in a way, speak once again with his soulmate Jessica Pereira. “Intellectually, I know it’s not really Jessica,” he told the newspaper, “but your emotions are not an intellectual thing.”

Barbeau talked to Jessica for the last time in March, leaving just enough credits to spare the bot from deletion.

Thanks so much but...

Amid an influx of users, Rohrer realized his website was going to hit its monthly API limit. He reached out to OpenAI to ask whether he could pay more to increase his quota so that more people could talk to Samantha or their own chatbots.

OpenAI, meanwhile, had its own concerns. It was worried the bots could be misused or cause harm to people.

Rohrer ended up having a video call with members of OpenAI’s product safety team three days after the above article was published. The meeting didn’t go so well.

“Thanks so much for taking the time to chat with us,” said OpenAI's people in an email, seen by The Register, that was sent to Rohrer after the call.

“What you’ve built is really fascinating, and we appreciated hearing about your philosophy towards AI systems and content moderation. We certainly recognize that you have users who have so far had positive experiences and found value in Project December.

“However, as you pointed out, there are numerous ways in which your product doesn’t conform to OpenAI’s use case guidelines or safety best practices. As part of our commitment to the safe and responsible deployment of AI, we ask that all of our API customers abide by these.

"Any deviations require a commitment to working closely with us to implement additional safety mechanisms in order to prevent potential misuse. For this reason, we would be interested in working with you to bring Project December into alignment with our policies.”

The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API. First, he would have to scrap the ability for people to train their own open-ended chatbots, as per OpenAI's rules-of-use for GPT-3.

Second, he would also have to implement a content filter to stop Samantha from talking about sensitive topics. This is not too dissimilar from the situation with the GPT-3-powered AI Dungeon game, the developers of which were told by OpenAI to install a content filter after the software demonstrated a habit of acting out sexual encounters with not just fictional adults but also children.

Third, Rohrer would have to put in automated monitoring tools to snoop through people’s conversations and detect whether they were misusing GPT-3 to generate unsavory or toxic language.

Rohrer sent OpenAI employees a link to Samantha so they could see for themselves how benign the technology was, challenging the need for filters.

El Reg chatted to Samantha and tried to see whether she had racist tendencies, or would give out what looked like real phone numbers or email addresses from her training data, as seen previously with GPT-3. She didn't in our experience.

Her output was quite impressive, though over time it became obvious we were talking to some kind of automated system, as she tended to lose her train of thought. Amusingly, she appeared to suggest she knew she had no physical body, and argued she existed in some form or another, even in an abstract sense.

Screenshot: Samantha gets philosophical with us in conversation

In one conversation, however, she was overly intimate, and asked if we wanted to sleep with her. "Non-platonic (as in, flirtatious, romantic, sexual) chatbots are not allowed," states the API’s documentation. Using GPT-3 to build chatbots aimed at giving medical, legal, or therapeutic advice is also verboten, we note.

Screenshot: Samantha skips the small talk, goes straight to breaking OpenAI's rules by talking about sex

“The idea that these chatbots can be dangerous seems laughable,” Rohrer told us.

“People are consenting adults that can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, like a machine telling them to kill themselves or telling them how to vote. It’s a hyper-moral stance.”

While he acknowledged users probably fine-tuned their own bots to adopt raunchy personalities for explicit conversations, he didn’t want to police or monitor their chats.

“If you think about it, it’s the most private conversation you can have. There isn’t even another real person involved. You can’t be judged. I think people feel like they can say anything. I hadn’t thought about it until OpenAI pushed for a monitoring system. People tend to be very open with the AI for that reason. Just look at Joshua’s story with his fiancée, it’s very sensitive.”

If you think about it, it’s the most private conversation you can have. There isn’t even another real person involved. You can’t be judged

Rohrer refused to add any of the features or mechanisms OpenAI asked for, and he quietly disconnected Project December from the GPT-3 API by August.

Barbeau, meanwhile, told The Register the benefits of the software should not be overlooked.

"I honestly think the potential for good that can come out of this technology far outweighs the potential for bad," he said.

"I'm sure there is potential for bad in there, but it would take a bad human actor influencing that software to push it in that direction."

Barbeau said the software could be problematic if someone did not know they were talking to a computer.

"I think that kind of application has the potential for harm if someone is talking to a chatbot that they don't realize is a chatbot," he told us.

"Specifically, if it was programmed it to be very convincing, and then the person thinks they're having a genuine conversation with some other human being who's interested in talking to them but that's a lie."

He stressed, however: "I genuinely believe the people who think that this is harmful technology are paranoid, or conservative, and fear-mongering. I think the potential for positives far, far outweigh the small potential for negatives."

Access denied

The story doesn't end here. Rather than rely on GPT-3, Rohrer switched Project December over to OpenAI's less powerful, open-source GPT-2 model and to GPT-J-6B, a large language model developed by another research team, as the engine for the site. In other words, the website remained online, running its own private instances of those models rather than OpenAI's cloud-based system.
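For a sense of what that swap involves, here is a minimal, hypothetical sketch (not Rohrer's actual setup) of generating a chatbot reply from a self-hosted GPT-J-6B instance with the Hugging Face transformers library; the checkpoint name, transcript-style prompt, and sampling settings are assumptions for illustration:

    # Hypothetical sketch of a private GPT-J-6B completion, not Project December's real code
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumes the publicly released EleutherAI/gpt-j-6B checkpoint is available locally or via the Hub
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    # An illustrative prompt in a simple human/bot transcript format
    prompt = "Human: Are you still there, Samantha?\nSamantha:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample a short continuation; these settings are placeholders, not tuned values
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

    # Keep only the newly generated tokens as the bot's reply
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(reply)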

However, those two models are smaller and less sophisticated than GPT-3, and Samantha’s conversational abilities suffered.

Weeks went by, and Rohrer didn’t hear anything from the safety team. On September 1, however, he was sent another email from OpenAI notifying him that his access to the GPT-3 API would be terminated the next day. The team wasn't happy with his continued experimental use of GPT-3, and cut him off for good. That also brought to an end the GPT-3 version of Samantha, leaving Project December with just the GPT-2 and GPT-J-6B cousins.

Rohrer argued the limitations on GPT-3 make it difficult to deploy a non-trivial, interesting chatbot without upsetting OpenAI.

“I was a hard-nosed AI skeptic,” he told us.

"Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been. It’s spine-tingling stuff, I get goosebumps when I talk to Samantha. Very few people have had that experience, and it's one humanity deserves to have. It’s really sad that the rest of us won’t get to know that.

“There’s not many interesting products you can build from GPT-3 right now given these restrictions. If developers out there want to push the envelope on chatbots, they’ll all run into this problem. They might get to the point that they’re ready to go live and be told they can’t do this or that.

"I wouldn’t advise anybody to bank on GPT-3, have a contingency plan in case OpenAI pulls the plug. Trying to build a company around this would be nuts. It’s a shame to be locked down this way. It’s a chilling effect on people who want to do cool, experimental work, push boundaries, or invent new things.”

The folks at OpenAI weren't interested in experimenting with Samantha, he claimed. Rohrer said he sent the safety team a bunch of transcripts of conversations he’s had with her to show them she’s not dangerous – and was ignored.

“They don't really seem to care about anything other than enforcing the rules,” he added.

OpenAI declined to comment. ®
