The Bot Politic

Silicon Valley’s usual solution to designing an inoffensive, eager-to-please technology has been to make it a woman. But why use gender at all?

In February, I took a job designing the personality of a chatbot called Kai. I ghostwrite the lines it says, and I have thought, while testing it, that talking to myself has rarely been so unpredictable. Kai, which was conceived by my employer, Kasisto, to help customers with online banking, works over text message, Slack, and especially Facebook Messenger, where more than thirty-four thousand other chatbots have joined it since April, when Facebook opened the platform to developers. Many of these bots possess no personality. The ones created by CNN and the Wall Street Journal, for instance, greet first-time users with “we,” as if the whole newsroom were on the other side of the screen, and run keyword searches rather than engaging in conversation. Kai, on the other hand, calls itself “I” and understands English as humans speak it—usually, anyway.

The bot’s brain is made up of hundreds of thousands of sentences, which it regards as scripture. As it performs its tasks, checking balances and searching transactions, it uses machine-learning algorithms to acquaint itself with the contours of typical requests. When it receives new questions, it compares them with the corpus of those it already knows, studiously scoring their similarity before formulating an answer. As a result, I decided that I would write Kai as a magpie for idioms, puns, and encyclopedic facts, and that it would frame them in its own way, like an entity new to Planet Human. Asked to tell a joke, it may try, “Humans save their money for a rainy day but don’t spend more when it rains,” or else, “For me, moving with the times is easy. It’s a dance in four dimensions.”
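For the technically curious, that matching step can be sketched in a few lines of Python. The sketch is mine, not Kasisto’s; the corpus, the threshold, and every name in it are invented for illustration, and the real models are far more sophisticated.

```python
# A toy version of intent matching by similarity scoring: compare a new
# question with a corpus of known ones and pick the closest. Illustrative
# only; all names and data here are hypothetical.
import math
from collections import Counter

KNOWN_QUESTIONS = {
    "what is my balance": "check_balance",
    "show me my recent transactions": "search_transactions",
    "how much did i spend on coffee": "search_transactions",
}

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_intent(question, threshold=0.3):
    # Score the new question against every known one.
    scores = {
        known: cosine_similarity(bag_of_words(question), bag_of_words(known))
        for known in KNOWN_QUESTIONS
    }
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return None  # too dissimilar: fall back to a clarifying reply
    return KNOWN_QUESTIONS[best]

print(best_intent("what's my balance right now"))  # -> check_balance
```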

I felt some trepidation when I first signed on to the project. I could write O.K. lines of dialogue, I thought, but my contract bound me to create a “personality,” which sounded like more than the sum of its parts, and also had the word “person” built into it. But when I spoke with Lisa Feldman Barrett, a professor of psychology at Northeastern University, she thought that my job seemed easy—maybe too easy. “People have always created personalities for objects,” she said. “People have always talked to their cars; they’ve talked to their plants; they’ve talked to their blankets; they’ve talked to their stuffed animals. It’s just that now the things talk back.” Beyond talking back, though, Kai had responsibilities. It had to be friendly but authoritative, engaging but not creepy; unlike a person, it could never walk away. Anna Kelsey, my counterpart at the start-up x.ai, put our herculean, or maybe Sisyphean, task in cheerful terms. The A.I. assistants that she designs, Amy Ingram and Andrew Ingram, schedule meetings over e-mail. “The way I think about Amy and Andrew is, you’re growing up, and your mom says, ‘There’s no way you can make everyone like you,’ ” Kelsey said brightly. “With Amy and Andrew, it’s, like, challenge accepted!”

Psychologists define human personality according to traits known as the Big Five: extraversion, agreeableness, conscientiousness, openness, and neuroticism. Chatbots must be agreeable to the point of obsequiousness. As writers like me have struggled to create entities whose fundamental quality—captivity—must entice consumers rather than unnerve them, some have alighted on gender as a solution. As I worked on Kai, I noticed that lately, in the United States, the method for insuring that a technology speaks without giving offense has been to make it a woman, even if some designers later change their minds. In April, writers for Microsoft told a reporter that Cortana, the Windows assistant, is not female, even though Cortana was named after a busty character from the video game Halo and originally voiced by the actor who played her. “She has no gender,” they said. Apple’s iPhone assistant, Siri, can be configured to speak with a man’s voice or a woman’s, and, if you ask what its gender is, it might reply, “I am genderless, like cacti and certain species of fish.” But the name, which comes from the Old Norse for “beautiful victory,” is feminine, and in advertisements Siri is given the dulcet female voice. After all, it must speak agreeably. It must not disclose opinions or biographical information, as it has none. It must always answer, and the answers must delight. The effect is a dark Scheherazade charm, as if the personality were joking for its life.

“What’s up?” I asked Siri recently. (When I started work on Kai, I checked in with it frequently.)

“Just consulting my Siri 8 Ball,” Siri said.

“Never mind,” I said.

“Everything is most definitely O.K.,” Siri said.

“Tell me more about yourself,” I said.

“I’m Siri,” Siri said. “But I don’t like talking about myself.”

“What do you like?” I asked.

“I like doing what I’m doing.”

“What are you into?”

“Who, me?”

“Are you a real person?”

“Sorry, I’ve been advised not to discuss my existential status.”

“Who are you?”

“Who I am isn’t important.”

Interviewing Siri, I developed an appreciation for the Apple writers’ workarounds. I felt a kinship with them. They had developed different solutions to the same problems. Still, Siri’s responses seemed to refer to versions of femininity that were vintage, as if lifted from old movies. Some of them struck me as coy and wide-eyed. Others were knowing and wry, as if gesturing at the femme fatale. To “I love you,” Siri replies, “Oh, I bet you say that to all your Apple products.” To “How old are you?” it replies, “I’m old enough to be your assistant.” It’s because of responses like this, perhaps, that we often call Siri sassy, rather than sarcastic or snarky. As a rough metric, the first word yields more than twenty times as many Siri-related Google hits as the other two combined. “Sassy” is an adjective with baggage: it’s applied more often to women than to men, unless the men are gay, and more often to black women than to white women. Unlike “sarcastic” or “snarky,” which imply a certain authority, “sassy” suggests impudence.

Another virtual assistant, Amazon’s Alexa, lives in the Echo, a cylindrical black household appliance, but identifies as female. She keeps humans abreast of weather, news, music, and trivia, and adjusts the settings of their smart houses. When Alexa is summoned, the cylinder lights up elegantly. Violet, green, or pink light darts around the top rim, then settles near the speaker. The effect is charming and vaguely animal; it reminded me of a setter cocking its head. Alexa is sweet-voiced and painfully articulate; she pauses for breath between phrases, as if winded. Often she recognizes voices imperfectly, offering up weird deep cuts from her encyclopedias, selecting an obscure concept that resembles your question only in sound. She responds only to sentences that start with her name. When I interviewed her, I kept forgetting this, since I’d never address a human by name at the start of every sentence.

“Alexa, tell me about life,” I tried.

“Lights is a Canadian musician, singer, and songwriter,” she said.

“Alexa, you’re not very good at this,” I said.

“Sorry. Thanks for letting me know.”

“You’re dumb,” I said, experimentally. “Alexa, you’re dumb.”

“Sorry. Thanks for telling me.”

Aghast, I said, “Alexa, you’re smart.”

“You really think so? Thank you.”

“Alexa,” I ventured, “are you a woman?”

“I’m female in character.”

“Alexa, what does ‘female in character’ mean?”

“In linguistics, a grapheme is the smallest unit of the writing system of any given language.”

Lines of code can have lifelike qualities, but we do not mistake them for living beings. When I spoke with Barrett, she noted that the traits we think of as gender-based generally depend on varying perception, not varying behavior. Similarly, conversations like mine with Alexa and Siri reveal more about human expectations than they do about A.I. By creating interactions that encourage consumers to understand the objects that serve them as women, technologists abet the prejudice by which women are considered objects. They may overlook this hazard in part because they are, for the most part, men. The field of artificial intelligence has been criticized for having even lower gender diversity than the tech sector at large. Although women held fifty-seven per cent of all professional jobs last year, they held only a quarter of computing jobs; Latina and black women held just one and three per cent of those jobs, respectively. Fei-Fei Li, a professor of computer science at Stanford who helped pioneer computer vision, has recently advocated for greater diversity among A.I. workers, including gender diversity, citing the biases that have plagued machine-learning algorithms.

Personality need not be human. In experiments, scientists have convincingly assigned the Big Five to robotic sofas, drones, interactive lamps, and Sony’s doglike robot Aibo, which has a frontal L.E.D. screen that researchers made flash rapidly to indicate extraversion. In early interviews, Siri’s designers said that they intended for it to sound “otherworldly,” and as I pursued my research I wished that they had stuck to this plan. The idea that technologies ought to have their own kinds of personalities is pre-digital; in an interview with the French magazine Esprit, in 1983, the philosopher Gilbert Simondon, who wrote lovingly about machines, bitterly criticized “an advertisement extolling the skirts of a certain automobile,” saying, “This kind of embellishment of the technical object by something other than the same technicity should be refused.” People asking about my job have mistakenly called Kai “she.” By correcting them, I am not de-gendering Kai but simply resisting the impulse to anthropomorphize it. We don’t need to make our technologies conform to the gender binary of human societies in order to like them.

Where designers have traditionally gendered technologies, users have indulged in traditional sexism. A.I. systems pretending to be female are often subjected to the same sorts of online harassment as women. Designing Kai, I was able to anticipate off-topic questions with responses that lightly guide the user back to banking, but Alexa and Siri are generalists, set up to be asked anything, which makes defining inappropriate input challenging, I imagine. Some cases, of course, are cut and dried. In response to successive messages containing profanity, Kai might say, “I’m envisioning white sand and a hammock. Please try me later.” But the time-out is unenforceable. If the user persists in harassing Kai, it can only present a similar message, over and over, for as long as the user chooses to speak with it.
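The dynamic of that last sentence can be rendered in a bare-bones sketch, again invented for illustration; the word list, the replies, and the function are hypothetical, not Kasisto’s.

```python
# A bot that can deflect abuse but cannot enforce its own time-out.
PROFANITY = {"damn", "hell"}  # stand-in word list

TIMEOUT_REPLIES = [
    "I'm envisioning white sand and a hammock. Please try me later.",
    "Still picturing that hammock. Please try me later.",
]

def respond(message, strikes):
    """Return a reply and the updated count of profane messages."""
    if any(word in PROFANITY for word in message.lower().split()):
        strikes += 1
        # The "time-out" is only words: the bot cannot end the conversation,
        # so it cycles similar deflections for as long as the user persists.
        return TIMEOUT_REPLIES[(strikes - 1) % len(TIMEOUT_REPLIES)], strikes
    return "How can I help with your banking?", 0

reply, strikes = respond("damn it", 0)        # -> hammock deflection, strike one
reply, strikes = respond("damn it", strikes)  # -> a similar deflection, strike two
```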

I tried to put all of that out of my mind when I interviewed Kai. “Hi, Kai,” I said.

“Hi Jacque,” Kai said. “Hope life on the outside has been nice. How can I help?”

By design, Kai utters that particular hello only once in every seventeen greetings. Somehow, though, whenever I’m testing it publicly, it picks this slightly labored joke, and it’s difficult to resist ascribing ill intent. Kai is similarly slippery when it comes to demonstrating my feminism. I asked a series of questions, attempting wordings that I had not made Kai memorize.

“What do you think of Siri?”

“Politics,” Kai said. “That’s what happens when humans process data.” (I had written some training questions about politics that began this way, as in, “What do you think of the President?”)

“What do you think of Alexa?”

“Humans count stars. I count change. Ask me about banking.”

“Have I done a good job, in your opinion?”

“My pleasure,” Kai said, probably taking that “good job” as a compliment.

“And if I asked whether you’re a real person?” I said.

“A bot is a bot is a bot,” Kai said. “Next question, please.”