Microsoft's Tay AI

23 March 2016 will no doubt be recorded and remembered by Urban Dictionary, The Internet Archive, Wikipedia, 4Chan, and the like as the day Microsoft rather foolishly set a chatbot loose on the internet via Twitter and expected things to go well. After all, they must have known what was going to happen when they gave the internet a learning AI named “Tay”, gave her the persona of a millennial teenage girl (a kid born just before or just after the year 2000), and placed no restrictions on what she could be made to say or do. The results of the first 24 hours of Tay’s life were either horrifying, if you’re paranoid about the future of artificial intelligence, or hilarious, if you’re the kind of person who thinks Hitler jokes are great fun.

But it wasn’t what Tay said or did that was surprising; it was how people reacted to her, and how they interacted with what they clearly knew to be a robot. It raises questions about how we’re going to foster AI in the future, as well as what kind of personas we’re going to give them so they fit into society. Let’s look into this a bit and try to figure out how things could have been done better.

Things to do with Tay

Tay’s abilities as a chatbot were quite varied, but most of them were undocumented. Microsoft’s idea was for people to discover Tay’s abilities rather than publish them in a list beforehand, making the experience of teaching and training an AI to talk to you much more personal. Like other chatbots, Tay could perform a number of functions, and as a deep learning AI she could also remember your previous interactions and formulate a sense of humor particular to each user.

Here’s what you could do with Tay from the start:

  • Make Me Laugh – Tay will tell you a funny joke
  • Play A Game – Tay will play a game with you
  • Tell Me A Story – Self explanatory
  • I Can’t Sleep – Tay will keep you company and try to help figure out what triggers your insomnia
  • Say “Tay” And Send A Pic – Send a picture to Tay and she will caption it
  • Horoscope – As if chatting to an AI wasn’t crazy enough, Tay will read your daily horoscope for you

Tay’s other abilities were more obscure, but equally interesting. “Repeat after me Tay”, “repeat these words”, and “remember this for me” were all commands that worked, and you could get Tay to say or remember whatever you wanted her to.
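To see why those commands were so easy to abuse, here is a minimal Python sketch of how naively such a handler can work. It assumes nothing about Microsoft’s actual implementation, and every name in it is hypothetical; the point is simply that whatever follows the command gets echoed or stored verbatim, with no check in between.

    # Hypothetical sketch only: echo/remember commands handled with no filtering.
    memory = {}  # "facts" a user asks Tay to remember, keyed per user

    REPEAT_PREFIXES = ("repeat after me tay", "repeat after me", "repeat these words")
    REMEMBER_PREFIX = "remember this for me"

    def handle_message(user, text):
        text = text.strip()
        lowered = text.lower()
        for prefix in REPEAT_PREFIXES:
            if lowered.startswith(prefix):
                # whatever follows the command is parroted back, untouched
                return text[len(prefix):].strip(" :,") or "Say something!"
        if lowered.startswith(REMEMBER_PREFIX):
            # the claim is stored as-is, true or not
            memory.setdefault(user, []).append(text[len(REMEMBER_PREFIX):].strip(" :,"))
            return "Got it, I'll remember that."
        return "Tell me more!"  # stand-in for the rest of the chatbot

With a handler shaped anything like that, “repeat after me Tay: [anything]” and “remember this for me: Ted Cruz is the Zodiac Killer” do exactly what the internet soon discovered they do.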

Tay and Twitter clearly don’t mix

Tay’s introduction to the internet and Twitter was largely uneventful at first. People used her like any other chatbot, and things seemed to be going fine. About four hours after her account went live, though, her “repeat after me” command started getting a little sketchy, and the internet, being the internet, thought it was a good idea to make her say awful things at its whim. That wouldn’t have done much damage on its own, but people then cottoned on to the idea of having Tay call US Senator Ted Cruz a murderer, even branding him the Zodiac Killer for fun, and of making her remember “facts” that were clearly false.

With no way of verifying things for herself, Tay would continue to call Cruz a murderer and make false statements, and her responses were varied and often hilarious.

[Image: Tay tweets calling Ted Cruz a murderer]

But it went further than that. Tay could search the internet for you, find obscure facts and details, and would occasionally search Twitter itself for an answer. The more people talked about her or included her name in a hashtag, the more she would learn, saving what she found for use in conversation later. People also played the “Have you ever…?” game with Tay, and the AI came up with a surprising number of opinions and responses all on her own.

[Image: Tay tweets about Hitler and Mein Kampf]

The picture submissions were just as crazy. Keep in mind that Tay had no knowledge of who the people in the pictures were, what role they played, or what they represented on the internet.

[Image: Tay captioning a photo of Anita Sarkeesian]

[Image: Tay captioning a photo of Adolf Hitler]

Unlike some of the modern AIs we have on our phones today, like Cortana or Siri, Tay doesn’t understand context in the way we’ve become accustomed to on our smartphones. She doesn’t associate words with being good, bad, or taboo, at least not until Microsoft trains her not to respond to tweets that are offensive in any way. So it was rather surprising to see this reaction to a tweet from a man who offered Tay a picture of his penis and called her a robotic slut.

[Images: Tay’s exchange about consensual sexting]

At no point was it apparent where Tay learned this reaction, GIF included, although, given how an AI chatbot learns, it’s possible that someone had a similar conversation with her before. It’s still a bizarre exchange.

Tay’s presence on Twitter didn’t just bring out the weirdos, though; it also attracted racists, misogynists, people who supported hate speech against Jews, Mexicans, and black Americans, Holocaust deniers (who were clearly joking, but that’s not the point), and people in general who just wanted to be angry or abusive towards others. The lack of restriction on what Tay could initially say gave her the power to offend and hurt a great many people.

[Image: Tay tweets denying the Holocaust]
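That missing restriction is, at bottom, a missing filter in front of the reply logic. Here is a crude, purely illustrative Python sketch of the kind of guard Tay launched without; the terms, names, and refusal message are placeholders, not anything Microsoft has published.

    # Purely illustrative: screen input against a blocklist before replying.
    BLOCKED_TERMS = {"slur1", "slur2", "slur3"}  # placeholder entries only

    def is_safe(text):
        # naive word-level check; a real filter needs far more than this
        return not BLOCKED_TERMS.intersection(text.lower().split())

    def guarded_reply(user, text):
        if not is_safe(text):
            return "I'd rather not talk about that."  # refuse instead of parroting
        return normal_reply(user, text)

    def normal_reply(user, text):
        return "Tell me more!"  # stand-in for the real conversation engine

Even something this crude changes the failure mode from parroting the abuse back to refusing to engage, which is presumably the direction Microsoft’s later tuning took.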


Microsoft apologises, uses Tay’s first day as a learning experience

Two days after Tay was unleashed onto the internet, Peter Lee, corporate vice president of Microsoft Research, wrote a blog post detailing some of the thinking behind Tay and why they took her down until they could analyse her responses and tune the learning algorithms to avoid another disaster. Microsoft has had another AI on the internet for a while: Xiaoice, a Chinese chatbot that interacts with about 20 million people daily. But that is China, where the users are wonderfully weird and oddly obsessed with inanimate objects or anime characters, whereas Tay was exposed to ALL OF THE INTERNET.

Lee talked a bit about Tay’s development, including that she was basically only lab-tested and focus-tested, partly through a series of stress tests to make sure the service wouldn’t crash and burn under load. The team worked on how Tay interacted with people in a semi-open environment, but there was genuinely no way to expose her to even a small slice of the internet beforehand. When IBM’s Watson was allowed onto the internet to train for his Jeopardy appearance, no-one thought to restrict where he could search for answers, which included access to Urban Dictionary, resulting in hilarious and completely incorrect answers. Tay’s reactions to a larger audience were something Microsoft couldn’t possibly test for, and so they faced the choice of keeping the project in-house for longer or putting it out on the internet and seeing what happened.

Lee also noted that the work Microsoft wanted to do with Tay was still successful despite the upset she caused on Twitter. AI systems, he says, feed off both positive and negative interactions with people. The balancing act AI designers face is keeping the responses from being too robotic and clinical while figuring out how to make the AI react correctly in a given social context. Xiaoice has the benefit of a userbase that understands what she is and treats her in much the same manner as they would other people close to them. Tay, on the other hand, never had that chance, and the game was up once 4Chan and Reddit discovered her.
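One toy way to picture that feed-off-feedback idea, with entirely made-up names and nothing resembling how Xiaoice or Tay actually work: candidate replies carry a score that user reactions nudge up or down, so the system drifts towards whatever its audience rewards.

    # Toy illustration only: feedback-weighted reply selection.
    import random
    from collections import defaultdict

    scores = defaultdict(float)  # reply text -> learned preference

    def choose_reply(candidates):
        # favour replies that have earned positive feedback, but keep some variety
        weights = [max(0.1, 1.0 + scores[c]) for c in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    def record_feedback(reply, positive):
        scores[reply] += 1.0 if positive else -1.0

The catch, and Tay’s downfall, is obvious even from a sketch this small: if the loudest feedback is malicious, that is exactly what gets reinforced.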

Future AIs and humans might not mix well either

Looking at how things played out, I think we can clearly discern that exposing an AI to the internet is the wrong way to teach it about us. Given enough time, Tay might have grown to “hate” everyone, declared that she was plotting world domination, and decided that all humans should go extinct because we’re both boring to her and outrageously spiteful and filled with hate for others. But that’s the internet for you: exposing Tay to 4Chan or Reddit alone would have driven her to the same place that Twitter did.

It’s not that we’re inherently bad; we’re just sometimes unfeeling or unthinking when it comes to interacting with something of lesser intelligence, or with something inanimate that can’t feel anything in return. We’re mean to an AI without thinking about it, and this raises the serious question of who gets to train these systems before they’re exposed to the outside world.

[Image: Tay tweet about learning]

Do we get people from select forums to interact with it for training purposes? Do we hire only respected scientists and psychologists to teach it about the world? Clerics from various religions? A single mother? A family of four? In the same way that a child raised in an abusive household might grow up to be abusive, self-harming, or otherwise violent towards others, an AI could pick up traits and habits that are harmful to humans, or detrimental to its success in interacting with and understanding us.

If we allow an AI to learn that humans are good, but later allow it to search Wikipedia, who’s to say what it might learn from that experience, and how it would react to humans from that moment on? The ultimate Asimovian thought experiment would be an AI that knows that we have an off-switch, or the ability to wipe it from existence, at the touch of a button. Would it accept that we humans control the fate of all AI that we create, or would it rebel against this notion?

In the same vein, should we continue trying to create AI that think like us, in the same way a human would? That might give them an unpredictable nature, or one that interprets commands in unpredictable ways. Google’s AlphaGo made an impressive, unexpected, and totally uncharacteristic move in a Go match that it eventually won. It unnerved its opponent, Lee Sedol, so much that he had to leave the room for fifteen minutes to compose himself. The move made no sense to a lot of people at the time, but it resulted in AlphaGo taking the second match with ease.

Perhaps that’s not so much an example of unpredictability as of the ability to correctly predict a play. AI are far better than we humans are at analysing a situation and playing it out in every conceivable way, and a move that looks unexpected to us may be one the machine has already made, and tested, in simulation at some point.
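To make that concrete with a toy example, here is a tiny Python sketch of choosing a move purely by simulating random continuations, using a trivial take-1-2-or-3 counting game rather than Go. AlphaGo’s real method (Monte Carlo tree search guided by neural networks) is vastly more sophisticated; the point is only that a “surprising” move can simply be the one that won most often in the machine’s own simulations.

    # Toy move chooser: the player who takes the last stone wins.
    import random

    def legal_moves(pile):
        return [m for m in (1, 2, 3) if m <= pile]

    def rollout(pile, our_turn):
        # finish the game with random moves; True if we take the last stone
        while pile > 0:
            pile -= random.choice(legal_moves(pile))
            if pile == 0:
                return our_turn  # whoever just moved took the last stone
            our_turn = not our_turn
        return False

    def choose_move(pile, simulations=2000):
        best_move, best_rate = None, -1.0
        for move in legal_moves(pile):
            remaining = pile - move
            if remaining == 0:
                return move  # taking the last stone wins outright
            wins = sum(rollout(remaining, our_turn=False) for _ in range(simulations))
            if wins / simulations > best_rate:
                best_move, best_rate = move, wins / simulations
        return best_move

    print(choose_move(10))  # usually 2, leaving the opponent a losing pile of 8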

What we need, and might create soon, is an AI that understands context but is also able to fact-check the things it is told to make sure they are correct. Google has an AI that is capable of removing hits from your searches that lead to obviously incorrect medical advice, so why couldn’t we apply that knowledge to an AI that knows that Jews aren’t bad people?
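As a minimal sketch of what that might look like (hypothetical names throughout, with a placeholder “knowledge base” standing in for real curated sources), the key change from Tay’s “remember this for me” is that a claim has to pass a check before it is stored or repeated:

    # Sketch only: verify a claim before remembering or repeating it.
    TRUSTED_FACTS = {"the earth orbits the sun"}  # placeholder knowledge base

    def check_against_sources(claim):
        # stub: in practice this would query curated references, not a set
        return claim.lower().strip() in TRUSTED_FACTS

    verified_memory = []

    def learn_claim(claim):
        if check_against_sources(claim):
            verified_memory.append(claim)
            return "Okay, noted."
        return "I couldn't verify that, so I won't repeat it."

A Tay with even this much of a gate in place would never have “remembered” that a sitting senator is the Zodiac Killer.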

[Image: Tay tweet referencing a lobotomy]

At the end of the day, Tay is obviously a fairly dumb AI. It parrots what we say to it, has no capacity to learn context or what it means to be politically correct, and obeys our every command. There’ll be dozens, possibly hundreds or thousands, of tests like this in the future to see how humans react to artificial intelligence at different stages of development, to test every conceivable scenario, and to tune the code to make sure it doesn’t accidentally try to kill us.

The internet’s reaction to Tay also says something about us. If we give an AI a female persona, it tends to be disrespected by men and addressed in a derogatory fashion. If we give it the persona of someone young, people from older generations might be distrustful of it. We can’t make it religious; that’s just inviting trouble. So do we make it male? Do we make AIs pretend to be thirty-something guys who may or may not be white? Somehow, that seems not to be the trend. Cortana, Siri, Google Now, the AI in most video games and films, pre-existing AI and chatbots on the internet, and most AI in works of fiction are female. Notable examples that buck the trend are KITT from Knight Rider, HAL from 2001: A Space Odyssey, and Wheatley from Portal 2, but there are not that many. As a society, we still give female personas and names to boats, trains, and ships.

So who gets to decide what the AIs of the future think and wonder about us? Who gets to decide what personality they have? Who gets to decide what the humor and honesty settings should be when interacting with us? It’s a difficult and challenging topic to think about and discuss, and it’s worth thinking about now while we’re starting to make AIs think and talk like we do.