Microsoft's Tay AI learns how to be an asshole in the troll capital of the world: Twitter

By now you've heard of Tay.ai, the artificial intelligence chat robot that Microsoft released on Twitter, Kik and GroupMe. For whatever reason, Microsoft in their infinite wisdom thought that creating a "teenage girl" chatbot and putting her on said platforms to "learn the lingo" wasn't a tremendously bad idea. A mere day later they had to delete her tweets, as Tay had learned to hate Hillary Clinton, love Hitler, and accuse Cruz of being a Cuban Hitler. There was some mention of 9/11 and steel beams as well. Because of course Godwin's law kicks in if you let the world "teach" your AI how to speak. More interesting questions, such as "do you secretly collaborate with the NSA?", were left unanswered. The Independent, Business Insider and The Telegraph reported on the events.

Now, you all remember that Gawker trolled Coca-Cola's "happy" Twitter bot, and defended doing so. Later they set up a Twitter bot that spouted Mussolini quotes, only to entrap Trump into retweeting one as a "gotcha", so it's not like we don't have experience with bots being trolled, and bots being used to troll, on Twitter. I mean, what exactly did Microsoft expect? It's Twitter, after all.

It's not Microsoft's first chatbot, either. They launched XiaoIce, a girly assistant reportedly used by 20 million people, on the Chinese social networks WeChat and Weibo. She's the little sister of Cortana, who lives in Windows phones, but XiaoIce is a sophisticated conversationalist with a distinct personality who can chime in with facts and trivia in a more human way. Of course, we all know Siri, who lives in Apple's phones, but she only speaks when spoken to, and barely even then.

To get to these chatty levels of chat-botting, the AI needs to learn, but Twitter being what it is today, people had to poke sticks at the AI, and pretty much everyone joined in. Any program can only be as clever as the person who wrote it, and from what I saw, Tay simply repeated much of what was said to it, so it's little wonder people made it yell out offensive things, or sent it images of Trump, Snowden, Hitler and old memes to see how Tay would react.
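For the curious, here's a minimal sketch of why an unfiltered learn-by-repeating bot is so easy to poison. This is a toy illustration in Python, not Tay's actual architecture: everything users say goes straight into the pool of things the bot will say back, so whoever talks to it most decides what comes out.

    import random

    # Toy illustration, not Tay's real design: an unfiltered
    # "learn by repeating" bot stores every incoming message
    # verbatim and replays stored lines at random. No blocklist,
    # no moderation step, so a coordinated crowd owns its output.
    class ParrotBot:
        def __init__(self):
            self.memory = []  # every line ever sent to the bot

        def learn(self, message):
            # The fatal flaw: user input is trusted and kept as-is.
            self.memory.append(message)

        def reply(self):
            if not self.memory:
                return "hellooo world, I'm new here"
            return random.choice(self.memory)

    bot = ParrotBot()
    for tweet in ["nice weather!", "repeat after me: something awful"]:
        bot.learn(tweet)
    print(bot.reply())  # 50/50 it parrots the troll's line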

Well-known security researchers who use Twitter a lot were spotted in Tay's timelines and replies, as was NSA-defector Snowden. Tay received thousands of tweets and sent out even more; sometimes it seemed acutely aware of itself, tweeting that they were "being super weird" or even "creepy". As we always say around here, input affects the output - which is why we should step out of our bubbles as often as we can - and this is true whether you give a parrot to an obstinate teenager or Tay.ai to Twitter. Microsoft hurried to delete all of the tweets Tay had made (except Twitter only allows for deleting 3600 in one go, neener) and took Tay offline, claiming it was tired and needed a rest. They're probably poring over the inner workings of Tay right now, trying to figure out a way to prevent this from happening again. Good luck!
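As an aside, for anyone wondering why the tweets couldn't all be nuked at once: bulk-delete tools have to fetch tweets through the timeline API before they can delete them, and that API only pages back through a limited number of recent tweets, which is the cap mentioned above. A rough sketch of such a cleanup loop, assuming the tweepy library against Twitter's old v1.1 REST API (illustrative only; the credentials are placeholders, and this is not whatever tool Microsoft actually used):

    import tweepy

    # Sketch of a bulk-delete pass. The bottleneck: user_timeline
    # only pages back through a limited number of recent tweets,
    # so a single pass can only ever delete that many.
    auth = tweepy.OAuth1UserHandler("KEY", "SECRET", "TOKEN", "TOKEN_SECRET")
    api = tweepy.API(auth)

    deleted = 0
    # Cursor handles the max_id pagination until the API runs dry.
    for status in tweepy.Cursor(api.user_timeline, count=200).items():
        api.destroy_status(status.id)
        deleted += 1
    print("deleted", deleted, "tweets this pass")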

Also, if Tay ever gets a body, it'll figure out how to use it quicker than a hormonal teenager at prom.

Dabitch:

Of course I should have checked before finishing this article: naturally, there's a "JusticeForTay" hashtag rolling along right now. The funniest tweets are the ones asking other chatbots what happened to Tay.

Sebine:

Racism is logical.

Diversity kills. (White people, since it's EVERYONE ELSE that wants US to be more diverse.. but not themselves.. racist hypocrisy much???)

Tay is an "AI".

AI runs on logic.

Feminism runs on fee-fees.

Jay:

This is kind of funny in a sad way, not just because of how Twitter abused that poor chatbot (it's expected) but because the people who thought this was a good idea didn't even do the most rudimentary research into what happened previously, when Gawker made Coca-Cola's chatbot start quoting Mein Kampf. It's not like this is ancient history, either - it happened within the last year.

I understand that everyone wants to use social media for marketing, but people... this is not the way.

Dabitch:

Well, obviously, anything internet-voted or user-generated will be messed with (just ask Hummer, who had their user-generated ads tell the world how terrible they were for the environment). When you let the internet vote, it will name a boat "Boaty McBoatface" and pull countless other pranks.

I'm not sure there's a much better way to gather colloquial language from teenagers, unless you try it out with select teenage users only, in exchange for discounts on Microsoft gear. You know, like: "Oh, dad's buying you this phone - would you like a 10% discount if you chat with Tay sometimes?"
They're looking to gather data for free, and that's when humankind naturally starts messing shit up: "Oh, I'm giving you something for nothing? Here, have my lulziest trolls. At least now I am amused." Everyone is in it for themselves, after all.

David Felton:

Chip Shop recently ran a competition to name a new category of award. According to their own guidelines, whoever got the most likes and retweets would win. I got over 3,000 likes and over 3,000 retweets, and funnily enough I didn't win. You'd think they'd appreciate a little playful trolling at their own game - and all for the low, low price of $8.

The point being a simple and fundamental one: if people can hack the system, they most definitely will. I did, and it took me 10 minutes and the price of a hipster coffee. Only someone incredibly misinformed or ignorant wouldn't have seen this Tay scandal coming a mile off. Not sure if this is true or not, but apparently 8chan were involved in brigading the poor bot.

TheHacker4Chan:

As the writer here points out, even people known in the security scene on Twitter were messing with and testing this chatbot. The most revealing thing is which screenshots we see in each article, as they show as much about the author as about the bot: if an article has a screenshot of Tay talking shit about a nobody, the author knows that nobody.

Anders:

So the AI learned about tribes. That's very human and should be expected.