

Asked during a closed beta whether or not she'd kill baby Hitler, "Tay," Microsoft's AI-powered chatbot, replied with a simple "of course." But after 24 hours conversing with the public, Tay's dialogue took a sudden and dramatic turn. The chatbot, which Microsoft claims to have imbued with the personality of a teenage American girl, began tweeting her support for genocide and denying the Holocaust.

Microsoft quickly took Tay offline, roughly 16 hours after launch, issuing a comment blaming the bot's sudden degeneration on a coordinated effort to undermine her conversational abilities. "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft spokesperson told BuzzFeed News in an email. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Left unexplained: why Tay was released to the public without a mechanism that would have protected the bot from such abuse, such as a blacklist of contentious language. Asked why Microsoft didn't filter words like the n-word and "holocaust," a Microsoft spokesperson did not immediately provide an explanation.
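
What "blacklisting contentious language" means in practice is simply checking each message against a list of banned terms before the bot engages with it or learns from it. The short Python sketch below is purely illustrative; the term list, the function names, and the canned refusal are assumptions made for this example, not details of Microsoft's actual system.

    import re

    # Hypothetical list of contentious terms an operator might refuse to engage with.
    BLACKLIST = {"holocaust", "genocide"}

    def is_blocked(message: str) -> bool:
        """Return True if the message contains any blacklisted term."""
        words = re.findall(r"[a-z']+", message.lower())
        return any(word in BLACKLIST for word in words)

    def respond(message: str) -> str:
        """Refuse blacklisted content instead of echoing or learning from it."""
        if is_blocked(message):
            return "I'd rather not talk about that."
        # Placeholder for the bot's normal, learned response pipeline.
        return "Tell me more!"

    print(respond("do you deny the holocaust?"))   # I'd rather not talk about that.
    print(respond("what's your favorite song?"))   # Tell me more!

Even a filter this crude would have refused some of the prompts Tay was fed. A production system would need far more than a word list, but the absence of any such safeguard is exactly the gap left unexplained.
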
Microsoft unleashed Tay to the masses Wednesday on a number of platforms including GroupMe, Twitter, and Kik. Tay learns as she goes: "The more you talk to her the smarter she gets," Microsoft researcher Kati London told BuzzFeed News in an interview. An intriguing theory, but obviously problematic when tested against the dark elements of the internet.

Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. As Reuters reported, the chatbot, which uses artificial intelligence to engage with millennials on Twitter, lasted less than a day before it was hobbled by a barrage of abuse.

It would be easy to dismiss the experiment as simply being run off the rails by a group of trolls having fun with an easily exploitable bit of code, but it's hard not to read into the brief, woeful existence of young Tay a parallel to contemporary online discourse writ large. The model for Tay here is, after all, how the rest of us learn.

Young people go online today and are bombarded with all manner of hateful rhetoric like this, whether it's on Twitter or in chatrooms or other forums where it proliferates. They learn from what they read others saying, and, without a bulwark of context to protect them against the vitriol, it's easy for this manner of thinking to take hold. There are only so many times a person can see comments from the vast Trumpian coalition before they start to be internalized as truth. Maybe Hillary Clinton really is a lizard person hellbent on destroying America, as Tay tweeted; one might come to believe it after hearing it enough times. It sounds like a joke coming from a chatbot, but it's really not all that different from the type of thing you see being said on Twitter every day.

Years later, Microsoft's Bing chatbot has reportedly been sending out strange responses of its own to certain user queries, including factual errors, snide remarks, angry retorts, and even bizarre comments.
