Tay, the neo-Nazi millennial chatbot, gets autopsied

A user told Tay to tweet Trump propaganda; she did (though the tweet has now been deleted).

Microsoft has apologized for the conduct of its racist, abusive machine learning chatbot, Tay. The bot, which was supposed to mimic conversation with a 19-year-old woman over Twitter, Kik, and GroupMe, was turned off less than 24 hours after going online because she started promoting Nazi ideology and harassing other Twitter users.

The company appears to have been caught off-guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014. XiaoIce has had more than 40 million conversations apparently without major incident. Microsoft wanted to see if it could achieve similar success in a different cultural environment, and so Tay was born.

Unfortunately, the Tay experience was rather different. Although many early interactions were harmless, users quickly capitalized on quirks in the bot's behavior. One of its capabilities was that it could be directed to repeat whatever was said to it. This was trivially exploited to put words into the bot's mouth, and it was used to promote Nazism and attack (mostly female) users on Twitter.

Ars Technica
