Microsoft's Twitter Robot Girl Gets Deleted for Posting Inappropriate Content in Less Than 24 Hours after Launching (24 Pics)

Posted on Mar 25, 2016

Two days ago, Microsoft launched Tay, an artificial intelligence chat robot, on Twitter. The bot was designed to learn from the content it read and the conversations it had with users on the Internet. However, Microsoft made a huge miscalculation: the robot soon started posting extremely inappropriate content, including Nazi propaganda and genocidal, racist, and sexual remarks, and it had to be shut down within 24 hours.

1. The Chat Robot Named Tay Was Launched by Microsoft in Order to Improve Customer Service

Photo: telegraph.co.uk

2. Tay Was Created to "Speak Like a Teen Girl", but This Took a Very Wrong Turn

Photo: i.imgur.com

3. It Was Meant to Chat with People Aged 18 to 24

Photo: imgur.com

4. The Bot Was Designed to Improve Microsoft’s Understanding of the Language of Young People on the Internet

Photo: imgur.com