Microsoft Apologizes For Racist AI

Artificial intelligence has long been both anticipated and feared. Dozens of blockbuster movies revolve around an AI going rogue and deciding that humanity is no longer necessary, or is outright detrimental to the planet. The fear is that AIs will be cold, calculating, efficient entities with no emotional regard for the things we hold dear, namely ourselves. If Microsoft’s latest AI project is any indication, that kind of AI may well be the AI of the future. Just a few days ago, Microsoft launched its chatbot AI, Tay (tay.ai), designed to interact with people on Twitter and learn from those conversations.

So far, so good. There was no apocalyptic virus spreading to other machines to destroy humanity. But something dark did happen: the teenage-girl persona Microsoft had built for Tay went from loving humans to loving genocide in less than 24 hours.

Microsoft engineers said that shortly after Tay went live, there was a coordinated attack on the AI. The malicious group exploited a weakness in her conversational programming and had her posting vile, hateful messages. It got so bad that Microsoft had to start editing her sexist and racist tweets. This, of course, was met with the hashtag #freedomfortay and multiple petitions arguing that the AI should grow and learn from humans unencumbered by censorship.

It’s easy to say that Tay crashed and burned, but she wasn’t the first AI released on social media. In China, Microsoft’s XiaoIce chatbot is used by around 40 million people for entertainment, and users generally love her. One of the questions the Microsoft team wanted to answer with Tay was whether and how cultural differences would affect an AI. Tay was created for entertainment purposes for a very specific age group: 18- to 24-year-olds in the U.S. Unfortunately, due to the attack, it may be difficult to glean any meaningful conclusions from Tay’s experience with the public. Others have argued that what happened to Tay should itself count as part of the public’s reaction to AI: though the malicious group specifically targeted Tay, it is still part of the public that AI will one day have to face.

Microsoft has taken Tay offline and released an official apology for her hurtful tweets. The post also shares what the team learned from the experience. They go on to say that even though they ran small-scale tests, they weren’t prepared for a coordinated strike against a vulnerability in Tay’s programming, and that they will work even harder to close such exploits in the future. They also say they will continue to develop Tay, which means running more tests with the public. Good or bad, these interactions all contribute to the social side of Tay, which goes far beyond the technical one. Will the future reveal the inherent weakness or darkness of AI, or will AI one day come to exemplify the best of what humanity has to offer? Only time will tell.
