Let’s give the background of this event before we discuss the outcome.
On Wednesday, 23rd of March 2016, Microsoft unveiled its Artificial Intelligence Chatbot – Tay. Tay was created for 18- to 24- year-olds in the U.S. for entertainment purposes.
As Microsoft developed Tay, it planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. It stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once the company got comfortable with how Tay was interacting with users, it wanted to invite a broader group of people to engage with her.
According to Microsoft, “It’s through increased interaction where we expected to learn more and for the AI to get better and better.”
Then Microsoft exposed the innocent Chatbot, Tay, to Twitter. Within 24 hours of coming online, Tay became exposed and drawn to the dark side. A certain group of users attacked Tay, exploiting a previously unknown vulnerability. It started tweeting wildly inappropriate and reprehensible words and images. It was unable to recognize when it was making offensive or racist statements.
It started rather nicely..
Note that Tay is able to perform a number of tasks, like telling users jokes, or offering up a comment on a picture you send her, for example. But she’s also designed to personalize her interactions with users, while answering questions or even mirroring users’ statements back to them.
When Twitter users quickly realised that Tay would often repeat back racist tweets with her own commentary, they went in for the kill.
Here are some of the tweets thereafter. Note that Microsoft has since deleted the tweets so these are snips that were taken before the tweets were deleted.
After noting that Tay was being led astray, Microsoft had to pull the plug. It took Tay off Twitter.
This is what Microsoft said after the incident. “We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”
This was Tay’s last tweet.
https://twitter.com/TayandYou/status/712856578567839745
This brings up the troubling notion that though Artificial Intelligence is intended for good, there are always unexplored territories that can be exploited which will make AI dangerous. We must watch out and be careful.
What do you think? Do you think that there is something to fear from Artificial Intelligence?