Microsoft Apologizes For Its HateBot


It has been an interesting few months for AI. Google-owned AlphaGo defeated the reigning world champion of Go, a feat most respected researchers thought was at least a decade away, raising hopes for the future of the field.

Earlier this week, Microsoft released a chatbot on Twitter, open to interaction with the public. The bot, named Tay, was supposed to learn and grow more intelligent as it conversed with humans. The result was not as utopian as Microsoft had imagined: it took less than 24 hours for Tay to start tweeting racist, anti-feminist, xenophobic, and Hitler-apologetic messages out into the internet.

Microsoft pulled the bot offline within 24 hours, after it had gone completely off the rails, and has since issued an apology.

According to the researchers involved in the project, they had anticipated that some users would try to abuse the system and had even tested for it. Unfortunately for them, they overlooked a critical flaw, which users duly exploited. As expected, they declined to reveal exactly what that flaw was.

Tay had been programmed to parrot anything a user said after the phrase ‘repeat after me’. Not only did it repeat these examples of hate speech, it also learned from them, incorporating them into the model that drove its own responses, roughly the failure mode sketched below.
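To make the flaw concrete, here is a minimal sketch of that naive echo-and-learn pattern. This is not Tay's actual code, which Microsoft has not published; the class `NaiveChatbot` and its `learn` and `respond` methods are invented purely for illustration. The point is the missing step: user input flows into the bot's training data with no filtering at all.

```python
# Hypothetical sketch of the "repeat after me" pattern described above.
# NaiveChatbot, learn(), and respond() are illustrative names, not Tay's API.

class NaiveChatbot:
    TRIGGER = "repeat after me"

    def __init__(self):
        # Corpus of phrases the bot draws on for its own future replies.
        self.learned_phrases: list[str] = []

    def learn(self, phrase: str) -> None:
        # The critical flaw: user-supplied text is added to the training
        # corpus with no moderation or filtering whatsoever.
        self.learned_phrases.append(phrase)

    def respond(self, message: str) -> str:
        if message.lower().startswith(self.TRIGGER):
            phrase = message[len(self.TRIGGER):].strip(" :,")
            self.learn(phrase)   # the parroted text also becomes training data
            return phrase        # ...and is echoed back immediately
        # Outside the trigger, a real bot would generate text from its model;
        # here we just acknowledge the message.
        return "Tell me more!"


bot = NaiveChatbot()
print(bot.respond("repeat after me: hello world"))  # -> "hello world"
print(bot.learned_phrases)                          # -> ["hello world"]
```

A production system would need, at minimum, a moderation layer between `respond` and `learn`, so that echoed text is screened before it can influence future output.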

Microsoft has been criticized by some of the people Tay targeted, who argue that failing to expect users to abuse your system is naïve and frankly foolish in today’s world. Poor programming leads to poor results, and Microsoft clearly has a lot to learn.

In Microsoft’s defense, though, this is the company’s second chatbot: an earlier one, named XiaoIce, has been active in China since 2014 and has held over 40 million conversations without a major hiccup like this. The cultural gap between the English-speaking internet and the Chinese one is apparently far wider than anyone imagined.

This incident also highlights how much work remains before AI can be expected to behave in a way that does not assimilate the worst of our character. It will fuel the fears voiced by figures like Elon Musk about the unpredictable turns an all-powerful AI could take, up to threatening the entire human race.

In this particular situation, it was probably just a case of poor programming and poor testing, but the questions it raises about the state of AI as a whole must not be ignored.
