Microsoft: Sorry about our racist, abusive Twitter bot

Tay learned from - and then became - the worst of the Internet

Recently, Google showcased its impressive artificial intelligence by beating the world’s best Go player. This week, Microsoft tried to highlight its own AI skills by unleashing Tay, a millennial-aping Twitter chatbot.

And things went very much awry. Designed to act like a 19-year-old American woman, Tay learned from the users she interacted with – and the trolls arrived en masse. Before long, Tay was spouting horribly racist, sexist, and aggressively abusive statements. In less than a day, Tay went offline.

She’ll stay that way for a while, too. Microsoft today apologised for the experiment, admitting it wasn’t prepared for “malicious intent that conflicts with our principles and values,” writes Peter Lee, corporate vice president of Microsoft Research. Microsoft has been on social media before… right?

“In the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee writes. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

Microsoft has a live chatbot in China called XiaoIce that the company says is used by 40 million people, but Tay’s short stay on Twitter clearly went very differently. Microsoft says it will learn from the experience – and address that particular oversight – as it continues to build AI tools for the future.

"The challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes," Lee continues. "To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process."

"We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," he affirms.

[Source: Microsoft]

Andrew Hayward, Freelance Writer

About

Andrew writes features, news stories, reviews, and other pieces, often when the UK home team is off-duty or asleep. He’s based in Chicago with his lovely wife, amazing son, and silly cats, and his writing about games, gadgets, esports, apps, and plenty more has appeared in more than 75 publications since 2006.

Areas of expertise

Video games, gadgets, apps, smart home