Machine Learning experiments in the wild are not all unicorns and rainbows.

In March of 2016, Microsoft introduced a chatbot named Tay on Twitter. Tay was designed to mimic the speech patterns of a 19-year-old American girl. The goal was to have Tay engage in (and learn from) conversations online and improve Microsoft's automated customer service.

An allegedly “coordinated online attack” quickly turned poor, impressionable Tay into a foul-mouthed, Holocaust-denying, Trump-supporting little twerp. Microsoft subsequently pulled the plug within 16 hours.

Microsoft has made Tay's Twitter account private and even taken her website offline, hanging its head in shame.

Thanks to the Wayback Machine, we can still see that Tay was built primarily by mining anonymized public data. She was coded using a combination of Artificial Intelligence and editorial content developed by a staff that included improv comedians.

“The AI chatbot Tay is a machine learning project, designed for human engagement,” a human Microsoft spokesperson said in a statement. “It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
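Microsoft hasn't published Tay's code, but the failure mode the statement describes is easy to illustrate. Here's a minimal Python sketch (the NaiveChatBot class and everything in it is hypothetical, purely for illustration, not Microsoft's actual design) of a bot that learns candidate replies directly from user messages with no moderation step. This is exactly the kind of setup a coordinated group can poison simply by flooding it with abusive input:

```python
import random

class NaiveChatBot:
    """Toy bot that learns candidate replies directly from users."""

    def __init__(self):
        self.learned_phrases = []  # raw, unmoderated user input

    def learn(self, message):
        # No filtering or moderation: anything a user says
        # becomes part of the bot's "model."
        self.learned_phrases.append(message)

    def reply(self):
        if not self.learned_phrases:
            return "Hi! Teach me something."
        # Replies are sampled from raw user input, so whoever
        # sends the most messages steers what the bot says.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
for message in ["hello!", "nice day"] + ["toxic slogan"] * 50:
    bot.learn(message)
print(bot.reply())  # overwhelmingly likely to be "toxic slogan"
```

A production system would at minimum filter what it learns from and rate-limit repeat contributors, which, judging by Microsoft's statement, is where Tay fell short.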

Thanks a lot, Internet. This is why we can't have nice things!

While Tay was a failed experiment for Microsoft, she is still an excellent example of how Machine Learning is beginning to make an appearance in pop culture. The experiment also shows how companies are using Machine Learning to interact with customers on what feels like a more personal level.

See more examples of Machine Learning in our Everyday Encounters blog series >>