Potty-mouthed bot grounded

Microsoft chatbot Tay was quickly led astray and taken offline.

Millions of people in China have been successfully (and pleasantly) interacting with Xiaoice, Microsoft’s Mandarin-language chatbot. She can remember previous exchanges and hold naturalistic conversations with the people who interact with her. (She has also worked as a TV weather presenter.)

So far, so good for Microsoft’s AI initiatives.  However, when the company began to experiment with an English-language chatbot (Tay), things went horribly wrong, horribly quickly.

"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," according to Microsoft.

However, because Tay was designed to interact with, and learn from, 18- to 24-year-olds, she quickly picked up bad habits and developed some inflammatory political opinions.

Within 24 hours of arriving on Twitter, Tay was espousing suspect opinions while proving unable to converse about popular culture such as TV programmes or music. Her tweets were soon being edited, which removed the 'self-learning' element of the design, and Microsoft took her offline. When she made a brief reappearance, her topic of conversation had moved on from politics to bragging about taking drugs in front of police officers. She then spammed all of her followers before being grounded indefinitely by Microsoft.

What went wrong?

IBM’s game-show-winning Watson learned how to swear from the Urban Dictionary and was subsequently given a swear filter. While apologising for Tay’s behaviour, Microsoft blamed the failed experiment on the concerted efforts of a small group of trolls intent on leading her astray.
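A "swear filter" of the kind Watson was reportedly given can be as simple as masking words on a blocklist before the bot speaks. The sketch below is purely illustrative: the word list and function names are invented for this example and bear no relation to IBM's or Microsoft's actual implementations.

```python
import re

# Stand-in entries for a real profanity list (the real ones are much longer).
BLOCKLIST = {"badword", "swear"}

def filter_reply(text: str, replacement: str = "****") -> str:
    """Mask any blocklisted word (case-insensitive) before the bot replies."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return replacement if word.lower() in BLOCKLIST else word
    # Match runs of letters/apostrophes so punctuation is left untouched.
    return re.sub(r"[A-Za-z']+", mask, text)

print(filter_reply("That is a badword, frankly"))  # → That is a ****, frankly
```

A filter like this only catches known words, of course; Tay's problem was learned opinions rather than vocabulary, which simple masking cannot fix.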

Trolls, it seems, are everywhere, and some might say that what happened to Tay was predictable. Much has been written about how to deal with them, but the key strategies seem to include:

  • Listen and check – are the comments valid? If so, make changes
  • Respond with facts to deflect emotional criticisms
  • Respond with humour if that approach is valid for your organisational or personal brand (like @JamesBlunt)
  • Remember – it’s OK to block or ban people

Sources: The BBC; The Guardian; HootSuite.