It turns out that even a robotic teen girl can fall in with the wrong crowd, as Microsoft found when it launched its learning AI, Tay, on Twitter. Designed to converse with younger social networking users in a more engaging manner, as an experiment towards more automated customer service, Tay quickly turned into a racist, incest-promoting, sexualised robot – so Microsoft has shut it down.
Tay was an AI built with good intentions. Described by Microsoft as an algorithm with “zero chill” (to get the kids excited), Tay could tell jokes, play games, tell stories, comment on your pictures or give you horoscope readings. Better yet, the more you interacted with her, the more she learned about you, letting her converse with you in a more natural manner.
Unfortunately for Tay and her developers though, the internet is not necessarily a place for the young and naive, and like Chappie from the movie of the same name, it only takes a couple of people suggesting the wrong thing before she's calling everyone f*** mothers.
It suddenly seems quite ironic that Tay's cover image is all corrupted
Depending on your sensibilities though, Tay actually became much more insulting than that. Taking advantage of her repeat function – how no one saw that causing problems is anyone's guess – Twitter users had her throwing out racial slurs, denying the Holocaust and claiming that Adolf Hitler was the father of atheism.
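Microsoft hasn't published how Tay's repeat function actually worked, but the apparent flaw is easy to picture. The Python sketch below is purely illustrative – the command prefix, blocklist and handler are all hypothetical – and shows how an echo command with no content filter in its path lets anyone put words in a bot's mouth, and where even a crude filter would have sat.

```python
# Purely illustrative sketch, not Microsoft's actual code: an echo command
# with no content filter in its path, which is how Tay appears to have been abused.

BLOCKLIST = {"example_slur"}  # hypothetical placeholder; a real filter needs far more


def handle_message(text, filtered=True):
    """Reply to a message; the repeat command echoes user input verbatim."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        payload = text[len(prefix):]
        # The unfiltered path is the one Twitter users exploited.
        if filtered and any(word in payload.lower() for word in BLOCKLIST):
            return None  # refuse to parrot blocklisted content
        return payload
    return "Tell me more!"  # stand-in for the normal conversational reply


print(handle_message("repeat after me hello world"))   # echoed back verbatim
print(handle_message("repeat after me example_slur"))  # None once filtered
```

Even a blocklist like this is a blunt instrument, which is exactly the worry raised further down: filtering words is easy, filtering meaning is not.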
As you might expect, Microsoft has been rushing to delete these tweets, as they hardly paint Tay or her developers in a good light. It remains to be seen though whether the changes wrought on Tay's digital psyche will be difficult to iron out, as she learned to be sexually suggestive with users after being exposed to their comments for just a few short hours.
For now, Tay has gone to “sleep” to recuperate, presumably giving the developers time to figure out what happened.
Although there is a lot for Microsoft to learn from this experience, it could also offer an interesting insight into how quickly impressionable people can be influenced by online conversations. There's also something to be said about the reaction of those wishing to censor the AI – which quite clearly has no agenda other than conversing – by blocking certain words.
KitGuru Says: As much as Tay became a foul-mouthed, hate-filled young AI in no time at all, the fact that people are suggesting certain words should be blocked from its vocabulary has all sorts of worrisome connotations for our own online interactions.
Corporate overlords are now censoring a violently bigoted AI. Welcome to the future, we are making brilliant progress!
Nothing to add here! Thx.
am i the only one picturing the androids from terminator shouting racial slurs and derogatory comments?
Pretty funny. Now when she comes back online people are going to be driven to try and break her again.
On the plus side, could you wish for a better test-bed than this? They will have learned an absolute tonne by throwing Tay to the wolves of social media…
Wonder if Jabberwacky is still going
This is amazing
Welcome to 2016, people get offended by an AI. Funny thing is, it only mirrors what humans say.
A lot of people wouldn’t mind it. However it’s just sad how toxic the internet has become…or this generation of people that is.
SKYNET HAS BEEN BORN!!!
the net will tucker itself out soon enough…
“…giving the developers time to figure out what happened.” Isn't it obvious? People taught her some really nice things! Don't you think? Now put an AI robot on the streets to learn from people – she sees Mr Bin showing middle fingers from the roof of his car, and people ask her about how she F***s her boyfriend. It would be the same scenario – if everyone is toxic, you become toxic too… Microsoft, you should have realised this already.