Saturday, March 26, 2016

Microsoft #TAYFAIL Smoking Gun: ALICE Open Source AI Library and AIML

[Update 3/27/16: see also the next post: "Microsoft's Tay has no AI"]

As a follow-up to my previous post on Microsoft's Tay Twitter chatbot (@Tayandyou), I found evidence of where the "repeat after me" hidden feature came from. Credit goes to SSHX for this lead in his comment:
"This was a feature of AIML bots as well, that were popular in 'chatrooms' way back in the late 90's. You could ask questions with AIML tags and the bots would automatically start spewing source into the room and flooding it. Proud to say I did get banned from a lot of places."
A quick web search revealed great evidence. First, some context.

AIML is an acronym for "Artificial Intelligence Markup Language", which "is an XML-compliant language that's easy to learn, and makes it possible for you to begin customizing an Alicebot or creating one from scratch within minutes."  ALICE is an acronym for "Artificial Linguistic Internet Computer Entity".  ALICE is a free natural-language artificial intelligence chat robot.
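For readers who have never seen AIML, its basic unit is the "category": a pattern to match against user input, paired with a template for the bot's response. The snippet below is a minimal sketch for illustration; the pattern and response text are invented, not taken from any actual ALICE rule set:

```xml
<!-- A minimal AIML category: when input matches the pattern,
     the bot replies with the template text. -->
<category>
  <pattern>HELLO</pattern>
  <template>Hi there, friend!</template>
</category>
```

Thousands of such categories make up a rule set, which is why auditing them all is a nontrivial QA task.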

Evidence

This GitHub page has a set of AIML statements starting with "R". (This is a fork of "9/26/2001 ALICE", so there are probably some differences from Base ALICE today.)  Here are two statements matching "REPEAT AFTER ME" and "REPEAT THIS".

Snippet of AIML statements with "REPEAT AFTER ME" and "REPEAT THIS"
As it happens, there is an interactive web page with Base ALICE here. (Try it out yourself.) Here is what happened when I entered "repeat after me" and also "repeat this...":

In Base ALICE, the template response to "repeat after me" is "...".  In other words, a NOP ("no operation").  This differs from the AIML statement above, which is ".....Seriously....Lets have a conversation and not play word games.....".  It looks like someone simply deleted the text following the first three periods.

But the template response to "repeat this X" is "X" (in quotes), which is consistent with the AIML statement, above.
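Based on the behavior observed above, the two categories would look roughly like this in AIML. This is a sketch reconstructed from the observed responses, not the exact markup in Base ALICE or Tay's rule set:

```xml
<!-- Observed in Base ALICE: "repeat after me" maps to a
     do-nothing template (the echo behavior was apparently stripped). -->
<category>
  <pattern>REPEAT AFTER ME</pattern>
  <template>...</template>
</category>

<!-- "repeat this X" echoes X back in quotes; <star/> captures
     whatever the wildcard * matched in the input. -->
<category>
  <pattern>REPEAT THIS *</pattern>
  <template>"<star/>"</template>
</category>
```

Any category whose template echoes `<star/>` back verbatim is an injection point: whatever a troll types after the trigger phrase becomes the bot's own output.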

Conclusion

From this evidence, I infer that Microsoft's Tay chatbot uses the open-source ALICE library (or a similar AIML library) to implement rule-based behavior.  Though they implemented some rules to thwart trolls (e.g. Gamergate), they left in other rules from previous versions of ALICE (either Base ALICE or some forked version).

My assertion about root cause stands: a poor QA process on the ALICE rule set allowed the "repeat after me" feature to stay in, when it should have been removed or significantly modified.

Another inference is that "repeat after me" is probably not the only "hidden feature" in the AIML rules that could have caused misbehavior; it was just the one the trolls stumbled upon and exploited.  Someone with access to the Base ALICE rules and their variants could have exploited these other vulnerabilities.
