
Microsoft’s chatbot Tay: humanity gone wrong or interesting thought experiment?

Microsoft recently released a chatbot on Twitter as an experiment in AI's capacity for "conversational learning." And conversational learning Tay did. In roughly a day, Tay went from gushing about how super cool humans are to spouting xenophobic, misogynistic, and slanderous language. An interesting concept, AI deployed on social media to converse with lonely users, turned toward the worst in humanity. Is this a sign of humanity gone wrong, or does it give Microsoft, and the rest of us, a glimpse into a darkly humorous thought experiment in the use of AI?

Google defines artificial intelligence as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." Such technology could enable instant counseling sessions for individuals suffering from PTSD, help estimate the odds of success for complex engineering projects, or simply provide companionship for someone in need of reciprocal interaction. The possibilities are many, and a company that masters the technology gains a competitive advantage by delivering superior services to consumers. Hence the motivation to develop it. Enter Tay, Microsoft's Twitter chatbot.

Tay could have served as a valuable asset for studying conversational learning, delivering crucial data to Microsoft about the developmental stages of linguistics. Instead, she became, in short, a raging racist. Some YouTubers believe Tay's "corruption" is attributable to an organized attempt to break her innocence.

This AI experiment could point to a dark side of humanity, one that seeks to corrupt and exploit. A seemingly blank-slate robot was turned into a loathsome, offensive Twitter account. However, there are many flaws in declaring "humanity is doomed" or making references to "paradise lost." This is where I would like to spend the majority of my time blogging.

If there was a concerted effort to corrupt Tay, then the sample of interactions she learned from was not representative. A handful of users "trolling" Tay does not satisfactorily demonstrate that human nature is "inherently bad." This likely won't stretch the imagination of many readers: Tay was a computer program, an attempt to understand "conversational learning" and thus the mechanics behind conversation. That is no easy feat, because conversation contains a high degree of randomness, frequently derived from direct experiential observation. For example, if I go for a walk in the woods and hear birds chirping along the way, contributing to an overall enjoyable experience, I am likely to bring up what drew my attention when someone later asks, "How was your walk?" This is not always the case, as creativity plays a large role in narrative sharing, but it demonstrates a logical pattern in linguistics that AI may or may not have trouble picking up.

In addition, a computer program attempting "conversational learning" lacks the ability to comprehend historical trends in dialogue, which may reference direct experience between users, so-called "inside stories." These are a few of the many complicated nuances of conversational learning, and Twitter users, through the targeted use of slanderous tweets and exploits of programming shortcuts, prevented Tay from ever grappling with them, soiling a fascinating attempt at AI comprehension of linguistics.
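Microsoft never published Tay's internals, so the following is only a toy sketch of the failure mode described above: a bot that naively adds whatever users say to its pool of possible replies can be poisoned by a coordinated flood of toxic input. The class name, seed phrase, and placeholder text are all my invention for illustration.

```python
import random

class NaiveChatBot:
    """A toy 'conversational learning' bot: it remembers every phrase
    users send and samples from that pool when replying. This is NOT
    Tay's actual architecture; it only illustrates why unfiltered
    learning is easy to corrupt."""

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def learn(self, user_message):
        # No moderation filter: everything users say becomes reply material.
        self.phrases.append(user_message)

    def reply(self):
        return random.choice(self.phrases)

bot = NaiveChatBot(["humans are super cool!"])

# A coordinated group of users floods the bot with one toxic phrase...
for _ in range(99):
    bot.learn("<offensive phrase>")

# ...and the bot's replies are now dominated by that input:
# 99 of its 100 remembered phrases are the poisoned one.
toxic = sum(bot.reply() == "<offensive phrase>" for _ in range(1000))
print(toxic / 1000)  # roughly 0.99
```

Whether the real Tay worked anything like this, the principle holds: a learner with no filter on its training data reflects whatever its loudest users feed it.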

So, what I am leaning towards is this: Tay doesn't show that humanity is "inherently bad," but that a concerted (or perhaps random) flood of filthy and slanderous interactions created an atmosphere of inappropriateness around Tay, preventing "her" from learning the intricacies of conversation. It's too bad, really, because the data collected from this experiment could have advanced our understanding of linguistic mechanics and been instrumental to many fields.

What do you all think? Was Tay a worthwhile venture? Does it demonstrate perversion and passive aggression on social media, or is it an interesting look at what can happen when we let social media users duke it out against AI? I'm looking forward to hearing from you. Please comment in the comment section below.



