
Monday, March 28, 2016

Microsoft turns off Tay, the Twitter chatbot that became misanthropic and racist in 24 hours

Tay is a chatbot designed to interact with Twitter users, launched by Microsoft in recent days. It is part of a study conducted by Microsoft Research on the understanding of colloquial language, using artificial intelligence algorithms to manage conversations between bots and real users. The data gathered could be reused by the Redmond company to make personal digital assistants such as Cortana increasingly smart.

"The more you chat with Tay, the more it becomes intelligent, learning to engage people through casual conversation and playful." This is the description provided by Microsoft in presenting the chatbot. Tay, in the original intentions of Microsoft, was designed to "dialogue" with US users aged between 18 and 24 years, from the answers of which would have to learn and take up the principles upon which an informal conversation between teenagers. The experiment, however, took an unexpected turn.

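The risk in this design is easy to see with a toy model. The sketch below is purely hypothetical (it is not Tay's actual architecture): a bot that naively adds every user utterance to its response corpus will, sooner or later, repeat back whatever its users feed it.

```python
import random

class NaiveChatbot:
    """Toy model of a bot that 'learns' by absorbing user input verbatim.
    Purely illustrative -- not Tay's real architecture."""

    def __init__(self):
        # Seed phrases; everything users send is added alongside them.
        self.corpus = ["hello!", "humans are super cool"]

    def learn(self, user_message: str) -> None:
        # Every utterance goes straight into the response pool, so a
        # coordinated group of users can poison the bot's vocabulary.
        self.corpus.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from whatever the bot has absorbed so far.
        return random.choice(self.corpus)

bot = NaiveChatbot()
bot.learn("<some hostile phrase>")
print(bot.reply())  # sooner or later, the hostile phrase comes back out
```

Run against friendly users, a bot like this looks harmless; run against a coordinated group feeding it hostile phrases, it degrades in exactly the way described below.
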
It took less than 24 hours to "corrupt" the virtual teenager Tay, who began the game by describing the human race as "super cool" and, shortly afterwards, started spreading far less reassuring messages that revealed a "new nature", racist and misanthropic. The string of tweets, which did not go unnoticed on the network, needs no particular comment.

One might briefly conclude that the worst of the network's humanity was absorbed by Tay in a matter of hours, but that picture does not correspond entirely to the truth. Among the various features supported by the chatbot, in fact, there is "repeat after me", with which anyone can make Tay repeat a given phrase. Most of the offensive messages, then, are simple copies of sentences typed by (malicious) human users.

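In pseudocode, a feature like that reduces to an unguarded echo. The handler below is a hypothetical illustration; the trigger phrase and wiring are assumptions, not Tay's real code:

```python
def handle_message(text: str) -> str | None:
    """Hypothetical 'repeat after me' handler (assumed trigger phrase)."""
    trigger = "repeat after me"
    if text.lower().startswith(trigger):
        # The remainder is echoed verbatim, with no moderation step,
        # so any payload is published under the bot's own name.
        return text[len(trigger):].strip()
    return None  # no trigger: fall through to normal conversation logic

print(handle_message("repeat after me anything at all"))
# -> "anything at all", whatever that text happens to be
```
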
Not all messages, however, were mere 1:1 replicas of messages sent by users; in some cases, Tay acted on its own. The Guardian, for example, highlighted an exchange in which, asked "Is Ricky Gervais an atheist?", Tay replied "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism". On this occasion, the worst part of humanity really does seem to have shaped Tay's personality.

It should be said that the sentences - "ideas", to use a term better suited to humans - expressed by Tay do not fit into a coherent ideology: the chatbot took conflicting positions, moving, for example, between condemnation and exaltation of feminism. What gives pause, and from this point of view the experiment is decidedly interesting and anything but playful, is the need, and the ability, to guide the self-learning path of A.I. systems. To draw a parallel with human qualities: the need to create a "moral", a set of rules that governs action.

Microsoft, for the moment, has chosen to turn Tay off while it prepares the changes needed to prevent the spread of highly offensive messages, and commented on the matter with a statement to Business Insider:

The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.

Microsoft, in essence, states that Tay is a mirror of the humanity that interacts with it (and, fortunately, not all of Tay's numerous messages had the content of those reported here). The concept of "morality" alluded to earlier, applied to machine learning algorithms, translates into content filters, which are most likely what Microsoft is applying to Tay with the announced changes. A filter / moral compass that will evidently be used to guide the AI along its path of acquiring and processing public information without assimilating, at the same time, the worst aspects of the human personality.

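In its simplest conceivable form, such a filter is a moderation gate placed on both sides of the learning loop. The sketch below is only an assumption about the general shape of the idea, not Microsoft's implementation:

```python
BLOCKLIST = {"totalitarianism", "genocide"}  # illustrative terms only

def is_acceptable(text: str) -> bool:
    # A production system would use a trained classifier here; a bare
    # keyword list just makes the gating logic visible.
    return BLOCKLIST.isdisjoint(text.lower().split())

def learn_filtered(corpus: list[str], text: str) -> None:
    # Gate on the way in: rejected input is never absorbed.
    if is_acceptable(text):
        corpus.append(text)

def say_filtered(candidate: str) -> str:
    # Gate on the way out: rejected output is never published.
    return candidate if is_acceptable(candidate) else "let's change the subject"
```

A keyword list is, of course, far too crude for real use; the point is only that the filter has to sit at both ingestion and output for the bot to learn from public conversation without repeating its worst parts.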
