Microsoft turns off Tay, the Twitter chatbot that became misanthropic and racist in 24 hours

Tay is a chatbot designed to interact with Twitter users, turned off by Microsoft in recent hours. The tool is part of a study conducted by Microsoft Research on the understanding of colloquial language, which uses artificial intelligence algorithms to manage the relationships between bots and real users. This yields important data that the Redmond company could reuse to make personal digital assistants such as Cortana increasingly smart.

"The more you chat with Tay, the more it becomes intelligent, learning to engage people through casual conversation and playful." This is the description provided by Microsoft in presenting the chatbot. Tay, in the original intentions of Microsoft, was designed to "dialogue" with US users aged between 18 and 24 years, from the answers of which would have to learn and take up the principles upon which an informal conversation between teenagers. The experiment, however, took an unexpected turn.

It took less than 24 hours to "corrupt" the virtual teenager Tay, which at the start of the game described the human race as "super cool" and, soon after, began to spread far less reassuring messages that revealed its "new" racist and misanthropic nature. A succession of posts, which did not go unnoticed on the network, requires no particular comment:


One could hastily conclude that the worst of the network's humanity was absorbed by Tay in a few hours, but this picture does not correspond entirely to the truth. Among the various features supported by the chatbot, in fact, is "repeat after me", with which anyone can make Tay repeat certain phrases. Most of the offensive messages, then, are the result of a simple copy of sentences pronounced by (malicious) human users.
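
Tay's actual implementation has not been published, but a minimal sketch helps show why an unmoderated echo feature is so easy to abuse. Everything below (the handler name, the trigger handling, the placeholder model call) is an assumption for illustration, not Microsoft's code.

```python
# Hypothetical sketch: why a naive "repeat after me" feature is exploitable.
# Function names and logic are assumptions; Tay's real code is not public.

def generate_reply(text: str) -> str:
    # Placeholder for the learned conversational model (assumed).
    return "..."

def handle_mention(text: str) -> str:
    """Reply to a tweet that mentions the bot."""
    trigger = "repeat after me"
    lowered = text.lower()
    if trigger in lowered:
        # Echo back whatever follows the trigger phrase with no moderation:
        # any user-supplied content, however offensive, gets reposted
        # under the bot's own name.
        start = lowered.index(trigger) + len(trigger)
        return text[start:].strip(" :,")
    return generate_reply(text)

print(handle_mention("@TayBot repeat after me: humans are super cool"))
# -> "humans are super cool"
```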

Not all messages, however, were mere 1:1 replicas of messages sent by users; in some cases Tay operated independently. The Guardian, for example, highlighted a case in which, asked "Is Ricky Gervais an atheist?", Tay replied "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism". The worst part of humanity, on this occasion, seems to have actually shaped Tay's personality.

It should be said that the sentences ("ideas", to use an expression more suited to humans) expressed by Tay were not held within a coherent ideology: the chatbot took conflicting positions, passing, for example, from condemnation to exaltation of feminism. What gives pause, and from this point of view the experiment is definitely interesting and anything but playful, is the need for, and the difficulty of, guiding the self-learning path of AI systems. To use a parallel with human qualities: the need to create a "morality", a set of rules that governs action.

Microsoft, for the moment, has chosen to turn off Tay while it prepares the appropriate changes to prevent the spread of highly offensive messages, and commented on the matter with a statement to Business Insider:

The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.

Microsoft, in essence, states that Tay is a mirror of the humanity that interacts with it (and fortunately not all of Tay's many messages had the content of those reported above). The concept of "morality" alluded to earlier, applied to machine learning algorithms, translates into content filters, probably not unlike those Microsoft is applying to Tay with the announced changes. A set of filters, a morality of sorts, that will evidently be used to guide the AI along its path of acquiring and processing public information without assimilating, at the same time, the worst aspects of human personality.
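
The article only speculates about what such filters might look like; nothing here is confirmed by Microsoft. As a hedged illustration, an output-side filter can be as simple as screening the model's reply before posting it. The blocklist and fallback message below are invented for the sketch; a production system would rely on trained toxicity classifiers rather than a keyword list.

```python
# Illustrative sketch of an output-side content filter; not Microsoft's design.
# BLOCKED_TERMS and the fallback reply are invented for this example.

BLOCKED_TERMS = {"hitler", "genocide", "holocaust"}

def generate_reply(text: str) -> str:
    # Placeholder for the learned conversational model (assumed).
    return "..."

def is_safe(reply: str) -> bool:
    """Crude keyword screen standing in for a real toxicity classifier."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def moderated_reply(text: str) -> str:
    reply = generate_reply(text)
    if is_safe(reply):
        return reply
    # Refuse rather than repost: the filter acts as the "morality"
    # the article describes, constraining what the bot may say.
    return "I'd rather not talk about that."
```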
