

Google unveiled this week an open-domain, neural-network-powered chatbot called Meena, and claimed that it's the best chatbot ever created.

There's good reason to believe the claim is true. (Google declined an interview for this article.)

Meena is based on new technology, old technology, new approaches and mind-blowing quantities of data. Researchers fed Meena 341 gigabytes of conversation from public social media posts. It has 2.6 billion parameters - far more than other leading chatbots. The dataset is filtered through, among other things, an algorithm that removes offensive content.

Google says Meena is designed to be specific, which would be impressive, and sensible, which would be mind-bogglingly astonishing.

Google has invented a new metric to keep Meena from going off the conversational rails, as most chatbots have traditionally done. It's called the Sensibleness and Specificity Average (SSA) metric, and it judges whether each response makes sense within the context of the whole thread of conversation, rather than as an isolated reply to the previous user input.

Conversational chatbots have been around for decades. They rely on tricks, such as generic vagueness in response to sentences they don't understand. When a chatbot is confronted with input it doesn't understand, that confusion is called perplexity. So part of the parlor trick with conversational agents is the graceful handling of perplexity.

For example, if you tell a typical chatbot, "I like to scuba dive," the response might be: "I'm glad you like to scuba dive." It's a plausibly human-like response, but it's obvious that the chatbot is exercising a fallback option: just say you're glad, followed by whatever the user said. More important, the response is useless. That's why most chatbots are novelties and parlor tricks rather than useful conversational agents.

Meena's specialty is the minimization of perplexity itself, rather than the convincing concealment of perplexity behind generic, all-purpose responses.

Meena scored 79 percent on the SSA metric. That's lower than the average human score of 86 percent, but much higher than the highest score of the previous Loebner Prize chatbot champion, Mitsuku, which scored 56 percent. In other words, Meena is theoretically closer to the conversational ability of humans than to that of the second-best chatbot. (You can chat with Mitsuku here.)

Google researchers claim that human-level SSA is "within reach."

To be clear, these are claims, not facts. Until we can try Meena for ourselves, we're taking Google's word for it. And all the judgments about Meena come from its own creators. (Google may demonstrate Meena, or even make it publicly available, on May 12 at Google's I/O developers conference.) Still, the claims are credible - as well as incredible, in the sense of being extraordinary.

Google has not released a demo version for public use. The company plans to first make sure Meena is safe and unbiased.

Four years ago, Microsoft unveiled a chatbot called Tay, which was designed to absorb the language of the people who interacted with it on Twitter. Within 24 hours, trolls flooded Tay with the language of racism and misogyny, which turned Tay into a woman-hating racist. Good thinking, Google.

Microsoft was emboldened to introduce Tay by the success of its Chinese-language Xiaoice chatbot, which launched in 2014 and has more than 660 million users. As with Tay, Xiaoice was equipped with the capacity to parrot social media chatter as a shortcut to natural language responses.
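The generic-fallback "parlor trick" described above is easy to sketch. Here is a minimal, hypothetical rule-based bot: a couple of hand-written patterns, and an echo response for anything it doesn't understand. The patterns and replies are illustrations invented for this sketch, not the rules of Meena or any real chatbot.

```python
import re

# A tiny, hypothetical rule-based chatbot. Real systems like Mitsuku use
# far larger pattern sets, but the fallback mechanism is the same idea.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello! How are you today?"),
]

def _swap_pronouns(text: str) -> str:
    # Flip first-person words to second person so the echo reads naturally.
    return re.sub(r"\bI\b", "you", re.sub(r"\bmy\b", "your", text))

def reply(user_input: str) -> str:
    for pattern, canned in RULES:
        if pattern.search(user_input):
            return canned
    # Fallback for perplexing input: feign interest and echo the user's
    # own words back. Plausibly human-like, but it carries no information.
    return "I'm glad " + _swap_pronouns(user_input.rstrip(".")) + "."

print(reply("I like to scuba dive"))  # -> I'm glad you like to scuba dive.
```

The fallback never fails outright, which is exactly why such bots feel superficially fluent while saying nothing useful.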

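For readers curious what "minimizing perplexity" means concretely: in language-model research, perplexity is the exponential of the average negative log-probability the model assigned to each token it was supposed to predict; lower perplexity means the model was less "surprised" by the conversation. A minimal sketch, with made-up probabilities standing in for a real model's outputs:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability the model
    assigned to each correct next token. Lower is better."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical numbers: a confident model vs. a confused one scoring
# the same five-token reply.
confident = [0.9, 0.8, 0.85, 0.9, 0.7]
confused = [0.2, 0.1, 0.3, 0.15, 0.2]

print(perplexity(confident))  # low: the reply looks natural to the model
print(perplexity(confused))   # high: the model is "perplexed"
```

Training Meena to drive this number down directly, rather than papering over confusion with canned replies, is the shift the Google researchers describe.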