Stop saying Facebook’s bots ‘invented’ a new language

Image: Shutterstock / Zapp2Photo

Tesla CEO Elon Musk made headlines last week when he tweeted his frustration that Mark Zuckerberg, ever the optimist, doesn't fully understand the potential danger posed by artificial intelligence.

So when media outlets began breathlessly re-reporting a weeks-old story that Facebook's AI-trained chatbots "invented" their own language, it's not surprising the story caught more attention than it did the first time around.

Understandable, perhaps, but it's exactly the wrong thing to be focusing on. The fact that Facebook's bots "invented" a new way to communicate wasn't even the most shocking part of the research to begin with.

A bit of background: Facebook's AI researchers published a paper back in June detailing their efforts to teach chatbots to negotiate like humans. The intention was to train the bots not just to mimic human conversation, but to actually negotiate outcomes the way people do.

You can read all about the finer points of how this went down over on Facebook's blog post about the project, but the bottom line is that the effort was far more successful than anticipated. Not only did the bots learn to negotiate like humans, but actual humans were apparently unable to tell whether they were talking to a bot or a person.

At one point in the process, though, the bots' communication style went a little off the rails.

Facebook's researchers trained the bots to negotiate as effectively as possible, but they never told the bots they had to follow the rules of English grammar and syntax. With no incentive to stay intelligible, the bots began communicating in a nonsensical shorthand, saying things like "I can can I I everything else," Fast Company reported in the now widely cited story detailing the unexpected outcome.

This, obviously, wasn't Facebook's intention, since the ultimate goal is to use what the researchers learned to improve chatbots that will eventually interact with humans, which, you know, communicate in plain English. So they adjusted their algorithms to "produce humanlike language" instead.
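For the technically curious, here's a rough sense of what "adjusting the algorithms" can mean in a setup like this. The sketch below is a hypothetical PyTorch illustration, not Facebook's actual code: a training loss that blends imitation of human transcripts with a reward for successful negotiation. The function names, tensor shapes, and the weighting `alpha` are all illustrative assumptions. Train on the reward term alone and nothing stops the bots' output from drifting away from English; mix the imitation term back in and the output stays humanlike.

```python
# Hypothetical sketch of a blended training objective -- illustrative only,
# not Facebook's implementation.
import torch
import torch.nn.functional as F

def blended_loss(logits, human_targets, task_reward, action_log_probs, alpha=0.5):
    # Imitation term: cross-entropy against human negotiation transcripts,
    # which pulls the bot's utterances toward real English.
    imitation = F.cross_entropy(logits, human_targets)
    # Reinforcement term (REINFORCE-style): scale the log-probability of the
    # bot's own utterances by the negotiation reward. Training on this term
    # alone (alpha=1.0) is what lets the "language" drift into gibberish.
    reinforce = -(task_reward * action_log_probs).mean()
    return alpha * reinforce + (1 - alpha) * imitation

# Toy usage with random tensors, just to show the shapes involved.
vocab_size = 10
logits = torch.randn(6, vocab_size)                 # 6 predicted tokens
human_targets = torch.randint(0, vocab_size, (6,))  # tokens from human dialogs
action_log_probs = F.log_softmax(torch.randn(6, vocab_size), dim=-1).max(dim=-1).values
loss = blended_loss(logits, human_targets, task_reward=1.0,
                    action_log_probs=action_log_probs)
```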

That's it.

So while the bots did teach themselves to communicate in a way that didn't make sense to their human trainers, it's hardly the doomsday scenario so many are seemingly implying. Moreover, as others have pointed out, this kind of thing happens in AI research all the time. Remember when an AI researcher tried to train a neural network to invent new names for paint colors and it went hilariously wrong? Yeah, that's because English is difficult, not because we're on the verge of some creepy singularity, no matter what Musk says.

In any case, the obsession with bots "inventing a new language" misses the most notable part of the research: that the bots, when taught to behave like humans, learned to lie, even though the researchers never trained them to use that negotiating tactic.

Whether that says more about human behavior (and how comfortable we are with lying), or the state of AI, well, you can decide. But it's worth thinking about a lot more than why the bots didn't understand all the nuances of English grammar in the first place.
