How Robots Acting Randomly Can Help Speed Human Problem-Solving

Robots that occasionally act randomly can help groups of humans solve collective-action problems faster, new research has shown.

Playing a game with someone unpredictable can be annoying, particularly when you're on the same team. But in an online game designed to test group decision-making, adding computer-controlled players that sometimes behave randomly more than halved the time it took to solve the problem, according to the new study.

That shouldn't come as too much of a surprise, said study leader Nicholas Christakis, director of the Human Nature Lab at Yale University. Random mutations make evolution possible; random movements by animals in flocks and schools enhance group survival; and computer scientists often introduce noise (a statistical term for random or meaningless information) to improve search algorithms, he said.

But the discovery that these effects are mirrored in combined groups of humans and machines could have wide-ranging implications, Christakis told Live Science. To start, self-driving cars will soon share roads with human drivers, and more people may soon find themselves working alongside robots or with "smart" software.

In the study, published online today (May 17) in the journal Nature, the researchers describe how they recruited 4,000 human workers from Amazon's Mechanical Turk online crowdsourcing platform to play an online game.

Each participant was assigned at random to one of 20 locations, or "nodes," in an interconnected network. Each player could select from three colors, and the goal was for every node to end up with a different color from each of the neighbors it was connected to.

Players could see only their immediate neighbors' colors, which means the problem could appear solved from an individual's perspective while the game as a whole remained unsolved, as the sketch below illustrates.
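To make the setup concrete, here is a minimal sketch in Python of that local-versus-global distinction, assuming a simple adjacency-list representation of the network; the function names and the tiny example graph are illustrative, not taken from the study's software.

    # Sketch of the coloring game's solved/unsolved distinction (illustrative).
    # network maps each node to its neighbors; colors maps each node
    # to one of the game's three colors.

    def locally_solved(node, network, colors):
        """True if this node's color differs from all its neighbors' colors.
        This is all an individual player could actually observe."""
        return all(colors[node] != colors[nbr] for nbr in network[node])

    def globally_solved(network, colors):
        """True only if every node in the network is conflict-free."""
        return all(locally_solved(node, network, colors) for node in network)

    # The player at node 0 sees no conflict, yet the game is still unsolved,
    # because a conflict persists between the distant nodes 2 and 3.
    network = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    colors = {0: "red", 1: "green", 2: "blue", 3: "blue"}
    print(locally_solved(0, network, colors))   # True
    print(globally_solved(network, colors))     # False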

While highly simplified, the game mimics a number of real-world coordination problems, such as tackling climate change or coordinating between different departments of a company, in which a solution can appear to have been reached from a local perspective even though it has not been reached globally, Christakis said.

In some games, the researchers replaced some of the human players with software bots that simply sought to minimize color conflicts with their neighbors. Some of these bots were also programmed to be "noisy": some had a 10 percent chance of making a random color choice, and others a 30 percent chance.
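In code, such a bot policy amounts to a greedy rule plus occasional randomness, roughly as sketched below; the 10 and 30 percent noise levels come from the study, but the function itself is an illustrative reconstruction, not the researchers' actual implementation.

    import random

    COLORS = ["red", "green", "blue"]

    def bot_choose_color(node, network, colors, noise=0.1):
        """Greedy, conflict-minimizing bot with a chance of acting randomly.

        With probability noise (0.1 or 0.3 in the study), pick a color
        uniformly at random; otherwise pick the color that conflicts
        with the fewest of this node's neighbors."""
        if random.random() < noise:
            return random.choice(COLORS)
        def conflicts(color):
            return sum(1 for nbr in network[node] if colors[nbr] == color)
        return min(COLORS, key=conflicts)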

The researchers also experimented with placing these bots in different areas of the network. Sometimes the bots were placed in central locations that had more connections to other players; other times, they were placed at random or on the periphery, where nodes have fewer links.
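"Central" and "peripheral" here refer to a node's degree, the number of links it has. One plausible way to implement the three placement strategies is sketched below; the exact procedure, and the choice of three bots, are assumptions made for illustration.

    import random

    def pick_bot_nodes(network, k=3, placement="central"):
        """Choose k nodes to hand over to bots (k=3 is illustrative).

        "central" takes the k highest-degree nodes, "peripheral" the k
        lowest-degree nodes, and "random" a uniform sample."""
        by_degree = sorted(network, key=lambda n: len(network[n]))
        if placement == "central":
            return by_degree[-k:]
        if placement == "peripheral":
            return by_degree[:k]
        return random.sample(list(network), k)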

The researchers found that games in which bots exhibiting 10 percent noise were placed at the center of the network were typically solved 55.6 percent faster than sessions involving only humans.

"[The bots] got the humans to change how they interacted with other humans," Christakis said. "They created these kinds of positive ripple effects to more distant parts of the network. So the bots in a way served a kind of teaching function." [The 6 Strangest Robots Ever Created]

There's a fine balance, though. The researchers found that the bots with a 30 percent chance of making a random color choice introduced too much noise and increased the number of conflicts in the group decision-making process. Similarly, bots that exhibited no randomness at all actually reduced the randomness of the human players, leaving more of them stuck in unresolvable conflicts, the scientists said.

Iain Couzin, director of the Department of Collective Behaviour at the Max Planck Institute for Ornithology in Germany and an expert in collective behavior, said the study's findings mirror what he has seen in animals, where uninformed individuals can actually improve collective decision-making.

He said it is a very important first step toward a scientific understanding of how similar processes impact human behavior, particularly in the context of interactions between humans and machines.

"Already we are making our decisions in the context of algorithms and that's only going to expand as technology advances," he told Live Science. "We have to be prepared for that and understand these types of processes. And we almost have a moral obligation to improve our collective decision-making in terms of climate change and other decisions we need to make at a collective level for humanity."

The new research also points to an alternative paradigm for the widespread introduction of artificial intelligence into society, Christakis said. "Dumb AI" (bots that follow simple rules, as opposed to sophisticated AI) could act as a catalyst for, rather than a replacement of, humans in various kinds of cooperative networks, ranging from the so-called sharing economy (which encompasses services like ride-sharing, home-lending and coworking) to citizen science.

"We're not trying to build AlphaGo or [IBM's] Watson to replace a person we are trying to build technology that helps supplement groups of people, and in a way, I think that might be a little less frightening," Christakis said. "The bots don't need to be very smart because they're interacting with smart humans. They don't need to be able to do stuff by themselves; they just need to help the humans help themselves," he added.

Original article on Live Science.
