Robotic Imitation of Human Behavior Just Took a Big Step Forward

Artificial intelligence research firm OpenAI took inspiration from infants for its latest project, specifically the stunning ability of a newborn to mimic a person within minutes of birth. The result is a robot that learns by example, and if you squint, you can see a future where helper robots watch a person do a household chore once, then repeat it forever.

The nonprofit, which Elon Musk co-founded and whose stated mission is discovering and enacting the path to safe artificial general intelligence, revealed Tuesday a system that uses two neural networks to train a robot to mimic behavior performed in virtual reality. The example behavior was simple: stacking blocks a certain way.

The robot uses two brains to get this done, and they work in sequence. One brain (the vision network) takes information from a camera and passes what it sees to the second brain (the imitation network), which controls the robotic block-stacking arm.
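To picture the handoff between the two brains, here is a minimal sketch of how such a camera-to-arm pipeline might be wired together. The class and function names (VisionNetwork, ImitationNetwork, control_step) are hypothetical stand-ins for illustration, not OpenAI's actual code.

```python
# Minimal sketch of the two-network pipeline described above, assuming
# hypothetical VisionNetwork and ImitationNetwork stand-ins.
import numpy as np

class VisionNetwork:
    """Maps a raw camera image to an estimate of where the blocks are."""
    def predict_block_positions(self, image: np.ndarray) -> np.ndarray:
        # Placeholder: a real network would be a trained convolutional model.
        return np.zeros((4, 3))  # e.g. (x, y, z) for each of four blocks

class ImitationNetwork:
    """Maps block positions plus one demonstration to the next arm command."""
    def predict_action(self, block_positions: np.ndarray,
                       demonstration: list) -> np.ndarray:
        # Placeholder: a real network would condition on the demo trajectory.
        return np.zeros(7)  # e.g. one command per joint of the arm

def control_step(camera_image, demonstration, vision_net, imitation_net):
    """One pass through the pipeline: camera image in, arm command out."""
    block_positions = vision_net.predict_block_positions(camera_image)
    return imitation_net.predict_action(block_positions, demonstration)
```

The point of the split is that the first network only has to understand the scene, while the second only has to decide what the arm should do with that understanding.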

"Our system can learn a behavior from a single demonstration delivered within a simulator, then reproduce that behavior in different setups in reality," OpenAI explains in a blog post. You might be thinking to yourself: why does the demonstration have to be delivered within a simulator? Wouldn't it be easier if a human stacked up actual blocks in real life, instead of doing it all in virtual reality? It'd be easier on the human, sure, but processing those images would be glacially slow.

Here's why: Traditional vision networks (most of them around today) are programmed to merely classify images and do nothing else. OpenAI's Jack Clark offers Inverse this example: Take 10,000 photos of dogs. Some photos have labels, perhaps by breed, while others do not. When all the images are fed through a vision network, it determines how to sort the unlabeled photos under the right labels.

But that's just classifying images, not taking action on them.
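To make that distinction concrete, here is a toy sketch of what a plain classifier does with Clark's dog photos. The names `breed_classifier` and `unlabeled_photos` are hypothetical, and the point is simply that the output is a label, with no robot action attached to it.

```python
# Toy illustration of classification-only behavior; `breed_classifier` is a
# hypothetical trained model with a predict() method, not a real library API.
def sort_photos_by_breed(breed_classifier, unlabeled_photos):
    """Assign each photo to a predicted breed label, and nothing more."""
    sorted_photos = {}
    for photo in unlabeled_photos:
        label = breed_classifier.predict(photo)  # e.g. "beagle"
        sorted_photos.setdefault(label, []).append(photo)
    return sorted_photos  # labels come out, but no action is taken on them
```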

"If we used real-world images we'd need the robot to be storing a real-world image of every single action it took and appropriately labeling them," Clark explains. "This is extremely slow."

Instead, researchers at OpenAI use simple virtual reality simulations of objects the A.I. already knows. And that's why this robot needs to learn from VR for its real-life block-stacking.

Below is an animation of block-stacking that a human performs using a VR headset and controller, which the robot learns from before imitating it in the real world. Check it out:

The announcement from OpenAI builds on two recent developments from the research firm. The first was vision-based and announced in April: An A.I. trained in VR was used in a real-world robot to successfully identify and pick up a can of Spam from a small table of groceries and throw it in the garbage. It was, naturally, dubbed a Spam-Detecting A.I. That was a fairly simple task, though.

The researchers combined the vision-based learning you see above with so-called one-shot imitation learning, wherein robots should be able to learn from very few demonstrations of any given task. This one-shot ability means a human only has to perform a task (in this case, stacking blocks in a certain order) one time for the robot to nail it.
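A rough sketch of the one-shot idea, under the assumption that the policy has already been trained on many stacking tasks and, at test time, is simply handed a single new demonstration as extra input. The names here (OneShotPolicy, run_new_task, the placeholder env) are illustrative, not OpenAI's code.

```python
# Illustrative sketch of one-shot imitation learning at test time.
import numpy as np

class OneShotPolicy:
    """Hypothetical policy: (current observation, one demonstration) -> action."""
    def act(self, observation: np.ndarray, demo: list) -> np.ndarray:
        # A real network would attend over the demonstration trajectory here.
        return np.zeros(7)

def run_new_task(policy: OneShotPolicy, single_demo: list, env, steps: int = 200):
    """Attempt a task the robot has never seen, given only one demonstration.

    `env` is a placeholder robot environment whose reset() and step() return
    the current observation (e.g. estimated block positions).
    """
    observation = env.reset()
    for _ in range(steps):
        action = policy.act(observation, single_demo)
        observation = env.step(action)
```

The key design choice is that the demonstration is treated as an input to the policy rather than as training data, which is what lets a single example be enough.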

Below is a video released by OpenAI about the project. So while speedy robot butlers may not be right around the corner, training robots in VR to do basic physical tasks is something that's happening right now.

Nick is deputy editor at Inverse. Email him at nick@inverse.com

