Why Solving a Rubik’s Cube Does Not Signal Robot Supremacy


“From the robotics perspective, it’s extraordinary that they were able to get it to work,” says Leslie Pack Kaelbling, a professor at MIT who has previously worked on reinforcement learning. But Kaelbling cautions that the approach likely won’t create general-purpose robots, because it requires so much training. Still, she adds, “there’s a kernel of something good here.”

Dactyl’s real innovation, which isn’t evident from the videos, involves how it transfers learning from simulation to the real world.

OpenAI’s system consists of a humanoid hand, from UK-based Shadow Robot Company, connected to a powerful computer system and an array of cameras and other sensors. Dactyl figures out how to manipulate an object using reinforcement learning, which trains a neural network to control the hand through extensive trial and error.
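As a rough illustration of that trial-and-error loop, here is a minimal sketch in Python. The environment class, state sizes, and reward below are hypothetical placeholders, not OpenAI’s actual training setup; a real policy network would replace the random actions.

```python
import numpy as np

class SimulatedHand:
    """Hypothetical stand-in for a simulated Shadow hand; the state
    sizes and reward here are placeholders for illustration only."""

    def reset(self) -> np.ndarray:
        return np.zeros(24)  # e.g., joint angles of the hand

    def step(self, action: np.ndarray):
        obs = np.random.randn(24)         # next joint/cube state
        reward = -np.linalg.norm(action)  # placeholder reward signal
        done = np.random.rand() < 0.01    # episode occasionally ends
        return obs, reward, done

env = SimulatedHand()
obs = env.reset()
for step in range(100_000):
    # A trained policy network would map the observation (joint angles,
    # fingertip positions, cube pose) to finger commands; random actions
    # stand in for it here.
    action = np.random.uniform(-1.0, 1.0, size=20)
    obs, reward, done = env.step(action)
    # Reinforcement learning nudges the network's weights so actions
    # that earned high reward become more likely in similar states.
    if done:
        obs = env.reset()
```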


Reinforcement learning has produced other impressive AI demos. Most famously, DeepMind, an Alphabet subsidiary, used reinforcement learning to train a program called AlphaGo to play the devilishly difficult and subtle board game Go better than the best human players.

The technique has been used with robots as well. In 2008, Andrew Ng, an AI expert who would go on to hold prominent roles at Google and Baidu, used it to make drones perform aerobatics. A few years later, one of Ng’s students, Pieter Abbeel, showed that the approach could teach a robot to fold towels, although this never proved commercially viable. (Abbeel also previously worked part time at OpenAI and still serves as an adviser to the company.)

Last year, OpenAI showed Dactyl simply rotating a cube in its hand using a motion learned through reinforcement learning. To wrangle the Rubik’s Cube, however, Dactyl didn’t rely entirely on reinforcement learning. It got help from a more conventional algorithm to determine how to solve the puzzle. What’s more, although Dactyl is equipped with several cameras, it cannot see every side of the cube. So it required a special cube equipped with sensors to understand how the squares are oriented.
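To make that division of labor concrete, here is a hedged sketch: a conventional algorithm computes which face turns to make, and the learned policy is only asked to physically execute each turn. The open-source `kociemba` package implements Kociemba’s two-phase cube-solving algorithm; the `execute_face_turn` function is a hypothetical stand-in for Dactyl’s learned controller, and in practice the facelet string would come from the instrumented cube’s sensors.

```python
import kociemba  # two-phase Rubik's Cube solver (pip install kociemba)

def execute_face_turn(move: str) -> None:
    # Hypothetical stand-in for the learned policy, which would drive
    # the Shadow hand's joints to perform one physical face turn.
    print(f"policy executes: {move}")

# Cube state as a 54-character facelet string; on Dactyl, the sensing
# cube (not the cameras) reports how the squares are oriented.
scrambled = "DRLUUBFBRBLURRLRUBLRDDFDLFUFUFFDBRDUBRUFLLFDDBFLUBLRBD"

# The conventional algorithm plans the solution as a move sequence,
# e.g. "D2 R' D' F2 B ..."
plan = kociemba.solve(scrambled)

# Reinforcement learning handles *how* to turn a face with five
# fingers; the planner decided *which* faces to turn.
for move in plan.split():
    execute_face_turn(move)
```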

Successes in applying reinforcement learning to robotics have been hard won, because the process is prone to failure. In the real world, it’s not practical for a robot to spend years practicing a task, so training is often done in simulation. But it’s often difficult to translate what works in simulation to messier real-world conditions, where the slightest bit of friction or noise in a robot’s joints can throw things off.

This is where Dactyl’s real innovation comes in. The researchers devised a more effective way to simulate the complexity of the real world by adding noise, or perturbations, to their simulation. In the latest work, the noise is ramped up gradually during training, so that the system learns to be robust to ever-greater real-world complexity. In practice, this allows the robot to learn more complex tasks than previously demonstrated and to transfer them from simulation to reality.
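Here is a minimal sketch of that idea, an approach OpenAI calls automatic domain randomization: each training episode samples the simulator’s physics from ranges that widen whenever the policy is succeeding, so it must keep coping with harsher perturbations. The parameter names, thresholds, and numbers below are illustrative assumptions, not the published values.

```python
import random

# Randomization ranges for the simulated physics, starting narrow.
# The parameters and numbers are illustrative assumptions.
ranges = {
    "friction_scale": [0.95, 1.05],   # multiplier on nominal friction
    "cube_mass_scale": [0.95, 1.05],  # multiplier on nominal cube mass
    "joint_noise_rad": [0.0, 0.005],  # sensor noise on joint angles
}

def sample_physics() -> dict:
    """Draw one randomized 'world' for the next training episode."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def widen(amount: float = 0.01) -> None:
    """Gradually add noise: expand every range so training gets harder."""
    for k, (lo, hi) in ranges.items():
        ranges[k] = [max(0.0, lo - amount), hi + amount]

def run_episode(physics: dict) -> float:
    # Hypothetical stand-in: train the policy for one episode under
    # these physics and return its recent success rate.
    return random.random()

# Whenever the policy succeeds often enough under the current
# perturbations, widen them, forcing robustness to conditions it has
# never seen. That robustness is what lets a skill learned in
# simulation survive the jump to physical reality.
for episode in range(10_000):
    success_rate = run_episode(sample_physics())
    if success_rate > 0.8:
        widen()
```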


