Teaching robots to solve complex tasks in the real world is a foundational problem of robotics research. Current algorithms require too much interaction with the environment to learn successful behaviors, making them impractical for many real-world tasks.
A research team from Berkeley set out to solve this problem with a new algorithm called ‘Dreamer’. By constructing what’s called a ‘world model’, Dreamer can predict how likely a future action is to achieve its goal.
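The core idea can be illustrated with a toy sketch: instead of trying every action in the real world, the agent rolls candidate actions forward inside its model and picks the sequence with the best predicted outcome. The hand-coded `world_model` function below is a stand-in assumption for illustration only; Dreamer actually learns a recurrent latent dynamics model from experience, and all names here are hypothetical.

```python
def world_model(state, action):
    """Toy predicted dynamics: the state drifts halfway toward the action target.
    (A stand-in for Dreamer's learned model, purely for illustration.)"""
    return state + 0.5 * (action - state)

def imagined_return(state, actions, goal):
    """Roll the model forward in 'imagination' and score closeness to the goal."""
    for a in actions:
        state = world_model(state, a)
    return -abs(goal - state)  # higher (closer to zero) is better

def plan(state, goal, candidates):
    """Pick the candidate action sequence with the best imagined outcome."""
    return max(candidates, key=lambda seq: imagined_return(state, seq, goal))

if __name__ == "__main__":
    # Three candidate two-step plans; the planner never touches the "real world".
    plans = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
    best = plan(state=0.0, goal=1.0, candidates=plans)
    print(best)  # → (1.0, 1.0)
```

Because evaluation happens inside the model rather than on the physical robot, far fewer real-world trials are needed, which is what makes the one-hour walking result possible.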
Beginning on its back, legs waving, the robot learns to flip itself over, stand up, and walk in an hour. A further ten minutes of harassment with a roll of cardboard is enough to teach it how to withstand and recover from being pushed around by its handlers.