Magic is the first fully differentiable physics engine

Magic finally bridges the gap between research in supervised Deep Learning and continuous control. Its ability to compute the gradient of any continuous loss with respect to the parameters of your controller means that training agents for continuous control tasks is now up to 1000x faster than with classical Reinforcement Learning methods. Magic supports the industry-standard URDF format, so you can import your existing robot models today and do research faster.
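To make that claim concrete, here is a minimal sketch of what "the gradient of a continuous loss with respect to the parameters of your controller" means. It uses plain JAX on a toy point-mass simulation, not Magic's actual API; the names (point_mass_step, rollout_loss, theta) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def point_mass_step(state, params, dt=0.05):
    """One step of toy point-mass dynamics: a linear feedback controller
    pushes the mass toward the origin. state = (position, velocity)."""
    pos, vel = state
    force = params[0] * pos + params[1] * vel
    vel = vel + force * dt
    pos = pos + vel * dt
    return (pos, vel)

def rollout_loss(params, horizon=100):
    """Continuous loss: squared distance from the origin after the rollout."""
    state = (jnp.array(1.0), jnp.array(0.0))   # start 1 m from the target, at rest
    for _ in range(horizon):
        state = point_mass_step(state, params)
    return state[0] ** 2

theta = jnp.array([-1.0, -0.5])                # controller parameters (gains)
grad_fn = jax.grad(rollout_loss)               # exact gradient through the whole rollout
print(grad_fn(theta))                          # d(loss)/d(theta) in one backward pass
```

Because every operation in the rollout is differentiable, a single backward pass returns the exact gradient of the loss over the whole trajectory.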

Why is a differentiable simulator a big deal?

When scientists train robotic agents (i.e., programs that control robotic hardware to accomplish a set of goals) in simulation, they often use the Reinforcement Learning (RL) framework.

Reinforcement Learning is a way to approximate the Gradient, a mathematical signal pointing toward a better solution. The Gradient is then leveraged by an Optimizer, which uses the signal to improve the robotic agent over time.

RL is slow. And that's because it needs to try new strategies many times, with small variations, just to approximate the Gradient.
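To illustrate why that approximation is expensive, here is a sketch of a simple zeroth-order estimator in the spirit of finite differences and evolution strategies. It is an assumption about how this family of gradient-free methods works in general, not Magic's code or any specific RL library's implementation.

```python
import numpy as np

def rollout_loss(params, horizon=100, dt=0.05):
    """Same toy point-mass task as above, but the simulator is treated as a
    black box that only returns a score -- no gradients available."""
    pos, vel = 1.0, 0.0
    for _ in range(horizon):
        force = params[0] * pos + params[1] * vel
        vel += force * dt
        pos += vel * dt
    return pos ** 2

def estimate_gradient(params, num_samples=64, sigma=0.1):
    """Zeroth-order estimate: perturb the parameters, re-run the whole
    simulation for each perturbation, and average the finite differences.
    Cost: 2 * num_samples full rollouts for one noisy gradient estimate."""
    grad = np.zeros_like(params)
    for _ in range(num_samples):
        eps = np.random.randn(*params.shape)
        grad += eps * (rollout_loss(params + sigma * eps) -
                       rollout_loss(params - sigma * eps)) / (2.0 * sigma)
    return grad / num_samples

theta = np.array([-1.0, -0.5])
print(estimate_gradient(theta))   # 128 simulations for a single noisy estimate
```

With 64 perturbations, one noisy gradient estimate already costs 128 full rollouts, and you need a fresh estimate for every single parameter update.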

Wouldn't it be wonderful if we could compute that gradient directly from the simulator instead of approximating it? That's what Magic is all about: it's the first simulator that allows for an explicit computation of the Gradient.
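Here is a minimal sketch, again in JAX rather than Magic's own interface, of what training looks like once the simulator itself is differentiable: each optimizer update consumes one exact gradient instead of hundreds of noisy rollouts. The task, gains, and learning rate are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def rollout_loss(params, horizon=100, dt=0.05):
    """Toy differentiable rollout: drive a point mass to rest at the origin."""
    pos, vel = jnp.array(1.0), jnp.array(0.0)
    for _ in range(horizon):
        force = params[0] * pos + params[1] * vel   # linear feedback controller
        vel = vel + force * dt
        pos = pos + vel * dt
    return pos ** 2 + 0.5 * vel ** 2

loss_and_grad = jax.jit(jax.value_and_grad(rollout_loss))

theta = jnp.array([-0.5, -0.5])                     # initial controller gains
for step in range(200):
    loss, grad = loss_and_grad(theta)               # one exact gradient per update
    theta = theta - 0.02 * grad                     # plain gradient descent
print(theta, loss)
```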

Early benchmarks show that training robots with Magic is up to 1000x faster than with classical RL.

Sign up for our newsletter and stay updated