We're becoming too reliant on libraries made by our hardware vendors (vendor, singular, actually, to make it worse).
I wish I could upvote this a bunch. It will only happen if people begin to reflect on how their gains at the hands of a specific vendor make them beholden to that vendor. I know it's idealism, but I often think of statements like "source code or it didn't happen", or wonder why people entrust such important things to one company that is an executive order away from being illegal.
I think the bigger deal for founders is that Nvidia can decide on a whim to deny you supply or, better yet, give 100 billion dollars to your competitor.
I guess it's a problem of monopoly; what could or should be done to solve it?
I've been playing around with this for a few weeks. Newton is a pretty thin wrapper around mujoco-warp, which is an effort to port MuJoCo, originally a CPU sim, over to Warp on the GPU. There is also mujoco-mjx for this purpose, but using JAX instead of Warp. I think mjx/jax has the edge here because there are mature RL libraries for JAX (brax) and big advantages to using JAX for everything, especially its ability to "vmap" over each layer of abstraction. But I can see why Nvidia wants to move away from IsaacLab using PhysX+PyTorch: PhysX was made for games, and interfacing with it through IsaacSim is a bit of a kludge. And apparently mjx isn't so accurate with collisions because of the way they have to be handled in JAX. PyTorch RL works decently with newton/warp; at least they can share GPU buffers, so you don't have to copy things back and forth to the CPU. However, you can't optimize with CUDA graphs past the newton/warp boundary, because newton/warp has its own CUDA graph capture scheme going on underneath at the same time.
They already have a newton branch of IsaacLab on GitHub, but it's pretty early. I just came across a dope project today that is a different wrapper around mujoco-warp; it already mimics IsaacLab's API and you can run some robot environments on it. Clean code too, very promising: https://github.com/mujocolab/mjlab.git
One piece of feedback about the examples, from someone interested in using this: I have looked at several and they seem too high-level to give a sense of the actual API (i.e. the expected benefit of using this library vs. the development complexity of using it).
For example, the cloth bending simulation is almost entirely: at __init__, call a function to add a cloth mesh to a model-builder object, then pass the built model to the initializer of a solver class; and at each timestep, call a collide-model function, then call solver.step. That's really it.
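The flow described above looks roughly like this (a paraphrased sketch, not the actual Newton API; the class and function names here are approximations and may not match the library):

```python
# Build the model once, up front.
builder = ModelBuilder()
builder.add_cloth_mesh(vertices, indices, density=0.1)  # add cloth to the scene
model = builder.finalize()                              # bake into a simulation model

solver = ClothSolver(model)                             # pick a solver backend
state = model.state()

# Per-timestep loop: detect contacts, then advance the solver.
for _ in range(num_steps):
    contacts = collide(model, state)
    state = solver.step(state, contacts, dt=1.0 / 60.0)
```

That really is the whole surface area; the complexity lives in what the builder and solver do internally, which is exactly why the examples don't reveal much about the API trade-offs.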
Is this related to the Newton Dynamics physics engine? https://newtondynamics.com/
Heh, that was my first thought too, and it doesn't look like it's related. The Nvidia folks could have done a minimal amount of googling to pick a name that causes less confusion.
No, but the choice of name exudes a certain arrogance that aligns with the authors of MuJoCo. It's a very capable and robust engine, but the authors have been very condescending of other technologies, conflating the terms "gaming" and "game". It certainly won't replace PhysX, as these engines are designed for comparatively small-scale simulations, albeit with heavy instancing. For instance, MuJoCo doesn't have a real broadphase that scales with large environments, but sticks with the old tried-and-true SAP. Neither does it have separate friction coefficients for slip, which ties it to robotics, except maybe for the VDB solver.
My first thought as well; it's a well-established engine.
Yeah, this is going to be confusing for sure
No
> Newton extends and generalizes Warp's (deprecated) warp.sim module, and integrates MuJoCo Warp as its primary backend.
It's MuJoCo GPU Edition. Nothing new or improved.
Well, MuJoCo initially used JAX for GPU (MJX), and MuJoCo Warp replaces MJX with better performance.
I think this is a step in the right direction, but I really dislike the Pythonification of everything. After using IsaacSim/IsaacLab for work, I'm convinced that Python is not the right tool for the job.
Developers inevitably write slow, error-filled code when dealing with Python, and working with the type annotations can be a pain.
Happy there's something to replace PhysX for robotics, and I do really like MuJoCo's API, but I really wish we could get some good C/C++ APIs.
Apart from the language, NVIDIA doesn't seem to be great at software. IsaacSim and IsaacLab have so many bugs, are incredibly slow, and are hard to debug. My team and I spend so many hours finding bugs in IsaacSim; it's just a pain. It's on version 5.0 and still feels like beta software.
Also, IsaacSim's reliance on USD to hold the scene structure and update prims makes it so hard to program for. USD isn't really performant when trying to generate a large number of scenes. And the USD interface stops working completely when simulation starts on IsaacLab. I hope Newton goes a different route and has less of a reliance on USD. IMO, USD should just be used as an interchange format, rather than how you actually represent the scene and properties internally. I much prefer that approach, which Unreal Engine seems to support.
Lastly, my god, the names in this field are terrible. USD (googling becomes a pain sometimes), Newton (already another engine), Warp (literally the name of the GPU's execution unit and a way to write Python GPU kernels, wtf).
Agreed on most, and the naming is terrible. Note that at run-time Python is out of the loop, since Newton Physics records a CUDA graph and executes it, so performance is not impacted (aside from startup JIT time for modified kernels). I'd prefer C/C++ as well, and although you can call Warp-compiled kernels from C++ (without Python, see my https://github.com/erwincoumans/warp_cpp project), it would be better to have native C/C++ support without requiring a Python interpreter. It just happens that almost all Deep Learning/RL for robotics uses Python.
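The record-once/replay-many pattern looks roughly like this in Warp (a minimal sketch based on Warp's public graph-capture API as I remember it; it needs a CUDA device to actually run, so treat it as illustrative):

```python
import warp as wp

wp.init()

@wp.kernel
def integrate(x: wp.array(dtype=float), v: wp.array(dtype=float), dt: float):
    i = wp.tid()
    x[i] = x[i] + v[i] * dt

x = wp.zeros(1024, dtype=float, device="cuda")
v = wp.ones(1024, dtype=float, device="cuda")

# Record one simulation step into a CUDA graph...
wp.capture_begin()
wp.launch(integrate, dim=1024, inputs=[x, v, 0.01])
graph = wp.capture_end()

# ...then replay it each frame: the GPU re-executes the recorded
# work with no Python (and no kernel-launch overhead) in the hot loop.
for _ in range(1000):
    wp.capture_launch(graph)
```

This is why the Python-ness mostly costs you at startup rather than per-step, as long as your whole step fits inside one capture.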
Hopefully I didn't come across as too negative; my entire team and I are really excited for Newton. I hope to get some time this week to try things out!
It's just a bad programming language. Guido has bad taste; it creates lazy, boring affordances that don't scale beyond a single file without paying for it somewhere.
Note that I'm not saying it's not useful (Python is what C was to Unix, if it were invented now).
my take:
The language is A-OK.
It's way better than JS, and at the time it was a breath of fresh air. It was NodeJS before JS, and it was much better than Node will be (I scratched out that "never").
I do recall Guido's claims about developer cognition - stating that (quoting from memory) a person can only hold up to 50k lines of code in their head, and Python packs way more functionality into those lines than C or Java - but that's just stupid.
But similarly to JS it's heavily misused.
"But similarly to JS it's heavily misused."
Anything easy to use will also have many unskilled people using it.
Python has always been an incredibly useful scripting and glue language, and as such it is pretty much perfect (or rather, Python 2.x was). The problems only started to appear when people tried using it as an actual programming language for writing large code bases entirely in Python. In other words, even a great tool can be the wrong tool for a job.
Define large?
For Python projects, I would define anything above 20kloc as 'large'.
My problem is that every machine-learning ecosystem has decided that Python is the main API, and that if you want to call your model from another programming language, you either have to build everything yourself (see llama.cpp/ggml), use something extremely bare-bones like IREE, or, worst of all, access Python over IPC.
What type of solver is MuJoCo?
How do they parallelize the sequential actions ?
The primary use case of Newton Physics is reinforcement learning with thousands of similar environments. Even if each environment has sequential actions, you run many envs in parallel.
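The point about parallelism across environments (rather than within one) can be shown with a toy numpy sketch (not Newton code): each environment steps sequentially through time, but all of them advance together as one batched array operation, which is the same idea GPU simulators apply across thousands of envs.

```python
import numpy as np

def step_batched(positions, velocities, actions, dt=0.01):
    """Advance N independent toy point-mass environments one timestep.

    Each env is sequential in time, but all N are updated together
    as one vectorized operation over the batch dimension.
    """
    velocities = velocities + actions * dt   # integrate acceleration
    positions = positions + velocities * dt  # integrate velocity
    return positions, velocities

n_envs = 4096                                # thousands of envs in one batch
pos = np.zeros(n_envs)
vel = np.zeros(n_envs)
act = np.ones(n_envs)                        # constant unit acceleration

for _ in range(100):                         # 100 sequential timesteps
    pos, vel = step_batched(pos, vel, act)

print(pos.shape)                             # all 4096 envs advanced in lockstep
```

On a GPU the batch dimension maps onto thousands of threads, so the sequential time loop is the only serial part.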
Years ago there was PhysX... how does this compare?
This will eventually replace PhysX; some of its developers are working on Newton Physics. Newton Physics has multiple solvers, including MuJoCo-Warp, and is easier to customize and extend.
Their FAQ explicitly says
> Will Newton replace PhysX?
> No, the two engines serve different primary goals
https://newton-physics.github.io/newton/faq.html#will-newton...
Probably the parent commenter has much more insider info than all of us since he's currently at NVIDIA...
From what I understand, PhysX was built primarily as physics-engine middleware for games. So when folks at NVIDIA tried to extend this engine to robotics (for IsaacSim/IsaacLab), it seems they faced lots of challenges (mainly subpar multi-env performance and an inaccurate solver, but also lots of technical debt over the years). So changing the internal engine to a more robotics-oriented one (MuJoCo-Warp) doesn't seem far-fetched. Nowadays, for game-engine development there is much better middleware in the form of CPU-based physics engines (mainly Jolt Physics), and GPU physics in games isn't that popular anymore for pragmatic reasons (the GPU -> CPU roundtrip defeats the whole purpose of better performance).
I was assuming the context of robot learning (IsaacLab), where Newton Physics will eventually replace PhysX. Newton Physics doesn't target games or other areas.
Ah, the last sentence was just about PhysX in general.