So it’s some brittle crap built on verl, which is already pretty much train by config (and makes breaking changes with _every single commit_), with no documentation, no examples, and no clear purpose? Heck yeah Microsoft
do you have benchmarks on tasks with sparse rewards or partial observability? i feel like that's where most "train any agent" claims tend to break down
It doesn't replace core algorithms. It plumbs things together. It means you don't have to write the framework that connects things; your algorithms will still have the same problems they had before.
Let’s see… excessive emojis and wacky punctuation. Hmm, maybe this whole README is LLM-generated.
This is just a style I've seen a lot of people who are a generation or so younger than me enjoy.
I'm not expected to write docs the way my father's generation did (thank god), so I don't expect them to write the docs the way I would. If this gets people engaged and excited, I lose nothing, they get something, we're fine.
As to the LLM generation claim, I don't care if it is or it isn't. The project seems legit, they're making claims that 3rd parties have verified ("Community Projects"), it looks useful and interesting, so I might spend more time with it soon.
I bet 80% of the project is LLM generated anyway
if it's come to this point, why would we write the README.md ourselves????
What actually is this?
A framework for optimizing LLM agents, including but not limited to RL. You can even do fine tuning, they have an example with unsloth in there.
The design of this is pretty nice: it's based on adding very simple instrumentation to your agent, and the rest happens in parallel while your workload runs, which is awesome.
You can probably also do what DSPy does for prompt optimization, but without having to rewrite your code against the DSPy API, which can be a big win.
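To make the "instrument your agent, optimize in parallel" idea concrete, here's a minimal sketch of that general pattern. To be clear, every name below is hypothetical and made up for illustration; this is not the actual library's API, just the shape of the design being described: a thin wrapper records each agent call into a queue, and an optimizer consumes the traces separately from the serving path.

```python
import queue
import threading

# Hypothetical sketch only -- none of these names come from the real
# framework. A wrapper records (prompt, response) rollouts into a queue
# that an optimizer loop drains independently of the agent's own code.
rollouts: "queue.Queue[dict]" = queue.Queue()

def traced(agent_fn):
    """Record each agent call as a rollout, without changing the agent."""
    def wrapper(prompt):
        response = agent_fn(prompt)
        rollouts.put({"prompt": prompt, "response": response})
        return response
    return wrapper

@traced
def my_agent(prompt):
    # Stand-in for a real LLM call.
    return prompt.upper()

def optimizer_loop(stop: threading.Event):
    # Would normally run concurrently with the workload, scoring traces
    # and updating prompts or weights; here it just drains the queue.
    seen = []
    while not stop.is_set() or not rollouts.empty():
        try:
            seen.append(rollouts.get(timeout=0.1))
        except queue.Empty:
            pass
    return seen

stop = threading.Event()
# The agent keeps serving traffic; traces accumulate as a side effect.
for p in ["hello", "world"]:
    my_agent(p)
stop.set()
collected = optimizer_loop(stop)
```

The point of the design is that the agent body never mentions the trainer: the only code change is the wrapper, which is what the "zero code change (almost)" pitch is getting at.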
>What actually is this?
Based on the number of emojis, I doubt the author even knows.
Parsing entireties of the I/O agent release version, which is the precommit as text prior to evaluation.
All these agent documentations seem to compete for the most complex set of flow charts imaginable without ever mentioning what the Rube Goldberg machine is supposed to accomplish. Given that the real output in open source of these contraptions is zero, it seems that the flow charts are the goal. Some kind of modern art.
"the absolute trainer to light up AI agents". Doesn't that say enough?? No really though, I've read the documentation and all I see is a worse DSPy.
> Turn your agent into an optimizable beast with ZERO CODE CHANGE (*almost*)!
OP didn’t think to include this very important fine print. Thanks OP!