This is really cool! And even cooler that it's tested on their mini agent harness (which only has access to a "terminal", no other tools), because that implies it's "raw model power" rather than "software glue".
My speculation: this is an "emergent" capability of good / scalable / "solved" RL. Both Anthropic and OpenAI seem to have made huge advances in RL. (xAI as well, but they haven't yet released their coding model, so we'll see if that continues.) In contrast to other RL'd models out there (e.g. the DeepSeeks, the Qwens, etc.) that score really well on tasks similar to those in benchmarks, both Claude 4 and GPT-5 seem to have "learned" what agentic means at a different level. They can be guided through tasks, asked to do one particular subpart of a task, or to take a particular approach, etc. And they do it well. The other implementations feel "stubborn". I can't explain it better than that.
It will be interesting to see what Gemini 3 will bring. Google / DeepMind are experts at RL, and Gemini 2.5 is a bit old now, so I'm curious to see what they can deliver on this front. My guess is that we'll see the same kind of "it gets it" after scaled RL.
One note I've made after using GPT-5 for a bit: it seems to have "get-there-itis" when solving tasks. It wants to solve them so badly that it sometimes forgets the plan, or rushes through step 5 after solving steps 1-4 pretty thoroughly. Might be prompting as well; maybe the prompts haven't caught up yet.