> This is a useful starting point for a project like this—in figuring out the compile step the coding agent gets seeded with a little bit of relevant information about the project, and if it can compile that means it can later partially test the code it is writing while it works.
Ran ls
└ AGENTS.md
CHANGELOG.md
+13 lines
scripts
sdk
Ran cat README.md
└ <p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew
install --cask codex</code></p>
+105 lines
This repository is licensed under the [Apache-2.0 License](LICENSE).
Ran rg -n "rust" -n docs/install.md
└ 23:curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
25:rustup component add rustfmt
26:rustup component add clippy
Ran sed -n '1,200p' docs/install.md
└ ## Install & build
+36 lines
cargo test
```
I'm gathering the build steps for the Rust tool, focusing on running cargo
build inside the codex-rs directory. I'll check dependencies and confirm
the instructions before actually building, aiming to provide a clear summary
afterward.
Ran cargo build
└ Updating git repository `https://github.com/nornagon/crossterm`
Updating git repository `https://github.com/nornagon/ratatui`
+825 lines
Compiling codex-cli v0.0.0 (/Users/simon/Dropbox/dev/codex/codex-rs/cli)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1m 27s
Ran ls target/debug | head
└ apply_patch
apply_patch.d
+6 lines
codex-execpolicy
Very mildly related: Pelicans on Bicycles - Raytracer Edition:
https://blog.nawaz.org/posts/2025/Oct/pelican-on-a-bike-rayt...
Hah, love the weird floating eggs!
I blogged about this here, including running your benchmark against gpt-5-codex-mini which did the worst example I've seen by far: https://simonwillison.net/2025/Nov/9/pelican-on-a-bike-raytr...
Here's the codex-mini attempt: https://static.simonwillison.net/static/2025/povray-pelican-...
Installing Rust projects is usually as simple as calling `cargo install`. No need to wait for 5-30 minutes until an LLM figures this out.
People are delegating way too much to LLMs. In turn, this makes your own research or problem-solving skills less sharp.
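For reference, the happy path being described looks roughly like this. A sketch, assuming rustup has already installed a toolchain; the `cli` path and the `codex-cli` package name come from the build transcript above:

```sh
git clone https://github.com/openai/codex
cd codex/codex-rs
cargo build               # debug-compile the whole workspace
cargo run -p codex-cli    # build and run the CLI crate
cargo install --path cli  # optional: release-build it into ~/.cargo/bin
```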
Quoting my article:
> This is a useful starting point for a project like this—in figuring out the compile step the coding agent gets seeded with a little bit of relevant information about the project, and if it can compile that means it can later partially test the code it is writing while it works.
"Figure out how to build this" is a shortcut for getting a coding agent primed for future work. If you look at the transcript you can see what it did: https://gistpreview.github.io/?ddabbff092bdd658e06d8a2e8f142...
> That's a decent starting point on seeding the context with information that's relevant to making and then testing the modifications I'm about to ask for.

What useful context is in there? How to call “cargo build”? It already knows that.
The README files that it looked at, and the directory structure it explored.
It now knows what the project is, what dependencies it uses, how it's laid out and the set of binaries that it generates.
Even more importantly: it knows that the project can be built without errors. If it tries a build later and sees an error it will know that the error was caused by code it had modified.
You could just tell it to check out the README, but I suspect it would have checked it out anyway, or figured out the type of project and how it is structured, as a first step of any other command you gave it: without that knowledge it is impossible to add to or update the project.
For a Rust developer, neglecting their ability to debug cargo build issues puts their career at risk. For someone like that, letting AI handle it would be a really shortsighted move.
But Simon isn’t a Rust developer - he’s a motivated individual with a side project. He can now speedrun the part he’s not interested in. That doesn’t affect anyone else’s decisions; you can still choose to learn the details. The ability to skip it if you wish is a huge win for everyone.
>> He can now speedrun the part he’s not interested in
In this case it's more like slowrunning. Building a Rust project is one command, and ChatGPT will tell you that command in five seconds.
Running an agent for that is 1000x less efficient.
At this point it's not optimizing or speeding things up, but running an agent for the sake of running an agent.
The best thing about having an agent figure this out is you don't even need to be at your computer while it works. I was cooking dinner.
The most important thing is to have it successfully build the software, to prove to both me and itself that a clean compile is possible before making any further changes.
Suggestion: make a “check.sh” script that builds everything, lints everything, and runs all (fast) tests. Add a directive in the agent system prompt to call it before & after doing anything. If it fails, it will investigate why.
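A minimal sketch of such a script, assuming the cargo subcommands from docs/install.md (which also installs rustfmt and clippy) and that the workspace lives in codex-rs/:

```sh
#!/usr/bin/env bash
# check.sh: the build/lint/test gate the agent is told to run
# before and after every change (a sketch; adjust paths and flags to taste)
set -euo pipefail
cd "$(dirname "$0")/codex-rs"  # assumes the script sits next to the workspace

cargo fmt -- --check           # formatting drift fails the gate
cargo clippy -- -D warnings    # treat lints as hard errors
cargo build                    # prove a clean compile is still possible
cargo test                     # fast tests; add -p filters if the suite is slow
```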
In situations like this it is better to ask the agent to write a short document about how to run the project. Then you read it and delete the useless parts. Then you ask the agent to follow that document and improve it until the software builds. By the final step, you have a personalized README.md for your needs.
Figuring out how to build a project in an unfamiliar language/build system is my least favourite activity, mainly because all the people who are familiar with those tools think it's "as simple as" and don't bother to write it down anywhere. I don't plan on learning every build system ever.
I did not know how to do X so I Y.
It would be interesting to know what kinds of responses humans offer across different values of Y such as:
1) looked on stack overflow
2) googled it
3) consulted the manual
4) asked an LLM
5) asked a friend
For each of these, does the learner somehow learn something more or better?
Is there some means of learning that doesn't degrade us as human beings according to those in the know?
I ask as someone who listens to audiobooks and answers yes when someone asks me if I've read the book. And that's hardly the extent of my transgressions.
You forgot to read the readme
At least if you're copy/pasting from stack overflow you presumably glanced at the change you are copying if only to ensure you select the correct text.
Good point. We also sometimes leave comments in code noting the thread we referenced.
Yeah because the code on stack overflow has a license.
Yeah, it's CC BY-SA 4.0: https://stackoverflow.com/help/licensing
Ok but I'd argue Rust/Cargo shouldn't be an "unfamiliar language/build system" for most professional programmers these days. It's like a professional plumber being unfamiliar with solder. Like, yeah, you can do a lot without soldering, but imagine a pro just straight up not having a clue about solder.
It's worth learning how to do this stuff. Not just because you then know that particular build system, but because you get better at learning. Learning how to learn is super important. I haven't come across a new project that's taken me more than a few minutes to figure out how to build in years.
> Ok but I'd argue Rust/Cargo shouldn't be an "unfamiliar language/build system" for most professional programmers these days.
This isn't even close to true. The majority of programmers will be fine going their entire career without even knowing what Rust is, let alone how to build Rust projects.
A more accurate analogy would be a plumber not knowing how his wrench was manufactured.
Rust ranks 16th on the current TIOBE (https://www.tiobe.com/tiobe-index/), behind assembly, PHP and R. It is still not remotely as popular (as "based on the number of skilled engineers world-wide, courses and third party vendors") as C or C++ (to say nothing of how dominant Python has become).
The supposed ubiquity of Rust is the result of a hype and/or drama bubble.
For someone who has never used Rust before: I couldn't find good documentation on how to run an existing Rust project, nor could I find `cargo install` on the "Getting Started" page. I could read the Cargo Book, or check `--help` I guess, but this can be surprisingly time consuming as well; it might take 5-30 minutes of active searching to locate the information. If you can, try to put yourself in a beginner's mindset and think through your argument again.
Regarding your second point, I think people actually underutilise LLMs for simple tasks. Delegating these tasks frees up your problem-solving skills for challenges that truly need human insight. In this case, asking an LLM is arguably the smart choice: it's a common task in the training data, easy to verify, low-risk to run, and not something you needed to learn to answer your initial question.
You don’t need to cargo install anything. You just need cargo itself, which is linked on the main page. Once you have that, here’s an example google search that gives you all the info you need to run the project (hint: `cargo run`)
https://www.google.com/search?q=how+do+I+run+a+rust+project
Thanks for the Google link, I was just asking GPT-5-Pro "How to Google: 'How do I run a rust project'", and am still waiting for the answer... The point was that searching for an answer (wherever/however) is not necessary in some cases, like this one; asking the AI agent to find a solution can be sufficient and is totally ok. Engineers are allowed to delegate; there is nothing wrong with this.
> In turn, this makes your own research or problem-solving skills less sharp.
Why would that be true? The average assistant certainly types more quickly than their boss, but most people would not take issue with that. They are different responsibilities. You free up time to research and problem-solve other things.
> No need to wait for 5-30 minutes until an LLM figures this out.
I don't care if the LLM takes 15 additional minutes to figure it out, if it saves me a net minute. (We could certainly debate the ergonomics of the multitasking involved, but that is something every person who delegates work has to deal with; it's not unique to working with LLMs in any way.)
I got excited about LLM agents thinking it was just about "faster typing". A lot of us have dreamed of a day where we can just transfer what we have in mind directly into the computer, skipping the laborious manual keying step. But using an LLM is not that. It's not that at all.
Instead they let you type vague or ambiguous crap in and just essentially guess about the unclear bits. Hadn't quite thought through which algorithm to use? No worries, the LLM will just pick one. Hadn't considered an edge case? No worries, the LLM will just write 100 lines of code that no sane programmer would ever go through with before realising something isn't right.
I've made the mistake of being that senior who is way too eager to help juniors many times in my career. What happens is they never, ever learn for themselves. They never learn how to digest and work through a problem. They never learn from their mistakes. Because they can always just defer to me. LLMs are the worst thing to happen for these people because unlike a real person like me the LLM is never busy, never grumpy and nobody is taking notes of just how many times they're having to ask.
LLMs are really useful at generating boilerplate, but you have to ask yourself why you're spending your days writing boilerplate in the first place. The danger is it can very quickly become more than just boilerplate and before you know it you've forgotten how to think for yourself.
Sometimes avoiding boilerplate is out of scope. I’m currently using an LLM agent to write a Home Assistant integration. The LLM is happy to write boilerplate crap to interact with the terrible Home Assistant API without complaining about it. Sure, some of the code it writes is awful, and I can fix that. (The record was about 15 lines of code, including non-functional error handling, to compute the fixed number zero.)
Becoming proficient at banging out Home Assistant entities and their utterly ludicrous instantiation process has zero value for my career.
I have used Rust for decades (yeah, Rust is that old) and want to point out that that's not always the case, especially when FFI is involved. At some point, for example, any Rust crate with the `openssl` dependency used to require special care every time `cargo install` was run. Cargo itself is super nice; other tools, still not so much.
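For anyone hitting that today: the usual fix is to hand the `openssl-sys` build script some native headers before installing. A sketch (the crate name is hypothetical, and the package-manager lines are platform-specific):

```sh
# Debian/Ubuntu: openssl-sys locates OpenSSL via pkg-config
sudo apt install pkg-config libssl-dev
# macOS/Homebrew equivalent: brew install openssl@3, then if needed
# export OPENSSL_DIR="$(brew --prefix openssl@3)"

cargo install some-openssl-using-crate  # hypothetical crate with an `openssl` dependency
```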
Funny you say that, because I have the opposite opinion.
It is easy for any of us to quickly bootstrap a new project in whatever language. But this takes a cognitive toll, and adds friction to bringing our visions online.
Recently, I went "blind" for a couple of days. My vision was so damaged I could only see blurs. The circumstances of this blindness are irrelevant, but it dawned on me that if I were blind, I could no longer code as I do.
This realization led me to purchase a Max subscription to Claude Code and rely more on LLMs for building, not less.
It was much more effective than I thought it would be. In my blurred blindness, I saw blobs of a beautiful user interface form from the skeleton of my Rust backend and Vue3 frontend. It took my complex Rust backend and my frontend scaffolding to another level; I could distinctly recognize it through the blur. And it did this in minutes or hours instead of days.
As my vision returned, I began analyzing what happened and conducting experiments. My attitude changed completely. Instead of doing things myself, directly, I set out to make the LLM do it, even if it took more time.
It is painful at first. It makes very stupid mistakes that make an experienced engineer snarl at it, "I can do better myself". But my blindness gave me new sight. If I were blind, I couldn't do it myself. I would need help.
Instead of letting that ego take over, I started patiently understanding how the LLM best operates. I discovered mainly it needs context and specific instructions.
I experimented with a DSL I made for defining LLM instructions that are more suited for it, and I cannot describe the magic that started unfolding.
Now, I am writing a massive library of modular instructions for LLMs, and launching them against various situations. They will run for hours uninterrupted and end up implementing full code bases, with complete test suites, domain separation, and security baked in.
Reviewing their code, it looks better than 90% of what I see people producing. Clear separation of concerns, minimal code reuse, distinct interface definitions, and so much more.
So now, I have been making it more and more autonomous. It doesn't matter if I could bootstrap a project in 30 seconds. If I spend a few hours perfecting the instructions to the LLM, I can bootstrap ANY project for ANY LANGUAGE, forever.
And the great thing? I already know the pattern works. At this point, it is foolish for me to do anything other than this.
Just as a quick datapoint here in case people get worried: yes, it is absolutely possible to program as a blind person, even without language models. Obviously you won't be using your eyes for it, but we have tried-and-tested tools that help and work. And at the end of the day, someone's going to have to review the code that gets written, so either way you're not going to get around learning those tools.
Source: Am a blind person coding for many years before language models existed.
The DSL sounds interesting, if you talk about it anywhere I'd definitely be interested in reading more!
I see where you're coming from. But I often find that when I have some idea or challenge that I want to solve, I get bogged down in details (like how do I build that project)... before I even know if the idea I _wanted_ to solve is feasible.
It's not that I don't care about learning how to build Rust or think that it's too big of a challenge. It's just not the thing I was excited about right now, and it's not obvious ahead of time how sidetracked it will get me. I find that having an LLM just figure it out helps me to not lose momentum.
I would have done the same thing. I know how to build software in a dozen or more languages. I've done it manually, from scratch, in all of them. I don't know Rust. I have no immediate plan to learn Rust. I vaguely know that Cargo is something in the Rust toolbox. I don't have it installed. I don't particularly want to learn anything about it. It's a whole lot easier for me to tell the LLM to figure that out.
I might learn Rust some day. At the moment, I don't need the mental clutter.
Well, fyi because it is really simple: if you have rust installed, you have cargo installed too. And to run a project you type “cargo run” from the base directory. That is all.
You get a build error because the rust version you have installed is incompatible with the codebase. Now you have to install rustup and...
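rustup's usual answer to that is toolchain pinning: a rust-toolchain.toml checked into the repo root makes rustup fetch and use the matching compiler automatically. A sketch (the pinned version is hypothetical):

```toml
# rust-toolchain.toml at the repo root
[toolchain]
channel = "1.82"                    # hypothetical pin; rustup auto-installs it on the first cargo command
components = ["rustfmt", "clippy"]
```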
True, a bunch of "regards" in the making. Why do I need you to do basic stuff like this? No wonder we see mass layoffs.
I tried to make the Claude Code CLI interface directly with the codex/chatgpt endpoints, and found that that's really hard, because those ChatGPT endpoints accept only one system prompt, and one system prompt only: the one that is baked directly into Codex.
That was a bit disappointing, because I feel the codex API with Claude semantics would be really nice.
Translating all the tool calls when you cannot control the prompt seemed like a bit of a dead end though, so I eventually just switched back to Claude (which incidentally allows any prompt you can dream up, but using the codex CLI with Claude was very much not on my wishlist).
I help run an eval platform and thought it fun to try a bunch of models on this challenge [1].
There's some fun little ones in there. I've no idea what Llama 405B is doing. Qwen 30B A3B is the only one that cutely starts on the landscaping and background. Mistral Large & Nemo are just convinced that a front shot is better than portrait. Also interesting to observe the varying temperatures.
I feel like this SVG challenge is a pretty good threshold to meet before we start to get too impressed by ARC AGI wins.
[1] https://weval.org/analysis/visual__pelican/f141a8500de7f37f/...
You will never guess what the upcoming gemini 3 has been benchmaxxed upon ;)
> I feel like this SVG challenge is a pretty good threshold to meet before we start to get too impressed by ARC AGI wins.
It's a very bad threshold. The models write the plain SVG without looking at the final image. Humans would be awful at it and you would mistakenly conclude that they aren't general intelligences.
I dunno. A competent human can hold a mental image and work through it. Not too hard with experience. What I generally mean tho is: I don't think we can state the supreme capabilities of AI (which people love to do with great fervour and rhetoric) until they can at the very least draw basic objects in well-known declarative languages. And while it may be unwise to judge an AI based on its ability to count the number of 'R' letters in various words, it -- amongst a wider suite ofc -- remains a good minimum threshold of capability.
How long before large language models are specifically trained on drawing pelicans riding a bicycle? ( ͡° ͜ʖ ͡°)
And where on the web has someone shared a human effort at doing the same?
You could literally hire a human to do that; not everything needs to be on the web.
Where on the web do hallucinations come from?
I think it's some part of the Dark Web, or I wish it was.
Previous discussions about "pelican on a bicycle" always mention this, but it's not something they can do without being blatantly obvious. You can always do other x riding y tests. A juggler riding a barrel. A bear riding a unicycle. An anteater riding a horse, etc.
A couple of days ago, inspired by Simon and those discussions, I had Claude create 30 such tests. I posted a Show HN with the results from six models, but it didn’t get any traction. Here it is again:
https://news.ycombinator.com/item?id=45845717
https://gally.net/temp/20251107pelican-alternatives/index.ht...
Oh man, that’s hilarious. I dunno what qwen is doing most of the time. Gemini seems to be either a perfect win or complete nonsense. Claude seems to lean towards “good enough”.
Simon has said multiple times he has hidden tests he runs for precisely this eventuality (because of course it will happen someday, and he'll write a banger article calling them out for it).
Under a different account? They could special case his account and even give him more computing power because they know he'll not rest until every dog on this planet has a subscription to at least ChatGPT and Claude Code.
Was a fun idea and fun read. Thank you.
Did you consider expanding the number of models by routing all calls through OpenRouter?
I haven't tried it myself yet, but it looks like OpenRouter is a supported feature of Codex already: https://github.com/openai/codex/blob/a47181e471b6efe55e95f98...
It's weird: the tools are empty on the HTTP request?
You can already do that through the config file: you can define custom endpoints for any OpenAI-compatible API. So you can use OpenRouter, or even local models via vLLM or alternatives. I think someone even tried to get cheaper pay-as-you-go API usage by hitting their "bulk" API for tasks that run overnight (so no need for immediate responses).
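For reference, this kind of override lives in ~/.codex/config.toml. A sketch, assuming the `model_providers` schema from the codex-rs config docs; the model and provider names here are hypothetical, and exact keys may differ between Codex versions:

```toml
model = "qwen2.5-coder"        # hypothetical local model name
model_provider = "local-vllm"

[model_providers.local-vllm]
name = "Local vLLM"
base_url = "http://localhost:8000/v1"  # any OpenAI-compatible endpoint (vLLM, OpenRouter, ...)
wire_api = "chat"                      # use the Chat Completions wire format
env_key = "VLLM_API_KEY"               # API key read from the environment, if the server wants one
```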