o1:
- better for when the response has to address many subgoals coherently
- usually does not undo bugfix progress made earlier in the conversation; with Claude, in extremely long conversations I have noticed it reintroducing bugs it had already fixed much later on
Claude:
- image inputs are genuinely useful for debugging, especially anything visual (e.g. when a GUI framework renders your UI in an unexpected way, just include a screenshot)
- surprisingly good at turning descriptions of algorithmic or mathematical procedures into captioned SVG illustrations, then using screenshots of those SVGs plus user feedback to improve the next version
- more recent knowledge cutoff, so generally less likely to deny that newer APIs/models exist (e.g. o1 told me that tokenizer.apply_chat_template and meta-llama/Llama-3.2-1B-Instruct both did not exist and removed them from the code I was feeding it; see the snippet after this list)
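
For context, here is roughly the kind of snippet o1 rejected, as a minimal sketch using the Hugging Face transformers API. Both the model id and apply_chat_template are real; the chat messages are made up for illustration:

```python
# Minimal sketch of the kind of code o1 insisted didn't exist: both the
# model id and tokenizer.apply_chat_template are real. The messages are
# illustrative. (Note: the meta-llama repo is gated, so downloading it
# requires accepting the license on the Hub.)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

messages = [
    {"role": "user", "content": "Summarize beam search in one sentence."},
]

# Render the chat into the model's prompt format without tokenizing yet
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```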
o1 for collaborating on design docs and for the overall structure; break the work into small tasks however you prefer; Sonnet or o1 for executing each small task.
o1 is higher quality, more nuanced, and has deeper understanding; the biggest downsides right now are the significantly higher latency (both from the thinking time and because continue.dev doesn't currently support o1 streaming, so you wait until the whole response is done) and the higher cost.
In terms of tools: either vscode with continue.dev / cline, or cursor
Languages: Node.js / JavaScript, and lately C# / .NET / Unity
I like aider with claude-3-5-sonnet-20241022; haven't tried it with o1, though.
Also, https://aider.chat/docs/scripting.html offers some nice possibilities.
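
For a taste of what that scripting interface enables, here is a minimal sketch based on the example in those docs. The file name and the request are placeholders, not from the thread:

```python
# Minimal aider scripting sketch, adapted from the docs linked above.
# The file name and the prompt are placeholders for illustration.
from aider.coders import Coder
from aider.models import Model

fnames = ["greeting.py"]  # file(s) aider is allowed to edit
model = Model("claude-3-5-sonnet-20241022")

# Create a coder bound to the model and files, then send one request;
# aider applies the resulting edits to the files directly.
coder = Coder.create(main_model=model, fnames=fnames)
coder.run("add a friendly greeting function and call it from main")
```

There is also a one-shot CLI equivalent (aider --message "..." greeting.py) if you'd rather script from the shell.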
I prefer o1. I mostly use it as a knowledge system. Don't really care for the automatic code generation nonsense. Unless I'm really tired and the task is very simple, in which case I might decide to write a paragraph of text instead of 30 lines of Python. My experience is that when ChatGPT fails, Claude fails too. On some advanced coding tasks, I find ChatGPT's depth of reasoning ability to be better.
My notes:
- Sonnet 3.5 seems good with code generation and o1-preview seems good with debugging
- Sonnet 3.5 struggles with long contexts whereas o1-preview seems good at identifying interdependencies between files in code repo in answering complex questions
- Breaking the problem into small steps seems to yield better results with Sonnet
- I’m using them primarily in Cursor/GH Copilot, with Python
I concur. Sonnet is great at starting projects, but eventually gets bogged down and starts losing the plot. o1 is then useful to sort out the issues and painfully pull things back on track.
Started a small project to compare AI IDEs
https://github.com/StephanSchmidt/ai-coding-comparison/
(no comparison there yet, just some code to play around with)
o1 if you're going to write full specs and not provide any context.
Sonnet 3.5 if you can provide context (e.g. with Cursor)
gpt-4o for UI design, and also for solving interview problems from screenshots.
o1 is much better at finding complex needle-in-the-haystack bugs/fixes; Sonnet 3.5 is better at shallow, generic coding.