I remember using Qwen 2.5 Coder for autocomplete with Continue.dev; that experience was a mess in both JetBrains IDEs and Visual Studio Code.
People posting stuff like this is really cool, because otherwise it kinda feels like nobody gives a crap. For example, even with Cline/RooCode/KiloCode there's no good way for me to hook up an autocomplete model that runs in Ollama or a remote Cerebras Code model; KiloCode doesn't have a proper model configuration option for autocomplete even though it has one for chat and the regular agentic stuff. I don't get why autocomplete is such a special case.
I guess what I'm saying is that I'm glad someone's at least trying, so I don't have to keep a Copilot subscription just because I genuinely like their autocomplete while the rest of it is basically wasted: Claude Code, Codex, and others are better for the actual chat/agentic stuff, and KiloCode and others are really nice IDE plugins.
Very cool!
I understand that the 1.5B is small enough to run locally... but does it actually run locally in the Sweep AI JetBrains plugin? That is, if I install the plugin, will it download the model automatically and never phone home?
No, as far as I can see there is no way to configure the JetBrains plugin to use a local endpoint.
Sometimes when I use a plugin like this I get reminded just how much of a productivity nerf it is to code without an autocomplete AI. Honestly in my opinion if you write a lot of boilerplate code this is almost more useful than something like Claude Code, because it turbocharges your own train of thought rather than making you review someone else's, which may not align with your vision.
This is a really good plugin. I'm a diehard JetBrains user; I tried switching to VSCode and its various forks many times because of AI, but muscle memory from years of use is hard to override. And for a lot of languages JetBrains is just much better, especially out of the box. But they dropped the ball so hard on AI it's unbelievable. Claude Code pulled it back a bit because at least now the cutting-edge tools aren't just VSCode plugins, but I was still missing a solid autocomplete tool. Glad this is here to fill that niche. Very likely will be switching my GitHub Copilot subscription to this.
I also really appreciate publishing open weights and allowing a privacy mode for anonymous trial users, even if it's opt-in. Usually these things seem to be reserved for paying tiers these days...
Is there a way to use this (or similar) model in Visual Studio? Extensions on Visual Studio Marketplace are clunky and sluggish at best, if they even work at all.
If you mean VSCode (or any other editor):
> We’re open sourcing the model weights so the community can build fast, privacy-preserving autocomplete for every IDE - VSCode, Neovim, Emacs, and beyond.
https://blog.sweep.dev/posts/oss-next-edit
No, I mean Visual Studio (the IDE), not Visual Studio Code (the editor).
Of course they are different products, but is there really a meaningful distinction between VS Code and an IDE? For all I care, VS Code is a complete IDE.
I thought there was already a generic plugin for this :( Let's wait for one then, ha, or I may just make one.
Surprising how badly JetBrains implemented AI. Apparently to such an extent that even after multiple years of LLMs, someone felt confident enough to build a company that can do better.
This looks really neat, interesting technical writeup as well!
Nice work. The next-edit framing matches how real refactors happen much better than token-level autocomplete.
The diff-format insight is especially interesting. Smaller models struggling with unified diffs lines up with what I've seen too: simpler original/updated blocks reduce noise and improve intent capture.
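For anyone who hasn't seen the two formats side by side, here's a made-up example of the same edit in both (the snippet and the block markers are purely illustrative, not necessarily the model's exact format):

    # unified diff: the model has to produce hunk headers and +/- line bookkeeping
    @@ -12,2 +12,2 @@
    -def total(items):
    -    return sum(items)
    +def total(items, tax=0.0):
    +    return sum(items) * (1 + tax)

    # original/updated blocks: just "what was there" and "what it becomes"
    <<<<<<< original
    def total(items):
        return sum(items)
    =======
    def total(items, tax=0.0):
        return sum(items) * (1 + tax)
    >>>>>>> updated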
On the infra side, training a 1.5B model in ~4 hours on 8×H100 is impressive. For folks experimenting with similar mid-scale models, we’ve been running comparable workloads on decentralized GPU aggregators (I’ve used io.net) to avoid cloud quota limits and keep costs predictable with the tradeoff that you handle orchestration yourself.
Curious if you saw diminishing returns when including older edits as context? That cutoff seems tricky in larger repos.
You’re subtly pushing the same product in basically every one of your comments. If these are good faith comments please edit out the product name, it’s unnecessary and doing so as a green account just makes people consider you a spammer. Establish yourself first.
> On the infra side, training a 1.5B model in ~4 hours on 8×H100 is impressive.
It's hard to compare without more details about the training process and the dataset, but is it? Genuine question, because I had the opposite impression. For example, I recently did a full finetuning run on a 3B model, chewing through a 146k-entry dataset (with 116k entries having reasoning traces, so they're not short) in 7 hours on a single RTX 6000.
It's good. The blog post about it is very interesting. I hope a plugin for Neovim will be made soon.
https://blog.sweep.dev/posts/oss-next-edit
This is cool! I am more interested in how you guys generated next-edit training data from repos; it seems like there are lots of caveats here. Would love your insights.
Again, amazing work! Waiting to see what you guys cook up next.
I read the release but didn't quite understand the difference between a next-edit model and a FIM model - does anyone have a clear explanation of when to use one over the other? I'd love it if there were a Sublime Text plugin to use this model and try it out; might see if I can figure that out.
What type of hardware do I need to run a small model like this? I don't do Apple.
1.5B models can run on CPU inference at around 12 tokens per second if I remember correctly.
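If anyone wants to sanity-check that on their own hardware, here's a minimal llama-cpp-python sketch (the GGUF filename and quant are placeholders, not an official artifact):

    # rough CPU-only throughput check for a ~1.5B GGUF model
    import time
    from llama_cpp import Llama

    # model path and quant are assumptions; point this at whatever GGUF you download
    llm = Llama(model_path="sweep-next-edit-1.5b.Q4_K_M.gguf", n_ctx=4096, n_threads=8)

    start = time.time()
    out = llm("def fibonacci(n):", max_tokens=128, temperature=0.0)
    generated = out["usage"]["completion_tokens"]
    print(f"{generated / (time.time() - start):.1f} tokens/sec")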
Ingesting multiple code files will take forever in prompt processing without a GPU though; token generation will be the least of your worries. Especially when you're not appending but changing things in random places, so caching doesn't work.
A 1.54 GB model? You can run this on a Raspberry Pi.
How easy is it to re-train these for a specific subset of programming languages? Could there be a "ruby+rails+html" version, etc.?
I use Sweep’s Jetbrains autocomplete plugin daily, it really stands out.
Does it run totally offline?
Better than the one that ships with Jetbrains?
I did buy their $100/yr AI but it's about to run out.
Any easy way to try it in VSCode?
It sounds like you might be killing Zed's ability to monetize; am I misunderstanding that?
If your only feature worth monetizing is replicated by a solo dev in his free time, you might have a problem.
So SFT cost only low hundreds of dollars? ($1-10 per hour per H100, if I'm seeing this correctly.)
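Back-of-envelope, using the ~4 hours on 8×H100 mentioned elsewhere in the thread (all inputs are rough figures from this discussion, not billing data):

    gpus, hours = 8, 4
    print(gpus * hours * 1, gpus * hours * 10)   # -> 32 320, i.e. roughly $32-$320 for the run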
What about SFT?
Presumably basing this off Qwen is the reason it can be done so cheaply?
Does anyone know if the 7B model is also available somewhere?
Wow, super fun read; I love how it went into the technical details. Any way to make it work with VSCode?
Followed your work since the beginning and used it for inspiration for some cool demos on self-healing web scrapers. Fascinating to see the transition from the original concept to producing models. Cool stuff.
I'm very green at this, so forgive me if this question sounds silly:
Instead of the RL step, would constrained decoding, say via something like xgrammar, fix the syntax generation issue?
> Instead of the RL step, would constrained decoding, say via something like xgrammar, fix the syntax generation issue?
It can, but you have to consider two things here:
a) constrained decoding ensures adherence to syntax, not semantics. Say you're adding a variant to an enum in Rust. You can write syntactically correct Rust code that doesn't handle the new variant further down in the code (say in a match). You'd get syntactically correct code, but the compiler will scream at you. RL works on both.
b) if your goal is to further train the model, so it works on many tasks, RL helps with exploring new paths and training the model further. Constrained grammars help with inference, but the model doesn't "learn" anything. With RL you can also have many reward functions at the same time. Say one that rewards good syntax, one that rewards "closing" all the functions so tree-sitter doesn't complain, and one that rewards 0 errors from the compiler. The model gets to train on all 3 at the same time.
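A toy sketch of the "many reward functions at once" idea, using Python's ast module and py_compile as stand-ins for the tree-sitter and compiler checks mentioned above (just to show the shape, not anyone's actual training code):

    import ast
    import py_compile
    import tempfile

    def syntax_reward(code: str) -> float:
        # 1.0 if the completion parses at all, else 0.0
        try:
            ast.parse(code)
            return 1.0
        except SyntaxError:
            return 0.0

    def compile_reward(code: str) -> float:
        # 1.0 if bytecode compilation succeeds (cheap stand-in for "0 compiler errors")
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            py_compile.compile(path, doraise=True)
            return 1.0
        except py_compile.PyCompileError:
            return 0.0

    def closed_defs_reward(code: str) -> float:
        # penalize completions that open a function but leave only a stub body
        try:
            tree = ast.parse(code)
        except SyntaxError:
            return 0.0
        funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
        stubbed = [f for f in funcs if len(f.body) == 1 and isinstance(f.body[0], ast.Pass)]
        return 0.5 if stubbed else 1.0

    def total_reward(code: str) -> float:
        # the policy trains against the sum of all three signals at the same time
        return syntax_reward(code) + compile_reward(code) + closed_defs_reward(code)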
Really cool.
But how do I use it instead of Copilot in VSCode?
What do people use with Neovim to integrate these models for tab-completion-level stuff? (i.e., non-agentic/vibe coding)
I use llama.vim with llama.cpp and the qwen2.5-coder 7B model. It easily fits on a 16 GB GPU and is fast even on a tiny RTX 2000 card with 70 watts of power. The quality of completions is good enough for me; if I want something more sophisticated, I use something like Codex.
Very interesting - and cool to read about the development process. I'd love to hear more about how the genetic algorithm worked here.
I wonder whether we are perhaps past the point of usefulness of 'next edit' code development in 2026, though.
Do you plan to release Sweep 3B/7B on HF?
Yeap, those two seem like a game changer. For now, I'm using Qwen2.5-Coder-7B. Sweep 1.5B is "just" 12 percentage points better than Qwen2.5-Coder, but Sweep 7B is 25 percentage points better.
Can it be integrated into the Monaco editor?
Based on qwen2.5-coder? Seems like a "why not / resume embellishment / show VCs" type of release, I guess.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html