Just wanted to mention that some basic Windows-OS keyboard shortcuts don't work, like ALT+F to open the File menu. Also things like ALT+SPACEBAR to bring up the system context menu for the focussed window (the menu with maximise, minimise, close options etc.) do not seem to work. I'm guessing with the DirectX rendering backend, the 'app' is rendered more akin to a video game than a native win32 process.
Also, after install the install directory takes up 400MB+. Even VSCode only takes up around 380MB. I believe it when they say it's not an Electron app, but I do wonder what's being packed in there. I was always under the impression that Rust apps are pretty lightweight, but that install size is nearing Java levels of binary/dependency bloat.
I just compiled "zed" with "cargo build --release" and not only did it pull >2000 dependencies, its size (executable file) is literally 1.4G. Debug is 1.2G.
$ pwd
/tmp/zed/target/release
$ ls -lh ./zed
-rwx------ 2 john john 1.4G Aug 28 17:10 zed
$ cd /tmp/zed
$ du -sh ./target/debug/deps/
20G ./target/debug/deps/
$ du -sh ./target/release/deps/
8.0G ./target/release/deps/
$ du -h ./target/debug/zed
1.2G ./target/debug/zed
$ du -h ./target/release/zed
1.4G ./target/release/zed
This is on a whole new level of bloat, both with regard to dependencies AND the resulting executable file(s) (EDIT: the executables are unstripped).
Any explanations as to why "cargo" does not seem to re-use libraries (dependencies) from a shared directory, why it needs >2000 dependencies (which I see being downloaded and compiled), or why the release executable is 1.4G unstripped while the debug one is smaller?
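Since the binaries are unstripped, a large share of that is debug info. A quick way to check (assuming GNU binutils is on the PATH; actual numbers will vary):
$ cp ./target/release/zed /tmp/zed-stripped
$ strip /tmp/zed-stripped
$ du -h /tmp/zed-stripped   # compare against the 1.4G unstripped size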
Cargo does the de-duplication, but only up to a point. If two packages request the same dependency with semver ranges that have a common overlap (say, `1.4` and `1.6`) then it will use a single package for both (say, `1.7.12`). But if they request semver-incompatible versions (`2.1` and `1.6`) then cargo will use both.
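You can see exactly where this happens with cargo's built-in resolver view (a quick sketch, run from the project checkout):
$ cargo tree --duplicates   # lists every crate present at more than one version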
Probably due to the tree-sitter modules for many languages compiled in. AFAIK tree-sitter's codegen unfortunately shares nothing between different languages, so a dozen language parsers can easily cross upward of 200 MB.
Binaries for dynamic libraries of tree-sitter (usually compiled with a C compiler) would be smaller than that. For example, this [1] .so bundle for 107 different grammars is ~137 MiB.
Unless by "compiled in", some inlining of the C code into the Rust codebase is meant.
[1] https://github.com/emacs-tree-sitter/tree-sitter-langs/relea...
I suppose it has to do with how every Rust crate (including dependencies) gets statically linked into the final binary, and this leads to extremely large intermediate artifacts, even when many crates share common dependencies.
Or the fact that there are incremental compilation artifacts...
And of course the amount of dependencies. A single project might depend on hundreds of crates (quite common), each compiled separately with its own build artifacts. sighs.
If they're going to implement every feature under the sun and include half of userspace to support it, they might as well build the whole thing on top of a browser.
I haven't seen video but it does have voice. And similarly I don't think it's screen-share, it's just editor state syncing, so live collaboration. Still quite a lot.
ST is a text editor while Zed is an IDE. I wish there were something like VSCode that is very modular but written in native code. But VSCode is good enough and it is my daily driver.
No, that doesn't matter. I think you should be looking at how quickly it can get you a working environment for your favorite language, not how long it takes to boot up once per reboot. If you want features, the bits have to live somewhere. Look at it like a trade-off: if you're just going to look at it, by all means, take a memory dump. But I find that a little bit hard to work with.
For me, as long as it's better than alternatives it's good enough. Especially if it's not running JS.
The Rust compiler always produces quite large binaries compared to other programming languages. I notice there's a (closed) issue on the Zed GitHub [https://github.com/zed-industries/zed/issues/34376]:
> At this time, we prioritize performance and out-of-the-box functionality over minimal binary size. As is, this issue isn't very actionable, but if you have concrete optimization ideas that don't compromise these priorities, we'd be happy to consider them in a new issue.
I think there should be a best-of-both-worlds type of linking: during compilation, the linker places a statically compiled library at a certain address but doesn't include it in the binary. Then, during startup, the OS maps the same library to the given address (sharing the data between processes). This would improve memory use, startup time, and performance, while avoiding dynamic linking.
Of course you need to match the exact versions between the compiled executable and the dependency, but this should be a best practice anyways.
Static linkers generally don't include a full copy of the library, just the code paths the compiled application uses. The compiler may also have made optimizations based on the application's usage patterns.
I say "strangely" because honestly it just seems large for any application. I thought they might not be doing LTO or something but they do thin LTO. It's just really that much code.
> The world moved into dynamic linking in the 1980's for a reason.
Reasons that no longer exist. Storage is cheap, update distribution is free, time spent debugging various shared lib versions across OSes is expensive.
Tbh, the rights and wrongs aside, I suspect "everyone" is complaining about it because it's the easiest thing to talk about. Much like how feature discussions tend towards bikeshedding.
My /usr is 15G already, and /var/lib/docker isn't far off despite people's obsession with Alpine images. If more people dismiss storage as cheap, it'll quickly become expensive, just not per GiB.
> update distribution is free
I wouldn't be surprised if at some point GitHub started restricting asset downloads for very popular projects simply because of how much traffic they generate.
Also, there's still plenty of places on the planet with relatively slow internet connectivity.
Storage doesn't really feel cheap. I'm considering buying a new laptop, and Apple charges $600 per TB. Sure, it's cheaper than it was in the '80s, but wasting a few gigabytes here and a few gigabytes there is quickly enough to at least force you to go from a 500GB drive to a 1TB drive, which costs $300.
It's the reality of storage pricing. The general statement "storage is cheap" is incorrect. For some practically relevant purposes, such as Apple laptops, it's $600/TB. For other purposes, it's significantly below $50/TB.
You could say "just don't buy Apple products". And sure, that might be a solution for some. But the question of what laptop to buy is an extremely complicated one, where storage pricing is just one of many, many, many different factors. I personally have landed on Apple laptops, for a whole host of reasons which have nothing to do with storage. That means that if I have to bump my storage from 1TB to 2TB, it directly costs me $600.
If you're buying Apple then you should expect inflated prices.
I got a 4TB NVMe SSD for like 350€, a 2TB one goes from 122 - 220 € depending on read/write speeds.
I don't check the installation size of applications anymore.
I'm just saying that $600/TB is a real storage price that lots of people deal with. Storage isn't universally cheap.
This feels especially relevant since we're discussing Zed here, the Mac-focused developer tool, and developers working on Mac are the exact people who pay $600/TB.
A 2TB SSD for the Framework 13 cost me 200 euros. But I agree that it's not cheap, files are getting bigger, games are big, apps are huge, and then you need backups and external storage and always some free space as temp storage so you can move files around.
I don't need to "get far in the Apple universe", I need a laptop. My current MacBook Pro cost about the same as the Dell XPS I was using before it. I like nice laptops.
RAM isn't cheap (it may be for your tasks and wallet depth, but generally it isn't, especially since DDR5). Shared objects also get "deduplicated" in RAM, not just on disk.
What objects is the Zed process using that would even be shared with any other process on my system? Language support is mostly via external language servers. It uses its own graphics framework, so the UI code wouldn't be shared. A huge amount of the executable size is tree-sitter related.
I 100% agree.
As soon as you step outside of the comfort of your Linux distributions' package manager, dynamic linking turns into dependency hell.
And the magic solution to that problem our industry has come up with is packaging half an OS inside of a container...
OSes don't load the full executable into physical RAM, only the pages in the working set. Most of the Zed executable's size is tree-sitter code for all the supported languages, and only needs to page in if those languages are being used in a project.
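You can watch that from outside the process: resident set size (pages actually in RAM) versus virtual size. A sketch, assuming the process is simply named zed:
$ ps -o rss=,vsz= -p "$(pgrep -o zed)"   # RSS vs. VSZ, in KiB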
Even if it’s delta, it cannot patch itself when running on Windows. So it runs the updater, creates a new exec and switches to it after relaunch. Same as Chrome or Firefox.
Is it? On Linux, you can overwrite the file, but the underlying inode will still be open, and the 'invisible' old version will linger around; you don't have any easy way, short of restarting everything, to make sure the new version is being used.
And with Chromium this directly leads to crashes: when you update the browser while it's open, new tabs will open with the new version of the binary while the old ones still use the old binary, which usually leads to a crash.
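You can see the lingering inode directly: once the binary on disk has been replaced, the kernel marks the running image as deleted (process name and path here are illustrative):
$ readlink "/proc/$(pgrep -o chromium)/exe"
/usr/lib/chromium/chromium (deleted)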
I prefer 'you cannot do X' instead of 'we allow you to do it, but it might misbehave in unpredictable ways'.
I don't use Chromium. I never had issues with Apache, MySQLd, Firefox, Thunderbird, ... . You can even swap out the Linux kernel under userspace and everything keeps running.
> maybe programs shouldn't be allowed to update themselves.
Honestly I'd be all for this if the OS had a good autoupdate mechanism for 3rd party applications. But that's not the world we live in. Certainly not on windows - which is too busy adding antivax conspiracy articles to the start menu.
Will it though?
I mean it's a lot for a "text editor", but much less than a classical IDE.
And 400M is pretty negligible if you're on Windows, where your OS takes up dozens of GB for no reason.
Entirely untrue. Download git, run make and you'll get a 19MB `git` binary along with a whole lot of other 19MB binaries. Running `cargo build` produces a 3.8MB binary.
And that's still comparing apples to oranges, because git is compiled with full optimizations. Running `cargo build --release` produces a 462KB binary.
Even if I'm comparing to my system's git installation, that's still 3.9MB, and that's with all the debug info stripped.
Yes, Rust (like C++) tends to produce larger binaries than C, but let's be real here: the reason Zed has a bloated binary is the ~2000 Rust packages that comprise it.
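To reproduce that baseline yourself (sizes will differ a bit by platform and toolchain version):
$ cargo new hello && cd hello
$ cargo build --release
$ du -h target/release/hello   # well under a few MB on typical setups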
That's an entirely different issue. The KBs of overhead for backtrace printing and the format machinery are fixed and do not grow with the binary size. All combined, it wouldn't account for anywhere close to 1MB, let alone hundreds of MB.
Most of the sticker shock from Rust binaries is due to them being statically-linked by default. Considering that, Rust binaries aren't especially large, especially if you strip them. Dynamically-linked binaries are better at obscuring their size.
> I was always under the impression that Rust apps are pretty lightweight
Maybe compared to Electron, but binary size is an issue with any nontrivial Rust application. Due to how cargo works, it compiles and bundles in every dependency in the world.
> I believe it when they say it's not an Electron app, but I do wonder what's being packed in there
Half of Electron, namely Node.js, since the majority of LSP servers are JS-based. Extensions are also WASM. Another difference: VS Code keeps extensions in a separate config directory, while Zed keeps them in its main directory.
Zed looks and feels amazing to use. I test-drove it for a bit on my Linux system, and the feel of it is difficult to convey to those who have not tried it yet. It's easy to overlook the significance of a GPU-accelerated editor - but I promise you, use it for a bit and you'll be sold.
The only feature that is preventing me from switching to Zed is the current lack of DevContainer support[1]. After investing a significant amount of time perfecting a devcontainer (custom fat image with all tools/libs + config), it would be a large step backwards to go back to installing everything locally.
There's a lot of eyes on this feature, so I'm hopeful it will be implemented in the future.
One of the reasons why I'm not fully switching yet is Zed's inability to update the currently open file with changes made elsewhere [1].
All the other editors I use are aware of outside changes, but not Zed. And I'm just not willing to close and reopen the file to get fresh contents. Eventually, I'll forget to do it and lose some work.
How do other editors handle this in your experience? I’m pretty sure VSCode behaves exactly the same, with the addition of a manual “merge” editor when you try to save, but never shows changes live for a modified file.
Changes to open files without any modifications in the buffer are always shown. Are you using any kind of containers or virtual fs that might be interfering?
I tried it for a while. It's okay, I guess. Typing latency is neither an issue nor a bottleneck for me, so personally, I don't see the appeal. Apart from that, it feels lacking and offers nothing else that I don't already get from VS Code. If I really cared about performance, I would use Neovim.
I find Neovim to be surprisingly sluggish. That's of course after installing extensions, but I don't find it particularly performant. Zed feels way snappier.
I write C++ and Typescript and just tried Zed on Windows, it didn't feel significantly faster for me (maybe a bit).
What's worse is C++ autocomplete - both on the same project with clang and CMake - it might be that I need some more setup, but I felt like Zed understood much less of tricky modern C++ (tbf, VS Code isn't perfect either) and the autocomplete was kinda broken. I fully admit this might be due to my own inexperience, so I won't hold it against Zed, but so far I haven't seen anything significantly better to justify making the switch.
Not the OP, but it's pretty useful in my team: we all work in the same environment, with the same system dependencies, with no setup required on the development machine (except the need for Docker).
In the devcontainer you can run code snippets, use the system shell and, if it's well made, access an execution environment close to production.
It also allows you to avoid (system) dependency conflicts between the different projects you may work on.
I sometimes work on old PHP applications, some of which require versions as ancient as 5.4 and refuse to run on anything beyond that. They're not only difficult/impossible to install on modern systems, but also make it hard to onboard other people — you have to write a detailed setup document and then waste a couple of hours on each new system. And then repeat that a few more years down the line.
Why not write that document as a bunch of build instructions instead of free-form text, while also isolating the result from the rest of the system?
Are you talking about the same thing? This sounds like your typical docker/compose dev setup for running the software under development. The "dev container" stuff is more about running your editor stack in a container, like static analysis tools, language servers etc.
In our case, instant onboarding of devs with no host setup needed. It's a large, complicated repo with a lot of unusual system dependencies. We build a multi-arch base image for all this and then the devcontainer is built on top. Devcontainer hooks then let us do all the env setup in a consistent, version-controlled way. It's been a godsend for us and we'll never go back.
I am a big fan of using it when working with somewhat complicated toolchains, especially if they need to be compiled. Cleaning up the file system is as simple as deleting the container and starting a new one. OCaml on windows comes to mind.
It's ideal for open source projects, no need to install a toolchain locally for small changes. I've used it for a POC in Go where nobody has Go installed in my organization. Not that Go is complicated to install, but I wouldn't ask anyone to install a toolchain for a one-off POC.
There are other toolchains that are more involved or conflict with an operating system's pre-installed stuff, like (IIRC) Ruby on macOS.
I've been using Zed primarily for months but I just switched back to VSCode for 2 reasons, one of which is kinda my fault and the other it's unclear where the fault is.
1. I deleted a few hours of work because I was late night coding and I decided to rename a file before check-in and delete the old version. Well I renamed it and Right-Click -> Deleted the new version by accident.
It turns out Zed has both "Delete" and "Trash" right next to each other in its right-click menu. Delete skips the trash and Zed does not implement Ctrl+Z yet, so unless you have a backup that file is gone. Because I renamed it and had not yet checked in it wasn't in version control yet.
2. For some reason a particular crate in my Rust workspace wasn't having errors and warnings show up in the editor. I messed with configuration a bunch and couldn't figure it out, so I tried VSCode and it just worked without any additional configuration.
> Delete skips the trash and Zed does not implement Ctrl+Z yet,
I'm not going to claim Zed has a good UI in this space, but saying it doesn't implement Ctrl + Z for a feature which is literally "skip the undo-ability of this option" is a bit misleading.
As I just elaborated in another comment, VSCode and Zed both have "Delete" in the right-click menu as the last entry. VSCode only has 'Delete', and while it has a confirm screen, it goes to the trash with the ability to Ctrl+Z.
After years of deleting files in VSCode I have a muscle memory for that behavior and I just skip through the dialogue. I didn't realize Zed's 'Delete' worked differently until I lost work, so I was just reflexively skipping through its confirm screen as well.
While I understand your point, and think you are correct - if the 'Trash' button is not behind a confirmation box, and it's not undoable, then that is a pretty terrible design choice.
I've just grabbed Zed, and both Trash and Delete are behind confirmation boxes. Trash sends to my Trash (recycle bin on windows), and delete properly deletes. There's no built in undo for trash, but I can restore from the trash/recycle bin normally.
yeah I just tried the windows build and they are indeed behind a confirmation box (for both trash and delete). That's not the impression I got this morning from reading OP's comment
OP here. I have muscle memory from many years of using VSCode before Zed.
"Delete" is the last option in the right click menu for both and when you select it both show a very similar dialogue box asking you to confirm.
VSCode's 'Delete' by default moves the file to the trash and can be undone with Ctrl + Z. Zed's 'Delete' skips the trash and can't be undone with Ctrl + Z.
I should have mentioned the confirmation box but after years of use I've begun clicking through that box so quick I didn't realize the behavior was different in Zed.
On the subject of surprising behaviour: I did something last week. I don't know what it was, but maybe some kind of fat-finger error.
It randomly deleted 3-line chunks all across my codebase. No rhyme or reason. I was just about to commit a big change, so it was annoying.
It might have been a global search and replace, but there was no clue in the history. And I can't think what kind of regex it could have been.
Or maybe a bug in IPC with the language server?
Anyone got any clues about what I did?
(I love Zed, on Mac and Linux. I even went as far as buying a new graphics card, as my 5-year-old one didn't have Vulkan support, which is pretty much mandatory for Zed.)
That's almost certainly it. I'd probably searched for a common fragment. And 'select all' does indeed select across all of the search result buffers. Thanks!
About no. 1, I think any sane application will show a warning like "Do you want to delete abc.c?" before permanently removing the file. I cannot verify it right now, but I'm pretty sure VSCode has it.
So:
* If there is no such dialog, it's on Zed.
* If there is such a dialog and you clicked yes, it's on you.
It does have such a dialogue but VSCode's identically named "Delete" option has a nearly identical warning dialogue that moves the file to the trash instead.
Zed's dialogue just says "Delete file.txt?" so if you're used to VSCode it's very easy to skip through that dialogue reflexively.
Zed is pretty awesome on macOS and Linux, but I tried the Windows version tonight and things that work great on the other OSes aren't working on Windows. I've noticed:
- Issues with various keybinds not working as expected, e.g. Ctrl+S to save works on my Linux machines but not Windows if Vim Mode is enabled
- Issues with LSPs/diagnostics not working on Windows the same way they work on macOS/Linux
- The terminal holds a lot of assumptions about how the system works that don't hold true on Windows, e.g. "the user has one shell they use for all things" or "setting environment variables in the OS is easy"
I love Zed on my work issued macbook and use it full time, but the Windows version still needs some work.
It isn't as easy as on Linux, but you press Windows, type "env", click "Edit environment variables", select User vs. System, then press Add. That's not too complicated.
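Or skip the GUI: setx writes a persistent user-level variable from a terminal (the variable name below is just an example). Note that only newly started processes see the change:
> setx MY_EXAMPLE_VAR "some-value"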
… And then you completely close all instances of the app that you need to see the changed environment variable, including any integrated terminal processes within that app. And then you relaunch all instances of that app and recover whatever workflow you were currently in the middle of.
That sets them globally, right? I need different variables and values in different contexts. e.g. in VSC I can set variables per-directory in settings.json files.
Have they implemented subpixel font rendering by now? I remember that being a sticking point when it came to Linux because they had designed their custom UI renderer around the Mac's ubiquitous HiDPI displays, leading to blurry fonts for the much, much larger proportion of Linux (and Windows) users who still use LoDPI displays.
I'm glad there's finally some progress in that direction. If they actually implement subpixel RGB anti-aliasing, it would definitely be worth considering as an alternative. It's been surprising to see so many people praise Zed when its text rendering (of all things) has been in such a state for so long.
Even then... my eyesight is pretty bad, so earlier this year I upgraded to 45" 3440x1440 monitors, and even then I'm viewing at 125%, so subpixel font rendering helps a lot in terms of readability, even if I cannot pick out the native pixels well.
They aren't high-DPI though, just big and still zoomed. On the plus side, it's a very similar experience to two 4:3 monitors glued together; side-by-side apps on half the screen is a pretty great experience. On the downside, RDP sessions suck; I may need to see if I can find a scaling RDP app.
It doesn't take being outside of the west for this to be relevant. Two places I currently frequent, A) the software development offices of a fortune 500 company, and B) the entire office & general-spaces (classrooms, computer labs, etc) of a sizeable university, have 1080p monitors for >80% of their entire monitor deployment.
My gaming PC is also connected to a 1080p display because tbh for gaming that's good enough, but I don't whine about application text quality on that setup since it looks pretty bad with or without ClearType compared to a highdpi display ;)
> The Windows build uses DirectX 11 for rendering, and DirectWrite for text rendering, to match the Windows look and feel.
As I understand, DirectWrite font rendering uses Windows subpixel font rendering. It seems OK on my monitor (better than the linux version) but haven't done any pixel peeping.
They seem to have anticipated this issue and designed it accordingly!
I tried it out on macOS and have a 1440p external monitor that the fonts just look horrible on. It looks fine on the laptop's "retina" display but is blurry enough everywhere else that it actually gave me a headache after a few hours.
Spitballing here, but subpixel rendering also requires integration with the rendering pipeline, specifically that you have the infrastructure to pass fractional offsets to the rasterization engine based on the concrete screen position of a glyph, and that the glyph atlas is a full RGBA texture (not just an alpha texture), and that the rasterizer knows about the particular texture format (RGBA vs BGRA etc.)
It does not imply fractional glyph positioning though. By itself subpixel rendering simply exploits knowledge about the configuration of the subpixels (red, green, blue) of the pixels on a particular raster display. (horizontal, vertical, what order,... ?)
I installed Zed and tested out a bunch of fonts on my 1440p monitor. It looks decent, but not great. I think that's more a byproduct of Windows' awful font rendering in general than a Zed-specific problem, though. VSCode is no better.
Seems like the only way to get high quality font rendering these days is a 4k+ display.
I watched the video on the home page and thought it was weird that they spent an inordinate amount of time on frame rate. Who picks an editor based on frame rate?
If you want to talk about perf in the context of a text editor show me how big of a file you can load--especially if the file has no line breaks. Emacs has trouble here. If you load a minified js file it slows to a crawl especially if syntax highlighting is on. Also show me how fast the start up time is. This is another area where Emacs does not do well.
So Zed is available on Windows, but only if you have an x64 processor. Lots of people run Windows on Arm64 and I don't see any mention of Arm64. This is where the puck is heading.
It's not just frame rate, but also input delay. If you're using Visual Studio Code, you might be used to waiting 100 ms for a character you typed to appear. My personal workflow is based on Kitty and Neovim, which I've configured so that it can launch within 20 ms. Working without any input delay allows me to explore and edit projects at typing speed. As such, even tiny delays really bother me and make me lose my flow. I would believe Zed's focus on performance is motivated similarly.
Also, I do not believe Windows on Arm64 is a very large demographic? Especially for developers, unless they're specifically into that platform.
The only IDE I have used where frame rate is noticeable was Visual Studio (not Code).
Once you are beyond a bare minimum, every other speed metric is more important. Zed does really well on many of those, but some depend on the LSP, so they become the bottleneck quickly.
Yeah. The Steam survey isn't a perfect sample since it's skewed towards gamers, but that currently shows just 0.08% of Windows users are on ARM, while 81.5% of Mac users are on ARM.
That may be true if you're looking at all windows computers in existence. If you look at new laptops being sold you see different numbers. As of 2025, Arm processors hold about 13% to 20% of the market share for new Windows laptops. This is important because these are the people who are more likely to download and install your software.
You literally can't tell the difference with a 20ms delay. That is an order of magnitude lower than the neural feedback loop latency. You may think that you can, but studies don't back this up.
> At the most sensitive, our findings reveal that some perceive delays below 40 ms. However, the median threshold suggests that motorvisual delays are more likely than not to go undetected below 51-90 ms.
By this study's numbers, 20ms is somewhat below the lower limit of ~40ms, but not too far below. 100ms would be easily perceivable - though, based on the other replies, it seems that VS Code does not actually have that much latency.
Don't confuse this with human reaction time, which is indeed an order of magnitude higher, at over 200ms. For one thing, reaction time is based on unpredictable events, whereas the appearance of keystrokes is highly predictable. It's based on the user's own keypresses, which a touch typer will usually have subconsciously planned (via muscle memory) several characters in advance. So the user will also be subconsciously predicting when the text will appear, and can notice if the timing is off. Also, even when it comes to unpredictable events, humans can discern, after the fact, the time difference between two previous sensory inputs (e.g. between feeling a keyboard key press down and seeing the character on screen), for much shorter time differences than the reaction time.
Of course, just because these levels of latency are perceptible doesn't mean they're a material obstacle to getting work done. As a relatively latency-sensitive person, I'm not sure whether they're a material obstacle. I just think they're annoying. Higher levels of latency (in the hundreds of ms) can definitely get in the way though, especially when the latency is variable (like SSH over cellular connections).
You're the one who said "you literally can't tell the difference". I agree to a point. It seems plausible that they were experiencing some other effect such as hitching or slowdown, rather than just a constant 100ms delay (which again isn't supposed to happen).
On the other hand, I just thought of one way that even a small fixed amount of latency can be a material obstacle. Personally, I type fast but make lots of typos, and I don't use autocorrect. So I need to react to incorrect text appearing on screen, backspace, and retype. The slower I react, the more text I have to delete (which means not just more keypresses but also more mental overhead figuring out what I need to retype). For this purpose, I am bound by the human reaction time, but editor latency is added on top of that. The sooner the text appears, the sooner my 'reaction timer' can start, all the way down to 0 latency. [Edit: And 100ms of latency can make a meaningful difference here. I just did a quick typing speed test and measured 148 WPM which is around 12 characters per second, so 100ms is one extra character, or a bit more.]
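(Checking that arithmetic with bc, using the usual 5-characters-per-word convention: 148 WPM is about 12.3 characters per second, so 100ms of latency is roughly one extra character.)
$ echo 'scale=1; 148 * 5 / 60' | bc
12.3
$ echo 'scale=2; 0.1 * 12.3' | bc
1.23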
Also, latency might affect productivity just by being annoying and distracting. YMMV on whether this is a legitimate complaint or whether you should just get used to it. But personally I'm glad I don't have to get used to it, and can instead just use editors with low latency.
"Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Also you can absolutely feel the visual difference between 60Hz (~16ms) and 120Hz (~8ms), and for audio it's even more nuanced.
Just because studies don't back this up yet doesn't make it false. I imagine this is really hard to measure accurately, and focusing only on neuron activity seems misguided too. Our bodies are more than just brains.
> "Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Human neural feedback loop latency is a range that varies widely depending on the type of loop involved. Reflex loops are fastest, operating in tens of milliseconds, while complex loops involving conscious thought can take hundreds of milliseconds.
Short-latency reflex: 20-30ms. Signal travels through spinal cord, bypassing the brain. E.g. knee-jerk reflex.
Long-latency reflex: 50-100ms. Signal travels to the brainstem and cortex for processing before returning. E.g. Adjusting grip strength when an object begins to slip from your hand.
Simple sensorimotor reaction: 230 - 330ms. Simple stimulus-response pathway involving conscious processing, but minimal decision-making. E.g. pressing a button as soon as light turns on.
Visuomotor control: ~150ms, adaptable with training. Complex, conscious loops involving vision, processing in the cortex, and motor commands. E.g. steering a bike to stay on a path in a video game.
Complex cognitive loops: Brain's processing speed for conscious thought is estimated at 10 bits per second, much slower than the speed of sensory data. High-level thought, decision-making, internal mental feedback. E.g. complex tasks like analyzing a chess board or making a strategic decision.
A few years ago I did some testing with a quick Arduino-based setup I cobbled together and got some interesting results.
The first test was the simple one-light-one-button test. I found that I had reaction time somewhere in the 220-270ms range. Pretty much what you'd expect.
The second test was a sound reaction test: it makes a noise, and I press the button. I don't remember the exact times, but my reaction times for audio were comfortably under 200ms. I was surprised at how much faster I was responding to sound compared to sight.
The last test was two lights, and two buttons. When the left light came on I press the left button; right light, right button. My reaction times were awful and I was super inaccurate, frequently pressing the wrong button. Again, I don't remember the times (I think near 400ms), but I was shocked at how much just adding a simple decision slowed me down.
High frame rates (low frame times, really) are essential to responsiveness which, for those who appreciate it, is going to make much more of a difference day to day than the odd hiccup opening a large file (not that zed does have that issue, I wouldn't know as I haven't tried opening something huge).
You probably do. Many people just never notice that. It's not about typing or reading fast either, it's just about how it feels. Typing into something with shitty latency feels like dragging my fingernails across a chalkboard.
It's the same with high dpi monitors. Some people (me included) are driven absolutely insane by the font rendering on low density monitors, and other people don't even notice a difference.
Honestly, consider yourself blessed. One less thing in the world to annoy you.
Yes, I can perceive that latency, if I am actively looking for it. No, it has absolutely no effect whatsoever on my ability to work. The latency is far, far below what could possibly affect neural feedback loops, even on the slowest editors. And it doesn’t bother me in the slightest.
Low-dpi font rendering also isn’t an issue for me, unless it is so bad as to be illegible (which no modern system is).
Me! Frame rate and input latency are very important for a tool I use for hours every day. Obviously that's not the only feature I look for in an editor but if an editor _doesn't_ have it, I skip it. I also try to work on devices with 120Hz displays and above these days.
This always makes me laugh. The editor was barely announced two years ago. They've built it from the ground up with native support now for three different operating systems. They're experimenting with some cool new features, and even though I don't care about it I've heard their AI integration is pretty damn good.
But waaaaah, they don't support a processor that accounts for probably less than 10% of Windows machines.
Ubiquity is pretty important when you're going to invest in learning a new editor. This is one of the advantages of vim for example. It is available everywhere... linux, windows, terminal, gui, etc.
[Window Title]
Critical
[Main Instruction]
Unsupported GPU
[Content]
Zed uses DirectX for rendering and requires a compatible GPU.
Currently you are using a software emulated GPU (Microsoft Basic Render Driver) which
will result in awful performance.
For troubleshooting see: https://zed.dev/docs/windows
Set ZED_ALLOW_EMULATED_GPU=1 env var to permanently override.
[Skip] [Troubleshoot and Quit]
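For anyone who does want the override, one way to set it persistently from PowerShell (variable name taken from the dialog above):
PS> [Environment]::SetEnvironmentVariable("ZED_ALLOW_EMULATED_GPU", "1", "User")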
That's more of an issue with your system than an issue with Zed, you have to veer pretty far from the beaten path to not have proper DirectX nowadays. Are you running Windows in a VM?
I've literally programmed with VSCode for basically a decade this way without issues. Zed lags, it's disappointing and if I really like Zed I'll have to sort out another setup.
MSTSC is one of the rock-solid tools from Microsoft, and does better than almost everything else available on the market other than some PC over IP technologies specifically highly optimized for this use case. I've been programming with an ancient ThinkPad forever because I'm just remoting into a much more powerful machine.
If you are having problems with Windows RDP then the problem is somewhere else along the pipeline, not RDP itself. No other remote solution I've used is even in the same ballpark as RDP in terms of responsiveness and quality. RDP on a LAN is very usable for actual coding work.
Maybe try gaming-oriented remote desktop tools, like steam link or sunshine/moonlight. Those work great with directx, assuming you have a working gpu (at least integrated gpu) on your remote box. They also have way better latency, though use a lot more bandwidth.
By explaining the advantages over "older" methods. A lot of people use Moonlight/Sunshine for non gaming related stuff, specially considering than the alternatives are all proprietary.
They're productivity tools, not gaming software. You'll be faster and deal with less errors using the correct optimized remote desktop tool for your job, versus what you're using now, which can be slow and error prone.
Sunshine and Moonlight are no more than accelerated and finely tuned VNC servers that happen to be targeted at gaming. You can totally set them up as a regular remote desktop solution.
Right, but I'm saying their sites make it very clear that they're meant for gaming, so GGP may indeed have a hard time convincing corporate IT that they're for work.
My post was answering the question of how they should ask for permission to use it. You pitch it as productivity software that helps you do your job better.
If no monitor is connected, then RDP doesn't load the GPU drivers. That can also make RDP performance much worse, as hardware-accelerated video encoding isn't working.
I assume it is not an issue with other applications like Chrome or VSCode (I mean VSCode is Chrome). They usually have some sort of fallback rendering on these environments that work well enough.
It may not be an actual problem with zed either, despite the warning.
Maybe it's misdetecting something? I used to use Remote Desktop for game development sometimes, as weather could make the journey to the office impossible, and the performance was always pretty ok, even with UE5 stuff making heavy use of Nanite running at 1440p.
And if it was actually software emulated, which I can't believe for a moment, though I admit I never checked (I just always assumed the window contents were transmitted via some kind of video encoder) - then I can't imagine that a text editor would be a problem.
The input latency might not be as good as you'd like.
> The input latency might not be as good as you'd like.
Yeah input latency is annoyingly rough, not super bad but _just_ laggy enough to make it annoying to use.
Debating how much I want to change things, I can directly punch into my Linux machine but all my dev IDEs are on a VM for a long list of reasons, and I doubt Zed is going to run on my old ThinkPad if it struggles on software rendering, but we'll see.
Size on disk is about 64x less relevant than size in RAM for me. To give Zed some credit in this area, it's statically linked and the Linux binary is half the size as the Windows one.
Every Unity game ships with three UI frameworks (IMGUI, UGUI, UI Elements) built-in, in addition to everything else the game engine supports, and the engine is only about 50 MB.
A renderer/rendering library for something as simple as a text editor is not (or is not supposed to be) a lot of code and should not take up a large amount of space in a binary. Odds are good it's the embedded Treesitter grammars and other things that Zed uses that takes up space.
It is Rust software, so there is probably a good 50-90% waste when it comes to just raw needed vs. actual lines of code in there, but there is no way anyone makes a renderer so overcomplicated that it takes up a meaningful part of the 500 MB or so Zed takes up on Windows.
Yes. Zed is snappy in a way that makes my copy of Sublime Text 3 feel slow. Compared to VSC it feels like going from a video game at 40 FPS with 2-3 frames of input lag to 120 FPS.
Why don't you actually do some considering, given how mistaken this superficial dismissal is: storage is not cheap where it matters, for example your laptop's main drive. It doesn't help that an external hard drive can be purchased cheaply, since you won't be able to conveniently upgrade your main mobile system. Also, the size of various content (games/photos/videos) has increased a lot, leaving constraints on storage in place despite the price drop.
I just refuse to use any software that balloons its file size. Not because I can't afford storage, but because there are always alternatives that have similar features packed into a fraction (usually less than 1%) of the file size. If one of them can do it and the other can't, it's a bad product that I have no intention of supporting.
We should strive to write better software that is faster, smaller and more resilient.
"Storage is cheap" is a bad mentality. This way of thinking is why software only gets worse with time: let's have a 400mb binary, let's use javascript for everything, who needs optimization - just buy top of the shelf super computer. And it's why terabytes of storage might not be enough soon.
I can empathize with how lazy some developers have gotten with program sizes. I stopped playing CoD because I refused to download their crap 150+ GB games with less content than a lot of other titles that are much smaller.
That said, storage is cheap; it's not a mentality but a simple statement of fact. You think Zed balloons its file size because the developers are lazy. It's not true. It's because the users have become lazy. No one wants to spend time downloading the correct libraries to use software anymore. We've seen a rise in binary sizes in most software because of a rise in static linking, which does increase binary size, but makes using and testing the actual software much less of a pain. Not to mention the benefits in reduced memory overhead.
VSCode and other editors aren't smaller because the developers are somehow better or more clever. They're using dynamic linking to call into libraries on the OS. This linking is a small overhead, but overhead nonetheless, and all so they can use Electron + JavaScript, the real culprits which made people switch to Neovim and Zed in the first place. 400MB is such a cheap price to pay for a piece of software I use on a daily basis.
I'm not here to convince you to use Zed or any editor for that matter. Use what you want. But you're not going to somehow change this trend by dying on this hill, because unless you're working with actual hardware constraints, dynamic linking makes no sense nowadays. There's no such thing as silver bullet in software. Everything is a tradeoff, and the resounding answer has been people are more than happy to trade disk space for lower memory & cpu usage.
Absent evidence to the contrary, the reasonable assumption here is that all of those pages are being used. The Rust compiler is not in the habit of generating volumes of code that aren't being called.
Unfortunately, I tried to use zed as my daily driver, but the typescript experience was subpar. While the editor itself was snappy, LSP actions like "jump to declaration" were incredibly slow on our codebase compared to VS Code / Cursor.
As the owner of a somewhat popular language, I checked to see if there's an extension available (there is!) as we publish a language server.
One thing I noticed in the implementation is that it looks like it is using stdin/stdout for the JSONRPC inter-process communication, rather than named pipes or sockets. VSCode uses named pipes. I wouldn't be at all surprised if that's a significant bottleneck - I'm about to file a bug.
VSCode's TypeScript service does not currently use LSP, and many language queries run in the same thread as the extension (or "Extension Host"). That does not necessarily explain the performance difference though
Well exactly, I'm pretty sure that's what the GP is getting at — it would be a surprise if Rust didn't have good JSON support. Which it does. So it's unlikely to be the bottleneck.
I had the same experience and the same outcome. Zed was super fast for editing but slow for rich features, which on the net slowed me down compared with VSCode
Are you saying that VSCode runs tsserver in its own NodeJS process? Or are you saying that VSCode uses the NodeJS it ships to run tsserver in a different process?
That option is either not explained well, or not what OP was really asking for. It doesn't specify what's the default behaviour for extensions that are neither explicitly enabled, nor disabled. It also doesn't say how to disable everything apart from some set of extensions.
Extensions to me mean something that I explicitly choose to install (regardless of whether they automatically update). As far as I can tell, Zed goes about downloading code (“extensions”?) invisibly with no user input. See, for example, https://github.com/zed-industries/zed/issues/12589
> or you have an "official" extension manager that rolls out an update every next year
Tested it a bit on a 2000 LOC python file, autocomplete is sluggish compared to VSCode (VSCode completes almost instantly but Zed waits about half a second before doing so). I didn't configure anything so I'm a bit disappointed...
Zed's out-of-the-box Python experience is poor since they use Pyright as the LSP. There are threads on the issue tracker about replacing it; maybe check back in six months.
I build Zed for Windows aarch64 from source -- works great, though the build process is quite slow on my 16GB Surface Pro. Definitely hoping for official binaries, though!
I think I got as far as installing the Visual Studio Installer so I could install Visual Studio and I just bailed on that whole thing, lol. I'll have to take some time out on a weekend to take another look :)
I absolutely love the idea of Zed, and I'm regularly giving it a go. Typing in Zed really feels better than VSCode. It's hard to describe, but impossible to discard once you've used it for a short while.
Unfortunately, there's a bunch of small things still holding me back. Proper file drag & drop for one, the ability to listen to audio files inside the editor, and even a bunch of extensions, in particular one that shows a spectrogram for an audio file.
Maybe my biggest gripe is that Python support is still better in VSCode. Clicking on definitions is faster and more reliable.
In vscode you can click on various assets, like images or audio files, and then view them right inside vscode. If you work with datasets, the ability to inspect them is crucial.
Yes ofc I can use Finder instead but in vscode I just cmd+p.
The reason it's faster is largely because it doesn't have all those little quality of life features and extension ecosystem. It's easyish to make software perform well if it doesn't do all that much. If you take base vscode, no extensions, and just do raw text editing, it's hard for me to tell the difference between vscode, zed, or any other editor.
When vscode was released, Sublime was faster - and it stayed faster. But that wasn't enough to stop the rise of vscode.
It's great that they finally target all three major platforms! Let's see how developers using Windows treat them.
I used to be a Neovim purist, spending days on my config, and I have barely touched it since I moved to Zed. It's so much nicer to use than VSCode (and its forks) because it's so snappy.
I hope they ship leap.nvim (https://github.com/ggandor/leap.nvim) support soon, then I am completely happy!
I installed the beta a week or two ago. Many of the files I tried opening in it just did not work at all. It can only open utf-8 encoded files.
That is not a problem for code, but if you just want to open some random .txt file or .csv file, these are often enough not utf-8 encoded, especially if they contain non English text.
The github issue for this is showing progress, but I think it's a mistake to promote the editor to stable as is.
I am using nightly on Windows and startup time is very slow. It can take 10 seconds to boot up. To quickly edit random files, I would open notepad++ or simple notepad instead, which isn't what I expected from Zed.
I tried it for a bit. But unless you're happy with their choice of LSP/linter/whatever over what you're used to, you will waste even more time customising Zed to match your previous setup.
Technically, Zed's Super Complete will almost never catch up with Cursor, or even be as good as the free Windsurf.
If you look a little closer, you will see that Cursor's and Windsurf's completion requests are not initiated by the editor itself, which only interacts with the LSP.
Yes. They all come with an LSP. Taking the free Windsurf as an example: when Windsurf starts, it launches a process at /Applications/Windsurf.app/Contents/Resources/app/extensions/windsurf/bin/languageservermacosarm. This process is responsible for syntactic analysis of the current workspace and then local indexing. The editor communicates with the LSP over localhost, encoded with protobuf, sending the current cursor position, currently open tabs, recently viewed files, etc.
The LSP computes locally based on this information, and if the inference cannot be completed locally, it automatically extracts the necessary information to interact with the server-side LLM. This is what makes Super Complete smart.
However, Zed's Super Complete does not have such an LSP. Instead, it interacts directly with the server, which runs a Zeta model. This design is not as smart, uses a lot of bandwidth and more useless context, and lacks more useful information due to the absence of LSP support.
I hope my technical interpretation above gives you an understanding of why Zed's Super Complete is technically behind. And since Zed's LSPs are provided by third parties, Zed itself does not offer its own language server (unless it is developing one now, but that is obviously difficult).
Given all that, I don't understand why Zed would charge for a feature that is clearly lagging behind.
If you have to ask why I know so much about this: I am a Cursor subscriber, but I want the Zed editor, so I tried to point ZED_PREDICT_EDITS_URL at an endpoint I developed that relays requests to Cursor, to get Cursor's powerful completion inside Zed. Obviously, it failed in the end; what I described above is the reason for the failure.
Excuse me, can you read what I wrote? I didn't say the Zed editor is bad. I just said that Zed's edit prediction isn't good enough, or at least that it shouldn't be charged for.
I never thought you said Zed was bad. I was genuinely asking since Cursor and Windsurf exclusively advertise AI features, whereas Zed is also a good editor without AI. I never bothered to try them because I inherently distrust AI that I don't control, and find the cloud stuff to be too expensive on top. Just wanted to know if they're any good, or can even be used at all, without AI; since their websites say nothing on the subject.
Yes, Windsurf and Cursor will also work without AI, since they're just forks of VSCode.
I like Zed very much. If you don't like AI, then Zed is the best editor for you (and for me, if Zed's AI gets good enough).
I'm just saying they shouldn't charge for edit prediction.
The reason is that it's not even as good as the free ones.
I don't know why you've become such a fanatic, but if you want to support them you can donate.
Zed is awesome. It does everything I need. I easily find everything I'm looking for. It's fast, and I can use forked CLI terminals from the IDE via ACP mode. This means Cerebras and Qwen code 480b for super fast and cheap incredibly smart CLI agents.
Is this something like Sublime? Light/responsive editor for one-off files? But maybe with some better introspection? That would fill a niche for me; trying it. FYI download+install is the smoothest experience of any software I've loaded recently I didn't build myself. Going to daily-drive it for a bit and see what's up; promising!
as a 10+ year Sublime user, Zed is the best of the more "modern" GUI editors I've tried.
I haven't fully switched over to using it as my daily-driver because Zed's heavy reliance on GPU rendering means that it locks up every time I suspend & resume my Linux box [0,1] but I'm optimistic about the progress they're making on it.
Was a heavy sublime user for many years, slowly migrated to vim (first sublime with vim keybindings) but now daily drive lazyvim and the defaults with that are very sane.
Quick install on any platform and just works. And obviously plenty of configuration that’s available to you but if you haven’t I’d give that a go.
Tried zed, it's interesting but several things are missing including the ability to skip extensions auto-update... which imho is critical for security.
Uhm... I tried to use it, but it feels so lacking. Yeah, fine, it's a "lean" (text?) editor, but that's about it. Opening any project (even with plugins enabled) doesn't bring any noticeable "improvements" (code navigation, sane autocomplete, project structure detection)...
Yes, the same experience. Although it had amazing support for Vim keybindings, it offered me nothing that I wasn't already getting from Neovim. The in-editor chat seems like it could be useful, but I don't have any need for it.
I switched from VSCode to Zed as my "I just need to edit this 1 little text file" app because VSCode was way too spammy. Every time I open it it wants to update something or complain about some extensions or complain that I had a file open and now it MUST close it because heaven forbid I open two things at once.
I hope Zed stays clean. We'll see. So far so good. Was quite happy they had a JetBrains base hotkey setup. Had to add a few of my own but I could pretty easily, nothing missing so far.
> winget search zed
Name Id Version Match Source
-----------------------------------------------------------------------------------------------------------------------
Zed Preview ZedIndustries.Zed 0.208.4-pre Moniker: zed winget
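and installing by Id should then be a one-liner:
> winget install --id ZedIndustries.Zed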
Just downloaded and installed it on Mac. It's asking for access to pretty much every folder and even music apps. What a privacy nightmare. NOPE, will stick to Cursor.
So many new options. NeoVim and Helix in terminal, Zed as a beautiful GUI editor. - A few years ago I thought it’s Emacs/vim for us hackers, and for the rest it’s Sublime, and later VS Code, and then comes nothing. But wow, here we are: Ever more options. I‘m loving it.
I don't use Windows, but this is a good development, as all platforms should be supported for an editor to be worth using. I've been a happy Zed user for a long time, and I'm glad it has kept up with our demands, adding AI, Git, etc.
Also, the integration of CLI tools into the AI is excellent and really refreshing.
In January 2024 Zed was open sourced, with only Mac support. One week later Dzmitry Malyshau showed up with a prototype renderer for Linux. In July 2024 official Linux builds were available, mostly thanks to community contributors. The swift Linux support is a tale of a community stepping up to open source development, not one of a manufacturer providing something. So Windows is certainly not popular with the kind of crowd that gets excited about an editor.
Just surprised, as I thought building a GUI app on Windows must be easy, right? There must already be libs/frameworks available to support that. It's just not so.
I hope you realize that Linux in this table is split up into many individual options where the others are not, i.e. that Ubuntu, Debian, Arch etc also all count towards Linux?
First, you should fix fundamental operations on Mac and the other platforms - for example, when you stash or perform operations on files from other tools, it puts the editor's state out of sync.
You can build the most beautiful and fastest IDE, but with bugs like this, it's useless.
Just wanted to mention that some basic Windows-OS keyboard shortcuts don't work, like ALT+F to open the File menu. Also things like ALT+SPACEBAR to bring up the system context menu for the focussed window (the menu with maximise, minimise, close options etc.) do not seem to work. I'm guessing with the DirectX rendering backend, the 'app' is rendered more akin to a video game than a native win32 process.
Also after install, the install directory takes up 400MB+. Even VSCode only takes up around 380MB. I believe it when they say it's not an Electron app, but I do wonder what's being packed in there. I was always under the impression that Rust apps are pretty lightweight, but that install size is nearing Java levels of binary/dependency bloat.
> I was always under the impression that Rust apps are pretty lightweight, but that install size is nearing Java levels of binary/dependency bloat.
For what it's worth, the zed executable on Linux weighs 3.2 MB.
EDIT: Sorry, the nix store is too good at hiding things from me. It's actually around 337 MB plus webrtc-sys.
I just compiled "zed" with "cargo build --release" and not only did it pull >2000 dependencies, its size (executable file) is literally 1.4G. Debug is 1.2G.
--- Summary: This is on a whole new level of bloat, both with regard to dependencies AND the resulting executable file(s) (EDIT: executable files are unstripped). Any explanations as to why "cargo" does not seem to re-use libraries (dependencies) in a shared directory, why it needs >2000 dependencies (that I see being downloaded and compiled), or why the release executable is 1.4G unstripped while the debug one is smaller?
This is pretty common for larger Rust projects. It's basically the new javascript+npm mess, this time with a borrow checker.
Cargo does the de-duplication, but only up to a point. If two packages request the same dependency with semver ranges that have a common overlap (say, `1.4` and `1.6`) then it will use a single package for both (say, `1.7.12`). But if they request semver-incompatible versions (`2.1` and `1.6`) then cargo will use both.
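You can see exactly where this bites in a given checkout; cargo reports it directly (using the /tmp/zed checkout from the comment above):
$ cd /tmp/zed
$ cargo tree --duplicates
That lists every crate present in the dependency graph in more than one version, along with the dependents that pulled each version in.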
Unstripped, perhaps?
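A quick way to check how much of that 1.4G is debug info (same build tree as above; cargo can also do this at build time with strip = true under [profile.release]):
$ du -h target/release/zed
$ strip target/release/zed
$ du -h target/release/zed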
EDIT: Ah, it was too good to be true. The true binary is hidden in libexec/.zed-editor-wrapped :( Extra weight also comes from webrtc, which nixpkgs dynamically links. So yeah, it's quite a large binary indeed. Additionally, in any case, now I know what I have to do to free up some space. Get rid of Rust projects I built from scratch.
Maybe something like this to figure out what directories to delete:
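For instance - just a sketch, tune the path and depth to taste:
$ find ~ -type d -name target -prune -exec du -sh {} + 2>/dev/null | sort -rh | head
That lists "target" directories under your home by size; the big ones are usually Rust build trees you can delete (or run cargo clean inside).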
I found "websocat" and "ripgrep". Thankfully I got rid of everything else. That said, ripgrep itself is only 5.0M.
Probably due to treesitter modules for many languages compiled in. AFAIK Treesitter's codegen unfortunately shares nothing between different languages. So a dozen language parsers can easily cross upward of 200 MB.
Binaries for dynamic libraries of tree-sitter (usually compiled with C compiler) would be smaller than that. For example this [1] .so bundle for 107 different grammars is ~137 MiB.
Unless by "compiled in", some in-lining of the C code into Rust codebase is meant.
[1] https://github.com/emacs-tree-sitter/tree-sitter-langs/relea...
To make matters worse, it takes several minutes for Zed's window to appear on a cold start[1], whereas VSCode launches almost instantly.
[1] I am trying to measure it as we speak but it is taking quite a long time.
> Not to mention it takes minutes for the window of Zed to open, whereas VSCode is almost instant.
That one is interesting. It's much quicker for me, even cold starts are below 1s, and subsequent startups are basically instant.
Cold starts are minutes, subsequent startups are much faster than VSCode[1].
I wonder why though.
[1] I have not measured subsequent launches of VSCode though, but Zed is relatively pretty quick after the initial launch.
Maybe some kind of "security" software interfering?
I suppose it has to do with how every Rust crate (including dependencies) gets statically linked into the final binary, and this leads to extremely large intermediate artifacts, even when many crates share common dependencies.
Or the fact that there is incremental compilation artifacts...
And of course the amount of dependencies. A single project might depend on hundreds of crates (quite common), each compiled separately with its own build artifacts. sighs.
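At least the space is easy to reclaim per project:
$ cargo clean            # drops the whole target/ directory
$ cargo clean --release  # or just the release artifacts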
What does a desktop text editor have to do with WebRTC?
Judging by this note in the docs: Collaboration features.
https://zed.dev/docs/development/freebsd#webrtc-notice
If they're going to implement every feature under the sun and include half of userspace to support it, they might as well build the whole thing on top of a browser.
vscode has entered the chat...
Amazon q is 100mb and that’s a cli app. Rust programs be huge.
Compared to Sublime Text:
RAM:
213 MB Zed
41 MB ST
Storage:
406 MB Zed
52 MB ST
Startup time:
Zed is slower than ST (but only by a few milliseconds).
Also when you reopen ST it will remember how you've resized the window from last time whereas Zed won't.
Probably it helps that Sublime doesn't come with agentic AI features, LSP, and a whole video-conferencing and screen-sharing client by default.
> and a whole video-conferencing and screen-sharing client by default
Haha wait what? Are you confusing Zed (the text editor) with something else? Surely it doesn't ship with video conferencing???
I haven't seen video but it does have voice. And similarly I don't think it's screen-share, it's just editor state syncing, so live collaboration. Still quite a lot.
There is a feature that lets you share your screen. It shows up for other participants in the collaboration session as a tab in the editor.
Probably referring to the collaboration tools. Zed has a bunch of stuff around remote pair programming with people.
It has a voice chat system built in as part of the collaboration tools. Personally I think they should remove the voice chat...
ST is a text editor while Zed is an IDE. I wish there were something like VSCode that is very modular but written in native. But VSCode is good enough and it is my daily driver.
For those on Windows, which is the topic at hand, UltraEdit and Notepad++.
I disagree that Zed is an IDE; it is quite far from IntelliJ, Borland/Embarcadero, VS, Xcode, Eclipse, NetBeans...
If it is about adding enough plugins until it eventually becomes one, then any programmer's editor is an IDE.
My line is - if I can compile, run and debug my program through the editor UI instead of a terminal, it's an IDE.
As I said, then any programmer's editor is an IDE, including UltraEdit and Notepad++.
Notepad++ has a debugger UI? One that goes beyond running a terminal inside a pane?
It has plugins....and about 30 years of ecosystem history.
Try to use Zed to debug Go code.
I'm literally doing that right now. I can set breakpoints and graphically step through them in Go files.
Yeah, thanks to
https://microsoft.github.io/debug-adapter-protocol
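For Go, that adapter is Delve, as far as I know. If Zed doesn't fetch it for you, it's one command away:
$ go install github.com/go-delve/delve/cmd/dlv@latest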
> ST is a text editor while Zed is an IDE.
Zed is the new emacs?
No. In Emacs you can write a simple one line script and change anything.
In Zed, you need to write a full blown plugin, and can change only what the Zed authors exposed through their plugin API.
No, that doesn't matter. I think you should be looking for how quickly it can get you a working environment for your favorite language not how long it takes to boot up once per reboot. If you want features the bits have to live somewhere. Look at it like a trade off, if you're just going to look at it, by all means, take a memory dump. But I find that a little bit hard to work with.
For me, as long as it's better than alternatives it's good enough. Especially if it's not running JS.
A 400mb+ install of bloat will upset many people
This needs to be justified asap to help people understand and reconsider installing it.
Strangely it's the actual binary's .text section that's about 400MB. Time to dive in!
The Rust compiler always produces quite large binaries compared to other programming languages. I notice there's a (closed) issue on the Zed GitHub [https://github.com/zed-industries/zed/issues/34376]:
> At this time, we prioritize performance and out-of-the-box functionality over minimal binary size. As is, this issue isn't very actionable, but if you have concrete optimization ideas that don't compromise these priorities, we'd be happy to consider them in a new issue.
Welcome to static linking of large applications.
The world moved to dynamic linking in the 1980s for a reason.
It is cool to advocate for a return to static linking when it's just basic CLI tools.
All those beautiful dlls will anyways sit comfortably in the same folder as your "dynamically" linked executable on Windows.
They might be, or not.
I think there should be a best-of-both-worlds type of linking - during compilation, the linker places a statically compiled library at a certain address but doesn't include it in the binary. Then, during startup, the OS maps the same library to the given address (sharing the data between processes). This would improve both memory use and startup time, while avoiding the performance cost of dynamic linking. Of course you need to match the exact versions between the compiled executable and the dependency, but this should be a best practice anyways.
Static linkers generally don't include a "full copy" of the library, just the code paths the compiled application uses. The compiler may have also made optimizations based on the application's usage patterns.
I say "strangely" because honestly it just seems large for any application. I thought they might not be doing LTO or something but they do thin LTO. It's just really that much code.
> The world moved into dynamic linking in the 1980's for a reason.
Reasons that no longer exist. Storage is cheap, update distribution is free, time spent debugging various shared lib versions across OSes is expensive.
Yet everyone is complaining on this thread about Zed distribution size, go figure.
They should shut up and just buy bigger drives. Ah, they can't on their laptops, bummer.
Also try to develop mobile apps with that mentality,
https://www.abbacustechnologies.com/why-your-app-keeps-getti...
Tbh, the rights and wrongs aside, I suspect "everyone" is complaining about it because it's the easiest thing to talk about. Much like how feature discussions tend towards bikeshedding.
Precisely. It seems like the people who say storage is cheap assume everyone is using desktop PCs
Storage is cheap and upgradeable on all but very very few Windows laptops.
> Storage is cheap
My /usr is 15G already, and /var/lib/docker isn't that far off despite people's obsession with alpine images. If more people would dismiss storage as cheap it'll quickly become expensive, just not per GiB.
> update distribution is free
I wouldn't be surprised if at one point Github would start restricting asset downloads for very popular projects simply because of how much traffic they'd generate.
Also, there's still plenty of places on the planet with relatively slow internet connectivity.
Storage doesn't really feel cheap. I'm considering buying a new laptop, and Apple charges $600 per TB. Sure, it's cheaper than it was in the '80s, but wasting a few gigabytes here and a few gigabytes there is quickly enough to at least force you to go from a 500GB drive to a 1TB drive, which costs $300.
That's more of an Apple problem? Storage is under $50/TB.
It's the reality of storage pricing. The general statement "storage is cheap" is incorrect. For some practically relevant purposes, such as Apple laptops, it's $600/TB. For other purposes, it's significantly below $50/TB.
You could say "just don't buy Apple products". And sure, that might be a solution for some. But the question of what laptop to buy is an extremely complicated one, where storage pricing is just one of many, many, many different factors. I personally have landed on Apple laptops, for a whole host of reasons which have nothing to do with storage. That means that if I have to bump my storage from 1TB to 2TB, it directly costs me $600.
If you're buying Apple then you should expect inflated prices. I got a 4TB NVMe SSD for like 350€, a 2TB one goes from 122 - 220 € depending on read/write speeds.
I don't check the installation size of applications anymore.
I'm just saying that $600/TB is a real storage price that lots of people deal with. Storage isn't universally cheap.
This feels especially relevant since we're discussing Zed here, the Mac-focused developer tool, and developers working on Mac are the exact people who pay $600/TB.
A 2TB SSD for the Framework 13 cost me 200 euros. But I agree that it's not cheap, files are getting bigger, games are big, apps are huge, and then you need backups and external storage and always some free space as temp storage so you can move files around.
Bro, with this mentality, you won't get far in the Apple universe.
Accept that your wallet will be owned by Apple. Then you can continue.
Sorry, but people buying Apple products are a different breed :D
I don't need to "get far in the Apple universe", I need a laptop. My current MacBook Pro cost about the same as the Dell XPS I was using before it; I like nice laptops.
RAM isn't cheap (it may be for your tasks and wallet depth, but generally it isn't, especially since DDR5). Shared objects also get "deduplicated" in RAM, not just on disk.
What objects is the Zed process using that would even be shared with any other process on my system? Language support is mostly via external language servers. It uses its own graphics framework, so the UI code wouldn't be shared. A huge amount of the executable size is tree-sitter related.
I 100% agree. As soon as you step outside of the comfort of your Linux distributions' package manager, dynamic linking turns into dependency hell. And the magic solution to that problem our industry has come up with is packaging half an OS inside of a container...
> Storage is cheap
I'll be very grateful if you stopped using all my RAM for two buttons and a scrollbar thank you.
OSes don't load the full executable into physical RAM, only the pages in the working set. Most of the Zed executable's size is tree-sitter code for all the supported languages, and only needs to page in if those languages are being used in a project.
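Easy to sanity-check on Linux by comparing the mapped size with what's actually resident, assuming a running zed process:
$ ps -o vsz=,rss= -p "$(pgrep -n zed)"
The first number covers everything mapped, including the whole executable; the second is what currently occupies physical RAM.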
Big sigh. I wish we still had pride in our field, rather than this race to the bottom mentality.
I really like this article "How Swift Achieved Dynamic Linking Where Rust Couldn't" https://faultlore.com/blah/swift-abi
I was a little sus, so I checked: https://imgur.com/a/AJFQjfL
897MB! But it appears to have installed itself twice for some reason. Maybe one is an 'update' which it didn't clean up...? I'm not sure.
Edit: I just opened it and it cleaned itself up. 408MB now. I guess it was in the process of upgrading.
So the upgrades are not delta diffs either?
Even if it’s delta, it cannot patch itself when running on Windows. So it runs the updater, creates a new exec and switches to it after relaunch. Same as Chrome or Firefox.
OS deficiency. And maybe programs shouldn't be allowed to update themselves.
Is it? On Linux, you can overwrite the file, but the underlying inode will still be open, and the 'invisible' old version will linger around - you don't have any easy way, short of restarting everything, to make sure the new version is being used.
And with Chromium this directly leads to crashes - when you update the browser while it's open, new tabs will open with the new version of the binary while the old ones still use the old binary - which usually leads to a crash.
I prefer 'you cannot do X' instead of 'we allow you to do it, but it might misbehave in unpredictable ways'.
I don't use Chromium. I never had issues with Apache, MySQLd, Firefox, Thunderbird, ... . You can even swap out the Linux kernel under userspace and everything keeps running.
> maybe programs shouldn't be allowed to update themselves.
Honestly I'd be all for this if the OS had a good autoupdate mechanism for 3rd party applications. But that's not the world we live in. Certainly not on windows - which is too busy adding antivax conspiracy articles to the start menu.
Will it though? I mean it's a lot for a "text editor", but much less than a classical IDE. And 400M is pretty negligible if you're on Windows, where your OS takes up dozens of GB for no reason.
Yeah I don't think 400M is really that big a deal. My `.emacs.d/` dir weighs in at over 1G and I've never thought twice about it.
For people who are serious about their text editors, 400m is a small price to pay for something that works for you.
If the OS is already bloated, that leaves LESS space for your editor!
Rust Hello World is larger than Git. Still smaller than Java and Electron, but not exactly small.
Entirely untrue. Download git, run make and you'll get a 19MB `git` binary along with a whole lot of other 19MB binaries. Running `cargo build` produces a 3.8MB binary.
And that's still comparing apples to oranges, because git is compiled with full optimizations. Running `cargo build --release` produces a 462KB binary.
Even if I'm comparing to my system's git installation, that's still 3.9MB, and that's with all the debug info stripped.
Yes, rust (like C++) tends to produce larger binaries than C, but let's be real here: The reason Zed has a bloated binary is the ~2000 rust packages that comprise it.
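Anyone can re-check those numbers in a minute on their own toolchain (sizes vary by platform and rustc version):
$ cargo new hello && cd hello
$ cargo build && du -h target/debug/hello
$ cargo build --release && du -h target/release/hello
$ strip target/release/hello && du -h target/release/hello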
> The reason Zed has a bloated binary is the ~2000 rust packages that comprise it.
Hundreds of those MBs are from tree-sitter grammars, which are JavaScript compiled to C.
That's an entirely different issue. The kb's of overhead for backtrace printing and the format machinery is fixed and does not grow with the binary size. All combined it wouldn't account for anywhere close to 1mb let alone 100's of mb.
> I was always under the impresion that Rust apps are pretty lightweight
I'm not sure what gave you that impression. I'd say Rust is pretty well known for fat binaries
Most of the sticker shock from Rust binaries is due to them being statically-linked by default. Considering that, Rust binaries aren't especially large, especially if you strip them. Dynamically-linked binaries are better at obscuring their size.
> I was always under the impresion that Rust apps are pretty lightweight
Maybe compared to Electron, but binary size is an issue with any nontrivial Rust application. Due to how cargo works, it compiles and bundles in every dependency in the world.
400MB is unnecessarily large though.
The Helix binary on my system is 20MB+ but the dynamically linked grammars are an additional 200MB. Those 380-400MB are probably not pure binaries, are they?
> I believe it when they say it's not an Electron app, but I do wonder what's being packed in there
Half of Electron, namely Node.js, since the majority of LSPs are .js based. Also, extensions are WASM. And VS Code keeps extensions in a separate config directory, while Zed keeps them in the main directory.
Just so others here know, it’s possible to have a graphics context and a Win32 menu bar in the same window.
PSPad is 40MB. And that is quite a legacy piece of software that is still being updated to this day. Notepad++ is 17 MB.
400 MB for a new project in this amazing bestest compiled language ever made is ridiculous.
Zed looks and feels amazing to use. I test-drove it for a bit on my linux system, and the feel of it is difficult to convey to those who have not tried it yet. It's easy to overlook the significance of a GPU-accelerated editor - but I promise you, use it for a bit and you'll be sold.
The only feature that is preventing me from switching to Zed is the current lack of DevContainer support[1]. After investing a significant amount of time perfecting a devcontainer (custom fat image with all tools/libs + config), it would be a large step backwards to go back to installing everything locally.
There's a lot of eyes on this feature, so I'm hopeful it will be implemented in the future.
[1] https://github.com/zed-industries/zed/issues/11473
One of the reasons why I'm not fully switching yet is Zed's inability to update the currently open file with changes made elsewhere [1].
All the other editors I use are aware of outside changes, but not Zed. And I'm just not willing to close and reopen the file to get fresh contents. Eventually, I'll forget to do it and lose some work.
[1] https://github.com/zed-industries/zed/issues/15791
How do other editors handle this in your experience? I’m pretty sure VSCode behaves exactly the same, with the addition of a manual “merge” editor when you try to save, but never shows changes live for a modified file.
Changes to open files without any modifications in the buffer are always shown. Are you using any kind of containers or virtual fs that might be interfering?
I tried it for a while. It's okay, I guess. Typing latency is neither an issue nor a bottleneck for me, so personally, I don't see the appeal. Apart from that, it feels lacking and offers nothing else that I don't already get from VS Code. If I really cared about performance, I would use Neovim.
I find Neovim to be surprisingly sluggish. That's of course after installing extensions, but I don't find it particularly performant. Zed feels way snappier.
I write C++ and Typescript and just tried Zed on Windows, it didn't feel significantly faster for me (maybe a bit).
What's worse is C++ autocomplete - both on the same project with clang and CMake - it might be that I need some more setup, but I felt like Zed understood much less of tricky modern C++ (tbf, VS Code isn't perfect either) and the autocomplete was kinda broken. I fully admit this might be due to my own inexperience, so I won't hold it against Zed, but so far I haven't seen anything significantly better to justify making the switch.
Any chance you'd be willing to share more about your custom dev container?
What I do is run Zed in the dev container. But a custom one with bubblewrap.
What does having a DevContainer get you?
I’m all for documenting every bit of my setup, but beyond that…
Not the OP, but it's pretty useful in my team: we all work in the same environment, with the same system dependencies, with no setup required on the development machine (except the need for docker).
In the devcontainer you can run code snippets, use the system shell, and access an execution environment that, if well made, is close to production.
It also allows you to avoid (system) dependency conflicts between different projects you may work on.
I sometimes work on old PHP applications, some of which require versions as ancient as 5.4 and refuse to run on anything beyond that. They're not only difficult/impossible to install on modern systems, but also make it hard to onboard other people — you have to write a detailed setup document and then waste a couple of hours for each new system. And then repeat that a few more years down the line.
Why not write that document as a bunch of build instructions instead of free-form text, while also isolating the result from the rest of the system?
Are you talking about the same thing? This sounds like your typical docker/compose dev setup for running the software under development. The "dev container" stuff is more about running your editor stack in a container, like static analysis tools, language servers etc.
In our case, instant on-boarding of devs with no host setup needed. It's a large, complicated repo with a lot of unusual system dependencies. We build a multi-arch base image for all this and then the devcontainer is built on-top. Devcontainer hooks then let us do all the env setup in a consistent, version controlled way. It's been a god send for us and we'll never go back.
I use a devcontainer to run Claude Code on a sandbox with all tools and services installed. That way I can run YOLO mode with some safety.
That's quite unnecessary. You could just mount "all tools and services" from your base system without the complicated Dockerfile: https://blog.gpkb.org/posts/ai-agent-sandbox/
Why would I do something that’s more work, less secure, and not cross platform?
It's less work and just as secure, but you do you.
I am a big fan of using it when working with somewhat complicated toolchains, especially if they need to be compiled. Cleaning up the file system is as simple as deleting the container and starting a new one. OCaml on windows comes to mind.
It's ideal for open source projects, no need to install a toolchain locally for small changes. I've used it for a POC in Go where nobody has Go installed in my organization. Not that Go is complicated to install, but I wouldn't ask anyone to install a toolchain for a one-off POC.
There's other toolchains that are more involved or conflict with an operating system's pre-installed stuff, like (iirc) Ruby on MacOS.
It works with devpods
I've been using Zed primarily for months but I just switched back to VSCode for 2 reasons, one of which is kinda my fault and the other it's unclear where the fault is.
1. I deleted a few hours of work because I was late night coding and I decided to rename a file before check-in and delete the old version. Well I renamed it and Right-Click -> Deleted the new version by accident.
It turns out Zed has both "Delete" and "Trash" right next to each other in its right-click menu. Delete skips the trash and Zed does not implement Ctrl+Z yet, so unless you have a backup that file is gone. Because I renamed it and had not yet checked in, it wasn't in version control yet.
2. For some reason a particular crate in my Rust workspace wasn't having errors and warnings show up in the editor. I messed with configuration a bunch and couldn't figure it out, so I tried VSCode and it just worked without any additional configuration.
> Delete skips the trash and Zed does not implement Ctrl+Z yet,
I'm not going to claim Zed has a good UI in this space, but saying it doesn't implement Ctrl + Z for a feature which is literally "skip the undo-ability of this option" is a bit misleading.
As I just elaborated on in another comment, VSCode and Zed both have "Delete" in the right-click menu as the last entry. VSCode only has 'Delete', and while it has a confirm screen, it goes to the trash with the ability to Ctrl+Z.
After years of deleting files in VSCode I have a muscle memory for that behavior and I just skip through the dialogue. I didn't realize Zed's 'Delete' worked differently until I lost work, so I was just reflexively skipping through its confirm screen as well.
While I understand your point, and think you are correct - if the 'Trash' button is not behind a confirmation box, and it's not undoable, then that is a pretty terrible design choice.
I've just grabbed Zed, and both Trash and Delete are behind confirmation boxes. Trash sends to my Trash (recycle bin on windows), and delete properly deletes. There's no built in undo for trash, but I can restore from the trash/recycle bin normally.
yeah I just tried the windows build and they are indeed behind a confirmation box (for both trash and delete). That's not the impression I got this morning from reading OP's comment
OP here. I have muscle memory from many years of using VSCode before Zed.
"Delete" is the last option in the right click menu for both and when you select it both show a very similar dialogue box asking you to confirm.
VSCode's 'Delete' by default moves the file to the trash and can be undone with Ctrl + Z. Zed's 'Delete' skips the trash and can't be undone with Ctrl + Z.
I should have mentioned the confirmation box but after years of use I've begun clicking through that box so quick I didn't realize the behavior was different in Zed.
On the subject of surprising behaviour. I did something last week. I don't know what it was, but maybe some kind of fat-finger error.
It randomly deleted 3-line chunks all across my codebase. No rhyme or reason. I was just about to commit a big change, so it was annoying.
It might have been a global search and replace, but there was no clue in the history. And I can't think what kind of regex it could have been.
Or maybe a bug in IPC with the language server?
Anyone got any clues about what I did?
(I love Zed, on Mac and Linux. Even went as far as buying a new graphics card as my 5-year-old one didn't have Vulkan support, which is pretty much mandatory for Zed)
Just a pure guess, but if you do a global find, you get a list of 3-line-ish chunks of context around search matches.
If you then Ctrl+A to select all and press backspace, I wonder if that would delete all those 3-line chunks...
That's almost certainly it. I'd probably searched for common fragment. And 'select all' does indeed select across all of the search result buffers. Thanks!
This is like when the macOS Touch Bar was a thing: if I recall correctly, in the commit management menu, cancel was right next to force push.
About no. 1, I think any sane application will show a warning like "Do you want to delete abc.c?" before permanently removing the file. I cannot verify it now, but I'm pretty sure VSCode has it.
So --
* If there is no such dialog, it's on Zed
* If there is such a dialog and you clicked yes, it's on you
It does have such a dialogue but VSCode's identically named "Delete" option has a nearly identical warning dialogue that moves the file to the trash instead.
Zed's dialogue just says "Delete file.txt?" so if you're used to VSCode it's very easy to skip through that dialogue reflexively.
> If there is such a dialog
Just checked, there is a modal dialog that reads:
> Zed does not implement Ctrl+Z yet
Wait, what? That seems like an incredibly important missing piece of functionality.
Undo works for editing. Just not (apparently) file deletion.
To clarify, you can't Ctrl+Z immediate deletion of a file (skipping the Recycle Bin/Trash). You can undo other stuff, of course.
How can people justify using editors like this when they are missing basic features?
Like what's the benefit?
That's a bit dramatic, isn't it? Zed is still fairly new and although it's not perfect, they're doing quite a good job so far.
I would be using Zed full time if Sublime Text wasn't already perfect for me.
Zed is pretty awesome on macOS and Linux, but I tried the Windows version tonight and things that work great on the other OSes aren't working on Windows. I've noticed:
- Issues with various keybinds not working as expected, e.g. Ctrl+S to save works on my Linux machines but not Windows if Vim Mode is enabled
- Issues with LSPs/diagnostics not working on Windows the same way they work on macOS/Linux
- The terminal holds a lot of assumptions about how the system works that don't hold true on Windows, e.g. "the user has one shell they use for all things" or "setting environment variables in the OS is easy"
I love Zed on my work issued macbook and use it full time, but the Windows version still needs some work.
> setting environment variables in the OS is easy
Windows has a GUI, how can it not be easy. /s
It isn't as easy as on Linux, but you press Windows, type env, click "Edit environment variables", then select User vs. System, then press Add. That's not too complicated.
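There's also a one-liner if you prefer the terminal (user-level; the variable name is just a placeholder):
> setx MY_VAR some-value
Though setx only affects processes started afterwards, so the same restart caveat applies.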
… And then you completely close all instances of the app that you need to see the changed environment variable, including any integrated terminal processes within that app. And then you relaunch all instances of that app and recover whatever workflow you were currently in the middle of.
Like, it’s not great.
That sets them globally, right? I need different variables and values in different contexts. e.g. in VSC I can set variables per-directory in settings.json files.
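For reference, the key I mean (terminal.integrated.env.windows is the actual VSCode setting; the variable is a placeholder):
$ cat .vscode/settings.json
{
  "terminal.integrated.env.windows": {
    "MY_VAR": "per-project-value"
  }
}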
Yeah Windows having Rename and Delete next to each other was always bad design.
Windows has undo for Delete, however. You have to press Shift to delete permanently, and then you also get a confirmation prompt.
Ok, I always do this unconsciously.
Have they implemented subpixel font rendering by now? I remember that being a sticking point when it came to Linux because they had designed their custom UI renderer around the Mac's ubiquitous HiDPI displays, leading to blurry fonts for the much, much larger proportion of Linux (and Windows) users who still use LoDPI displays.
Idk about subpixel font rendering, but font rendering on Linux looks massively better after a patch last week: https://github.com/zed-industries/zed/issues/7992#issuecomme...
I'm glad there's finally some progress in that direction. If they actually implement subpixel RGB anti-aliasing, it would definitely be worth considering as an alternative. It's been surprising to see so many people praise Zed when its text rendering (of all things) has been in such a state for so long.
Tbh though, is subpixel text rendering really all that important anymore when high resolution monitors are common now and low-dpi is the exception?
You should get outside your major metropolis and highly paid Western job once in a while. High-DPI monitors are the exception for most of the world.
Even then... my visibility is pretty bad, so earlier this year I upgraded to 45" 3440x1440 monitors, and even then I'm viewing at 125%, so subpixel fonts helps a lot in terms of readability, even if I cannot pick out the native pixels well.
They aren't high-dpi though, just big and still zoomed. On the plus side, it's very similar experience to two 4:3 monitors glued together... side by side apps on half the screen is a pretty great experience... on the down side, RDP session suck, may need to see if I can find a scaling RDP app.
It doesn't take being outside of the west for this to be relevant. Two places I currently frequent, A) the software development offices of a fortune 500 company, and B) the entire office & general-spaces (classrooms, computer labs, etc) of a sizeable university, have 1080p monitors for >80% of their entire monitor deployment.
Most people I know are on 1920x1080 LCDs. Over half of PC gamers seem to be on that resolution, for example: https://store.steampowered.com/hwsurvey
My gaming PC is also connected to a 1080p display because tbh for gaming that's good enough, but I don't whine about application text quality on that setup since it looks pretty bad with or without ClearType compared to a highdpi display ;)
Yea, I tried to give it a go on Fedora, but the terrible text rendering made it an insta-delete for me.
ok, for what reason do we need sub-pixel rgb anti-aliasing here???? do we run a game engine for code??
Subpixel antialiasing of fonts is a pretty standard feature for a few decades. Without it, text can look fuzzy on certain displays.
> do we run a game engine for code??
Zed literally does this; they render their UI using a graphics library just like a video game.
It's fun to see "GPU accelerated" and "like game engine" when literally every application is rendered the same way with the same APIs.
Last I checked I don't create a GL context to make a WPF app.
That "after" image is still rendered with greyscale AA rather than subpixel, but whatever they changed did make it more legible at least.
> The Windows build uses DirectX 11 for rendering, and DirectWrite for text rendering, to match the Windows look and feel.
As I understand, DirectWrite font rendering uses Windows subpixel font rendering. It seems OK on my monitor (better than the linux version) but haven't done any pixel peeping.
They seem to have anticipated this issue and designed it accordingly!
I tried it out on macOS and have a 1440p external monitor that the fonts just look horrible on. Looks fine on the laptop's "retina" display but blurry enough everywhere else that it actually gave me a headache after a few hours.
I am confused about this. Doesn't Zed use CoreText on macOS just as it uses DirectWrite on Windows? Shouldn't macOS CoreText handle all this?
Spitballing here, but subpixel rendering also requires integration with the rendering pipeline, specifically that you have the infrastructure to pass fractional offsets to the rasterization engine based on the concrete screen position of a glyph, and that the glyph atlas is a full RGBA texture (not just an alpha texture), and that the rasterizer knows about the particular texture format (RGBA vs BGRA etc.)
You don't _need_ fractional offsets no?
There's many ways to represent it, but the "sub" in "subpixel" does tell us that someone along the way needs to deal with fractional pixels. :-)
It does not imply fractional glyph positioning though. By itself subpixel rendering simply exploits knowledge about the configuration of the subpixels (red, green, blue) of the pixels on a particular raster display. (horizontal, vertical, what order,... ?)
If they’re like egui, they may render to a font atlas which complicates subpixel-AA.
I also backed out from using Zed a couple months ago, but since last week, Linux font rendering looks good to me both on full HD and HiDPI displays.
No lol
I installed Zed and tested out a bunch of fonts on my 1440p monitor. It looks decent, but not great. I think that's more a byproduct of Windows' awful font rendering in general though moreso than a Zed specific problem. VSCode is no better.
Seems like the only way to get high quality font rendering these days is a 4k+ display.
> Windows' awful font rendering
Just be aware that half the population prefers Windows font rendering.
Fair point. Maybe I should have said something about modern fonts not being properly hinted for windows instead of placing all the blame on the OS.
I watched the video on the home page and thought it was weird that they spend an inordinate amount of time on frame rate. Who picks an editor based on frame rate?
If you want to talk about perf in the context of a text editor show me how big of a file you can load--especially if the file has no line breaks. Emacs has trouble here. If you load a minified js file it slows to a crawl especially if syntax highlighting is on. Also show me how fast the start up time is. This is another area where Emacs does not do well.
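If you want to reproduce that stress test yourself, a single-line monster file is one command away:
$ base64 /dev/urandom | tr -d '\n' | head -c 100000000 > big-one-line.txt
100 MB and no line breaks - open it with syntax highlighting on and see which editors survive.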
So Zed is available on Windows--but only if you have a x64 processor. Lots of people run Windows on Arm64 and I don't see any mention of Arm64. This is where the puck is heading.
Also noticed Emacs key binding is in beta still.
It's not just frame rate, but also input delay. If you're using Visual Studio Code, you might be used to waiting 100 ms for a character you typed to appear. My personal workflow is based on Kitty and Neovim, which I've configured so that it can launch within 20 ms. Working without any input delay allows me to explore and edit projects at typing speed. As such, even tiny delays really bother me and make me lose my flow. I would believe Zed's focus on performance is motivated similarly.
Also, I do not believe Windows on Arm64 is a very large demographic? Especially for developers, unless they're specifically into that platform.
The only IDE I have used where frame rate is noticeable was Visual Studio (not Code).
Once you are beyond a bare minimum, every other speed metric is more important. Zed does really well on many of those, but some depend on the LSP, so they become the bottleneck quickly.
It is 18% according to this story: https://www.tomshardware.com/laptops/projections-show-that-a...
Most of that is macOS and ChromeOS, not Windows.
Yeah. The Steam survey isn't a perfect sample since it's skewed towards gamers, but that currently shows just 0.08% of Windows users are on ARM, while 81.5% of Mac users are on ARM.
That may be true if you're looking at all windows computers in existence. If you look at new laptops being sold you see different numbers. As of 2025, Arm processors hold about 13% to 20% of the market share for new Windows laptops. This is important because these are the people who are more likely to download and install your software.
> you might be used to waiting 100 ms for a character you typed to appear.
The latest benchmark I could find is from 2022, and it's nowhere near as bad as you claim
https://github.com/microsoft/vscode/issues/161622#issuecomme...
I ran it on VS Code and kitty recently:
https://news.ycombinator.com/item?id=45253927
https://news.ycombinator.com/item?id=45254054
Did you run this on a clean VSCode install?
I know as a matter of fact that bad extensions slow down VSCode significantly.
Thanks. Could you please do Zed also? I tried, but it was struggling to type.
You literally can’t tell the difference in a 20ms delay. That is an order of magnitude lower than the neural feedback loop latency. You may think that you can, but studies don’t back this up.
Let's see.
https://web-backend.simula.no/sites/default/files/publicatio...
> At the most sensitive, our findings reveal that some perceive delays below 40 ms. However, the median threshold suggests that motorvisual delays are more likely than not to go undetected below 51-90 ms.
By this study's numbers, 20ms is somewhat below the lower limit of ~40ms, but not too far below. 100ms would be easily perceivable - though, based on the other replies, it seems that VS Code does not actually have that much latency.
Don't confuse this with human reaction time, which is indeed an order of magnitude higher, at over 200ms. For one thing, reaction time is based on unpredictable events, whereas the appearance of keystrokes is highly predictable. It's based on the user's own keypresses, which a touch typer will usually have subconsciously planned (via muscle memory) several characters in advance. So the user will also be subconsciously predicting when the text will appear, and can notice if the timing is off. Also, even when it comes to unpredictable events, humans can discern, after the fact, the time difference between two previous sensory inputs (e.g. between feeling a keyboard key press down and seeing the character on screen), for much shorter time differences than the reaction time.
Of course, just because these levels of latency are perceptible doesn't mean they're a material obstacle to getting work done. As a relatively latency-sensitive person, I'm not sure whether they're a material obstacle. I just think they're annoying. Higher levels of latency (in the hundreds of ms) can definitely get in the way though, especially when the latency is variable (like SSH over cellular connections).
He’s not arguing that he can perceive the latency. He’s arguing that the latency affects his work/productivity. That claim seems unreasonable.
You're the one who said "you literally can't tell the difference". I agree to a point. It seems plausible that they were experiencing some other effect such as hitching or slowdown, rather than just a constant 100ms delay (which again isn't supposed to happen).
On the other hand, I just thought of one way that even a small fixed amount of latency can be a material obstacle. Personally, I type fast but make lots of typos, and I don't use autocorrect. So I need to react to incorrect text appearing on screen, backspace, and retype. The slower I react, the more text I have to delete (which means not just more keypresses but also more mental overhead figuring out what I need to retype). For this purpose, I am bound by the human reaction time, but editor latency is added on top of that. The sooner the text appears, the sooner my 'reaction timer' can start, all the way down to 0 latency. [Edit: And 100ms of latency can make a meaningful difference here. I just did a quick typing speed test and measured 148 WPM which is around 12 characters per second, so 100ms is one extra character, or a bit more.]
Also, latency might affect productivity just by being annoying and distracting. YMMV on whether this is a legitimate complaint or whether you should just get used to it. But personally I'm glad I don't have to get used to it, and can instead just use editors with low latency.
"Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Also you can absolutely feel the visual difference between 60Hz (~16ms) and 120Hz (~8ms), and for audio it's even more nuanced.
Just because studies don't back this up yet doesn't make it false. I imagine this is really hard to measure accurately, and focusing only on neuron activity seems misguided too. Our bodies are more than just brains.
> "Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Human neural feedback loop latency is a range that varies widely depending on the type of loop involved. Reflex loops are fastest, operating in tens of milliseconds, while complex loops involving conscious thought can take hundreds of milliseconds.
Short-latency reflex: 20-30ms. Signal travels through spinal cord, bypassing the brain. E.g. knee-jerk reflex.
Long-latency reflex: 50-100ms. Signal travels to the brainstem and cortex for processing before returning. E.g. Adjusting grip strength when an object begins to slip from your hand.
Simple sensorimotor reaction: 230 - 330ms. Simple stimulus-response pathway involving conscious processing, but minimal decision-making. E.g. pressing a button as soon as light turns on.
Visuomotor control: ~150ms, adaptable with training. Complex, conscious loops involving vision, processing in the cortex, and motor commands. E.g. steering a bike to stay on a path in a video game.
Complex cognitive loops: Brain's processing speed for conscious thought is estimated at 10 bits per second, much slower than the speed of sensory data. High-level thought, decision-making, internal mental feedback. E.g. complex tasks like analyzing a chess board or making a strategic decision.
A few years ago I did some testing with a quick Arduino-based setup I cobbled together and got some interesting results.
The first test was the simple one-light-one-button test. I found that I had reaction time somewhere in the 220-270ms range. Pretty much what you'd expect.
The second test was a sound reaction test: it makes a noise, and I press the button. I don't remember the exact times, but my reaction times for audio were comfortably under 200ms. I was surprised at how much faster I was responding to sound compared to sight.
The last test was two lights, and two buttons. When the left light came on I press the left button; right light, right button. My reaction times were awful and I was super inaccurate, frequently pressing the wrong button. Again, I don't remember the times (I think near 400ms), but I was shocked at how much just adding a simple decision slowed me down.
very cool data. thanks for sharing!
Do you not perceive more than 50 Hz?
Do you react and respond per-frame of a video?
Most people can with some Aimlabs training...
I don't have to think a full thought for every keystroke.
Then you don’t have to have an input response latency so small as to be perceived instantaneously.
I dunno what to tell you, Zed feels so much snappier than VSC to me.
High frame rates (low frame times, really) are essential to responsiveness which, for those who appreciate it, is going to make much more of a difference day to day than the odd hiccup opening a large file (not that zed does have that issue, I wouldn't know as I haven't tried opening something huge).
This is one of those things that make me question whether I experience the world fundamentally differently than many of you.
I have never, ever felt “latency” in editor UI. Any editor UI. It’s editing text for Pete’s sake. I can only type so fast, or read so fast.
You probably do. Many people just never notice that. It's not about typing or reading fast either, it's just about how it feels. Typing into something with shitty latency feels like dragging my fingernails across a chalkboard.
It's the same with high dpi monitors. Some people (me included) are driven absolutely insane by the font rendering on low density monitors, and other people don't even notice a difference.
Honestly, consider yourself blessed. One less thing in the world to annoy you.
Yes, I can perceive that latency, if I am actively looking for it. No, it has absolutely no effect whatsoever on my ability to work. The latency is far, far below what could possibly affect neural feedback loops, even on the slowest editors. And it doesn’t bother me in the slightest.
Low-dpi font rendering also isn’t an issue for me, unless it is so bad as to be illegible (which no modern system is).
We really do perceive things differently.
Have you ever used a display running over 60hz refresh rate?
Has it ever impacted your ability to read, or type?
That's an interesting take. For whatever reason, frame rate is not one of my complaints about existing editors such as Emacs, VS Code, etc.
Latency often is for VS Code - that's frame rate. Your editor taking 1s to respond to inputs is not normal.
It's expected for editors to have non-perceivable latency. It's just text, how hard can it be.
> Who picks an editor based on frame rate?
Me! Frame rate and input latency are very important for a tool I use for hours every day. Obviously that's not the only feature I look for in an editor but if an editor _doesn't_ have it, I skip it. I also try to work on devices with 120Hz displays and above these days.
I think the claim is more that an editor is supposed to have an arbitrary good frame rate.
Yeah, Kate will choke on a large single-line file. It's one of the very few issues I bump into from time to time.
This always makes me laugh. The editor was barely announced two years ago. They've built it from the ground up with native support now for three different operating systems. They're experimenting with some cool new features, and even though I don't care about it I've heard their AI integration is pretty damn good.
But waaaaah, they don't support a processor that accounts for probably less than 10% of Windows machines
Ubiquity is pretty important when you're going to invest in learning a new editor. This is one of the advantages of vim for example. It is available everywhere... linux, windows, terminal, gui, etc.
You mean... like a GUI editor that runs on Windows, Mac, and Linux?
That's more of an issue with your system than an issue with Zed, you have to veer pretty far from the beaten path to not have proper DirectX nowadays. Are you running Windows in a VM?
No, but I am remoted in to my dev box (over RDP/mstsc).
If you're using it via RDP then you won't notice any rendering performance issues since RDP itself has terrible rendering performance.
I've literally programmed with VSCode for basically a decade this way without issues. Zed lags, it's disappointing and if I really like Zed I'll have to sort out another setup.
MSTSC is one of the rock-solid tools from Microsoft, and does better than almost everything else available on the market other than some PC over IP technologies specifically highly optimized for this use case. I've been programming with an ancient ThinkPad forever because I'm just remoting into a much more powerful machine.
If you are having problems with Windows RDP then the problem is somewhere else along the pipeline, not RDP itself. No other remote solution I've used is even in the same ballpark as RDP in terms of responsiveness and quality. RDP on a LAN is very usable for actual coding work.
Maybe try gaming-oriented remote desktop tools, like steam link or sunshine/moonlight. Those work great with directx, assuming you have a working gpu (at least integrated gpu) on your remote box. They also have way better latency, though use a lot more bandwidth.
How am I supposed to ask for permission from IT to install Steam or gaming-related tools like Moonlight just to use a code editor?
By explaining the advantages over "older" methods. A lot of people use Moonlight/Sunshine for non gaming related stuff, specially considering than the alternatives are all proprietary.
They're productivity tools, not gaming software. You'll be faster and deal with less errors using the correct optimized remote desktop tool for your job, versus what you're using now, which can be slow and error prone.
> Moonlight
> Open source game streaming client
> Moonlight allows you to play your PC games on almost any device
OK, fine, maybe Sunshine will be different.
> Sunshine is a self-hosted game stream host for Moonlight.
Maybe not.
Sunshine and Moonlight are no more than accelerated and finely tuned VNC servers that happen to be targeted at gaming. You can totally set them up as a regular remote desktop solution.
Right, but I'm saying their sites make it very clear that they're meant for gaming, so GGP may indeed have a hard time convincing corporate IT that they're for work.
My post was answering the question of how they should ask for permission to use it. You pitch it as productivity software that helps you do your job better.
Every part of this sentence made me so sad.
Your company trusts you to write code but not run code?
It is for protection against rootkits, not to be nanny to the employees.
It's a free Parsec alternative.
I've played modern DirectX games over RDP, and they most definitely used the local GPU for rendering. Wonder why it doesn't find the GPU?
Zed supports remote editing if that helps.
If no monitor is connected then RDP doesn't load gpu drivers. It can also make RDP performance much worse as HW accelerated video encoding is not working
I assume it is not an issue with other applications like Chrome or VSCode (I mean VSCode is Chrome). They usually have some sort of fallback rendering on these environments that work well enough.
It may not be an actual problem with zed either, despite the warning.
What does a text editor have to do to achieve >>awful performance<< when software rendering?
I also get that message when opening Zed in a VM. The performance is not actually awful, I can use it just fine.
Maybe it's misdetecting something? I used to use Remote Desktop for game development sometimes, as weather could make the journey to the office impossible, and the performance was always pretty ok, even with UE5 stuff making heavy use of Nanite running at 1440p.
And if it was actually software emulated, which I can't believe for a moment, though I admit I never checked (I just always assumed the window contents were transmitted via some kind of video encoder) - then I can't imagine that a text editor would be a problem.
The input latency might not be as good as you'd like.
> The input latency might not be as good as you'd like.
Yeah input latency is annoyingly rough, not super bad but _just_ laggy enough to make it annoying to use.
Debating how much I want to change things, I can directly punch into my Linux machine but all my dev IDEs are on a VM for a long list of reasons, and I doubt Zed is going to run on my old ThinkPad if it struggles on software rendering, but we'll see.
It's extremely refreshing to see the editor's memory and processor usage be smaller than the webapp tab I'm working on.
I'm really liking it thus far!
Its binary is half a gig in size, so just like a browser, nothing fresh about that.
It has a huge amount of treesitter modules, etc., statically compiled into the executable. They're not all loaded the instant you fire it up.
Size on disk is about 64x less relevant than size in RAM for me. To give Zed some credit in this area, it's statically linked and the Linux binary is half the size of the Windows one.
What could they be statically linking to have a 400MB executable?
A tonne of treesitter grammars
They wrote their own graphics renderering library for starters, that's bundled into the editor when compiled.
https://www.gpui.rs/
Every Unity game ships with three UI frameworks (IMGUI, UGUI, UI Elements) built-in, in addition to everything else the game engine supports, and the engine is only about 50 MB.
A renderer/rendering library for something as simple as a text editor is not (or is not supposed to be) a lot of code and should not take up a large amount of space in a binary. Odds are good it's the embedded Treesitter grammars and other things that Zed uses that takes up space.
It is Rust software, so there is probably a good 50-90% waste when it comes to raw needed vs. actual lines of code in there, but there is no way anyone makes a renderer so overcomplicated that it takes up a meaningful part of the 500 MB or so Zed takes up on Windows.
Is that really necessary for an IDE? Seems like a ton of added complexity for little to no trade-off, or even a downside considering...
Yes. Zed is snappy in a way that makes my copy of Sublime Text 3 feel slow. Compared to VSC it feels like going from a video game at 40 FPS with 2-3 frames of input lag to 120 FPS.
You can read about it here:
https://zed.dev/blog/videogame
The OS loads the entire binary into RAM.
It loads on demand, and pages you don't use don't need to be in RAM
considering how cheap storage is nowadays nitpicking about binary size is a very weird take. Are you editing code on an esp32?
Why don't you actually do some considering of how mistaken this superficial dismissal is: storage is not cheap where it matters - for example, your laptop's main drive. It doesn't help you that an external hard drive can be purchased cheaply, since you won't be able to conveniently upgrade your main mobile system. Also, the size of various content (games/photos/videos) has increased a lot, leaving constraints on storage in place despite the price drop.
I just refuse to use any software that balloons its filesize. Not because I can't afford storage, but because there are always alternatives with similar features packed into a fraction (usually less than 1%) of the filesize. If one can do it and the other can't, the other is a bad product that I have no intention of supporting.
We should strive to write better software that is faster, smaller and more resilient.
"Storage is cheap" is a bad mentality. This way of thinking is why software only gets worse with time: let's have a 400mb binary, let's use javascript for everything, who needs optimization - just buy top of the shelf super computer. And it's why terabytes of storage might not be enough soon.
I can empathize with how lazy some developers have gotten about program sizes. I stopped playing CoD because I refused to download their crappy 150+ GB games with less content than a lot of other titles that are much smaller.
That said, storage is cheap; that's not a mentality but a simple statement of fact. You think Zed balloons its file sizes because the developers are lazy. It's not true. It's because the users have become lazy. No one wants to spend time downloading the correct libraries to use software anymore. We've seen a rise in binary sizes in most software because of a rise in static linking, which does increase binary size but makes using and testing the actual software much less of a pain. Not to mention the benefits in reduced memory overhead.
VSCode and other editors aren't smaller because the developers are somehow better or more clever. They're using dynamic linking to call into libraries on the OS. This linking itself is a small overhead, but overhead nonetheless, and all so they can use Electron + JavaScript, the real culprits that made people switch to neovim + Zed in the first place. 400 MB is such a cheap price to pay for a piece of software I use on a daily basis.
I'm not here to convince you to use Zed or any editor for that matter. Use what you want. But you're not going to change this trend by dying on this hill, because unless you're working with actual hardware constraints, dynamic linking makes no sense nowadays. There's no such thing as a silver bullet in software. Everything is a tradeoff, and the resounding answer has been that people are more than happy to trade disk space for lower memory & CPU usage.
Does static linking really reduce memory and cpu usage significantly compared to dynamic linking?
I keep hearing this "storage is cheap" line, ever since I got my first Pentium 1 PC.
RAM is not cheap. Executables live in RAM when running.
Executables live in pages of RAM, and not all pages are in physical memory at once.
No, they're mmapped to RAM. Only the pages that get used are loaded to RAM.
Absent evidence to the contrary, the reasonable assumption here is that all of those pages are being used. The Rust compiler is not in the habit of generating volumes of code that aren't being called.
Only the parts that are being used (the working set).
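You can watch this happen yourself; here's a minimal Linux-only Rust sketch (VmSize/VmRSS are real /proc fields, the rest is illustrative):
use std::fs;

fn main() {
    // VmSize counts everything mapped into the address space (including the
    // entire executable); VmRSS counts only pages actually resident in RAM.
    // For a large, mostly-untouched binary, VmRSS stays far below VmSize.
    let status = fs::read_to_string("/proc/self/status").expect("needs Linux");
    for line in status.lines() {
        if line.starts_with("VmSize") || line.starts_with("VmRSS") {
            println!("{line}");
        }
    }
}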
Be careful with that assumption, though, because Zed fires up Node as well, for LSP.
Does the editor have an "absolutely no AI crap" version? Also, it says that it uses WSL, but I do not use it. I use Git Bash instead. Is it supported?
(I'm asking instead of finding out myself because of the little interest I have in this editor)
You can disable all of the AI features with a single config setting.
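If I remember right, it's the disable_ai flag in settings.json (Zed's settings file accepts JSONC-style comments; double-check the exact key name against the docs):
{
  // Turns off edit prediction, the agent panel, and the other AI features.
  "disable_ai": true
}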
I was able to change the integrated terminal to Git Bash.
You can just ask AI
Unfortunately, I tried to use zed as my daily driver, but the typescript experience was subpar. While the editor itself was snappy, LSP actions like "jump to declaration" were incredibly slow on our codebase compared to VS Code / Cursor.
Check if it supports using typescript-go as your LSP. IDEA recently added this, and I've been using it for a couple of months already; it's fantastic.
https://github.com/zed-extensions/tsgo does exactly this
That doesn't make sense, they both use tsserver under the hood.
I've heard that VSCode gets some special treatment and integrations with the typescript server that go deeper than normal LSP
To expand on this, the vscode editor can do a lot more than what is specified in LSP.
You can have custom functions in your language server that are not in the spec and have your editor-specific plugin call them.
I imagine there is a lot of this for typescript.
But I'm not sure whether this can explain the speed difference.
As the owner of a somewhat popular language, I checked to see if there's an extension available (there is!) as we publish a language server.
One thing I noticed in the implementation is that it looks like it is using stdin/stdout for the JSONRPC inter-process communication, rather than named pipes or sockets. VSCode uses named pipes. I wouldn't be at all surprised if that's a significant bottleneck - I'm about to file a bug.
EDIT - commented on the tsserver thread here: https://github.com/zed-industries/zed/issues/18698#issuecomm...
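For context, LSP over stdio frames every JSON-RPC message with a Content-Length header, so each request is a pipe write plus header parsing on the other side. A rough Rust sketch of the framing (illustrative, not Zed's actual code):
use std::io::{self, Write};

// LSP stdio framing: "Content-Length: N\r\n\r\n" followed by the JSON body.
fn send_lsp_message(payload: &str) -> io::Result<()> {
    let stdout = io::stdout();
    let mut out = stdout.lock();
    write!(out, "Content-Length: {}\r\n\r\n{}", payload.len(), payload)?;
    out.flush()
}

fn main() -> io::Result<()> {
    send_lsp_message(r#"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}"#)
}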
VSCode's TypeScript service does not currently use LSP, and many language queries run in the same thread as the extension (or "Extension Host"). That does not necessarily explain the performance difference though
You could have an lsp server of infinite speed, but that wouldn't help one bit if the bottleneck is how the client deals with the messaging.
The specific techniques used to send, receive, and parse JSON could matter.
Could, ya, but I'd be pretty impressed and sad if Rust didn't have really good JSON parsers/serializers by now.
What do you mean? It has exceptionally good crates for that, and has for more than a decade. Is there something you feel is missing?
Well exactly, I'm pretty sure that's what the GP is getting at — it would be a surprise if Rust didn't have good JSON support. Which it does. So it's unlikely to be the bottleneck.
It’s had them for a very long time, pre-1.0.
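Right - serde/serde_json have been the de facto standard for ages. A minimal sketch of parsing a JSON-RPC envelope with them (struct and fields are illustrative; assumes serde with the derive feature plus serde_json in Cargo.toml):
use serde::Deserialize;

// A pared-down JSON-RPC envelope, just to show typed parsing.
#[derive(Deserialize, Debug)]
struct RpcMessage {
    jsonrpc: String,
    id: Option<u64>,
    method: Option<String>,
}

fn main() -> serde_json::Result<()> {
    let raw = r#"{"jsonrpc":"2.0","id":1,"method":"textDocument/definition"}"#;
    let msg: RpcMessage = serde_json::from_str(raw)?;
    println!("{msg:?}"); // parsing itself is rarely the bottleneck
    Ok(())
}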
I had the same experience and the same outcome. Zed was super fast for editing but slow for rich features, which on net slowed me down compared with VSCode.
Electron compiles NodeJS with V8's pointer compression, which leads to up to a 50% decrease in memory usage, and might speed it up too.
Are you saying that VSCode runs tsserver in its own NodeJS process? Or are you saying that VSCode uses the NodeJS it ships to run tsserver in a different process?
I want to try Zed, but it’s not going to happen until they remove the nonconsensual download of unattested third party code.
You can easily disable automatic downloading of extensions.
https://zed.dev/docs/configuring-zed#auto-install-extensions
That option is either not explained well, or not what OP was really asking for. It doesn't specify what the default behaviour is for extensions that are neither explicitly enabled nor disabled. It also doesn't say how to disable everything apart from some set of extensions.
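For reference, the docs' example is per-extension, something like this (and I haven't verified what happens to extensions left unlisted):
{
  // Opt individual extensions out of (or into) automatic installation.
  "auto_install_extensions": {
    "html": false
  }
}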
What do you mean by that? The extensions? Do you want them to maintain that themselves?
Either you have a community that happily maintains extensions, or you have an "official" extension manager that rolls out an update once a year.
Extensions to me mean something that I explicitly choose to install (regardless of whether they automatically update). As far as I can tell, Zed goes about downloading code (“extensions”?) invisibly with no user input. See, for example, https://github.com/zed-industries/zed/issues/12589
> or you have an "official" extension manager that rolls out an update every next year
As a debian user, sign me up!
They clearly just want the option to say yes/no to the downloads, hence the use of the word "nonconsensual".
Seems like a reasonable ask.
If you open a .cpp file it will silently download clangd in the background. Same is true for other language servers AFAIK.
Nice but too little/too late, already switched to Linux - where Zed already works great!
Tested it a bit on a 2000-LOC Python file; autocomplete is sluggish compared to VSCode (VSCode completes almost instantly, while Zed waits about half a second). I didn't configure anything, so I'm a bit disappointed...
Zed's ootb Python experience is poor since they use Pyright as the LSP. There are threads on the issue tracker about replacing it; maybe check back in six months.
I have waited for this for months... but it's still only an x86_64 binary!
I love my ARM Surface Pro, and Zed would make a wonderful editor on this hardware. If anyone from Zed is reading this, please think about it!
I build Zed for Windows aarch64 from source -- works great, though the build process is quite slow on my 16GB Surface Pro. Definitely hoping for official binaries, though!
I think I got as far as installing the Visual Studio Installer so I could install Visual Studio and I just bailed on that whole thing, lol. I'll have to take some time out on a weekend to take another look :)
For whatever reason, Zed compilation on Windows with MSVC is extremely slow compared to the Linux counterpart.
https://github.com/rust-lang/rust/issues/145864 was opened because of the discrepancy.
I absolutely love the idea of Zed, and I'm regularly giving it a go. Typing in Zed really feels better than VSCode. It's hard to describe, but impossible to discard once you've used it for a short while.
Unfortunately, there's a bunch of small things still holding me back. Proper file drag & drop for one, the ability to listen to audio files inside the editor, and even a bunch of extensions, in particular one that shows a spectrogram for an audio file.
Maybe my biggest gripe is that Python support is still better in VSCode. Clicking on definitions is faster and more reliable.
I have to ask because I just can't wrap my head around it, what does 'ability to listen to audio files inside the editor' mean for a text editor?
In vscode you can click on various assets, like images or audio files, and then view them right inside vscode. If you work with datasets, the ability to inspect them is crucial.
Yes ofc I can use Finder instead but in vscode I just cmd+p.
The reason it's faster is largely that it doesn't have all those little quality-of-life features and the extension ecosystem. It's easyish to make software perform well if it doesn't do all that much. If you take base VSCode, no extensions, and just do raw text editing, it's hard for me to tell the difference between VSCode, Zed, or any other editor.
When vscode was released, Sublime was faster - and it stayed faster. But that wasn't enough to stop the rise of vscode.
It's great that they finally target all three major platforms! Let's see how developers using Windows treat them. I used to be a Neovim purist, spending days on my config, and I have barely touched it since I moved to Zed. It's so much nicer to use than VSCode (and its forks) because it's so snappy. I hope they ship leap.nvim (https://github.com/ggandor/leap.nvim) support soon; then I'll be completely happy!
I installed the beta a week or two ago. Many of the files I tried opening in it just did not work at all. It can only open utf-8 encoded files.
That is not a problem for code, but if you just want to open some random .txt or .csv file, those are often enough not utf-8 encoded, especially if they contain non-English text.
The GitHub issue for this is showing progress, but I think it's a mistake to promote the editor to stable as-is.
I'm editing a UTF16-LE file right now. Works just fine. It does show the BOM at the beginning, though.
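Encoding detection at this level is mostly just sniffing the first bytes; a minimal BOM check in Rust (the file name is hypothetical, and BOM-less files would still need heuristics):
use std::fs;

fn main() -> std::io::Result<()> {
    // UTF-16 files usually begin with a byte-order mark; UTF-8 needs none,
    // which is why non-UTF-8 files without a BOM require guesswork.
    let bytes = fs::read("example.txt")?; // hypothetical file
    match bytes.get(..2) {
        Some(&[0xFF, 0xFE]) => println!("UTF-16LE BOM"),
        Some(&[0xFE, 0xFF]) => println!("UTF-16BE BOM"),
        _ => println!("no UTF-16 BOM; try UTF-8 or fall back to heuristics"),
    }
    Ok(())
}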
The project-wide search looks quite unusable compared to Visual Studio Code: you only see the matches one file at a time.
I am using nightly on Windows and startup time is very slow. It can take 10 seconds to boot up. To quickly edit random files, I would open notepad++ or simple notepad instead, which isn't what I expected from Zed.
I use this as a Sublime replacement on macOS. So far happy with it. Only use it as a general-purpose text editor.
Okay, I may occasionally do some code editing in it. But most of the time it's gotta be VSCode or vim/nvim.
The windowing is pretty broken if you use system scaling https://github.com/zed-industries/zed/issues/40272
I tried it for a bit. But unless you adopt their choice of LSP/linter/whatever over what you're used to, you'll waste even more time customising Zed to match your previous setup.
Technically, Zed's edit prediction ("Super Complete") will almost never catch up with Cursor's, or even the free Windsurf's. Look a little closer and you will see that Cursor's completion requests are not initiated by the editor itself; the editor only interacts with an LSP-like process. Taking the free Windsurf as an example: when Windsurf starts, it launches a process at /Applications/Windsurf.app/Contents/Resources/app/extensions/windsurf/bin/languageservermacosarm. This process is responsible for syntactic analysis of the current workspace and for local indexing. The editor communicates with it over localhost, encoded with protobuf, sending the current cursor position, the currently open tabs, recently viewed files, etc. The language server computes locally from this information, and if inference cannot be completed locally, it extracts the necessary context and talks to the server-side LLM. That is what keeps the completion smart.

Zed's edit prediction, however, has no such local language server. It talks directly to the server, which runs their Zeta model. This pattern is less smart, takes up a lot of bandwidth on mostly useless context, and lacks the useful information an LSP would supply. I hope this technical interpretation gives you an understanding of why Zed's edit prediction is technically behind. And since Zed's language servers are configured by third parties, Zed itself does not ship its own language server; perhaps they are developing one now, but that is obviously difficult. Given all that, I don't understand why Zed charges for a feature that clearly lags behind.

You may ask how I know so much about this: I am a Cursor subscriber, but I wanted the Zed editor, so I tried to point ZED_PREDICT_EDITS_URL at an endpoint I developed that relays requests to Cursor, to get Cursor-grade completion inside Zed. It ultimately failed, for the reasons above.
Do either Windsurf or Cursor work well with local or no AI? One of the things I like about Zed is that it can, and is completely free.
Excuse me, can you read what I wrote? I didn't say the Zed editor is bad; I just said that Zed's Predict Edit isn't good enough, or at least that they shouldn't charge for it.
I never thought you said Zed was bad. I was genuinely asking since Cursor and Windsurf exclusively advertise AI features, whereas Zed is also a good editor without AI. I never bothered to try them because I inherently distrust AI that I don't control, and find the cloud stuff to be too expensive on top. Just wanted to know if they're any good, or can even be used at all, without AI; since their websites say nothing on the subject.
Yes, Windsurf and Cursor will also work without AI, since they're just forks of VSCode. I like Zed very much. If you don't like AI, then Zed is the best editor for you (and for me, if Zed's AI gets good enough).
They shouldn't charge for it because their competitors have money to burn?? That is not how business works.
I'm just saying they shouldn't charge for Predict Edit, because it's not even as good as the free alternatives. I don't know why you've become such a fanatic; if you want to support them, you can donate.
Paragraphs are overrated.
Zed is awesome. It does everything I need. I easily find everything I'm looking for. It's fast, and I can use forked CLI terminals from the IDE via ACP mode. This means Cerebras and Qwen code 480b for super fast and cheap incredibly smart CLI agents.
Is this something like Sublime? Light/responsive editor for one-off files? But maybe with some better introspection? That would fill a niche for me; trying it. FYI download+install is the smoothest experience of any software I've loaded recently I didn't build myself. Going to daily-drive it for a bit and see what's up; promising!
as a 10+ year Sublime user, Zed is the best of the more "modern" GUI editors I've tried.
I haven't fully switched over to using it as my daily-driver because Zed's heavy reliance on GPU rendering means that it locks up every time I suspend & resume my Linux box [0,1] but I'm optimistic about the progress they're making on it.
0: https://github.com/zed-industries/zed/issues/7940
1: https://github.com/zed-industries/zed/issues/23288
Was a heavy Sublime user for many years, slowly migrated to vim (first Sublime with vim keybindings), but now daily-drive LazyVim, and its defaults are very sane.
Quick install on any platform, and it just works. And obviously plenty of configuration is available to you, but if you haven't tried it, I'd give it a go.
Still on Sublime, and helix in terminal.
Tried Zed; it's interesting, but several things are missing, including the ability to skip extension auto-updates... which imho is critical for security.
I still use Sublime for quick text-file changes and then Zed for programming/AI-assisted tasks.
I also use Sublime heavily and then started with Zed for the AI assistance.
Uhm... tried to use it, but it feels so lacking. Yeah, fine - it's a "lean" (text?) editor, but that's about it. Opening any project (even with plugins enabled) doesn't show any "improvements" (code navigation, sane autocomplete, project structure detection)...
Yes, the same experience. Although it had amazing support for Vim keybindings, it offered me nothing that I wasn't already getting from Neovim. The in-editor chat seems like it could be useful, but I don't have any need for it.
Nice to see. Will probably start using it over quick edits on Windows.
I'd like to properly give it a go one day due to the effort put into its vim keybindings, but until then I'll stick to neovim.
If I could finally ditch Visual Studio, that'd be nice.
I'm so impressed by how quickly this team can ship new features. It seems like every few weeks there's a new major update!
My work PC is a $600 laptop with an on-board GPU, will I notice a difference between this and Vscode?
A $600 laptop can mean a MacBook with an Apple Silicon CPU in this day and age.
When they say $600 they probably mean brand new pricing.
Last I checked, the cheapest MacBook is $999+ for the Air.
I bought two of mine for $700 brand new at Best Buy. And I didn't even wait for a sale.
> Zed isn't an Electron app; we integrate directly with the underlying platform for maximal control.
Sounds great. Looking forward to doing a simple test run with Astro SSG
I switched from VSCode to Zed as my "I just need to edit this 1 little text file" app because VSCode was way too spammy. Every time I open it, it wants to update something, or complain about some extension, or complain that I had a file open and now it MUST close it, because heaven forbid I open two things at once.
I hope Zed stays clean. We'll see. So far so good. Was quite happy they had a JetBrains base hotkey setup. Had to add a few of my own but I could pretty easily, nothing missing so far.
Neat project for collab dev work
Tried to switch a few times.
I can't describe it any other way than that it feels cold to use.
I wonder if anyone else felt the same in earlier versions and feels that it's fixed now.
What is Zed? A vscode like editor?
Pretty much, yeah.
It's native code, and it shows. It also has some AI integrations that are different, but I don't know how well they work.
It looks nice, but I haven't managed to use it for long.
Is it on winget?
Soon https://github.com/microsoft/winget-pkgs/pull/303728
These ones actually: https://github.com/microsoft/winget-pkgs/issues/304075
Just downloaded and installed it on a Mac. It's asking for access to pretty much every folder and even music apps. What a privacy nightmare. NOPE, will stick to Cursor.
So many new options. NeoVim and Helix in the terminal, Zed as a beautiful GUI editor. A few years ago I thought it was Emacs/vim for us hackers, and for the rest it was Sublime, later VS Code, and then nothing. But wow, here we are: ever more options. I'm loving it.
For some of us it has been IDEs since the Borland days on MS-DOS, unless forced otherwise.
ok dear thank you
I don't use Windows, but this is a good development, as all platforms should be supported for an editor to be worth using. I have been a happy Zed user for a long time, and I'm glad it has kept up with our demands, adding AI, Git, etc. The integration of CLI tools into the AI is also excellent and really refreshing.
Neovim > everything else.
Bye
Viable C# support has stalled https://github.com/zed-extensions/csharp/pull/11
I didn't know bundling an app for the most popular OS was that hard.
In January 2024 Zed was open sourced, with only Mac support. One week later Dzmitry Malyshau showed up with a prototype renderer for Linux. In July 2024 official Linux builds were available, mostly thanks to community contributors. The swift Linux support is a tale of a community stepping up in open source development, not one of a vendor providing something. So Windows is certainly not popular with the kind of crowd that gets excited about an editor.
For Zed users, yes, they don't care. (I didn't use Windows for years either, but that Windows is popular is a fact, not an opinion.)
What I meant was that there are so many problems with Windows that the team couldn't do it quickly (they posted about it before: https://zed.dev/blog/windows-progress-report).
I was just surprised, as I thought building a GUI app on Windows must be easy: surely there are already libs/frameworks available to support that? It's just not so.
Not the most popular for programmers
It is, based on Stack Overflow survey: https://survey.stackoverflow.co/2025/technology#most-popular... Couldn't find a way to access Statista report.
It obviously varies a lot by the preferred languages, but Windows is still at the top, on average.
I hope you realize that Linux in this table is split up into many individual options where the others are not, i.e. that Ubuntu, Debian, Arch etc also all count towards Linux?
Those are further breakdowns. It's not a pie chart, not all bars need to add up to 100%.
Only if they happen to be UNIX programmers.
Turns out there are plenty of other jobs as programmers.
First, you should fix fundamental operations on Mac and the other platforms - for example, when you stash or perform file operations from other tools, Zed's state goes out of sync.
You can build the most beautiful and fastest IDE, but with bugs like this, it's useless.
It takes a different tier of software engineer to be able to write a text editor, let alone the fastest one...
I worked at multiple FAANGs and can't see me or any of my colleagues capable of doing this.