Generally, kernel-level attacks and noisy-neighbor performance impacts on the security side.
On the functional side, without a kernel per guest you can't allow kernel access for things like eBPF, networking, and nested virtualization, plus lots of other important features.
Theoretically you can get fairly complete security via containers plus a gVisor setup, but at the expense of a ton of syscall performance and disabling lots of features (which is a 100% valid approach for many use cases).
The technical challenges in getting memory forking to deliver those sub-second start and fork times are significant. I've seen the pain of trying to achieve that level of state transfer and rapid provisioning. While "EC2-like" gets the point across for many, going bare metal reveals the practical limits of cloud virtualization for high-performance, complex workloads like these. It shows a real understanding of where cloud abstraction helps and where it just adds overhead.
The cost argument for owning the hardware for this specific use case also makes sense, considering the scale these agent environments will demand. Also worth noting, sandboxes are effectively an open attack surface; architecting them not to be in your main VPC is a sound security decision from the start.
Would love to understand how you compare to other providers like Modal, Daytona, Blaxel, E2B and Vercel. I think most other agent builders will have the same question. Can you provide a feature/performance comparison matrix to make this easier?
I'm working on an article deep-diving into the differences between all of us. I think the goal of Freestyle is to be the most powerful and most EC2-like of the bunch.
I haven't played around with Blaxel personally yet.
E2B/Vercel are both great hardware-virtualized "sandboxes".
Freestyle VMs are built on feedback from users who found that things they expected to be able to do on existing sandboxes didn't work. A good example: Freestyle is the only provider of the above (I haven't tested Blaxel) that gives users access to the boot disk, or the ability to reboot a VM.
Fly.io sprites is the most similar to us of the bunch. They do hardware virtualization as well, have comparable start times and are full Linux. What we call snapshots they call checkpoints.
The big pros of Sprites over us is their advanced networking stack and the Fly.io ecosystem. The big cons are that Sprites are incredibly bare bones — they don't have any templating utilities. I've also heard that Sprites sometimes become unavailable for extended periods of time.
The big pros of Freestyle over Sprites is fork, advanced templating, and IMO a better debugging experience because of our structure.
Thanks for the thoughtful response. I'm predominantly a self-hoster, but I think your product makes a lot of sense for a wide variety of users and businesses. I'm excited to try out freestyle!
Freestyle/other providers will likely provide a better debugging experience, but that's something you can probably get past for a lot of workloads.
The time when you/anyone should think about Freestyle/anyone is when the load spikes/the need to create hundreds of VMs in short spikes shows up, or when you're looking for some of the more complex feature sets any given provider has built out (forks, GPUs, network boundaries, etc).
I also highly recommend self-hosting anything you do outside of your normal VPC. Sandboxes are the biggest possible attack surface, and it's a feature that we're not in your cloud; if we mess up security, your app is still fine.
Obviously your service/approach is different than exe.dev, more like Sprites, but like you said, more targeted/opinionated toward AI coding/sandboxing tasks. Interesting space for sure!
I built yoloAI, which is a single go binary that runs anywhere on mac or linux, sandboxing your agents in disposable containers or VMs, nested or not.
Your agent never has access to your secrets or even your workdir (only a copy, and only what you specify), and you pull the changes back with a diff/apply workflow, reviewing any changes before they land. You also control network access.
Still WIP, but the core works — three rootfs tiers (minimal Ubuntu, headless Chromium with CDP, Docker-in-VM), OCI image support (pull any Docker image), automatic thermal management (idle VMs pause then snapshot to disk, wake transparently on next API call), per-user bridge networking with L2 isolation, named checkpoints, persistent volumes, and preview URLs with auto-wake.
Fair warning: the website is too technical and the docs are mostly AI-generated, both being actively reworked. But I've been running it daily on a Hetzner server for my AI agents' browser automation, and deploy previews.
I'd love any feedback if you want to go ahead and try it yourself
We do auto suspend depending on your configured timeout. We'll pause your VM and when you come back the processes will be in the exact same state as when you left.
But your pricing page suggests that that is not available without a subscription: in the on-demand pricing section "persistent Snapshots" and "Persistent VM's" have an 'x'.
We do not allow long term persistence for the free tier.
This is purely a defense mechanism; I don't want to guarantee storing the data of an entire VM forever for non-paying users. We have persistence options for them, like Sticky persistence, but it doesn't come with the reliability of long-term persistent storage.
Is it possible to run a Kubernetes cluster inside one? (E.g. via KIND.)
If so, we'd very much like to test this. We make extensive use of Claude Code web but it can't effectively test our product inside the sandbox without running a K8s cluster
Just want to say that even if alternatives exist (not necessarily exact capabilities obviously), I appreciate what seems to be genuine excitement on your part of having built something cool / best in class.
Cool! I've been using your API for running sandboxed JS. Nice to see you also support VMs now.
> we mean forking the whole memory of it
How does this work? Are you copying the entire snapshot, or is this something fancy like copy-on-write memory? If it's the former, doesn't the fork time depend on the size of the machine?
We're using copy-on-write for the memory itself. Fork time is completely decoupled from the size of the machine.
Creating full snapshots causes a 2-4 second interruption in the VM due to sheer I/O, which we didn't want here.
What's especially cool about this approach: not only is fork time O(1) with respect to machine size, it's also O(1) with respect to the number of forks.
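As a rough mental model (a toy sketch, not Freestyle's actual VMM), copy-on-write gives O(1) forks because a fork only records a pointer to its parent's pages; a page is copied the first time the child writes to it:

```python
class CowMemory:
    """Toy copy-on-write page store. A fork shares the parent's pages
    and copies a page only on first write, so fork cost is O(1).
    (A real implementation also copies on parent writes; omitted here
    for brevity.)"""

    PAGE_SIZE = 4096

    def __init__(self, parent=None):
        self._pages = {}      # page_no -> bytes, only pages we wrote
        self._parent = parent

    def read(self, page_no):
        # Walk up the parent chain to find the nearest copy of the page.
        node = self
        while node is not None:
            if page_no in node._pages:
                return node._pages[page_no]
            node = node._parent
        return b"\x00" * self.PAGE_SIZE  # untouched pages read as zeros

    def write(self, page_no, data):
        # Private copy: the parent and any siblings are unaffected.
        self._pages[page_no] = data

    def fork(self):
        # O(1): no pages are copied; the child just references the parent.
        return CowMemory(parent=self)


vm = CowMemory()
vm.write(0, b"hello")
child = vm.fork()          # instant, regardless of how much vm holds
child.write(0, b"world")   # copies only this one page
assert vm.read(0) == b"hello" and child.read(0) == b"world"
```

Since `fork()` only stores a parent reference, forking N times from the same parent is also O(1) per fork, which matches the O(1)-in-number-of-forks claim above.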
It doesn't seem very easy to calculate how much it would cost per month to keep a mostly-idle VM running (for example, with a personal web app). The $20/month plan from exe.dev seems more hobbyist-friendly for that. Maybe that's not the intended use, though?
Non open source and non local SAAS sandboxes are offensive to even try to launch. No one needs this and the only customers will be vibe coders who just don't know any better. There are teams building actual sandboxes like smolmachines, podman, colima and mre. At least be honest and put the virtualisation tech you are using, as well as the fact that it's closed-source SaaS, on the landing page to save people time.
Our users are platforms, and many of the best already build on us.
Self hosting is a valuable feature but our technology is unfriendly to small nodes — it will not work on consumer hardware. Many of the optimizations we spend our time on only seriously kick in above 2TB of storage and above 500GB of RAM.
> Non open source and non local SAAS sandboxes are offensive to even try to launch. No one needs this and the only customers will be vibe coders who just don't know any better.
This is simply not true, but also not a very charitable take.
Congrats guys!
Would you share some technical details? I bet you have great stories to tell.
Let's see: what is forking here? Do you completely copy the disk, take a RAM snapshot, and run it? If it's CoW, what about the RAM? You mentioned 8 GB RAM VMs; it sounds impossible to copy 8 GB in under 500 ms, let alone the disk.
Insane. Is it possible to fork to another bare-metal machine? Maybe multi-region, like Fly.io?
If not, I bet you have huge disks on your machines to store all the snapshots (you said you store them and bill only for disk space).
So forking across multiple nodes at that speed is not possible — we run extremely beefy nodes in order to avoid moving VMs across nodes as much as possible.
We are researching systems for hot-moving VMs across nodes, but those would have very different performance characteristics.
Our tech is not decades old, so there is a chance we've missed something, but our layer management is atomic, so I'd be shocked if you were able to corrupt state across forks/snapshots.
Seen this pattern many times -- the root cause is usually not the cloud itself but a combination of over-provisioned resources, missing cost visibility, and reactive budgeting. Things like untagged resources, idle NAT gateways, oversized instances, and no automated scale-down policies quietly eat the budget.
At UPSystems we typically start with a quick resource rightsizing + tagging audit, set up real-time cost alerts, and move heavy AI/LLM workloads to on-prem where it makes financial sense. Happy to share our FinOps checklist if useful -- feel free to DM.
Any ideas for locking down remote access from an untrusted VM? Cloudflare has object-based capabilities and some similar thing might be useful to let a VM make remote requests without giving it API keys. (Keys could be exfiltrated via prompt injection.)
So there are 3 solutions to this; Freestyle supports 2 of them today:
1. Freestyle supports multiple Linux users. All Linux users on the VM are locked down, so it's safe to have a part of the VM that holds your secret keys/code that the other parts cannot access.
2. A custom proxy that routes the traffic, with the keys kept outside.
3. We're working on a secrets API to intercept traffic and inject keys based on specific domains and protocols, starting with HTTP headers, HTTP Git authentication, and Postgres. That'll land in a few weeks.
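A minimal sketch of the gateway idea in (2) and (3): the proxy outside the VM holds the secrets and rewrites outbound requests per destination, so the agent never sees a key it could exfiltrate. The host names, header names, and token placeholders below are all made up for illustration; a real gateway would also constrain protocols and paths.

```python
# Hypothetical egress-gateway logic: the VM sends requests with no real
# credentials; this hook, running outside the VM, injects them.
SECRETS = {  # destination host -> header to inject (values are placeholders)
    "api.github.com": ("Authorization", "Bearer <github-token>"),
    "db.internal": ("X-Db-Password", "<postgres-password>"),
}

def inject_credentials(host: str, headers: dict) -> dict:
    """Strip anything the (untrusted) VM tried to send as credentials,
    then add the real secret only for allowlisted destinations."""
    clean = {k: v for k, v in headers.items() if k.lower() != "authorization"}
    if host in SECRETS:
        name, value = SECRETS[host]
        clean[name] = value
    return clean

# A prompt-injected agent can't leak what it never sees: whatever it
# sends as Authorization is discarded before the real key is added.
sent = inject_credentials("api.github.com", {"Authorization": "Bearer fake"})
assert sent["Authorization"] == "Bearer <github-token>"
assert "Authorization" not in inject_credentials("evil.example", {"Authorization": "x"})
```

The same shape works for non-HTTP protocols (Git auth, Postgres passwords): the gateway terminates the connection, authenticates upstream with the real credential, and relays traffic.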
We're working on a similar solution at UnixShells.com [1]. We built a VMM that forks, and boots, in < 20ms and is live, serving customers! We have a lot of great tools available, via MIT, on our github repo [2] as well!
? You say 'yes' but you seem to be answering a different question. Docker desktop only makes me choose a max ram - it dynamically scales RAM usage. I don't need fully automatic like that, but the ability to vertically scale RAM for an existing instance is really important, particularly given the cost of RAM these days.
Generally, compared to those two: more powerful. Freestyle VMs are full Debian machines, with support for systemd, Docker-in-Docker, multiple users, hardware virtualization, etc. Daytona and E2B are both great "sandbox" providers, but they don't really feel like VMs; you can't run everything you can on an EC2 instance.
We also support the forking/snapshotting/long-running jobs that they struggle with.
The problem with agents is that they are currently way too expensive. Maybe 100 times more expensive. Another big issue is the lack of interactivity with an agent. Therefore, for now, agentic development is only viable from your own machine. And there, isolation is less of an issue and easier to manage.
It's hard to tell what this is or how it compares to other things that are out there, but what I latched onto is this:
> Freestyle is the only sandbox provider with built-in multi-tenant git hosting — create thousands of repos via API and pair them directly with sandboxes for seamless code management. On top of that, Freestyle VMs are full Linux virtual machines with nested virtualization, systemd, and a complete networking stack, not containers.
Edit: I realize the Loom is a way to look at it. Loom interrupted me twice and I almost skipped it. However, it gave me a better idea of what this does: it "invents" snapshotting and restoring of VMs in a way that appears faster. That actually makes sense, and I know it isn't that hard to do with how VMs work, and that it greatly benefits from having only part of the VM writable and having little memory used (maybe it has read-only memory too?).
So the snapshotting tech is actually 100% independent of Git.
Git is useful for branching vs. forking (i.e., you can't merge two VM forks back together), but all the tech I showed in the Loom exists independently from Git.
The hard part of it was making the VM large and powerful while making snapshotting/forking instant, which required a lot of custom VMM work.
> The hard part of it was making the VM large and powerful while making snapshotting/forking instant, which required a lot of custom VMM work.
I don't find "large and powerful" in reference to a VM to sound compelling. What should be large? The memory? The root disk? As I alluded to in my comment, I'm more curious about what can be made small.
Also I'm skeptical that if I forked a vm running a busy Gas Town that it would be very light or fast in how it forks. A well behaved sqlite I could see, but then I'd wonder why not just fork the storage volume containing the database...
So that's what we did. We've made forking a whole Gas Town happen in hundreds of milliseconds. Try it — you can definitely see it working on the free tier.
With respect to "large and powerful": RAM and size are important, but I was more so referring to full Linux power. The ability to run nested virtualization, eBPF, FUSE, and the powerful features of a normal Linux machine instead of a container.
Well, that does sound pretty impressive then. And as a champion of open source, it wouldn't make me feel like I was getting locked in, because I could live with the regular speeds (on a server with KVM or a nested virtualization setup).
If you put your Gmail credentials into a VM that an AI agent dealing with untrusted prompts has access to, they should be treated as leaked and be disabled immediately.
However, if you don't put your administrative credentials inside of the VM and treat it as an unsafe environment you can safely give it minimal permissions to access specific things that it needs and using that access it can perform complex tasks.
I have so many interesting problems in AI; sandboxing isn't one of them. It's a pointless exercise, yet disproportionately many people love to do this. Probably because sandboxing doesn't feel as magical as agents themselves, and more like the old times of "traditional" software development.
It is a mostly pointless exercise if the goal is trying to contain negative impact of AI agents (e.g. OpenClaw).
It is a very necessary building block for many common features that can be steered in a more deterministic way, e.g. "code interpreter" feature for data analysis or file creation like commonly seen in chat web UIs.
Believe it or not, once you start working for a regulated industry, it is all you would ever think of. There, people don't care if you are vibing with the latest libraries and harnesses or if it's magic, they care that the entire deployment is in some equivalent of a Faraday cage. Plus, many people just don't appreciate it when their agents go rm -rf / on them.
With respect to the market, every single sandbox sucks. I'm not gonna shit talk competitors but there is not a good sandboxing platform out there yet — including me — compared to where we'll be in 6 months.
We've heard that all the platforms have consistent uptime, feature-completeness, networking, and debugging issues. And on our own platform, we're not 1/10th of the way through solving the requests we've gotten.
Next generation of Agents needs computers, and those computers are gonna look really different than "sandboxes" do today.
I don't think you're wrong, but if you really want to really re-think the approach, building an orchestration layer for Firecracker like every other company in the space is doing is probably not it.
Looks cool - would be great to see a PR with some benchmarks on this repo if you can: https://github.com/computesdk/benchmarks
edit: just saw the pr for freestyle. something seems to be blocking, but curious how it compares: https://github.com/computesdk/benchmarks/pull/41
Nice work.
However, 50 concurrent VMs is not a lot. Similar limits exist on all cloud providers, except perhaps AWS, where the cost is prohibitive and it is slow.
Earlier this year, we ended up rolling our own. It is nothing special. We keep X number of machines in a warm pool. Everything is backed by a cluster of Firecracker VMs. There is no boot time that we care about. Every new sandbox gets a VM instantly as long as the pool is healthy.
Thanks for sharing your approach!
> It is nothing special. We keep X number of machines in a warm pool.
I'd love to better understand the unit economics here. Specifically, whether cost is a meaningful factor.
The reason I ask is that many startups we've seen focus heavily on optimizing their technology to reduce cold/boot startup times. As you pointed out, perceived latency can also be improved by maintaining a warm pool of VMs.
Given that, I'm trying to determine whether it's more effective to invest in deeper technical optimizations, or to address the cold start problem by keeping a warm pool.
50 is not heavy; what is heavy is 1,000 VMs that can be paused and brought back at a rate of 50 in 1 second.
Though generally, yeah, hand-rolling this stuff can work at the scale of 50 VMs; it becomes a lot harder once you hit hundreds/thousands.
Wow, forking memory along with disk space this quickly is fascinating! That's something that I haven't seen from your competitors.
If the machine can fork itself, it could allow for some really neat auto-forking workflows where you fuzz the UI testing of a website by forking at every decision point. I forget the name of the recent model that used only video as its latent space to control computers and cars, but they had an impressive demo where they fuzzed a bank interface by doing this, and it ended up with an impressive number of permutations of reachable UI states.
That’s what I’m hoping for!
I’m super interested since it seems like you have given everything a lot of thought and effort but I am not sure I understand it.
When I’m thinking of sandboxes, I’m thinking of isolated execution environments.
What does forking sandboxes bring me? What do your sandboxes in general bring me?
Please take this in the best possible way: I'm missing a use case example that's not abstract and/or small. What's the end goal here?
So isolation is correct. Forking a sandbox gives you multiple exact duplicates of isolated environments.
When your coding agent has 10 ideas for what to do, to evaluate them correctly it needs to be able to evaluate them in isolation.
If you're building a website-testing agent and it's halfway down a website, with a form half filled out, a session ongoing, etc., and it realizes it wants to test 2 things in isolation, forking is the only way.
We also envision this powering the next generation of devcycles "AI Agent, go try these 10 things and tell me which works best". AI forks the environment 10 times, gets 10 exact copies, does the thing in each of them, evaluates it, then takes the best option.
Yep, I can see this, especially when the agent is spinning up test servers/smoke tests and you don't want those conflicting. How do we reconcile all the potentially different git hashes though, upstream I guess, etc.? (This might have an easy answer; I'm not super proficient with git, so forgive me.)
So we recommend branch per fork, merge what you like.
Currently you have to change the branch on each fork individually, and that's unlikely to change in the short term due to the complexity of git internals, but it's not that hard to do yourself: `git checkout -b fork-{whateverDiscriminator}`
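The branch-per-fork pattern can be sketched roughly like this, assuming a generic `exec_in(fork, cmd)` hook standing in for whatever exec API your sandbox provider exposes (the helper names here are made up for illustration):

```python
def prepare_fork_branches(forks, exec_in):
    """Give each fork its own branch so results can later be pushed
    upstream and merged selectively, without the forks fighting over
    the same ref. `exec_in(fork, cmd)` runs a shell command in a fork."""
    branches = []
    for i, fork in enumerate(forks):
        branch = f"fork-{i}"
        exec_in(fork, f"git checkout -b {branch}")
        branches.append(branch)
    return branches

# With a recording stub in place of a real sandbox exec API:
log = []
names = prepare_fork_branches(["vm-a", "vm-b"], lambda f, c: log.append((f, c)))
assert names == ["fork-0", "fork-1"]
assert log[0] == ("vm-a", "git checkout -b fork-0")
```

Back on your main checkout, you would then fetch each fork's branch and `git merge fork-0` (or cherry-pick) whichever result you like, per the "branch per fork, merge what you like" recommendation above.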
Have you considered git worktree?
Great for simple things, but git worktrees don't help when you have to fork running processes like Postgres or complex apps.
For postgres there are pg containers, we use them in pytest fixtures for 1000's of unit-tests running concurrently. I imagine you could run them for integration test purposes too. What kind of testing would you run with these that can't be run with pg containers or not covered by conventional testing?
I'll say this is still quite useful win for browser control usecases and also for debugging their crashes.
Agreed, the thing I'd be most interested in is the isolated execution environment you mentioned. Agents running autopilot are powerful. Agents running unsupervised on a machine with developer permissions and certificates where anything could influence the agent to act on an attacker's behalf is terrifying
I recommend running the agent harness outside of the computer. The mental model I like to use is the computer is a tool the agent is using, and anything in the computer is untrusted.
I would recommend not giving an agent the full run of any computing environment. Do you handle fine-grained internet access controls and credential injection like OpenShell does?
I used to believe this, but I think the next generation of agents is much more autonomous and just needs a computer.
The work of a developer is open ended, so we use a computer for it. We don't try to box developers into small granular screwdrivers for each small thing.
That's what's coming to all agents: they might want to run some analysis with Python, generate a website/document in TypeScript, and store data in Markdown files or in MongoDB. I expect them to get much more autonomous, and with that, to end up just needing computers like us.
The difference is that I am not always legally liable for what a rogue developer does with their computer - if I had no knowledge of what they were up to and had clear policies they violated then I'm probably fine. But I'm definitely always liable for anything an agent I created does with the computer I gave it.
And while they are getting better I see them doing some spectacularly stupid shit sometimes that just about no person would ever do. If you tell an agent to do something and it can't do what it thinks you want in the most straightforward way, there is really no way to put a limit on what it might try to do to fulfill its understanding of its assignment.
The problem is the agent, which should be treated untrusted. The computer isn’t the problem
Kind of. The chat logs of the agent are trustworthy, as is any telemetry you have on it or coming out of the VM. Its behavior should be treated as probabilistic and therefore untrustworthy.
It’s untrustworthy because its context can be poisoned and then the agent is capable of harm to the extent of whatever the “computer” you give it is capable of.
The mitigation is to keep what it can do to “just the things I want it to do” (e.g. branch protection and the like, whitelisted domains/paths). And to keep all the credentials off its box and inject them inline as needed via a proxy/gateway.
I mean, that’s already something you can do for humans also.
I think one of the very few who actually support ebpf & xdp, which you do need when you're building low level stuff. + the bare metal setup is like out of the world lol.
Tx it took a lot of work lol
Is this similar to https://instavm.io/?
Never tried them. I think the tricky thing about VM providers is that the difference really is all in the execution. These guys seem great in concept, but I don't know enough about how well they actually work.
I currently use lightweight VMs (Proxmox containers) and git worktrees. I can fork an existing VM in seconds. It is not entirely clear to me what I would gain from using your solution.
Proxmox forking in a few seconds is a miracle!
These are likely only a better value for you at large scale/if you start wanting to run hundreds.
This is awesome - the snapshotting especially is critical for long running agents. Since we run agents in a durable execution harness (similar to Temporal / DBOS) we needed a sandboxing approach that would snapshot the state after every execution in order to be able to restore and replay on any failure.
We ended up creating localsandbox [0] with that in mind by using AgentFS for filesystem snapshotting, but our solution is meant for a different use case than Freestyle - simpler FS + code execution for agents all done locally. Since we're not running a full OS it's much less capable but also simpler for lots of use cases where we want the agent execution to happen locally.
The ability to fork is really interesting - the main use case I could imagine is for conversations that the user forks or parallel sub-agents. Have you seen other use cases?
[0] https://github.com/coplane/localsandbox
Deterministic testing of edge cases. It can be really hard to recreate weird edge cases of running services, but if you can create them we can snapshot them exactly as they are.
I built something like this at work using plain Docker images. Can you help me understand your value prop a little better?
The memory forking seems like a cool technical achievement, but I don't understand how it benefits me as a user. If I'm delegating the whole thing to the AI anyway, I care more about deterministic builds so that the AI can tackle the problem.
So first, MicroVM != container, and a container is not a secure isolation system. I would not run untrusted containers on your nodes without extra hardening.
The memory forking was originally invented because for AI app builders and first-response-driven applications it's extremely important that they are instant (the difference between running bun dev and the dev server already being running).
However it's much more generally applicable; Postgres is a great example of this. You can't fork the filesystem under Postgres and get consistency. Same thing with browser state, a weird server state, or anything that exists in memory. Memory forking gives a huge performance boost while snapshotting what's actually going on at one instant.
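The Postgres point generalizes: anything still buffered in memory simply isn't on disk yet, so a disk-only fork misses it. A trivial way to see the effect, using plain Python write buffering as a stand-in for a database's dirty pages:

```python
import os
import tempfile

# A write sits in the userspace buffer until flushed; a disk-level
# snapshot taken in between would capture an empty/stale file.
path = os.path.join(tempfile.mkdtemp(), "data.log")
f = open(path, "w")
f.write("row-committed-in-memory")   # buffered, not yet on disk
on_disk_before = open(path).read()   # what a disk-only fork would see
f.flush()                            # now it reaches the filesystem
on_disk_after = open(path).read()
f.close()
```

Forking memory and disk together captures both halves of that state at the same instant.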
What does this protect you from that you’re exposed to by running a well-crafted rootless container on a system with SELinux or similar?
Generally kernel level attacks and neighbor performance impacts on the security side.
On the functional side without a kernel per guest you can't allow kernel access for stuff like eBPF, networking, nested virtualization and lots of important features.
Here is a good blog from docker explaining how even the best container is not as safe as a MicroVM https://www.docker.com/blog/containers-are-not-vms/
Theoretically you can get fairly complete security via containers + a gVisor setup, but at the expense of a ton of syscall performance and disabling lots of features (which is a 100% valid approach for many use cases).
The technical challenges in getting memory forking to deliver those sub-second start and fork times are significant. I've seen the pain of trying to achieve that level of state transfer and rapid provisioning. While "EC2-like" gets the point across for many, going bare metal reveals the practical limits of cloud virtualization for high-performance, complex workloads like these. It shows a real understanding of where cloud abstraction helps and where it just adds overhead.
The cost argument for owning the hardware for this specific use case also makes sense, considering the scale these agent environments will demand. Also worth noting, sandboxes are effectively an open attack surface; architecting them not to be in your main VPC is a sound security decision from the start.
Would love to understand how you compare to other providers like Modal, Daytona, Blaxel, E2B and Vercel. I think most other agent builders will have the same question. Can you provide a feature/performance comparison matrix to make this easier?
I'm working on an article deep diving into the differences between all of us. I think the goal of Freestyle is to be the most powerful and most EC2 like of the bunch.
Daytona runs on Sysbox (https://github.com/nestybox/sysbox) which is VM-like but when you run low level things it has issues.
Modal is the only provider with GPU support.
I haven't played around with Blaxel personally yet.
E2B/Vercel are both great hardware virtualized "sandboxes"
Freestyle VMS are built based on the feedback our users gave us that things they expected to be able to do on existing sandboxes didn't work. A good example here is Freestyle is the only provider of the above (haven't tested blaxel) that gives users access to the boot disk, or the ability to reboot a VM.
And fly.io sprites
Fly.io sprites is the most similar to us of the bunch. They do hardware virtualization as well, have comparable start times and are full Linux. What we call snapshots they call checkpoints.
The big pros of Sprites over us are their advanced networking stack and the Fly.io ecosystem. The big cons are that Sprites are incredibly bare bones; they don't have any templating utilities. I've also heard that Sprites sometimes become unavailable for extended periods of time.
The big pros of Freestyle over Sprites are fork, advanced templating, and IMO a better debugging experience because of our structure.
Thanks for the thoughtful response. I'm predominantly a self-hoster, but I think your product makes a lot of sense for a wide variety of users and businesses. I'm excited to try out freestyle!
Self-hosting can be doable for constant small/medium-sized workloads.
You can handroll a lot with: https://github.com/nestybox/sysbox?tab=readme-ov-file https://gvisor.dev https://github.com/containers/bubblewrap?tab=readme-ov-file
For hardware-virtualized machines it's much harder, but you can do it via: https://github.com/firecracker-microvm/firecracker/ https://github.com/cloud-hypervisor/cloud-hypervisor
Freestyle/other providers will likely provide a better debugging experience, but that's something you can probably get past for a lot of workloads.
The time when you/anyone should think about Freestyle/anyone is when the load spikes/the need to create hundreds of VMs in short spikes shows up, or when you're looking for some of the more complex feature sets any given provider has built out (forks, GPUs, network boundaries, etc).
I also highly recommend hosting anything you self-host outside your normal VPC. Sandboxes are the biggest possible attack surface, and it's a feature that we're not in your cloud: if we mess up security, your app is still fine.
This is what I do (my project) for self hosting on a VPS/server:
https://GitHub.com/jgbrwn/vibebin
Also I'm a huge proponent of exe.dev
Obviously your service/approach is different than exe, more like sprites but like you said more targeted/opinionated to AI coding/sandboxing tasks it looks like. Interesting space for sure!
I built yoloAI, which is a single go binary that runs anywhere on mac or linux, sandboxing your agents in disposable containers or VMs, nested or not.
Your agent never has access to your secrets or even your workdir (only a copy, and only what you specify), and you pull the changes back with a diff/apply workflow, reviewing any changes before they land. You also control network access.
Free, open-source, no account needed.
https://github.com/kstenerud/yoloai
I've been building an open-source, self-hostable Firecracker orchestrator for the past month: https://github.com/sahil-shubham/bhatti (https://bhatti.sh)
Still WIP, but the core works — three rootfs tiers (minimal Ubuntu, headless Chromium with CDP, Docker-in-VM), OCI image support (pull any Docker image), automatic thermal management (idle VMs pause then snapshot to disk, wake transparently on next API call), per-user bridge networking with L2 isolation, named checkpoints, persistent volumes, and preview URLs with auto-wake.
Fair warning: the website is too technical and the docs are mostly AI-generated, both being actively reworked. But I've been running it daily on a Hetzner server for my AI agents' browser automation, and deploy previews.
I'd love any feedback if you want to go ahead and try it yourself
sprites have been weird lately, i think fly.io is having trouble with capacity in various locations.
is the experience similar? can i just get console to one machine, work for a bit, logout. come back later, continue?
how does cost work if i log into a machine and do nothing on it? just hold the connection.
This will just work on us.
We do auto suspend depending on your configured timeout. We'll pause your VM and when you come back the processes will be in the exact same state as when you left.
But your pricing page suggests that that is not available without a subscription: in the on-demand pricing section "persistent Snapshots" and "Persistent VM's" have an 'x'.
We do not allow long term persistence for the free tier.
This is purely a defense mechanism; I don't want to guarantee storing the data of an entire VM forever for non-paying users. We have persistence options for them, like sticky persistence, but it doesn't come with the reliability of long-term persistent storage.
But it wouldn’t be non paying customers. That was from the on demand section. I just want to pay for what I use without getting into a subscription.
Ah I see. This is very interesting but not what we're focused on right now. I will keep this in mind for future prioritization.
I'd also be interested in a comparison with exe.dev which I'm currently using.
Exe.dev is an individual-developer-oriented service. Freestyle is more oriented at platforms building the next exe.dev.
That's why our pricing is usage-based and we have a much larger API surface.
Is it possible to run a Kubernetes cluster inside one? (E.g. via KIND.)
If so, we'd very much like to test this. We make extensive use of Claude Code web but it can't effectively test our product inside the sandbox without running a K8s cluster
Yes! You can def run something like K3s in these VMs.
Your UI design is really nice.
Just want to say that even if alternatives exist (not necessarily exact capabilities obviously), I appreciate what seems to be genuine excitement on your part of having built something cool / best in class.
So best of luck with your vision for it!
Cool! I've been using your API for running sandboxed JS. Nice to see you also support VMs now.
> How does this work? Are you copying the entire snapshot, or is this something fancy like copy-on-write memory? If it's the former, doesn't the fork time depend on the size of the machine?

We're using copy-on-write with the memory itself. Fork time is completely decoupled from the size of the machine.
Creating full snapshots takes a 2-4 second interruption in the VM due to the sheer IO, which we didn't want here.
What's especially cool about this approach is that not only is fork time O(1) with respect to machine size, it's also O(1) with respect to the number of forks.
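The same copy-on-write trick exists at the process level, which makes for a decent mental model (this is plain `os.fork()` on Linux, not Freestyle's VMM): the kernel shares physical pages between parent and child until one side writes, so the fork itself doesn't scale with how much memory is allocated.

```python
import os

# Parent state before the fork; after fork() both processes see it,
# but the pages are physically shared until someone writes (COW).
state = {"balance": 100}

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: mutating its copy triggers a private page copy;
    # the parent's memory is untouched.
    state["balance"] = 0
    os.write(w, str(state["balance"]).encode())
    os._exit(0)

os.waitpid(pid, 0)
child_balance = int(os.read(r, 16).decode())
# Parent still sees its own unmodified state.
print(state["balance"], child_balance)
```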
It doesn't seem very easy to calculate how much it would cost per month to keep a mostly-idle VM running (for example, with a personal web app). The $20/month plan from exe.dev seems more hobbyist-friendly for that. Maybe that's not the intended use, though?
We're not going after hobbyists. We're building the platform for companies like exe.dev to build on. That's why it's all usage-based.
That said, our $50 a month plan can be used by an individual for your coding agents, but I wouldn't recommend it.
Ooof, if you are the middleman platform then it's sure gonna get expensive for the end user
> The $20/month plan from exe.dev seems more hobbyist-friendly for that. Maybe that's not the intended use, though?
And you can go even below that by self-hosting it yourself with a very cheap Hetzner box for $2 or $5.
Can you start up multiple VM's easily on a Hetzner box?
It is not clear to me how much CPU I get.
"Unlimited" as in 8vCPU and then I am billed for it on consumption?
Billed for wall time. Whichever plan you are on, you get that amount in credits; the hobby plan gets $50 of credits, and beyond that you're billed per-CPU wall time.
Non-open-source, non-local SaaS sandboxes are offensive to even try to launch. No one needs this, and the only customers will be vibe coders who just don't know any better. There are teams building actual sandboxes like smolmachines, podman, colima and mre. At least be honest and put the virtualization tech you are using, as well as that it's closed-source SaaS, on the landing page to save people time.
Our users are platforms, and many of the best already build on us.
Self-hosting is a valuable feature, but our technology is unfriendly to small nodes; it will not work on consumer hardware. Many of the optimizations we spend our time on only seriously kick in above 2TB of storage and above 500GB of RAM.
Your comment could have been "I prefer these open source alternatives:" but you chose to be a hater.
There's nothing wrong with offering services that people find useful.
> Non open source and non local SAAS sandboxes are offensive to even try to launch. No one needs this and the only customers will be vibe coders who just don't know any better.
This is simply not true, but also not a very charitable take.
Congrats guys! Would you share some technical details? I bet you have great stories to tell. Let's see: what is forking? Do you completely copy the disk, make a RAM snapshot and run it? If CoW, what about RAM? You mentioned 8GB RAM VMs. Sounds impossible to copy 8GB in under 500ms, and the disk too?
So fork time is actually O(1) with VM size; it's 500ms even for 64GB + disk. We're using some pretty weird CoW techniques to pull it off.
O(1)? What! What might bring it down to, say, tens of ms? Looks like it's some kind of optimizable wall if it's 500ms for everything.
Like, with 10ms, online replication/backup (analogous to Litestream for SQLite, but for in-memory processes) becomes feasible, no?
Insane. Is it possible to fork to another bare metal machine? Maybe multi-region, like Fly.io? If not, I bet you have huge disks on your machines to store all the snapshots (you said you store them and bill only for disk space).
So forking across multiple nodes at that speed is not possible; we run extremely beefy nodes in order to avoid moving VMs across nodes as much as possible.
We are researching systems for hot-moving VMs across nodes, but that would have very different performance characteristics.
Yeah, I see. Is it possible to get a corrupted state? Let's say we had a realtime database actively writing at that moment?
It is impossible.
Our tech is not decades old, so there is a chance we've missed something, but our layer management is atomic, so I'd be shocked if you were able to corrupt state across forks/snapshots.
Seen this pattern many times -- the root cause is usually not the cloud itself but a combination of over-provisioned resources, missing cost visibility, and reactive budgeting. Things like untagged resources, idle NAT gateways, oversized instances, and no automated scale-down policies quietly eat the budget.
At UPSystems we typically start with a quick resource rightsizing + tagging audit, set up real-time cost alerts, and move heavy AI/LLM workloads to on-prem where it makes financial sense. Happy to share our FinOps checklist if useful -- feel free to DM.
Any ideas for locking down remote access from an untrusted VM? Cloudflare has object-based capabilities and some similar thing might be useful to let a VM make remote requests without giving it API keys. (Keys could be exfiltrated via prompt injection.)
So there are 3 solutions to this; Freestyle supports 2 of them today:
1. Freestyle supports multiple Linux users. All Linux users on the VM are locked down, so it's safe to have a part of the VM holding your secret keys/code that the other parts cannot access.
2. A custom proxy that routes the traffic, with the keys kept outside.
3. We're working on a secrets API to intercept traffic and inject keys based on specific domains and protocols, starting with HTTP headers, HTTP Git authentication and Postgres. That'll land in a few weeks.
Can you develop freestyle in freestyle vms?
Yessir, we haven't mastered it yet but we've compiled the kernel with enough flags for stuff like nftables and KVM to make it possible.
Congratulations on the launch! Will definitely test this out.
how many seconds to provision are we talking about here? 1 sec vs 60 is a dealbreaker for me, some clarity on that would be nice.
500ms. Less than 1 second. We're aiming to get that down to 200ms in the next 3 months.
Check out shellbox.dev; you can do pretty much the same, automating it all via ssh.
Interesting!
We're working on a similar solution at UnixShells.com [1]. We built a VMM that forks, and boots, in < 20ms and is live, serving customers! We have a lot of great tools available, via MIT, on our github repo [2] as well!
[1] https://unixshells.com
[2] https://github.com/unixshells
Can your service scale ram? like the way docker desktop does. Manual is fine.
yep you can choose ram + disk + cpu size
? You say 'yes' but you seem to be answering a different question. Docker desktop only makes me choose a max ram - it dynamically scales RAM usage. I don't need fully automatic like that, but the ability to vertically scale RAM for an existing instance is really important, particularly given the cost of RAM these days.
Ah, we cannot do this without a restart. Hot-pluggable RAM is something I'm interested in, but it's currently a backburner feature.
how does this differ from daytona or e2b?
Generally, compared to those two, more powerful. Freestyle VMs are full Debian machines, with support for systemd, Docker-in-Docker, multiple users, hardware virtualization, etc. Daytona and E2B are both great "sandbox" providers, but they don't really feel like VMs/you can't run everything you can on an EC2 instance.
We also support the forking/snapshotting/long-running jobs that they struggle with.
Also modal.com, I saw a few more as well.
Congrats Ben and Jacob!
Your pricing page is broken
Reviewing this now. Our public pricing at www.freestyle.sh/pricing seems to be working; can you point me in a more specific direction?
Honestly never considered the forking use case; but it makes a ton of sense when explained
Congrats on the launch. This is cool tech
The problem with agents is that they are currently way too expensive; 100 times more expensive, maybe. Another big issue is the lack of interactivity with an agent. Therefore, for now, agentic development is only viable from your own machine. And there, isolation is less of an issue and easier to manage.
It's hard to tell what this is or how it compares to other things that are out there, but what I latched onto is this:
> Freestyle is the only sandbox provider with built-in multi-tenant git hosting — create thousands of repos via API and pair them directly with sandboxes for seamless code management. On top of that, Freestyle VMs are full Linux virtual machines with nested virtualization, systemd, and a complete networking stack, not containers.
It makes me think of the git automation around rigs in Gas Town: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
Edit: I realize the Loom is a way to look at it. The Loom interrupted me twice and I almost skipped it. However, it gave me a better idea of what this does: it "invents" snapshotting and restoring of VMs in a way that appears faster. That actually makes sense, and I know it isn't that hard to do given how VMs work, and that it greatly benefits from having only part of the VM writable and having little memory used (maybe it has read-only memory too?).
So the snapshotting tech is actually 100% independent of Git.
Git is useful for branching vs forking (i.e., you can't merge two VM forks back together), but all the tech I showed in the Loom exists independently of Git.
The hard part of it was making the VM large and powerful while making snapshotting/forking instant, which required a lot of custom VMM work.
> The hard part of it was making the VM large and powerful while making snapshotting/forking instant, which required a lot of custom VMM work.
I don't find "large and powerful" in reference to a VM to sound compelling. What should be large? The memory? The root disk? As I alluded to in my comment, I'm more curious about what can be made small.
Also I'm skeptical that if I forked a vm running a busy Gas Town that it would be very light or fast in how it forks. A well behaved sqlite I could see, but then I'd wonder why not just fork the storage volume containing the database...
So that's what we did. We've made forking a whole Gas Town performant, in hundreds of milliseconds. Try it; you can definitely see it working on the free tier.
With respect to "large and powerful": RAM + size is important, but I was more so referring to full Linux power. The ability to run nested virtualization, eBPF, FUSE, and the powerful features of a normal Linux machine instead of a container.
Well, that does sound pretty impressive then. And as a champion of open source, it wouldn't make me feel like I was getting locked in, because the regular speeds I could live with (on a server with KVM or a nested virtualization setup).
dumb question. none of these protect you from prompt injection. yes?
No, but the goal of these is that if you are faced with prompt injection, the worst case scenario is the AI using that computer badly.
unless i am misunderstanding. not sure how this computer prevents secrets from my gmail leaking. thats the worst case.
If you put your gmail credentials into a VM that an AI Agent dealing with untrusted prompts has access to they should be treated as leaked and be disabled immediately.
However, if you don't put your administrative credentials inside the VM and treat it as an unsafe environment, you can safely give it minimal permissions to access the specific things it needs, and with that access it can perform complex tasks.
i am talking about this, not my gmail credentials.
https://simonwillison.net/2024/Mar/5/prompt-injection-jailbr...
I have so many interesting problems in AI; sandboxing isn't one of them. It's a pointless exercise, yet disproportionately many people love to do this. Probably because sandboxing doesn't feel as magic as agents themselves, and more like the old times of "traditional" software development.
It is a mostly pointless exercise if the goal is trying to contain negative impact of AI agents (e.g. OpenClaw).
It is a very necessary building block for many common features that can be steered in a more deterministic way, e.g. "code interpreter" feature for data analysis or file creation like commonly seen in chat web UIs.
Believe it or not, once you start working for a regulated industry, it is all you would ever think of. There, people don't care if you are vibing with the latest libraries and harnesses or if it's magic, they care that the entire deployment is in some equivalent of a Faraday cage. Plus, many people just don't appreciate it when their agents go rm -rf / on them.
Yeah, idk, I guess it's interesting if you are an engineer looking for something to do.
But, like, I see multiple sandbox-for-agents products a week. Way too saturated a market.
I disagree (as a sandboxing company).
With respect to the market, every single sandbox sucks. I'm not gonna shit-talk competitors, but there is not a good sandboxing platform out there yet (including mine) compared to where we'll be in 6 months.
We've heard that all the platforms have consistent uptime, feature-completeness, networking and debugging issues. And on our own platform, we're not 1/10th of the way through solving the requests we've gotten.
The next generation of agents needs computers, and those computers are gonna look really different than "sandboxes" do today.
I don't think you're wrong, but if you really want to re-think the approach, building an orchestration layer for Firecracker like every other company in the space is probably not it.