I’m part of a small team based in Iași, Romania, and we’ve spent the last year building Cronos Browser. To be honest, we built this because we were frustrated that "AI integration" in modern browsers usually just means sending all your data to a cloud API. We wanted to see if we could move that intelligence directly to the edge.
The browser is based on Chromium for compatibility, but we stripped out the telemetry and Google services. The main differentiator is that we integrated an inference engine directly into the client. This means features like summarization, translation, and the built-in assistant (we call it UIKI) run 100% locally on your CPU/GPU. No data leaves the machine, which we think is the only way to do privacy-first AI.
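To make the privacy boundary concrete, here's a minimal sketch of the routing idea, not Cronos's actual code. The class names and the naive "summarizer" are our illustration; the point is only that the assistant path contains no HTTP client, so page content never leaves the host:

```python
# Sketch of the "nothing leaves the machine" design: every assistant
# request is handled by an in-process model, never a cloud API.
# All names here are illustrative, not Cronos Browser's real API.

class LocalModel:
    """Stand-in for an in-process inference engine (e.g. a quantized
    model loaded onto the local CPU/GPU). Here it fakes summarization
    by extracting leading sentences so the example is self-contained."""

    def summarize(self, text: str, max_sentences: int = 2) -> str:
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        return ". ".join(sentences[:max_sentences]) + "."


class LocalAssistant:
    """Routes every request to the in-process model. There is no
    network client anywhere in this code path."""

    def __init__(self, model: LocalModel):
        self.model = model

    def handle(self, action: str, text: str) -> str:
        if action == "summarize":
            return self.model.summarize(text)
        raise ValueError(f"unknown action: {action}")


assistant = LocalAssistant(LocalModel())
page = "Cronos runs inference locally. No data leaves the machine. Privacy first."
print(assistant.handle("summarize", page))
# → Cronos runs inference locally. No data leaves the machine.
```

In the real browser the `LocalModel` role is played by the bundled inference engine; the architectural point is that swapping the model never requires adding a network hop.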
We’re also experimenting with something pretty ambitious called the AVALW Protocol, a decentralized layer we're designing to mitigate man-in-the-middle risks in the standard HTTPS trust model (e.g., compromised or coerced certificate authorities), though it's still in heavy development. There’s also an opt-in "Pool Mode" for P2P distributed computing if you want to contribute resources to the network.
We know the browser market is incredibly tough and skepticism is high for new protocols, so we’re here specifically for technical feedback. We’re currently live on Windows, with Mac/Linux in the pipeline.
I’ll be hanging around the comments to answer questions about our local inference implementation or the protocol design. Roast us or help us improve!