I've done something similar for Linux and Mac. I originally used Whisper and then switched to Parakeet. I much prefer Whisper after playing with both. Maybe I'm not configuring Parakeet correctly, but the transcription that comes out of Whisper is usually pretty much spot on. It automatically removes all the "umms" and all the "ahs", and it's just way more natural, in my opinion. I'm using whisper.cpp with CUDA acceleration. This whole comment was written by dictating to Whisper, and it's probably going to automatically add quotes correctly; there are going to be no ums, there are going to be no ahs, and everything's just going to be great.
Mind sharing your local setup for Mac?
https://github.com/lxe/yapyap/tree/parakeet-nemo
It's been a while, so I don't know if it still works, given the NeMo toolkit ASR numpy dependency issues.
I use it on Linux with whisper.cpp and it works great.
> I’m allowed to run Python, but not install or launch new `.exe` files.
> NVIDIA’s ParakeetV3 model
You can't install .exes, but you can connect to the Internet, download, and install approximately two hundred wheels (judging by uv.lock), many of which contain opaque binary blobs, including an AI model?
Why does your organization think this makes any sense?
Here is the Hugging Face ASR leaderboard, for those wondering how Parakeet V3 compares to Whisper Large V3:
- Accuracy (average WER): Whisper-large-v3 4.91 vs. Parakeet V3 5.05
- Speed (RTFx): Whisper-large-v3 126 vs. Parakeet V3 2154 (~17x faster)
https://huggingface.co/spaces/hf-audio/open_asr_leaderboard
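(RTFx is, roughly, seconds of audio transcribed per second of compute, so the speedup is just the ratio of the two values; a quick sanity check, assuming that definition:)

```python
# Sanity check of the ~17x claim from the leaderboard numbers above.
whisper_rtfx = 126    # Whisper-large-v3
parakeet_rtfx = 2154  # Parakeet V3
print(f"{parakeet_rtfx / whisper_rtfx:.1f}x")  # -> 17.1x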
Cool use of ONNX! Fluid Inference also has great implementations of Parakeet v2/v3 in CoreML for Apple devices and OpenVINO for Intel:
https://github.com/FluidInference/FluidAudio
https://github.com/FluidInference/eddy-audio
How does the quality compare with the Windows built-in one (Win+H), the one with online models?
I'm using that to dictate prompts; it struggles with technical terms (JSON becomes "Jason"), but otherwise it's fine.
In my opinion, attempting to perform live dictation is a solution that is looking for a problem. For example, the way I'm writing this comment is: I hold down a keyboard shortcut on my keyboard, and then I just say stuff. And I can say a really long thing. I don't need to see what it's typing out. I don't need to stream the speech-to-text transcription. When the full thing is ingested, I can then release my keys, and within a second it's going to just paste the entire thing into this comment box. And also, technical terms are going to be just fine with Whisper. For example, Here's a JSON file.
(this was transcribed using whisper.cpp with no edits. took less than a second on a 5090)
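For anyone who wants to replicate that hold-to-dictate flow, here's a minimal sketch of the idea (not the commenter's actual setup; it assumes a whisper.cpp build whose CLI binary is `whisper-cli` (older builds name it `main`), a downloaded ggml model, and the `sounddevice`, `pynput`, and `numpy` packages):

```python
# Minimal push-to-talk dictation sketch: record while F9 is held,
# transcribe with the whisper.cpp CLI, print the result.
# Paths/names below are assumptions; adjust for your build.
import os
import subprocess
import tempfile
import wave

import numpy as np
import sounddevice as sd
from pynput import keyboard

WHISPER_BIN = "./whisper-cli"       # whisper.cpp CLI binary
MODEL = "models/ggml-large-v3.bin"  # downloaded ggml model
SAMPLE_RATE = 16000                 # whisper.cpp expects 16 kHz mono
HOTKEY = keyboard.Key.f9            # push-to-talk key

chunks: list[np.ndarray] = []
recording = False

def audio_callback(indata, frames, time_info, status):
    # Called by the audio stream; buffer audio only while the key is held.
    if recording:
        chunks.append(indata.copy())

def transcribe_and_print():
    if not chunks:
        return
    audio = np.concatenate(chunks)
    tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
    tmp.close()
    with wave.open(tmp.name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit PCM
        w.setframerate(SAMPLE_RATE)
        w.writeframes((audio * 32767).astype(np.int16).tobytes())
    # -nt suppresses timestamps so stdout is just the transcript
    result = subprocess.run(
        [WHISPER_BIN, "-m", MODEL, "-f", tmp.name, "-nt"],
        capture_output=True, text=True,
    )
    os.unlink(tmp.name)
    print(result.stdout.strip())  # a real tool would paste/type this

def on_press(key):
    global recording, chunks
    if key == HOTKEY and not recording:
        chunks, recording = [], True

def on_release(key):
    global recording
    if key == HOTKEY and recording:
        recording = False
        transcribe_and_print()

# Runs until the process is killed (Ctrl+C).
with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, callback=audio_callback):
    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()
```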
I’ve been using Parakeet with MacWhisper for a lot of my AI coding interactions. It’s not perfect but generally saves me a lot of time.
I barely use a keyboard for most things anymore.
My project has a built-in word_replacement option in config.toml, so you can automatically replace certain terms if that's important to you.
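Conceptually it's just a substitution pass over the finished transcript before it gets pasted; here's a hypothetical sketch (the key names and config layout here are illustrative, not necessarily the project's actual schema):

```python
# Hypothetical word_replacement pass (illustrative only; the real
# config.toml format may differ). Fixes "Jason" -> "JSON" etc.
import re
import tomllib  # stdlib in Python 3.11+

CONFIG = """
[word_replacement]
"Jason" = "JSON"
"""

rules = tomllib.loads(CONFIG)["word_replacement"]

def apply_replacements(text: str) -> str:
    # Whole-word replacement for each configured "wrong" -> "right" pair.
    for wrong, right in rules.items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text)
    return text

print(apply_replacements("here's a Jason file"))  # -> here's a JSON file
```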
I loved Whisper, but it was insanely slow on CPU only, and even then that was with a smaller Whisper model that isn't as accurate as Parakeet.
My Windows environment locks down the built-in Windows option, so I don't have a way to test it. I've heard it's pretty good if you're allowed to use it, but your inputs don't stay local, which is why I needed to create this project.
BTW, this is my first open-source project.