The output quality is remarkable. You mentioned that there are 1 billion illiterate people who would benefit from this, and I would add that there are at least 1 billion additional people who would benefit because they speak a regional dialect. There are many countries across the developing world where the AI tools and translation apps only produce output in the official government dialect (e.g. the Thai spoken in Bangkok, the Hindi spoken in Delhi, or the Mandarin spoken in Beijing). It would be interesting to see how a voice model could be "fine tuned" to better serve a specific regional dialect.
Yes! The first goal is to get coverage ASAP. I think it will be easy to get dialects in with the current model architecture. The hard part will be LLMs catching up on producing consistent text that respects the linguistics as we drill deeper.
Looks really cool, exciting to see. I have two questions around this:
1. Given that you are concerned with providing access to a class of folks traditionally ignored by technologists, do you plan to make these models usable offline? For example, an illiterate person I know from Uttarakhand: his home village is not connected by road. Interestingly, he does speak Hindi, but his native language, I believe, is something more obscure. To get home, he walks five hours from the terminus of a road. Connectivity is obviously both limited and intermittent. A usable device might want the voice interface embedded on it. Any plans for this?
2. I have minimal understanding of this, but as someone who learned Hindi/Urdu as a foreign language in the US, I am often in mixed conversation with both Indians and Pakistanis. There never seem to be any issues with communication. I have heard that certain terms (for example "khub suraat", "shukria", "kitaab") are more Urdu than Hindi. I also studied Arabic, Farsi, and Swahili, so I am familiar with these as loanwords from Arabic and/or Persian, but in practice I hear Hindi speakers using these terms often. Is the primary value-add here political? Is it an accent thing? Thanks in advance for any explanation. This is still very much a mystery to me.
To increase access we’re also exploring telco hotlines. Carrier penetration is much higher than internet, so this could let people use AI through a simple phone call. Some users already pay for similar services like weather updates (for farmers) via SIM balance. But to scale it will likely require government or telco partnerships.
Telco integration sounds amazing. Wishing y'all success!
Thanks!
1. Offline models: yes, that is on the roadmap. There is big demand for them, especially in interactive educational use cases.
2. Urdu and Modern Hindi are mutually intelligible in spoken form. Authentic Hindi is much different, though, and I can't understand the press releases that are done in super authentic Hindi. The writing systems of Urdu and Hindi are completely different too, so even if there is a great TTS system in Hindi, I can't use it. Accents are very different too.
Scripts: ہیلو हेलो
As a Sindhi speaker myself, amazing stuff. The output is so good. This unlocks the vastness of the internet for millions of people. I am imagining something like NotebookLM but for under-served languages, or a hotline where people can call and talk/learn about anything. Do you guys have plans to create B2C products yourselves?
At the moment we are focused on making the models available through an API so developers can make some cool things. We are actively monitoring to see if there is an opportunity we would be better positioned to solve ourselves.
We are planning on hosting an online hackathon soon, so will suggest these things as ideas!
Fair enough. I don’t have a use case for the API yet but I am looking forward to the products that come out of this
Maybe we'll make another post in a month with all the cool products that have come out so far :)
Nice! Clearly a big and underserved market for voice AI solutions.
Would be nice to have some code examples for using your TTS API with Pipecat.
I have to make that. I did make one for LiveKit, which uses our websocket API designed for real-time conversation:
https://docs.upliftai.org/tutorials/livekit-voice-agent
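A minimal client sketch of the websocket flow, for anyone who wants a feel for it before reading the tutorial. The endpoint URL, query-string auth, and message schema here are illustrative placeholders, not the documented API; the tutorial above has the real details.

    # Minimal streaming-TTS websocket client sketch (pip install websockets).
    # The URL, auth, and message schema are hypothetical placeholders.
    import asyncio
    import json

    import websockets

    async def synthesize(text: str, out_path: str = "out.pcm") -> None:
        url = "wss://api.example.org/tts/stream?key=YOUR_API_KEY"  # placeholder
        async with websockets.connect(url) as ws:
            # Ask the server to synthesize the text (hypothetical schema).
            await ws.send(json.dumps({"text": text, "voice": "some-voice-id"}))
            with open(out_path, "wb") as f:
                async for message in ws:
                    if isinstance(message, bytes):
                        f.write(message)  # binary frames carry audio chunks
                    else:
                        break  # assume a text frame signals end-of-stream

    asyncio.run(synthesize("ہیلو دنیا"))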
btw, I did first try to make it with Pipecat but was having some annoying Windows issues getting libraries installed for Daily etc., so I posted something that was easily reproducible for the tutorial...
Hi! Pipecat maintainer here. There is no Windows restriction for Pipecat in general. The DailyTransport does not support Windows but works on WSL. You don't have to use the DailyTransport, though: Pipecat has interchangeable transport support. You can do all of your testing on a free, P2P WebRTC transport (SmallWebRTCTransport, based on aiortc) without system restrictions.
Reach out on Discord if you have any challenges.
will do!
Your datasets: are they public? For more under-represented languages we DON'T need closed voice models; what the world really needs is open voice data repositories (e.g. TTS-ready voice banks AND phonemization databases in projects like Mozilla Common Voice). Why? Because commercial demand is so small that these countries are not commercially viable. But we DO need TTS for assistive technology purposes, and that has very little $$$ associated with it.
(That said, Urdu is NOT a small population, so well done!)
They aren't public. Agreed on commercial viability: even in Pakistan, businesses are price sensitive, so they're currently priced really cheap (just because the businesses are small).
This is what my Master's project was about, working on the case of Wolof. I trained XTTSv2 and had solid results with less than 20h of paired data that wasn't of the highest quality either. hmu: tkerjan@outlook.com
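For context, XTTSv2's stock checkpoint ships with a fixed language list (no Wolof), which is why fine-tuning on paired data is needed at all. As a reference point, plain inference through Coqui's Python API looks roughly like this; the speaker reference file is a placeholder, and a fine-tuned Wolof checkpoint would be loaded from its own path rather than the stock model name:

    # Minimal XTTSv2 inference sketch with Coqui TTS (pip install TTS).
    # The stock model covers a fixed set of languages; a new language
    # like Wolof requires fine-tuning first. Speaker file is a placeholder.
    from TTS.api import TTS

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
    tts.tts_to_file(
        text="Bonjour tout le monde.",
        speaker_wav="reference_speaker.wav",  # a few seconds of the target voice
        language="fr",                        # must be a supported language code
        file_path="out.wav",
    )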
Nice, this is really needed. Would be cool to see some of the less common regional Chinese dialects, which are widely spoken and often the only language older people speak. And even just more accurate regional accents for Mandarin.
Wow, did not know that! Do you feel there is a gap in speech understanding here, or is personalization missing from current TTS?
Very cool, congrats on the launch! What's your plan for when one of the larger players like ElevenLabs or Google adds support for these languages? I would guess the reason why they haven't is because they don't see a large opportunity. How are you thinking about it?
Thanks! You’re right, the big players mostly ignore these languages. The additional challenge is the lack of online data, so we spend a lot of effort on data collection and labeling on the ground.
Also, companies like ElevenLabs and Deepgram have done well by focusing on specific use cases, even though the big labs are amazing at English.
Right now these languages are underserved, so there’s a window to build the best models for these languages.
I think the voice models market will be like e-commerce: there will be no global winner, instead a few regional winners, each being really big.
We plan to be one of those winners.
What does it take to build such a model? As in, the key steps. And how expensive does it get? I might be interested in being a regional player and winner as well, lol. In my own corner of the world in Africa.
Not much... just the willingness to work hard on this problem instead of other problems where large revenue is perhaps quicker :)
Ingredients: decent audio-scraping skills, great voice actors hired for each language, algorithms to gather text/audio with diverse phonetics, decent ML skills (enough to merge the best features of a few different papers), lots and lots of data labels (and your own tools to get the data labeled efficiently), and finally GPUs!!!!
None of this is technically hard... the hardest thing is working with Voice Actors (oh man!!!)
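On the "diverse phonetics" ingredient: one simple approach is greedy prompt selection that repeatedly picks the sentence covering the most not-yet-seen phonetic units. A toy sketch, using character bigrams as a stand-in for phonemes (a real pipeline would phonemize each sentence first):

    # Greedy selection of recording prompts for phonetic coverage.
    # Character bigrams stand in for phonemes here; a real pipeline
    # would run sentences through a phonemizer instead.

    def units(sentence: str) -> set[str]:
        s = sentence.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    def select_prompts(corpus: list[str], budget: int) -> list[str]:
        covered: set[str] = set()
        chosen: list[str] = []
        pool = list(corpus)
        for _ in range(budget):
            # Pick the sentence that adds the most uncovered units.
            best = max(pool, key=lambda s: len(units(s) - covered), default=None)
            if best is None or not (units(best) - covered):
                break  # nothing new left to cover
            chosen.append(best)
            covered |= units(best)
            pool.remove(best)
        return chosen

    print(select_prompts(["kitaab parho", "shukria dost", "khub suraat"], 2))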
Would love to see Malayalam here one day!
Yes! I will keep track of this comment for the day we do :P
Unless that happens within a week or so, this thread will be locked and you won't be able to reply anymore.
It would be good to have a company blog with an RSS feed that people can subscribe to for updates.
ah, created a quick Google Form for language requests! https://forms.gle/XA6nZbmBNK5K7GJv5
Submitted!
appreciate it!
This is really cool. Congrats on the launch. Would be interested to know which low resource languages in Sub-Saharan Africa you'd be working on, particularly in Nigeria and South Africa.
If you have interest/insights in specific languages, would love if you can fill out this form so we can reach out in the future https://forms.gle/XA6nZbmBNK5K7GJv5
Lots of areas to cover for sure!
Submitted!
Any plans for speech-to-text? I want to automatically generate subtitles for videos that have Urdu audio.
Yes, we are working on speech-to-text as well. It should be out in the next 2 months.
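Once timestamped transcription is available, turning it into subtitles is mechanical. A sketch of the SRT-writing half, where the segments are stand-ins for the output of whatever STT API you end up using:

    # Build an .srt subtitle file from timestamped transcript segments.
    # The segments below are placeholders for real STT output.

    def fmt_time(seconds: float) -> str:
        # SRT timestamps look like 00:01:02,345
        ms = int(round(seconds * 1000))
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    def to_srt(segments) -> str:
        blocks = []
        for i, (start, end, text) in enumerate(segments, start=1):
            blocks.append(f"{i}\n{fmt_time(start)} --> {fmt_time(end)}\n{text}\n")
        return "\n".join(blocks)

    segments = [(0.0, 2.4, "پہلا جملہ"), (2.4, 5.1, "دوسرا جملہ")]
    print(to_srt(segments))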
Congrats on the launch! I have been sole-funding a dataset for Sindhi on Common Voice. Did you check that out by any chance?
Amazing! Not yet, I will check it out.
Also, some super cool projects on your website :)
Pretty cool! Do you think the model would be good at other under-served languages as well? Or is it hypertuned to just these?
The model itself can work well for new languages; it's just that gathering data and maintaining high data quality is what we have to figure out as we scale across languages.
Currently the model is only given data for these languages, so it doesn't know anything else.
> gathering data and maintaining high data quality is what we have to figure out as we scale across languages.
A crawler and data ingestion pipeline will not help with that?
Gathering audio data online is not that hard, but getting it accurately labelled is challenging: the speech understanding systems for those languages aren't there either, so we can't do that automatically.
Cool - makes sense!
Congrats on the launch! Having support for regional voices is going to open up so many opportunities.
Agreed!
Congratulations on the launch! I really hope it doesn't get used to launch misinformation campaigns against the country.
Are you aware of any effort to educate and fight against misinformation in Pakistan?
Hope so! It is great that it overall has a big impact on making knowledge more accessible (e.g. Khan Academy using it to dub their content in minutes instead of weeks). But there are lots of other areas where it applies as well.
Nice work.
Have you looked at the MMS models from Meta and how do they compare?
By "publicly release", does that mean offering an API, or have you considered a Hugging Face model release? I understand why that might not be best for your business model, but what would be your goal from a business perspective?
Yes, we read the paper when it came out and reviewed the audio samples. We didn't find it good enough for adoption. We didn't compare results with MMS in a systematic way because it seemed irrelevant.
We launched them through an API. From a business perspective, the goal is to get adoption of voice apps in targeted regions. Some companies can now create voice agents, etc.