Pocket TTS: A high quality TTS that gives your CPU a voice

635 points, posted 23 days ago
by pain_perdu

163 Comments

derHackerman

22 days ago

I read this, then realized I wanted a browser extension to read my long case study aloud, so I built a browser interface for it and put this together:

https://github.com/lukasmwerner/pocket-reader

laszbalo

22 days ago

You can do the same thing with Firefox's Reader Mode. On Linux you have to set up speech-dispatcher to use your favorite TTS as a backend. Once it is set up, there will be an option to listen to the page.

mentalgear

22 days ago

Firefox should integrate that into its Reader Mode (the default system voices are often barely listenable). It seems like an easy win, and it's a non-AI feature, so not polarising.

laszbalo

22 days ago

Not sure about macOS or Windows, but on Linux Firefox uses speech-dispatcher, which is a server, and Firefox is the client. Speech-dispatcher then delegates the text to the correct TTS backend. It basically runs a shell command, either sending the text to a TTS HTTP server using curl, or piping it to the standard input of a TTS binary.

Speech-dispatcher commonly uses espeak-ng, which sounds robotic but is reportedly better for visually impaired users because it remains intelligible at higher speeds, letting them hear UI labels more quickly. Non-visually-impaired users generally want natural-sounding voices and to use TTS the way we would listen to podcasts or a bedtime story.

With this system, users are in full control and can swap TTS models easily. If Firefox shipped a model and, two weeks later, a smaller, newer, or better one appeared, that work would become obsolete very quickly.
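
For reference, a minimal sketch of the client side of that setup, assuming the python3-speechd bindings are installed (the output module name here is just an example; use whatever backend you've configured):

    # Minimal speech-dispatcher client sketch (assumes python3-speechd).
    # Firefox talks to the same daemon; swapping the TTS backend is one line.
    import speechd

    client = speechd.SSIPClient("demo")    # connect to the running daemon
    client.set_output_module("espeak-ng")  # or any other configured backend
    client.set_rate(20)                    # -100..100; high rates stay intelligible
    client.speak("Hello from speech-dispatcher")
    client.close()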

Barbing

21 days ago

Fascinating. Might be part of why I’ve seen some folks have such love for old voices like Fred.

armcat

22 days ago

Oh this is sweet, thanks for sharing! I've been a huge fan of Kokoro and even set up my own fully-local voice assistant [1]. Will definitely give Pocket TTS a go!

[1] https://github.com/acatovic/ova

gropo

22 days ago

Kokoro is better for TTS by far.

For voice cloning, Pocket TTS is gated, so I can't tell.

echelon

22 days ago

What are the advantages of PocketTTS over Kokoro?

It seems like Kokoro is the smaller model, also runs on CPU in real time, and is more open and fine-tunable, with more scripts and extensions, etc., whereas this is new and doesn't have any fine-tuning code yet.

I couldn't tell an audio quality difference.

hexaga

22 days ago

Kokoro is fine-tunable? Speaking as someone who went down the rabbit hole... it really isn't. There's no training code available (as of the last time I checked), so you need to reverse engineer everything. Beyond that, the model is not good at doing voices outside the existing voicepacks: simply put, it isn't a foundation model trained on internet-scale data. It is made from a relatively small set of focused, synthetic voice data, so there's a very narrow distribution to work with. Going OOD immediately tanks perceptual quality.

There's a bunch of inference stuff though, which is cool I guess. And it really is a quite nice little model in its niche. But let's not pretend there aren't huge tradeoffs in the design: synthetic data, phonemization, lack of training code, sharp boundary effects, etc.

jamilton

22 days ago

Being able to voice clone with PocketTTS seems major; it doesn't look like there's any support for that with Kokoro.

echelon

22 days ago

Zero-shot voice clones have never been very good. Fine-tuned models hit natural speaker similarity and prosody in a way zero-shot models can't emulate.

If it were a big model trained on a diverse set of speakers that could remember how to replicate them all, then zero-shot would be a potentially bigger deal. But this is a tiny model.

I'll try out the zero shot functionality of Pocket TTS and report back.

Barbing

21 days ago

Would be curious to hear!

jhatemyjob

22 days ago

Less licensing headache, it seems. Kokoro says it's Apache-licensed, but it has eSpeak-NG as a dependency, which is GPL, which brings into question whether or not Kokoro is actually GPL. PocketTTS doesn't have eSpeak-NG as a dependency, so you don't need to worry about all that BS.

Btw, I would love to hear from someone (who knows what they're talking about) to clear this up for me. Dealing with potential GPL contamination is a nightmare.

miki123211

22 days ago

Kokoro only uses eSpeak for text-to-phoneme (a.k.a. grapheme-to-phoneme, G2P) conversion.

If you could find another compatible converter, you could probably replace eSpeak with it. The data could be a bit OOD, so you may need to fiddle with it, but it should work.

Because the GPL is outdated and doesn't really consider modern gen AI, what you could also do is generate a bunch of text-to-phoneme pairs with eSpeak and train your own transformer on them. This would free you from the GPL completely, and the task is easy enough that even a very small model should be able to do it.
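
A minimal sketch of that data-generation step, assuming espeak-ng is on PATH (-q suppresses audio output, --ipa prints the phoneme transcription); the corpus here is obviously a placeholder:

    import subprocess

    # Grapheme-to-phoneme via the espeak-ng CLI: -q suppresses audio,
    # --ipa prints the IPA transcription of the input text.
    def g2p(text: str) -> str:
        result = subprocess.run(
            ["espeak-ng", "-q", "--ipa", text],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Build (text, phonemes) pairs to train a small standalone G2P model on,
    # so inference no longer calls the GPL-licensed espeak-ng at all.
    corpus = ["the quick brown fox", "read that row of numbers aloud"]
    pairs = [(line, g2p(line)) for line in corpus]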

jcelerier

22 days ago

If it depends on eSpeak NG code, the complete product is 100% GPL. That said, if you are able to change the code to remove the eSpeak dependency, then the rest would revert to non-GPL (or even if it's a build-time option that you can disable, like FFmpeg with --enable-gpl).

seunosewa

22 days ago

Chatterbox-turbo is really good too. It has a version that uses Apple's GPU.

amrrs

22 days ago

Thanks for sharing your repo, looks super cool. I'm planning to try it out. Is it based on MLX or just HF transformers?

armcat

22 days ago

Thank you, just transformers.

lukebechtel

22 days ago

Nice!

Just made it an MCP server so Claude can tell me when it's done with something :)

https://github.com/Marviel/speak_when_done

tarcon

22 days ago

macOS already has some great built-in TTS capability, as the OS includes a natural-sounding voice. I recently built a similar tool that just runs the "say" command as a background process. Had to wrap it in a Deno server. It works, but with Tahoe it's difficult to consistently configure it to use that one natural voice and not the subpar voices downloadable in the settings. The good voice seems to be hidden somehow.
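
For reference, the core of such a wrapper is tiny; a minimal sketch in Python rather than Deno (the `say` flags are standard macOS ones; the voice name is just an example, not the hidden one):

    import subprocess

    # Speak text via macOS's built-in `say` without blocking the caller:
    # -v picks the voice, -r sets the rate in words per minute.
    def speak(text: str, voice: str = "Samantha", rate_wpm: int = 180) -> None:
        subprocess.Popen(["say", "-v", voice, "-r", str(rate_wpm), text])

    speak("Build finished.")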

supriyo-biswas

22 days ago

> The good voice seems to be hidden somehow.

How am I supposed to enable this?

tarcon

22 days ago

My mistake, it seems I was referring to the Siri voice, which appears to be the default. It sounds good. It is selectable and, to my surprise, even configurable in speed, pitch, and volume in the OS Accessibility settings -> System Voice -> click on the (i) symbol. (macOS Tahoe)

Fnoord

21 days ago

Or via $ say --voice "?"

codepoet80

22 days ago

I just set up Pushover to send a message to my phone for this exact reason! Trying out your server next!

singpolyma3

22 days ago

Love this.

It says MIT license, but then the README has a separate section on prohibited use that maybe adds restrictions making it non-free? Not sure of the legal implications here.

CGamesPlay

22 days ago

For reference, the MIT license contains this text: "Permission is hereby granted... to deal in the Software without restriction, including without limitation the rights to use". So the README containing a "Prohibited Use" section definitely creates a conflicting statement.

jandrese

22 days ago

The "prohibited uses" section seems to be basically "not to be used for crime", which probably doesn't have much legal weight one way or another.

WhyNotHugo

22 days ago

You might use it for something illegal in one country, and then leave for another country with no extradition… but you’ve lost the license to use the software and can be sued for copyright infringement.

mips_avatar

22 days ago

I think the only restriction that seems problematic is not being able to clone someone’s voice without permission. I think there’s probably a valid case for using it for satire.

Buttons840

22 days ago

Good question.

If a license says "you may use this, you are prohibited from using this", and I use it, did I break the license?

ethin

22 days ago

If memory serves, the license is the ultimate source of truth on what is allowed or not. You cannot add some section that isn't in the text of the license (at least in the US and other countries that use similar legal systems) on some website and expect it to hold up in court because the license doesn't include that text. I know of a few other bigger-name projects that try to pull these kinds of stunts because they don't believe anyone is going to actually read the text of the license.

HenrikB

22 days ago

The copyright holder can set whatever license they want, including writing their own.

In this case, I'd interpret it as them making up a new licence based on MIT, but their addendum makes it not MIT but something else. I agree with what others said; this "new" license has internal conflicts.

kaliqt

22 days ago

The license is clearly defined. It would be misleading, possibly fraudulent, for them to then override the license elsewhere.

Simply put, it's MIT-licensed. If they want to change that, they have to remove that license file OR clearly update it to be a modified version of MIT.

IshKebab

22 days ago

I think if they took you to court for cloning someone's voice without permission they would probably lose because this conflict makes the terms unclear.

Buttons840

21 days ago

An unclear license would default back to full copyright protection I would think.

yencabulator

18 days ago

Not necessarily. I believe many courts have a principle that an unclear agreement is read in favor of the party that did not write the agreement.

MatthiasPortzel

21 days ago

Tried to use voice cloning, but in order to download the model weights I have to create a Hugging Face account, connect it on the command line, give them my contact information, and agree to their conditions. The open-source part is just the client and chunking logic, which is pretty minimal.

syockit

22 days ago

From my understanding, the code is MIT, but the model isn't? What constitutes "Software" anyway? Aren't resources like images, sounds, and the like exempt from it (hence covered by usual copyright unless separately licensed)? If so, in the same vein, an ML model is not part of the "Software". By the way, the same prohibition is repeated on the Hugging Face model card.

iamrobertismo

22 days ago

Yeah, I don't understand the point of the prohibited use section at all, seems like unnecessary fluff.

pain_perdu

22 days ago

I'm psyched to see so much interest in my post about Kyutai's latest model! I'm part of a related team in Paris that's building on Kyutai's research to provide enterprise-grade voice solutions. If anyone is building in this space, I'd love to chat and share some of our upcoming models and capabilities, which I am told are SOTA. Please don't hesitate to ping me via the address in my profile.

armcat

22 days ago

Just want to say amazing work. It's really pushing the envelope of what is possible to run locally on everyday devices.

mgaudet

22 days ago

Eep.

So, on my M1 Mac, I did `uvx pocket-tts serve` and plugged in

> It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way—in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only

(the beginning of A Tale of Two Cities)

But the problem is that Javert skips over parts of sentences! E.g., it starts:

> "It was the best of times, it was the worst of times, it was the age of wisdom, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the spring of hope, it was the winter of despair, we had everything before us, ..."

Notice how it skips over "it was the age of foolishness," and "it was the season of Darkness,".

Which... Doesn't exactly inspire faith in a TTS system.

(Marius seems better; posted https://github.com/kyutai-labs/pocket-tts/issues/38)

Paul_S

22 days ago

All the models I tried have similar problems. When batching a whole audiobook, the only way is to run the TTS, then run a speech-to-text model to transcribe the output and check that you get the same text back.
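
A minimal sketch of that round-trip check, assuming faster-whisper for the transcription side (model size and similarity threshold are arbitrary choices):

    import difflib

    from faster_whisper import WhisperModel  # assumes faster-whisper is installed

    stt = WhisperModel("base", device="cpu")

    # Transcribe the generated audio and compare it against the source text;
    # chunks that skipped or mangled too much get flagged for regeneration.
    def roundtrip_ok(wav_path: str, expected: str, threshold: float = 0.9) -> bool:
        segments, _info = stt.transcribe(wav_path)
        heard = " ".join(seg.text.strip() for seg in segments)
        ratio = difflib.SequenceMatcher(None, expected.lower(), heard.lower()).ratio()
        return ratio >= threshold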

sbarre

22 days ago

Yeah, Javert mangled those sentences for me as well; it skipped whole parts and also moved words around

- "its noisiest superlative insisted on its being received"

Win10 RTX 5070 Ti

vvolhejn

22 days ago

Václav from Kyutai here. Thanks for the bug report! A workaround for now is to chunk the text into smaller parts, where the model is more reliable. We already do some chunking in the Python package. There is also a fancier way to do this chunking that ensures the stitched-together parts continue well (teacher forcing), but we haven't implemented that yet.
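
A minimal sketch of the chunking idea (not the package's actual chunker; the length budget is arbitrary):

    import re

    # Split at sentence-ending punctuation, then greedily regroup sentences
    # under a length budget; each yielded chunk is synthesized separately.
    def chunk_text(text: str, max_chars: int = 300):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        chunk = ""
        for sentence in sentences:
            if chunk and len(chunk) + len(sentence) + 1 > max_chars:
                yield chunk
                chunk = sentence
            else:
                chunk = f"{chunk} {sentence}".strip()
        if chunk:
            yield chunk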

mgaudet

21 days ago

Is this just sort of expected for these models? Should users of this expect only truncation or can hallucinated bits happen too?

I also find Javert in particular seems to put in huge gaps and spaces... side effect of the voice?

vvolhejn

16 days ago

> Is this just sort of expected for these models? Should users of this expect only truncation or can hallucinated bits happen too?

Basically, yes, sort of expected: we don't have detailed enough control to prevent it fully. We can measure how much it happens and train better models, but there's no 100% guarantee. The bigger the model, the less this happens, but this one is tiny, so it's not the sharpest tool in the shed. Hallucinated bits can theoretically happen, but I haven't observed it with this model yet.

small_scombrus

22 days ago

Using your first text block, 'Eponine' skips "we had nothing before us" and doesn't speak the final "that some of its noisiest".

I wonder what's going wrong in there

memming

22 days ago

Interesting; it skipped "we had everything before us," in my test. Yeah, not a good sign.

GaggiX

22 days ago

I love that everyone is making their own TTS model, as they are not as expensive to train as many other models. Also, there are plenty of different architectures.

Another recent example: https://github.com/supertone-inc/supertonic

nunobrito

22 days ago

Thank you. Very good suggestion with code available and bindings for so many languages.

nowittyusername

22 days ago

Thanks for the heads-up, this looks really interesting and the claimed speed is nuts.

NoSalt

22 days ago

> "You can also clone the voice from any audio sample by using our repo."

Ok, who knows where I can get those high-quality recordings of Majel Barrett's voice that she made before she died?

freedomben

22 days ago

TOS computer voice must be my computer's voice. And after every command I run, I need a "Working."

dale_glass

21 days ago

Is there any TTS engine that doesn't need cloning and has some sort of parameters one can specify?

Like, what if I want to graft TTS onto an existing text chat system and give each person a unique, randomly generated voice? Or want to try to get something that's not quite human, like some sort of alien or monster?

unleaded

21 days ago

You could use an old-school formant synthesizer that lets you tune the parameters, like eSpeak or DECtalk. eSpeak apparently has a Klatt mode, which might sound better than the default, but I haven't tried it.

bkitano19

21 days ago

You can use voice prompting; it's supported on ElevenLabs and Hume.

Imustaskforhelp

22 days ago

Perhaps I haven't talked to voice models that much, or the ChatGPT voice always felt weird and off because I knew it was going to a cloud server. But through Pocket TTS I discovered unmute.sh, which is open source, is (I think) from the same company as Pocket TTS, and can, I think, use Pocket TTS as well.

I saw some agentic models at 4B or so that can punch above their weight, and even some decent base models. I can definitely see them in a home-lab context without costing too much money.

I think unmute.sh, at least, is similar to / competes with ChatGPT's voice mode. It's crazy how good and effective open-source models are from top to bottom. There's basically just about anything for almost everyone.

I feel like the only true moat might exist in coding models. Some are pretty good, but it's the only area where people might pay 10x-20x more for the best (MiniMax/z.ai subscription fees vs. Claude Code).

It will be interesting to see whether we get another DeepSeek moment in AI that beats Claude Sonnet or similar. I think DeepSeek has DeepSeek 4 coming, so it will be interesting to see how/if it can beat Sonnet.

(Sorry for going off-topic)

StevenNunez

21 days ago

Great find! unmute was a trip to play with

Imustaskforhelp

20 days ago

You're welcome! Glad you appreciated it, man. I think Unmute is really cool and open source, but its deployment is a little on the complex side of things.

dust42

22 days ago

Good quality, but unfortunately it is English-only.

riedel

21 days ago

I am also quite irritated by the fact that many TTS projects fail to state which languages (and even dialects) they support. To support a really good workflow for many Europeans (and probably the rest of the world), one would actually need a multi-language model that also supports foreign words embedded in one's own language. I use a local notification reader on my smartphone (with SherpaTTS), and the mix of notification languages, as well as languages embedded in each other, makes the experience rather funny at times.

jiehong

22 days ago

Agreed.

I think they should have added the fact that it's English-only to the title, at the very least.

dust42

22 days ago

Yes, apart from voice cloning, nothing really new. Kokoro has been out for a long time, and it supports at least a few languages other than English. There are also Supertonic TTS and Soprano TTS. The latter is developed by a single guy, while Kyutai is funded with €150M.

  https://github.com/supertone-inc/supertonic 
  https://github.com/ekwek1/soprano
No affiliation with either.

phoronixrly

22 days ago

I echo this. For a TTS system to be in any way useful outside the tiny population of the world that speaks exclusively English, it must be multilingual and dynamically switch between languages pretty much per word.

Cool tech demo though!

kamranjon

22 days ago

That's a pretty crazy requirement for something to be "useful", especially something that runs so efficiently on CPU. Many content creators from non-English-speaking countries can benefit from this type of release by translating transcripts of their content to English and then running them through a model like this to dub their videos in a language that can reach many more people.

phoronixrly

22 days ago

You mean YouTubers? Who would have to (manually) synchronise the text to their video, especially when YouTube apparently offers voice-to-voice translation out of the box, to my and many others' annoyance?

littlestymaar

22 days ago

YouTube's voice-to-voice is absolutely horrible though. Having the ability for YouTubers to clone their own voice would make it much, much more appealing.

ethin

22 days ago

Uh, no? This is not at all an absurd requirement. Screen readers literally do this all the time, with classically built synthesizer voices, no AI required. eSpeak is an example, or MS OneCore. The NVDA screen reader has an option for automatic language switching, as does pretty much every other modern screen reader in existence. And absolutely none of these use AI models to do that switching, either.

kube-system

22 days ago

They didn’t say it was a crazy requirement. They said it was crazy to consider it useless without meeting that requirement.

ethin

22 days ago

That doesn't really change what I said, though. It isn't crazy to call it useless without some form of ALS (automatic language switching) either, given that old-school synthesis has been able to do it for 20 years or so.

echoangle

22 days ago

How does state of the art matter when talking about usefulness? Is old school synthesis useless?

ethin

21 days ago

No? But is it unreasonable to expect "state of the art" TTS to be able to do at least what old-school synthesis is capable of doing? Being "state of the art" means being the highest level of development or achievement in a particular field, device, procedure, or technique at a specific point in time. I don't think it's therefore unreasonable to expect supposed "state of the art" text-to-speech synthesis to do far better at everything old-school TTS could do and then some.

kube-system

21 days ago

> Being "state of the art" means being the highest level of development or achievement in a particular field, device, procedure, or technique at a specific point in time. I don't think it's therefore unreasonable to expect supposed "state of the art" text-to-speech synthesis to do far better at everything old-school TTS could do and then some.

Non sequitur. Unless the 'art' in question is the 'art of adding features', this phrase usually describes the quality of a very specific development; these are often not even feature-complete products.

bingaweek

22 days ago

This is a great illustration that nothing you ever do will be good enough without people whining.

phoronixrly

22 days ago

Excuse me for pointing out that yet another LLM tech demo is being presented for our attention.

Levitz

22 days ago

But it wouldn't be for those who "speak exclusively English"; rather, for those who speak English. Not only that, but it's also common to have the system language set to English, even if one's native language is different.

There are about 1.5B English speakers on the planet.

phoronixrly

22 days ago

Let's indeed limit the use case to the system language, let's say of a mobile phone.

You pull up a map and start navigation. All the street names are in the local language, and no, transliterating the local names into the English alphabet does not make them understandable when spoken by TTS. Not to mention localised foreign names, which are then completely mangled by transliteration into English.

You pull up a browser and open a news article in your local language to read during your commute. You now have to reach for a translation model first before passing the text to the English-only TTS software.

You're driving, and one of your friends Signals you. Your phone UI is in English, so you get a notification (interrupting your Spotify) saying 'Signal message', followed by 5 minutes of gibberish.

But let's say you have a TTS model that supports your local language natively. Well, since '1.5B English speakers' apparently exist on the planet, many texts in other languages include English or Latin names and words. Now you have the opposite issue -- your TTS software needs to switch to English to pronounce these correctly...

And mind you, these are just very simple use cases for TTS. If you delve into the use cases of people with limited sight, who experience the entire Internet and all mobile and desktop applications (often with poor localisation) via TTS, you see how monolingual TTS is mostly useless and would be swapped for a robotic old-school TTS in a flash...

> only that but it's also common to have system language set to English

Ask a German whether their system language is English. Ask a French person. I can go on.

VMG

22 days ago

> Ask a German whether their system language is English. Ask a French person. I can go on.

I'm German but my system language is English

Because translations often suck, are incomplete or inconsistent

numpad0

22 days ago

If you don't speak the local language, you can't decode spoken local-language names anyway. Your speech subsystems can't lock onto and sync with an audio track in a language you don't speak, let alone transliterate or pronounce it.

Multilingual doesn't mean language-agnostic. We humans are always monolingual, just multi-language hot-swappable if trained. It's more like being able to `make; make install` Docker, after which you can attach/detach into and out of alternate environments at the terminal to do things or take notes in and out.

People sometimes picture multilingualism as owning a single joined-together super-language in the brain. That usually doesn't happen. Attempting this, especially at a young age, can lead a person into a "semi-lingual" or "double-limited" state where they are not fully fluent in any particular language.

And so, criticizing someone for not devoting significant resources to an omnilingual TTS doesn't make much sense.

phoronixrly

22 days ago

> If you don't speak the local language, you can't decode spoken local-language names anyway

This is plainly not true.

> Multilingual doesn't mean language-agnostic. We humans are always monolingual, just multi-language hot-swappable if trained

This and the analogy make no sense to me. Mind you, I am trilingual.

I also did not imply that the model itself needs to be multilingual. I implied that the software that uses the model to generate speech must be multilingual and support language-change detection and switching mid-sentence.

knowitnone3

22 days ago

I'm Martian so everything you create better support my language on day 1

numpad0

22 days ago

> it must be multilingual and dynamically switch between languages pretty much per word

Not abundantly obviously satire, so interjecting: humans, including professional "simultaneous" interpreters, can't do this. This is not how languages work.

koakuma-chan

22 days ago

You can speak one language, switch to another language for one word, and continue speaking in the previous language.

numpad0

22 days ago

But that's my point. You'll stop, switch, speak, stop, switch, resume. You're not going to say "I was in 東京 yesterday" as a single continuous sentence. It has to be broken up into three separate utterances spoken back to back, even for humans.

jiehong

22 days ago

>"I was in 東京 yesterday"

I think it's the wrong example, because this is actually very common if you're a Chinese speaker.

Actually, people tend to say the name of the cities in their own countries in their native language.

> I went to Nantes [0], to eat some kouign-amann [1].

As a French, both [0] and [1] will be spoken the French way on the fly in the sentence, while the other words are in English. Switching happens without any pause whatsoever (because there is really only one single way to pronounce those names in my mind, no thinking required).

Note that with Speech Recognition, it is fairly common to have models understanding language switches within a sentence like with Parakeet.

numpad0

21 days ago

Okay, it's becoming clear that I'm in the wrong here with my insistence that languages don't mix and foreign words can't be inserted mid-sentence. Yet that is my experience, as well as the behavior of people sharing my language, incidentally including the GP, who suggested that one can always do the switching dance; people can if they want to, but normally don't. It's considered showing off if the inserted word can be understood at all.

Perhaps I have to admit that my particular primary language is officially the human equivalent of an esoteric language; the myth that it's a complex language is increasingly becoming obsolete (for good!), but maybe it still qualifies as an esoteric one that is not insignificantly more incompatible with others.

polshaw

22 days ago

I think this is totally wrong. When both parties speak multiple languages, this happens all the time. You see it more with English as the lender than the borrower, due to the reach the language has. Listen to an Indian or Filipino speak for a while; it's interspersed with English words ALL the time. It happens less in English, as there is no universal knowledge base of one specific other language, but it does happen sometimes when searching for a certain, je ne sais pas.

akshitgaur2005

22 days ago

Not really, most multilinguals switch between languages so seamlessly that you wouldn't even notice it! It has even given birth to new "languages"; take, for example, Hinglish!

echelon

22 days ago

English has more users than all but a few products.

nmstoker

22 days ago

It's impressive, but it's a shame that it's 2026 and, despite remarkably lifelike speech, so many models still fall down on common issues like heteronyms ("the couple had a row because they couldn't agree where to row their boat"), realistic number handling, and so on.

woadwarrior01

22 days ago

Yeah, most models are quite bad at it. The industry term for it is homograph disambiguation.

anon84873628

22 days ago

Let's undo the great vowel shift and modernize English spellings :-D

akx

22 days ago

It's pretty good. And for once, a codebase of high software-engineering quality, too!

All too often, new models' codebases are just a dump of code that installs half the universe in dependencies for no reason, etc.

snvzz

22 days ago

Relative to AmigaOS translator.device + narrator.device, this sure seems bloated.

Paul_S

22 days ago

The speed of improvement of TTS models reminds me of the early days of Stable Diffusion. Can't wait until I can generate audiobooks without infinite pain. If I were an investor, I'd short Audible.

asystole

22 days ago

An all-TTS audiobook offering is just about as appealing as an all-Stable-Diffusion picture gallery (that is, not at all).

echoangle

22 days ago

Isn’t it more like an art gallery of prints of paintings? The primary art is the text of the book (like the painting in the gallery), TTS (and printing a copy) are just methods of making the art available.

306bobby

22 days ago

I think it can be argued that audiobooks add to the art through the tone and inflection of the reader.

To me, what you're saying is the same as saying the art of a movie is in the script, and the video is just the method of making it available. And I don't think that's a valid take.

fluoridation

22 days ago

No, that's an incorrect analogy. The script of a movie is an intermediate step in the production process of a movie. It's generally not meant to be seen by any audiences. The script for example doesn't contain any cinematography or any soundtrack or any performances by actors. Meanwhile, a written work is a complete expressive work ready for consumption. It doesn't contain a voice, but that's because the intention is for the reader to interpret the voice into it. A voice actor can do that, but that's just an interpretation of the work. It's not one-to-one, but it's not unlike someone sitting next to you in the theater and telling you what they think a scene means.

So yes, I mostly agree with GP. An audiobook is a different rendering of the same subject. The content is in the text, regardless of whether it's delivered in written or oral form.

sysworld

21 days ago

There already are audiobooks on Audible that are 100% TTS; while they're playable, they're no substitute (yet) for a real human.

It's just too flat/dead compared to a human reader.

everyday7732

22 days ago

It's not perfect, but I already have a setup for doing this on my phone: add SherpaTTS and Librera Reader (both available free on F-Droid).

Set up SherpaTTS as the voice model for your phone (I like the en_GB-jenny_dioco-medium voice option, but there are several to choose from). Add an ebook to Librera Reader and open it. There's an icon with a little person wearing headphones, which lets you send the text continuously to your phone's TTS, using just local processing on the phone. I don't have the latest phone, but mine is able to process it faster than the audio is read, so the audio doesn't stop and start.

The voice isn't totally human-sounding, but it's a lot better than the Microsoft Sam days, and once you get used to it, the roboticness fades into the background and I can just listen to the story. You may get better results with Kokoro (I couldn't get it running on my phone) or similar TTS engines and a more powerful phone.

One thing I like about this setup is that if you want to swap back and forth between audio and text, you can. The reader scrolls automatically as it produces the audio, and you can pause it, read in silence for a while yourself, and later set it going from a new point.

gempir

22 days ago

I feel like TTS is one of the areas that has evolved the least. Small TTS models have been around for 5+ years and they've only gotten incrementally better. Giants like ElevenLabs make good-sounding TTS, but it's not quite human yet, and the improvements get smaller with each iteration.

rowanG077

22 days ago

Wouldn't Audible be perfectly positioned to take advantage of this? They have the perfect setup to integrate it into their offering.

Manfred

22 days ago

It seems more likely that people will buy a digital copy of the book for a few bucks and then run the TTS themselves on devices they already own.

howdareme9

22 days ago

Not likely at all; people pay for convenience. They don't want to do that.

johanyc

21 days ago

Yeah, Hacker News users keep thinking average consumers like to tinker like we do lol

pantalaimon

22 days ago

eBooks are much more expensive than an Audible subscription though.

potatoman22

22 days ago

I wouldn't say so. Audible gives you 1 book a month for $15. Most e-books I see are around $10.

donpdonp

22 days ago

It'd be nice to get some idea of what kind of hardware a laptop needs to run this voice model.

donpdonp

21 days ago

For example, how much disk space is needed? I started the uvx command and it began downloading hundreds of megabytes. How much CPU RAM and GPU RAM are necessary? Will an integrated Intel GPU work? Some ARM boards have a dedicated AI processor; are any of those supported?

febin

21 days ago

I've vibecoded a Rust port of Pocket TTS using candle.

https://github.com/jamesfebin/pocket-tts-candle

The port supports:

- Native compilation with zero Python runtime dependency

- Streaming inference

- Metal acceleration for macOS

- Voice cloning (with the mimi feature)

Note: This was vibecoded (AI-assisted), but features were manually tested.

OfflineSergio

22 days ago

This is amazing. The audio feels very natural and it's fairly good at handling complex text-to-speech tasks. I've been working on WithAudio (https://with.audio). Currently it only uses Kokoros. I need to test this a bit more, but I might actually add it to the app. It's too good to be ignored.

user

22 days ago

[deleted]

syntaxing

22 days ago

Is there something similar for STT? I'm using Whisper distil models and they work OK. Sometimes they get what I say completely wrong.

daemonologist

22 days ago

Parakeet is not really more accurate than Whisper, but it's much faster - faster than realtime even on CPU: https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3 . You have to use NeMo, though, or mess around with third-party conversions. (It also has a big brother, Canary: https://huggingface.co/nvidia/canary-1b-v2. And there's the confusingly named/positioned Nemotron speech: https://huggingface.co/nvidia/nemotron-speech-streaming-en-0...)
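
For reference, a minimal sketch of running Parakeet through NeMo, following the model card (assumes `nemo_toolkit` with the ASR extras is installed):

    import nemo.collections.asr as nemo_asr

    # Downloads the checkpoint from Hugging Face on first use
    asr = nemo_asr.models.ASRModel.from_pretrained(
        model_name="nvidia/parakeet-tdt-0.6b-v3"
    )
    outputs = asr.transcribe(["speech.wav"])  # expects 16 kHz mono audio
    print(outputs[0].text)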

jokethrowaway

22 days ago

Parakeet feels much more accurate in practice than Whisper; it was a real "a-ha" moment for me.

Of course, English only

satvikpendem

22 days ago

Keep in mind Parakeet is pretty limited in the number of languages it supports compared to Whisper.

smallerfish

22 days ago

Hopefully the browsers will improve their built-in TTS soon. It's still pretty much unusable unless you really need it.

sysworld

21 days ago

And OSes. macOS has some decent voices, but Kokoro is much better. Even this one is better.

Zardoz84

22 days ago

I miss the old days, when connecting an SP0256 to the Spectrum and making it speak looked like magic.

_ache_

22 days ago

It's very impressive! I mean, it's better than other <200M TTS models I've encountered.

In English it's perfect, and it's so funny in other languages. It sounds exactly like someone who doesn't actually speak the language but gives it a go anyway.

I don't know why Fantine is just better than the others in other languages. Javert seems to be the worst.

Try Jean in Spanish: « ¡Es lo suficientemente pequeño como para caber en tu bolsillo! » sounds a lot like someone who doesn't understand the language.

Or Azelma in French: « C'est suffisament petit pour tenir dans ta poche. » is very good. I mean, half the words have a Québécois accent and half a French one, but hey, it's correct French.

Però non capisce l'italiano. (But it doesn't understand Italian.)

lykahb

22 days ago

It'd be great if it supported stdin/stdout for text and WAV. Then it could be piped right into afplay.

gabrieldemarm

22 days ago

Gabriel from Kyutai here; we do support outputting WAV to stdout. We don't support reading text from stdin, but that should be easy enough to add. Feel free to drop a pull request!

exceptione

22 days ago

Question: can anyone recommend a TTS that automatically recognizes emotion from the text itself?

sofixa

21 days ago

Gradium (https://gradium.ai/), a commercial offshoot of Kyutai (the open-source lab), is focusing on emotion (both recognising emotion and understanding what emotion to use depending on context). I don't think any of their existing public models does that yet, but they demoed it pretty impressively at the ai-Pulse conference.

fluoridation

22 days ago

Chatterbox does something like that. For example, if the input is

"so and so," he <verb>

and the verb is not just "said" but "chuckled", "whispered", or "said shakily", the output is modified accordingly; or if there's an indication that it's a woman speaking, it may pitch up during the quotation. It also tries to guess emotive content from the textual content: if a passage reads angry, it may try to make it sound angry. That's more hit-and-miss, but when it hits, it hits really well. A very common failure case: imagine someone trying to psych themselves up, saying internally "come on, Steve, stand up and keep going"; it'll read it in a deeper voice, as if spoken by a WW2 sergeant to a soldier.

butz

22 days ago

How large is the model, and is it possible to train it to read other languages, not only English?

butz

22 days ago

After pip install pocket-tts, all dependencies are 7.4 GB. And it generates at 2x speed on CPU. Neat!

aki237

22 days ago

This is impressive.

I just tried some sample verses; it sounds natural.

But there seems to be a bug, maybe? Just for fun, I asked it to read the Real Slim Shady lyrics. It always seems to add one extra "please stand up" in the chorus. Anyone else see that?

gabrieldemarm

22 days ago

Hello, Gabriel from Kyutai here. Maybe it's related to the way we chunk the text? Can you post an issue on GitHub with the exact text and voice? I'll take a look.

aidenn0

22 days ago

I'm sure I'm being stupid, but every voice except "alba" I recognize from Les Misérables; is there a character I'm forgetting?

vvolhejn

22 days ago

Václav from Kyutai here. Yes, the original naming scheme was from Les Misérables, glad you noticed! We just stuck with Alba because that's the real name of the voice actor who provided the voice sample to us (see https://huggingface.co/kyutai/tts-voices); the other ones are either from pre-existing datasets or given anonymously.

kreelman

21 days ago

Had so much fun with this. Was able to get my favourite celebrities to warn me about things happening on this PC.

indigodaddy

22 days ago

Perfect timing, that is exactly what I am looking for, for a fun little thing I'm working on. The voices sound good!

g947o

22 days ago

I wonder if this could be adapted into an app that can run completely offline?

dhruvdh

22 days ago

Try `uvx pocket-tts serve`

bboplifa

18 days ago

It is similar to Chatterbox as far as realism, at half the speed and with no GPU needed, which leads me to wonder: why is Chatterbox so slow?

anonymous344

22 days ago

Doesn't seem to know Thai. Can anybody suggest a Thai TTS?

maxglute

22 days ago

Would be nice if the preview supported variable speed.

user

22 days ago

[deleted]

grahamrr

22 days ago

Voices sound great! I see the sample rate can be adjusted; is there any way to adjust the actual speed of the voice?

fuzzer371

22 days ago

Haven't we had TTS for like 20+ years? Why does AI need to be shoved into it all of a sudden? Total waste of electricity.

rhdunn

22 days ago

Using neural nets (machine learning) to train TTS voices has been around a long time.

[1] (2016 https://arxiv.org/abs/1609.03499) WaveNet: A Generative Model for Raw Audio

[2] (2017 https://arxiv.org/abs/1711.10433) Parallel WaveNet: Fast High-Fidelity Speech Synthesis

[3] (2021 https://arxiv.org/abs/2106.07889) UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation

[4] (2022 https://arxiv.org/abs/2203.14941) Neural Vocoder is All You Need for Speech Super-resolution

oybng

22 days ago

> If you want access to the model with voice cloning, go to https://huggingface.co/kyutai/pocket-tts and accept the terms, then make sure you're logged in locally with `uvx hf auth login` lol

andhuman

22 days ago

I’ve tried the voice clinking and it works great. I added a 9s clip and it captured the speaker pretty well.

But don’t do the fake mistake I did and use a hf token that doesn’t have access to read from repos! The error message said that I had to request access to the repo, but I’ve had already done that, so I couldn’t figure out what was wrong. Turns out my HF token only had access to inference.