

I do use music and video (Flash animations in video too), though yeah, they have been moved to slower drives (since it's just data, migration is easier). I use XFCE but don't use desktop icons.




At least projects has an understandable purpose. I don't use the Templates, Desktop, or Public folders at all, and aside from Desktop (which I know is a workflow thing that I don't even use) I would need someone to explain them to me. I'm guessing Public would be for a multi-user system, and Templates maybe for printing stuff (which I don't do).
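For what it's worth, on desktops that use xdg-user-dirs, these folders are defined in ~/.config/user-dirs.dirs, and per its man page an entry can be disabled by pointing it at $HOME. A sketch of turning off the unused ones (assuming xdg-user-dirs is what creates them on your setup):

```shell
# ~/.config/user-dirs.dirs -- read by xdg-user-dirs
# Setting an entry to "$HOME" disables that special folder,
# so xdg-user-dirs-update stops recreating it.
XDG_TEMPLATES_DIR="$HOME"
XDG_PUBLICSHARE_DIR="$HOME"
XDG_DESKTOP_DIR="$HOME"
```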
I mean, consistent sound is fully in line with what I'm saying. I'm fine with a robotic sound; the issue I have is that it can be grating for newer users. I just assumed that was something about how samples are used (compared to older speech synthesis). Is the sound actually part of the design that allows such high speed?
Even if it were, older-style synthesis could likely have that as a parameter or option (or just… a dedicated voice).
I've seen some videos of screen readers with a somewhat fast voice (not quite as fast as your link) that does sound better, with voices similar to DECtalk Paul. They don't always give the voice name, but I've seen some mention of IBMTTS, so it might be related (though current search results mostly turn up AI-service stuff that I'm not sure traces back to those old videos from 2016; either way, it might be some Paul derivative). EDIT: It might be ETI Eloquence?
It seems ETI Eloquence is both beloved in the blind community and something that has had support issues (proprietary abandonware). And I've seen one person say on the subject:
It’s frustrating to say the least. Eloquence haters are like, what’s the big deal, but I’m like, show me a voice that is fast and responsive, and doesn’t make me wanna claw my eyes out like eSpeak does. I don’t like concatenative voices because you hear where the splices take place and it’s just weird and off-putting. They are also not as snappy.
The problem I have with Dectalk is that it slurs like a drunk as you speed it up.
I just listened to the samples and it seems a bit hit-or-miss. Some of them still stumble over words, have stilted pacing, or just sound off in some other way (raspiness, speed). It seems to vary more voice-to-voice than by the quality setting.
I mean, I'm sure some of these voices are fine and probably better than other AI models in terms of performance… though they are a bit uncanny valley, and I still think a voice meant to sound robotic (while still having personality) is probably an easier target. I didn't notice anything like that in the samples, though I did see a couple of YouTube videos with a fairly accurate GLaDOS voice that mention Piper (though I know such a thing likely wouldn't be front and center due to licensing).
Honestly, a lot of newer TTS is worse than the 80s/90s stuff like DECtalk or PlainTalk (/MacinTalk), both of which, while not exactly human-sounding, actually sounded better (at least in a sort of aesthetic way). For example, Microsoft Sam (and whatever eSpeak's default voice is) is such a downgrade IMO.
I'm not sure how heavy Piper models are (in storage or at runtime), but I'm sure TTS could be better without anything neural.
Interesting. I guess I’m not that far along (sort of stalled now), and quite possibly may never really need that.
Though for this one:
creating a blank file and renaming to .txt before editing seems good enough for me.
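For reference, the same thing can be done in one step from a terminal; a minimal sketch (the filename is just an example):

```shell
# Create an empty .txt file directly, instead of creating a
# blank file and renaming it to .txt before editing.
touch notes.txt
```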