https://github.com/Manish7093/IBus-Speech-To-Text
I tried this on Fedora/Wayland previously, and it seemed to work in most applications. It uses VOSK models, which the GUI can download automatically - you just pick your language and desired model size when setting it up.
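If you want to poke at the recognition layer outside the GUI, VOSK's own Python bindings are enough for a rough sketch - note this is just the underlying library, not how the IBus engine is actually wired up, and the file name is a placeholder (assumes 16-bit mono PCM WAV):

    # Rough sketch: offline recognition with VOSK's Python bindings
    # (pip install vosk). Model(lang=...) auto-downloads a small model,
    # much like the IBus GUI does when you pick a language.
    import json
    import wave

    from vosk import Model, KaldiRecognizer

    model = Model(lang="en-us")          # downloads a small English model on first use
    wf = wave.open("speech.wav", "rb")   # placeholder file; 16-bit mono PCM
    rec = KaldiRecognizer(model, wf.getframerate())

    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if rec.AcceptWaveform(data):     # a complete utterance was recognized
            print(json.loads(rec.Result())["text"])

    print(json.loads(rec.FinalResult())["text"])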
When I was exploring this a few months ago, I noticed that speech recognition models have moved on quite a bit recently (e.g. https://github.com/openai/whisper, which can be run locally), but I didn’t see anything integrating them into an input method like the above.
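For what it’s worth, running Whisper locally is only a few lines with the openai-whisper package (assumes ffmpeg is on your PATH; the file name is again a placeholder) - the missing piece is plumbing output like this into an input method:

    # Rough sketch: local transcription with openai-whisper
    # (pip install openai-whisper; needs ffmpeg installed).
    import whisper

    model = whisper.load_model("base")       # downloads the model on first run
    result = model.transcribe("speech.wav")  # placeholder audio file
    print(result["text"])

Note this is batch transcription of a finished recording - an input method would additionally need streaming/chunked capture, which Whisper doesn’t do out of the box the way VOSK does.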




I think I’ve had the opposite experience. I use W11 for my day job, with a laptop connected to two monitors. It could just be the archaic, painful apps my employer uses, but it routinely moves windows to different screens if I lock the system and return a few minutes later. I’ve set the taskbar on each screen to only show the windows open on that screen, yet often a window will be open on one screen while its taskbar icon sits on another. To work around that I’ve developed a routine for when I return from breaks: I move every window to a different screen, then back again, and that ‘fixes’ it. It feels so stupid to have to do this on an OS built by one of the biggest companies on earth.
I think the equivalent issue on Linux stems from Wayland and/or the desktop environment not tracking window positions: Wayland doesn’t let applications set their own absolute positions, so saving and restoring placement is left to the compositor, and there’s ample developer ‘debate’ about if/how that should be handled.