At Mozilla, we work hard to make Firefox the best browser for you. That’s why we're always focused on building a browser that empowers you to choose your own path, that gives you the freedom to explore without worry or compromises. We’re excited to share more about the updates and improvements we ha...
It’s local. You’re not sending data to their servers.
We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities. The alt text is then processed on your device and saved locally instead of cloud services, ensuring that enhancements like these are done with your privacy in mind.
That’s somewhat awkward phrasing but I think the visual processing will also be done on-device. There are a few small multimodal models out there. Mozilla’s llamafile project includes multimodal support, so you can query a language model about the contents of an image.
Even just a few months ago I would have thought this was not viable, but the newer models are game-changingly good at very small sizes. Small enough to run on any decent laptop or even a phone.
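For anyone curious what that looks like in practice, llamafile's multimodal support is a one-liner from the terminal. This is a sketch based on my reading of the project's README, assuming the LLaVA llamafile bundle from its releases page; the exact file name and flags may differ by version:

```shell
# Download a multimodal (LLaVA) llamafile bundle from the releases
# page, then mark it executable.
chmod +x llava-v1.5-7b-q4.llamafile

# Ask the model to describe an image, entirely on-device -- nothing
# leaves the machine. --image attaches the picture, -e enables escape
# sequences in the prompt string, -p supplies the prompt.
./llava-v1.5-7b-q4.llamafile \
  --image photo.jpg -e \
  -p '### User: Describe this image for use as alt-text.\n### Assistant:'
```

Run with no arguments, the same bundle starts a local web UI and an HTTP server on localhost:8080, which is how an application could generate alt-text programmatically rather than one image at a time.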
It’s local. You’re not sending data to their servers.
At least use the whole quote.
Yeah, of course it's gonna look like it's not local if you take out the part where it says it's local.