Induced demand. Having it there means people will use it; that doesn’t mean they wanted it or asked Firefox to add it.
It’s simple… Don’t want it, don’t use it. There, problem solved! Now get over it…
This could have been an extension downloaded by only the people who want it and no one would have complained. Nothing like developing a full featured extension system and promptly not using it for a feature they know for a fact people have very polarized opinions about. What is it called when software ships with random shit nobody wanted or asked for and can’t be removed? Oh right! Bloatware!
“Oh just disable it” yeah just like the Android apps that came with my phone which have disable instead of uninstall in settings. A nice big middle finger to the user that makes it clear they don’t want them in control of their own devices.
If I wanted an “AI browser” or a browser with every feature under the sun related to web browsing or not, I’d be using Edge. Nobody is choosing Firefox because they want the same experience as a big tech corporate browser, they’d choose an actual big tech corporate browser in that case.
You clearly don’t know what bloatware is. But I guess you’d like a CLI interface to browse the web… You can get that, but you wouldn’t like it much.
Why disable it, when you don’t have to use it? It’s not critical in any way.
You seem like a swell guy who likes to complain about every little thing you don’t like… I bet you don’t even donate any money to Firefox, but just expect them to provide you with free software that you can complain about…
I do disable it. And then they add something else and I disable that, and they add something else and I disable that, and they add something else and I search the settings and I disable whatever got re-enabled.
Maybe you can understand why this is frustrating to people.
At the bare minimum, there should be a pop-up saying “do you want to enable the AI sidebar feature” or whatever where people can click yes or no.
Suggest it, don’t whine about it. There are tons of features/possibilities that I bet you never use, and that you don’t turn off. If you really believe that free software, that you don’t even support or contribute to, should listen to your complaints, then my friend, you are delusional.
I really can’t understand why it’s frustrating to get a free app/browser that cares about internet freedom, not being spied on, and so forth, just because it adds the ability to use AI, which many people demand today.
I don’t really use AI, but I really don’t whine about it being there. I don’t care, I don’t notice… Take a deep breath, and enjoy your free software, from a company that cares about all its users… And if that’s not enough, submit a suggestion, donate some money to help it evolve faster…
‘We pushed shit on everyone and a bunch of them are using it’ never ever ever vindicates pushing shit on people. People use shit that’s pushed on them! Do you know how low that bar is?!
I am a vocal defender of the underlying technology, and this is still some bullshit.
Cigarettes, credit scores, fidget spinners, AI slop on facebook, payday loans!
They may as well make the same argument about advertisements in general: “we filled Instagram with obnoxious ads and a bunch of them are clicking!”
It’s completely true that not all use is because of being forced to use it, though. Lots of people do love using these tools, whether for their own amusement or because they find them useful, whether we like it or not. He’s right.
Are you responding to someone who said, ‘all use is forced?’ Because I didn’t.
If you want this in your browser, great, there’s these things called extensions. The fact a goddamn LLM is standard, but DownThemAll is barely tolerated, speaks to completely fucked priorities at Mozilla.
Regular, non-expert internet users find it interesting, or even amusing, to generate images or videos using AI and to send that media to their friends. While sophisticated media aesthetes find those creations gauche or even offensive, a lot of other cultures find them perfectly acceptable. And it’s an inarguable reality that millions of people find AI-generated images emotionally moving.
You’re describing the internet equivalent of ignorance and figurative poverty, where people don’t have the knowledge or the resources to make better choices (fast-food deserts, faith healers, the PowerBall) because they are institutionally deprived. This is the same class of users/consumers that big companies seek out, for those exact same qualities, to shill the exact same AI garbage to, because they don’t know any better. This article is suggesting Mozilla flood the streets of a poor neighborhood with their own crank because their crank is cut less. I see the argument as bad, I see the premise as evil, and I see the rhetoric as manipulative. Respectfully, no.
Superbly well said! And essential for understanding and unpacking what “the people want it” really means when leveraged in arguments.
Now for the but: by the same token, there are applications of LLMs that can step in to do grunt work we don’t want to do (e.g. an on-the-go Greasemonkey replacement that can change pages in some customized way in response to natural-language requests), or engagements with art and creativity that, to everyone’s satisfaction, are something other than slop. But that shouldn’t be used to advocate for constantly lowering the lowest common denominator.
I like this take
Well said
Although this has been heavily downvoted, the author has a point: what do private, safe AI experiences in software mean for the common browser user? How does a company that was founded as an ‘alternative’ to a crummy default browser take the same approach? For those who do and will use the tech indiscriminately, what’s next for them?
Just as cookie/site separation eventually became a default setting in FF, or the ability to force a more secure private DNS, what could Mozilla consider on its own to prevent abuse, slop, LLM sycophancy / deception, undesired user-data training, tracking, and more? All that stuff we know is bad, but nobody seems to be addressing all too well. These big AI companies certainly don’t seem to be.
Rather than advocate for Not AI, how do we address it better for those who’ll simply hit up one of these big AI company websites like they would social media or Amazon?
Is it anonymous tokenization systems that prevent a big AI company from knowing who a user is, a kind of ‘privacy pass’? Is it text re-obfuscation at the browser level that garbles user input so that patterns can’t emerge? Is it even a straightforward warning to users about data hygiene?
The above is silly, and speculative, and mostly for conversation. But: maybe there’s something here for your everyday browser user. And maybe we ought to consider how we help them.
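To make one of those ideas slightly less hand-wavy: a crude version of the “garble it before it leaves the browser” approach could be as simple as pattern-based scrubbing on the client. The sketch below is purely hypothetical, the function name and patterns are mine, and real PII detection is much harder than a couple of regexes:

```python
# Toy sketch of browser-side "re-obfuscation": strip the most obvious
# identifiers from a prompt before it is ever sent to an AI provider.
# Purely illustrative; the patterns below only catch trivial cases.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_prompt(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text

if __name__ == "__main__":
    prompt = "Write a reply to jane.doe@example.com and call me at +1 555-867-5309"
    print(scrub_prompt(prompt))
    # -> "Write a reply to [email] and call me at [phone]"
```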
Because AI is a massive waste of resources that has yet to prove (to me at least) that it can provide any kind of real benefit to humanity that couldn’t be better provided by another, less resource-intensive means. Advocating for ‘common’ AI use is absurd in the face of the amount of energy and other resources consumed by that usage, especially in the face of a looming climate crisis being exacerbated by excesses like this.
LLMs may have valid uses, I doubt it, but they may. Using them to make memes and to generate answers of questionable veracity to questions that would be better resolved with a Google search is just dumb.
This. It burns too much electricity, wastes too much water and is wrong 70% of the time. Even if it’s private and offline, the problems with it go waaaaay beyond that.
These concerns about water and electricity are overblown (on the global level, locally it’s still a concern). Though with that said if AI-generated video takes off then it 100% will be a disaster.
The datacenter industry has planned build-outs that will require them to use 13-15% of the power in America, even after they add their own filthy new generation.
That’s insane.
The American power grid is so screwed.
I’m still in total despair over Trump killing that offshore wind farm that was near completion. It’s like he’s going out of his way to crush our hope for the future.
I thought you had just said concerns are overblown?
The American power grid is screwed because of Trump and AI image/video, not chatbots.
Yeah… Except that the Google search is now just another LLM hit…
Hi, hope you don’t mind me giving my two cents.
Local models are at their most useful in daily life when they scrape data from a reliable factual database or from the internet and then present and discuss that data with you through natural-language conversation.
Think about searching for things on the internet nowadays. Every search provider stuffs ads into the top results and intentionally obfuscates the links you’re looking for, especially if it’s a no-no term like pirating torrent sites.
Local LLMs can act as an advanced, generalized RSS reader that automatically fetches articles and sources: send STEM queries to the Wolfram Alpha LLM API and retrieve answers, fetch the weather directly from the OpenWeather API, retrieve definitions and meanings from a local dictionary, retrieve Wikipedia article pages from a local kiwix server, search arXiv directly for prior research. One of Claude’s big selling points is the research-mode tool call that scrapes hundreds of sites to collect up-to-date data on the thing you’re researching and presents its findings in a neat, structured way with cited sources. It does in minutes what would traditionally take a human hours or days of manual googling.
There are genuine uses for LLMs if you’re a nerdy, computer-homelab type of person familiar with databases and data handling who can code up/integrate some basic API pipelines. The main challenge is selling these kinds of functions in an easy-to-understand-and-use way to the tech-illiterate, who already think badly of LLMs and the like due to generative slop. A positive future for LLMs integrated into Firefox would be something trained to fetch from your favorite sources and sift out the crap based on your preferences/keywords. More sites would have APIs for direct scraping, and the key-adding process would be a one-click button.
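For anyone curious what “some basic API pipelines” looks like in practice, here’s a rough Python sketch of the dispatcher pattern: the model only decides which tool to call, and the facts come from real services. The endpoint URLs and parameter names are best-effort from memory rather than a tested integration, and the model itself is stubbed out as a hand-written tool call.

```python
# Minimal sketch of an LLM-as-tool-dispatcher pipeline: the "model" emits a
# structured tool call, and this code fetches the actual data.
import requests

def fetch_weather(city: str, api_key: str) -> dict:
    # OpenWeather current-weather endpoint (free API key required).
    r = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": api_key, "units": "metric"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()

def fetch_wikipedia_summary(title: str) -> str:
    # Wikipedia REST summary endpoint; a local kiwix server could be
    # swapped in here for fully offline lookups.
    r = requests.get(
        f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}",
        timeout=10,
    )
    r.raise_for_status()
    return r.json().get("extract", "")

# In a real setup a local model would emit something like
# {"tool": "wiki", "arg": "Firefox"}; here we just hand-write that call.
TOOLS = {
    "weather": fetch_weather,
    "wiki": fetch_wikipedia_summary,
}

def dispatch(tool_call: dict, **extra):
    """Run the requested tool and return its raw result for the model to summarize."""
    return TOOLS[tool_call["tool"]](tool_call["arg"], **extra)

if __name__ == "__main__":
    print(dispatch({"tool": "wiki", "arg": "Firefox"})[:200])
```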
This article had the opposite effect on me than I think the author intended.
Not a single mention that Mozilla acquired an ad company, tried to put user-profiling functionality in the browser for ad networks to use, changed their ToS to remove the part that said they don’t sell your data, and partnered with a sketchy “data protection” service that, it turned out, was owned by the same person as some people-finder data brokers.
Maybe, if we want an open-source project to be the bastion of private AI that respects your data and doesn’t surveil you, as Anil suggests, it should come from a company that we still trust?
Good piece. I have been largely impressed with Firefox’s “AI” features. They appear to be just normal useful features they are calling “AI” for marketing.
Which is awful marketing, because of how real people respond to that.