HeliBoard is a privacy-conscious and customizable open-source keyboard, based on AOSP / OpenBoard. It does not use the internet permission, and is thus 100% offline.


Any headline that misuses the word “martyr” raises red flags for me anyway. They didn’t die to further a cause. They died because a bunch of terrible people decided to kill innocent, uninvolved civilians.


This is the kind of stuff that trump and his ilk are all about.
To build on this, the anti-American sentiment is what fuels the xenophobia of that party. “Fuck America” is what makes people go “ya know what let’s just not have allies anymore” and start threatening everyone.
Nuance isn’t that hard. It’s simple, actually. Fuck the people who are in charge of and in support of these decisions. Fuck the people who actually gain from fucking over the rest of the world.
It’s not like Trump intends to listen to me or my trans husband anyway. If anything, even against the NRA’s wishes (by some miracle), he wants us to be ineligible to even own a gun.


In some cases, it appears to be the opposite: CEOs want to do mass layoffs, so they blame AI rather than taking accountability themselves. The Amazon layoffs reek of this.


“had people understood from the start the limitations of it, investment would’ve been more modest and cautious”
People did understand from the start. Those who do the investing just didn’t listen, or they had a different motive. These days it’s impossible to tell which.
And by “people” I’m not referring to random people, but those who have been closer than most to the development of these models. There has been an unbelievable amount of research done on everything from the effectiveness of specific models in niche fields to the ability to use an LLM as the backend for a production service. Again, no amount of negative feedback going up the chain has made a difference in the direction, so that only leaves a few explanations on why the investment continues to be so high.


Could also do this:
#[expect(lint, reason = "TODO: #issue")]
Edit: to clarify, #issue is a placeholder for the number of a related issue or task. You could also just explain the reason inline, but if you have a task tracker, it’s better to file a task instead.
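A minimal compilable sketch, assuming Rust 1.81+ (where `#[expect]` was stabilized); the lint, function name, and issue number here are made-up placeholders:

```rust
// Unlike #[allow], #[expect] warns when the expectation goes unfulfilled:
// if the code stops triggering the lint, the compiler tells you the
// attribute (and, ideally, the tracked task) can be closed out.
#[expect(unused_mut, reason = "TODO: #123: buffer will be mutated once parsing lands")]
fn make_buffer() -> Vec<u8> {
    let mut buf = Vec::with_capacity(16); // `mut` is unused today, so the lint fires and the expectation is met
    buf
}

fn main() {
    assert!(make_buffer().is_empty());
}
```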


There are enough grammatical errors for me to assume incompetence over malice here. In either case, the percentages were clearly calculated incorrectly, and I’d question the results and want to verify them myself.


A more complicated Tor, but super cool. It uses garlic routing rather than onion routing to further anonymize packets.
It’s worth reading into what it is (and especially those two terms) to get a better understanding.


Not exactly. Thinking models just inflate the context window to steer the model closer to your target. GANs are two models that compete against, and thereby train, each other, with the goal that one (or both) of them improves over time.
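For intuition, here’s a toy sketch of that adversarial loop (all simplifications mine: 1-D “data” fixed at 5.0, a generator that is a single parameter, a logistic discriminator, hand-derived gradients, no noise or minibatches):

```rust
// Toy adversarial loop: the discriminator D(x) = sigmoid(a*x + b) tries to
// tell the real sample (5.0) from the generator's output g, while the
// generator nudges g to fool D. Both models update on every step.
fn sigmoid(u: f64) -> f64 {
    1.0 / (1.0 + (-u).exp())
}

fn train(steps: usize) -> f64 {
    let real = 5.0; // the entire "dataset"
    let (mut a, mut b) = (0.0_f64, 0.0_f64); // discriminator parameters
    let mut g = 0.0_f64; // generator's single parameter (its output)
    let lr = 0.02;

    for _ in 0..steps {
        // Discriminator step: gradient ascent on log D(real) + log(1 - D(g)).
        let s_real = sigmoid(a * real + b);
        let s_fake = sigmoid(a * g + b);
        a += lr * ((1.0 - s_real) * real - s_fake * g);
        b += lr * ((1.0 - s_real) - s_fake);

        // Generator step: gradient ascent on log D(g), i.e. fool D.
        let s_fake = sigmoid(a * g + b);
        g += lr * (1.0 - s_fake) * a;
    }
    g
}

fn main() {
    // The generator drifts from 0.0 toward the real data at 5.0 as the
    // two models shape each other during the back-and-forth.
    let g = train(5_000);
    assert!((g - 5.0).abs() < 5.0);
    println!("generator output after training: {g:.3}");
}
```

The key point is that both parameter sets change every iteration, which is what distinguishes this from bouncing prompts between two frozen LLMs.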


This is unironically what I’ve seen people try to do, except they assume the second AI is correct.
Unrelated, but this is how GANs work to some extent. GANs train during the back-and-forth though, while LLMs do not.


Which outputs are accurate, and which ones are inaccurate? How could you tell? What steps did you take to verify accuracy? Was verifying it a manual process?


“Leave the rest of the world alone” as if Iran and Israel wouldn’t still be at war if the US was uninvolved lol.


Dolly’s name is already on a lot of stuff in Tennessee. Heck, Dollywood is named after her.
I don’t think there’s much objection to slapping her name on things either, which is rare.


Is this an esolang? Could be nice for code golf maybe.


Shh you’ll pop the bubble if you start talking sensibly. It’s not an ASIC—it’s a specialized piece of hardware optimized to execute a model with unparalleled performance. Now buy my entire stock of them and all the supply for the next two years please.
(Figuring out the compose combination for an emdash took longer than I’d like to admit lol)


Can’t speak for Git, but caching responses is a common enough problem that it’s built into the standard HTTP headers.
As for building a cache, you’d want to answer a few questions up front, the big one being eviction.
You seem locked into using Git, and if that’s the case, eviction is still a concern. Git repos can grow unbounded in size, and Git doesn’t give you many options for deciding which entries to keep.
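To make the eviction point concrete, here’s a hypothetical sketch of the kind of policy Git won’t give you: a tiny LRU cache (the names and the O(n) evict scan are my own simplifications, chosen for clarity over speed):

```rust
use std::collections::HashMap;

// Tiny LRU cache: every access stamps the entry with a monotonically
// increasing tick; when over capacity, the entry with the oldest tick
// (least recently used) is evicted.
struct LruCache<K, V> {
    capacity: usize,
    tick: u64,
    entries: HashMap<K, (u64, V)>, // key -> (last-use tick, value)
}

impl<K: std::hash::Hash + Eq + Clone, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { capacity, tick: 0, entries: HashMap::new() }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        self.tick += 1;
        let tick = self.tick;
        self.entries.get_mut(key).map(|(t, v)| {
            *t = tick; // refresh recency on read
            &*v
        })
    }

    fn put(&mut self, key: K, value: V) {
        self.tick += 1;
        self.entries.insert(key, (self.tick, value));
        if self.entries.len() > self.capacity {
            // Evict the least recently used entry (linear scan for brevity).
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (t, _))| *t)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&oldest);
            }
        }
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.get(&"a"); // touch "a" so "b" becomes the oldest entry
    cache.put("c", 3); // over capacity: evicts "b"
    assert!(cache.get(&"b").is_none());
    assert!(cache.get(&"a").is_some());
}
```

With Git as the store, you’d have to approximate something like this yourself (e.g. rewriting history to drop old blobs), which is exactly why it’s an awkward fit for a cache.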


Pre-LLM translation services also generally used AI, just via more traditional machine learning. The only difference is introducing a locally run LLM.
If it runs locally and is openly available, then it doesn’t make much difference to me whether it’s a traditional model or an LLM.


Surely the same applies to active military, right? Because that seems like an equally good idea here. Which is to say this is idiotic af.


Do we know it doesn’t?
What in the English language?
Thanks for clarifying that.