
So if there is actually some punishment handed down, any bets on what even more hellish scenario will rise up to replace the one where Google controls internet ads?
Not that we have any real info about who collects/uses what when you use the API.
I’m not too sure about varietals of any of the trees. One mango I know is called a lemon meringue mango, and as you might guess is very citrusy. It’s much smaller and paler than the usual Caribbean mangoes at the supermarket. Likewise not sure about either avocado. One is what’s colloquially called a Florida avocado. It’s huge - like bigger than a softball - with a smooth, bright green skin. The flesh is a bit watery, to the point where I use cheesecloth to wring it out if making guac. Milder than a Hass as well. The other variety is really interesting. It ripens on the tree until it is dark purple or almost black, like an eggplant. This one is delicious and slightly floral. I haven’t seen any fruits on either tree again this year, so something is definitely up. An arborist was over a few years ago to do some pruning and didn’t mention anything problematic about either, so it will likely take some research to figure out. I’m not aware of other avocado trees in the neighborhood, but certainly one possibility is that they’ve lost their pollinators.
Nobody knows! There’s no specific disclosure that I’m aware of (in the US at least), and even if there was I wouldn’t trust any of these guys to tell the truth about it anyway.
As always, don’t do anything on the Internet that you wouldn’t want the rest of the world to find out about :)
They’re talking about what is being recorded while the user is using the tools (your prompts, RAG data, etc.)
Anthropic and OpenAI both have options that let you use their API without training the system on your data (not sure if the others do as well). So if t3chat is simply using the API, it may be that they themselves are collecting your inputs (or not, you’d have to check the TOS), while their backend model providers are not. Or, who knows, they could all be lying too.
And I can’t possibly imagine that Grok actually collects less than ChatGPT.
Gene sequencing wasn’t really a thing (at least an affordable thing) until the 2010s, but once it was widely available archaeologists started using it on pretty much anything they could extract a sample from. Suddenly it became possible to track the migrations of groups over time by tracing gene similarities, determine how much intermarrying there must have been within groups, etc. Even with individual sites it has been used to determine when leadership was hereditary vs not, or how wealth was distributed (by looking at residual food dna on teeth). It really has revolutionized the field and cast a lot of old-school theories (often taken for truth) into the dustbin.
That humans came out of Africa once and then settled the rest of the world. In reality there was a constant migration of humans in and out of Africa for millennia while the rest of the world was being populated (and of course it hasn’t ever stopped since).
I love how much DNA analysis has completely upended so much “known” archaeology and anthropology from even just a couple decades ago.
That’s some fancy joinery!
Or is the arrow of time just our way of perceiving the universe’s inevitable increase in entropy?
What’s it called if you’ve done all of these?
Ok so you’d literally be making a regular Lemmy post to some particular community on some particular instance in that case, right?
Holy shit I actually agree with Donald Trump about something.
I’m a little lost. You mention hosting content on any instance, or on GitHub. How does that work? And if your content is elsewhere what is Lemmy doing? Authx?
Do you have any sources on this? I started looking around for the pre-training, training, and post-training impact of new input but didn’t find what I was looking for. In just my own experience with retraining (e.g. fine-tuning) pre-trained models, it seems to be pretty easy to add or remove data and get significantly different results from the original model.
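To illustrate what I mean (this is just a toy linear model with made-up numbers, nothing to do with real LLM fine-tuning): start from a “pre-trained” fit, continue training on a few extra points, and the model shifts noticeably toward the new data.

```python
# Toy illustration (not an LLM): continuing gradient descent from
# previously learned weights on a few extra, conflicting data points
# visibly shifts the model. All numbers are invented for demonstration.

def train(points, w=0.0, b=0.0, lr=0.01, epochs=500):
    """Plain gradient descent on mean squared error for y = w*x + b."""
    n = len(points)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in points:
            err = (w * x + b) - y
            dw += 2 * err * x / n
            db += 2 * err / n
        w -= lr * dw
        b -= lr * db
    return w, b

base = [(0, 0.0), (1, 1.0), (2, 2.0)]     # roughly y = x
w0, b0 = train(base)                      # "pre-trained" weights

extra = [(3, 9.0), (4, 12.0)]             # new data with a steeper trend
w1, b1 = train(base + extra, w=w0, b=b0)  # "fine-tune" from w0, b0

print(w0, w1)  # the slope moves well away from the original fit
```

The point isn’t the math, just that continued training on new data doesn’t leave the original model intact.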
I wonder how much “left-leaning” (a.k.a. in sync with objective reality) content would be needed to reduce the effectiveness of these kinds of efforts.
Like, if a million left-leaning people who still had Twitter/FB/whatever accounts just hooked them up to some kind of LLM service that did nothing but find hard-right content and reply with reasoned replies (so, no time wasted, just some money for the LLM), would that even do anything? What about similar on CNN or local newspaper comment sections?
It seems like there would have to be some amount of new content generated that would start forcing newly-trained models back toward the center unless the LLM builders were just bent on filtering it all out.
Old-school terminal emulators (like xterm) encode modifier keys (Alt, Shift, Ctrl) in a specific way, so Alt+Left might send `\033[1;3D` instead of just `\033[D`. But modern emulators (and DEs) bind a lot of keys for shortcuts and whatnot, so sometimes they send different encodings for certain modifier keys. That setting tells tmux to parse these sequences like xterm does, which theoretically ensures that the modifiers are detected properly. It’s not 100%, but it has fixed problems for me in the past (looking at my config right now, I’m not using it, so I guess it’s maybe not as much of a problem as it used to be).
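For the curious, the xterm-style modifier parameter is just 1 plus a bitmask (Shift=1, Alt=2, Ctrl=4), which is why Alt+Left comes through as `1;3`. A rough Python sketch of decoding it (the key/modifier names and the helper are purely illustrative, not any real library’s API):

```python
# Decode an xterm-style CSI arrow-key sequence such as "\x1b[1;3D".
# The modifier parameter is 1 + a bitmask: Shift=1, Alt=2, Ctrl=4.
import re

ARROWS = {"A": "Up", "B": "Down", "C": "Right", "D": "Left"}

def decode_arrow(seq: str):
    """Return (key, [modifiers]) for a CSI arrow sequence, else None."""
    m = re.fullmatch(r"\x1b\[(?:1;(\d+))?([ABCD])", seq)
    if not m:
        return None
    mods = []
    if m.group(1):
        bits = int(m.group(1)) - 1
        if bits & 1:
            mods.append("Shift")
        if bits & 2:
            mods.append("Alt")
        if bits & 4:
            mods.append("Ctrl")
    return ARROWS[m.group(2)], mods

print(decode_arrow("\x1b[D"))     # plain Left
print(decode_arrow("\x1b[1;3D"))  # Alt+Left
```

That’s the encoding tmux has to guess at when the emulator and the app disagree, which is exactly what that option is papering over.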
As for whether AI is slurping Lemmy posts: I know some of the instance admins have posted specifically about huge amounts of new bot traffic, and I’ve read articles about bots posting innocuous-looking questions or suggested fixes to GitHub repos specifically to get people to comment on them or improve/correct them. So yes, I’m 100% sure that everything written on the internet is being ingested by multiple LLM-makers now.
Those bat-signal eyes! I love simple creativity like this!
All too real.