



  • I’m not too sure about the varieties of any of the trees. One mango I know is called a lemon meringue mango, and as you might guess, it’s very citrusy. It’s much smaller and paler than the usual Caribbean mangoes at the supermarket. Likewise, I’m not sure about either avocado. One is what’s colloquially called a Florida avocado. It’s huge - bigger than a softball - with smooth, bright green skin. The flesh is a bit watery, to the point where I use cheesecloth to wring it out when making guac. Milder than a Hass as well. The other variety is really interesting. It ripens on the tree until it is dark purple or almost black, like an eggplant. This one is delicious and slightly floral. I haven’t seen any fruit on either tree again this year, so something is definitely up. An arborist was over a few years ago to do some pruning and didn’t mention anything problematic about either, so it will likely take some research to figure out. I’m not aware of other avocado trees in the neighborhood, but one possibility is certainly that they’ve lost their pollinators.




  • Anthropic and OpenAI both have options that let you use their APIs without your data being used to train their models (not sure if the others do as well), so if t3chat is simply using those APIs, it may be that t3chat itself is collecting your inputs (or not; you’d have to check the TOS) while the backend model providers are not. Or, who knows, they could all be lying too.





  • Gene sequencing wasn’t really a thing (at least not an affordable thing) until the 2010s, but once it was widely available, archaeologists started using it on pretty much anything they could extract a sample from. Suddenly it became possible to track the migrations of groups over time by tracing genetic similarities, determine how much intermarrying there must have been within groups, etc. Even at individual sites it has been used to determine when leadership was hereditary versus not, or how wealth was distributed (by looking at residual food DNA on teeth). It really has revolutionized the field and cast a lot of old-school theories (often taken for truth) into the dustbin.










  • I wonder how much “left-leaning” (a.k.a. in sync with objective reality) content would be needed to reduce the effectiveness of these kinds of efforts.

    Like, if a million left-leaning people who still had Twitter/FB/whatever accounts just hooked them up to some kind of LLM service that did nothing but find hard-right content and respond with reasoned replies (so, no time wasted, just some money for the LLM), would that even do anything? What about doing the same in CNN or local newspaper comment sections?

    It seems like at some volume of that kind of new content, newly trained models would start getting pushed back toward the center, unless the LLM builders were just bent on filtering it all out.
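
    Just to make the idea concrete, here’s a very rough sketch of the kind of bot loop I’m imagining. Everything in it is hypothetical - the platform client, the classifier, and the reply generator are stand-ins, not real APIs - it’s just the shape of the thing:

        # Hypothetical sketch of the "reasoned reply" bot idea above. None of the
        # platform methods here are real; they stand in for whatever site API and
        # paid LLM service you'd actually wire up.
        import time

        def looks_hard_right(post_text: str) -> bool:
            # Stand-in classifier; in practice this would itself be an LLM call,
            # not a keyword check.
            return any(k in post_text.lower() for k in ("placeholder claim a", "placeholder claim b"))

        def generate_reasoned_reply(post_text: str) -> str:
            # Stand-in for a call to a paid LLM API that drafts a calm,
            # source-backed rebuttal of whatever post_text is claiming.
            return "Here's what the published data actually shows about that: ..."

        def run_bot(platform, poll_seconds: int = 300):
            # `platform` is a hypothetical client exposing fetch_new_posts()
            # (returning objects with .id and .text) and reply(post_id, text),
            # standing in for Twitter/FB/a comment section/etc.
            while True:
                for post in platform.fetch_new_posts():
                    if looks_hard_right(post.text):
                        platform.reply(post.id, generate_reasoned_reply(post.text))
                time.sleep(poll_seconds)

    In practice both the classifier and the reply generator would be LLM calls, which is where the “just some money for the LLM” part comes in.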


  • Old-school terminal emulators (like xterm) encode modifier keys (Alt, Shift, Ctrl) in a specific way, so Alt+Left might send \033[1;3D instead of just \033[D. But modern emulators (and DEs) bind a lot of keys for shortcuts and whatnot, so sometimes they send different encodings for certain modifier combinations. That setting tells tmux to parse these sequences the way xterm does, which theoretically ensures that the modifiers are detected properly. It’s not 100%, but it has fixed problems for me in the past (looking at my config right now, I’m not using it, so I guess it’s not as much of a problem as it used to be).
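
    For anyone curious, the setting I’m referring to is (I’m fairly sure) tmux’s xterm-keys window option; in ~/.tmux.conf it looks like this:

        # Have tmux generate/parse xterm-style sequences for modified keys,
        # e.g. Alt+Left as \033[1;3D. On newer tmux versions this behavior may
        # already be the default (or handled differently), so check `man tmux`
        # for your version.
        setw -g xterm-keys on

    A quick way to see what your terminal actually sends is to press Ctrl+V at a shell prompt and then the key combo; readline will insert the raw escape sequence so you can read it off.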

    As for whether AI is slurping Lemmy posts: I know some of the instance admins have posted specifically about huge amounts of new bot traffic, and I’ve read articles about bots posting innocuous-looking questions or suggested fixes to GitHub repos specifically to get people to comment on them or improve/correct them. So yes, I’m 100% sure that everything written on the internet is being ingested by multiple LLM makers now.