It’s not always easy to distinguish between existentialism and a bad mood.

  • 15 Posts
  • 523 Comments
Joined 2 years ago
Cake day: July 2nd, 2023



  • But if hypothetically you ask me whether I know about any couples currently doing this ill-advised thing, where it has not yet blown up, then I do not confirm or deny; it would not be my job to run their lives. This is true even if all they’d face is a lot of community frowning about BDSM common wisdom, rather than legal consequences. It is very hard to get me to butt into two people’s lives, if they are both telling me to get out and mind my own business; maybe even to the point of it being an error on my part, because if I was erring there, I sure do know which side I would be erring on.

    This reads a lot like an “ixnay on the exualassaultsay” admonition to the broader rationalist community.





  • Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”

    Fun article.

    Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would happen either through genetic engineering or by plugging “an electrode into the brain”.

    Occasionally I feel that Altman may be plugged into something that’s even dumber and more under the radar than vanilla rationalism.




  • I feel the devs should just ask the chatbot themselves before submitting, if they feel it helps; automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly and executives’ careers are made on ‘I was the one who led AI adoption in company x (but left before any long-term issues became apparent)’.

    Plus the fact that it’s always weirdos like the “hating AI is xenophobia” person who are willing to go to bat for AI doesn’t inspire much confidence.





  • So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

    Eh, local LLMs don’t really scale; you can’t do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren’t currently on work laptops and VMs.

    Sparks-type machines will do better eventually, but for now they’re supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is a snail’s pace.
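    For what it’s worth, that kind of number is easy to sanity-check: single-stream decode is usually memory-bandwidth bound, so tokens/sec is roughly bandwidth divided by the bytes of weights streamed per token. A minimal sketch, where every figure below is a rough assumption rather than a measured spec:

```python
# Back-of-envelope decode throughput for a dense local model.
# Assumption: generation is memory-bandwidth bound, i.e. all weights
# are read from memory once per generated token.

def est_tokens_per_sec(params_b: float, bytes_per_param: float,
                       bandwidth_gbs: float) -> float:
    """Estimated tokens/sec = memory bandwidth / model size in GB."""
    model_gb = params_b * bytes_per_param  # weights streamed each token
    return bandwidth_gbs / model_gb

# Hypothetical numbers: a 70B model at 8-bit (~70 GB of weights) on a
# box with ~273 GB/s of memory bandwidth (roughly Spark-class).
tps = est_tokens_per_sec(params_b=70, bytes_per_param=1.0, bandwidth_gbs=273)
print(f"~{tps:.1f} tokens/sec")  # prints "~3.9 tokens/sec"
```

    A few tokens per second is the same ballpark as the figure quoted above; heavier quantization shrinks the weights and speeds this up proportionally, which is why low-bandwidth boxes lean so hard on 4-bit models.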