gay blue dog

https://lucario.dev/

  • 0 Posts
  • 102 Comments
Joined 11 months ago
Cake day: March 19th, 2024

  • i can admit it’s possible i’m being overly cynical here and it is just sloppy journalism on Raffaele Huang/his editor/the WSJ’s part. but i still think it’s a little suspect, on the grounds that we have no idea how many times they had to restart training due to the model borking, or about other experiments and hidden costs, even before things like the necessary capex (which goes unmentioned in the original paper, though they note using a 2048-GPU cluster of H800s that would put them down around $40m). i’m thinking in the mode of “the whitepaper exists to serve the company’s bottom line”.

    btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn
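
    as a minimal sanity check, that ~$40m capex figure is consistent with a simple multiply — assuming a hypothetical per-unit H800 price of roughly $20k, which is my assumption and not a number from the paper or the article:

```python
# back-of-the-envelope capex for the 2048-GPU H800 cluster mentioned
# in the DeepSeek paper. the per-unit price is an assumed/hypothetical
# street-price figure, NOT taken from the paper.
NUM_GPUS = 2048
ASSUMED_PRICE_PER_H800_USD = 20_000  # hypothetical

capex_usd = NUM_GPUS * ASSUMED_PRICE_PER_H800_USD
print(f"estimated cluster capex: ${capex_usd / 1e6:.1f}M")  # ~$41M
```

    and that’s just the cards themselves, before networking, power, datacenter buildout, and all the restarted runs mentioned above.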

  • “the media sucks at factchecking DeepSeek’s claims” is… an interesting attempt at refuting the idea that DeepSeek’s claims aren’t entirely factual. beyond that, intentionally presenting true statements that lead to false impressions is a kind of dishonesty regardless. if you mean to argue that DeepSeek wasn’t being underhanded at all and just very innocently presented their figures without proper context (that just so happened to spur a media frenzy in their favor)… then i have a bridge to sell you.

    besides that, OpenAI is very demonstrably pissing away at least that much money every time they add one to the number at the end of their slop generator

  • if you put this paragraph

    Corporations institute barebones [crappy product] that [works terribly] because they can’t be bothered to pay the [production workers] to actually [produce quality products] but when shit goes south they turn around and blame the [workers] for a bad product instead of admitting they cut corners.

    and follow it up with “It’s China Syndrome”… then it’s pretty astonishingly clear it is meant in reference to the perceived dominant production ideology of specifically China and has nothing to do with nuclear reactors

  • A WELL TRAINED AI can be a very useful tool.

    please do elaborate on exactly what kind of training turns the spam generator into a prescription-writer, or whatever other task that isn’t generating spam

    Edit: to add this is partly why AI gets a bad rap from folks on the outside looking in.

    i’m pretty sure “normal” folks hate it because of all the crap it’s unleashed upon the internet, and not just because they didn’t use the most recent models off the “Hot” tab on HuggingFace

    It’s China Syndrome but instead of nuclear reactors it’s AI.

    what are we a bunch of ASIANS?!?!???

  • maybe i’m a weirdo but i actually really like this a lot. if there weren’t armies of sycophants chanting outside all our collective windows about how AI is the future of gaming, i suspect it’d get a fairer reading: if you look at this “game” as an art object unto itself i think it is actually really engaging

    it reminds me of other “games” like Marian Kleineberg’s Wave Function Collapse and Bananaft’s Yedoma Globula. there’s one other on the tip of my tongue where you uploaded an image and it constantly reprojected the image onto the walls of a first-person walking simulator, but i don’t recall the name

  • there were bits and pieces that made me feel like Jon Evans was being a tad too sympathetic to Eliezer and others whose track record really should warrant a somewhat greater degree of scepticism than he shows, but i had to tap out at this paragraph from chapter 6:

    Scott Alexander is a Bay Area psychiatrist and a writer capable of absolutely magnificent, incisive, soulwrenching work … with whom I often strongly disagree. Some of his arguments are truly illuminatory; some betray the intellectual side-stepping of a very smart person engaged in rationalization and/or unwillingness to accept the rest of the world will not adopt their worldview. (Many of his critics, unfortunately, are inferior writers who misunderstand his work, and furthermore suggest it’s written in bad faith, which I think is wholly incorrect.) But in fairness 90+% of humanity engages in such rationalization without even worrying about it. Alexander does, and challenges his own beliefs more than most.

    the fact that Jon praises Scott’s half-baked, anecdote-riddled, Red/Blue/Gray trichotomy as “incisive” (for playing the hits to his audience), and his appraisal of the meandering transhumanist non-sequitur reading of Allen Ginsberg’s Howl as “soulwrenching” really threw me for a loop.

    and then the later description of that ultimately rather banal New York Times piece as “long and bad” (a hilariously hypocritical pair of adjectives for a self-proclaimed fan of some of Scott’s work to use), and the slamming of Elizabeth Sandifer as an “inferior writer who misunderstands Scott’s work”, for, uh, correctly analyzing Scott’s tendencies to espouse and enable white supremacist and sexist rhetoric… yeah, it pretty much tanks my ability to take what Jon is writing at face value.

    i don’t get how, after spending so many words being gentle but firm about Eliezer’s (lack of) accomplishments, he puts out such a full-throated defense of Scott Alexander (and the subsequent smearing of his “““enemies”””). of all people, why him?