Fucking piece of shit Sam Altman should be executed on a wooden plank and his corpse should be displayed on a public square for the next 100 years.

I was saving up to build a PC, but at this point I’d rather just throw my money in a fire than buy RAM or an SSD. Even Apple has better RAM pricing than these goons.

  • WilsonWilson [comrade/them, any]@hexbear.net · 13 points · 3 days ago

    I was thinking about throwing together a Linux box and running a smol DeepSeek model locally, but it looks like ima put that on hold for a while. I just checked eBay and used prices for 64GB are out of control as well. Any chance big AI is influencing this? They really seem desperate to get revenue on the books. I’m getting hit up for AI services at every turn now: Firefox is asking if I want my web browsing summarized by AI (NO), Google wants me to let them review my email (NO), GitHub is trying to force me to use Copilot (I use a free model), insurance tapeworms are advertising their use of AI, and I’m just waiting for Chase bank to push it on me (NO)
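
    For a sense of scale, the "smol DeepSeek model locally" setup can be fairly minimal. Here is a rough sketch assuming the ollama Python client, a running Ollama daemon, and the deepseek-r1:8b distill tag; the model tag and prompt are illustrative assumptions, not recommendations.

    ```python
    # Minimal sketch: querying a small DeepSeek distill through a local Ollama
    # daemon. Assumes `pip install ollama`, a running `ollama serve`, and that
    # the deepseek-r1:8b distill tag is published in the Ollama library.
    import ollama

    MODEL = "deepseek-r1:8b"  # small distill; a 1.5b or 7b tag if RAM is tight

    # Download the weights if they aren't cached locally yet.
    ollama.pull(MODEL)

    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": "Summarize why DDR5 prices spiked."}],
    )
    print(response["message"]["content"])
    ```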

      • BodyBySisyphus [he/him]@hexbear.net · 5 points · 3 days ago

        A couple months ago I tried running Ollama on a server blade with a ton of RAM and traditional processor cores, and performance was still pretty horrible. Is there a better way to do it, or do you just need a GPU?

        • stupid_asshole69 [none/use name]@hexbear.net · 6 points · 3 days ago

          You need a GPU. Anything Nvidia is fine, and more VRAM is better, but you can use system RAM to swap out what you’re doing.
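
          One way to read "use system RAM to swap out what you’re doing" is llama.cpp-style partial offload, where only some layers sit in VRAM and the rest stay in system RAM. A minimal sketch with the llama-cpp-python bindings; the GGUF path and layer count are placeholder assumptions.

          ```python
          # Sketch of partial GPU offload with llama-cpp-python
          # (pip install llama-cpp-python, built with CUDA support).
          # Layers that don't fit in VRAM stay in system RAM.
          from llama_cpp import Llama

          llm = Llama(
              model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # hypothetical local GGUF file
              n_gpu_layers=20,  # how many transformer layers to push into VRAM; tune to your card
              n_ctx=4096,       # context window; larger contexts need more memory
          )

          out = llm("Explain memory-mapped I/O in two sentences.", max_tokens=128)
          print(out["choices"][0]["text"])
          ```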

          If you’re doing it yourself, consider how smaller models built to do one specific thing can do the job. For example: a small 8 GB video card can do text inference, and its results can be sent to something like Kokoro on CPU for TTS, so you suddenly have a talking LLM on an eight-year-old budget GPU.
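
          A rough sketch of that split: a small text model served by Ollama on the GPU, with Kokoro-82M doing TTS on the CPU. The ollama and kokoro package calls below are written from memory of their READMEs and should be treated as assumptions that may drift between versions.

          ```python
          # Sketch: small LLM on the GPU generates text, Kokoro-82M on the CPU speaks it.
          # Assumes `pip install ollama kokoro soundfile` and a running Ollama daemon;
          # the Kokoro KPipeline call signature is an assumption taken from its README.
          import ollama
          import soundfile as sf
          from kokoro import KPipeline

          # 1) Text inference on the GPU via Ollama (an 8B model at Q4 fits an 8 GB card).
          reply = ollama.chat(
              model="llama3.1:8b",
              messages=[{"role": "user", "content": "Give me a one-line weather haiku."}],
          )
          text = reply["message"]["content"]

          # 2) Text-to-speech on the CPU with Kokoro; it yields 24 kHz audio chunks.
          tts = KPipeline(lang_code="a")  # 'a' selects American English voices
          for i, (_, _, audio) in enumerate(tts(text, voice="af_heart")):
              sf.write(f"reply_{i}.wav", audio, 24000)
          ```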

      • WilsonWilson [comrade/them, any]@hexbear.net · 4 points · 3 days ago

        Any tips on a specific server model(s)? Years ago I got a GIANT eBay 4U server which I intended to use as a security video server with a custom 16-channel ISA BNC card, but none of the Linux drivers worked. I then started to write my own driver based on Video4Linux, but the card was so old I couldn’t find whitepapers on the I/O specs, and my scope and logic analyzer couldn’t handle the frequencies for reverse engineering, so I just used it for a year as a 200 lb paperweight Samba server. If I could find something cheap and lightweight with modern hardware, that could possibly work. If it already came with the RAM, that would be a nice bonus.

        • unperson [he/him]@hexbear.net · 1 point · 2 days ago (edited)

          Memory bandwidth is what you need: an EPYC 9xx4 / 9xx5 with all 12 DDR5 DIMM channels populated, or a newish dual-socket Xeon with 16 modules.

          With fast memory the GPU is unnecessary until you want to do training, which needs a lot of FLOPS.
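
          The arithmetic behind that: on CPU, every generated token has to stream the full set of active weights through memory once, so bandwidth divided by model size gives a hard ceiling on tokens per second. A back-of-envelope sketch with rough, assumed figures:

          ```python
          # Rough upper bound on CPU decode speed: every token streams all active weights
          # once, so tokens/s <= memory bandwidth / model size. Figures are approximations.
          def peak_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
              return bandwidth_gb_s / model_size_gb

          # 12 channels of DDR5-4800 on one EPYC socket: 12 * 38.4 GB/s ~ 461 GB/s.
          epyc_bandwidth = 12 * 38.4

          # Dual-channel DDR4-3200 desktop for comparison: 2 * 25.6 GB/s ~ 51 GB/s.
          desktop_bandwidth = 2 * 25.6

          for name, bw in [("EPYC 12ch DDR5", epyc_bandwidth), ("desktop 2ch DDR4", desktop_bandwidth)]:
              for model, size in [("8B @ Q4 (~4.5 GB)", 4.5), ("70B @ Q4 (~40 GB)", 40.0)]:
                  print(f"{name}, {model}: <= {peak_tokens_per_second(bw, size):.0f} tok/s")
          ```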