• @abhibeckert@lemmy.world
    8 months ago

    > To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4.

    True - but iPhones do run a local language model now as part of their keyboard. It’s definitely not GPT-4 quality, but that’s to be expected given that it runs on a tiny battery and executes every single time you tap the keyboard. Apple has proven that useful language models can be run locally on the slowest hardware they sell. I don’t know of anyone else who’s done that.

    > Even coming close to GPT-3.5-turbo counts as impressive.

    Llama 2 is roughly GPT-3.5-Turbo quality and it runs well on modern Macs, which have a lot of very fast memory. Even their smallest fanless laptop can be configured with 24GB of memory, and it’s fast memory too - 800Gbps (100GB/s). That’s not quite enough memory to run the largest Llama 2 model, but it’s close. Their more expensive laptops have more memory, and it’s faster too - they can run the 70 billion parameter Llama 2 without breaking a sweat.
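
    As a rough sketch of why those memory sizes matter (the ~4.5 bits per weight figure is my own assumption for a typical 4-bit quantization, not something from Apple or Meta):

    ```python
    # Rough weight-memory estimate for quantized Llama 2 models.
    # Assumes ~4.5 bits per weight (4-bit quantization plus scaling data),
    # ignoring the KV cache and everything else running on the machine.
    BITS_PER_WEIGHT = 4.5

    def weight_gb(params_billion: float) -> float:
        return params_billion * 1e9 * BITS_PER_WEIGHT / 8 / 1e9

    for size in (7, 13, 70):
        print(f"Llama 2 {size}B: ~{weight_gb(size):.0f} GB of weights")

    # 7B:  ~4 GB  -> fits easily in 24GB
    # 13B: ~7 GB  -> fits easily in 24GB
    # 70B: ~39 GB -> too big for 24GB, comfortable at 64GB and up
    ```

    How close the 70B model gets to fitting in 24GB depends on how aggressively it’s quantized, which is why 24GB is “close” but not quite enough.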

    And on desktops, Apple sells Macs with 192GB of memory that’s faster still at 6.4Tbps (800GB/s). That’s slightly more memory (and for a lot less money) than the most expensive data center GPU NVIDIA sells; the NVIDIA part is faster at raw compute, but LLM inference is often limited by available memory and memory bandwidth rather than compute speed.
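
    To put the bandwidth point in numbers (a back-of-the-envelope sketch, assuming the decoder has to read every weight from memory once per generated token):

    ```python
    # Back-of-the-envelope decode speed ceiling: each generated token
    # requires reading the whole model from memory, so roughly
    #   tokens/s <= memory bandwidth / model size in memory.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
        return bandwidth_gb_s / model_gb

    MODEL_GB = 39  # ~70B Llama 2 at 4-bit quantization (assumed above)

    print(max_tokens_per_sec(100, MODEL_GB))  # 800Gbps (100GB/s) -> ~2.5 tok/s
    print(max_tokens_per_sec(800, MODEL_GB))  # 6.4Tbps (800GB/s) -> ~20 tok/s
    ```

    That’s why bandwidth matters as much as capacity: the big desktop Macs don’t just hold the 70B model, they can stream it through the chip fast enough to generate text at a usable rate.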

    • Quokka
      8 months ago

      You can even run Llama 2 locally on Android phones.
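
      For anyone curious what that looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings (the model filename is a placeholder for whatever quantized GGUF build of Llama 2 you’ve downloaded; on a phone the underlying llama.cpp engine is usually built directly, e.g. under Termux):

      ```python
      # Minimal local Llama 2 inference with llama-cpp-python
      # (pip install llama-cpp-python). The model path is a placeholder
      # for any locally downloaded, quantized GGUF file.
      from llama_cpp import Llama

      llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
      out = llm("Q: Name three uses for a local LLM. A:", max_tokens=64)
      print(out["choices"][0]["text"])
      ```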