Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model to achieve an average score of more than 80.

  • ArchAengelus@lemmy.dbzer0.com · 10 points · 10 months ago

    Unless you’re getting used datacenter-grade hardware for next to free, I doubt this. You need 130 GB of VRAM across your GPUs.

      • L_Acacia · 3 points · 10 months ago

        To run this model locally at GPT-4 writing speed you need at least 2× RTX 3090 or 2× RX 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
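
        The VRAM figures in this thread follow from simple arithmetic on parameter count and precision. A minimal sketch (weights only; KV cache and activation overhead would add more, so these are rough lower bounds, not exact requirements):

        ```python
        # Back-of-envelope VRAM estimate for holding an LLM's weights.
        # Ignores KV cache and activations, which add real overhead,
        # so treat these as illustrative lower bounds.

        def weight_vram_gb(n_params_billion: float, bytes_per_param: float) -> float:
            """Approximate GiB needed just to store the model weights."""
            return n_params_billion * 1e9 * bytes_per_param / 1024**3

        for label, bytes_pp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
            print(f"72B @ {label}: ~{weight_vram_gb(72, bytes_pp):.0f} GiB")
        ```

        At fp16 a 72B model needs roughly 134 GiB for weights alone, matching the "130 GB" figure above; a 4-bit quantization comes to about 34 GiB, which is why it fits on 2× RTX 3090 (48 GiB combined).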