Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200’s 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
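The headline speedup can be sanity-checked with quick arithmetic (the one-tenth-power figure is taken from the announcement, not derived here):

```python
# Sanity-check of the claimed speedup: Taalas HC1 vs Nvidia H200 on Llama 3.1 8B.
hc1_tps = 17_000   # tokens/sec claimed for the HC1
h200_tps = 233     # tokens/sec cited for the H200

speedup = hc1_tps / h200_tps
print(f"throughput speedup: {speedup:.0f}x")  # ~73x, matching the headline

# If the power draw really is one-tenth, the tokens-per-joule
# advantage compounds to roughly speedup * 10.
power_ratio = 10
print(f"approx. energy-efficiency advantage: {speedup * power_ratio:.0f}x")
```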

  • TehPers@beehaw.org

    “had people understood from the start the limitations of it, investment would’ve been more modest and cautious”

    People did understand from the start. Those who do the investing just didn’t listen, or they had a different motive. These days it’s impossible to tell which.

    And by “people” I’m not referring to random people, but to those who have been closer than most to the development of these models. There has been an enormous amount of research on everything from the effectiveness of specific models in niche fields to the viability of using an LLM as the backend for a production service. Again, no amount of negative feedback going up the chain has changed the direction, so that leaves only a few explanations for why the investment remains so high.