Promising stuff from their repo, claiming “exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points.”

https://github.com/nlpxucan/WizardLM

  • notfromhere · 1 year ago

    It is StarCoder fine-tuned on a new Wizard instruct dataset optimized for coding, so it follows instruct-style prompt formatting on top of the StarCoder base model.
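
    For anyone who wants to try it, here's a rough sketch of prompting it through transformers with that kind of instruct template. The exact template wording and the model id (`WizardLM/WizardCoder-15B-V1.0`) are my assumptions, so double-check the repo's README before relying on them:

    ```python
    # Hedged sketch: assumed Hugging Face model id and Alpaca-style instruct
    # template; verify both against the WizardLM repo's README.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "WizardLM/WizardCoder-15B-V1.0"  # assumption, not confirmed by the thread

    PROMPT_TEMPLATE = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    )

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    prompt = PROMPT_TEMPLATE.format(
        instruction="Write a Python function that checks whether a string is a palindrome."
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Slice off the prompt tokens so only the generated completion is printed.
    completion = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(completion, skip_special_tokens=True))
    ```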

    • Mixel@feddit.de · 1 year ago

      That sounds great honestly! Does that work with the newest ggml yet?

      • notfromhere · 1 year ago

        Doesn’t look like it; hopefully it will someday. I am stoked to try this one out.