Promising stuff from their repo, claiming “exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points.”

https://github.com/nlpxucan/WizardLM

  • @Mixel@feddit.de
    1 year ago

    So if I understand correctly, it is fine-tuned for coding? Or what exactly is this Wizard model doing?

    • @notfromhere
      1 year ago

      It is StarCoder fine-tuned on a new Wizard instruct dataset optimized for coding, so it follows instruct-style prompt formatting on top of the StarCoder base model.
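
      For anyone wondering what that instruct formatting looks like in practice, here is a minimal sketch of prompting it through transformers. The model id and the Alpaca-style template are my assumptions from the repo description, not something I have verified against their official inference code, so double-check before relying on it.

      ```python
      # Minimal sketch: prompting an instruct-tuned StarCoder variant with transformers.
      # The model id and the Alpaca-style template below are assumptions, not taken
      # from the official repo -- check their inference examples for the exact format.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "WizardLM/WizardCoder-15B-V1.0"  # assumed Hugging Face repo name

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      instruction = "Write a Python function that reverses a linked list."
      prompt = (
          "Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          f"### Instruction:\n{instruction}\n\n### Response:"
      )

      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
      print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
      ```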

        • @notfromhere
          1 year ago

          Doesn’t look like it; hopefully it will someday. I am stoked to try this one out.