Promising stuff from their repo, claiming “exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points.”
So if I understand correctly, it's fine-tuned for coding? What exactly is this Wizard model doing?
It's StarCoder, fine-tuned on a new Wizard instruct dataset optimized for coding models. So it follows instruct-style prompt formatting on top of the StarCoder base model.
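For anyone unfamiliar with what "instruct formatting" means in practice, here's a minimal sketch of an Alpaca-style prompt template, the kind of wrapper these instruct tunes typically expect. The exact template is an assumption here; check the model card for the real one before use:

```python
# Hypothetical instruct-style prompt template (Alpaca-style).
# The actual format WizardCoder expects may differ; this just
# illustrates the general idea of instruct prompt wrapping.
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:"
    )

print(build_prompt("Write a Python function that reverses a string."))
```

The base model was only trained on raw code, so wrapping your request in the template the fine-tune saw during training is what makes it actually follow instructions.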
That sounds great honestly! Does that work with the newest ggml yet?
Doesn't look like it yet; hopefully it will soon. I'm stoked to try this one out.