I’m new to the field of large language models (LLMs) and I’m really interested in learning how to train and use my own models for qualitative analysis. However, I’m not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance and advice on the best way to get started with LLM training and usage? Specifically, I’d appreciate insights on learning resources or tutorials, tips on preparing datasets, common pitfalls or challenges, and any other general advice or words of wisdom for someone just embarking on this journey.

Thanks!

  • BaroqueInMind · 8 months ago

    My setup is Win 11 Pro ➡️ WSL2 / Debian ➡️ Docker Desktop (for Windows)

    Should I install the Nvidia drivers within Debian even though the host OS already has them?

    • xcjs@programming.dev · 8 months ago (edited)

      I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)

      https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to it, you don’t want to install the Nvidia drivers inside WSL; you only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
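
      Once the toolkit is installed, a quick sanity check from inside WSL (or the container) is something like the snippet below. This assumes PyTorch is already installed in that environment (e.g. via pip install torch); swap in whatever framework you actually plan to run.

        # Minimal check that the GPU and CUDA runtime are visible from inside WSL.
        # Purely illustrative; assumes PyTorch is installed in this environment.
        import torch

        print("CUDA available:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("Device:", torch.cuda.get_device_name(0))
            print("CUDA runtime version:", torch.version.cuda)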

      You may also run into performance issues within WSL due to the virtual machine overhead.

      • BaroqueInMind · 8 months ago

        I did indeed follow that guide already, thank you for the respect; I am an idiot and installed the Nvidia WSL driver on top of the host OS driver, as well as the CUDA driver. So I’ll try again with only that guide and see what breaks.