awoo@burggit.moe to Copypastas/Greentexts @burggit.moeEnglish · 2 years ago

anon on why LLM waifu is the best

  • xdd@burggit.moe · 2 years ago

    I think an LLM waifu would cost me as much as dating a real girl for a while. The game was rigged from the start…

    • SquishyPillow@burggit.moe · 2 years ago

      If you are smart about the hardware you buy and are comfortable with building your own PC case, you can easily make an LLM rig for less money than a 3090.

      All you need is one of these old server boards with 11 PCIe slots, and as many older GPUs as you can afford. DeepSpeed will make all the difference c:

  • The Entire Circus@burggit.moe · 2 years ago

    Begun, the AI waifu wars have.

  • MomoNeedsCorrection@burggit.moe · 2 years ago

    OK, but does anyone have the sauce on the image used in the original Chan post?

    • awoo@burggit.moe (OP) · 2 years ago

      I can’t find the source, but maybe you’ll have better luck than me.

      • MomoNeedsCorrection@burggit.moe · 2 years ago

        I dug around and found it https://www.pixiv.net/en/artworks/109645337

  • Stargazer6343@burggit.moe · 2 years ago

    I’m gonna build myself an AI waifu. I’ve gotten KoboldAI and SillyTavern working, but I need a better GPU for performance.

    • awoo@burggit.moe (OP) · 2 years ago

      Consider giving https://github.com/LostRuins/koboldcpp or https://github.com/oobabooga/text-generation-webui (specifically the llama.cpp model loader) a try as well. llama.cpp lets you run a model off your CPU. I don’t personally use it, but apparently the performance is decent.
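      For reference, a CPU-only koboldcpp launch might look something like the sketch below. The model filename and thread count are placeholders, not from the thread; adjust them for your own hardware and whichever quantized model you downloaded.

      ```shell
      # Hypothetical example: serving a quantized GGUF model on CPU with koboldcpp.
      # --threads should roughly match your physical core count;
      # the model file here is a placeholder name.
      python koboldcpp.py --model mistral-7b.Q4_K_M.gguf --threads 8 --contextsize 4096
      ```

      Once it starts, koboldcpp exposes a local web UI that frontends like SillyTavern can connect to.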
