A user on the online forum 4chan has leaked a massive 270GB of data purportedly belonging to The New York Times. This leak includes what is claimed to be the source code for the newspaper’s digital operations.

  • That's a lot of data, but surely it's not all their articles, because I'd very much like to train Mixtral 8x7B on it along with 4chan data and stuff from the dark web. Surely there is a project where such a model is public and being trained on literally everything, regardless of legality.

    EDIT: why am I getting downvoted?

    • reddithalation@sopuli.xyz · 7 months ago

      You're getting downvoted because LLMs are simply not very good, they consume lots of energy (bad for the climate), and seemingly most people involved in AI hype want to replace human creativity or something.

      How about this: instead of training a not very trustworthy or useful LLM on lots of NYT, 4chan, and "dark web" data, you go read lots of NYT, 4chan, and dark web yourself to train your own (much better) model (your brain).

      • They are very good; they exceed the capability of many humans at many tasks. If consuming energy = bad for the environment, then all electric vehicles are bullshit too, because they have energy inefficiencies that petrol cars don't (thermodynamics is a bitch). You do realise the argument over whether asking an AI to create an image counts as art is literally the same argument that was once had over whether photography is art.

        LLMs are decently trustworthy, especially with chain-of-thought reasoning and tool capabilities. And they are extraordinarily useful; people wouldn't be using them and creating a market for them if they weren't. I can't train my brain and then share it for free for everyone on the internet to download, but I can do that with an AI.

        • reddithalation@sopuli.xyz · 7 months ago

          Have you seen that study about the accuracy of ChatGPT's answers to programming questions? (here) It's wrong 52% of the time, and I can say that I have personally tried to use ChatGPT for programming and ended up more confused rather than less. Maybe it's because I wasn't using GPT-4, or Claude, or whatever new model is the best, but I'm just sharing my experience.

          Also, I support electric vehicles because transportation is critical infrastructure we can't ditch yet, and it currently generates lots of energy use (and emissions), so replacing that with renewably generated energy is a good idea.

          LLMs consume lots of energy to train and use, but instead of literally moving millions of people around, they merely assist you in doing things you could have done without them, and with dubious accuracy. Look at the massive use of LLMs by students to cheat in school: yes, they may not get detected, but sometimes the output has noticeable flaws, which gets them into serious trouble for being too lazy to actually learn anything.

          If you want in-depth knowledge about a topic, just go look it up and learn from the source; it's more helpful than an LLM.