• Soup@lemmy.world · 21 hours ago

    “While she was reading it”? Like, as if it takes a computer long enough to read something that it will stop for a break and comment to others?

    These people are weird.

    • shneancy@lemmy.world · 3 hours ago

      i’m assuming there’s some character limit on that AI chat and the guy had to copy-paste it in parts

        • braxy29@lemmy.world · 19 hours ago

          what language were you reading it in? i certainly noticed the English translators changed between the first and second book, and i am curious to know how it all reads in Chinese.

          • nialv7@lemmy.world · 3 hours ago

            The first and third books were translated by Ken Liu, who is an amazing author and has won 4 Hugos himself. I don’t know why they picked someone else for the second book.

      • KazuyaDarklight@lemmy.world · 2 days ago

        Yeah, I was interested at the start and then it started, and continued, to go downhill for me. I kept going in spite of growing concern because I hoped they’d tie it up well in the end, but no. It may partially be cultural and I can also see some argument for artistic tragedy, but it just didn’t work for me.

        • Zirconium@lemmy.world · 2 days ago

          I don’t know why this happens to me, but sci-fi books with “boring” characters, like Raft by Stephen Baxter, are books I actually really enjoy; I find the characters to be realistic. Maybe I’m just a boring person irl

      • Zammy95@lemmy.world · 2 days ago

        Wow, guess I’m the outlier? I couldn’t put 3BP down, and then I got to The Dark Forest and loved it even more.

        Death’s End fell apart terribly though.

        It’s not that I liked how he wrote; I’m not sure if it was the translation, but he did not seem like a good writer at all. I was very intrigued by his plot and ideas, though.

        • I_Fart_Glitter@lemmy.world · 2 days ago

          That’s how I felt too. I was excited for Death’s End, but I put it down about halfway through and never got back to it.

          • Zammy95@lemmy.world · 22 hours ago

            I finished it, and it just got progressively more and more scrambled. I guess he was diagnosed with cancer while writing it and tried to cram as many of his ideas into this one novel as he could. And then it turned out it was a misdiagnosis or something? I can’t remember exactly. Yeah, definitely not his best work lol

          • P00ptart@lemmy.world · 1 day ago

            That was me for the first book, and then I hoped the TV show would get me back into it but I quit that halfway through as well.

          • Zammy95@lemmy.world · 2 days ago

            Yeah, I certainly wouldn’t recommend it to everyone as an absolutely fantastic book. I liked it a lot, but I can definitely see why some would not. I have really enjoyed what’s out so far of the new Netflix series though, and I’m probably more likely to just recommend that to people. It actually feels like thought was put into the characters now! Haha

      • clif@lemmy.world · 2 days ago

        Uh oh. I’ve had it (the first one) sitting on my shelf for a few weeks but need to finish my current series first.

      • DominusOfMegadeus@sh.itjust.works · 2 days ago

        I’ve tried like 6 times to keep slogging through. I was convinced it would be great if I just got to the point where it started being great. Now I feel validated.

  • logicbomb@lemmy.world · 2 days ago

    A lot of writers just write for themselves, and don’t really think or care about what other people might think when they read it. That’s perfectly fine, by the way. Writing can be a worthwhile effort even if nobody ever reads it.

    But if you want other people to enjoy it, then you have to keep them in mind. And honestly, this sort of feedback should be invaluable to authors, assuming it’s not an AI hallucination.

      • logicbomb@lemmy.world · 2 days ago

        Yeah, I was surprised when they said it could summarize the plot and talk about the characters. To my knowledge, an LLM’s only memory is its prompt, however long that is, so it shouldn’t be able to analyze an entire novel. I’m guessing if an LLM could do something like this, it would only be because the plot was already summarized at the end of the novel.

        • Tar_Alcaran@sh.itjust.works · 1 day ago

          Summarizing is entirely different from analyzing, though. It’s a “skill” that’s baked into LLMs, because that’s how they manage all information. But any analysis would be based on a summary, which loses a massive amount of resolution.
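As a toy illustration of that loss of resolution (every function name here is made up, not any real library): a pipeline that analyzes via summaries throws detail away before the analysis step ever runs.

```python
# Hypothetical sketch: a long text is chunked, each chunk is compressed
# to a few words, and any later "analysis" only sees the digest.

def summarize_chunk(chunk: str, keep_words: int = 5) -> str:
    # Crude stand-in summarizer: keep only the first few words.
    return " ".join(chunk.split()[:keep_words]) + " ..."

def digest_text(text: str, chunk_size: int = 200) -> str:
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return "\n".join(summarize_chunk(c) for c in chunks)

novel = "It was a dark and stormy night. " * 100  # stand-in for a novel
digest = digest_text(novel)
# The digest is far smaller than the source; whatever detail was dropped
# here can never reach the analysis built on top of it.
```

A real summarizer is smarter than "first five words", but the shape of the problem is the same: the analysis can only use what survived the compression.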

        • Frezik@lemmy.blahaj.zone · 2 days ago

          I once asked ChatGPT for an opinion on my blog and gave it the web address. It summarized some historical posts accurately enough, so it was definitely making use of the content, and not just my prompt. It flattered me by saying “the author shows a curious mind”. ChatGPT is good at flattery (in fact, it seems to be trained specifically to do it, and this is part of OpenAI’s marketing strategy).

          For the record, yes, this is a bit narcissistic, just like googling yourself. Except you do need to google yourself every once in a while to know what certain people, like employers, are going to see when they do it. Unfortunately, I think we’re going to have to start doing the same with ChatGPT and other popular models. No, I don’t like that, either.

          • ruan@lemmy.eco.br · 19 hours ago

            It was definitely making use of the content, and not just my prompt.

            Ok, to be simplistic about the actual workings: anything an LLM outputs is based only on the training data or the prompt; an LLM does not “create” anything.

            I really doubt your blog is represented significantly enough in the training data, so I can only assume that yes, the blog URL you referenced was web-scraped by ChatGPT, along with any other URLs linked from that main page that the scraper deemed relevant to the prompt, and all that text was in fact added to the full internal prompt that was processed by the actual LLM.
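A minimal sketch of that flow, assuming a hypothetical internal scraper (none of these names are a real API): the scraped text is spliced into the prompt before the stateless model ever runs.

```python
# Hypothetical sketch of how a chat service "reads" a pasted URL.

def fetch_page(url: str) -> str:
    # Stand-in for the service's internal scraper; a real one would do
    # an HTTP GET and strip the HTML down to text.
    return f"[text scraped from {url}]"

def build_prompt(user_message: str, urls: list[str]) -> str:
    # The scraped content is prepended to the user's message, forming
    # one big prompt for the model.
    scraped = "\n\n".join(fetch_page(u) for u in urls)
    return f"{scraped}\n\nUser: {user_message}"

prompt = build_prompt("What do you think of my blog?",
                      ["https://example.com/blog"])
# The model "knows" the blog only because its text now sits in the prompt.
```

So the flattering summary isn't recall from training; it's the model conditioning on text that was fetched moments earlier.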

          • oddlyqueer@lemmy.ml · 1 day ago

            I just had a horrifying vision of AI social-media tools that help you optimize your public presentation. Get AI critiques as well as tips for appearing more favorable. People do it because you need to be well received by AI evaluators to get a job. Gradually, social pressure evolves all public figures (famous or not) into polished cartoon figures. The real horror of the dead internet is that we’ll do it to ourselves.

        • baguettefish@discuss.tchncs.de · 2 days ago

          chatbots also usually have a database of key facts to query, and modern context windows can get very, very long (with the right chatbot). but yeah, the author probably imagined a lot of complexity, nuance, and understanding that isn’t there
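A toy sketch of such a fact store (the idea behind retrieval-augmented generation). Real systems match by embedding similarity rather than keywords, and every name and "fact" below is made up for illustration.

```python
# Hypothetical fact store: only entries matching the question are pulled
# into the prompt, so the context window stays small.

FACTS = {
    "protagonist": "The protagonist is a physicist.",
    "setting": "The story opens during a period of political upheaval.",
    "contact": "First contact happens via a radio transmission.",
}

def retrieve(question: str) -> list[str]:
    # Keyword matching stands in for the embedding search a real
    # retrieval system would use.
    q = question.lower()
    return [fact for key, fact in FACTS.items() if key in q]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Known facts:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who is the protagonist?")
# Only the matching fact lands in the prompt; the rest stay in the store.
```

The model then answers from whatever facts were retrieved, which can look like "remembering" the whole book even though only a few lines were in context.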

        • L0rdMathias@sh.itjust.works · 2 days ago

          Yes, but actually no. LLMs can be set up in such a way that they remember previous prompts; most if not all of the AI web services do not enable this by default, if they even allow it as an option.

          • logicbomb@lemmy.world · 2 days ago

            LLMs can be setup in such a way where they remember previous prompts

            All of that stuff is just added to their current prompt. That’s how that function works.
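A minimal sketch of that mechanism (all names hypothetical, not any real chat API): the "memory" is just the transcript re-sent as the next prompt, while the model itself stays stateless.

```python
# Hypothetical sketch: each turn, the full chat history is folded into
# one big prompt string; that string is the only "memory" the model sees.

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; a real model would generate text
    # conditioned on the whole prompt string.
    return f"(reply conditioned on {len(prompt)} chars of history)"

def chat_turn(history: list[str], user_message: str) -> list[str]:
    history = history + [f"User: {user_message}"]
    prompt = "\n".join(history)  # stuffing prior turns into the prompt
    reply = call_model(prompt)
    return history + [f"Assistant: {reply}"]

history: list[str] = []
history = chat_turn(history, "Here is part 1 of my novel ...")
history = chat_turn(history, "What did you think of part 1?")
# Once the transcript outgrows the context window, the oldest turns must
# be dropped or summarized -- the model cannot recall them any other way.
```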

      • ch00f@lemmy.world · 2 days ago

      “She listed three characters”

      AI does everything in threes. It likely picked three characters to dislike not because those three were bad, but because it always writes three bullets.

      • Ech@lemmy.ca · 1 day ago

        It didn’t “decide” to “not like” anything. It can’t do either.

    • Ech@lemmy.ca · 2 days ago

      assuming it’s not an AI hallucination.

      All output from an LLM is a “hallucination”. That’s the core function of the algorithm.

      • julietOscarEcho@sh.itjust.works · 5 hours ago

        I was a computer scientist at a time when early generative AI work refered to output as the model “dreaming”. Makes it sound kind of sweet. It was viewed as kind of kooky to run pattern recognition models forward…