• 𝙣𝙪𝙠𝙚
    English
    37
    edit-2
    11 months ago

    That’s a bit dramatic of a take. The AI makes recipe suggestions based on ingredients the user inputs. These users entered bleach, glue, and other non-food items specifically to generate non-food recipes.

    • chameleon
      34
      11 months ago

      If you’re making something to come up with recipes, “is this ingredient likely to be unsuitable for human consumption” should probably be fairly high up your list of things to check.

      Somehow, every time I see a generic LLM shoved into something that really doesn’t benefit from an LLM, those kinds of basic safety checks never seem to have occurred to the person making it.
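
      A minimal sketch of what that kind of pre-check could look like; the NON_FOOD set and suggest_recipe() here are hypothetical placeholders, not anything from the actual app:

      ```python
      # Reject obviously non-food inputs before anything reaches the recipe model.
      # NON_FOOD and suggest_recipe() are made up for illustration.
      NON_FOOD = {"bleach", "ammonia", "glue", "dish soap", "drain cleaner"}

      def suggest_recipe(ingredients: list[str]) -> str:
          unsafe = [item for item in ingredients if item.strip().lower() in NON_FOOD]
          if unsafe:
              raise ValueError(f"Refusing to generate a recipe: {unsafe} are not food")
          # Only now would the ingredient list be handed to the LLM.
          return f"Recipe using {', '.join(ingredients)}"

      print(suggest_recipe(["rice", "chicken", "lemon"]))   # fine
      # suggest_recipe(["water", "bleach"])                 # raises ValueError
      ```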

      • 𝙣𝙪𝙠𝙚
        English
        4
        11 months ago

        Fair point, I agree there should be such a check. For now it seems the only ones affected were people who intentionally tried to mess with it. It will be a hard goal to reach completely, because what’s ok and healthy for some could trigger a deadly allergic reaction in others. There’s always going to be some personal accountability on the person preparing a meal to understand that what they’re making is safe.

        • @DeltaTangoLima@reddrefuge.com
          English
          7
          11 months ago

          They’re a supermarket, and they own the data for the items they stock. There’s no reason they couldn’t have used their own product taxonomy to stop non-food items from being used in their poorly implemented AI (rough sketch at the end of this comment).

          Love how they blame the people that tried it. Like it’s their fault the AI was released for public use without thinking about the consequences. Typical corporate blame shifting.
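
          A rough sketch of that taxonomy check; the catalogue entries, category names, and is_edible_product() are all made up for illustration, not their actual product data:

          ```python
          # A supermarket already knows which of its own products are food, so
          # check the product's category before accepting it as an ingredient.
          CATALOGUE = {
              "chicken breast": "meat",
              "basmati rice": "pantry",
              "janola bleach": "cleaning",
              "pva glue": "stationery",
          }
          FOOD_CATEGORIES = {"meat", "pantry", "produce", "dairy", "bakery", "frozen"}

          def is_edible_product(name: str) -> bool:
              return CATALOGUE.get(name.lower()) in FOOD_CATEGORIES

          print(is_edible_product("Basmati Rice"))   # True
          print(is_edible_product("Janola Bleach"))  # False, so reject it up front
          ```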

    • Otter
      English
      2
      11 months ago

      Would it be better to have a massive list of food items to pick from?

      Should take care of bad inputs somewhat
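
      A quick sketch of how that pick-list could be enforced server-side; ALLOWED_INGREDIENTS and validate_selection() are purely illustrative:

      ```python
      # The UI only offers known food items, and the backend double-checks every
      # submission against that same list, so free text like "bleach" never
      # reaches the model. ALLOWED_INGREDIENTS is illustrative only.
      ALLOWED_INGREDIENTS = {"chicken", "rice", "broccoli", "cheddar", "eggs", "tomatoes"}

      def validate_selection(selected: list[str]) -> list[str]:
          rejected = [s for s in selected if s.lower() not in ALLOWED_INGREDIENTS]
          if rejected:
              raise ValueError(f"Not on the ingredient list: {rejected}")
          return selected

      print(validate_selection(["chicken", "rice"]))    # OK
      # validate_selection(["chicken", "bleach"])       # raises ValueError
      ```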