• Stamets@lemmy.worldOP

    Bruh. If AI is being taught to drive cars on the open road, then I feel like using cameras to detect what’s in your fridge is pathetically easy in comparison, and very much an AI task.

    • ricecake@sh.itjust.works

      That’s how you get weird things like the AI determining that your favorite items are jam, baking soda and whatever you left at the back of your fridge to rot for six months.

      It is easy to detect what’s in your fridge. We have that today on some smart fridges.

      The problem to be solved, though, is:

      • what’s in your fridge
      • what’s not in your fridge
      • what do you consume vs throw away
      • what do you buy
      • where do you shop
      • what prices are available
      • what’s the best way to minimize cost and store trips
      • what’s your metric for how to balance that

      Of those things, AI is really only helpful for determining the metric for how much money you need to save to add another grocery stop, and knowing that the orange blob is probably baking soda.
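
      As a rough sketch of what that “when is another store stop worth it” check could look like (the numbers, names, and the “hassle cost” here are made up for illustration, not anything a real product does):

```python
# Rough sketch: is adding one more store to the trip worth it?
# The numbers, names, and "hassle cost" are made up for illustration.

def extra_stop_worth_it(savings_at_store, detour_cost, hassle_cost=2.00):
    """True if the money saved at an extra store beats the cost of
    getting there plus some personal 'hassle' threshold."""
    return savings_at_store > detour_cost + hassle_cost

# e.g. saving $6 on a few items vs. ~$3 of gas/time for the detour
print(extra_stop_worth_it(savings_at_store=6.00, detour_cost=3.00))  # True
print(extra_stop_worth_it(savings_at_store=4.00, detour_cost=3.00))  # False
```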

      Most of the rest of that is manual inputs or relatively basic but tedious programming, and those are the parts that would be the most annoying.
      I say this as a person who has repeatedly utterly failed to use https://grocy.info/ because actually recording what you eat vs throw away is painful.

      This isn’t a great AI problem not because AI can’t help, but because the tedious part isn’t the part it can help with right now.

      • decisivelyhoodnoises@sh.itjust.works

        Yeah, it’s not even remotely possible for someone to manually input that they ate 2 slices of cheese, 20 grams of butter, and 20 grams of jam every time they do so. And it’s not feasible for AI to see inside closed packages or jars to tell how much has been eaten.

    • IronicDeadPan@lemmy.world

      Probably would make sense to start with the receipts for what you purchase and aggregate lists from there (pantry, freezer, fridge, etc.).
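
      Something like this, as a very rough sketch (the receipt format and the item-to-location mapping are invented for illustration):

```python
# Rough sketch: aggregate receipt lines into per-location lists.
# The receipt format and the item->location mapping are made up.

from collections import defaultdict

LOCATION_FOR_ITEM = {
    "milk": "fridge",
    "frozen peas": "freezer",
    "pasta": "pantry",
}

def aggregate_receipt(receipt_lines):
    """receipt_lines: (item_name, quantity) pairs pulled from a scanned receipt."""
    inventory = defaultdict(list)
    for item, qty in receipt_lines:
        location = LOCATION_FOR_ITEM.get(item, "pantry")  # default guess
        inventory[location].append((item, qty))
    return dict(inventory)

print(aggregate_receipt([("milk", 1), ("frozen peas", 2), ("pasta", 3)]))
# {'fridge': [('milk', 1)], 'freezer': [('frozen peas', 2)], 'pantry': [('pasta', 3)]}
```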

    • fl42v@lemmy.ml

      Yeah, kinda. Except you’ll likely need a camera or two for each shelf of the fridge (assuming the layout stays unchanged), and you also have to make sure they don’t get covered with ice/spilled milk/whatever or blocked by a box of stuff. Aaaalternatively, you install a receipt scanner and a touch screen that asks you what you took and updates an internal db accordingly.
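
      The “updates an internal db” bit is the easy part; here’s a minimal sketch, assuming a single-table SQLite layout I just made up:

```python
# Rough sketch of the receipt-scanner + touch-screen idea: receipts add
# stock, the touch screen records what you took out. Table layout invented.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty REAL)")

def add_from_receipt(item, qty):
    db.execute(
        "INSERT INTO stock (item, qty) VALUES (?, ?) "
        "ON CONFLICT(item) DO UPDATE SET qty = qty + excluded.qty",
        (item, qty),
    )

def took_out(item, qty):
    db.execute("UPDATE stock SET qty = MAX(qty - ?, 0) WHERE item = ?", (qty, item))

add_from_receipt("milk", 2)
took_out("milk", 1)
print(db.execute("SELECT item, qty FROM stock").fetchall())  # [('milk', 1.0)]
```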

      • Stamets@lemmy.worldOP

        No, not even kinda. Fully. Amazon has stores you can walk in and take whatever you want off the shelf and leave. If you put it back somewhere else, even if not on the same shelf, it can still track that.

        A fridge is a joke.

        • Eccentric@sh.itjust.works

          I actually work in this field and it’s a lot more complicated than it sounds. When you’re training AI to recognize products in a store, you have a set list of products it needs to be trained on. A person might go to many different stores, which increases the possible variation of products exponentially. Amazon’s model is also much more complex than just cameras, involving weight sensors in the shelving, pressure detection, and facial recognition. A store where everything is laid out in predictable, well-lit, organized rows is already a nightmare. A fridge, even if it’s way smaller, is way, way less predictable.
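
          To make the “set list of products” point concrete, here’s a toy sketch of that closed-set assumption (not any real model; the labels and threshold are invented):

```python
# Toy sketch of the closed-set assumption: the model can only answer with
# a label it was trained on; anything from a store it has never seen has
# to land in an "unknown" bucket. Labels and threshold are made up.

KNOWN_PRODUCTS = ["store_brand_milk_1l", "store_brand_jam_370g", "baking_soda_500g"]

def classify(scores, threshold=0.6):
    """scores: one confidence per KNOWN_PRODUCTS entry, e.g. from a vision model."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < threshold:
        return "unknown"  # the jar from the shop across town
    return KNOWN_PRODUCTS[best]

print(classify([0.1, 0.8, 0.1]))   # 'store_brand_jam_370g'
print(classify([0.3, 0.35, 0.3]))  # 'unknown'
```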

        • ImpossibilityBox@lemmy.world

          A typical Amazon store that I’ve been to is around 12,000-16,000 square feet. A refrigerator is approx. 20-25 cubic feet of real estate.

          Miniaturization of any system is always going to be a massive hurdle.

          Amazon uses biometric recognition to determine whether a person has picked something up, plus RFID tags, weight sensors, cameras, laser gates, and probably some other things they aren’t telling us about.

          They also know the specific list of items in the store and have 3D models of where each item is. Nothing unexpected.

          For the fridge to work, it would need to know every product ever made and have accurate, reliable scans of each product. Sure, it might be able to find SOME of the same type of item, but it will only work once it can find the EXACT item that I want every time.

          Good luck finding my favorite brand of Guachujung that can’t be purchased online and is only available from a shady mom and pop grocery in Asia town.

          LASTLY… what’s a camera going to do with this:

    • CeeBee@lemmy.world

      then I feel like cameras to detect what’s in your fridge is pathetically easy in comparison

      But you’re skipping over a huge amount of missing context. It’s context we (as humans) take for granted. What’s the difference between a jar and a bottle? Is the cream cheese in a tub or in a little cardboard container? Then it would need to be able to see all the items in a fridge, know the expiration date of each thing, know what you want to get, how quickly something gets used, etc.

      Some of those things are more straightforward, and some of them need data well beyond “this container has milk”. The issue isn’t processing all the data, but acquiring it consistently and reliably. We humans are very chaotic with how we do stuff in the physical world. Even the most organized person would throw off an AI system every so often. It’s the reason self-driving cars are not a reality yet and won’t be for a while.
      Some of those things are more straightforward, and some of them need data well beyond “this container has milk”. The issue isn’t processing all the data, but acquiring it consistently and reliably. We humans are very chaotic with how we do stuff in the physical world. Even the most organized person would throw off an AI system every so often. It’s the reason self driving cars are not a reality yet and won’t be for a while.