• WiildFiire@lemmy.world

      It’ll be kept within product marketing and, I dunno how, but it would absolutely be used to see what they can raise prices on

    • CeeBee@lemmy.world

      It’s getting there. In the next few years, as hardware gets better and models get more efficient, we’ll be able to run these systems entirely locally.

      I’m already doing it, but I have some higher end hardware.

        • CeeBee@lemmy.world

          Stable Diffusion SDXL Turbo model running in Automatic1111 for image generation.

          Ollama with Ollama-webui for an LLM. I like the Solar 10.7B model. It’s lightweight, fast, and gives really good results.

          I have some beefy hardware that I run it on, but it’s not necessary to have.
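
          Not from the thread, but for anyone who would rather script this than click through the Automatic1111 web UI, here is a minimal sketch using Hugging Face’s diffusers library with the public SDXL Turbo checkpoint (settings follow its model card; assumes a CUDA GPU with enough VRAM):

          ```python
          # Single-step text-to-image with SDXL Turbo via diffusers. The model is
          # distilled for one inference step with guidance disabled.
          import torch
          from diffusers import AutoPipelineForText2Image

          pipe = AutoPipelineForText2Image.from_pretrained(
              "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
          )
          pipe.to("cuda")

          image = pipe(
              prompt="a watercolor painting of a lighthouse at dusk",
              num_inference_steps=1,   # single step is the intended SDXL Turbo usage
              guidance_scale=0.0,      # classifier-free guidance is disabled
          ).images[0]
          image.save("lighthouse.png")
          ```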

        • Ookami38@sh.itjust.works

          Depends on what AI you’re looking for. I don’t know of an LLM (a language model, think ChatGPT) that works decently on personal hardware, but I also haven’t really looked. For art generation though, look up Automatic1111 installation instructions for Stable Diffusion. If you have a decent GPU (I was running it on a 1060, slowly, til I upgraded) it’s a simple enough process to get started, there’s tons of info online about it, and it all runs on local hardware.

          • CeeBee@lemmy.world

            I don’t know of an LLM that works decently on personal hardware

            Ollama with ollama-webui. Models like solar-10.7b and mistral-7b work nicely on local hardware. Solar 10.7B should run well on a card with 8 GB of VRAM.
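
            For reference, Ollama serves a local HTTP API on port 11434 by default, so once you’ve pulled a model (e.g. ollama pull solar) you can script against it. A minimal sketch, assuming default settings and that the model tag matches what you pulled:

            ```python
            # Query a locally running Ollama server; stdlib only, no API keys.
            import json
            import urllib.request

            def ask(model: str, prompt: str) -> str:
                req = urllib.request.Request(
                    "http://localhost:11434/api/generate",
                    data=json.dumps({"model": model, "prompt": prompt,
                                     "stream": False}).encode(),
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    return json.loads(resp.read())["response"]

            print(ask("solar", "Explain what VRAM is in one sentence."))
            ```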

      • anti-idpol action@programming.dev

        Yeah, if you’re willing to carry a brick, or at least a power bank (brick) if you don’t want it to constantly overheat or deal with 2-3 hours of battery life. There’s only so much copper can take, and there are limits to miniaturization.

        • arthurpizza@lemmy.world

          It’s not like that though. Newer phones are going to have dedicated hardware for processing neural networks, LLMs, and other generative tools. The dedicated hardware will make these processes just barely sip the battery.

          • MenacingPerson@lemm.ee

            wrong.

            if that existed, all those AI server farms wouldn’t be so necessary, would they?

            dedicated hardware for that already exists, but it definitely isn’t gonna be able to fit a sizeable model on a phone any time soon. models themselves require multiple tens of gigabytes of storage space. you won’t be able to fit more than a handful on even 512gb of internal storage. phones can’t hit the ram required for these models at all. and the dedicated hardware still requires a lot more power than a tiny phone battery can deliver.

            • arthurpizza@lemmy.world

              Those server farms exist because the needs of corporations are just different from the needs of regular users.

              I’m running an 8 GB LLM locally on my PC that performs better than 16 GB models from just a few months ago.

              It’s almost as if technology can get better and more efficient over time.

    • aubertlone@lemmy.world

      Hey me too.

      And I do have a couple different LLMs installed on my rig. But having that resource running locally is years and years away from being remotely performant.

      On the bright side there are many open source LLMs, and it seems like there are more every day.

  • BoastfulDaedra@lemmynsfw.com

    We really need to stop calling things “AI” like it’s an algorithm. There’s image recognition, collective intelligence, neural networks, path finding, and pattern recognition, sure, and they’ve all been called AI, but functionally they have almost nothing to do with each other.

    For computer scientists this year has been a sonofabitch to communicate through.

    • CeeBee@lemmy.world

      But “AI” is the umbrella term for all of them. What you said is the equivalent of saying:

      we really need to stop calling things “vehicles”. There’s cars, trucks, airplanes, submarines, and space shuttles and they’ve all been called vehicles, but functionally they have almost nothing to do with each other

      All of the things you’ve mentioned are correctly referred to as AI, and since most people do not understand the nuances of neural networks vs hard-coded algorithms (and anything in between), AI is an acceptable term for something that demonstrates results that come about from a computer “thinking” and making seemingly intelligent decisions.

      Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.

      Edit: fixed an autocorrect typo

      • MotoAsh@lemmy.world

        No. No AI is NOT the umbrella term for all of them.

        No computer scientist will ever genuinely call basic algorithmic tasks “AI”. Stop saying things you literally do not know.

        We are not talking about what the word means to normies colloquially. We’re talking about what it actually means. The entire point is that it’s a separate term from those other things.

        Engineers would REALLY appreciate it if marketing morons would stop misapplying terminology just to make something sound cooler… NONE of those things are “AI”. That’s the fucking point. Marketing gimmicks should not get to choose our terms. (as much as they still do)

        If I pull up to your house on a bicycle and tell you, “quickly, get in my vehicle so I can drive us to the store,” you SHOULD look at me weirdly: I’m treating a bicycle like it’s a car capable of getting on the freeway with passengers.

        • no banana@lemmy.world

          What I’ve learned as a huge nerd is that people will take a term and use it as an umbrella term for shit, and they’re always incorrect, but there’s never any point in correcting the use, because that’s the way the collective has decided words work, and that’s how they will work.

          Now the collective has decided that AI is an umbrella term for executing “more complex tasks” which we cannot understand the technical workings of but need to get done.

          • MotoAsh@lemmy.world

            Sometimes, but there are many cases where the nerds win. Like with technology. How many times do we hear old people misuse terms because they don’t care about the difference just for some young person to laugh and make fun of their lack of perspective?

            I’ve seen it quite a lot, and I have full confidence it will happen here, so long as an actual generalized intelligence comes along to show everyone the HUGE difference every nerd talks about.

            • lad@programming.dev

              But it will be called something different, so almost nobody will notice that they should now see the difference.

          • Ookami38@sh.itjust.works

            This is in fact how common language works, and also how jargon develops. No one in this thread outside of the specific people pointing out the problem cares what it is beyond the colloquial use. Keep jargon to the in-group, or you’ll just alienate the out-group and your entire point will be missed.

        • misophist@lemmy.world

          No computer scientist will ever genuinely call basic algorithmic tasks “AI”. Stop saying things you literally do not know.

          Speak for yourself. Many of us fought that battle literally years ago and then accepted reality and moved on with our lives. Show me an actual computer scientist still hung up on this little bit of not-so-new-anymore language and I’ll show you a dying curmudgeon who has let the world pass them by. We frequently use AI to refer to these technologies that we have today and we’ve started to use more descriptive language such as post-singularity AI or Artificial General Intelligence (AGI).

          • JDubbleu@programming.dev

            Hard agree. I don’t consider myself a “computer scientist”, but I do have a CS degree. The public use of AI is so far gone it’s just what it is now. I still wouldn’t consider pathfinding AI, but when you say an AI image creator or an AI chatbot, it gets the point across well enough, since we all know what is meant.

        • yokonzo@lemmy.world

          Calm down, language is fluid. You may not like it, but if enough people start using it as an umbrella term, that is what it’s going to mean, colloquially and eventually officially. You can’t expect to have such hard-set rules this early on in the technology; it’s foolish.

          • MotoAsh@lemmy.world

            OP was not only speaking about “AI”. You are strawmanning what I said in order to be correct.

        • TheBlackLounge@lemmy.world

          To be fair, AI was coined to mean programs written in LISP and it changes every time new techniques are developed. It’s definitely just a marketing term, but for grant money.

        • Confused_Emus@lemmy.world

          Yeah, and these days “literally” means “figuratively” whether I like it or not. Find a different hill to die on.

          • MotoAsh@lemmy.world

            That is completely and utterly a separate issue. These aren’t words being changed over time. These are words being directly misapplied by morons, NOT for any ironic effect.

            People mean it when they “misuse” literally. People who misuse AI don’t know what AI is. This is a technical term being misused. Not a normal word being redefined.

            For a different word, “narcissism” DOES NOT magically mean, “a mean person” just because morons misuse a technical term. Stop being a piece of shit that wants to sound smart and start using terms correctly or not at all.

            • Confused_Emus@lemmy.world

              I just think you seem way more angry about this than you should be. It’s not really something to pop a blood vessel over. And “literally” becoming “figuratively” was also because of the misuse by morons. The fact one is a technical term is irrelevant. The not-as-educated masses water down language, particularly “technical” language because of course the general public aren’t going to know the nuances. But it’s not like most of them are talking about any of this stuff on a level where those nuances matter. Referring to the general field of “computers kinda thinking like people” as AI gets the point across for them, and it’s not hurting you. So chill.

            • Ookami38@sh.itjust.works

              You’re describing jargon vs colloquial use. If you’re talking to your engineer buddy, yeah, use the terms properly or be prepared to offer clarification, because in that context it’s pretty important. If you’re talking to a psychiatrist, same thing for narcissist.

              When it comes to general public, people will use words however they’re most commonly used, and that’s perfectly fine. The average person doesn’t know, can’t be expected to learn, and has no use for knowing the definition of jargon that’s outside of their day-to-day use.

              If the use of colloquial terms is causing you or someone else confusion or losing clarity, ask clarifying questions. That’s how professionals, or just people more knowledgeable on a subject in general, have had to interface with the average person for… well, how long have we had language?

        • Klear@sh.itjust.works

          We are not talking about what the word means to normies colloquially. We’re talking about what it actually means.

          They are the same picture.

        • Pat_Riot@lemmy.today

          That’s just like your opinion, man - The Dude

          You’re smart, right? So who are there more of, normies or computer scientists? Just make the tech, if that really is what you do, but marketing and the masses are always going to decide what we call stuff, not some cartoonish engineer.

          • MotoAsh@lemmy.world

            Uhhhh who do you think defines these specialized words in the first place? Not normies. That is the dumbest shit I’ve ever heard, thanks for the laugh.

            • Pat_Riot@lemmy.today

              I’m pleased you are entertained. But being correct doesn’t make you any less wrong. I’m sorry that you don’t understand how language works. Now go build us some more toys.

            • Ookami38@sh.itjust.works

              In general conversation? The people there are more of define it lol. In a professional setting, yeah, be specific, but… wait, where are we? Oh, the fucking comments section on Lemmy. Pretty much the exact opposite of a professional setting. Use words how the people around you are using them, or you’ll be misunderstood and have to explain yourself.

        • Ookami38@sh.itjust.works

          You’re talking in a forum to a bunch of normies using words colloquially, or to a bunch of media buffoons who report to normies who are familiar with colloquial terms. I get your point if you’re talking to engineers, but you’re not.

      • benignintervention@lemmy.world

        While this is true, I think of AI in the sci-fi sense of a programmed machine intelligence rivaling human problem solving, communication, and opinion forming. Everything else to me is ML.

        But like Turing thought, how can we really tell the difference?

        • MotoAsh@lemmy.world

          Turing’s question wasn’t a philosophical one. It was a literal one, that he tried to answer.

          What the person said is NOT true. Nobody like Turing would EVER call those things AI, because they are very specifically NOT any form of “intelligence”. Fooling a layman into mislabeling something is not the same as developing the actual thing that’d pass a Turing test.

        • Deuces@lemmy.world

          As far as taking sci-fi terms for real things goes, at least this one is somewhat close. I’m still pissed about hoverboards. And androids are right out!

        • CeeBee@lemmy.world

          What you’re referring to in movies is properly known as Artificial General Intelligence (AGI).

          AI is correctly applied to systems that process in a “biologically similar” fashion. Basically something a human or “smart” animal could do. Things like object detection, natural language processing, facial recognition, etc, are things you can’t program (there’s more to facial recognition, but I’m simplifying for this discussion) and they need to be trained via a neural network. And that process is remarkably similar to how biological systems learn and work.

          Machine learning, on the other hand, covers processes that are intelligent but not intrinsically “human”. A good example is song recommendations. The more often you listen to a genre of music, the more likely you are to enjoy other songs in that genre. So a system can count the number of songs you listen to the most in a specific genre, and then recommend that genre more than others. Fairly straightforward to program and doesn’t require any training, yet it gets better the more you use it.

          • schmidtster@lemmy.world

            How is my comment “gatekeeping” by any stretch of the definition? I had one comment, and the one making jest that you replied to. I only asked that there should be a catch-all term and provided examples to go with it. How is that gatekeeping…?

            It’s more to do with social media tropes. Someone sees a downvoted comment and does the same without even reading.

            Edit: more proof it has nothing to do with the stuff you claimed; it’s now upvoted, since the initial wave of people has stopped, and the people who care to read the meat of the comments now have, and balance has been established.

        • CeeBee@lemmy.world

          I didn’t steal anything. When I posted my comment there were only two other comments in the whole thread.

          • schmidtster@lemmy.world

            I know, it’s just funny how two very similar thoughts can be taken two different ways depending on who sees them first and interacts with them.

      • Sterile_Technique@lemmy.world

        You’re right, but so is the previous poster. Actual AI doesn’t exist yet, and when/if it does it’s going to confuse the hell out of people who don’t get the hype over something we’ve had for years.

        But calling things like machine learning algorithms “AI” definitely isn’t going away… we’ll probably just end up making a new term for it when it actually becomes a thing… “Digital Intelligence” or something. /shrug.

        • tegs_terry@feddit.uk

          It isn’t human-level, but you could argue it’s still intelligence of a sort, just ersatz.

          • OpenStars@kbin.social

            I dunno… I’ve heard that argument, but when something gives you >1000 answers, among which the correct answer might be buried somewhere, and a human is paid to dig through it and return something that looks vaguely presentable, is that really “intelligence”, of any sort?

            Aka, 1 + 1 = 13, which is a real result that AI can and almost certainly has recently offer(ed).

            People are right to be excited about the potential that generative AI offers in the future, but we are far from that atm. Also it is vulnerable to misinformation presented in the training data - though some say that that process might even affect humans too (I know, you are shocked, right? well, hopefully not that shocked:-P).

            Oh wait, nevermind, I take it all back: I forgot that Steven Huffman / Elon Musk / etc. exist, and if that is considered intelligence, then AI has definitely passed that level of Turing equivalence, so you’re absolutely right, ersatz it is, apparently!?

            • tegs_terry@feddit.uk

              What’s the human digging through answers thing? I haven’t heard anything about that.

              • OpenStars@kbin.social

                ChatGPT was caught, and I think later admitted, to not actually be using fully automated processes to determine those answers, iirc. Instead, a real human would curate the answers first before they went out. That human might reject answers to a question like “Computer: what is 1+1?” ten times before finally accepting one of the given answers (“you’re mother”, hehe, with improper apostrophe intact :-P). So really, when you were asking for an “AI answer”, you were really asking another human on the other end of that conversation!!!

                Then again, I think that was a feature for an earlier version of the program that might no longer be necessary? On the other hand, if they SAY that they aren’t using human curation, but that is also what they said earlier, before they admitted that they had lied, do we really believe it? Watch any video of these “tech bros” and it’s obvious in less than a minute: these people are slimy.

                And to some extent it doesn’t matter, bc you can download some open source AI programs and run them yourself, but in general, from what I understand, when people say things nowadays like “this was made by an AI”, it seems like it is always a hand-picked item from among the set of answers returned. So like, “oooh” and “aaaahhhhh” and all that, that such a thing could come from AI, but it’s not quite the same thing as simply asking a computer for an answer and it returning the correct answer right away! “1+1=?” giving the correct answer of 13 is MUCH less impressive when you find out that across a thousand attempts at asking, it was only returned a couple of times. And the situation gets even worse(-r) when you find out that ChatGPT has been getting stupider(-est?) for a while now - https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/.

                • tegs_terry@feddit.uk

                  There’s no way that’s the case now; the answers are generated way too quickly for a human to formulate. I can certainly believe it did happen at one point.

                • Ookami38@sh.itjust.works

                  So, reading through your post and the article, I think you’re a bit confused about the “curated response” thing. I believe what they’re referring to is the user’s ability to give answers a “good answer” or “bad answer” flag that would then later be used for retraining. This could also explain the AI’s drop in quality, if enough people are upvoting bad answers or downvoting good ones.

                  The article also describes “commanders” reviewing responses and having the code team be responsive in changing the algorithm. Again, this isn’t picking responses for the AI. Instead, it’s reviewing responses it has given, deciding if they’re good or bad, and making changes to the algorithm to get more accurate answers in the future.

                  I have not heard anything like what you’re describing, with real people generating the responses in real time for GPT users. I’m open to being wrong, though, if you have another article.

        • lad@programming.dev

          This problem was kinda solved by adding the term AGI, meaning “AI, but not what we now call AI; what we imagined AI to be”.

          Not going to say that this helps with the confusion much 😅 and to be fair, stuff like autocomplete in office software was called AI a long time ago, but it was far from the LLMs of now.

        • Klear@sh.itjust.works

          Enemies in Doom have AI. We’ve been calling simple algorithms in a handful of lines of code AI for a long time; the trend has nothing to do with language models etc.

      • BoastfulDaedra@lemmynsfw.com

        I’m not fighting, I’m just disgusted. As someone’s wise grandma once said, “[BoastfulDaedra], you are not the fuckface whisperer.”

    • OpenStars@kbin.social

      AI = “magic”, or like “synergy” and other buzzwords that will soon become bereft of all meaning as a result of people abusing them.

    • d20bard@ttrpg.network

      Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that’ll require some hefty neural net.

      Edit: seeing these downvotes, I’m surprised at the tech illiteracy on Lemmy. I thought this was a better-informed community. Look for computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn’t AI.

      • BoastfulDaedra@lemmynsfw.com

        You’re very wrong.

        Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.

        A neural net is a tool used to compare an input sequence to previously reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, to computer vision. It usually actually isn’t to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.

        • danielbln@lemmy.world

          Computer vision deals with how computers can gain high-level understanding of images and videos. It involves much more than just object reconstruction. And more importantly, neural networks are a core component in just about any computer vision application since deep learning took off in the 2010s. Most computer vision is powered by some convolutional neural network or another.

          Your comment contains several misconceptions and overlooks the critical role of neural networks, particularly CNNs, which are fundamental to most contemporary computer vision applications.
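
          To make that concrete, a minimal sketch of a CNN doing the classification step in a vision pipeline, using torchvision’s pretrained ResNet-50 (the file name is a placeholder):

          ```python
          # Pretrained convolutional network classifying an image.
          import torch
          from PIL import Image
          from torchvision.models import resnet50, ResNet50_Weights

          weights = ResNet50_Weights.DEFAULT
          model = resnet50(weights=weights).eval()
          preprocess = weights.transforms()  # resize/crop/normalize for this model

          img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add batch dim
          with torch.no_grad():
              probs = model(img).softmax(dim=1)
          top = probs.argmax(dim=1).item()
          print(weights.meta["categories"][top], round(probs[0, top].item(), 3))
          ```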

          • d20bard@ttrpg.network

            Thanks, you saved me the trouble of writing out a rant. I wonder if the other guy is actually a computer scientist or just a programmer who got a CS degree. Imagine attending a CV track at AAAI, or the whole of CVPR, and then saying CV isn’t a subfield of AI.

    • CobblerScholar@lemmy.world

      There are whole countries that refer to the entire internet itself as Facebook; once something takes root, it ain’t going anywhere.

    • schmidtster@lemmy.world

      Shouldn’t there be a catch-all term to explain the broader scope of the specifics?

      Science is a broad term for multiple different studies; vehicle is a broad term for cars and trucks.

          • lad@programming.dev

            Well, there’s an argument over not calling machine learning AI in this very thread, so… ¯\_(ツ)_/¯

            • schmidtster@lemmy.world

              So why suggest it as the catch-all term for AI when it’s only one portion of the argument itself? Such a strange suggestion.

          • ParetoOptimalDev@lemmy.today

            Yesterday I prompted GPT-4 to convert a PowerShell script to Haskell. It did it in one shot. This happens more and more frequently for me.

            I don’t want to oversell LLMs, but you are definitely underselling them.

          • MotoAsh@lemmy.world

            Computer science involves all the sciences of computing. It has materials science, logic, maths galore, just to do basic circuitry and chip design. It spirals on and on and on to get to a real computer.

            The point is it is a culmination of MANY different disciplines, and the people who think it’s only “this” or “that” are demonstrating their great lack of knowledge on the subject.

            It is a generic term because it takes MANY different things to complete the picture. Pigeonholing things when you do not even understand them is only an exercise in ignorant stereotyping.

    • danielbln@lemmy.world

      Language is fluid, and there is plenty of terminology that is dumb or imprecise to someone in the field but A-ok to the wider populace. “Cloud” is also not actually a formation of water droplets but someone else’s datacenter, yet to some people the cloud is everything from Gmail to AWS.

      If I say AI today and most people associate the same thing with it (these days that usually means generative AI , i.e. mostly diffusion or transformer models) then that’s fine for me. Call it Plumbus for all I care.

    • DarkNightoftheSoul@mander.xyz

      Those are all very specific intelligences. The goal is to unite them all under a so-called general intelligence. You’re right, that’s the dream, but there are many steps along the way that are fairly called intelligence.

    • DudeBro@lemm.ee

      I imagine it’s because all of these technologies combine to make a sci-fi-esque computer assistant that talks to you, and most pop culture depictions of AI are just computer assistants that talk to you. The language already existed before the technology, it already took root before we got the chance to call it anything else.

  • MiDaBa@lemmy.ml

    The bad news is the AI they’ll pay for will instead estimate your net worth and the highest price you’re likely to pay. They’ll then dynamically change the price of things like groceries to make sure the price they’re charging maximizes their profits on any given day. That’s the AI you’re going to get.

  • BoastfulDaedra@lemmynsfw.com

    Next, she’s going to want a Libre AI that does not share her information with third parties or suggest unnecessary changes to make her spend more at sponsored businesses.

  • theneverfox@pawb.social

    AI could do this. Conventional programming could do it faster and better, even if it was written by AI.

    It’s an important concept to grasp

    • theblueredditrefugee@lemmy.dbzer0.com

      Cameras in your fridge and pantry to keep tabs on what you have, computer vision to take inventory, clustering to figure out which goods can be interchanged with which, language modeling applied to a web crawler to identify the best deals, and then some conventional code to aggregate the results into a shopping list

      Unless you’re assuming that you’re gonna be supplied APIs to all the grocery stores, which have an incentive to prevent this sort of thing from happening, and also assuming that the end user is willing, able, and reliable enough to scan every barcode of everything they buy.

      This app basically depends on all the best AI we already have, except for image generation.
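
      To illustrate, the final “conventional code” step might look something like this (all data shapes here are hypothetical):

      ```python
      # Aggregate inventory, desired stock levels, and per-store prices into a
      # shopping list that names the cheapest source for each item.
      inventory = {"milk": 0.25, "eggs": 2, "butter": 1}   # units on hand
      par_levels = {"milk": 1, "eggs": 12, "butter": 1}    # desired stock
      prices = {                                           # store -> item -> price
          "StoreA": {"milk": 3.49, "eggs": 2.99},
          "StoreB": {"milk": 3.29, "eggs": 3.19, "butter": 4.99},
      }

      shopping_list = []
      for item, want in par_levels.items():
          need = want - inventory.get(item, 0)
          if need <= 0:
              continue  # already stocked
          offers = [(p[item], store) for store, p in prices.items() if item in p]
          if offers:
              price, store = min(offers)
              shopping_list.append((item, need, store, price))

      for item, qty, store, price in shopping_list:
          print(f"buy {qty:g} x {item} at {store} (${price:.2f})")
      ```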

      • lightnsfw@reddthat.com

        Cameras and computer vision aren’t necessary. Food products already come with UPCs. All you need is a barcode reader to input stuff and to track what you use in meals. Tracking what you use could also feed into meal planning.
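
        A minimal sketch of that scan-in/scan-out idea (the UPCs and product names here are made up; a real app would resolve codes against a product database):

        ```python
        # Pantry ledger keyed by barcode: scan items in, deduct them as used.
        from collections import defaultdict

        products = {"000001": "butter 1lb", "000002": "flour 5lb"}  # fake UPCs
        pantry = defaultdict(int)

        def scan_in(upc: str, qty: int = 1) -> None:
            pantry[upc] += qty

        def use(upc: str, qty: int = 1) -> None:
            pantry[upc] = max(0, pantry[upc] - qty)

        scan_in("000001", 2)   # two packs of butter come home
        use("000001")          # one pack goes into tonight's meal

        restock = [products[u] for u, n in pantry.items() if n == 0]
        print("restock:", restock)
        ```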

        • theblueredditrefugee@lemmy.dbzer0.com

          Yeah, I did think of the barcode approach, but I didn’t think anyone would be willing to scan every item, which is why I ignored it

          However, revisiting this question made me realize that we could probably have the user scan receipts. It would take some doing but you could probably extract all the information from the receipt because it’s in a fairly predictable format, and it’s far less onerous.

          OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that

          The tough part, then, is scouring the internet for deals. Should be doable though.

          Might try to slap something together tonight or tomorrow for that first bit; seems pretty easy. I bet there are open source libraries for handling barcodes, and scanning receipts can probably just be done with existing OCR tech, error correction using minimum edit distance, and a few if statements to figure out which is the quantity and which is the item. That is, if my ADHD doesn’t cause me to forget.
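
          For the receipt bit, a minimal sketch of the matching step (difflib’s similarity ratio standing in for a true minimum-edit-distance implementation; the OCR lines and product list are made up):

          ```python
          # Match noisy OCR'd receipt lines against a known product list.
          import difflib
          import re

          known_products = ["whole milk", "large eggs", "salted butter"]
          ocr_lines = ["WH0LE M1LK 2 3.49", "LRG EGGS 1 2.99", "SLTD BUTTR 1 4.29"]

          for line in ocr_lines:
              m = re.match(r"(.+?)\s+(\d+)\s+([\d.]+)$", line)  # name, qty, price
              if not m:
                  continue
              name, qty, price = m.groups()
              match = difflib.get_close_matches(name.lower(), known_products,
                                                n=1, cutoff=0.4)
              if match:
                  print(f"{match[0]}: qty {qty}, ${price}")
          ```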

          • lightnsfw@reddthat.com

            OTOH, you still have to scan barcodes every time you cook with something, and you’d probably want some sort of mechanism to track partial consumption and leftovers, though a minimum viable product could work without that

            If you can also keep recipes in the system you could skip scanning the barcodes here. You’d just need to input how many servings you prepared and any waste. Even if the “recipe” is just “hot pocket” or something. If the system knows how much is in a package it can deduct what you use from the total and add it to the list when you need more.

        • intensely_human@lemm.ee

          Tracking what you use would be a lot easier with AI. Then you wouldn’t have to keep a barcode scanner in the kitchen. You could just have a camera pointed at your food prep space

          • lightnsfw@reddthat.com

            Is AI good enough to manage that with just a camera? How would it determine how much of a given product you used? Like, if you dump a cup of flour in a bowl, how does it know how much that was?

            If you have to hold the product in front of the camera to register it anyway, you might as well use a barcode reader, because it’s the same thing at that point, just without the risk of the AI misidentifying something.

      • Vegaprime@lemmy.world

        Rolling this out for tools and parts at my work. Toolboxes with cameras in the drawers to make sure you put tools back. Vending machines for parts with auto-ordering.

      • Komatik@lemmy.world

        I think you can achieve a similar result by having one giant DB so we can average out general consumption, and then a personal/family profile, where at first we manually feed the AI data like what we bought, expiration dates, and when we partly or fully consumed items. Although input-intensive at first, I think the AI will become increasingly accurate, whereby you will need to input less and less data, as the data will be coming from both you and the rest of the users. The only thing that still needs to be input is “did you replace it?”

        This way we don’t need cameras

        • theblueredditrefugee@lemmy.dbzer0.com
          11 months ago

          Oh, so you’re saying that the only data the algorithm needs in the limit is whether or not the user deviated from the generated shopping list, and if so, how, right?

          This is true, it’s just a bit difficult to cross the gap from here to there

          • Komatik@lemmy.world

            Sure, no problem, I just need you to punch in some data manually so we can get started. Can you get this stack done by tomorrow? Awesome, see you tomorrow!

  • Professorozone@lemmy.world

    I want AI to control traffic lights so that I don’t sit stopped through an entire cycle as the only car in a 1 mile radius. Also, if there is just one more car in line, let the light stay green just a couple seconds longer. Imagine the gas and time that could be saved… and frustration.

          • IndefiniteBen@leminal.space

            How do I make it sound like that? You first need to build traffic light and road infrastructure that can handle advanced traffic flow, along with the processing power to make decisions based on sensor readings and rules.

            The software (AI is kinda overkill) exists to handle and optimise traffic flow over an entire city, but your software does not matter if there are insufficient sensors for the software to make decisions, or too few controllable lights to implement decisions (or both).

          • NaoPb@eviltoast.org

            What they’re saying is if money was adequately invested in infrastructure, these old systems would have been upgraded 10 or 20 years ago and AI would not be necessary at all.

      • LemmyKnowsBest@lemmy.world

        Thank goodness. Until every intersection becomes this intuitive, I will only continually notice the ones that hold me hostage through several cycles and/or don’t even notice I’m there, waiting at a red light for 5 minutes at 3 am when I’m the only car there.

    • Grass@sh.itjust.works

      Doesn’t need AI, and there are countries that already have a system in place with the same result. Unsurprisingly the places with more focus on pedestrian, cyclist, and public transit infrastructure have the most enjoyable driving experience. All the people that don’t want to drive will stop as soon as it is safe and convenient, and all those cars off the road also help with this because the lights will be queued up with fewer cars.

      • Professorozone@lemmy.world

        Unfortunately, the US is king of the suburbs and I don’t see that changing any time soon.

        I know you don’t need AI to do this but I think AI would do a great job if properly employed.

        • Wogi@lemmy.world

          It’s not that I can’t fathom how it could be better, I literally wish I could get rid of my car.

          I literally can’t. I live far enough away from my job that not having a car to get there every day isn’t an option, I can’t move close enough to my job to eliminate a car, and even if I did, I’d only be making the drive further for my wife. We don’t live within walking distance of a grocery store. I genuinely need a car. My wife needs one too. I don’t live in a city with even shitty options for public transit. It’s just not an option. My wife doesn’t work in the same city we live in, there is no bus, and the nearest bus stop to my job is a 45 minute walk away, and a 2 hour bus trip.

          It’s a 10 minute drive for both of us.

          If I could sell my car I fucking would. I love my car, but I’d give it up in a heartbeat if it were an option. I just don’t have the option. This is without children. When a child is thrown into the mix, we will only depend on having two cars more.

          Our mothers are aging; they live here and don’t have other support. She has licenses that lock her into this state. We aren’t moving, and this city is a car city.

      • Professorozone@lemmy.world

        That would be great, but it’s just not practical in many places.

        I looked up how to get to work using public transportation once. It was 3 hours using 3 buses and a half-hour walk. LOL. I could literally do it in two hours using a bike. But I’m just not willing to spend 4 hours a day getting to work and back, and I don’t know many that would if they had a choice. It’s a half hour drive for me, but 22 miles, mostly interstate.

    • NaoPb@eviltoast.org

      To be fair, there are already more intelligent traffic light systems that use sensors in the road to see if there is traffic sitting at the lights, combined with push buttons for pedestrians and cyclists. These can be combined with sensors further up the road to see if more traffic is coming and extend the periods of green light for certain sides. It may not be perfect, and it may require touching up later after seeing which times could be extended or shortened. It’s not AI, but it works a lot better than the old hard-coded traffic lights. Here in the Netherlands there are only a handful of intersections left that still have hard-coded traffic lights.
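
      The core of that vehicle-actuated (“gap-out”) logic is simple enough to sketch; the timings here are illustrative, not from any real controller:

      ```python
      # Green is extended each time the loop detector sees another car, up to
      # a hard maximum so cross traffic is never starved.
      MIN_GREEN = 5.0    # seconds of green guaranteed per cycle
      EXTENSION = 2.0    # extra green granted per detected vehicle
      MAX_GREEN = 30.0   # absolute cap

      def green_duration(arrival_times: list[float]) -> float:
          """Detector hits (seconds since green began) -> total green time."""
          end = MIN_GREEN
          for t in sorted(arrival_times):
              if t <= end:  # car arrived before the green would have ended
                  end = min(t + EXTENSION, MAX_GREEN)
          return end

      print(green_duration([]))               # lone car: minimum green only
      print(green_duration([4.0, 5.5, 7.0]))  # a platoon keeps extending it
      ```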

      • Professorozone@lemmy.world

        Not near me. I can’t speak to the entire US, but everywhere I’ve been, it’s horrible. In Germany they have a green wave, where all of the lights are green if you go the speed limit. I have only encountered this twice within 200 miles of where I live.

    • LemmyKnowsBest@lemmy.world

      You and Sarah Radz and everyone else here with brilliant practical ideas need to submit your resumes to the Silicon-Valley-esque corporations that commandeer such industries and be hired on as brains.

      • Professorozone@lemmy.world

        Thank you for the kindness. At least I think it’s kind. I don’t know who Sarah Radz is. So I choose to accept this as a compliment.

  • AeonFelis@lemmy.world

    I’m sure there are companies who’d love to develop something like this. And collect that information about exactly what groceries you currently have and statistics of how you consume them, so they can sell it to advertisers. Not advertisers that sell these groceries, of course - for these the AI company could just make the AI buy them from suppliers that pay them.

    • kromem@lemmy.world

      They already exist and have been doing this for a long time, they are just using dumber versions of deep learning than what we have right now.

      Less about giving your personal information to an advertiser though and more about using aggregate data trends to guide marketing efforts.

      Like, if you know buns and hot dogs sell like crazy the week before July 4th, you merchandise bundles of both that override brand purchase intent in favor of convenience and discount.

      An example of this kind of market research in action would be a clothing store that knows 20% of its sales were to people who had shopped the day before, so it offers 48-hour exit coupons valid the next day for a limited time.

      The personalized data is used in-house at these aggregators to market to you directly, such as the War and Peace-length personalized coupons on receipts where they’ve been contracted by the retailers.

    • danciestlobster@lemmy.world

      Not just advertisers; it would also get sold to food manufacturers and product developers. This is not so bad, though, ’cause it helps new products come out that might be in line with what you want.

    • TheSanSabaSongbird@lemdro.id

      This is what “loyalty” cards are for. They give you a little discount in exchange for being able to track your purchases.

  • eclectic_electron@sh.itjust.works

    This is a surprisingly difficult problem, because different people are okay with different brand substitutions. Some people may want the cheapest butter regardless of brand, while others may only buy brand-name.

    For example, my wife is okay with generic Chex from some grocery stores but not others, but only likes brand-name Cheerios. Walmart, Aldi, and Meijer generic cheese is interchangeable, but brand-name and Kroger-brand cheese isn’t acceptable.

    Making a software system that can deal with all this is really hard. AI is probably the best bet, but it needs to handle all this complexity to be usable, which is a lot of up-front work.

    • kromem@lemmy.world

      As long as the AI has access to their ongoing purchase histories it’s actually quite easy to have this for day to day situations.

      Where it would have difficulty is unexpected spikes in grocery usage, such as hosting a non-annual party.

      In theory, as long as it was fine tuned on aggregate histories it should be decent at identifying spikes (i.e. this person purchased 10x the normal amount of perishables this week, that typically is an outlier and they’ll be back to 1x next week), but anticipating the spikes ahead of time is pretty much impossible.
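
      A toy version of that outlier check, using the shopper’s median week as the baseline (the numbers and threshold are illustrative):

      ```python
      # Flag a week whose spend is far above this shopper's typical week.
      from statistics import median

      def is_spike(history: list[float], this_week: float,
                   factor: float = 3.0) -> bool:
          typical = median(history)
          return typical > 0 and this_week > factor * typical

      weekly_spend = [22.0, 18.0, 25.0, 20.0, 21.0]  # past weeks ($, perishables)
      print(is_spike(weekly_spend, 210.0))  # party week -> True, expect reversion
      print(is_spike(weekly_spend, 24.0))   # ordinary week -> False
      ```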

      • Seasoned_Greetings@lemm.ee

        Both of these problems could feasibly be solved by user input. If you had the ability to set rules for your personal experience, problems like that would only last as long as it takes the user to manually correct.

        Like, “Ai, I bought groceries for a party on March 5th. Don’t use that bill to predict what I need” or “stop recommending butter that isn’t this specific brand”

    • Prophet@lemmy.world

      Also quite difficult from a vision perspective. Tons of potential object classes, objects with no class (e.g., leftovers, homemade things), and potential occlusion if you are monitoring the refrigerator/cabinets. If the object is in a container, how do you measure the volume of the substance remaining? This is just scratching the surface, I imagine. These problems individually are maybe not crazy challenging, but they are quite hard all together.

      • kromem@lemmy.world

        You don’t use vision, or if you do, you’re only supplementing a model that mostly uses purchase histories as the guiding factor.

        • TheGreenGolem@lemmy.dbzer0.com

          But you actually need vision, because purchase history is not indicative of my future purchases. Sometimes I buy butter and eat it in 3 days and buy more. Sometimes I’m not in the mood and a chunk of butter sits in my fridge for 3 weeks. It’s honestly totally random for a lot of things. It depends only on my mood at the moment.

          • kromem@lemmy.world

            You’d be surprised at how many of those things you think are random would actually emerge as a pattern in long enough purchase history data.

            For example, it might be that there’s a seasonality to your being in the mood. Or other things you’d have bought a week before, etc.

            Over a decade ago a model looking only at purchase history for Target was able to tell a teenage girl was pregnant before her family knew just by things like switching from scented candles to unscented.

            There’s more modeled in that data than simply what’s on the receipt.

        • Prophet@lemmy.world

          I agree, in the context of the tweet, that purchase history is enough to build a working product that roughly meets user requirements (at least in terms of predicting consumed items). This assumes you can find enough purchase history for a given user. Even then, I have doubts about how robust such a strategy is. The sparsity in your dataset for certain items means you will either a.) be forced to remove those items from your prediction service or b.) frustrate your users with heavy prediction bias. Some items also simply won’t work in this system - maybe the user only eats hotdogs in the summer. Maybe they only buy eggs with brownie mix. There will be many dependencies you are required to model to get a system like this working, and I don’t believe there is any single model powerful enough to do this by itself. Directly quantifying the user’s pantry via vision seems easy in comparison.

      • Bread@sh.itjust.works

        There could be an easy party mode button in which it just ignores the usual and picks likely food options for a party.

      • eclectic_electron@sh.itjust.works

        Honestly I would be perfectly happy with the service like this, even if I had to manually input what groceries I need. It’s still an incredibly complex problem though. AI is probably better suited for it than anything else since you can have iterative conversations with latest generation AIs. That is, if I tell it I need cereal, it looks at my purchase history and guesses what type of cereal I want this week, and adds it to my list, I can then tell it no, actually I want shredded mini wheats.

        So it would probably have to be a combination of a very large database and information gathering system with a predictive engine and a large language model as the user interface.

    • jivemasta@reddthat.com

      I mean, when that xkcd was made, that was a hard task. Now identifying a bird in a picture can be done in real time on a Raspberry Pi as a weekend project.

      The problem in the OP isn’t really a limitation of AI; it’s coming up with an inventory management system that can detect low inventory without being obtrusive in a user’s life. The rest is just scraping local stores’ prices and compiling a list with some annealing algo that optimizes the price-to-stops ratio.
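
      A toy version of that annealing step, choosing which stores to visit so the total price plus a per-stop cost is minimized (the prices and stop penalty are made up):

      ```python
      # Simulated annealing over subsets of stores.
      import math
      import random

      prices = {  # store -> item -> price
          "A": {"milk": 3.49, "eggs": 2.99, "bread": 2.50},
          "B": {"milk": 3.19, "eggs": 3.29, "butter": 4.79},
          "C": {"butter": 4.49, "bread": 2.25},
      }
      items = ["milk", "eggs", "butter", "bread"]
      STOP_COST = 4.00  # what one extra store trip "costs" you

      def total_cost(stores: frozenset) -> float:
          cost = STOP_COST * len(stores)
          for item in items:
              offers = [prices[s][item] for s in stores if item in prices[s]]
              cost += min(offers) if offers else 100.0  # unbuyable: big penalty
          return cost

      random.seed(0)
      state = best = frozenset(prices)  # start by visiting every store
      for step in range(2000):
          temp = max(0.01, 1.0 - step / 2000)  # cool down over time
          flip = random.choice(list(prices))   # toggle one store in or out
          neighbor = state - {flip} if flip in state else state | {flip}
          delta = total_cost(neighbor) - total_cost(state)
          if delta < 0 or random.random() < math.exp(-delta / temp):
              state = neighbor
              if total_cost(state) < total_cost(best):
                  best = state
      print(sorted(best), round(total_cost(best), 2))
      ```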

        • intensely_human@lemm.ee

          I see this complaint everywhere. People mentioning business setting prices to maximize profits, like it’s a bad thing.

          In a market with competition, that maximum profit point is also the point of maximum value to the consumer. Because business is a negotiation, not a dictatorial relationship.

          I don’t understand why people don’t understand this.

      • IndefiniteBen@leminal.space

        I think you focused too much on the details…

        AI image manipulation is entirely based in a computer where an image is processed by an algorithm. Grocerybot involves many different systems and crosses the boundary between digital and physical. The intertwined nature of the complexity is what makes it (relatively) difficult to explain.

  • Baines@lemmy.world

    Google used to do this type of stuff; then you got SEO shit, and in the same way people would try to game this system and ruin it.

    • psmgx@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      11 months ago

      Aye, this be the problem. As long as there is a profit motive, the AI is going to steer you toward whatever makes them money, be it whoever works the SEO game or pays for API access.

      • intensely_human@lemm.ee
        link
        fedilink
        arrow-up
        1
        ·
        11 months ago

        That’s why you charge the end user of the AI: it keeps your own interests aligned with theirs.

        Just don’t make grocerybot a free service. Make it a paid service, and then it works.

  • Snapz@lemmy.world
    link
    fedilink
    arrow-up
    38
    ·
    11 months ago

    We were already robbed of the brief value stage of AI; it came out of the gate with a corporate handler and a ™.

    The internet had a stretch where it was just useful, available, and exciting. This never got one.

    • danielbln@lemmy.world
      link
      fedilink
      arrow-up
      14
      ·
      11 months ago

      Local models are a thing, and GPT is extremely useful in some cases, even with the corporate handholding. I find the whole space super exciting, personally.

      • Snapz@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        11 months ago

        The accessibility of local models is nowhere near what the early web was. We could ALL have a geocities website and our own goofy “corner of the internet” without the extra bullshit.

        • sighofannoyance@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          11 months ago

          Be the person that makes local AIs the new GeoCities. Make the tech accessible, bro! I will invest in your crowdfund

          after due diligence only… that is

    • Alexstarfire@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      11 months ago

      No. That’s just what they wanted you to believe. All they really did was find a way to separate people from more money.

      I found out two people in my family bought smart fridges, and both listed watching TV and listening to music among their reasons for purchase. Not the only reasons, mind you, but some of the first ones mentioned. I don’t get it.

  • kerrigan778@lemmy.world
    link
    fedilink
    arrow-up
    23
    ·
    11 months ago

    So many people in this thread saying “you can already do this if you just do all these extra steps” like avoiding extra work isn’t the whole point.

    • Billygoat@catata.fish
      link
      fedilink
      arrow-up
      9
      ·
      edit-2
      11 months ago

      Exactly. But also I’m blown away that most grocery stores don’t list inventory and prices on the website. I can only think this is because they don’t want to show prices in an attempt to get you to go to the store.

      • kerrigan778@lemmy.world
        link
        fedilink
        arrow-up
        8
        ·
        11 months ago

        They absolutely don’t want to make automatic comparison shopping that easy. The goal of every grocery store is to get you there with one or two specific good deals they advertise and then have you do the rest of your shopping there because nobody wants to go to a second store and MAYBE get a slightly better deal but also maybe get a worse deal.

      • fsxylo@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        ·
        11 months ago

        I mean… Yeah? Grocery stores want you in the store. If they didn’t, they’d be shipping-only warehouses.

        • Billygoat@catata.fish
          link
          fedilink
          arrow-up
          1
          ·
          11 months ago

          I just mean that they must have done some research that says it is more profitable to only list a few prices instead of everything.

      • vivadanang@lemm.ee
        link
        fedilink
        English
        arrow-up
        2
        ·
        11 months ago

        But also I’m blown away that most grocery stores don’t list inventory and prices on the website. I can only think this is because they don’t want to show prices in an attempt to get you to go to the store.

        yuuup.

  • fl42v@lemmy.ml
    link
    fedilink
    arrow-up
    20
    ·
    11 months ago

    It’s not exactly an AI task, I guess? Pretty much the only AI-related thing there is classifying items in OCR’ed receipts (technically, one could point OpenCV at whatever is in the fridge, but I suspect it won’t be reliable enough).
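
    For that one AI-ish piece, receipt OCR plus a crude classifier could be sketched like so (assumes the Tesseract binary with the pytesseract and Pillow packages; the keyword map is a toy stand-in for a trained classifier):

        from PIL import Image
        import pytesseract  # needs the tesseract binary installed

        # Toy keyword-to-category map; a real system would use a trained model here.
        CATEGORIES = {"milk": "dairy", "cheese": "dairy", "bread": "bakery"}

        def classify_receipt(path):
            text = pytesseract.image_to_string(Image.open(path))
            found = {}
            for line in text.lower().splitlines():
                for keyword, category in CATEGORIES.items():
                    if keyword in line:
                        found.setdefault(category, []).append(line.strip())
            return found

        # classify_receipt("receipt.jpg") -> {"dairy": ["milk 2% 3.49"], ...}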

    • Stamets@lemmy.worldOP
      link
      fedilink
      arrow-up
      30
      ·
      11 months ago

      Bruh. If AI is being taught to drive cars on the open road, then I feel like using cameras to detect what’s in your fridge is pathetically easy in comparison, and very much an AI task.

      • ricecake@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        11 months ago

        That’s how you get weird things like the AI determining that your favorite items are jam, baking soda and whatever you left at the back of your fridge to rot for six months.

        It is easy to detect what’s in your fridge. We have that today on some smart fridges.

        The problem to be solved though is

        • what’s in your fridge
        • what’s not in your fridge
        • what do you consume vs throw away
        • what do you buy
        • where do you shop
        • what prices are available
        • what’s the best way to minimize cost and store trips
        • what’s your metric for how to balance that

        Of those things, AI is really only helpful for determining the metric for how much money you need to save to add another grocery stop, and knowing that the orange blob is probably baking soda.

        Most of the rest of that is manual inputs or relatively basic but tedious programming, and those are the parts that would be the most annoying.
        I say this as a person who has repeatedly utterly failed to use https://grocy.info/ because actually recording what you eat vs throw away is painful.

        This isn’t a great AI problem not because AI can’t help, but because the tedious part isn’t the part it can help with right now.
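
        For what it’s worth, the arithmetic side of that store-trip metric is trivial; the hard part is the constants, which is the bit a model could learn. A sketch with placeholder numbers (all of them personal by definition):

            # Placeholder values; tuning them per user is the actual problem.
            def extra_stop_worth_it(savings, extra_minutes, extra_km,
                                    value_per_hour=20.0, cost_per_km=0.15):
                trip_cost = extra_minutes / 60 * value_per_hour + extra_km * cost_per_km
                return savings > trip_cost

            print(extra_stop_worth_it(savings=4.00, extra_minutes=15, extra_km=6))  # False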

        • decisivelyhoodnoises@sh.itjust.works
          link
          fedilink
          arrow-up
          1
          ·
          11 months ago

          Yeah, it’s not even remotely practical for someone to manually input that they ate 2 slices of cheese, 20 grams of butter, and 20 grams of jam every time they do so. And it’s not feasible for AI to see inside closed packages or jars to know how much has been eaten.

      • IronicDeadPan@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        11 months ago

        Probably would make sense to start with the receipts for what you purchase and aggregate lists from there (pantry, freezer, fridge, etc.).

      • fl42v@lemmy.ml
        link
        fedilink
        arrow-up
        4
        ·
        11 months ago

        Yeah, kinda. Except you’ll likely need a camera or two for each shelf of the fridge (assuming the layout remains unchanged), and you also have to make sure they don’t get covered with ice, spilled milk, or whatever, or blocked by a box of some stuff. Aaaalternatively, you install a receipt scanner and a touch screen that asks you what you took and updates an internal DB accordingly.
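
        That internal DB really is the simple part; a minimal SQLite sketch of the receipt-in, touch-screen-out flow (table and column names made up):

            import sqlite3

            db = sqlite3.connect("fridge.db")
            db.execute("CREATE TABLE IF NOT EXISTS inventory (item TEXT PRIMARY KEY, qty INTEGER)")

            def receipt_scanned(items):  # e.g. ["milk", "milk", "eggs"]
                for item in items:
                    db.execute("""INSERT INTO inventory VALUES (?, 1)
                                  ON CONFLICT(item) DO UPDATE SET qty = qty + 1""", (item,))
                db.commit()

            def touch_screen_took(item):  # the user taps whatever they removed
                db.execute("UPDATE inventory SET qty = MAX(qty - 1, 0) WHERE item = ?", (item,))
                db.commit()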

        • Stamets@lemmy.worldOP
          link
          fedilink
          arrow-up
          3
          ·
          11 months ago

          No, not even kinda. Fully. Amazon has stores you can walk in and take whatever you want off the shelf and leave. If you put it back somewhere else, even if not on the same shelf, it can still track that.

          A fridge is a joke.

          • Eccentric@sh.itjust.works
            link
            fedilink
            arrow-up
            6
            ·
            11 months ago

            I actually work in this field, and it’s a lot more complicated than it sounds. When you’re training AI to recognize products in a store, you have a set list of products it needs to be trained on. A person might go to many different stores, which increases the possible variation of products exponentially. Amazon’s model is also much more complex than just cameras, involving weight sensors in the shelving, pressure detection, and facial recognition. A store where everything is laid out in predictable, well-lit, organized rows is already a nightmare. A fridge, even if it’s way smaller, is way, way less predictable.

          • ImpossibilityBox@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            11 months ago

            A typical Amazon store that I’ve been to is around 12,000-16,000 square feet. A refrigerator is roughly 20-25 cubic feet.

            Miniaturization of any system is always going to be a massive hurdle.

            Amazon uses biometric recognition to determine whether a person has picked something up, plus RFID tags, weight sensors, cameras, laser gates, and probably some other things they aren’t telling us about.

            They also have a specific list of the items in the store and 3D models of where each item is. Nothing unexpected.

            For the fridge to work, it would need to know every product ever made and have accurate, reliable scans of each existing product. Sure, it might be able to find SOME of the same type of item, but it will only really work once it can find the EXACT item that I want, every time.

            Good luck finding my favorite brand of gochujang that can’t be purchased online and is only available from a shady mom-and-pop grocery in Asia town.

            LASTLY… what’s a camera going to do with this: [image]

      • CeeBee@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        11 months ago

        then I feel like cameras to detect what’s in your fridge is pathetically easy in comparison

        But you’re skipping over a huge amount of context that’s missing. It’s context we (as humans) take for granted. What’s the difference between a jar and a bottle? Is the cream cheese in a tub or in a little cardboard container? Then it would need to be able to see all items in a fridge, know the expiration dates for each thing, know what you want to get, how quickly something gets used, etc.

        Some of those things are more straightforward, and some of them need data well beyond “this container has milk”. The issue isn’t processing all the data, but acquiring it consistently and reliably. We humans are very chaotic with how we do stuff in the physical world. Even the most organized person would throw off an AI system every so often. It’s the reason self driving cars are not a reality yet and won’t be for a while.

      • Ephera@lemmy.ml
        link
        fedilink
        arrow-up
        4
        ·
        11 months ago

        The problem is that “AI” is a completely ill-defined term. The commenter above used the definition of it just being a more complex program and then they argued that you don’t need a more complex program. That’s as good of a definition as any other.

      • Programmer Belch@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        3
        ·
        11 months ago

        AI is just a program that learns from the information you provide it to predict the next element in a series.

        If you want a program to check what’s in your fridge, a simple spreadsheet updated whenever you empty a bag is just as good.

        A use for AI could be updating that spreadsheet from images of the inside of the fridge, but you would need cameras that can work inside fridges.
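
        The spreadsheet version really is that simple; a CSV-backed sketch (file name and columns arbitrary):

            import csv

            def empty_bag(item, path="fridge.csv"):
                rows = {}
                try:
                    with open(path, newline="") as f:
                        rows = {r["item"]: int(r["qty"]) for r in csv.DictReader(f)}
                except FileNotFoundError:
                    pass  # first run: start with an empty sheet
                rows[item] = max(rows.get(item, 0) - 1, 0)
                with open(path, "w", newline="") as f:
                    writer = csv.DictWriter(f, fieldnames=["item", "qty"])
                    writer.writeheader()
                    writer.writerows({"item": k, "qty": v} for k, v in rows.items())

            empty_bag("flour")  # flour's count drops by one, floored at zero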

      • fl42v@lemmy.ml
        link
        fedilink
        arrow-up
        3
        ·
        11 months ago

        By “AI tasks” I mean something where AI is actually useful, such as object/pattern recognition, object classification, making predictions based on past data, etc. Can one train an AI to predict they need to buy onions when they have fewer than X in their fridge? Yep. Can one do the same with an if statement and avoid running into issues when the ambient temperature on Mars rises? Also, yes.
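
        The onion example, written out, is all of a few lines (thresholds invented):

            pantry = {"onions": 2, "eggs": 12}
            reorder_at = {"onions": 3, "eggs": 6}

            shopping_list = [item for item, qty in pantry.items() if qty < reorder_at[item]]
            print(shopping_list)  # ['onions'] -- no AI required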

        • TrickDacy@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          edit-2
          11 months ago

          An AI task would be literally anything impossible or slow for a human to do that a computer could do instead (without having developers work for months on explicit instructions for how to do it). Kinda weird to see technology evolving like this and still set arbitrary defining parameters like that.

          • Ookami38@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            ·
            11 months ago

            I mean, think of it like physical tools. You can use a screwdriver like a hammer, but it’s slow, not what it was designed for, has a higher chance of injury, etc. But if it’s something better done with a hammer, well… that’s a hammer task, not a screwdriver task.

            “AI tasks” would then be things that aren’t as easily solved with other tools. You run into a lot of issues with the refrigerator and AI. You can’t easily just visually verify what things are. What if you don’t have standard packaging and are using, say, Tupperware? Or you have one jar with some milk and another with some cream? Those aren’t as simple as just having a camera look at them and figure it out.

            In this case, a simpler, manually managed DB (either typed or scanned, if packaging allows) would be much better for the refrigerator itemization. Then, for the “finding best prices” problem, there already exist some apps that do that, but I could see an AI implemented just for this step being beneficial, depending on how you’re finding sale info.

          • fl42v@lemmy.ml
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            11 months ago

            Hmm, guess I wasn’t clear. It’s not “arbitrary defining parameters”, but more of an “AI is a tool that better solves specific types of tasks” kind of thing. Can you replace an if statement with an AI? Yes, but that’s somewhat like hammering in a screw (that is to say, inefficient).

    • Trashcanman@lemmynsfw.com
      link
      fedilink
      English
      arrow-up
      10
      ·
      11 months ago

      I think it would be a perfect function for AI. It’s more than just determining what is or isn’t in the fridge. Compiling the grocery list and determining which store has the best price for the goods would be great, but the AI knowing your mode of transportation, as well as the weather and time of day, would also be critical for deciding whether it’s worth going to multiple stores.

    • CluckN@lemmy.world
      link
      fedilink
      arrow-up
      9
      ·
      11 months ago

      AI could potentially handle “write me a Python script that scrapes a website for grocery prices and compares them with another”, or something.
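
      Roughly what such a script might look like, assuming the requests and beautifulsoup4 packages (the URLs and CSS selectors are placeholders; every real store site would need its own):

          import requests
          from bs4 import BeautifulSoup

          STORES = {  # store -> (product page, CSS selector for the price); placeholders
              "store_a": ("https://example.com/a/milk", ".price"),
              "store_b": ("https://example.com/b/milk", ".price"),
          }

          def price_for(url, selector):
              soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
              tag = soup.select_one(selector)
              return float(tag.text.strip().lstrip("$")) if tag else None

          def cheapest(stores=STORES):
              found = {name: price_for(url, sel) for name, (url, sel) in stores.items()}
              priced = {k: v for k, v in found.items() if v is not None}
              return min(priced, key=priced.get) if priced else None

          # cheapest() -> "store_b", say, if its milk is cheaper that week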

    • TrickDacy@lemmy.world
      link
      fedilink
      arrow-up
      9
      ·
      11 months ago

      Why are so many of you trivializing the fact that providing perfectly formatted input data and having set logic figure something out is a VERY different thing from providing a firehose of data and then asking the software to make sense of it? Like, have you been paying attention here at all?

      • lad@programming.dev
        link
        fedilink
        arrow-up
        1
        ·
        11 months ago

        In this case I would suppose that there’s no need for a firehose of data, especially if run locally. The user only has so many shops around, and the fridge is not factory-scale big.

        • TrickDacy@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          11 months ago

          Yeah, true. I guess I should have said a mishmash of data. It’s more about the fact that the data wouldn’t necessarily be in some regular format; the majority of the work you want the machine to do is finding and compiling that data.

    • EnderMB@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      edit-2
      11 months ago

      In my experience, most things touted as AI are really rule-based or graph-based, with a sprinkling of classification somewhere so a manager can get that sweet VC money.

      That’s not to say that this couldn’t be done with AI, particularly one that is trained on top of a rule-based system to find the best options for given circumstances.