Excerpt:

“Even within the coding, it’s not working well,” said Smiley. “I’ll give you an example. Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven’t engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence.”

Measures of engineering excellence, said Smiley, include metrics like deployment frequency, lead time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure how AI affects engineering performance.
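
(The first four of those are the familiar DORA metrics. As a rough sketch of what two of them compute, with hypothetical record shapes, not anything from the article:)

```python
from statistics import mean

def change_failure_rate(deploys: list[dict]) -> float:
    """Fraction of deployments that caused a failure in production."""
    return sum(1 for d in deploys if d["failed"]) / len(deploys)

def mean_time_to_restore(restore_hours: list[float]) -> float:
    """Average hours from incident start to service restored."""
    return mean(restore_hours)

deploys = [{"failed": False}, {"failed": True}, {"failed": False}, {"failed": False}]
print(change_failure_rate(deploys))      # 0.25
print(mean_time_to_restore([0.5, 2.0]))  # 1.25
```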

“We don’t know what those are yet,” he said.

One metric that might be helpful, he said, is measuring tokens burned to get to an approved pull request – a formally accepted change in software. That’s the kind of thing that needs to be assessed to determine whether AI helps an organization’s engineering practice.
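
(A minimal sketch of what such a metric could look like; the names and data source here are hypothetical, not from Smiley:)

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    tokens_burned: int  # LLM tokens spent producing and revising this PR
    approved: bool      # did it survive review and merge?

def tokens_per_approved_pr(prs: list[PullRequest]) -> float:
    """Total tokens spent, divided by PRs that actually got approved.

    Tokens burned on rejected PRs still count toward the cost:
    wasted generation is part of the price of each success.
    """
    approved = sum(1 for pr in prs if pr.approved)
    if approved == 0:
        return float("inf")  # all spend, no accepted output
    return sum(pr.tokens_burned for pr in prs) / approved

history = [PullRequest(12_000, False), PullRequest(8_000, False), PullRequest(15_000, True)]
print(tokens_per_approved_pr(history))  # 35000.0
```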

To underscore the consequences of not having that kind of data, Smiley pointed to a recent attempt to rewrite SQLite in Rust using AI.

“It passed all the unit tests, the shape of the code looks right,” he said. “It’s 3.7x more lines of code that performs 2,000 times worse than the actual SQLite. Two thousand times worse for a database is a non-viable product. It’s a dumpster fire. Throw it away. All that money you spent on it is worthless.”

All the optimism about using AI for coding, Smiley argues, comes from measuring the wrong things.

“Coding works if you measure lines of code and pull requests,” he said. “Coding does not work if you measure quality and team performance. There’s no evidence to suggest that that’s moving in a positive direction.”

  • motruck@lemmy.zip · 2 points · 58 minutes ago

    Hahaha. I’m guessing this guy works in developer tools. These types of metrics are great, but you rarely get there. You will get a few of them, but the reality is that the people who want to use AI to produce faster are the same people who won’t give you time to properly instrument your system for metrics like these. Good luck with your expectation that someone measures the impact of AI in a meaningful way.

  • Dr. Moose@lemmy.world · 21 points · 7 hours ago

    People delude themselves if they think LLMs are not useful for coding. People also delude themselves that all code will be AI-written in the next 2 years. The reality is that it’s an incredibly useful tool, but with reasonable limits.

    • rothaine@lemmy.zip · 1 point · 16 minutes ago

      I think part of it is that it’s been overhyped for so long. But now Opus can actually do all the shit we were promised 2 years ago.

  • BrightCandle@lemmy.world · 7 points · 13 hours ago

    I keep trying to use the various LLMs that people recommend for coding for various tasks, and they don’t just get things wrong. I have been doing quite a bit of embedded work recently, and some of the designs they come up with would cause electrical fires; it’s that bad. Where the earlier versions would go “oh yes, that is wrong, let me correct it…” and then often get it wrong again, the new ones will confidently tell you that you are wrong. When you tell them the design caught fire, they just don’t change.

    I don’t get it. I feel like all these people claiming success with them are just not very discerning about the quality of the code they produce, or worse, just don’t know any better.

    • Shayeta@feddit.org · 5 points · 7 hours ago

      It is possible to get good results, but the problem is that you yourself need to have a very good understanding of the problem and how to solve it, and then accurately convey that to the AI.

      Granted, I don’t work on embedded and I’d imagine there’s less code available for AI to train on than other fields.

      • ironhydroxide@sh.itjust.works · 3 points · 4 hours ago

        Yes, I definitely want to train a new hire who is superlatively confident that they are correct, while also having to do my own job correctly, while said new hire keeps putting shit in my work.

  • melsaskca@lemmy.ca · 30 points · 1 day ago

    Businesses were failing even before AI. If I cannot eventually speak to a human on a telephone then the whole human layer is gone and I no longer want to do business with that entity.

  • python@lemmy.world · 41 points · 1 day ago

    Recently had to call out a coworker for vibecoding all her unit tests. How did I know they were vibe coded? None of the tests had an assertion, so they literally couldn’t fail.
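
    (For anyone who hasn’t hit this failure mode, a hypothetical pytest-style example: the first test calls the code but asserts nothing, so it can only fail by crashing.)

    ```python
    def add(a, b):
        return a + b

    def test_add_vibecoded():
        add(2, 2)  # no assertion: passes even if add() returned 5

    def test_add_for_real():
        assert add(2, 2) == 4  # can actually fail
    ```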

    • ch00f@lemmy.world · 24 points · 1 day ago

      Vibe coding guy wrote unit tests for our embedded project. Of course, the hardware peripherals aren’t available for unit tests on the dev machine/build server, so you sometimes have to write mock versions (like an “adc” function that just returns predetermined values in the format of the real analog-to-digital converter).

      Claude wrote the tests and mock hardware so well that it forgot to include any actual code from the project. The test cases were just testing the mock hardware.
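
      (A sketch of that pattern with hypothetical names: the mock stands in for the missing peripheral, and the bug is asserting on the mock itself instead of passing it through the project’s real code.)

      ```python
      # Mock peripheral: stands in for the real ADC on the dev machine/build server.
      class MockAdc:
          def __init__(self, readings):
              self._readings = iter(readings)

          def read(self):
              return next(self._readings)  # predetermined 12-bit values

      # Project code under test: what the tests should actually exercise.
      def battery_voltage(adc, vref=3.3, bits=12):
          return adc.read() / (2**bits - 1) * vref

      def test_battery_voltage():
          adc = MockAdc([4095])
          # Right: route the mock through the real conversion code.
          assert abs(battery_voltage(adc) - 3.3) < 1e-6
          # The vibe-coded tests asserted on MockAdc.read() directly and
          # never called battery_voltage() at all.
      ```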

      • 87Six@lemmy.zip · 14 points · 1 day ago

        Not realizing that should be an instant firing. The dev didn’t even glance at the unit tests…

    • urandom@lemmy.world · 6 points · 1 day ago

      That’s weird. I’ve made it write a few tests once, and it pretty much made them in the style of other tests in the repo. And they did have assertions.

      • clif@lemmy.world · 4 points · edited · 19 hours ago

        My company is pushing LLM code assistants REALLY hard (like, you WILL use it, but they’re supposedly not flagging you for termination if you don’t… yet). My experience is the same as yours: unit tests are one of the places where it actually seems to do pretty well. It’s definitely not 100%, but in general it’s not bad and does seem to save some time in this particular area.

        That said, I did just remove a test that it created that verified that IMPORTED_CONSTANT is equal to localUnitTestConstantWithSameHardcodedValueAsImportedConstant. It passed ; )
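
        (Rendered as a minimal hypothetical, that test was effectively:)

        ```python
        IMPORTED_CONSTANT = 42  # in reality imported from the project

        localUnitTestConstantWithSameHardcodedValueAsImportedConstant = 42

        def test_constant():
            # Tautology: compares two hardcoded copies of the same value,
            # so it verifies nothing about the project.
            assert IMPORTED_CONSTANT == localUnitTestConstantWithSameHardcodedValueAsImportedConstant
        ```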

      • rumba@lemmy.zip · 4 points · 1 day ago

        Trust, with verification. I’ve had it do everything right, and I’ve had it do things so incredibly stupid that even a cursory glance at the code would be more than enough to /clear and start back over.

        Claude Code is capable of producing code and unit tests, but it doesn’t always get it right. It’s smart enough that it will keep trying until it gets the result, but if you start running low on context, it’ll start getting worse at it.

        I wouldn’t have it contribute a lot of code AND unit tests in the same session. New session: read this code and make unit tests. New session: read these unit tests and give me advice on any problems or edge cases that might be missed.

        To be fair, if you’re not reading what it’s doing and guiding it, you’re fucking up.

        I think it’s better as a second set of eyes than a software architect.

        • urandom@lemmy.world · 5 points · 24 hours ago

          I think it’s better as a second set of eyes than a software architect.

          A rubber ducky that talks back is also a good analogy for me.

          I wouldn’t have it contribute a lot of code

          Yeah, I tried that once, for a tedious refactoring. It would’ve been faster if I did it myself, tbh. Telling it to do small tedious things, and keeping the interesting things for yourself (cause why would you deprive yourself of that …) is currently where I stand with this tool.

          • rumba@lemmy.zip · 1 point · 2 hours ago

            and keeping the interesting things for yourself (cause why would you deprive yourself of that …

            I fear that will be required at some point. It’s not always good at writing code, but it does well enough that it can turn a seasoned developer into a manager. :/

    • nutsack@lemmy.dbzer0.com · 7 points · 1 day ago

      if you reject her pull requests, does she fix it? is there a way for management to see when an employee is pushing bad commits more frequently than usual?

    • melfie@lemy.lol · 1 point · edited · 1 day ago

      Yeah, it’s a bad idea to let AI write both the code and the tests. If nothing else, at least review the tests more carefully than everything else and also do some manual testing. I won’t normally approve a PR unless it has a description of how it was tested with preferably some screenshots or log snippets.

  • Not_mikey@lemmy.dbzer0.com · 31 points · edited · 1 day ago

    Guy selling an AI coding platform says other AI coding platforms suck.

    This just reads like a sales pitch rather than journalism. It doesn’t cite any studies, just some anecdotes about what he hears “in the industry”.

    Half of it is:

    You’re measuring the wrong metrics for productivity, you should be using these new metrics that my AI coding platform does better on.

    I know the AI hate is strong here but just because a company isn’t pushing AI in the typical way doesn’t mean they aren’t trying to hype whatever they’re selling up beyond reason. Nearly any tech CEO cannot be trusted, including this guy, because they’re always trying to act like they can predict and make the future when they probably can’t.

    • yabbadabaddon@lemmy.zip · 11 points · 1 day ago

      My take exactly. Especially the bits about unit tests. If you cannot rely on your unit tests as a first assessment of your code quality, your unit tests are trash.

      And not every company runs GitHub. The metrics he’s talking about are DevOps metrics, not development metrics. For example, in my work, nobody gives a fuck about mean time to production. We have a planning schedule, and we need the OK from our customers before we can update our product.

  • magiccupcake@lemmy.world · 37 points · 1 day ago

    I love this bit especially

    Insurers, he said, are already lobbying state-level insurance regulators to win a carve-out in business insurance liability policies so they are not obligated to cover AI-related workflows. “That kills the whole system,” Deeks said. Smiley added: “The question here is if it’s all so great, why are the insurance underwriters going to great lengths to prohibit coverage for these things? They’re generally pretty good at risk profiling.”

  • Thorry@feddit.org · 116 points · 2 days ago

    Yeah these newer systems are crazy. The agent spawns a dozen subagents that all do some figuring out on the code base and the user request. Then those results get collated, then passed along to a new set of subagents that make the actual changes. Then there are agents that check stuff and tell the subagents to redo stuff or make changes. And then it gets a final check like unit tests, compilation etc. And then it’s marked as done for the user. The amount of tokens this burns is crazy, but it gets them better results in the benchmarks, so it gets marketed as an improvement. In reality it’s still fucking up all the damned time.

    Coding with AI is like coding with a junior dev who didn’t pay attention in school, is high right now, doesn’t learn, and only listens half of the time. It fools people into thinking it’s better, because it shits out code super fast. But the cognitive load is actually higher, because checking the code is much harder than coming up with it yourself. It’s slower by far. If you are actually going faster, the quality is lacking.

    • Shayeta@feddit.org · 3 points · 7 hours ago

      It’s like guiding a coked-up junior who can write 5,000 wpm and has read every piece of documentation ever without understanding any of it.

    • merc@sh.itjust.works · 7 points · 21 hours ago

      checking the code is much harder than coming up with it yourself

      That’s always been true. But, at least in the past when you were checking the code written by a junior dev, the kinds of mistakes they’d make were easy to spot and easy to predict.

      LLMs are created in such a way that they produce code that genuinely looks perfect at first. It’s stuff that’s designed to blend in and look plausible. In the past you could look at something and say “oh, this is just reversing a linked list”. Now, you have to go through line by line trying to see if the thing that looks 100% plausible actually contains a tiny twist that breaks everything.

    • chunkystyles@sopuli.xyz · 21 points · 1 day ago

      This is very different from my experience, but I’ve purposely lagged behind in adoption and I often do things the slow way because I like programming and I don’t want to get too lazy and dependent.

      I just recently started using Claude Code CLI. With how I use it (asking it specific questions and often telling it exactly what files and lines to analyze), it feels more like talking to an extremely knowledgeable programmer who has very narrow context and often makes short-sighted decisions.

      I find it super helpful in troubleshooting. But it also feels like a trap, because I can feel it gaining my trust and I know better than to trust it.

      • TehPers@beehaw.org · 7 points · 1 day ago

        I’ve mentioned the long-term effects I see at work in several places, but all I can say is be very careful how you use it. The parts of our codebase that are almost entirely AI-written are unreadable garbage and a complete clusterfuck of coding paradigms. It’s bad enough that I’ve said straight to my manager’s face that I’d be embarrassed to ship this to production (and yes, I await my pink slip).

        As a tool, it can help explain code, it can help find places where things are being done, and it can even suggest ways to clean up code. However, those are all things you’ll also learn over time as you gather more and more experience, and it acts more as a crutch here because you spend less time learning the code you’re working with as a result.

        I recommend maintaining exceptional skepticism with all code it generates. Claude is very good at producing pretty code. That code is often deceptive, and I’ve seen even Opus hallucinate fields, generate useless tests, and misuse language/library features to solve a task.

    • Flames5123@sh.itjust.works · 29 points · 2 days ago

      I code with AI a good bit for a side project, since I need to use my work AI and get my stats up to show management that I’m using it. The “impressive” thing is learning new software and how to use it quickly in your environment. When setting up my homelab with automatic git pull, it quickly gave me some commands and showed me what to add to my docker container.

      Correcting issues is exactly like coding with a high junior dev, though. The code bloat is real, and I’m going to attempt to use agentic AI to consolidate it in the future. I don’t believe you can really “vibe code” unless you already know how to code, though. Stating the exact structures, organization, and whatnot is vital when agentic AI is programming semi-complex systems.

  • jimmux@programming.dev · 55 points · 2 days ago

    We never figured out good software productivity metrics, and now we’re supposed to come up with AI effectiveness metrics? Good luck with that.

    • Senal@programming.dev · 17 points · 1 day ago

      Sure we did.

      “Lines Of Code” is a good one: more code = more work, so it must be good.

      I recently had a run-in with another good one: PRs/Dev/Month.

      Not only is that one good for overall productivity, it’s a way to weed out those unproductive devs who check in less often.

      This one was so good, management decided to add it to the company-wide catchup slides, in a section espousing how the new AI-driven systems brought this number up enough to be above other companies.

      That means other companies are using it as well, so it must be good.

  • DickFiasco@sh.itjust.works · 84 points · 2 days ago

    AI is a solution in search of a problem. Why else would there be consultants to “help shepherd organizations towards an AI strategy”? Companies are looking to use AI out of fear of missing out, not because they need it.

    • Honytawk@discuss.tchncs.de · 1 point · edited · 5 hours ago

      Nah, it is more that LLMs are a neat technology that allows computers to generate stuff on their own, which has all sorts of uses. It has solved the problem of typing big texts on your own (read: it did not solve the problem of reviewing big texts).

      But it has also gaslit managers into thinking it can do much more than its capabilities, so they demand it be put into everything. With disastrous results.

    • nucleative@lemmy.world · 18 points · 2 days ago

      When I entered the workforce in the late '90s, people were still saying this about putting PCs on every employee’s desk. This was at a really profitable company. The argument was they already had telephones, pen and paper. If someone needed to write something down, they had secretaries for that who had typewriters. They had dictating machines. And Xerox machines.

      And the truth was, most of the higher level employees were surely still more profitable on the phone with a client than they were sitting there pecking away at a keyboard.

      Then, just a handful of years later, not only would the company have been toast had it not pushed ahead, it was also deploying BlackBerry devices with email, laptops with remote access capabilities to most staff, and handheld PDAs (Palm Pilots) to many others.

      Looking at the history of all of this, sometimes we don’t know what exactly will happen with newish tech, or exactly how it will be used. But it’s true that the companies that don’t keep up often fall hopelessly behind.

      • explodicle@sh.itjust.works · 6 points · 1 day ago

        “But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”

        — Carl Sagan

      • mycodesucks@lemmy.world · 33 points · 2 days ago

        If AI is so good at what it does, then it shouldn’t matter if you fall behind in adopting it… it should be able to pick up from where you need it. And if it’s not mature, there’s an equally valid argument to be made for not even STARTING adoption until it IS - early adopters always pay the most.

        There’s practically no situation where rushing now makes sense, even if the tech eventually DOES deliver on the promise.

        • Honytawk@discuss.tchncs.de · 2 points · edited · 5 hours ago

          It makes sense for the tech companies to be rushing AI development because they want to be the only one people use. They want to be the Amazon of AI.

          A ton of tech companies operate like that. They pump massive investments into projects because they see a future where they have the monopoly and will get their investments out a hundred fold.

          The users should be a lot more wary though.

        • OpenStars@piefed.social · 7 points · 1 day ago

          Yes but counterpoint: give me your money.

          … or else something bad might happen to you? Sadly, this seems to be the intellectual level that the discussion is at right now, and corporate structure, being authoritarian, leans towards listening to those highest up in the hierarchy, such as Donald J. Trump.

          “Logic” has little to do with any of this. The elites have spoken, so get to marching, NOW.

      • Tiresia@slrpnk.net · 6 points · 1 day ago

        I think that’s called a cargo cult. Just because something is a tech gadget doesn’t mean it’s going to change the world.

        Basically, the question is this: If you were to adopt it late and it became a hit, could you emulate the technology with what you have in the brief window between when your business partners and customers start expecting it and when you have adapted your workflow to include it?

        For computers, the answer was no. You had to get ahead of it so that companies with computers could communicate with your computers faster than with any competitor’s.

        But e-mail is just a cheaper fax machine. And for office work, mobile phones are just digital secretaries+desk phones. Mobile phones were critical on the move, though.

        Even if LLMs were profitable, it’s not going to be better at talking to LLMs than humans are. Put two LLMs together and they tend to enter hallucinatory death spirals, lose their sense of identity, and hit other failure modes. Computers could rely on communicable standards, but LLMs fundamentally don’t have standards. There is no API, no consistent internal data structure.

        If you put in the labor to make an LLM play nice with another LLM, you just end up with a standard API. And yes, it’s possible that this ends up being cheaper than humans, but it does mean you lose nothing by adopting late, when all the kinks have been worked out and protocols have been established. Just hire some LLM experts to do the transfer right the first time.

        • Honytawk@discuss.tchncs.de · 2 points · 4 hours ago

          Even if LLMs were profitable, it’s not going to be better at talking to LLMs than humans are.

          LLMs don’t need to be better. They just need to be more profitable. And wages are very expensive. It doesn’t matter if they lose a couple of customers when they can reduce costs.

          It is all part of the enshittification of the company and for the enrichment of the shareholders.

          • Tiresia@slrpnk.net · 1 point · 23 minutes ago

            Except LLMs aren’t profitable. They’re propped up by venture capital on the one hand and desperately integrated into systems with no case study on the effects on profit on the other. Video game CEOs are surprised and appalled when gamers turn against AI, implying they did literally no market research before investing billions.

            When venture capital dries up and companies have to bear the full cost of LLMs themselves - or worse: if LLM companies go bankrupt and their API goes dead - any company that adopted LLMs into their workflow is going to suffer tremendously. Imagine if they fired half their employees because the LLM does that work and then the LLM stops working. So even if you could lose some money this quarter to invest in it and maybe gain some back by the end of this year, several years from now the company could be under existential threat.

            And again, it can be acceptable to take this sort of risk if the technology is one you might at some point not be able to serve customers and business partners without. But LLMs and genAI are not that sort of technology. Maybe business partners will hate you if you don’t go along with the buzzword mania, but then you should fake it and allow it to cause as little damage as it can.

            It is all part of the enshittification of the company

            A company that adopts LLMs is not enshittifying, it is setting itself up to be a victim of LLM enshittification.

            and for the enrichment of the shareholders.

            Shareholders would be richer in the short term if they didn’t waste money investing in LLM adoption, and much richer in the long term if they were one of the few companies that doesn’t go bankrupt when the LLM bubble pops.

            The purpose of LLM adoption is to weaken the social-political position of workers, to create an extra rival that breaks their collective bargaining power, even if it costs capital unfathomable amounts of money. Just as capitalists oppose universal basic income despite the fact that adopting it would massively increase their profit margins (workers wouldn’t get sick as often), they are fully capable of acting in solidarity with each other for purposes of class warfare, even if it comes at a personal loss.

    • Saledovil@sh.itjust.works · 5 points · 1 day ago

      The problem is that code is hard to write, and AI just doesn’t solve that. This is the opposite of crypto, where the product is sort of good at what it does (not bitcoin, though), but we don’t actually need it to do that.

    • Jakeroxs@sh.itjust.works · 4 points · 1 day ago

      Yeah it is. It brings up a lot of good points that often don’t get talked about by the anti-AI folks (“the sky is falling, AI is horrible”) and the extreme pro-AI folks (“we’re going to replace all the workers with AI”).

      You absolutely have to know what the AI is doing at least somewhat to be able to call it out when it’s clearly wrong/heading down a completely incorrect path.

    • 87Six@lemmy.zip · 8 points · 1 day ago

      recent attempt to rewrite SQLite in Rust using AI

      I think it is talking about 100% vibe code. And yeah, it’s pretty useful if you don’t abuse it.

      • rumba@lemmy.zip · 5 points · 1 day ago

        Yeah, it’s really good at short bursts of complicated things. “Give me a curl statement to post this file as a snippet into Slack.” “Give me a connector bot to and from Ollama and Meshtastic.” It’ll give you serviceable, but not perfect, code.

        When you get to bigger, more complicated things, it needs a lot of instruction, guard rails and architecture. You’re not going to just “Give me SQLite but in Rust, GO” and have a good time.

        I’ve seen some people architect some crazy shit. You do this big, long, drawn-out project: tell it to use a small control orchestrator, set up many agents and have each agent do part of the work, have it create full unit tests, be demanding about best practices, post security checks, ouroboros it and let it go.

        But it’s expensive, and we’re still getting venture capital tokens for less than cost, and you’ll still have hard-to-find edge cases. Someone may eventually work out a fairly generic way to set it up to do medium scale projects cleanly, but it’s not now and there are definite limits to what it can handle. And as always, you’ll never be able to trust that it’s making a safe app.

        • 87Six@lemmy.zip · 4 points · edited · 1 day ago

          Yea, I find that I need to instruct it comparably to a junior to do any good work… and our junior standard, trust me, is very, very low.

          I usually spam the planning mode and check every nook of the plan to make sure it’s right before the AI even touches the code.

          I still can’t tell if it’s faster or not compared to just doing things myself. And as long as we aren’t allocated time to compare end to end with 2 separate devs of similar skill, there’s no point even trying to guess, imho. Though I’m not optimistic; I may just be wasting time.

          And yea, the true costs per token are probably double than what they are today, if not more…

          • rumba@lemmy.zip · 1 point · 1 hour ago

            Once you set up a proper scaffold for it in one project, it’s marginally repeatable across other projects. If all you have is one project, that would be crap. Where this will disrupt and kill things is in cheap contract work.

            If you’re trying to produce grade A code parallel to a grade A developer on a single project, it’s absolutely a losing battle for AI

            But if you have unit tests and say “go upgrade these libraries, test, then fix any problems, and keep that loop until it all works,” it’s about at the point where it can be serviceable.

            I’m betting tokens for development go up 50-100 times when it’s all done. I know that sounds shocking, but hear me out.

            • Those datacenters, the ones that banks are starting to get cold feet over, are hella expensive. We’re currently paying with monopoly money.
            • The cost savings at scale are only as good as the idle cycles on a machine, and the development shift for a small project is eating up days of cycles for a single machine.
            • Those mega-datacenter machines will only have a lifespan of a few years.

            The AI companies’ bet is that they can get companies to fire enough developers to convert a decent percentage of salaries over to AI. They’re planning on Bob’s Discount Coders to fire people making 40-80k and long-term move 40k of that salary per head to them.

            It’ll be like a streaming service where they’re paying $16 a month for claude and they’ll slowly enshittify the service until it’s a grand or two per month per head.

            Step 1: Depress the market for coders at a loss, allowing companies to pay less and hoover up the extra money by firing people, which means less computer science in college, making a hole in the job market.

            Step 2: Slowly crank up the features and cost until the prices are back where they were, but all the money is flowing to them.

  • CubitOom@infosec.pub · 63 points · 2 days ago

    Generative models, which many people call “AI”, have a much higher catastrophic failure rate than we have been led to believe. They cannot actually be used to replace humans, just as an inanimate object can’t replace a parent.

    Jobs aren’t threatened by generative models. Jobs are threatened by a credit crunch due to high interest rates and a lack of lenders being able to adapt.

    “AI” is a ruse, a useful excuse that makes people want to invest, makes investors and economists OK with record job loss, and makes the general public more susceptible to data harvesting and surveillance.

  • Caveman@lemmy.world · 1 point · 19 hours ago

    I got a hot take on this. People are treating AI as a fire and forget tool when they really should be treating it like a junior dev.

    Now here’s what I think: it’s a force multiplier. Let’s assume a dev has a profile of…

    2x feature progress, 2x tech debt removed, 1x tech debt added.

    Net tech-debt-adjusted productivity: 2 + 2 - 1 = 3x.

    Multiply by an AI factor of 2 and you have a 6x engineer.

    Now for another, common case: 1x feature progress with a net tech debt of -1.5x gives 1 - 1.5 = -0.5x, which the AI factor of 2 turns into a -1x engineer.
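
    (The same back-of-the-envelope model as a runnable sketch; the formula and numbers are just the assumptions above, not anything measured.)

    ```python
    def adjusted_output(features, debt_removed, debt_added, ai_factor=2.0):
        # Net tech-debt-adjusted productivity, scaled by the AI multiplier.
        return (features + debt_removed - debt_added) * ai_factor

    print(adjusted_output(2, 2, 1))    #  6.0: the strong dev gets stronger
    print(adjusted_output(1, 0, 1.5))  # -1.0: the weak dev makes things worse, faster
    ```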

    The latter engineer, with AI, will crank out features as fast as the former does without it, but will make the code base worse way faster.

    Now imagine that the latter engineer really leans into AI, gets really good at cranking out features, gets commended for it, and continues. He’ll end up just creating bad code at an alarming pace until the code base becomes brittle and unwieldy. This is what I’m guessing is going to happen over the next few years. More experienced devs will see a massive benefit, but more junior devs will need to be reined in a lot.

    Going forward, architecture and separation of concerns will become more important, so we can throw away garbage and rewrite it way faster.

    • Buddahriffic@lemmy.world · 5 points · 16 hours ago

      It’s not even a junior dev. It might “understand” a wider and deeper set of things than a junior dev does, but at least junior devs might have a sense of coherency to everything they build.

      I use gen AI at work (because they want me to) and holy shit is it “deceptive”. In quotes because it has no intent at all, but it is just good enough to make it seem like it mostly did what was asked, but you look closer and you’ll see it isn’t following any kind of paradigms, it’s still just predicting text.

      The amount of context it can include in those predictions is impressive, don’t get me wrong, but it has zero actual problem solving capability. What it appears to “solve” is just pattern matching the current problem to a previous one. Same thing with analysis, brainstorming, whatever activity can be labelled as “intelligent”.

      Hallucinations are just cases where it matches a pattern that isn’t based on truth (either mispredicting or predicting a lie). But also goes the other way where it misses patterns that are there, which is horrible for programming if you care at all about efficiency and accuracy.

      It’ll do things like write a great helper function that it uses once but never again, maybe even writing a second copy of it the next time it would use it. Or forgetting instructions (in a context window of 200k, a few lines can easily get drowned out).
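
      (A toy example of that duplication pattern; the names are hypothetical.)

      ```python
      def format_price(cents):  # helper it wrote for one feature...
          return f"${cents / 100:.2f}"

      def render_price_label(cents):  # ...and the second copy it wrote later,
          return f"${cents / 100:.2f}"  # instead of reusing format_price()
      ```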

      Code quality is going to suffer as AI gets adopted more and more. And I believe the problem is fundamental to the way LLMs work. The LLM-based patches I’ve seen so far aren’t going to fix it.

      Also, as much as it’s nice to not have to write a whole lot of code, my software dev skills aren’t being used very well. It’s like I’m babysitting an expert programmer with Alzheimer’s who thinks they’re still in their prime and doesn’t realize they’ve forgotten what they did 5 minutes ago, while my company pays them big money, gets upset if we don’t use their expertise, and probably intends to use my AI chat logs to train my replacement, because everything I know can be parsed out of those conversations.

      • farting_gorilla@lemmy.world · 1 point · 5 minutes ago

        It’ll do things like write a great helper function that it uses once but never again, maybe even writing a second copy of it the next time it would use it.

        Holy shit! That exactly explains why I’ve seen so many duplicated functions lately. I brought it up to the dev responsible the first time I found one (git blame), and he was just like “oh haha I can remove one” like he wasn’t quite sure what I was talking about, now I realize he must’ve gotten on that AI train much earlier than I thought…

    • GiorgioPerlasca@lemmy.ml · 3 points · 15 hours ago

      Junior software developers understand the task. They improve their skill in understanding the code and writing better code. They can read the documentation.

      Large language models just generate code based on what it looked like in previous examples.

    • forrgott@lemmy.zip · 7 points · 19 hours ago

      Or maybe don’t try and drive a screw in with a hammer?

      It’s just not good for 99% of the shit it’s marketed for. Sorry.

  • luciole (they/them)@beehaw.org · 41 points · 2 days ago

    This is all fine and dandy but the whole article is based on an interview with “Dorian Smiley, co-founder and CTO of AI advisory service Codestrap”. Codestrap is a Palantir service provider, and as you’d expect Smiley is a Palantir shill.

    The article hits different considering it’s more or less a world devourer zealot taking a jab at competing world devourers. The reporter is an unsuspecting proxy at best.

    • calliope@piefed.blahaj.zone · 15 points · edited · 1 day ago

      People will upvote anything if it takes a shot at AI. Even when the subtitle itself is literally an ad.

      Codestrap founders say we need to dial down the hype and sort through the mess

      The cult mentality is really interesting to watch.

      Keep replying! Maybe this is a good honeypot for stupid people. “I hate you!!” Lmao

        • calliope@piefed.blahaj.zone · 3 points · edited · 1 day ago

          Me: This is an ad, it’s crazy that people will engage in something that’s clearly an ad, they’re feeding right into it. It’s a cult mentality.

          You: I hate you!! SCREEEE

          You couldn’t have proved my point more. Someone even upvoted you because it was a shot at AI. The cult is so strong you can’t even tell you’re in it.

          I’m glad you have an outlet for your impotent rage, but do you have to be so pathetic? Your mental age is showing.

          I’ll take pretentious though, because I am better than you.