

No.


No.
So, it was gambling, then.


Revenue going up, hiring going down, layoffs every quarter and a big push for everyone to use AI. But at the same time basically no real success story from all this increased AI usage. Probably just me, but I just don’t get it.
No, you’ve got it: revenue increases in the short term when personnel costs are cut through layoffs and hiring freezes.
The story told (“workers must return to the office to sit on teleconferences all day”, prompting more of them to quit, or “your job can be done by robots”, or whatever) only needs to make enough sense that the shareholders are satisfied the executives have a sane explanation for the sudden loss of workers. Otherwise it might look like the executives are panicking!


10×s developers who could produce 0 code without it
Let me see; ten times nothin’, add nothin’, carry the nothin’…


The spec is so complex that it’s not even possible to know which regex to use
Yes. Almost like a regex is not the correct tool to use, and instead they should use a well-tested library function to validate email addresses.
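For illustration, a minimal sketch of that approach in TypeScript, using the npm validator package as one example of such a library (any well-maintained equivalent would do; the addresses here are made up):

    // Delegate to a well-tested library instead of hand-rolling a regex.
    // (The npm "validator" package is just one example.)
    import validator from "validator";

    console.log(validator.isEmail("user@example.com")); // true
    console.log(validator.isEmail("not an address"));   // false

The point stands with any such library: the address grammar is complicated enough that a maintained, widely exercised implementation will beat whatever regex one writes inline.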


Magit is what allowed me to finally commit to switching to Git full time.
It’s such an excellent front-end for Git that I’ve known numerous workmates to learn Emacs just to use Magit.


Except worse: Confluence tries insanely hard to prevent anyone from actually getting at the document source. So you are expected to use the godawful interactive web editor to make any changes.


Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihoods they will affect, not by the business owners.
Thank you for correctly describing what a Luddite wants and does not want.


do companies need code that runs quickly on the systems that they are installed on to perform their function.
(Thank you, this indirectly answers one question: the specific optimisation you’re asking about, it seems, is speed of execution when deployed in production. By stating that as the ideal to be optimised, other properties are necessarily secondary and can be worse than optimal.)
Some do pursue that ideal, yes. For example: many businesses seek to deploy their internal applications on hosted environments where they pay not for a machine instance, but for seconds of execution time. By doing this they pay only when the application happens to be running (on a third party’s managed environment, which charges them for the service). If they can optimise the run-time of their application for any particular task, they pay less in hosting costs under such an agreement.
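As a toy sketch of that billing model (the rate and workload figures here are invented for illustration, not any provider’s real pricing):

    // Hypothetical pay-per-execution billing: the bill scales linearly
    // with how long each invocation runs.
    const pricePerSecond = 0.00001667;     // invented rate
    const invocationsPerMonth = 2_000_000; // invented workload

    function monthlyCost(secondsPerInvocation: number): number {
      return invocationsPerMonth * secondsPerInvocation * pricePerSecond;
    }

    console.log(monthlyCost(1.2)); // ≈ 40.0 per month, unoptimised
    console.log(monthlyCost(0.4)); // ≈ 13.3 — three times faster, one third the bill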
can an unqualified programmer use AI code to build an internal corporate system rather than have to pay for a more qualified programmer’s time either as an internal hire or producing.
This is now a question about paying for the time people spend developing and maintaining the application, I think? Which is thoroughly different from the time the application spends running a task. Again, I don’t see clearly how “optimise the application for execution speed” relates to this question.


As others have said: the content is likely to be only of historical interest, because the fields they describe have progressed in understanding a great deal in the intervening decades. As a result, many, many historical books are of effectively negligible interest today.
With that said, historical interest can sometimes amount to a lot: and those two seem to be from institutions that did seminal work (the RAND Corporation, for example).


This is a good question: submitting a bug report should be feasible without maintaining an account at GitHub (other issue trackers manage it just fine with only existing email communication, for example).
Unfortunately, GitHub, like so many other centralised platforms that assume they’re the centre of the world, expects you to create and maintain a personal identity special to GitHub, in order to submit a bug report at all.
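Debian’s debbugs tracker is one concrete counter-example: a bug report is an ordinary email to submit@bugs.debian.org, with two pseudo-header lines at the top of the body (the package name and version below are placeholders):

    To: submit@bugs.debian.org
    Subject: frobnicate: crashes on startup when the config file is empty

    Package: frobnicate
    Version: 1.2-3

    Steps to reproduce: …

No account, no password, no identity beyond the email address you already have.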


Following up after your clarification (thank you):
it is not okay to do, but the severity of how much of an issue it is depends on the context? Either that or it’s completely avoidable in the first place if I just use “automated testing” or “loggers”.
It’s important here to distinguish between the code you’re currently working on, in your local development environment only, and the code you commit to VCS (or otherwise record for progress, deployment, sharing with others, etc.).
In your local development environment, you are frequently making exploratory changes, and you don’t yet know exactly what behaviour you need to implement. In this mode, it’s normal to pepper the area of interest with console.log("data_record is:", data_record) calls and other chatty diagnostic messages. They are quick and easy to write, and give immediate results for your exploratory needs.
In the code you commit (or deploy, or share, or otherwise record as “this is what I think the code should be, for now”), you do not want those chatty, exploratory diagnostics, which are effectively just noise. Remove them as part of cleaning up the code, which you will do before leaving your workstation, because by then you understand what the diagnostic messages were there to tell you.
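A minimal sketch of that distinction, in the same JavaScript-ish vein (the logger here is deliberately naive; a real project would use an established logging library):

    // Exploratory, local-only (delete before committing):
    //   console.log("data_record is:", data_record);

    // Diagnostics worth committing go through a leveled logger,
    // silent by default and enabled only when needed:
    const LOG_LEVEL = process.env.LOG_LEVEL ?? "info";

    function debug(...args: unknown[]): void {
      if (LOG_LEVEL === "debug") {
        console.debug("[debug]", ...args);
      }
    }

    const dataRecord = { id: 42, status: "pending" }; // invented example data
    debug("data_record is:", dataRecord); // prints only when LOG_LEVEL=debug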
If you find that you haven’t yet understood the code, can’t yet clean it up, but need to leave your workstation? Then you’ve made a mistake of estimation: your exploration took too long and didn’t achieve a result. Clean up anyway, leave the code in a working state, and come back another day with a fresh mind. You will have a better understanding because of the exploration you did anyway.


Magit, which is the best Git porcelain around. Git, because it has an unparalleled free-software ecosystem of developer tools that work with it.
Why is Git’s free-software ecosystem so much better than all the other VCSen?
Largely because of marketing (the maker of Linux made this! hey look, GitHub!), but also because it has a solid internal data model that quickly proved to experts to be fast, flexible, and reliable.
Git’s command-line interface is atrocious compared to contemporary DVCSen. This was originally seen as no problem, because the Git developers intentionally released it as the “plumbing” for a VCS, intending that other motivated projects would create various VCS “porcelain” for various user audiences. https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain The interface with sensible operations and a coherent interface language resides in that “porcelain”, which the Git developers explicitly said they were not focussed on creating.
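For a concrete taste of the difference, compare a porcelain command with the plumbing that exposes the same data (both are real Git commands, runnable in any repository):

    # Porcelain: the human-oriented view of the latest commit.
    git log -1

    # Plumbing: the raw commit object that a porcelain formats for you.
    git cat-file -p HEAD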
But, of course, the “plumbing” command-line interface itself immediately became the primary way people were told to use Git, and the “porcelain” applications saw much slower development and nowhere near Git’s universal recognition. So either people didn’t learn Git (learning only a couple of operations in a web app, for example), or to learn Git they were required to use the dreadful, user-hostile default “plumbing” commands. Those commands were cemented as the primary way to learn Git for many years.
I was a holdout with Bazaar VCS for quite a while, because its command-line interface dealt in coherent user-facing language and consistent commands and options. It was deliberately designed to first have a good command-line UI, and make a solid DVCS under that. Which it did, quite well; but it was no match for the market forces behind Git.
Well, eventually I found that Magit is the best porcelain for Git, and now I have my favourite VCS.


Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?
Because most software is internal to the organisation (therefore closed by definition) and never gets compared with or used outside that organisation: yes, I think that when that software barely works, it is taken as good enough, and there’s no incentive to put in more effort to improve it.
My past year (and more) of programming business-internal applications has been characterised by upper-management imperatives to “use Generative AI, and we expect that to make you work faster”, without any effort spent to figure out whether there is any net improvement in the result.
Certainly there’s no effort spent to determine whether it’s a net drain on our time and on the quality of the result. Which everyone on our teams can see is the case. But we are pressured to continue using it anyway.


Does business internal software need to be optimized?
Need to be optimised for what? (To optimise is always to make trade-offs, reducing some property of the software in pursuit of some optimised ideal; what ideal are you referring to?)
And I’m not clear on how that question is related to the use of LLMs to generate code. Is there a connection you’re drawing between those?


The Unix shell remains an excellent IDE.
A huge array of text- and data-manipulation tools, with more available through the standard package manager in my operating system.
Add in a powerful text editor like Vim or Emacs, and nothing can beat this IDE.
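For instance, an ad-hoc analysis composed from those standard tools (the log file name and message format here are invented):

    # Count the distinct error codes in a log, most frequent first.
    grep -o 'ERROR [A-Z_]*' app.log | sort | uniq -c | sort -rn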


Please either update the post with the URL that gets around this login-wall; or remove the post, because the article is not on the public web.