- In the end, I think the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
The details are what stops it from working in every form it's been tried.
You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
There is no level of abstraction that saves you from this, because the last level is simply things happening in the world in the way you want them to, and it's really really complicated to engineer that to happen.
I think this is evident by looking at the extreme case. There are plenty of companies with software engineers who truly can turn instructions articulated in plain language into software. But you see lots of these not being successful for the simple reason that those providing the instructions are not sufficiently engaged with the detail, or have the detail wrong. Conversely, for the most successful companies the opposite is true.
- The pattern that gets missed in these discussions: every "no-code will replace developers" wave actually creates more developer jobs, not fewer.
COBOL was supposed to let managers write programs. VB let business users make apps. Squarespace killed the need for web developers. And now AI.
What actually happens: the tooling lowers the barrier to entry, way more people try to build things, and then those same people need actual developers when they hit the edges of what the tool can do. The total surface area of "stuff that needs building" keeps expanding.
The developers who get displaced are the ones doing purely mechanical work that was already well-specified. But the job of understanding what to build in the first place, or debugging why the automated thing isn't doing what you expected - that's still there. Usually there's more of it.
- I've watched this pattern play out in systems administration over two decades. The pitch is always the same: higher abstractions will democratise specialist work. SREs are "fundamentally different" from sysadmins, Kubernetes "abstracts away complexity."
In practice, I see expensive reinvention. Developers debug database corruption after pod restarts without understanding filesystem semantics. They recreate monitoring strategies and networking patterns on top of CNI because they never learned the fundamentals these abstractions are built on. They're not learning faster: they're relearning the same operational lessons at orders of magnitude higher cost, now mediated through layers of YAML.
Each wave of "democratisation" doesn't eliminate specialists. It creates new specialists who must learn both the abstraction and what it's abstracting. We've made expertise more expensive to acquire, not unnecessary.
Excel proves the rule. It's objectively terrible: 30% of genomics papers contain gene name errors from autocorrect, JP Morgan lost $6bn from formula errors, Public Health England lost 16,000 COVID cases hitting row limits. Yet it succeeded at democratisation by accepting catastrophic failures no proper system would tolerate.
The pattern repeats because we want Excel's accessibility with engineering reliability. You can't have both. Either accept disasters for democratisation, or accept that expertise remains required.
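As a concrete footnote to the row-limit example above: the lost COVID cases came from data being pushed through the legacy .xls format, which caps a worksheet at 65,536 rows and 256 columns. A guard like the sketch below (names are mine, purely illustrative) is the kind of check a purpose-built pipeline adds and an ad-hoc spreadsheet workflow quietly skips:

    import csv

    # Hard limits of the legacy .xls (BIFF) worksheet format.
    XLS_MAX_ROWS = 65_536
    XLS_MAX_COLS = 256

    def safe_for_xls(path: str) -> bool:
        # Return True only if a CSV would survive a round trip through
        # .xls without silently truncating rows or columns.
        with open(path, newline="") as f:
            rows = 0
            for row in csv.reader(f):
                rows += 1
                if len(row) > XLS_MAX_COLS:
                    return False
            return rows <= XLS_MAX_ROWS

    # Usage: refuse the export instead of silently dropping records.
    # if not safe_for_xls("cases.csv"):
    #     raise ValueError("Too big for .xls; use .xlsx or a database.")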
- This all reminds me of one of the most foundational and profound papers ever written about software development: Peter Naur's "Programming as Theory Building". I have seen colleagues get excited about using Claude to write their software for them, and then end up spending at least as much time as if they had written it themselves trying to develop a theory of the code that was produced, and an understanding sufficient to correct its problems and bugs. Every professional software engineer confronts the situation of digging into and dealing with a big wad of legacy code. However, most of us prefer those occasions when we can write some code fresh, and develop a theory and deep understanding from the get-go. Reverse-engineering a sufficient theory of legacy code to be able to responsibly modify it is hard and at times unsatisfying. I don't relish the prospect of having that be the sum total of my effort as a software engineer, when the "legacy code" I need to struggle to understand is code generated by an AI tool.
- I don't think the dream of replacing developers in particular exists. Specialization of labour leads to increased costs, due to the value placed on that specialized labour. Software development is one form of specialized manufacturing, and hence is more costly. Within software development, similar strata exist, placing increased value on increased specialization - hence the pyramid effect. The same is true within any field.
Similarly, one might argue that as increased capital finds its way into a given field, due to increased outcomes, labour in turn helps pressure pricing. Increased "sales" opportunity within said field (i.e. people being skilled enough to be employed, or specialized therein) will similarly lead to pricing pressure - on both ends.
- > Which brings us to the question: why does this pattern repeat?
The pattern repeats because the market incentivizes it. AI has been pushed as an omnipotent, all-powerful job-killer by these companies because shareholder value depends on enough people believing in it, not whether the tooling is actually capable. It's telling that folks like Jensen Huang talk about people's negativity towards AI being one of the biggest barriers to advancement, as if they should be immune from scrutiny.
They'd rather try to discredit the naysayers than actually work towards making these products function the way they're being marketed, and once the market wakes up to this reality, it's gonna get really ugly.
- As I have heard from mid-level managers and C-suite types across a few dev jobs: staff are the largest expense and the technology department is the largest cost center. I disagree, because Sales couldn't exist without a product, but that's a lost point.
This is why those same mid level managers and C suite people are salivating over AI and mentioning it in every press release.
The reality is that costs are being reduced by replacing US teams with offshore teams. And the layoffs are being spun as a result of AI adoption.
AI tools for software development are here to stay, will accelerate in the coming months and years, and there will be advances. But the cost reductions are largely realized via onshore-to-offshore replacement.
The remaining onshore teams must absorb much more of the slack and the fixes, and in a way they end up being more productive.
- It's not so much about replacing developers, but rather increasing the level of abstraction developers can work at, to allow them to work on more complex problems.
The first electronic computers were programmed by manually re-wiring their circuits. Going from that to being able to encode machine instructions on punchcards did not replace developers. Nor did going from raw machine instructions to assembly code. Nor did going from hand-written assembly to compiled low-level languages like C/FORTRAN. Nor did going from low-level languages to higher-level languages like Java, C++, or Python. Nor did relying on libraries/frameworks for implementing functionality that previously had to be written from scratch each time. Each of these steps freed developers from having to worry about lower-level problems and instead focus on higher-level problems. Mel's intellect is freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
(The thing that distinguishes gen-AI from all the previous examples of increasing abstraction is that those examples are deterministic and often formally verifiable mappings from higher abstraction -> lower abstraction. Gen-AI is neither.)
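To make that contrast concrete, here's a toy sketch (purely illustrative, not from the article): a compiler-style lowering is a pure function you can verify once and trust forever, while a generator samples from a distribution, so identical prompts can occasionally yield different, and wrong, output:

    import random

    def compile_expr(expr: str) -> list[str]:
        # Deterministic lowering: same input -> same output, every time,
        # so the mapping can be formally checked once and then trusted.
        a, op, b = expr.split()
        return [f"PUSH {a}", f"PUSH {b}", {"+": "ADD", "*": "MUL"}[op]]

    def generate_expr(prompt: str) -> list[str]:
        # Stochastic lowering: samples from a distribution over outputs,
        # so two runs with the same prompt may disagree.
        candidates = [
            ["PUSH 2", "PUSH 3", "ADD"],  # correct
            ["PUSH 2", "PUSH 3", "MUL"],  # plausible but wrong
        ]
        return random.choices(candidates, weights=[0.9, 0.1], k=1)[0]

    print(compile_expr("2 + 3"))         # always ['PUSH 2', 'PUSH 3', 'ADD']
    print(generate_expr("add 2 and 3"))  # usually right, occasionally not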
- Science is hated because its mastery requires too much hard work, and, by the same token, its practitioners, the scientists, are hated because of the power they derive from it. - Dijkstra, 1989
https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD104...
- The reverse is developers' recurring dream of replacing non-IT people, usually with a 100% online, automated, self-promoting SaaS. AI is also the latest incarnation of that.
- I was skeptical until 3-4 months ago, but my recent experience has been entirely different.
For context: we're the creators of ChatBotKit and have been deploying AI agents since the early days (about 2 years ago). These days, there's no doubt our systems are self-improving. I don't mean to hype this (judge for yourself from my skepticism on Reddit) but we're certainly at a stage where the code is writing the code, and the quality has increased dramatically. It didn't collapse as I was expecting.
What I don't know is why this is happening. Is it our experience, the architecture of our codebase, or just better models? The last one certainly plays a huge role, but there are also layers of foundation that now make everything easier. It's a framework, so adding new plugins is much easier than writing the whole framework from scratch.
What does this mean for hiring? It's painfully obvious to me that we can do more with less, and that's not what I was hoping for just a year ago. As someone who's been tinkering with technology and programming since age 12, I thought developers would morph into something else. But right now, I'm thinking that as systems advance, programming will become less of an issue—unless you want to rebuild things from scratch, but AI models can do that too, arguably faster and better.
It is hard to convey that kind of experience.
I am wondering if others are seeing it too.
- We could have replaced tons of developers if only employers were selective in their hiring and invested in training. Instead there are a ton of hardly marginal developers in employment.
Case in point: web frameworks as mentioned in the article. These frameworks do not exist to increase productivity for either the developer or the employer. They exist to mitigate training and lower the bar so the employer has a wider pool of candidates to select from.
- The way I learned to write software was years of cutting my teeth on hard problems. I have to wonder what happens when the new developers coming up don’t have that teeth cutting experience because they use language models to assist with every algorithm, etc?
- Can semi-technical people replace developers if those semi-technical people accept that the price of avoiding developers is a commitment to minimizing total system complexity?
Of course semi-technical people can troubleshoot, it's part of nearly every job. (Some are better at it than others.)
But how many semi-technical people can design a system that facilitates troubleshooting? Even among my engineering acquaintances, there are plenty who cannot.
- The pattern I've noticed building tooling for accountants: automation rarely removes jobs, it changes what the job looks like.
The bookkeepers I work with used to spend hours on manual data entry. Now they spend that time on client advisory work. The total workload stayed the same - the composition shifted toward higher-value tasks.
Same dynamic played out with spreadsheets in the 80s. Didn't eliminate accountants - it created new categories of work and raised expectations for what one person could handle.
The interesting question isn't whether developers will be replaced but whether the new tool-augmented developer role will pay less. Early signs suggest it might - if LLMs commoditise the coding part, the premium shifts to understanding problems and systems thinking.
- This is looking at the wrong end of the telescope. The arc has been to move computing closer to more and more end users. In the 1960s, FORTRAN enabled scientists and engineers to implement solutions without knowing much about the underlying computer. Thompson and Ritchie got a PDP-11 by promising to make a text processing system for patent applications. Many years later, desktop PCs and programs like VisiCalc and PageMaker opened up computing to many more users. The list goes on and on. With this movement, developer jobs disappeared or changed.
- This wave of AI innovation reveals that a lot of activity in coding turns out to be accidental complexity rather than essential. Put another way, many coding tasks are conceptual to humans but procedural to AI. Conceptual tasks require intuitive understanding, rigorous reasoning, and long-term planning. AI is not there yet. Procedural tasks, on the other hand, are low entropy with high priors: once a prompt is given, what follows is almost certain. For instance, one had to learn many concepts to write "public static void main(String[] args)" when writing Java code in the old days. But for AI, the conditional probability Pr(write "public static void main(String[] args)" | prompt = "write the entry method for a given class") is practically 1. Or if I'd like to use Python to implement linear regression, there is pretty much one way to implement it right, and AI knows it - nothing magical, but only because we humans have been doing it for years and the optimal solution for most cases has converged, so it becomes procedural to AI.
Fortunately or unfortunately, many procedural tasks are extremely hard for humans to master, but easy for AI to generate. In the meantime, we have structured our society to support such procedural work. As the wave of innovation spreads, many people will rise but many will also suffer.
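For concreteness, this is roughly the "converged" ordinary-least-squares implementation an assistant will reproduce on demand (a minimal sketch; the function and variable names are just illustrative):

    import numpy as np

    def fit_linear_regression(X, y):
        # Ordinary least squares: append a column of ones so the intercept
        # is fitted along with the weights, then solve min ||Xw - y||^2.
        X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
        w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
        return w[:-1], w[-1]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = X @ np.array([3.0, -1.5]) + 0.5 + rng.normal(scale=0.1, size=100)
    coef, intercept = fit_linear_regression(X, y)
    print(coef, intercept)  # recovers roughly [3.0, -1.5] and 0.5

There is essentially one sensible way to write this, which is exactly what makes it procedural rather than conceptual.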
- Don't take it personally. All businesses want to reduce costs. As long as people cost money, they'll want to reduce people.
- >The tools expanded who could write software, but they didn’t eliminate the expertise required for substantial systems.
The hardest thing about software construction is specification. There's always going to be domain specific knowledge associated with requirements. If you make it possible, as Delphi and Visual Basic 6 did, for a domain expert to hack together something that works, that crude but effective prototype functions as a concrete specification that a professional programmer can use to craft a much better version useful to more people than just the original author.
The expansion of the pool of programmers was the goal. It's possible that AI could eventually make programming (or at least specification) a universal skill, but I doubt it. The complexity embedded in all but the most trivial of programs will keep the software development profession in demand for the foreseeable future.
- In the 00s, Rational Rose UML was a mandatory course in my uni undergrad program.
At that time I had a chat with a small startup CEO who was sure he'd fire all those pesky programmers who think they're "smart" because they can code. He pointed me to the code Rational Rose had generated from his diagram, and told me that only the methods still needed to be implemented - which would also be possible soon - and that the hardest part is to model the system.
- "This time is different"
- Me, the last time it wasn't different
- Nothing says it like this quote: “quickly discovered that readable syntax didn’t eliminate the complexity of logic, data structures, or system design”
- Kind of off topic but this has got to be one of my least favorite CSS rules that I’ve seen in recent memory:
    .blog-entry p:first-letter { font-size: 1.2em; }
- Is this a real article or just AI-generated text? This whole text has a lot of very weird phrasing in it, also it's so strange how it just seems to keep trudging on and on without ever getting to the point. Actual human-written articles are not like this.
- > Yet demand for software far exceeds our ability to create it.
In particular, the demand for software tools grows faster than our ability to satisfy it. More demand exists than the people who would do the demanding can imagine. Many people who are not software engineers can now write themselves micro software tools using LLMs -- this ranges from homemakers to professionals of every kind. But the larger systems that require architecting, designing, building, and maintaining will continue to require some developers -- fewer, perhaps, but perhaps also such systems will proliferate.
- The link redirects back to the blog index if your browser is configured in Spanish, because it forces the language to Spanish and the article is not available in Spanish.
Here's an archived link: https://archive.is/y9SyQ
- It's simple -- the more high-minded and snobbish the developer class is (while extracting the highest salaries in the world), and as long as it continues to maintain this unreal amount of gatekeeping, the more the non-developer community (especially those at the leadership level) will continue to revel in the prospect of eliminating developers from the value chain.
- Consider what the rise of things like shopify, squarespace, etc. did for developers.
In 2001, you needed an entire development team if you wanted to have an online business. Having an online business was a complicated, niche thing.
Now, because it has gotten substantially easier, there are thousands of times as many online stores (probably millions of times as many), and many of them employ some sort of developer (usually on a retainer) to do work for them. Those consultants probably make more than the devs of 2001 did, too.
- I recently did a higher-education contract for one semester in a highly coding-focused course. I have a few years of teaching experience pre-LLMs, so I could evaluate the impact internally; my conclusion is that academic education as we know it is basically broken forever.
If educators use AI to write/update the lectures and the assignments, students use AI to do the assignments, then AI evaluates the student's submissions, what is the point?
I'm worried about some major software engineering fields experiencing the same problem. If design and requirements are written by AI, code is mostly written by AI, and users are mostly AI agents, what is the point?
- It's not the dream of replacing developers.
It's the dream of replacing labor.
They've already convinced their customers what the value of the product is! Cutting labor costs is profit! Never mind the cost to society! Socialize those costs and privatize those profits!
Then they keep the money for themselves, because capitalism lets a few people own the means of production.
So everything that looks cheaper than paying someone educated and skilled to do a thing is extremely attractive. All labor-saving devices ultimately do that.
- > Understanding this doesn’t mean rejecting new tools. It means using them with clear expectations about what they can provide and what will always require human judgment.
Speaking of tools, that style of writing rings a bell: Ben Affleck made a similar point about the evolving use of computers and AI in filmmaking, wielded with creativity by humans with lived experiences, https://www.youtube.com/watch?v=O-2OsvVJC0s. Faster visual effects production enables more creative options.
- The real reason is that expectations and requirements increased whenever tools boosted productivity or solved problems. This kept complexity growing and the work flowing. Just because you use cars instead of horses doesn't mean you get more free time.
- There must be a way to just print cash, have no jobs, and pay nobody, so we can have all the money. And then...? No and then.
- I'm reminded of this: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
- You shouldn’t ask developers about this.
You should ask the business owners. They are hiring fewer developers and looking to cut more.
- Consider what happened to painters after the invention of photography (~1830s). At first the technology was very limited and no threat at all to portrait and landscape painters.
By the 1860s artists were feeling the heat and responded by inventing all the "isms" - starting with impressionism. That's kept them employed so far, but who knows whether they'll be able to co-exist with whatever diffusion models become in 30 years.
- Mythical Man Month -> Mythical AI Agent Swarm
- It never happened before so it will never happen.
- > We’re still in that same fundamental situation. We have better tools—vastly better tools—but the thinking remains essential.
But less thinking is essential, or at least that’s what it’s like using the tools.
I’ve been vibing code almost 100% of the time since Claude 4.5 Opus came out. I use it to review itself multiple times, and my team does the same, then we use AI to review each others’ code.
Previously, we whiteboarded and had discussions more than we do now. We definitely coded and reviewed more ourselves than we do now.
I don't believe that AI is incapable of making mistakes, nor do I think that multiple AI reviews are enough to understand and solve problems, yet. Some incredibly huge problems are probably on the horizon. But for now, the general "AI will not replace developers" claim is false; our roles have changed - we are managers now, and for how long?
- This resonates with what I'm experiencing, but I think the article misses the real shift happening now.
The conversation shouldn't be "will AI replace developers". It should be "how do humans stay competitive as AI gets 10x better every 18 months?"
I watched Claude Code build a feature in 30 minutes that used to take weeks. That moment crystallised something: you don't compete WITH AI. You need YOUR personal AI.
Here's what I mean: Frontier teams at Anthropic/OpenAI have 20-person research teams monitoring everything 24/7. They're 2-4 weeks ahead today. By 2027? 16+ weeks ahead. This "frontier gap" is exponential.
The real problem isn't tools or abstraction. It's information overload at scale. When AI collapses execution time, the bottleneck shifts to judgment. And good judgment requires staying current across 50+ sources (Twitter, Reddit, arXiv, Discord, HN).
Generic ChatGPT is commodity. What matters is: does your AI know YOUR priorities? Does it learn YOUR judgment patterns? Does it filter information through YOUR lens?
The article is right that tools don't eliminate complexity. Personal AI doesn't eliminate it either; it amplifies YOUR ability to handle complexity at frontier speed.
The question isn't about replacement. It's about levelling the playing field. And frankly, we're all still figuring out how this will shape up in the future. If you have any solution that can help me level up, please hit me up.
- >>>> Developers feel misunderstood and undervalued.
Really?
Is this reflected in wages and hiring? I work for a company that makes a hardware product with mission-critical support software. The software team dwarfs the hardware team, and is paid quite well. Now they're exempt from "return to office."
I attended a meeting to move a project into development phase, and at one point the leader got up and said: "Now we've been talking about the hardware, but of course we all know that what's most important is the software."
- It might just be the companies I have worked for in the past 25 years, but engineers were virtually always the ones to make sense of whatever vague idea product and UX were trying to get across. It's not just code-monkey follow-the-mockup stuff. AI code tools don't really solve that.
- It's like developers are only now awakening to the reality that despite being paid well, they never were the capitalists.
- “AI: The Latest Chapter in a Long Story” - more like the current chapter. Curious about the next one!
- How much tech debt has AI paid off actually?
- Who remembers Model-Driven Architecture and code generation from UML?
Nothing can replace code, because code is design[1]. Low-code came about as a solution to the insane clickfest of no-code. And what is low-code? It’s code over a boilerplate-free appropriately-high level of abstraction.
This reminds me of the 1st chapter of the Clean Architecture book[2], pages 5 and 6, which shows a chart of engineering staff growing from tens to 1200 and yet the product line count (as a simple estimate of features) asymptotically stops growing, barely growing in lines of code from 300 staff to 1200 staff.
As companies grow and throw more staff at the problem, software architecture is often neglected, dramatically slowing development (due to massive overhead required to implement features).
Some companies decided that the answer is to optimize for hiring lots of junior engineers to write dumbed down code full of boilerplate (e.g. Go).
The hard part is staying on top of the technical (architectural and design) debt to make sure that feature development is efficient. That is the hard job and the true value of a software architect, not writing design documents.
[1] https://www.developerdotstar.com/mag/articles/reeves_origina... A timeless article from 1992, pre-UML, but references precursors like Booch and object diagrams, as well as CASE tools
[2] You can read it here in the Amazon sample chapter: https://read.amazon.com/sample/0134494164?clientId=share
- We succeeded each time. We replaced the 60s dev with a 70s dev with an 80s dev... Same title different job description.
I can see the 2030s dev doing more original research, with mundane tasks put to an LLM. Courses will cover manual coding, assembler, etc. for a good foundation. But that'll be like an Uber driver putting on a spare tire.
- To understand how business views developers, reread Tim Bryce's Theory P: The Philosophy of Managing Programmers (which is old enough to drink in the USA today): https://web.archive.org/web/20160407111718fw_/http://phmains...
Tim Bryce was kind of the anti Scott Adams: he felt that programmers were people of mediocre intelligence at best that thought they were so damn smart, when really if they were so smart, they'd move into management or business analysis where they could have a real impact, and not be content with the scutwork of translating business requirements into machine-executable code. As it is, they don't have the people skills or big-picture systems thinking to really pull it off, and that combined with their snobbery made them a burden to an organization unless they were effectively managed—such as with his methodology PRIDE, which you could buy direct from his web site.
Oddly enough, in a weird horseshoe-theory instance of convergent psychological evolution, Adams and Bryce both ended up Trump supporters.
Ultimately, however, "the Bryce was right": the true value in software development lies not in the lines of code but in articulating what needs to be automated and how it can benefit the business. The more precisely you nail this down, the more programming becomes a mechanical task. Your job as a developer is to deliver the most value to the customer with the least possible cost. (Even John Carmack agrees with this.) This requires thinking like a business, in terms of dollars and cents (and people), not bits and bytes. And as AI becomes a critical component of software development, business thinking will become more necessary and technical thinking, much less so. Programmers as a professional class will be drastically reduced or eliminated, and replaced with business analysts with some technical understanding but real strength on the business/people side, where the real value gets added. LLMs meaningfully allow people to issue commands to computers in people language, for the very first time. As they evolve they will be more capable of implementing business requirements expressed directly in business language, without an intermediator to translate those requirements into code (i.e., the programmer). This was always the goal, and it's within reach.
- Spreadsheets replaced developers for that kind of work, while simultaneously enabling multiple magnitudes more work of that type to be performed.
- What I’m seeing is that seniors need fewer juniors, not because seniors are being replaced, but because managers believe they can get the same output with fewer people. Agentic coding tools reinforce that belief by offloading the most time-consuming but low-complexity work. Tests, boilerplate, CRUD, glue code, migrations, and similar tasks. Work that isn’t conceptually hard, just expensive in hours.
So yes, the market shifts, but mostly at the junior end. Fewer entry-level hires, higher expectations for those who are hired, and more leverage given to experienced developers who can supervise, correct, and integrate what these tools produce.
What these systems cannot replace is senior judgment. You still need humans to make strategic decisions about architecture, business alignment, go or no-go calls, long-term maintenance costs, risk assessment, and deciding what not to build. That is not a coding problem. It is a systems, organizational, and economic problem.
Agentic coding is good at execution within a frame. Seniors are valuable because they define the frame, understand the implications, and are accountable for the outcome. Until these systems can reason about incentives, constraints, and second-order effects across technical and business domains, they are not replacing seniors. They are amplifying them.
The real change is not “AI replaces developers.” It is that the bar for being useful as a developer keeps moving up.
- Business quacks being forever bamboozled because it turns out implementation is the only thing that matters, and hacker culture has outlived every single promise to eradicate hacker culture.
- This is the best explanation of (my take on) this I've seen so far.
On top of the article's excellent breakdown of what is happening, I think it's important to note a couple of driving factors about why (I posit) it is happening:
- First, and this is touched upon in the OP but I think could be made more explicit, a lot of people who bemoan the existence of software development as a discipline see it as a morass of incidental complexity. This is significantly an instance of Chesterton's Fence. Yes, there certainly is incidental complexity in software development, or at least complexity that is incidental at the level of abstraction that most corporate software lives at. But as a discipline, we're pretty good at eliminating it when we find it, though it sometimes takes a while — but the speed with which we iterate means we eliminate it a lot faster than most other disciplines. A lot of the complexity that remains is actually irreducible, or at least we don't yet know how to reduce it.
A case in point: programming language syntax. To the outsider, the syntax of modern programming languages, where the commas go, whether whitespace means anything, how angle brackets are parsed, looks to the uninitiated like a jumble of arcane nonsense that must be memorized in order to start really solving problems, and indeed it's a real barrier to entry that non-developers, budding developers, and sometimes seasoned developers have to contend with. But it's also (a selection of competing frontiers of) the best language we have, after many generations of rationalistic and empirical refinement, for humans to unambiguously specify what they mean at the semantic level of software development as it stands! For a long time now we haven't been constrained in the domain of programming language syntax by the complexity or performance of parser implementations. Instead, modern programming languages tend toward simpler formal grammars because they make it easier for _humans_ to understand what's going on when reading the code.
AI tools promise to (amongst other things; don't come at me AI enthusiasts!) replace programming language syntax with natural language. But actually natural language is a terrible syntax for clearly and unambiguously conveying intent! If you want a more venerable example, just look at mathematical syntax, a language that has never been constrained by computer implementation but was developed by humans for humans to read and write their meaning in subtle domains efficiently and effectively. Mathematicians started with natural language and, through a long process of iteration, came to modern-day mathematical syntax. There's no push to replace mathematical syntax with natural language because, even though that would definitely make some parts of the mathematical process easier, we've discovered through hard experience that it makes the process as a whole much harder.
- Second, humans (as a gestalt, not necessarily as individuals) always operate at the maximum feasible level of complexity, because there are benefits to be extracted from the higher complexity levels and if we are operating below our maximum complexity budget we're leaving those benefits on the table. From time to time we really do manage to hop up the ladder of abstraction, at least as far as mainstream development goes. But the complexity budget we save by no longer needing to worry about the details we've abstracted over immediately gets reallocated to the upper abstraction levels, providing things like development velocity, correctness guarantees, or UX sophistication. This implies that the sum total of complexity involved in software development will always remain roughly constant.
This is of course a win, as we can produce more/better software (assuming we really have abstracted over those low-level details and they're not waiting for the right time to leak through into our nice clean abstraction layer and bite us…), but as a process it will never reduce the total amount of ‘software development’ work to be done, whatever kinds of complexity that may come to comprise. In fact, anecdotally it seems to be subject to some kind of Braess' paradox: the more software we build, the more our society runs on software, the higher the demand for software becomes.
If you think about it, this is actually quite a natural consequence of the ‘constant complexity budget’ idea. As we know, software is made of decisions (https://siderea.dreamwidth.org/1219758.html), and the more ‘manual’ labour we free up at the bottom of the stack the more we free up complexity budget to be spent on the high-level decisions at the top. But there's no cap on decision-making! If you ever find yourself with spare complexity budget left over after making all your decisions you can always use it to make decisions about how you make decisions, ad infinitum, and yesterday's high-level decisions become today's menial labour.
The only way out of that cycle is to develop intelligences (software, hardware, wetware…) that can not only reason better at a particular level of abstraction than humans but also climb the ladder faster than humanity as a whole — singularity, to use a slightly out-of-vogue term. If we as a species fall off the bottom of the complexity window then there will no longer be a productivity-driven incentive to ideate, though I rather look forward to a luxury-goods market of all-organic artisanal ideas :)
- The link doesn't work for me; I just get thrown to the main page after a second.
- The dumb part of this is: so who prompts the AI?
Well probably we'd want a person who really gets the AI, as they'll have a talent for prompting it well.
Meaning: knows how to talk to computers better than other people.
So a programmer then...
I think it's not that people are stupid. I think there's actually a glee behind the claims AI will put devs out of work - like they feel good about the idea of hurting them, rather than being driven by dispassionate logic.
Maybe it's the ancient jocks vs nerds thing.
- A few observations from the current tech + services market:
Service-led companies are doing relatively better right now. Lower costs, smaller teams, and a lot of “good enough” duct-tape solutions are shipping fast.
Fewer developers are needed to deliver the same output. Mature frameworks, cloud, and AI have quietly changed the baseline productivity.
And yet, these companies still struggle to hire and retain people. Not because talent doesn’t exist, but because they want people who are immediately useful, adaptable, and can operate in messy environments.
Retention is hard when work is rushed, ownership is limited, and growth paths are unclear. People leave as soon as they find slightly better clarity or stability.
On the economy: it doesn’t feel like a crash, more like a slow grind. Capital is cautious. Hiring is defensive. Every role needs justification.
In this environment, it’s a good time for “hackers” — not security hackers, but people who can glue systems together, work with constraints, ship fast, and move without perfect information.
Comfort-driven careers are struggling. Leverage-driven careers are compounding.
Curious to see how others are experiencing this shift.
- I think that programming as a job has already changed, because it is hard for most people to tell the difference between someone who actually has programming skills and experience and someone who has some technical ingenuity but has only ever used AI to program for them.
Now the expectation from some executives or high level managers is that managers and employees will create custom software for their own departments with minimal software development costs. They can do this using AI tools, often with minimal or no help from software engineers.
It's not quite the equivalent of having software developed entirely by software engineers, but it can be a significant step up from what you typically get from Excel.
I have a pretty radical view that the leading edge of this stuff has been moving much faster than most people realize:
2024: AI-enhanced workflows automating specific tasks
2025: manually designed/instructed tool calling agents completing complex tasks
2026: the AI Employee emerges -- robust memory, voice interface, multiple tasks, computer and browser use. They manage their own instructions, tools and context
2027: Autonomous AI Companies become viable. AI CEO creates and manages objectives and AI employees
Note that we have had the AI Employee and AI Organization for a while in different, somewhat weak forms. But in the next 18 months or so, as model and tooling abilities continue to improve, they will probably become viable for a growing number of business roles and businesses.
