There’s been a lot of noise lately about how GenAI and Copilots are going to replace a good chunk of the software industry, including software architects. According to some, we’re just a few months away from feeding a prompt to an LLM and getting back a fully-functioning, well-architected, enterprise-grade system.
Well, I don't think that's the case.
Yes, GenAI Can Write the Kind of Code I Would Not Like to Write
Let’s start by acknowledging that GenAI is impressive. It can write boilerplate, suggest API calls, generate UI scaffolding, and even help refactor (to some extent) legacy code. If your job is building the ten-millionth restaurant website, sure, a Copilot might do a decent chunk of the work. After all, reproducing common patterns is what GenAI excels at.
But that’s not the kind of work most software architects are doing.
Our job isn’t about spitting out code that looks like something on GitHub. It’s about solving new problems, in new contexts, for specific people and teams, with specific constraints. It’s about designing systems that make sense over time, across teams, silos, and shifting business goals. That’s a fundamentally different task.
Architecture Is About Trade-Offs — And Trade-Offs Are Human
A lot of what architects do isn’t written down in code. It’s not something you’ll find in the training data of an LLM. It’s conversations with product managers about roadmaps, debates with security teams over compliance constraints, hard choices about maintainability, operability, and team capability. It’s designing for the messy, human, long-term reality of software.
GenAI doesn’t:
- Align tech choices with business strategy.
- Consider hiring markets or team learning curves.
- Optimize for maintenance in a specific organization.
- Navigate regulatory constraints or deployment infrastructure.
- Handle inter-team politics, knowledge silos, or legacy entropy.
These are things that experienced architects think about constantly, often without realizing it; they are part of the implicit craft we build up over years. In other words, software architects do not solve well-defined problems with all the requirements clearly laid out. They design tailor-made solutions after hunting down the requirements themselves, or sensing them from experience.
Moreover, even with an AI far more capable than today's, you would still, at the very least, have to express these constraints to it so it could take them into account. That means being able to list them and state them clearly. So no, you cannot simply type “please AI, make me a nice backend service for X that works perfectly for my case” and get back an answer that solves your problem, because you never defined your problem in the first place, with all its constraints and all the necessary context. At best you would get an implementation of the requested service that sort of works for a typical organization, not something designed for your specific needs.
And trade-offs mean decisions, and someone has to bear the responsibility for those decisions. You may remember this sentence:
“Managers make decisions and are held accountable for them. A machine cannot be held accountable, therefore it cannot be a manager.”
By the same logic, I think a machine cannot be a Software Architect.
What About the Training Data?
There’s also the elephant in the room: training data. Most LLMs are trained on public code — open source projects, Stack Overflow answers, hobby repositories. That’s not representative of real enterprise systems. It’s not representative of financial software bound by regulation, or telecom infrastructure with 30 years of legacy, or internal services built on half-forgotten mainframes.
Or, to put it more bluntly: LLMs are about stealing the work of others, but the most valuable work is not so easy to steal, because it is protected. As a result, LLMs have only a vague idea of what such work looks like.
In other words: GenAI hasn’t seen the kind of software most architects work on. It hasn’t lived it. And it certainly hasn’t designed for it.
Beware the Hype (And the Hype Merchants)
Let’s also talk about the tone of the conversation. A lot of what we hear isn’t analysis — it’s marketing. There’s an entire ecosystem of people selling fear, or selling hype, or selling “vision” because it benefits their position, their product, or their funding round.
That’s fine. But we shouldn’t confuse that with reality.
And by “that’s fine” I mean: it is not fine. I am fed up with people who have no hesitation saying whatever it takes to make a few extra bucks. And I am just as fed up with people who have no idea what they are talking about but feel compelled to say it anyway, agitating because they see others agitating.
There’s a difference between “AI can help” (true) and “AI will replace architects” (dubious at best). One is based on observed capabilities; the other is speculation wrapped in a sales pitch, driven by the need to pay the rent or buy the next Mercedes.
So, Should We Be Worried?
In the short term? Not really.
GenAI will take away some of the repetitive or tedious parts of our work. I do not know about you, but I am not complaining about that. If it means we spend less time on glue code or service boilerplate and more time thinking deeply about the systems we build, that’s a good thing. If, in exchange, we lose our ability to think, then I do not think we are getting a bargain.
But the core responsibilities of an architect, in my view, are: understanding context, navigating complexity, making trade-offs, aligning people and technology, uncovering unexpressed requirements, not ruffling the feathers of key people, and drawing on their vast experience. And their judgement. And their motivated opinion. Those things aren’t going anywhere. Not yet, and probably never.
So, yes, let’s keep an eye on AI, but let’s stop losing sleep over it or being distracted by it, because we have work to do. We should keep designing systems that make sense in the real world, because someone still needs to, and right now we are the only ones who can.