I’m putting this on the record now so I can look back in five years and see if I was a visionary or an idiot.

I’ve turned the corner from “doomer” to “doer,” and it’s freeing. The projects that used to be too big are now achievable. But to get there, we have to burn down our current understanding of how software works.

There’s an implicit thread running through everything I’m about to say: speed is the new advantage, and anything that slows you down is a liability. Security has to match the pace. Your platform has to handle the velocity. Legacy debt is a speed tax. Your tooling is a bottleneck. If you can’t move fast, you’re already dead.


The “Model-First” Vision

In a Model-First company, code is no longer a static asset. It’s a disposable medium. We are moving towards describing behaviours rather than implementing functionality.

The Evolution of Security

When most people hear “security,” they think friction. Slower releases, more gates, another team saying no. But in a model-first world, security doesn’t slow you down; it runs alongside you. The backpressure is instant, automated, and resolved before you even notice it.

Agent pushing code through a security backpressure loop

Traditional security is built on a database of “known bads.” But when models generate equivalent functionality in-house instead of pulling in dependencies, open-source usage shrinks. Vulnerability scanners that match against a known-vulnerability database stop making sense, because your custom code was never in that database to begin with.

Traditional SAST scanners still have a place. They catch known patterns and low-hanging fruit. But they’re not enough anymore. The bigger problem is pace. When your codebase can be substantially rewritten in days, point-in-time security audits are meaningless. By the time a quarterly pen test reports back, the code it tested might not even exist anymore.

More advanced AST-based tooling will emerge too. Abstract syntax tree analysis used to be expensive to build and maintain, but now agents can generate custom analysers on the fly, specific to your architecture. Not generic rules, but tooling that actually understands the structure of your code.
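To make that concrete, here is a minimal sketch of the kind of bespoke analyser an agent could generate, built on Python’s standard `ast` module. The rule it enforces (flagging queries assembled by string concatenation or f-strings before being passed to an `execute` call) is an illustrative house rule I made up, not a generic scanner signature.

```python
# A hypothetical codebase-specific analyser: flag `execute(...)` calls
# whose argument is a string built via `+` or an f-string.
import ast

def find_unsafe_execute_calls(source):
    """Return line numbers where `execute(...)` receives a built-up string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"):
            continue
        for arg in node.args:
            # An f-string (JoinedStr) or `+` concatenation means the
            # query was assembled from parts rather than parameterised.
            if isinstance(arg, ast.JoinedStr) or (
                    isinstance(arg, ast.BinOp) and isinstance(arg.op, ast.Add)):
                findings.append(node.lineno)
    return findings

sample = (
    "def handler(db, user_id):\n"
    "    db.execute('SELECT * FROM users WHERE id = ' + user_id)\n"
    "    db.execute('SELECT * FROM users WHERE id = ?', (user_id,))\n"
)
assert find_unsafe_execute_calls(sample) == [2]  # only the concatenated query
```

The point isn’t this particular rule; it’s that rules like this are now cheap enough to generate per-architecture instead of buying generic ones.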

But the real evolution is code-level tests written by an agent that actually understands your codebase. Security becomes a gate in the pipeline, not a comment on a pull request. The agent pushes code, it hits the security gate, backpressure happens, it iterates, and it either clears or it doesn’t.
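The shape of that loop can be sketched in a few lines. Everything here is a hypothetical stand-in: `generate_patch` represents the coding agent and `security_gate` the reviewing agent (in a real harness these would be calls to two different models, as argued below), and the findings feed back as backpressure on the next iteration.

```python
# A minimal sketch of the security backpressure loop, assuming two
# hypothetical agents: one generating code, one gating it.

def generate_patch(task, feedback):
    """Stand-in for the coding agent: returns a candidate change."""
    # Simulate an agent that fixes the issue once it's been told about it.
    if any("sql injection" in f for f in feedback):
        return {"task": task, "code": "query(sql, params)"}
    return {"task": task, "code": "query(sql + user_input)"}

def security_gate(patch):
    """Stand-in for the adversarial security agent: returns findings."""
    findings = []
    if "sql + user_input" in patch["code"]:
        findings.append("sql injection: unparameterised query")
    return findings

def backpressure_loop(task, max_iterations=5):
    """Iterate until the gate clears or a human gets paged."""
    feedback = []
    for attempt in range(1, max_iterations + 1):
        patch = generate_patch(task, feedback)
        findings = security_gate(patch)
        if not findings:
            return patch, attempt          # cleared the gate
        feedback.extend(findings)          # backpressure feeds the next try
    raise RuntimeError("security gate never cleared; escalate to a human")

patch, attempts = backpressure_loop("add search endpoint")
```

In this toy run the first attempt is rejected and the second clears; the interesting property is that the findings never reach a human unless the loop fails to converge.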

The agent doesn’t just identify a vulnerability. It understands your custom code, writes an exploit test proving it, submits a fix, and that test becomes part of your regression suite. Every pen test becomes a white box test. Find a vulnerability, write a test (unit, integration, end-to-end), validate the fix, and it becomes a permanent regression test.
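Here is what that pattern might look like for a hypothetical path-traversal bug in a file-serving helper: the exploit test fails against the vulnerable version, passes after the fix, and then stays in the suite forever. Both resolver functions and the payload are illustrative.

```python
# Exploit-test-becomes-regression-test, sketched for a made-up
# file-serving helper with a path traversal bug.
import os

ALLOWED_ROOT = "/srv/app/static"

def resolve_path_vulnerable(requested):
    # The bug the agent found: naive join allows "../" traversal.
    return os.path.normpath(os.path.join(ALLOWED_ROOT, requested))

def resolve_path_fixed(requested):
    # The fix the agent submitted: reject anything outside the root.
    resolved = os.path.normpath(os.path.join(ALLOWED_ROOT, requested))
    if not resolved.startswith(ALLOWED_ROOT + os.sep):
        raise PermissionError(f"path escapes {ALLOWED_ROOT}: {requested}")
    return resolved

def exploit_escapes_root(resolver):
    """The exploit test: does a traversal payload escape the root?"""
    try:
        resolved = resolver("../../etc/passwd")
    except PermissionError:
        return False                       # the fix holds
    return not resolved.startswith(ALLOWED_ROOT + os.sep)

assert exploit_escapes_root(resolve_path_vulnerable)       # proves the bug
assert not exploit_escapes_root(resolve_path_fixed)        # validates the fix
```

Once checked in, `exploit_escapes_root` is exactly the white-box regression test described above: any future rewrite that reintroduces the traversal fails the suite immediately.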

Here’s what makes this fundamentally different from anything we’ve had before: the security agent can hold the entire system context. Right now, a security engineer reviewing code sees one file, maybe a few related files. They can’t hold the entire system in their head. The data flows, the auth boundaries, the infrastructure topology. An agent can. It can trace a user input from the API boundary through every service, every database query, every response, and identify cross-service vulnerabilities that no human reviewer would catch because no human can hold that much context at once.

It can also pull in distributed traces and APM data to understand how services actually interact at runtime, not just how the code says they should. Static analysis tells you what the code could do. Tracing tells you what it actually does. An agent with both catches things neither approach could alone.

All of it (scanner results, AST analysis, test failures, security checks) forms backpressure to the model. The model pushes code, the security layer pushes back. And critically, the security agent should be a different model. You don’t want the same model reviewing its own work. That separation is adversarial by design.

All of this happens before anything touches production. The agent can iterate freely in the loop, but nothing deploys until the backpressure clears. Production is sacred.

This is what DevSecOps was always supposed to be. Security that actually moves at the speed of development, not a scanner bolted onto a CI pipeline.

SRE: The Engine of Speed

If an agent can build a feature in ten minutes, “features” are a commodity. Your moat is Operational Excellence.

You need elite SRE to survive. If you have it, you can take off like a rocket. If you don’t, you are going to be bogged down in high-impact failures while your competitors ship circles around you. Your only limitations become humans (if your company isn’t ready for fully autonomous systems) and the size of your wallet.

Good system design is a hard requirement for operational excellence. If your platform architecture is brittle, it becomes the bottleneck. You can’t just throw “more AI” at a bad platform. The platform has to be designed to handle the velocity of an agentic loop without buckling.

Everything SREs have been preaching for years: canary deployments, feature flags, automated rollback, real observability. These aren’t best practices anymore. They’re mandatory. Without them, agentic velocity is just agentic chaos.

Everything must be codified. Infrastructure as code, policy as code. You need validations in place to let the AI go wild. Think of it as a cage: the agent runs free inside defined boundaries. For example, an agent can spin up infrastructure, deploy, and iterate freely, but any change that crosses a cost or blast-radius threshold requires human approval before it touches production. That’s the principle: freedom within guardrails.
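A minimal sketch of that cage, with made-up thresholds and a made-up action shape (this is the principle, not a real policy engine): every proposed change is evaluated against codified limits, and anything over the line is routed to a human instead of auto-approved.

```python
# Policy-as-code guardrails, sketched with illustrative thresholds.
MAX_AUTO_COST_PER_MONTH = 500    # dollars: above this, a human signs off
MAX_AUTO_BLAST_RADIUS = 0.05     # fraction of traffic an action may touch

def evaluate_action(action):
    """Return 'auto-approve' or 'needs-human' for a proposed change."""
    if action["monthly_cost"] > MAX_AUTO_COST_PER_MONTH:
        return "needs-human"
    if action["blast_radius"] > MAX_AUTO_BLAST_RADIUS:
        return "needs-human"
    return "auto-approve"

# The agent iterates freely inside the cage...
assert evaluate_action({"monthly_cost": 40, "blast_radius": 0.01}) == "auto-approve"
# ...but a change that crosses a threshold stops at the boundary.
assert evaluate_action({"monthly_cost": 9000, "blast_radius": 0.01}) == "needs-human"
```

In practice this lives in a policy engine rather than application code, but the contract is the same: the boundaries are declared, versioned, and enforced before anything touches production.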

Will we even need staging? If your production guardrails are strong enough, you canary to a small percentage, observe, and roll back automatically if anything breaks. Staging is just production with fake data and false confidence.
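That canary-instead-of-staging idea can be sketched as a loop over traffic steps with an error budget. The steps, the budget, and the `error_rate` signal are all illustrative; in a real system `error_rate` would be a query against your observability stack.

```python
# Progressive rollout with automated rollback, using made-up numbers.
CANARY_STEPS = [0.01, 0.05, 0.25, 1.0]   # fraction of traffic per step
ERROR_BUDGET = 0.02                       # abort if error rate exceeds 2%

def canary_rollout(error_rate):
    """Advance through traffic steps; roll back on any budget breach."""
    for step in CANARY_STEPS:
        if error_rate(step) > ERROR_BUDGET:
            return {"status": "rolled-back", "failed_at": step}
    return {"status": "promoted", "failed_at": None}

# A healthy release clears every step...
assert canary_rollout(lambda step: 0.001)["status"] == "promoted"
# ...a bad one is caught at 1% of traffic, long before full rollout.
assert canary_rollout(lambda step: 0.10)["failed_at"] == 0.01
```

The bad release above never touched more than 1% of users, which is exactly the confidence staging pretends to give you.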

Hyper-Personalisation & The Support Loop

We’re heading towards a world of agent loops that never sleep.

Agents will live in your APM, detecting errors in real time, reading customer support tickets, scanning social media, validating that there is a real bug and automatically fixing it. The same agent can reach out to affected customers, let them know the issue is being worked on, notify them when the fix is released, and even offer a concession. The entire loop from detection to resolution to customer communication happens without a human touching it.
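The shape of that loop, reduced to a sketch. Every stage is a hypothetical stand-in for a real APM, ticketing, or messaging integration; the point is the pipeline, not any particular API.

```python
# Detection-to-resolution support loop, with stand-in stages.
def support_loop(signals):
    """Run each signal through validate -> fix -> notify, no human in the loop."""
    timeline = []
    for signal in signals:
        # Validate: is this a real, reproducible bug?
        if not signal["reproducible"]:
            timeline.append((signal["id"], "closed: not a real bug"))
            continue
        # Fix: the agentic loop ships a patch (elided here).
        timeline.append((signal["id"], "fix shipped"))
        # Notify: affected customers hear about it automatically.
        for customer in signal["affected_customers"]:
            timeline.append((signal["id"], f"notified {customer}"))
    return timeline

timeline = support_loop([
    {"id": "APM-1", "reproducible": True, "affected_customers": ["acme"]},
    {"id": "TWEET-7", "reproducible": False, "affected_customers": []},
])
```

The reproducibility check matters: a loop that auto-fixes unvalidated reports from social media is a different, much scarier system.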

Software will be Hyper-Personalised. If a tool doesn’t fit your workflow perfectly, you don’t compromise anymore; you (or your agent) just build one that does. This means customer reliability expectations will skyrocket. If a bug can be fixed in minutes, users won’t tolerate “we’ll fix it next quarter.”

And this isn’t optional. Your competitors’ products will be hyper-personalised. Your customers will experience that level of tailoring elsewhere and expect it from you. The market will dictate that you must do this. It’s not a differentiator, it’s table stakes.

You’re also not in full control of the UX anymore. Customers will want to use your product the way they want, not the way you designed it. Some will skip your UI entirely. Just give them the API and they’ll have their agent build an interface that fits their workflow. Your product is the data and the API, not the pixels on the screen.


The Legacy Reality (The “Refactor or Die” Mandate)

Technical debt iceberg

Most companies are currently sitting on an iceberg of technical debt that makes them “AI-Proof” in the worst way possible.

The Refactor Tax

Here’s the impossible choice: you can stop shipping and refactor your foundation, or you can keep building on rot and die slower. Neither option is safe.

A single person with a clear vision and an agentic harness can out-engineer a team bogged down by legacy spaghetti and meetings. That’s not a future prediction. It’s happening now. In a model-first world, a large, slow team isn’t an asset, it’s a massive liability. That’s why the refactor isn’t optional. You cannot sprinkle AI on top of a mess.

The refactor takes however long it takes, and every day of it is a day your competitors are shipping. If they ran a tighter ship or addressed their debt earlier, they are already at the starting line while you’re still in the locker room. The uncertainty is the killer. Not the work itself, but not knowing if you’ll finish before the market moves past you.

This is the moment where the relationship between engineering leadership, the board, and your investors gets tested. Your investors bet on growth, not a rewrite. You’re not asking for a roadmap delay. You’re asking for permission to look like you’re going backwards. Value your current customer base, forget about hyper-growth, and make the case that the alternative is extinction.

I don’t have a clean answer for this. Some companies won’t survive the transition, and that’s not a failure of AI adoption. It’s the ultimate margin call on years of accumulated debt.

The Headcount Reckoning

We will continue to see software companies laying off engineers. They’ll keep the best, the architects and system thinkers, long enough to extract their domain knowledge and codify it into the harness. The tribal knowledge gets captured, the system design gets documented, the institutional context gets encoded into the agentic loop.

And then the same logic applies to them too. Once the knowledge is in the harness, headcount becomes a cost centre, not a capability. The people who survive longest are the ones who can do what the models can’t. Hold relationships with customers, make judgment calls under ambiguity, and understand the business well enough to point the agents in the right direction. But even that window is narrowing.

The Trust Economy: Brand vs. Commodity

If software is a commodity that anyone can generate in a weekend, why does your company exist?

SaaS will be bought purely as risk mitigation, not for technical reasons. You aren’t paying for the “features.” You’re paying for the Trust and the Brand.

When code is cheap, brand recognition is everything. Customers will pay for the entity they trust to handle their data and take the legal liability. Database and data access auditing needs to be robust, because trust is the only thing you can’t generate with a prompt.

But brand alone won’t save you. Hyper-personalisation kills stickiness. When customers can recreate their entire customised setup on a competing product in minutes, switching costs collapse. If you’re operating in the consumer space, you must be better than your competitors, constantly. The barrier to building a competitor is near zero now, and there’s no coasting.

The Compliance Moat (And Its Expiry Date)

The one real friction that remains is regulatory compliance. SOC 2, HIPAA, FedRAMP, PCI-DSS. These take time and money regardless of how fast your agents can write code. A solo founder can out-engineer you but they can’t fast-track a regulatory audit. In regulated industries, the vendor that’s already cleared the compliance bar has a genuine barrier to entry. But it’s also a temporary shield. Companies hiding behind compliance instead of actually being better are just delaying the same reckoning. Eventually a competitor will clear that bar and be better.


The End of the Developer Workstation

I’ve stopped caring about local specs. I’m using my phone for most of my personal projects, SSH-ing into my desktop. Why would I need anything more?

Say goodbye to your fully spec’d MacBook Pro. Say hello to your Chromebook, or your MacBook Neo. When the environment is perfectly codified and the compute is in the cloud, a thin client is all you need.

Every company will have a codified harness connected to remote compute. Your “dev environment” is a declared state, not a laptop config. Onboarding isn’t “spend two days setting up your machine.” It’s “here are your credentials, the harness is ready.”

The IDE itself will be reimagined. When agents are doing most of the writing, a text editor isn’t the centre of your workflow anymore. The developer’s interface becomes a command centre. Agent activity, project status, business context. You’re orchestrating, not typing.

Why run the system locally at all? Your agents are running in the cloud. Your tests are in the cloud. Your infrastructure is in the cloud. The laptop is just a viewport. Local execution is a crutch from an era when cloud latency and tooling weren’t good enough. That era is ending.

Everyone’s talking about GPU shortages, but your agent build machines need just as much investment. The agent can write code in seconds. If it’s waiting 10 minutes for a compile or a test suite, you’ve bottlenecked all that speed on infrastructure you forgot to invest in. Fast CPU cores, high IOPS storage, and enough memory to run multiple agents concurrently. The investment shifts from developer laptops to agent infrastructure.

We’ve come full circle. Mainframes, personal computers, cloud, and now we’re back to timesharing on powerful remote machines. The whole personal computing revolution was about giving everyone their own box. Now we’re giving the machines back to the data centre and sitting at thin clients again.

The “developer” of the future is an orchestrator, not a keystroke jockey fighting with local dependencies.


Next up, I’m going to dig into why Sprint Planning and Story Points are dead, and why we’re moving towards a “Super Sprint” that never stops.