Artificial Intelligence (AI) · April 2026

AI Doesn't Fix Your Culture. It Amplifies It.

A transformation in motion — systems, teams, and tools evolving under pressure

The first thing I noticed when AI coding tools spread through a team wasn't the velocity gain. It was the bifurcation. Engineers who already wrote tests, gave substantive PR comments, and raised concerns in retros got measurably faster. Engineers who had been quietly shipping half-finished features and hoping review wouldn't catch it started shipping more of them. AI didn't change what those teams were. It turned up the volume.

Most conversations about AI in engineering fixate on one question: will AI replace engineers? I think that's the wrong question — and more importantly, it's a distraction from the uncomfortable one. AI is not replacing engineering organisations. It is exposing them. The weaknesses were already there: requirements unclear, ownership fragmented, delivery pipelines noisy, testing partial, monitoring reactive. For years, those weaknesses were compensated for by sheer human effort — senior people who stayed late, who served as human routers for decisions, who manually caught production issues before anyone else noticed. AI removes some of that compensating friction. Which means the structural problems surface faster now, and at scale.

The Friction That Was Load-Bearing

Every mature engineering team has friction built into it. Some of that friction is bureaucratic — approval chains that outlived their purpose, ceremonies that became ritual. But some of it is structural: the load-bearing kind.

Code review is the obvious example. A pull request that takes two days to merge isn't always a bottleneck. Sometimes it's a knowledge transfer mechanism. The junior engineer learns how state gets managed in this codebase. The senior engineer discovers they hadn't documented the edge case the junior was about to trip over. The friction is the conversation.

AI tools reduce the cost of generating code. They don't, by design, replace the conversation about whether that code is right. In teams where review culture was already strong, AI accelerates the loop — you get to the conversation faster. In teams where review was perfunctory, AI gives the rubber-stamp a wider surface to stamp.

The friction was load-bearing. Remove it without replacement and the structure shifts.

Bad Culture at Speed

I've seen what this looks like on the other end. An on-call engineer at 2 AM opens the offending service. The file that caused the incident is 700 lines — generated, stitched together from a few prompts, merged after a one-comment "LGTM" review, never touched again. There's no author who understands it end to end. There's no test that would have caught the edge case. The post-mortem is painfully familiar: ownership was diffuse, the review was a formality, and nobody asked the question that would have caught it.

The only thing AI changed in that story is the speed at which the problem was created and the scale at which it shipped. Bad culture had always produced code like that. Now it produces it faster.

In organisations where silence was already the dominant pattern — where engineers didn't feel safe raising concerns, where leads didn't actually read the diffs — AI-generated code doesn't get reviewed any more carefully than human-written code. If your architecture is inconsistent, AI generates inconsistent implementations at scale. If your standards are weak, AI multiplies weak patterns. If your documentation is outdated, AI-assisted development amplifies outdated assumptions. The generator changes. The culture doesn't.

The Velocity Illusion

There is a trap that well-intentioned organisations fall into, and I've watched it play out in enterprise settings where pressure to "adopt AI" arrives from above before the ground conditions are ready.

In the short term, velocity metrics improve. More pull requests. More features shipped. More prototypes in front of stakeholders. Leadership points to the throughput numbers as evidence of transformation. Everything looks like progress.

And then production catches up with you.

In regulated industries — banking, insurance, payments — the bottleneck was never typing speed. It was decision clarity. Architecture quality. Operational discipline. Testing completeness. Change management accountability. AI can generate code in seconds. It cannot resolve organisational confusion. It cannot decide which architecture tradeoff matters most for your compliance boundary. It cannot fix approval layers designed around distrust, or compensate for ownership fragmented across four teams who each believe the other owns the deployment pipeline.

The irony is that AI may initially make weak organisations look more productive. More commits. More merges. More experiments. But over time, production systems expose organisational truth. Incidents expose weak ownership. Migrations expose poor architecture. Scaling exposes operational gaps. Audits expose governance weaknesses. The velocity metric goes up while the structural debt accumulates below it, faster than before.

Good Culture at Speed

The contrast is just as stark. In a team where psychological safety is already high and ownership is clear, AI tooling has a different character entirely.

The generated code still gets read. The PR still gets commented. Not because there's a process mandating it, but because the team has internalised that the review is where the knowledge lives — that "it compiles and the tests pass" was never the bar. Engineers treat AI output the same way they'd treat a first draft from a capable but context-blind colleague. Sometimes with more scrutiny, because they know the model doesn't understand the domain constraints the way a colleague would.

I've watched a senior engineer pair with an AI tool and produce in a morning what would have taken a full day. Then spend an hour in review — not because the output was poor, but because the quality culture demanded it. That's the multiplier effect when the foundation is already there.

The Diagnostic You Should Run First

Before your team adopts any AI coding tool — or if they already have — here's the audit I'd run.

Pull the last thirty merged PRs. Don't look at the code. Look at the conversations. Are comments substantive or ceremonial? Are tests added with features, or added after QA catches something three sprints later? Are there PRs where a junior engineer got taught something, or are the approvals just timestamps with no exchange underneath them?
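
If your repos live on GitHub, the pull itself is scriptable. Here's a minimal sketch against the GitHub REST API, assuming a read-scoped token in a GH_TOKEN environment variable; the repo slug is a placeholder, and the forty-character "substantive" threshold is a deliberately crude stand-in for judgment.

    import os
    import requests

    REPO = "your-org/your-repo"  # placeholder: point this at your own repo
    API = f"https://api.github.com/repos/{REPO}"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def substantive(body):
        """Crude heuristic: longer than a rubber stamp, not a bare approval."""
        text = body.strip().lower()
        return len(text) > 40 and text not in {"lgtm", "+1", "approved", "looks good"}

    # List closed PRs (newest-created first) and keep the last 30 that actually merged.
    closed = requests.get(f"{API}/pulls", headers=HEADERS,
                          params={"state": "closed", "per_page": 100}).json()
    merged = [pr for pr in closed if pr.get("merged_at")][:30]

    rubber_stamps = 0
    for pr in merged:
        n = pr["number"]
        # Inline comments on the diff, plus top-level review bodies.
        comments = requests.get(f"{API}/pulls/{n}/comments", headers=HEADERS).json()
        reviews = requests.get(f"{API}/pulls/{n}/reviews", headers=HEADERS).json()
        bodies = [c["body"] for c in comments] + [r["body"] for r in reviews if r.get("body")]
        rich = sum(1 for b in bodies if substantive(b))
        if rich == 0:
            rubber_stamps += 1
        print(f"#{n}: {len(bodies)} comments, {rich} substantive")

    print(f"{rubber_stamps} of {len(merged)} PRs merged with no substantive conversation")

Whatever the numbers say, open a handful of the flagged PRs and read the threads yourself; the script only tells you where to look.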

That data tells you what AI will amplify. Rich conversations and clear ownership: AI makes your team faster without making them worse. Thin conversations and diffuse ownership: AI accelerates the rate at which technical debt accumulates and incidents occur. Not because the technology is flawed — because that's what amplification means.

What Actually Becomes More Valuable

The common anxiety — that AI diminishes the value of engineers — gets the direction exactly backwards.

What AI does is shift the differentiator. For years, the scarce resource was the engineer who could write correct code quickly. That resource is now less scarce. The resource that becomes genuinely scarce is something harder to develop: the engineer who can design systems, structure organisations, build guardrails, and architect feedback loops that consistently produce quality outcomes — not just code.

The organisations that get the most from AI won't be the ones who bought the most tools. They'll be the ones who were disciplined enough to redesign how work happens before they gave their teams a velocity multiplier. Because AI works best where standards are clear, architecture is intentional, testing is automated, observability is mature, and accountability is visible. The tool doesn't create those conditions. It reveals whether they exist.

Technology has always amplified organisational behaviour. Cloud amplified infrastructure discipline. Microservices amplified architecture maturity. DevOps amplified operational readiness. AI is no different. It will not transform weak engineering cultures into strong ones. But it will make the gap between them impossible to ignore.

AI amplifies what's already there — discipline and disorder equally. It doesn't choose. You already did.

— Komang