Lloyd Taylor
April 7–16, 2026

It Didn't Suddenly Break

Six posts on accountability mechanisms, captured processes, and why the same findings keep appearing with different names.

Behavior is output. Debug the system.


Post 1 — Tuesday, April 7

The project failed. Everyone knows who’s responsible.

There was a review. There were findings. The person at the center of the findings is gone. The organization issued a statement about accountability, about learning, about ensuring this never happens again.

Six months later, it happens again. Different project. Different person at the center. Same findings.

You’ve been in that room. The post-mortem where everyone agrees on the cause — and the cause has a name and a title and is no longer employed there. The room feels productive. The conclusions feel clear. The action items get assigned.

And somewhere in the back of your mind, something doesn’t sit right. Not about the person — they probably did do what the report says they did. But about the speed of the conclusion. About how quickly the room exhaled once the name was on the table.

About the fact that you’ve seen this exact report before.

The standard explanation is that organizations are bad at learning. That institutional memory is short. That incentives reward speed over rigor. That people just want to move on.

These explanations aren’t wrong. They’re just not deep enough.

Here’s the question the post-mortem didn’t ask: what were the conditions that made that behavior not just possible, but nearly inevitable? Not what did this person do — but what did the system produce?

Because if the same situation produces the same behavior in person after person, year after year, the variable isn’t the people.

The names changed. The findings didn’t.

Behavior is output. The question is what generated it.


Post 2 — Wednesday, April 8

When something goes wrong in an organization, the first question is always: who?

This is not a failure of rigor. Finding the agent is what the brain does — it’s fast, it’s satisfying, and it’s usually partially right. There usually is a person whose actions preceded the failure. They usually did make choices. The report is usually accurate about what happened.

The problem isn’t that the who answer is wrong. The problem is that it’s incomplete in a specific, consequential way: it stops at the point that feels like completion, not at the point where the generative mechanism operates.

Think about what the who answer leaves intact. The production schedule that made corner-cutting the rational choice. The incentive structure that rewarded speed over rigor. The reporting relationship that made escalating a problem more costly than absorbing it. The accountability gap that made the behavior not just possible but the path of least resistance.

The who answer names the last person to touch the output. It leaves the machine running.

This isn’t an argument against accountability. The person who made the choice is still responsible for the choice they made. But accountability aimed at the wrong level doesn’t prevent the next failure — it just changes the name in the next report.

Every organization has a library of these reports. Go back and read the last three. Notice how similar the findings are. Notice how different the names are.

That’s not a learning problem.

It’s a design problem. And the design is working.


Post 3 — Thursday, April 9

Here’s what makes this harder than it looks.

The post-mortem process isn’t just failing to find the systemic cause. In many cases, it’s actively preventing the systemic cause from being found.

Not through conspiracy. Through structure.

Consider what the post-mortem is optimizing for. Speed of resolution — because the organization needs to move forward. Defensibility — because the findings will be reviewed by lawyers, boards, regulators. Clarity — because the conclusion needs to be communicable to stakeholders who weren’t in the room.

The individual cause satisfies all three. It’s fast — you’ve found the cause the moment you name the person. It’s defensible — you acted, you found accountability, you made changes. It’s clear — here is what happened, here is who was responsible, here is what we did about it.

The systemic cause satisfies none of them. It’s slow — you have to trace the mechanism back through decisions made years ago by people who’ve since left. It’s uncomfortable — some of those decisions were made by people still in the room, or their predecessors, or the board. It’s complex — the conclusion requires the audience to hold multiple simultaneous truths, and audiences under pressure don’t want simultaneous truths.

So the process doesn’t fail to find the systemic cause by accident. It’s designed — not deliberately, but structurally — to produce the individual cause. The conclusion the process reaches isn’t the conclusion of the investigation. It’s the conclusion the process was built to reach.

This is not a statement about bad faith. Most of the people in the room are genuinely trying to understand what happened. They’re operating inside a process that shapes what questions get asked, what evidence gets weighted, and when the room is allowed to reach consensus.

The process is the problem. Which means fixing the process requires asking who the process was built to protect.

Which raises the question: are you working inside a captured accountability process?


Post 4 — Tuesday, April 14

Accountability mechanisms get captured.

Not always through deliberate corruption. Often through something more ordinary — the slow accumulation of decisions about who runs the process, what counts as evidence, where the investigation is permitted to look, and when it’s permitted to stop.

By the time the mechanism fires, it’s already been aimed.

Picture a regulatory body that investigates failures in the industry it also depends on for expertise, funding, and its own future employment. The investigation is real. The investigators are largely competent and largely well-intentioned. But the process has been shaped — through hiring, through the definition of scope, through decades of ordinary institutional decisions — to reach conclusions that the regulated industry can absorb without structural change.

Picture an internal audit function that reports to the executive whose division it audits. The auditors find what they find. But what gets escalated, how it’s framed, and what counts as resolved has been shaped by a reporting relationship established long before the current crisis.

In each case, no one woke up deciding to build a captured mechanism. The capture happened incrementally — through decisions that each seemed reasonable at the time. The mechanism looked like accountability. It was aimed away from the thing it appeared to be aimed at.

Here is how to check your own process. Three questions.

Does the investigation report to the function being investigated? If the accountability process is owned, staffed, or scoped by the institution it reviews, the conclusions are constrained before the first interview.

Do the findings consistently stop at the individual level, regardless of the pattern? If three consecutive post-mortems reached the same structural conclusion — individual error — without once examining what produced the error, the process is selecting for that conclusion.

Who decided what was in bounds — and did they have a stake in keeping certain things out? The definition of scope is where capture most often hides. The people who draw the boundary are rarely neutral about where it falls.

If you answered yes to two or more of those questions, you are not in a broken accountability process. You are in a working one — working exactly as it was designed.

Now the question is what to do about it.


Post 5 — Wednesday, April 15

So what do you actually do with this?

You’re not the general counsel. You’re not the board. You’re a strategist, an analyst, an operator — someone who has to work inside the process while seeing its limitations clearly. You can’t redesign the accountability mechanism by yourself, and “the process is captured” is not a presentation you’re making to the executive committee.

Here’s what you can do.

Read the pattern, not the case. When a post-mortem concludes, read the findings — and then read the last three post-mortems from comparable failures. If the systemic findings are absent from all of them, that’s not a coincidence and it’s not a learning problem. The process is producing individual causes because that’s what it was built to produce. Knowing this doesn’t give you the power to change the process, but it tells you where to focus your own attention.

Ask the question the process didn’t. After the finding, after the person is gone, after the action items are assigned — sit with the question the post-mortem left unanswered: what were the conditions that made this behavior not just possible, but predictable? You may not be able to ask it publicly. You can always ask it privately, do the analysis, and hold the answer.

Notice when the room reaches consensus. The moment everyone agrees on the individual cause is diagnostic. If you’re in that room and you feel it, that’s the signal to keep looking, not to stop.

Understand your leverage. You can see the mechanism. You can name it to yourself and the people you trust. You can build a map of where the accountability process is and isn’t looking. In most organizations, that map is more valuable than the post-mortem itself.

What you can’t do — from inside the process, without standing and authority — is redirect a captured mechanism toward the target it was designed to avoid.

That constraint isn’t a reason to stop looking. It’s a reason to be precise about what you’re looking at — and why the organization keeps telling you the answer is the person, when the evidence keeps pointing somewhere else.


Post 6 — Thursday, April 16

Here is what the pattern has been showing you.

The behavior was real. The person made the choices the report says they made. The finding is accurate.

And the same finding will appear in the next report, with a different name, because the conditions that made those choices available, rational, and nearly inevitable are still intact.

The post-mortem found the output. It didn’t find the generator.

This is not a failure of investigation. It’s a failure of level. The investigation asked: who produced this output? It should have asked: what kind of system reliably produces this output? The first question has a name as its answer. The second has a mechanism — and mechanisms can be changed.

The diagnosis is not that your organization has bad people. It’s that your organization has a system that produces predictable outputs and has built accountability processes that protect the system from that diagnosis.

Seeing this clearly doesn’t make you cynical. It makes you useful. The analyst who can read the distribution instead of the case, who can ask what the system is producing rather than who produced it, who can hold the individual cause and the structural cause simultaneously without letting either one dissolve the other — that analyst is doing something most post-mortem processes are specifically designed not to do.

Name the mechanism. Trace it to where it operates. Stop when going deeper wouldn’t change what you’d build, change, or aim at.

Behavior is output. Debug the system.