When Preparedness Looks Right but Fails Under Pressure

Most organizations have more than enough artifacts.

Resilience plans. BCDR (business continuity and disaster recovery) runbooks. Incident response procedures. Control libraries. Playbooks for scenarios that may or may not ever occur.

On paper, it looks like preparedness.

And to be fair, those things matter. They create structure. They make expectations visible. They provide a starting point for coordination when something goes wrong.

But they are not the same thing as capability.

That distinction usually stays hidden until the system is actually stressed.

Because what matters in that moment is not whether the plan exists. It is whether the organization understands how to operate when the plan no longer fits the situation exactly as written.

And it rarely does.

Most disruptions do not follow the script. Dependencies fail in ways no one modeled. Information arrives late or incomplete. Decisions need to be made before the full picture is available.

That is where “looking prepared” starts to separate from actually being prepared.

From a quantitative risk perspective, this is the difference between documenting a response and actually changing loss exposure. A runbook may assume a certain response time, a certain level of control effectiveness, or a certain sequence of actions. If those assumptions do not hold, the expected reduction in loss event frequency or loss magnitude does not hold either.

The artifact remains. The exposure does not change.
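To make the arithmetic visible, here is a minimal back-of-the-envelope sketch in Python. Every number is illustrative, and it leans on one simplifying assumption: the response contributes nothing in the incidents where its assumptions fail.

```python
# Minimal sketch: modeled vs. delivered expected annual loss.
# All figures are illustrative placeholders, not benchmarks.

loss_event_frequency = 2.0    # expected loss events per year
loss_magnitude = 500_000      # expected loss per event with no effective response

# The runbook models a response that contains 60% of each event's loss.
modeled_containment = 0.60

# Under stress, the assumed response time and control effectiveness
# only hold for some fraction of real incidents. Simplification:
# when they fail, the response contributes nothing.
p_assumptions_hold = 0.5
delivered_containment = modeled_containment * p_assumptions_hold

modeled_eal = loss_event_frequency * loss_magnitude * (1 - modeled_containment)
delivered_eal = loss_event_frequency * loss_magnitude * (1 - delivered_containment)

print(f"Modeled expected annual loss:   ${modeled_eal:,.0f}")
print(f"Delivered expected annual loss: ${delivered_eal:,.0f}")
print(f"Understated exposure:           ${delivered_eal - modeled_eal:,.0f}")
```

Same artifact, same documentation, very different exposure.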

And yet, most organizations still measure preparedness through the existence and completeness of those artifacts.

Do we have a plan?
Is it documented?
Has it been reviewed?

Those are easy questions to answer.

They are also the wrong ones.

Because they do not tell you whether the organization can actually execute under real conditions.

Over time, these artifacts take on more weight than they should. They become signals of maturity. They become part of audit narratives. They become things that are maintained because they are expected to exist.

And slowly, they start to replace deeper understanding.

That is where preparedness becomes performative.


Preparedness looks clean on paper. Capability is tested in conditions that are not.

Everything looks structured. Everything is accounted for. But if you step into how decisions would actually be made under pressure, the signal weakens quickly.

What matters most in that moment is rarely written down in a runbook.

How quickly can the team interpret incomplete information?
Do they understand which systems actually drive the largest loss exposure?
Can they prioritize actions based on impact instead of sequence?

Those are not checklist items.

Those are capability indicators.

You can see the difference most clearly in how organizations respond to disruption.

In one environment, the team follows the plan until it no longer applies, and then progress slows. People look for direction. Escalations increase. Decisions get deferred because they fall outside the predefined path.

In another environment, the plan is used as a reference point, not a constraint. Teams understand the intent behind the steps. They adapt based on what is actually happening. Decisions are made with an awareness of tradeoffs, not just adherence to sequence.

The difference is not the quality of the documentation.

It is the depth of understanding behind it.

If you take this back to how preparedness should actually be evaluated, the focus shifts.

The question is not whether a plan exists.

The question is whether the plan reflects how the system behaves, and whether the people responsible for executing it understand how to adapt when it does not.

From a FAIR perspective, that means asking whether your preparedness mechanisms actually influence loss event frequency or magnitude under realistic conditions. If your response depends on assumptions that are unlikely to hold under stress, then the modeled reduction in risk is overstated.

The plan looks right.

The outcome does not follow.

That is the gap.
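You can put a rough shape on that gap with a small Monte Carlo sketch, in the spirit of a FAIR-style simulation. Every parameter below is a placeholder rather than a calibrated estimate: two loss events per year, a triangular per-event loss distribution, and a planned response that removes 60 percent of the loss only when its assumptions hold.

```python
# Sketch: how fragile response assumptions erode modeled risk reduction.
# All parameters are illustrative placeholders.
import random

random.seed(7)

TRIALS = 100_000                 # simulated years
EVENTS_PER_YEAR = 2              # loss event frequency (illustrative)
LOSS_MIN, LOSS_MODE, LOSS_MAX = 100_000, 400_000, 2_000_000  # per-event loss
CONTAINMENT = 0.60               # fraction of loss the planned response removes

def annual_loss(p_assumptions_hold: float) -> float:
    """Simulate one year. The response delivers its modeled containment
    only for events where its assumptions actually hold."""
    total = 0.0
    for _ in range(EVENTS_PER_YEAR):
        loss = random.triangular(LOSS_MIN, LOSS_MAX, LOSS_MODE)
        if random.random() < p_assumptions_hold:
            loss *= 1 - CONTAINMENT
        total += loss
    return total

for p in (1.0, 0.8, 0.5):
    mean = sum(annual_loss(p) for _ in range(TRIALS)) / TRIALS
    print(f"P(assumptions hold) = {p:.0%}: mean annual loss ~ ${mean:,.0f}")
```

The specific numbers do not matter. The shape does: as the probability that the plan's assumptions survive contact drops, the mean annual loss climbs back toward the unmitigated figure, and the document itself never changes.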

Preparedness should not be measured by completeness. It should be measured by how well it supports decision-making under uncertainty.

Does it clarify what matters most?
Does it help prioritize actions that meaningfully reduce exposure?
Does it enable teams to act when conditions deviate from expectation?

If it does not do those things, it is not building resilience.

It is creating structure without capability.

And structure without capability tends to fail quietly, right up until the moment it matters most.

Real confidence comes from something else.

It comes from repeated exposure to how the system behaves. It comes from understanding where assumptions break. It comes from being able to explain, not just what the plan says, but why it exists and how it should flex.

Plans are necessary.

But they are not sufficient.

Capability is what determines whether they actually matter.

James Smith

James is the Founder and Managing Director of ORP Consulting and a U.S. Army veteran with over a decade of experience across military, law enforcement, and national laboratory environments. He brings a disciplined, security-first perspective focused on practical risk management and decision-making that holds up under real-world conditions.
