Assumption testing is the ability to identify the unexamined beliefs that underpin your strategy and deliberately subject them to evidence-based challenges.
In the context of learning agility, assumption testing is the investigative core of the shedding pillar. While other behaviours focus on general unlearning, this is about the precision work of finding the structural flaws in your current logic. It requires the intellectual rigour to treat your most cherished “facts” as mere guesses that need verification. It is the transition from blind execution to evidence-based action, ensuring that you are not building your future on a foundation of outdated or incorrect premises.
Why assumption testing matters
In a rapidly shifting environment, what was true six months ago is often a myth today. When this dimension of agility is low, a leader operates on “auto-pilot,” making high-stakes decisions based on unexamined rules of thumb. This leads to strategic drift, where the organisation continues to solve yesterday’s problems while the market has moved on. Without the habit of testing, you become a prisoner of your own past successes, unable to see the reality that is staring you in the face.
High assumption testing allows you to maintain a high-fidelity map of the world. By regularly hunting for the “load-bearing” assumptions in your plan and trying to disprove them, you ensure that your strategy is resilient. It keeps your focus on what is actually happening now rather than on what you hope is happening. It creates a state of permanent alertness where you are always ready to shed a belief the moment the data suggests it is no longer valid.
Assumption testing spectrum
Effective leadership requires a balance between the conviction needed to drive an idea forward and the constant scepticism required to check whether the idea still matches reality.
| Left side: Unconscious bias | Right side: Assumption testing |
|---|---|
| Strengths: Conviction creates speed and momentum; the team executes without constant second-guessing. | Strengths: The strategy stays anchored to current evidence, and flawed premises surface while they are still cheap to fix. |
| Liabilities: High-stakes decisions run on auto-pilot, strategic drift sets in, and past successes blind you to a changed reality. | Liabilities: Taken to an extreme, perpetual re-testing can stall momentum and read as a lack of conviction. |
What good and bad look like for assumption testing
| What bad looks like | What good looks like |
|---|---|
| Treating guesses as facts: Using phrases like “the customer wants X” without any recent data to prove it. | Labelling the leap: Identifying “the customer wants X” as a high-risk assumption that must be tested this week. |
| Seeking confirmation: Designing “tests” that are guaranteed to prove you are right rather than finding the truth. | Seeking disconfirmation: Actively looking for the one person or data point that would prove your idea is wrong. |
| Ignoring the “silent” assumptions: Focusing only on the surface details while ignoring the massive beliefs underneath. | Deep-diving: Asking “what would have to be true for this entire plan to work?” and testing those core pillars. |
| One-time checks: Testing an assumption at the start of a year and never checking it again as the environment shifts. | Continuous verification: Building small “reality checks” into every stage of the execution process. |
| Defending the plan: Reacting to negative data by trying to explain why the data is wrong rather than why the plan is flawed. | Updating the map: Using a failed test as an immediate signal to shed the old assumption and build a new one. |
Barriers to assumption testing
- The “expert” ego: The belief that your experience exempts you from the need to test.
- Fear of delay: The concern that stopping to test an assumption will cause you to lose momentum or miss a deadline.
- Incentive structures: Organisations that reward “hitting the target” regardless of whether the target was the right one.
- Cognitive ease: The brain’s natural tendency to accept information that confirms what we already believe.
- Groupthink: A team environment where no one wants to be the “naysayer” who questions the leader’s core premise.
- Sunk cost bias: The more time you have spent on a plan, the less you want to test the assumptions that might break it.
- Lack of analytical tools: Not knowing how to design a low-cost experiment to get a clean “yes” or “no” answer.
Enablers of assumption testing
- The “leap of faith” log: Keeping a visible list of the five biggest assumptions in your current strategy.
- The “what if” habit: Regularly asking “if our core assumption about X turned out to be false, what would we do?”
- Red-teaming: Assigning a specific person to act as the “challenger” for every major strategic assumption.
- Low-fidelity probing: Using tiny, cheap actions to get a “pulse” on whether an assumption is still valid.
- Data transparency: Ensuring you have real-time access to the signals that would indicate an assumption is failing.
- Metacognitive honesty: Learning to distinguish between what you “know” and what you “hope” is true.
- Reward for disproof: Publicly celebrating when an assumption is proven wrong, as it prevents a larger future failure.
Questions for reflection
- What is the one “fact” I am most certain about that I haven’t actually checked in the last three months?
- What would have to be true for this entire project to be a complete waste of time?
- Am I looking for evidence that I am right, or am I brave enough to look for evidence that I am wrong?
- If I were a competitor studying this strategy, which assumption would I attack to make it fail?
- How many of my “best practices” are actually just old habits that haven’t been tested in years?
- What is the smallest, fastest way I could prove my most important assumption is false today?
- Am I protecting my own sense of certainty, or am I protecting the long-term success of the project?
Micro practices for assumption testing
- The “assumption audit”: Take your current top priority and write down the three things that “must be true” for it to succeed. Find one way to test the shakiest one.
- The “five whys” check: When you state a “fact,” ask yourself “why do I believe that?” five times until you reach the root assumption.
- The “devil’s advocate” minute: In every planning session, spend 60 seconds arguing against your own most confident statement.
- The data-dating rule: Every time you use a piece of market data, check the “date” on it. If it is older than six months, treat it as an assumption, not a fact.
- The “outside-in” call: Speak to one person outside your industry and explain your core plan; listen for the assumptions they spot that you have become blind to.
This is one of the 20 behaviours in the learning agility library. Visit the library to explore the rest.