Experimental fluidity is the ability to rapidly initiate, iterate, and abandon low-stakes trials to discover the most effective path forward in a shifting environment.
In the context of learning agility, experimental fluidity is the operational engine of the sprinting pillar. While other aspects of agility focus on reflecting on the past, experimental fluidity is about proactive discovery. It marks the transition from intellectualising a plan to physically probing the environment. This behaviour demands the high metabolic energy needed to run multiple safe-to-fail loops simultaneously, ensuring that a leader’s mental model evolves faster than the market’s rate of change. It is the practice of learning through action rather than speculation.
Why experimental fluidity matters
In complex, volatile systems, the information required to succeed often does not yet exist and must be provoked into existence. When this dimension of agility is low, leaders fall into the big bet trap, treating strategy as a linear track rather than an evolutionary process. This leads to cognitive lock-in, where teams spend months perfecting a single launch only to find the environment has shifted by the time they arrive.
High experimental fluidity allows a leader to use action as a form of inquiry. By maintaining a state of constant experimentation, you negotiate with the environment in real time. This ensures that strategic bets are based on the latest feedback from reality rather than unexamined historical assumptions or expert biases. It creates a culture of versioning where the best path is discovered through data, reducing the psychological cost of failure by making it a standard, low-stakes part of the daily learning workflow.
Experimental fluidity spectrum
Effective leadership requires a balance between the rigour needed for high-quality execution and the plasticity required to maintain a rapid pace of discovery.
| Left side: Linear execution | Right side: Iterative discovery |
|---|---|
| Strengths | Strengths |
| Liabilities | Liabilities |
What good and bad look like for experimental fluidity
| What bad looks like | What good looks like |
|---|---|
| The grand reveal: Waiting until a strategy is 100 per cent finished before launching it to the market. | The provisional probe: Launching a rough, 20 per cent version early to sense how the system responds. |
| Rigid rollout: Sticking to a 12-month execution plan regardless of the early negative data. | Dynamic iteration: Using the data from the first week of a test to fundamentally rewrite the second week. |
| Scaling before sensing: Deploying a new process across the whole business before testing it in one small unit. | The micro-pilot: Testing a new behaviour or tool with one client or in one meeting to see if it works. |
| Fearing the fail: Treating an unsuccessful experiment as a failure of leadership or a waste of budget. | Valuing the no: Treating a negative result as a high-value data point that prevents a big failure later. |
| Individualised trials: Running experiments in silos without sharing the learning across the whole team. | Collective learning: Ensuring that the insights from every small trial are used to upgrade the group’s map. |
Barriers to experimental fluidity
- Perfectionism: The internal rule that says never show anything unless it is flawless. This shame-based barrier prevents the early-stage feedback required to make the idea good.
- The hero fallacy: The belief that a leader should always know the right path. Admitting you need to experiment feels like admitting uncertainty, and uncertainty can read as a loss of status.
- Sunk cost psychology: The more time you have spent planning, the more your brain will fight to stick to that plan, even when a small trial suggests it is flawed.
- Incentive misalignment: Most organisations reward success and punish failure. This creates a culture where people only run safe tests that provide zero actual learning.
- Bureaucratic friction: Large organisations often require ten levels of approval for even a small test. This structural drag kills the speed required for agile sprinting.
- Expert blindness: Believing that your past experience is a substitute for current data. You assume you already know the outcome, so you do not bother to test.
- Fear of brand damage: The concern that showing a half-baked idea will hurt the company’s reputation. This ignores the fact that a fully baked bad idea is far more damaging.
- Cognitive saturation: Managing multiple experiments requires significant mental slack. In a state of permanent firefighting, the brain shuts down discovery to save energy.
Enablers of experimental fluidity
- Defining safe-to-fail zones: Explicitly identifying the areas where the team is allowed to fail without damaging the core business.
- The hypothesis mindset: Replacing the word plan with hypothesis. This signals that the strategy is meant to be tested, not just executed.
- Rapid time-boxing: Limiting experiments to very short windows to ensure the feedback loop stays fast and focused.
- Low-fidelity prototyping: Using sketches, Wizard of Oz tests, or manual workarounds to validate a concept before building the expensive infrastructure.
- Rewarding the insightful failure: Publicly celebrating a team that ran a disciplined trial that proved a major strategic assumption was wrong.
- Metacognitive detachment: Learning to view your ideas as disposable specimens. You are the scientist and your value is in the discovery, not the idea.
- Creating learning slack: Deliberately leaving 10 per cent of the team’s capacity for unstructured trials that are not tied to an immediate KPI.
- The pre-mortem habit: Before starting a trial, ask: if this fails, why will it have failed? Then design the experiment to target that specific risk.
Questions for reflection
- What is the smallest, cheapest way I could test my best idea tomorrow morning for less than £100?
- Am I currently trying to be right or am I trying to learn what is actually true in the market?
- If I were forced to launch this project in 48 hours instead of 6 months, what is the one core thing I would focus on?
- What safe-to-fail boundary have I set for my team where I can give them total freedom to experiment?
- Which part of our current strategy is based on a guess that we are currently treating as a settled fact?
- What was our last failed experiment, and did it actually change our direction?
- Am I currently protecting my professional ego or am I protecting the speed of our organisational learning?
- How many probes are we currently running to sense the future shifts in our industry?
Micro practices for experimental fluidity
- The paper prototype challenge: Before writing a formal proposal, you must show a hand-drawn version of the idea to three potential users and record their honest reactions.
- The 24-hour pilot: Pick one new team behaviour and commit to it for exactly 24 hours. Debrief the results the next morning.
- The fake door test: Instead of building a new service, create a simple flyer or landing page describing it. Track how many people express interest before you invest in the build.
- The negative milestone: Set a specific data point that, if reached, means the experiment is a failure and must be stopped immediately. This prevents zombie projects.
- The shadow budget: Allocate a tiny amount of money that can be spent on any experiment without formal approval, provided the results are shared with the whole team.
This is one of the 20 behaviours in the learning agility library. Visit the library to explore the rest.