Leaders today face environments filled with volatility, uncertainty, complexity, and ambiguity. The real challenge is not simply that the future is hard to predict, but that efforts to create certainty through detailed planning often prove fragile when tested against reality.

Large-scale initiatives rarely fail because leaders are careless. They fail because they were designed for a world where stability and predictability could be taken for granted. In the world we now inhabit, agility and adaptability matter more.

Pilots, probes, and experiments offer practical ways to navigate this uncertainty. They allow organisations to move forward even when clarity is limited, to test ideas in both controlled and exploratory ways, and to build a culture that values learning as much as results.

Rather than relying on forecasts, these approaches help leaders learn through direct experience. They also influence the spirit of an organisation, inviting humility, curiosity, and resilience. Leaders who embrace them can reduce risk, accelerate insight, and empower their teams to innovate with care and responsibility.

What are pilots, probes, and experiments?

Pilots: A pilot is a small, time-bound test to determine whether a product, process, or service can work in practice. It is used when an idea appears promising but remains unproven.

Think of a bank that tries a new mobile feature in one city before a national launch. The trial exposes what could not be seen on paper: user confusion, system issues, or compliance gaps. The value of a pilot lies in its realism. It bridges the space between concept and operation, letting leaders see what truly happens when an idea meets the world. From this, they can refine and strengthen the design before committing at scale.

Probes: A probe begins with curiosity, not certainty. Drawn from complexity thinking, probes are small, safe-to-fail actions that explore what might work when the path ahead is unclear.

A city struggling with congestion might try three different approaches: a cycling subsidy, staggered work hours in one district, and AI-driven traffic lights. None is guaranteed to succeed, and that is the point. Each test offers a glimpse of how the system responds. The power of a probe is its capacity to uncover patterns that analysis alone cannot reveal.

Experiments: An experiment is a structured test designed around a hypothesis. It is used when a leader wants clear evidence about cause and effect.

An online retailer, for example, may compare two website layouts to see which converts more visitors into buyers. Experiments are precise, controlled, and measurable. They do not explore the unknown in the same way that probes do, but they provide reliable evidence where outcomes can be quantified. Their strength lies in giving leaders grounded answers to specific questions.
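
For readers who like to see the numbers, here is a minimal sketch, in Python, of how such a comparison might be judged. Every figure in it is hypothetical; the point is only that an experiment of this kind produces data precise enough to test rather than argue about.

```python
from math import sqrt
from statistics import NormalDist

def compare_layouts(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test: is layout B's conversion rate genuinely
    different from layout A's, or could the gap be noise?"""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    # Pooled rate under the assumption that the two layouts perform the same
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return rate_a, rate_b, p_value

# Entirely hypothetical traffic: 10,000 visitors saw each layout
rate_a, rate_b, p_value = compare_layouts(400, 10_000, 460, 10_000)
print(f"Layout A: {rate_a:.1%}, Layout B: {rate_b:.1%}, p-value: {p_value:.3f}")
```

In practice most teams would lean on an analytics or testing platform rather than hand-rolled statistics, but the underlying question is the same: is the observed difference large enough to trust?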

Why they matter in a VUCA world

Pilots, probes, and experiments matter because they fit the real conditions of uncertainty. Traditional management still leans on the hope that problems can be solved with enough analysis, planning, and disciplined execution. That might work in a stable environment. But when the world is volatile and complex, predictions age quickly and large-scale commitments built on assumptions often collapse under their own weight.

These approaches bring three vital benefits:

First, they reduce risk by containing failure. Instead of betting everything on a single grand plan, leaders can test several smaller options. A pilot that fails in one department costs far less than a full organisational rollout that misfires.

Second, they speed up learning. When organisations act, observe, and adapt, they gather insights faster than those waiting for perfect data or endless certainty.

Third, they build resilience. A portfolio of pilots, probes, and experiments spreads learning across the system. Even if many attempts do not work, the few that succeed deliver value out of proportion to their size, strengthening the whole organisation.
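
To make "out of proportion" concrete, here is a rough, illustrative simulation in Python. All of the numbers are invented: ten small bets per round, each costing a modest amount, with only a one-in-five chance of success but a payoff many times the cost when one lands. It is not a forecast, just a way of seeing why a portfolio of small tests can come out ahead even when most individual attempts fail.

```python
import random

def portfolio_outcome(n_bets=10, cost_per_bet=50_000,
                      chance_of_success=0.2, payoff_if_success=600_000):
    """Net value of one round of small bets: every bet costs something,
    only the occasional success pays off, but it pays off disproportionately."""
    successes = sum(random.random() < chance_of_success for _ in range(n_bets))
    return successes * payoff_if_success - n_bets * cost_per_bet

random.seed(1)
trials = [portfolio_outcome() for _ in range(10_000)]
average = sum(trials) / len(trials)
loss_rate = sum(t < 0 for t in trials) / len(trials)
print(f"Average net value of a round: {average:,.0f}")
print(f"Rounds that lost money overall: {loss_rate:.0%}")
```

Under these made-up assumptions the average round comes out well ahead, and only the rare round in which nothing at all succeeds loses money. The exact figures matter far less than the shape of the logic: small, capped losses set against occasional outsized wins.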

Think of how airlines responded during the COVID-19 pandemic. Those that tried flexible ticketing, contactless boarding, or new loyalty schemes discovered in real time what customers needed most. Those that waited for clarity found themselves behind. In a VUCA world, these practices are not optional extras. They are the disciplines that allow organisations to survive, learn, and grow.

A note on VUCA: The acronym VUCA is often used to describe the turbulence of modern work. It helps name what feels unpredictable, yet it is not a theory of the world so much as a convenient label for our discomfort with it. The truth is simpler: volatility, uncertainty, complexity, and ambiguity have always been with us; we are just more awake to them now. I use VUCA simply because it is a term many understand.

How they differ in practice

Pilots, probes, and experiments all involve small-scale testing, but they differ in purpose, design, and mindset. Knowing which to use is as important as knowing how to use it.

Pilots: A pilot is suitable when there is already a promising solution that needs to be proven in practice. It tests feasibility, scalability, and unintended consequences. The mindset is confirmatory: does this work as intended in the real world?

Pilots should be designed to reflect the larger system, yet remain contained enough to manage risk. The danger lies in turning them into mini-projects that grow too large, become bureaucratic, and delay learning.

Probes: A probe belongs in an unpredictable environment where many possible paths exist. Its purpose is not to confirm but to explore. The mindset is exploratory: what can we learn from trying?

Leaders must be comfortable with ambiguity and accept that most probes will “fail” in the conventional sense. The real value lies in the insight gained, not in a clean success. The common trap is to treat probes like pilots, demanding metrics and rollout plans that stifle experimentation and narrow curiosity.

Experiments: An experiment is for situations where a specific hypothesis can be tested with rigour. It provides reliable evidence for decisions when outcomes are measurable and variables can be controlled. The mindset is analytical: what does the data reveal?

The strength of experiments lies in their clarity. Yet leaders sometimes over-engineer them, delaying action in pursuit of perfection or drawing false conclusions when the wider system cannot be neatly contained. Experiments can also be overused, applied to contexts that are too messy or complex for such precision.

Bringing it together

Understanding the distinctions among pilots, probes, and experiments is essential. Misusing them, such as seeking certainty from a probe or letting a pilot expand into a full-scale project, undermines their purpose. Each is a different doorway into learning. The choice is not about which is best, but which fits the kind of uncertainty you face.

A step-by-step guide to using them

1. Clarify purpose

Begin by being clear about why you are running a pilot, probe, or experiment. Ask yourself: are we testing feasibility through a pilot, exploring options through a probe, or validating a hypothesis through an experiment? Misalignment at this stage creates confusion and wastes energy. Be open about your intent so that everyone involved understands what success means and what learning you are seeking.

2. Design scope

Keep every initiative small, low-cost, and time-bound. The goal is not to build a miniature version of a full rollout, but to create a focused learning opportunity. Define in advance what success, adaptation, and failure would look like. For example, in a retail pilot, success might mean 20 percent of customers adopt a new feature, while failure might mean adoption remains below 5 percent. Set boundaries around scope, department, market, and timeframe. Smaller efforts allow faster reflection and more agile adjustment.

3. Build psychological safety

People will only attempt bold probes or admit when a pilot has not worked if they feel safe to do so. Leaders create this safety by making it clear that failure is not career-threatening but part of responsible learning. Acknowledge and thank teams for the insights gained, even when the outcomes are disappointing. Share your own mistakes as examples of what learning looks like in practice. Without psychological safety, teams will design only safe tests that protect assumptions instead of challenging them.

4. Run and monitor

Keep execution transparent. Avoid hiding results in lengthy reports. Share them through simple dashboards, short reflections, or open conversations. Monitoring is not about inspection but about learning as things unfold. One pharmaceutical company, for example, held weekly “sensemaking huddles” during a drug pilot to discuss early data. This helped the team adapt quickly rather than wait until the end of the trial.

5. Adapt and scale

After each round, decide whether to stop, adjust, amplify, or scale. Not every pilot needs to grow, not every probe should end after an initial failure, and not every experiment result should be treated as universal truth. The task is to learn deeply and make conscious choices. Be wary of scaling too early; one local success does not ensure success everywhere. At the same time, do not ignore failed attempts. Even weak signals from what did not work can shape future direction.

Common pitfalls to avoid

Even when leaders adopt pilots, probes, and experiments, several familiar traps can quietly undo their value.

Pilots that are too large

Leaders sometimes make pilots too ambitious, turning them into small-scale rollouts instead of learning exercises. This creates bureaucracy, slows feedback, and increases cost. A government agency once launched a “pilot” digital system across several regions with thousands of users. The scope was so broad that when problems appeared, they were complex and expensive to repair. True pilots should remain tightly scoped, fast, and manageable.

Mislabelled probes

A frequent mistake is to treat probes as if they were pilots, expecting them to succeed. This pressure discourages creativity and risk-taking. In one public sector innovation lab, leaders demanded business cases and performance metrics for every probe. As a result, the team focused only on safe ideas and learned little about truly new possibilities. Probes exist to explore. Failure is part of their purpose, not a sign of poor work.

Over-engineered experiments

Experiments can lose their value when over-analysed. Some leaders want flawless control groups, perfect data, and lengthy approvals before acting. In one multinational, a digital marketing experiment took six months to finalise because of debates about statistical accuracy. By the time it launched, competitors had already tested and applied similar ideas. Experiments should be rigorous enough to produce insight, but not so complex that they block learning in real time.

Scaling too early

Early success can create pressure to expand quickly. Yet what works in one setting may fail in another. A retail chain piloted a new store format in one city and saw strong results. When they rolled it out nationally, it failed in suburban and rural areas. Scaling before understanding the limits of context wastes energy and erodes trust. Leaders should wait until insights are proven across several conditions.

Fear of failure

The hardest barrier is cultural. When people believe that failed initiatives will harm their reputation, they design only safe tests that confirm what they already know. This robs the organisation of genuine learning. Leaders must keep signalling that failure is not a verdict but a contribution. Without this mindset, pilots, probes, and experiments will remain surface tools rather than sources of discovery.

Top tips for leaders using pilots, probes, and experiments

Start small and stay small: Avoid the temptation to design grand pilots that resemble full-scale projects. A true pilot or probe should be quick, low-cost, and easy to reverse. Smaller efforts provide faster feedback, build momentum, and reduce the political risk of failure. Think of them as trial balloons rather than miniature rollouts.

Frame failure as data: The most important cultural shift is to see failure as learning. When leaders celebrate lessons from what did not work, they make intelligent risk-taking normal. Thank teams who bring honest insights forward, and feed those lessons into the next round of design. Without this, people will keep offering safe, low-value ideas.

Run multiple bets in parallel: In a complex environment, no single idea is guaranteed to succeed. Running several pilots or probes at once creates a portfolio of learning. Even if most do not work, a few will produce exceptional value. This approach spreads risk and speeds discovery by testing multiple possibilities at the same time.

Be explicit about purpose: Confusion often arises when leaders fail to distinguish between a pilot, a probe, and an experiment. Before launching, clarify whether you are testing feasibility, exploring unknowns, or validating a hypothesis. Clear purpose leads to better design and avoids mismatched expectations later.

Ensure psychological safety: People will only take genuine risks if they feel safe to do so. Leaders must set the tone by showing that failed pilots are not punishable and that probes are expected to produce mixed results. Model vulnerability by sharing your own learning moments and by valuing honesty over polished reports.

Make learning visible: Do not let insights get lost in long reports or buried emails. Use simple dashboards, debrief sessions, and storytelling forums to share what was tried, what worked, and what did not. Visible learning connects teams and prevents repeated mistakes.

Resist premature scaling: One early success can create excitement, but scaling too soon often leads to disappointment. Robust ideas need testing across different contexts and timeframes. Hold off on scaling until patterns are consistent and risks are understood. Patience at this stage saves far more time and money later.

Empower teams to lead: Pilots and probes are most effective when owned by teams, not imposed from above. Encourage teams to design, test, and interpret their own initiatives. Your role as a leader is to create space, remove obstacles, and offer support.

Balance rigour with speed: Experiments should be rigorous enough to produce useful insights, but perfection slows progress. Decide when “good enough” evidence will serve the purpose, rather than waiting for flawless data. Learning in motion is better than precision that comes too late.

Connect back to strategy: Pilots, probes, and experiments can seem scattered unless they tie into a larger story. Link each initiative to a strategic question or challenge the organisation faces. This alignment ensures that local learning contributes to wider purpose and direction.

Practical examples

Here are some examples I have read about, heard about, or been involved in:

Technology company: A global software firm facing declining engagement launched several small probes. Teams explored gamification, personalised alerts, and simplified interfaces. Most ideas did not work, but one stood out: a dashboard that offered recommendations based on user behaviour. This probe grew into the centrepiece of the redesigned product. The insight was clear. Running several probes at once builds resilience, because one meaningful success can balance many failures.

Healthcare organisation: A hospital piloted telemedicine services in one department for patients needing follow-up care. The pilot exposed challenges such as low digital literacy among older patients and concerns about privacy. These lessons shaped better patient education and stronger cybersecurity before the wider rollout. By starting small, the hospital avoided a costly system-wide failure and ensured a more confident expansion later.

Retailer: An online retailer used structured experiments to improve its checkout process. The team tested different layouts, payment options, and call-to-action wording. The data revealed that a simplified two-click path increased conversions by 18 percent. This experiment offered clear, evidence-based guidance that directly strengthened performance when applied more widely.

Public sector: A city government faced growing youth unemployment and chose to act through a series of probes. In one area, it trialled mentorship programmes; in another, small entrepreneurship grants; and in a third, job-matching platforms. Not every idea worked, but the job-matching probe gained strong traction. The city then expanded that approach while continuing to learn from the others. By working in this adaptive way, it responded faster and more effectively than a single centralised programme ever could.

Leadership mindset and behaviours

The success of pilots, probes, and experiments depends not only on how they are designed but also on the mindset leaders bring to them. Leadership behaviour shapes whether these practices become genuine opportunities for learning or mere rituals that confirm what is already believed.

Curiosity over certainty: Leaders who cling to certainty tend to design tests that prove their own assumptions. Curiosity, by contrast, invites discovery. When a global consumer goods company explored sustainable packaging, its senior leader resisted the urge to back one technology too soon. Instead, she encouraged the team to stay open, run several trials, and let the results guide direction. Curiosity slows the rush to conclusions and keeps the organisation learning.

Humility: No leader has all the answers. Humility acknowledges this truth and creates space for others to contribute. In a healthcare organisation piloting digital health records, humble leadership meant involving nurses and clerical staff in design discussions, valuing their lived experience as much as executive expertise. Leaders who lack humility often distort results by pushing for their own preferred outcomes, silencing the insights of those closest to the work.

Support over control: Effective leaders act as sponsors rather than controllers. Their task is to create the conditions for learning: providing resources, removing barriers, and protecting space for teams to explore. A European energy company gave local offices the freedom to run their own probes into renewable energy adoption, offering funding and encouragement from the centre. This support led to solutions that made sense locally. Tight control from headquarters would have crushed that creativity.

Learning focus: Perhaps the most important behaviour is to reward learning as much as success. When only success is celebrated, people avoid risk and protect their reputations instead of stretching for insight. A financial services firm changed this pattern by introducing an award for “most valuable learning,” recognising pilots that failed but exposed blind spots. By treating failure as learning, the organisation strengthened both its resilience and its willingness to innovate.

Making pilots, probes, and experiments a leadership habit

Pilots, probes, and experiments are not one-off tools but ongoing leadership practices. They move organisations from static planning to active learning, from risk avoidance to resilience, and from rigid certainty to adaptive curiosity. Leaders who work in this way model humility, curiosity, and courage. They make it clear that learning is valued, that exploration is safe, and that adaptability matters more than being right at the start.

In practice, this means placing small bets, sharing results openly, celebrating what is learned, and resisting the urge to scale too early. It means building workplaces where experimentation is part of the daily rhythm, not just a special innovation exercise. When leaders embed these habits, they shift culture from compliance to engagement, from reaction to reflection.

Every act of leadership is, in essence, a conversation with uncertainty. Pilots, probes, and experiments remind us that progress is not a straight line but a practice of steady curiosity. They invite us to lead through questions rather than predictions and to create the conditions where others can learn alongside us.

True leadership here is not about bold declarations or perfect plans. It is about the patience to listen to what reality teaches, the courage to remain open when outcomes are unclear, and the generosity to make learning a shared experience rather than a private achievement.

This article connects to a related one on the Cynefin framework.

Reflective questions

  • Where in your work are you still leading from certainty instead of curiosity?
  • What is one small, safe experiment you could begin this month?
  • How might you make learning visible and shared in your team’s daily rhythm?

Do you have any tips or advice for bringing pilots, probes, or experiments into play?

What has worked for you?

Do you have any recommended resources to explore?

Thanks for reading!