Mental model updating is the conscious process of recalibrating your brain’s predictive “best guesses” to align with new, contradictory evidence.
While “mental models” are a convenient psychological shorthand, science shows the brain actually operates as a predictive engine. It constantly generates internal simulations of how the world works to save energy. In the context of learning agility, “updating” is the difficult work of overriding an established neural prediction when it consistently fails to match reality. It is the transition from “this is how I expect the world to behave” to “I am intentionally revising my expectations based on new data.”
Why mental model updating matters
Mental model updating matters because our brains are naturally “conservative”: they prefer to ignore small errors to maintain a stable worldview. Agile leaders, however, treat “prediction errors” (surprises or failures) as high-value signals rather than noise. This behaviour ensures that your internal “operating system” doesn’t become a legacy system in a modern environment.
When this agility is low, leaders suffer from “predictive persistence,” where they continue to act as if their old rules are true even when the results are failing. They become expertly obsolete. High mental model updating allows a leader to remain “fit for purpose” indefinitely. It turns the brain into a living document, ensuring that your leadership logic evolves at the same rate as the complexity of your challenges.
Mental model updating spectrum
Effective leadership requires a balance between the stability needed to provide a coherent direction and the plasticity needed to stay relevant.
| | Left side: Principled stability | Right side: Adaptive evolution |
|---|---|---|
| Strengths | | |
| Liabilities | | |
What good and bad look like for mental model updating
| What bad looks like | What good looks like |
|---|---|
| Explaining away the error: Treating a surprising failure as a “one-off” fluke rather than a flaw in your logic. | Mining the surprise: Asking “What does this surprise tell me about the flaw in my current predictions?” |
| Belief perseverance: Doubling down on a failing strategy because “this is what made me successful.” | Intellectual de-leveraging: Letting go of past “truths” the moment they stop producing the expected results. |
| Linear addition: Adding new facts to your brain without ever changing the underlying “rules” of the game. | Structural rewrite: Fundamentally changing the “if-then” logic you use to understand the market. |
| Certainty theatre: Projecting absolute conviction to look “strong” while ignoring your own internal doubts. | Agile conviction: Having strong opinions that are held very loosely and updated frequently. |
| Protecting the ego: Feeling that being “wrong” is a failure of character rather than an update in data. | Decoupling self from stance: Viewing your ideas as disposable prototypes rather than part of who you are. |
Barriers to mental model updating
- Prediction error suppression: The brain often subconsciously “mutes” signals that contradict our expectations to avoid cognitive dissonance. We don’t just ignore the truth; our biology often prevents us from even seeing it.
- The “expert” identity prison: When your professional value is tied to “knowing,” admitting your mental map is wrong feels like a loss of power. You defend the old model to protect your status, not the business.
- The metabolic cost of unlearning: It is neurobiologically harder to inhibit a strong, existing neural pathway than to create a new one. “Unlearning” is physically exhausting and requires significant cognitive “slack.”
- Social consistency bias: Societies and organisations reward “consistency.” Changing your mind is often stigmatised as “flip-flopping,” creating a social incentive to stay wrong rather than be updated.
- Success-induced entrenchment: Past success creates a “competency trap.” The brain rewards you for using the same neural circuits that won before, even as the environment shifts around you.
- Temporal exhaustion: Meaningful updates require slow, deliberate “Type 2” contemplative thinking. In high-stress, back-to-back meeting cultures, the brain defaults to fast, legacy heuristics to save energy.
- Sunk cost logic: The more you have publicly and financially invested in a specific worldview, the higher the psychological “price” of admitting it is no longer valid.
- Lack of “safe-to-fail” spaces: If errors are punished, leaders will hide their outdated logic rather than updating it, leading to a “frozen” organisation.
Enablers of mental model updating
- Adopting the “Bayesian” mindset: You treat your beliefs as “prior probabilities”: starting points that are meant to be constantly shifted by new evidence.
- Seeking disconfirming data: You don’t look for why you are right; you actively hunt for the one piece of data that proves your current “logic” is failing.
- Metacognitive labelling: You move from “I believe X” to “I am currently operating under the hypothesis that X.” This small linguistic shift detaches your ego from the idea.
- The “beginner’s mind” ritual: You regularly enter new domains where you have zero expertise. This keeps the neural pathways of “re-learning” plastic and active.
- Rewarding the “rethink”: You celebrate when a team member brings evidence that a long-held strategic pillar is now obsolete. You signal that updating is a high-status leadership act.
- Using “sacred cow” audits: Once a quarter, you pick an “untouchable” rule of the business and ask: “What if the exact opposite of this was now true?”
- External cognitive disruptors: You work with mentors or peers from unrelated industries specifically to “shred” your current logic and offer alternative frames of reference.
- The posture of intellectual humility: You define your value not by the permanence of your answers, but by the speed and accuracy of your updates.
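The “Bayesian mindset” above has a precise mathematical core: a prior confidence is revised by each new observation according to how likely that observation would be under competing hypotheses. A minimal sketch of that update, where the prior, the likelihoods, and the “surprising data points” are all illustrative assumptions rather than anything from a real dataset:

```python
# Toy Bayesian belief update: how confidence in a "truth" should shrink
# as evidence that fits a rival explanation keeps arriving.
# All numbers here are illustrative assumptions.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return the posterior probability that a belief is true
    after observing one piece of evidence (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start 90% confident in a long-held strategic belief.
belief = 0.90

# Observe three surprises, each twice as likely if the belief is false
# (0.4) as if it is true (0.2).
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.2,
                          p_evidence_if_false=0.4)
    print(f"Updated confidence: {belief:.2f}")
# Confidence drops toward 50/50: 0.82, 0.69, 0.53
```

The point of the sketch is behavioural, not numerical: a leader with “agile conviction” lets repeated prediction errors pull a 90% conviction down towards uncertainty, while belief perseverance is equivalent to refusing to run the update at all.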
Questions for reflection
- What is one “truth” about our industry that I have held for a decade that might actually be a liability today?
- When was the last time I genuinely changed my mind about a major decision because I was proven wrong?
- Am I currently trying to be the “expert” with the answer or the “learner” with the most accurate map?
- What surprising data point have I been trying to “explain away” lately to avoid changing my plan?
- If I were my own competitor, what part of my logic would I attack first?
- What part of my leadership style is “Version 1.0” in a “Version 5.0” world?
- Am I busy with the “right” things, or just using busyness to avoid the difficult work of re-thinking?
- If I were my own successor, what is the first legacy rule I would delete?
Micro practices for mental model updating
- The “kill the company” exercise: Imagine you are a start-up with zero legacy and your only goal is to put your current company out of business. What “legacy rules” would you exploit to win? These are the models you need to update first.
- The “logic audit”: Write down the “if-then” statement behind a current decision (e.g., “If we do X, then Y will happen”). Now, find one piece of evidence that suggests that “if-then” might be false.
- The “intellectual will”: Write a letter to your successor telling them which “standard ways of doing things” here are actually traps. Now, read that letter to yourself and start changing them today.
- The “belief-update” announcement: In your next meeting, say: “I used to believe X about our strategy, but after seeing Y, I’ve updated my view to Z.” This gives your team permission to do the same.
- The 10% doubt window: When presenting a plan, explicitly state the “10% of this I am most worried about” or the parts where your confidence is lowest. This invites a collaborative update rather than a defensive approval.
This is one of the 20 behaviours in the learning agility library. Visit the learning agility library to explore the rest.