ADAPTIVE ADOPTION™ — CHANGE AGILITY™

Change Agility™

Change management was designed in an era, and for an era, that has passed. This is an organizational change framework built for AI — not just for change managers, but for Forward Deployed Engineers, developers, and adoption leaders.
THE PROBLEM

The Specific Adoption Problems AI Creates

  • Change is treated as a project, not a capability. Organizations run change as discrete initiatives with start dates and end dates. AI doesn't end. The organizations that win will be the ones that build permanent adaptive capacity — not the ones that execute the best rollout plan.
  • People are treated as the last mile. Traditional change management cut the check, bought the tech, and persuaded people to use it. With AI, the human is the integration layer, not the end-user.
  • Traditional training never worked well. Some experts put "training transfer to the job" as low as ten percent. AI is more like carpentry than calculus — you have to build with it to learn it. And the half-life of AI knowledge is about three months.
  • Complexity defeats planning. AI adoption is a complex adaptive system — three bodies in motion (humans, organizations, a non-human agent) producing emergent patterns no plan can predict.
  • Trust is undiagnosed. Organizations know trust is low but treat it as a communications problem rather than diagnosing which of four trust dimensions has actually failed. Under-trust and over-trust operate simultaneously, requiring entirely different interventions.
  • Prototyping is absent. Change management assumes you know the destination. AI moves too fast to define a future state; you must design and iterate your way toward it.
  • Persuade then pray. Superb communication, education, and inspiration — then waiting for behavior to follow. It doesn't. People know from their own lives that good intentions matter less than behaviors; in business, the intention-action gap is a chasm.
  • Ethics is outsourced to compliance. A "short list" of AI ethical issues runs to forty items. No committee meets fast enough to adjudicate them in real time.
THE RESPONSE

Why Change Agility Is Different

  • Agility as infrastructure. Change Agility is not a program you run — it is a capability you build. Seven pillars, each with tools, processes, behaviors, and skills that compound over time. The flywheel, not the project plan.
  • People-first sequencing. Invert the order: begin with tools and workflows that serve the human, that augment rather than automate. You get to the efficiency gains faster by not starting with them.
  • Craft, not curriculum. AI is not a language you learn; it is a system you architect. The unit of learning is the community of practice — people learning together, in public, through doing.
  • Complexity-native. Probe-sense-respond replaces plan-and-execute. Designed for emergence, not control.
  • Trust as a named variable. Four trust dimensions (Relational, Institutional, Self-Trust, Task Trust), each with distinct failure modes and distinct interventions. Naming trust is not fixing it; calibrating it is.
  • Design thinking for change. Build–Measure–Learn applied to organizational change itself. Every initiative is a prototype until evidence says otherwise.
  • Behavior before attitude. Change the environment, change the behavior; the mindset catches up. Behavioral science supplies the tools to diagnose, change, and measure behavior.
  • Active ethics. Ethical business isn't about having better surveillance — it is about the whole workforce saying: I know we can, but should we?
  • Integrated, not isolated. Change Agility works alongside Leadership Delta and Behavioral Governance — companion approaches to leadership and governance for the AI era. The three frameworks are mutually reinforcing and designed to be deployed together.

When management ideas live in PDFs, they become rigid. The tools, diagnostics, and methods behind this framework are open and inspectable.

OPEN THE REPOSITORY

Below: the high-level elements of the framework, with a single example tool, process, behavior, and skill per pillar.

THE CHANGE AGILITY FRAMEWORK

TOOL · WHAT YOU USE
PROCESS · HOW YOU WORK
BEHAVIOR · WHAT YOU MODEL
SKILL · WHAT YOU BUILD
Master the Craft
Build capability through doing, not curriculum.
Building with AI is like carpentry, not calculus: you can't learn it in a classroom or a webinar. The dominant L&D model fails when the skill landscape shifts quarterly. AI Mastery requires craft: intimate knowledge of tools, understanding of their limits, and quality work produced through skill, not luck. The unit of learning is the community of practice.
TOOL
Sandbox Environments
Safe place to break things without production consequences
PROCESS
Peer Learning Circles
Weekly show-and-tell; structured "work out loud"
BEHAVIOR
Fail Loudly — Safely
Share misses and fixes in public, not just wins
SKILL
Prompting as Craft
Task decomposition, constraint design, context engineering
Embrace Complexity
You cannot plan your way through emergence, but you can design for it.
Beyond personal productivity and turnkey SaaS lies complexity. AI adoption is a complex adaptive system — a 3-body problem: AI, human, organization. Change support must be equally adaptive: probe-sense-respond, not waterfall.
TOOL
Polarity Maps
Managing tensions (innovation vs. safety) rather than choosing sides
PROCESS
Probe
Design safe-to-fail experiments that test assumptions
BEHAVIOR
Silo-Busting
Breaking down organizational boundaries to enable cross-team learning
SKILL
Pattern Journaling
Recording emerging patterns and system behaviors over time
Consciously Manage Trust
Trust is the change-resistance antivenom.
Only 30 percent of Americans trust AI, and over half fear for their jobs. This was not the case with ERP, cloud, or mobile. Trust is not a byproduct of good leadership — it is a four-dimensional dynamic to calibrate: Relational, Institutional, Self-Trust, and Task Trust. Under-trust and over-trust operate simultaneously and require different interventions.
TOOL
RIST™ Trust Diagnostic
Whole-system review of trust — 4 dimensions, 2 failure modes
PROCESS
Skeptics Roundtable
Surface implicit trust/distrust assumptions about AI reliability
BEHAVIOR
Vulnerability First
Leaders admitting ignorance and modeling uncertainty publicly
SKILL
The Trust Talk
Doing what you say you will — the behavioral foundation of trust
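The pillar above describes four trust dimensions crossed with two failure modes, with each cell demanding a different intervention. A minimal Python sketch of that structure follows; the dimension names come from the text, while every intervention string and identifier is an illustrative assumption, not the published RIST™ content.

```python
from enum import Enum

# Sketch of the 4-dimensions x 2-failure-modes trust structure.
# Dimension names come from the framework; the interventions and all
# identifiers below are illustrative assumptions for this sketch.

class FailureMode(Enum):
    UNDER_TRUST = "under-trust"  # warranted capability, withheld reliance
    OVER_TRUST = "over-trust"    # unwarranted reliance, unchecked output

# One distinct intervention per (dimension, failure mode) cell.
INTERVENTIONS = {
    ("Relational",    FailureMode.UNDER_TRUST): "leaders share their own AI misses and fixes",
    ("Relational",    FailureMode.OVER_TRUST):  "peer review of AI-assisted work before it ships",
    ("Institutional", FailureMode.UNDER_TRUST): "publish how AI use is governed and audited",
    ("Institutional", FailureMode.OVER_TRUST):  "add escalation paths for questionable AI output",
    ("Self-Trust",    FailureMode.UNDER_TRUST): "sandbox practice to rebuild self-efficacy",
    ("Self-Trust",    FailureMode.OVER_TRUST):  "calibration drills against known-answer tasks",
    ("Task Trust",    FailureMode.UNDER_TRUST): "show benchmark evidence for tasks AI does well",
    ("Task Trust",    FailureMode.OVER_TRUST):  "spot-check samples of AI output for this task",
}

def intervention(dimension: str, mode: FailureMode) -> str:
    """Look up the intervention for one diagnosed trust failure."""
    return INTERVENTIONS[(dimension, mode)]

print(intervention("Task Trust", FailureMode.OVER_TRUST))
```

The design point is the lookup itself: the diagnosis names a cell, and the cell names an intervention — "low trust" alone is never actionable.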
Put People First™
Start with augmentation. Efficiency follows faster than if you start with efficiency.
Traditional technological change cut the check, bought the tech, and persuaded people to use it. Invert the order: begin with tools and workflows that serve the human, that augment rather than automate. The human is the integration layer, not the end-user. People-first sequencing produces faster adoption, higher trust, and ultimately greater efficiency than efficiency-first approaches.
TOOL
Friction Audit
Mapping the drudgery in current workflows; start AI here
PROCESS
Hopes and Fears
Map what people are actually afraid of, not what the change plan assumes
BEHAVIOR
“Superpower” Mindset
Reframing AI as a bionic suit, not a replacement
SKILL
Skill Shoutouts
Cheering when drudgery is removed — reinforcing augmentation
Design and Prototype
Every initiative is a prototype until evidence says otherwise.
Traditional change management assumes you know the destination: define the future state, plan the transition, execute. AI moves too fast to define a future state. Design thinking applied to organizational change: Build–Measure–Learn. Rapid prototyping sprints, safe-to-fail experiments, and evidence-based scaling replace big-bang rollouts.
TOOL
No-Code Sandboxes
Tools for non-technical builders to prototype AI workflows
PROCESS
BUILD: Bound the Experiment
Define hypothesis, minimum viable test, and kill criteria
BEHAVIOR
Bias for Action
Taking quick steps before having perfect information
SKILL
“Rough Work” Transparency
Showing unfinished work to learn faster
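The BUILD step above — hypothesis, minimum viable test, kill criteria — can be sketched as a small data structure in which the kill decision is checked mechanically rather than renegotiated after the fact. A hedged sketch; the class, field, and metric names are illustrative assumptions, not a published artifact of the framework.

```python
from dataclasses import dataclass

# Illustrative sketch of "bound the experiment": declare the hypothesis,
# the minimum viable test, and explicit kill criteria up front, then
# check observed metrics against those floors. All names are assumptions.

@dataclass
class Experiment:
    hypothesis: str
    minimum_viable_test: str
    kill_criteria: dict[str, float]  # metric name -> worst acceptable value

    def should_kill(self, observed: dict[str, float]) -> bool:
        """Kill if any observed metric falls below its declared floor."""
        return any(observed.get(metric, 0.0) < floor
                   for metric, floor in self.kill_criteria.items())

pilot = Experiment(
    hypothesis="AI-drafted summaries cut report time 30% without quality loss",
    minimum_viable_test="Two weeks, one team, five reports",
    kill_criteria={"quality_score": 7.0, "time_saved_pct": 10.0},
)

# Quality held up, but time savings missed the floor -> kill.
print(pilot.should_kill({"quality_score": 8.2, "time_saved_pct": 5.0}))  # True
```

Declaring the floors before the data arrives is what makes the experiment safe-to-fail rather than too-big-to-kill.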
Prioritize Behavior
Change the environment, change the behavior; the mindset catches up.
Awareness, desire, and knowledge do not add up to action. The intention-action gap is empirically documented — Aristotle named it akrasia twenty-four centuries ago. Pillar 6 replaces "persuade then pray" with a diagnosis-first, environment-first, behavior-first methodology, built on COM-B diagnosis, behavioral systems mapping, and behavioral drivers.
TOOL
Behavioral Science — Diagnostics and Drivers
Capability, Motivation, Trust, Opportunity — diagnose which is the actual blocker
PROCESS
SHIFT: Specify Behavior
Translate vague outcomes into observable actions (who / what / when / where)
BEHAVIOR
Environmental Design
Tweak defaults, reduce friction, restructure choice architecture
SKILL
Friction Mapping
Identifying sludge that makes the desired behavior harder than it needs to be
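The SHIFT "specify behavior" step above can be made concrete with a tiny structure that forces who, what, when, and where to be filled in, so a vague outcome cannot masquerade as a behavior. All names here are illustrative assumptions, not the published method.

```python
from dataclasses import dataclass

# Illustrative sketch of behavior specification: a vague outcome becomes
# an observable who/what/when/where action statement. Field and example
# values are assumptions for this sketch.

@dataclass
class BehaviorSpec:
    who: str
    what: str   # an observable action, not an attitude
    when: str
    where: str

    def statement(self) -> str:
        return f"{self.who} {self.what} {self.when}, {self.where}."

vague_outcome = "People should embrace AI"  # unmeasurable as stated
specific = BehaviorSpec(
    who="Every account manager",
    what="drafts client emails with the approved AI assistant",
    when="each morning",
    where="in the CRM",
)
print(specific.statement())
```

The test of a good specification is simple: an observer could watch for a week and say whether the behavior happened.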
Manage Ethics Always
Ethics is not compliance and it isn't moralizing — it's phronesis.
A "short list" of AI ethical issues runs to twenty items; no previous technology has been so powerful or so dangerous. Novel moral questions arise faster than governance can adjudicate. The leader is the ethicist in the room by default. Phronesis — practical wisdom — the meta-ethical skill that selects the right ethical lens for the context. Ethics as practiced capability, not governance appendix.
TOOL
Should-We Scan
Business Model Canvas adapted for ethical risk: who is affected, what could go wrong
PROCESS
Pre-Mortems
Harm-focused pre-mortem before every deployment
BEHAVIOR
Stop-Cord Practice
The moral pause button before every deployment decision
SKILL
Radical Transparency
Admitting AI limitations publicly — to users, to clients, to the board
THE DIAGNOSTIC ENGINE

Behavioral Science — Diagnostics and Drivers (adapted from COM-B)

C
Capability
“Can I?”

Skills, tools, knowledge, and access. The most common misdiagnosis: assuming motivation is the problem when people literally don't know how.

M
Motivation
“Why should I?”

Incentives, purpose, identity. Extrinsic rewards get compliance. Intrinsic motivation — autonomy, mastery, purpose — gets adoption.

T
Trust
“Should I?”

The four-dimensional trust problem. Relational, Institutional, Self-Trust, and Task Trust each fail differently and require different fixes.

O
Opportunity
“Am I enabled?”

Friction, access, time, governance. The org that blocks AI tools behind a 9-month legal review has diagnosed its own failure before it begins.
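The four drivers above lend themselves to a simple triage: score each driver, flag those below a floor, and treat the weakest as the first blocker to address. A minimal Python sketch, in which the scoring scale, threshold, and all function names are illustrative assumptions rather than part of the published framework:

```python
# Illustrative triage over the four drivers above. Driver names come
# from the text; the 0-100 scale, the threshold, and the function names
# are assumptions for the sake of the sketch.

DRIVERS = {
    "Capability":  "Can I?",
    "Motivation":  "Why should I?",
    "Trust":       "Should I?",
    "Opportunity": "Am I enabled?",
}

def diagnose(scores: dict[str, float], threshold: float = 60.0) -> list[str]:
    """Return the drivers scoring below the threshold, weakest first."""
    blockers = [d for d in DRIVERS if scores.get(d, 0.0) < threshold]
    return sorted(blockers, key=lambda d: scores.get(d, 0.0))

# Example team: motivation is fine; capability, not attitude, is the blocker.
team = {"Capability": 42.0, "Motivation": 78.0, "Trust": 55.0, "Opportunity": 70.0}
print(diagnose(team))  # ['Capability', 'Trust']
```

The point the table makes — that motivation is the most common misdiagnosis — falls out naturally here: the triage starts from evidence per driver instead of assuming which one failed.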

INTELLECTUAL PROVENANCE

The Change Agility Reading List

The books and research that built this framework
ARGYRIS, CHRIS & SCHÖN, DONALD
Organizational Learning (1978)
Single-loop vs. double-loop learning. Organizations that adopt AI without questioning their adoption assumptions are single-looping.
ARISTOTLE
Nicomachean Ethics
Phronesis — practical wisdom as the meta-virtue. The episteme/techne/phronesis distinction is the backbone of Pillar 7. Akrasia — the intention-action gap — underpins Pillar 6.
AUTOR, DAVID
Why Are There Still So Many Jobs? (2015)
Task-level analysis of automation. The unit of AI adoption is the task, not the job — foundational to Pillar 4.
BANDURA, ALBERT
Self-Efficacy (1977)
Belief in one's own capability predicts behavior change. The psychological mechanism behind Pillar 1's emphasis on mastery through practice.
BROWN, TIM
Change by Design (2009)
Design thinking and the IDEO methodology. Prototyping as a way of thinking, not just testing — the foundation of Pillar 5.
DAVENPORT, THOMAS & KIRBY, JULIA
Only Humans Need Apply (2016)
Augmentation vs. automation. Five strategies for human-AI partnership. The intellectual ancestor of Pillar 4's people-first sequencing.
DEWEY, JOHN
Experience and Education (1938)
Learning by doing. Experience is primary, theory secondary. The philosophical anchor for Pillar 1: craft over curriculum.
EDMONDSON, AMY
The Fearless Organization (2019)
Psychological safety as a prerequisite for learning and experimentation. Necessary for every pillar — but insufficient without trust calibration.
ERICSSON, ANDERS
Deliberate Practice (1993)
Expertise is built through structured, intentional practice with feedback — not through time served. The learning science behind Pillar 1.
FEYNMAN, RICHARD
The Pleasure of Finding Things Out (1999)
Anti-bullshit, direct contact with reality, curiosity, tinkering, and epistemic honesty. A spirit-book for the entire framework.
FLORIDI, LUCIANO
The Ethics of Artificial Intelligence (2023)
Information ethics and the philosophical infrastructure for AI governance. The academic grounding for Pillar 7's active ethics.
GEBRU, TIMNIT (IN MITCHELL, MARGARET ET AL.)
Model Cards for Model Reporting (2019)
Transparency tools for documenting AI systems. The inspiration for Adaptive Adoption's own model card architecture.
GIBBONS, PAUL
The Science of Organizational Change (2015, 2019)
The argument that change management must be rebuilt on behavioral science, complexity, trust, and ethics — not persuasion models from the 1940s.
GOLLWITZER, PETER
Implementation Intentions (1999)
If-then planning — the strongest known intervention for closing the intention-action gap. The mechanism behind Pillar 6's behavioral tools.
HALPERN, DAVID
Inside the Nudge Unit: How Small Changes Can Make a Big Difference (2015)
Applied behavioral science from the UK's Behavioural Insights Team — translating nudge theory into organizational and policy-level decision-making.
HEALY, JAMES
BS at Work
Behavioral science applied to organizational change. Change the environment, change the behavior.
HEALY, JAMES & GIBBONS, PAUL
Adopting AI: The People-First Approach (2025)
A human-centered approach to AI strategy, adoption, and ethics.
HUME, DAVID
A Treatise of Human Nature (1739)
"Reason is, and ought only to be, the slave of the passions." The philosophical ancestor of behavioral science — motivation precedes rational compliance.
KAHNEMAN, DANIEL
Thinking, Fast and Slow (2011)
Dual-process theory. Trust defaults and automation bias are System 1 failures in a System 2 domain. Central to Pillars 3, 6, and 7.
KENNEDY, TRICIA & GIBBONS, PAUL (EDS.)
The Future of Change Management, Vol. 1 (2024)
Contributed chapters on mental health, neuroscience, behavioral tools, trust, and complexity-native change design.
LAVE, JEAN & WENGER, ETIENNE
Situated Learning (1991)
Legitimate peripheral participation. Learning happens at the edge of practice, not in classrooms. The social theory behind communities of practice.
LEE, JOHN D. & SEE, KATRINA A.
Trust in Automation (2004)
The undertrust/overtrust spectrum. People miscalibrate trust in automated systems predictably — and dangerously.
LITTLE, JASON
Lean Change Management (2014)
Iterative, feedback-driven approach to organizational change.
MANIFESTO FOR AGILE SOFTWARE DEVELOPMENT
agilemanifesto.org (2001)
The original source text: learning by doing, responsiveness over rigid planning, working reality over documentation theater. The spirit ancestor of Pillars 1 and 5.
MEADOWS, DONELLA
Thinking in Systems (2008)
Leverage points and feedback loops. Where you intervene in the system matters more than how hard you push.
MEZA, ROBERT
Aim for Behavior
The argument that change programs should target observable behaviors, not attitudes or awareness. Central to Pillar 6.
MEZA, ROBERT & GIBBONS, PAUL
Behavioral Science Tools for the Change Professional (2024)
The SHIFT method for behavioral diagnosis. Published in The Future of Change Management, Vol. 1.
MICHIE, SUSAN ET AL.
The Behaviour Change Wheel (2014)
COM-B model and 93 behavior change techniques. The most comprehensive taxonomy of behavior change interventions available.
MINTZBERG, HENRY
The Rise and Fall of Strategic Planning (1994)
The canonical anti-planning ancestor. AI moves too fast to define a future state; you must design and iterate your way toward it.
NONAKA, IKUJIRO & TAKEUCHI, HIROTAKA
The Knowledge-Creating Company (1995)
Tacit-to-explicit knowledge conversion. How organizations learn from practitioners, not just databases.
PEARL, JUDEA
Causality: Models, Reasoning, and Inference (2000)
The formal framework for causal inference. Why correlation-based AI needs causal reasoning — and why organizations confuse the two.
REST, JAMES
Four-Component Model of Moral Behavior (1986)
Ethical behavior requires sensitivity, judgment, motivation, and character — not just rules. The psychology behind Pillar 7.
RIES, ERIC
The Lean Startup (2011)
Build-measure-learn. Every AI initiative is a prototype until evidence says otherwise. The operational logic of Pillar 5.
SARASVATHY, SARAS
Effectuation (2001)
Entrepreneurial logic under uncertainty — start with what you have, not what you predict. The design principle behind probe-and-iterate.
SCHÖN, DONALD
The Reflective Practitioner (1983)
Reflection-in-action and knowing-in-action. How professionals actually think in practice, not in theory.
SENGE, PETER
The Fifth Discipline (1990)
Systems thinking and the learning organization. The aspiration that organizations can learn — and the evidence that most don't.
SENNETT, RICHARD
The Craftsman (2008)
The dignity and epistemology of working with your hands. You learn by making, not by reading about making. The philosophical backbone of Pillar 1.
SNOWDEN, DAVE
Cynefin Framework (1999)
Domain distinctions — simple, complicated, complex, chaotic. Matching the intervention to the domain. The operating system of Pillar 2.
STACEY, RALPH
Complexity and Management (2001)
Complex responsive processes. Complicated is expert-solvable; complex is emergent and unpredictable. The theoretical foundation for Pillar 2.
SUMMERFIELD, CHRISTOPHER
These Strange New Minds (2025)
How AI systems develop cognitive capabilities that are familiar yet alien. The neuroscience-informed perspective on what AI actually does.
THALER, RICHARD & SUNSTEIN, CASS
Nudge (2008)
Choice architecture — designing the environment so the default behavior is the desired behavior. The policy logic behind Pillar 6.
WARDLEY, SIMON
Wardley Mapping (2016)
Situational awareness through value-chain mapping. Understanding where you are before deciding where to go.
WEISBORD, MARVIN
Productive Workplaces (1987, 2012)
People-first, whole-system, dignity-and-meaning OD — without collapsing into stale change-management tropes.
WENGER, ETIENNE
Communities of Practice (1998)
The unit of learning is the community, not the individual. The social architecture behind Pillar 1's peer learning circles.
Gibbons original IP in this framework:

  • Behavioral Science — Diagnostics and Drivers, adapted from COM-B (Capability, Motivation, Trust, Opportunity)
  • The AI Mastery Matrix (6 dimensions × 4 levels)
  • The RIST Trust Framework™ (Relational, Institutional, Self-Trust, Task Trust)
  • The undertrust/overtrust spectrum applied to AI
  • Augmentation-first sequencing
  • Complexity-native adoption design
  • Prototype-native change design
  • Ethics as practiced capability
  • The claim that change management as a discipline has not yet caught up with the demands of the AI era — and the attempt to close that gap
  • The entire Adaptive Adoption™ structure, naming, and operational content

See how Change Agility works in your organization

A board briefing, a diagnostic workshop, or a full Adaptive Adoption sprint — tailored to where you are.

Book a Consultation · Explore the Repo