The Corpus

Change is inevitable — whether it represents progress is up to us.

The world is full of questions that matter. It is also full of internet noise.

Most AI is trained on the internet. This one was trained on me.

What "bakes in" ethical challenges in large language models?
How does the psychology of risk skew board decisions — and how did it bring about the global financial crisis?
How might the economics of AI affect the liberal international order that peaked during the Blair-Clinton years?
How are cognitive biases a factor in both organizational change and internet slop?
What do most businesses get wrong about AI adoption?
Why do some doctors call GLP-1s "the first true longevity drug"?
Where is Paul Gibbons wrong?
What is the link between information disorder and organizational culture?

Ask anything.

Answers are drawn from 8 published books, 2 unpublished manuscripts, 5 works in progress, and 50 articles. Sources cited in every response.

Ask a question about leadership, change, AI adoption, or anything in the work. Every answer is grounded in the source material.

I'm Client Zero

When I started giving talks on AI in 2017, it was theory. Theory helps but doesn't solve business problems. I wanted to feel the same pain my clients feel — building something real with my own hands. One person. No team. No engineering degree. No budget. The pipeline, the chunking, the embeddings, the query layer — built using Claude, from scratch, in a few weekends. The engineering isn't a footnote. It is the argument.

The Internet Is an Averaging Machine

Ask any AI about organizational change and you get the consensus. The SEO-optimized. The algorithmically safe. The Corpus draws on a body of scholarship with a 30-year evidence base, peer-reviewed foundations, and a documented intellectual lineage — positions taken, defended, and revised in public, across eight books and dozens of white papers. Not the average of what everyone thinks. What one person has argued, tested, and staked their reputation on.

Ask the Hard Questions

To get to grips with an author, you buy their books, spend hours reading them, or skim the index and hope your interests are covered. Across eight books and dozens of papers, that is a daunting and expensive project.

In the Corpus, you can ask:

  • Does Paul actually have evidence for his claims about culture change?
  • How does Adaptive Adoption differ from Kotter — and why does it matter?
  • What is the link between information disorder and organizational culture?
  • Where is he wrong?

Real interrogation. Cited sources. No puffery. A library that talks back.

As of March 2026, Luciano Floridi — one of the most cited philosophers of information — is the only comparable public intellectual to have made a substantial body of work queryable through a dedicated AI interface.

This is the first in the management and leadership disciplines.
And the first built by the author himself.

How the Corpus Thinks

Corpus

What goes in — 8 books, manuscripts, articles, white papers, and unpublished work.

  • BOOKS · 8 Published Books · 30 years of research, frameworks, and evidence [live]
  • UNPUBLISHED · The Great Collisions · 2 volumes, nowhere else on the internet [live]
  • GITHUB · Adaptive Adoption · Living framework repo, versioned and growing [v2.0]
  • ESSAYS · Substack and Talks · Long-form thinking, keynote transcripts [v2.0]

Processing

How text is chunked, embedded, and indexed for semantic search.

  • CHUNKER · agent-chunker.py · Semantic chunking by argument structure [running]
  • EMBEDDINGS · OpenAI Ada · One 1,536-dimension vector per chunk [live]
  • DATABASE · Supabase pgvector · paulgpt_chunks table, 1,400+ chunks stored [live]
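
The embedding step can be sketched in a few lines of Python. This is a minimal illustration, not the production agent-chunker.py pipeline; the table columns, environment variable names, and the embed_and_store helper are assumptions:

    # A sketch of the embedding-and-storage step; names are illustrative.
    import os

    from openai import OpenAI           # pip install openai
    from supabase import create_client  # pip install supabase

    openai_client = OpenAI()  # reads OPENAI_API_KEY
    supabase = create_client(os.environ["SUPABASE_URL"],
                             os.environ["SUPABASE_SERVICE_KEY"])

    def embed_and_store(chunks: list[dict]) -> None:
        """Embed each chunk with OpenAI Ada and insert it into pgvector."""
        for chunk in chunks:
            embedding = openai_client.embeddings.create(
                model="text-embedding-ada-002",  # 1,536-dimension vectors
                input=chunk["content"],
            ).data[0].embedding
            supabase.table("paulgpt_chunks").insert({
                "content": chunk["content"],
                "source": chunk["source"],  # book or paper title, for citations
                "embedding": embedding,
            }).execute()
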
Retrieval

How the system finds the most relevant passages for your question.

  • QUERY · Semantic Search · Question converted to a vector embedding in real time [cosine similarity]
  • MATCH · Top-k Chunks · Passages ranked by cosine similarity to the question [k = 8]
  • VERCEL · Query Layer · Serverless function, API-key protected [live]
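
Retrieval follows the same pattern: embed the question, then ask Postgres for its nearest neighbors. A sketch, assuming a pgvector matching function named match_chunks (an illustrative name, not the live schema) that orders rows by the cosine-distance operator <=>:

    # A sketch of the retrieval step; reuses openai_client and supabase
    # from the previous sketch. Assumes a Postgres function roughly like:
    #   ... ORDER BY embedding <=> query_embedding LIMIT match_count
    def retrieve_top_k(question: str, k: int = 8) -> list[dict]:
        """Embed the question and return the k most similar chunks."""
        query_embedding = openai_client.embeddings.create(
            model="text-embedding-ada-002",
            input=question,
        ).data[0].embedding
        rows = supabase.rpc("match_chunks", {
            "query_embedding": query_embedding,
            "match_count": k,  # k = 8 in the Corpus
        }).execute()
        return rows.data  # e.g. [{"content": ..., "source": ..., "similarity": ...}]
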
Response

How Claude synthesizes an answer grounded in the source material.

  • CLAUDE API · Anthropic Claude · Chunks + system prompt = grounded answer [Sonnet 4]
  • CITATIONS · Source Attribution · Every answer names its book sources [v1.0]
  • INTERFACE · The Chatbox · Embedded on paulgibbonsadvisory.com via Vercel [live]
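
The synthesis step passes the retrieved chunks to Claude inside the system prompt. Another sketch: the prompt wording and the answer helper are assumptions, with the model pinned to Sonnet 4 as in the pipeline above:

    # A sketch of the response step; the system prompt is illustrative.
    import anthropic  # pip install anthropic

    claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    def answer(question: str, chunks: list[dict]) -> str:
        """Ask Claude to answer only from the retrieved passages."""
        context = "\n\n".join(
            f"[Source: {c['source']}]\n{c['content']}" for c in chunks
        )
        message = claude.messages.create(
            model="claude-sonnet-4-20250514",  # "Sonnet 4"
            max_tokens=1024,
            system=("Answer only from the passages below, and name the book "
                    "or paper behind every claim.\n\n" + context),
            messages=[{"role": "user", "content": question}],
        )
        return message.content[0].text

Chained together, answer(q, retrieve_top_k(q)) is the whole query path: question in, cited answer out.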

Sample Questions & Answers

Real answers drawn from the Corpus. Every response is grounded in Paul's published and unpublished work.

What is the LIAR model and how is it used?

LIAR stands for Logic, Intuition, Authority, and Research evidence — a critical thinking framework I adapted from the epistemological work of Jenny Duke-Yonge at Macquarie University. It gives practitioners four lenses for evaluating any claim: Does the reasoning hold up logically? Does it accord with expert intuition? What authority stands behind it, and is that authority earned? And what does the empirical evidence actually show? The real power is in using all four together and weighing them for the situation. I developed LIAR as an antidote to epistemic vice — the uncritical acceptance of management fads, guru pronouncements, and junk science. The classic example: the "70% of change initiatives fail" myth survived 30 years despite zero empirical foundation. LIAR would have killed it in its cradle.

Sources: Change Myths, The Science of Organizational Change

What is Adaptive Adoption and how does it differ from traditional change management?

Adaptive Adoption is a people-first AI adoption framework that treats AI implementation as a human transition, not a technology rollout. Traditional change management follows top-down, linear playbooks — assess, plan, communicate, train. Adaptive Adoption is bottom-up and experimental, built on three pillars: behavioral governance (how do we shape the norms and habits around AI use?), change agility (how do organizations learn to become disruptors rather than the disrupted?), and the AI Leadership Delta (what new capabilities do leaders need when their smartest colleague is a machine?). It puts ethics at its center rather than treating it as a compliance checkbox, and it acknowledges that AI adoption is fundamentally different from previous technology waves because the technology itself is unpredictable, opaque, and evolving faster than our governance structures.

Sources: Adopting AI, Adaptive Adoption Framework

What does Paul say about game theory, trade, and the global economy?

My markets writing examines how three intellectual pillars that held up the global order for forty years are fracturing simultaneously. We bet the house on Ricardian efficiency — the idea that it doesn't matter where things are made, only that they're cheap. We bet on institutional game theory — the belief that rational actors in repeated games would keep cooperating. And we bet on delayed gratification — investing in emerging markets and just-in-time supply chains, trusting that interdependence would make everyone richer and safer. Modern game theory, formalized by von Neumann and refined by Nash, reveals an uncomfortable truth through the Prisoner's Dilemma: rational actors acting in self-interest will often choose mutual destruction over cooperation. The question now is whether the liberal international order can survive when its foundational assumptions about rational cooperation are being tested by nationalism, tariff shocks, and AI-driven economic disruption.

Sources: Markets v1.0

How does philosophy apply to artificial intelligence?

Philosophy isn't optional decoration for AI — it provides the essential frameworks we're missing. Epistemology asks how we can know what is true when synthetic systems generate plausible but unreliable content. Philosophy of mind asks whether these systems are conscious, and what it even means to ask that question. Political philosophy examines how we govern technologies that concentrate power. Aesthetics asks whether AI-generated art is really art. Philosophy of science asks what happens to the scientific method when AI can generate hypotheses faster than we can test them. And the philosophy of technology — once considered fringe — now poses the most urgent questions of our time: How are humans different from machines? Are we just machines made of carbon rather than silicon? These aren't abstract puzzles. They're the operating system for every policy decision about AI.

Sources: Philosophy and AI Series (Substack)

What is wrong with popular leadership advice?

Pop leadership is dangerous because it substitutes inspiration for information. Dan Dennett coined the term "deepity" — a statement that sounds profound but asserts something trivial on one level and meaningless on another. Leadership literature is saturated with deepities. The field confuses popularity with validity, speaking fees with scholarship, and TED talks with peer review. When a guru earns $50,000 per keynote, their ideas get reinforced by commercial success rather than empirical testing. The result is an industry built on unfalsifiable claims, recycled metaphors, and what I call epistemic vice — the uncritical acceptance of attractive ideas. We need second-tier thinking in leadership: complexity-aware, evidence-grounded, and willing to say "we don't know" rather than offering false certainty.

Sources: Change Myths, Impact, The Science of Organizational Change

Why do humans misunderstand risk and probability?

In a VUCA world, you need to understand volatility and uncertainty. Yet even CEOs, CFOs, and senior business people I work with are ignorant of some of the basics of risk and probability, which can make them the "sucker" in the game. We confuse risk with uncertainty, treat low-probability events as impossible, and mistake a good outcome for a good decision. In poker, you can make the mathematically perfect play — putting your chips in with pocket Kings — and still lose. The universe deals a bad card. But that doesn't make the decision wrong. We systematically overweight vivid, recent events and underweight base rates. We confuse correlation with causation. We anchor to irrelevant numbers. These aren't just cognitive curiosities — they explain financial crises, failed strategies, and organizational disasters from Deepwater Horizon to the 2008 meltdown.

Sources: Markets v1.0, The Science of Organizational Change, Impact

What is the role of spirituality in the workplace?

This is probably the most comprehensive scholarly treatment of workplace spirituality to date — and I want to be clear: not self-help, not woo. The serious evidence-based case for meaning-making, vocation, purpose, happiness, mindfulness, altruism, motivation, and engagement in organizational life. Drawing on psychology, philosophy, evolutionary biology, sociology, and systems thinking, the work examines what spirituality actually is, how it differs from religion, and whether spiritual workplace experiences can be intentionally cultivated. The core argument is that when organizations ignore the spiritual dimension — the search for meaning, connection, and purpose — they hollow out the very things that make people want to bring their full selves to work.

Sources: The Spirituality of Work and Leadership

How should organizations measure AI adoption success?

Most organizations are measuring AI adoption wrong — they track deployment metrics (how many tools rolled out, how many people trained) rather than behavioral metrics (how many people actually changed how they work). The Adaptive Adoption Maturity Model assesses organizational readiness across behavioral, governance, and cultural dimensions. Where are people experimenting? Where are they resisting, and why? Are ethical guardrails in place before scale, or bolted on after? The measurement challenge with AI is compounded by the intention-action gap — the chasm between what people say they'll do with AI and what they actually do. Behavioral science offers solutions here: observe behavior, don't just survey attitudes.

Sources: Adopting AI, AAMM White Paper, Adaptive Adoption Framework

What can behavioral science teach us about organizational change?

Behavioral science was the missing foundation for change management. For decades, the field relied on intuitive models — Kotter's 8 steps, Lewin's unfreeze-change-refreeze — without grounding them in how humans actually think and decide. The Science of Organizational Change was the first book to bring behavioral economics, cognitive psychology, complexity theory, and mindfulness research together for change practitioners. The key insight: cognitive biases don't just affect individuals, they scale up to organizational pathologies. Confirmation bias becomes groupthink. Loss aversion becomes resistance to change. The planning fallacy becomes failed transformation timelines. Once you understand the behavioral architecture, you can design change interventions that work with human nature rather than against it.

Sources: The Science of Organizational Change, Change Myths, Impact

Can AI be trusted, and what does trust in AI adoption require?

Trust arrives on foot and leaves on horseback. One bad experience with an early language model — confidently wrong, embarrassingly biased — can poison an entire organization's willingness to adopt AI. Trust in AI adoption is not a technology problem; it's a human systems problem. It requires transparency (people need to understand what the AI is doing and why), predictability (consistent behavior builds confidence), competence (the AI must actually deliver value), and ethical guardrails (people need to know someone is watching). Most leaders get this backwards — they focus on proving AI's technical capability when the real bottleneck is psychological safety. People need permission to experiment, permission to fail, and the assurance that AI augments their judgment rather than replacing it.

Sources: Adopting AI, Trust and AI Adoption (Substack)

1,400 PASSAGES · 8 BOOKS · CITED ANSWERS

TEST DRIVE THE CORPUS →