Why generative AI can stretch human capability across adjacent work — but still hits a wall where deep judgment begins
This is ultimately a question of AI and domain expertise, not AI fluency alone.

Direct answer: Generative AI can extend capability into nearby domains, especially in structured and conceptual work. But when execution depends on tacit knowledge, domain judgment, and the ability to recognize what quality really means, AI becomes much less effective at closing the gap between specialists and outsiders. The real limit is not raw machine fluency. It is knowledge distance — the gap between what a person already knows and what the target domain requires them to recognize, refine, and judge well.
Key insights
AI stretches adjacent capability more easily than distant capability.
It helps more with structured thinking than domain-native execution.
The scarce resource in the AI era is not output generation, but evaluative judgment.
AI and domain expertise: where the gap really appears
The public conversation around AI still swings between two shallow extremes. One side says generative AI will flatten expertise and make specialists less valuable. The other says AI is overhyped and will change much less than people imagine.
Both positions miss something important.
The more serious question is not whether AI makes everyone smarter in the abstract. It is whether AI can help people cross from one form of expertise into another — and where that transfer breaks down.
That is no longer a theoretical question. A Harvard summary of the research framed the issue clearly: generative AI can help people stretch beyond their usual roles, but it still does not erase the difference between outsiders and real experts. The underlying working paper introduces the idea of a GenAI wall — the point at which AI no longer meaningfully closes the gap between insiders and outsiders because the outsiders are simply too far from the domain’s underlying logic.
This is a much sharper way to think about AI and work.
It moves us beyond vague claims about productivity and toward a more useful question: Where does AI actually transfer capability, and where does it fail to transfer judgment?
What the study actually examined
The study was conducted at IG, a large UK firm. Researchers examined 78 employees across three occupational groups:
- web analysts as domain insiders,
- marketing specialists as adjacent outsiders,
- technology specialists as distant outsiders.
Participants completed two kinds of work:
- conceptualization of a web article,
- execution of the full article.
Some participants worked with a bespoke GenAI tool tailored to the company’s environment rather than a generic public model. That matters, because the study did not compare vague opinions about AI. It tested how people from different occupational backgrounds performed the same real work, with and without GenAI support.
The core finding was clear: AI was more effective at bridging expertise gaps for near occupations than for distant occupations, and more effective for conceptualization than for execution.
Study snapshot
- Setting: IG, UK-based firm
- Sample: 78 employees
- Groups: web analysts, marketing specialists, technology specialists
- Tasks: conceptualization and execution of a web article
- Key finding: GenAI narrowed the gap more effectively in adjacent roles than distant roles, and more in ideation than in execution
What knowledge distance means
Knowledge distance is the gap between a person’s existing mental model and the domain logic required to perform, evaluate, and refine work in another field.
In plain English, it is the difference between:
- doing work that sits next to what you already know,
- and doing work that only looks similar from the outside.
The paper frames this through adjacent outsiders and distant outsiders. Adjacent outsiders share more overlap with the insider role, so their prior knowledge transfers more easily. Distant outsiders face a much harder form of transfer, because the underlying skills, evaluative criteria, and working assumptions diverge more sharply.
That is why the real AI question is not "Can the model generate something plausible?"
It is "Can the human using it recognize what good looks like in this domain?"
The GenAI wall
The paper’s most useful concept is the GenAI wall.
This is the horizontal limit of AI-enabled expertise transfer: the point where AI can no longer bridge the gap between insiders and outsiders because the outsider lacks the foundational knowledge needed to use AI recommendations well.
Put simply, the GenAI wall appears when AI can still generate plausible work, but the user lacks enough domain understanding to judge, adapt, and finish that work well.
That idea deserves attention because it corrects a common mistake in AI strategy.
“AI can lend fluency across domains faster than it can lend judgment within them.”
Many teams assume that if AI can produce fluent output, role boundaries are collapsing. But fluency is not the same as domain-native performance. The paper suggests something more precise:
- AI can often help people borrow the surface structure of another domain.
- It is much weaker at giving them the embedded evaluative judgment that experts use instinctively.
That is where the wall appears.
Where AI worked — and where it did not
The study found that with GenAI support, both adjacent and distant outsiders could perform at insider level in the conceptualization task. This suggests AI can be highly effective for abstract, structured, idea-oriented work.
But the pattern changed in execution.
For article execution, GenAI helped marketing specialists close the gap with web analysts. It did not help technology specialists fully close that gap. Their performance distribution remained wider, and the gap persisted.
That is the wall in action.
It is not that the technology specialists were incapable people. It is that they approached the task through a different evaluative framework.
In effect, AI narrowed the gap in thinking about the work more than it narrowed the gap in doing the work well.
Why execution breaks before ideation

This is where the argument becomes especially relevant for leaders.
Conceptualization is often more abstract. It can be scaffolded by prompts, examples, structure, and pattern recognition. Execution, by contrast, usually requires embodiment. It demands turning a concept into a finished artifact in a way that fits the norms, priorities, and hidden quality criteria of the domain.
That maps onto real-world work surprisingly well.
It is one thing to ask AI for:
- an outline,
- a first draft,
- a comparison,
- a list of ideas.
It is another thing to ask it — and a non-expert user — to decide:
- what should be emphasized,
- what should be removed,
- what quality standards truly matter,
- what makes the final output successful in its native environment.
The farther a user is from the domain, the harder that becomes.
Why distant outsiders still struggled

The technology specialists in the study did not simply “write worse.” They used a different mental model of the task.
The qualitative evidence suggests they treated article writing more like technical documentation: concise, literal, stripped down, clarity-first. But the insider domain — web analysts — was operating with a different frame, one shaped by SEO (search engine optimization), CRO (conversion rate optimization), audience engagement, headings, structure, and external-facing persuasion.
That difference is revealing.
The issue was not just output generation. The issue was evaluation.
Marketing specialists were close enough to share a related logic of messaging and audience relevance. Technology specialists were not. So even when AI generated useful suggestions, they were less able to judge which outputs matched the domain’s actual success criteria.
This is also where the broader cognitive implications of AI become important: fluency can accelerate output without deepening understanding at the same rate.
This is the part many organizations still underestimate.
AI does not only require prompting skill. It requires selection skill, editing skill, and domain-aware judgment.
The illusion of universal expertise
Generative AI creates a powerful illusion: because it is fluent, it feels transferable.
A convincing paragraph can make someone feel more expert than they are. A structured answer can make cross-domain work look easier than it is. A polished draft can conceal the fact that the user does not yet understand the real quality criteria of the field.
That illusion is dangerous in strategy, hiring, organizational redesign, and content operations.
Because once leaders assume role boundaries have dissolved, they start making the wrong substitutions:
- replacing specialists with generalists too aggressively,
- collapsing expert review too early,
- mistaking first-draft speed for finished-work competence,
- undervaluing domain knowledge precisely when AI makes surface-level performance look deceptively strong.
The paper points toward a more disciplined interpretation: GenAI may significantly widen adjacent capability, but it does not automatically erase deep specialization.
The problem is not only mistaken confidence, but also the way machine-assisted speed can intensify continuous partial attention and reduce reflective depth.
What this means for leaders
The practical lesson is not “AI is limited.” That is too vague to guide decisions.
The real lesson is that AI changes organizations unevenly.
1. Expand adjacent capability deliberately
The best early gains often come when AI helps people operate one layer beyond their home expertise. This is where learning transfer is easier, judgment overlap is higher, and AI can actually compress coordination and drafting time in useful ways.
2. Protect expert review where tacit judgment matters
When work depends on domain-specific success criteria, insider evaluation remains critical. This is especially true for execution-heavy work, where the final 20 percent often determines whether the output actually works.
3. Redesign roles based on task depth, not AI hype
The right question is not “Which experts can we replace?” It is “Which adjacent contributors can AI help us widen safely, and where do we still need specialist judgment at the point of evaluation?”
That is a much more mature operating model.
And when every role is pushed to produce faster with AI, the organizational cost is not only strategic confusion but also modern exhaustion.
What the research does not prove
This is important for trust.
The paper is strong, but it does not prove that novices can never become experts, nor that all domains will show the same wall in the same place. It is one firm, one experimental setting, and one class of tasks. It is a working paper, not the final word on all AI-enabled expertise transfer.
That restraint makes the framework more useful, not less.
Because a good framework does not need to explain everything to clarify something important.
A better way to talk about AI and expertise

The lazy version of the debate asks:
Can AI replace experts?
The better version asks:
Under what conditions does AI transfer useful capability across knowledge distance, and where does that transfer stop?
That question is more realistic, more strategic, and more human.
“The real limit of GenAI is not intelligence. It is knowledge distance.”
Because in many fields, what makes expertise valuable is not just information. It is judgment:
- what to notice,
- what to ignore,
- what to prioritize,
- what counts as quality,
- what failure actually looks like before everyone else can see it.
AI can lend fluency across domains faster than it can lend judgment within them.
That may be one of the most important distinctions of the AI era.
The real scarcity is not output. It is domain-aware evaluation.
Final thought
Generative AI is already transforming work. The question is not whether it helps. It clearly does.
The question is where it helps enough to blur occupational boundaries and where deep domain knowledge still resists compression.
That is where knowledge distance becomes the critical idea.
AI can travel surprisingly far across nearby terrain. It can accelerate conceptual work, reduce friction, and widen contribution. But when execution depends on tacit standards, situated judgment, and the ability to recognize what truly counts, the machine’s fluency stops being enough.
That is not a weak conclusion.
It is a mature one.
And in a moment dominated by exaggerated claims, the most useful AI thinking may come from learning exactly where capability transfer ends and judgment still begins.
The future of AI and domain expertise will be shaped less by output generation than by who still knows how to judge what good really looks like.
At BBGK, this question sits inside a broader inquiry into how technology reshapes knowledge, judgment, and human life.
FAQ
Can AI replace domain experts?
Not reliably. Generative AI can help people work closer to adjacent domains, but it remains weaker when tasks depend on tacit judgment, contextual evaluation, and domain-native standards of quality.
What is knowledge distance in AI?
Knowledge distance is the gap between a person’s existing mental model and the deeper logic required to perform and judge work in another field.
Why is AI better at ideation than execution?
Because ideation is often more structured and abstract, while execution requires embodiment, refinement, and decisions shaped by tacit expertise.
What is the GenAI wall?
The GenAI wall is the point where AI can still generate plausible outputs, but the user lacks enough domain understanding to evaluate, adapt, and finish the work well.
What should leaders do with this insight?
Use AI to expand adjacent capability, keep expert review where tacit judgment matters, and redesign roles based on task depth rather than hype.
