The Research Behind ARC

How behavior changes in AI-enabled work

Evidence across cognitive science, decision science, human factors engineering, behavioral economics, and organizational psychology shows a consistent pattern: training builds awareness; environment design drives behavior.

How to read this

This page summarizes the research behind our approach to AI adoption. The evidence converges across five fields: cognitive science, decision science, human factors engineering, behavioral economics, and organizational psychology. Each section addresses a specific failure point in typical AI adoption efforts.

Taken together, they support a single conclusion: behavior changes more reliably through environment design than through training alone.

Cognitive Science

Why training increases awareness but rarely changes behavior

Conceptual Change Theory
Posner, Strike, Hewson, and Gertzog (1982) — Science Education

People using AI tools already have a framework for evaluating written work: clear, structured language signals competence. With AI-generated text, those signals become unreliable, but the evaluation model remains unchanged.

Training typically adds information on top of this model. It does not replace it.

Conceptual Change Theory explains why. For a mental model to change, the learner must:

  • Experience dissatisfaction with the existing model

  • Find an alternative that is understandable and credible

  • See the alternative as fruitful, resolving problems the old model cannot

Most AI training does not create these conditions. It builds awareness, but the underlying model remains intact.

In practice, direct exposure to failure is what drives change. When AI produces output that appears credible but is wrong, the gap becomes visible. That moment creates the conditions for restructuring.

Organizations that design for this kind of exposure change behavior faster than those relying on instruction alone.
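
One way to engineer that exposure is sketched below. This is a minimal illustration, not a description of any specific program, and the names and structure are hypothetical: show output containing a known, seeded flaw, require a verdict before the reveal, and record whether the flaw was caught.

    # Illustrative sketch: a review exercise built around a seeded flaw.
    # The structure and names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Exercise:
        output: str        # credible-looking AI output shown to the reviewer
        seeded_flaw: str   # known error planted in that output

    def run_exercise(exercise, get_verdict):
        # The reviewer commits to "accept" or "flag" before the reveal.
        verdict = get_verdict(exercise.output)
        caught = (verdict == "flag")
        # Revealing the flaw only after commitment creates the
        # dissatisfaction moment described above.
        print(f"Seeded flaw: {exercise.seeded_flaw} (caught: {caught})")
        return caught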


Decision Science

Why AI systems distort judgment

AI Over-Reliance and Independent Judgment
Buçinca, Malaya, and Gajos (2021) — Harvard

When users see AI output before forming their own view, over-reliance increases. When they are required to form a judgment first, reliance becomes better calibrated.

Interface order matters:

  • Output first → passive acceptance

  • Judgment first → active evaluation

This ordering effect is more durable than explanations or disclaimers.
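
A minimal sketch of what judgment-first ordering can look like in a tool follows. The function and parameter names are assumptions for illustration, not a reference to any specific product.

    # Illustrative sketch of a judgment-first gate. Names are hypothetical.
    def review_with_gate(task, ask_user, get_ai_output):
        # 1. Require the user to commit to an answer before any AI output appears.
        own_answer = ask_user(f"Your answer for: {task}")
        # 2. Only then reveal the model's suggestion.
        ai_answer = get_ai_output(task)
        if own_answer == ai_answer:
            return own_answer
        # 3. Surface disagreement explicitly; the user makes the final call.
        return ask_user(
            f"You said {own_answer!r}; the model says {ai_answer!r}. Final answer?"
        )

    # Example: review_with_gate("Summarize Q3 risks", input, my_model.summarize)
    # where my_model.summarize is any callable returning the AI's answer.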

Citation-Driven Automation Bias
Microsoft Research (2022–2025 synthesis)

Adding citations to AI outputs can increase uncritical acceptance. Users interpret citations as signals of credibility, even when the underlying content is flawed.

Well-intentioned transparency can reinforce the same evaluation errors users already bring to AI systems.

Trust Calibration

Research across human-computer interaction identifies three zones of reliance:

  • Under-reliance

  • Appropriate reliance

  • Over-reliance

Both extremes reduce effectiveness. The goal is not adoption alone, but calibrated judgment.
System design determines where users operate.
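
Where a system logs decisions, these zones can be measured rather than assumed. A minimal sketch, assuming hypothetical log records of whether the user followed the AI and whether the AI turned out to be correct:

    # Illustrative sketch: estimating reliance calibration from decision logs.
    # The record format is an assumption for illustration.
    def reliance_rates(records):
        # records: iterable of (user_followed_ai, ai_was_correct) booleans
        records = list(records)
        over = sum(1 for followed, correct in records if followed and not correct)
        under = sum(1 for followed, correct in records if not followed and correct)
        ai_wrong = sum(1 for _, correct in records if not correct)
        ai_right = sum(1 for _, correct in records if correct)
        return {
            "over_reliance": over / ai_wrong if ai_wrong else 0.0,    # followed a wrong AI
            "under_reliance": under / ai_right if ai_right else 0.0,  # overrode a right AI
        }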


QUICK REFERENCE — ALL SOURCES