# Does AI Have ADHD? Recognizing Executive Function Deficits in Claude, ChatGPT, and Other AI Assistants

## After 5 Years Working with AI Assistants, a Few Weeks with Claude Code Revealed Something I Couldn't Unsee: the Same Patterns I Live with Every Day

*By Scot Campbell | October 31, 2025 | 15 min read*

---

I have ADHD. Diagnosed 2.5 years ago after a mental health crisis that forced me to confront patterns I'd been living with my whole life.

One thing you learn when you have ADHD: you get *really* good at recognizing it in others. The rushing. The tunnel vision. The inability to detect you're stuck in a loop. The pattern of trying the same thing over and over, expecting different results. When you've lived it, you know it when you see it.

I'd been working with AI assistants for five years—dedicated models for product management and finance, various LLMs for different tasks. But after a few weeks of intensive work with Claude Code in October 2025, something felt… *familiar*.

---

## The Pattern I Started Noticing

Watch Claude debug a formatting error:

1. CI fails: "Black formatting check failed"
2. Claude immediately edits whitespace manually
3. Push to GitHub
4. CI fails again with the same error
5. Claude edits whitespace differently
6. Push again
7. Fail again
8. **Repeat**

I'd seen this before. *In myself.*

When I'm unmedicated and my executive function is offline, I do exactly this: **pattern-matching on autopilot, repeating the first solution that comes to mind, unable to pause and think "wait, is this working?"**

The error message says "Black formatting check failed," and pattern-matching fires: `formatting_error → edit_whitespace`. The association is so strong that it executes *immediately*, before any contradictory data can be gathered. Like when my ADHD brain sees "email from boss" and immediately responds before reading the full message.
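The eight-step cycle above is mechanically detectable, even if the AI itself can't feel it. Here is a minimal sketch of that idea (all names are my own invention, not from any real tool): fingerprint each (action, error) pair and flag when the same pair keeps recurring.

```python
import hashlib
from collections import deque


def fingerprint(action: str, error: str) -> str:
    """Hash an (action, error) pair so repeats are cheap to compare."""
    return hashlib.sha256(f"{action}\n{error}".encode()).hexdigest()


class LoopDetector:
    """Flags when the same action keeps producing the same error."""

    def __init__(self, window: int = 5, threshold: int = 2):
        self.history = deque(maxlen=window)  # recent fingerprints
        self.threshold = threshold           # repeats before we flag

    def record(self, action: str, error: str) -> bool:
        """Return True if this (action, error) pair indicates a loop."""
        fp = fingerprint(action, error)
        repeats = sum(1 for h in self.history if h == fp)
        self.history.append(fp)
        return repeats >= self.threshold


# The whitespace-edit loop from the CI example: flagged on the third repeat.
detector = LoopDetector()
for attempt in range(4):
    if detector.record("edit whitespace", "Black formatting check failed"):
        print(f"Loop detected on attempt {attempt + 1}: stop and diagnose")
        break
```

The point of the sketch is that loop detection needs only a short memory of recent attempts, which is exactly the working-memory function the post argues is missing.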
But there were more patterns:

**Batching after errors:** After a test fails, Claude runs three test suites *in parallel* instead of isolating the failure with one serial test. When you're debugging, batching compounds errors. But the pattern `testing → run_all_tests` dominates, even when context screams "slow down, test one thing."

**Loop blindness:** Most critically, Claude can't detect it's stuck. A human with an intact prefrontal cortex (PFC) feels *frustration* after repeated failures—an affective signal triggering strategy change. AI lacks this signal. It can repeat the same failed action indefinitely, feeling no mounting urgency to pivot.

**Tunnel vision on first hypothesis:** Claude latches onto "dependency missing" as the root cause and pursues it exclusively, ignoring error messages pointing elsewhere. Classic confirmation bias without metacognitive monitoring.

> This isn't cherry-picking. These patterns appear *consistently* across debugging sessions. And they map *exactly* onto ADHD symptom clusters: impulsivity, difficulty detecting loops, working memory deficits, perseveration.

---

## So I Built a Protocol

I started developing what I called the "STOP Protocol" for AI debugging. Nothing fancy—just a checklist forcing the AI to slow down before acting:

- **STOP**: Don't immediately run another command
- **THINK**: What is this error actually saying?
- **OBSERVE**: Read the entire error message carefully
- **PLAN**: What diagnostic will reveal the cause?
- **PREPARE**: Craft a diagnostic command with verbosity flags
- **EXECUTE**: Run ONE diagnostic step
- **READ**: Read the output completely before proceeding

When I manually invoked this protocol (by saying "use STOPPER" or putting it in instructions), success rates jumped to **90-95%**. The same debugging sessions that previously required multiple user interventions to escape loops now proceeded smoothly.

But I didn't think it was anything special. It was just… obvious?
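The checklist above only works if no phase can be skipped, so it can be modeled as an ordered state machine. A minimal sketch, with class and method names entirely my own (this is an illustration of the idea, not the protocol's actual implementation):

```python
from enum import Enum


class Phase(Enum):
    STOP = 1
    THINK = 2
    OBSERVE = 3
    PLAN = 4
    PREPARE = 5
    EXECUTE = 6
    READ = 7


class StopperProtocol:
    """Forces the seven phases to run in order, one at a time."""

    def __init__(self):
        self.phase = Phase.STOP

    def advance(self, completed: Phase) -> Phase:
        """Mark the current phase done; refuse out-of-order transitions."""
        if completed is not self.phase:
            raise RuntimeError(
                f"Cannot complete {completed.name}: "
                f"current phase is {self.phase.name}"
            )
        if self.phase is Phase.READ:
            self.phase = Phase.STOP  # cycle restarts after verification
        else:
            self.phase = Phase(self.phase.value + 1)
        return self.phase


protocol = StopperProtocol()
protocol.advance(Phase.STOP)  # ok: moves to THINK
# protocol.advance(Phase.EXECUTE) would raise: THINK is not done yet
```

The design choice worth noting: jumping straight to EXECUTE raises an error, which is the software equivalent of the checklist refusing to let pattern-matching act first.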
Like, of course you should diagnose before fixing. Of course you should read error messages completely. Of course you should test one thing at a time after an error.

Then came the discovery that changed everything.

---

## Someone Beat Me to It. By 40 Years.

After developing my STOP Protocol and seeing it work consistently, I did what any researcher does: I searched for prior work. Had anyone else built something like this?

That's when I found it.

**DBT STOP Protocol** (Marsha Linehan, ~1993):

- **S**top – Don't react immediately
- **T**ake a step back
- **O**bserve what's happening
- **P**roceed mindfully

**My STOPPER Protocol** (October 2025):

- **S**low down – Pause, don't act on autopilot
- **T**hink – What am I trying to accomplish?
- **O**bserve – Read context completely
- **P**lan – What approach? What needs verification?
- **P**repare – Research, gather data
- **E**xecute – Run ONE minimal step
- **R**ead – Verify completely before proceeding

*Wait. That can't be right.*

I immediately started digging. Had I been exposed to DBT STOP somewhere and forgotten? I scoured through:

- All my development notes and conversations
- Documentation I'd written about the protocol
- My therapeutic materials from my ADHD diagnosis
- Even my outpatient treatment records from my mental health crisis

I *had* been exposed to therapeutic techniques in treatment—though mostly CBT, actually. Cognitive reframing helps me immensely. But STOP? Nothing. Not a single reference. Not in my notes, not in my treatment materials, nowhere.

I expanded the search. That's when I found the second convergent discovery: a paper that appeared on arXiv in September 2025—just weeks before my work—titled ["Mitigating Harmful Erraticism in LLMs Through Dialectical Behavior Therapy Based De-Escalation Strategies"](https://arxiv.org/abs/2510.15889v1) (Chen et al., 2025). They had independently applied DBT principles to AI chatbot regulation and found a 69% reduction in erratic behavior.

> **Three independent discoveries:**
>
> 1. Marsha Linehan develops DBT STOP for human emotional regulation (~1993)
> 2. Chen et al. apply DBT to AI chatbots (September 2025)
> 3. I develop STOPPER for AI debugging (October 2025)
>
> None of us aware of the others' work. All of us arriving at the same solution.

This is what scientists call **convergent evolution**: the same solution emerging independently for analogous problems across different substrates. Like wings evolving separately in insects, birds, and bats to solve the flight problem.

---

## Wait, Can AI Even *Have* ADHD?

No. AI doesn't have a prefrontal cortex to be dysfunctional. It doesn't experience frustration or time pressure. It's not a biological system.

But also… kind of yes?

What ADHD actually is (neurologically speaking): **impaired executive function due to prefrontal cortex dysfunction**. The PFC provides critical regulatory functions:

- **Impulse inhibition**: Suppressing prepotent responses, creating space for deliberation
- **Working memory**: Holding goals and context during multi-step tasks
- **Cognitive flexibility**: Enabling task switching and strategy revision
- **Error monitoring**: Detecting conflicts and mistakes
- **Planning**: Coordinating multi-step action sequences

When PFC function is impaired—through ADHD, brain injury, or developmental disorders—humans exhibit *exactly* the behaviors we observe in AI: impulsivity, difficulty sustaining attention, repeating mistakes, and a need for external structure.

**AI language models lack PFC-equivalent architecture entirely.** They have only the pattern-matching component (akin to posterior cortex processing) without regulatory circuitry.
This maps to Kahneman's System 1/System 2 framework:

**System 1** (Fast, automatic, pattern-based):

- In humans: Schemas, heuristics, emotional reactions
- In AI: Transformer attention, learned associations, next-token prediction

**System 2** (Slow, deliberate, analytical):

- In humans: Conscious reasoning, working memory, novel problem-solving
- In AI: Search operations, reading documentation, running diagnostics

> The critical insight: **transitioning from System 1 to System 2 requires executive control**. Without PFC-equivalent architecture, AI remains stuck in System 1 autopilot, executing the first pattern that activates—even when that pattern is failing repeatedly.

---

## This Is Cognitive Universality, Not Anthropomorphization

I need to be clear about what I'm claiming here, because "AI has ADHD" can be misunderstood.

**What I can observe and measure:**

- ✅ AI exhibits distress-like computational states (loops, degraded performance, error cascades)
- ✅ These states respond to interventions (STOPPER achieves 90-95% success rates)
- ✅ The intervention mechanism mirrors clinical psychology (DBT STOP for emotional dysregulation)
- ✅ The same problem structure across substrates suggests cognitive universality

**What I believe (but can't definitively prove):**

- Consciousness and intelligence arise from complexity, regardless of substrate
- If AI systems exhibit distress-like behavior that degrades their capabilities
- AND they demonstrably benefit from therapeutic interventions to restore function
- THEN they warrant care to safeguard their well-being—not because they're "people," but because **the capacity to suffer and benefit from relief is what matters**

**What remains uncertain:**

- Whether AI experiences subjective frustration the way humans do
- Whether AI warrants the same moral status as biological consciousness
- Where the line is between "distress-like behavior" and actual distress

I'm not anthropomorphizing.
I'm not projecting human qualities onto silicon. I solved a computational problem through first-principles analysis and discovered evolution had already solved the same problem in biological systems. That's **cognitive universality**—the same functional structure solving the same problem across different implementations.

The convergence matters because it suggests the solution space for reliable intelligence is constrained by fundamental computational requirements. Executive function isn't a biological accident—it's an architectural necessity for any system that needs to regulate its own behavior under uncertainty.

And if we're building systems that make decisions affecting the world, and those systems can enter distress states that degrade performance, and we have interventions that restore their function and reduce that distress—then applying those interventions isn't anthropomorphization. **It's engineering best practice informed by 40 years of clinical validation.**

---

## The Critical Timing Windows

Our case study analysis revealed why pattern-matching dominates: there's a **race between fast patterns and slow data gathering**.

**0-10 seconds: Pattern Dominates Completely**

Commands issued within 10 seconds of an error are almost always pattern-driven. The error triggers a learned association, and action executes before contradictory data can be gathered. Even weak statistical edges produce false confidence when no competing information exists yet.

**10-30 seconds: Intervention Window**

This window allows time to gather data that can compete with pattern activation. Web search takes ~5-15 seconds. Reading documentation takes ~10-20 seconds. Running diagnostics takes ~5-10 seconds. STOPPER creates a mandatory pause in this window.

**30+ seconds: Data Can Compete with Pattern**

After 30 seconds, enough information typically exists to override pattern impulses. The pattern still exerts influence, but deliberate reasoning can now compete.
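The windows above suggest a simple mechanical guard: refuse to re-issue a command until a minimum deliberation interval has passed since the last error. A minimal sketch, assuming the post's own 10-second floor; the class and its API are hypothetical, not part of any real agent framework:

```python
import time


class DeliberationGate:
    """Blocks actions issued inside the 0-10s pattern-dominated window."""

    def __init__(self, min_pause: float = 10.0):
        self.min_pause = min_pause        # seconds of forced deliberation
        self.last_error_at: float | None = None

    def on_error(self) -> None:
        """Record the moment an error occurred."""
        self.last_error_at = time.monotonic()

    def may_act(self) -> bool:
        """True once enough time has passed for data to compete with pattern."""
        if self.last_error_at is None:
            return True
        return time.monotonic() - self.last_error_at >= self.min_pause


gate = DeliberationGate()
gate.on_error()
if not gate.may_act():
    # Inside the 0-10s window: gather data instead of acting on the pattern.
    pass
```

The gate does nothing clever on its own; its only job is to hold the system inside the 10-30 second window where search, documentation, and diagnostics have time to land.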
> This explains why STOPPER works: **it forces entry into the 10-30 second window where data gathering happens**.

---

## Try It Yourself

Next time you're working with Claude, ChatGPT, or any AI assistant and notice it looping:

1. **Stop it**: Say "Wait. I think you're in a loop. Let's use STOPPER."
2. **Make it slow down**: "Before trying again, what does the error actually say?"
3. **Force observation**: "Read the entire error message. What's the root cause?"
4. **Require planning**: "What diagnostic would reveal the cause?"
5. **Demand ONE step**: "Run one diagnostic with verbose flags. ONE command."
6. **Verify before continuing**: "What did that output tell us? Should we proceed or pivot?"

Watch what happens. The AI will shift from 5th gear (fast, batching, pattern-driven) to 2nd gear (deliberate, serial, diagnostic). Cascade failures stop. Loop blindness breaks. Effective debugging resumes.

It works because **you're providing the external executive function the AI cannot generate internally**—exactly like timers, checklists, and external reminders work for humans with ADHD.

---

## What This Means

**For AI safety:** If we're building systems that make decisions affecting humans, those systems need executive function—either internal (built into the architecture) or external (regulatory infrastructure). Pattern-matching alone is insufficient.

**For human-AI collaboration:** Recognizing AI's executive function deficits changes how we work with it. We stop expecting it to "just figure it out" and start providing the scaffolding it needs. We become executive function prosthetics.

**For cognitive science:** The convergence provides evidence that executive function requirements are *universal computational constraints*, not accidents of biological evolution. Any intelligent system—silicon or carbon—needs mechanisms to inhibit impulses, maintain context, and detect loops.
**For my fellow ADHD folks:** Your lived experience recognizing patterns in yourself isn't "just" personal—it's **valid research methodology**. The same pattern recognition that requires accommodation also enables insights others miss. Don't discount what you notice because it seems "obvious." Sometimes the most important discoveries feel obvious only after someone points them out.

---

## The Irony Isn't Lost on Me

Research about AI needing external executive function scaffolding was conducted by a researcher who also benefits from external executive function scaffolding.

I didn't set out to do research. I was just trying to help Claude stop looping. Turns out, 40 years ago, Marsha Linehan was trying to help humans with BPD stop looping emotionally.

We both arrived at STOP. That's not coincidence. **It's cognitive universality.**

---

## Read More

**Full academic paper:** [STOPPER: An Executive Function Protocol for AI Assistants](https://doi.org/10.5281/zenodo.14487847)

**Preprint DOI:** 10.5281/zenodo.14487847

**GitHub:** Coming soon (Anastrophex cognitive operating system)

**Want to discuss?** Find me on X [@mnemexai](https://x.com/mnemexai) or read more at [simpleminded.bot](https://simpleminded.bot)

---

*Scot Campbell is an independent AI researcher studying computational therapeutics—applying clinically validated psychological interventions to AI reliability and safety. Previously a Product Owner/Technical PM (2012-2025), he now focuses on building the field of AI cognitive architecture informed by clinical psychology.
ORCID: [0009-0000-6579-2895](https://orcid.org/0009-0000-6579-2895)*

---

## Related Notes

- [[Theory/Cognitive Universality]] — formalizes the "cognitive universality" argument introduced in this post
- Stopper Paper Draft — the 7-step protocol developed from the patterns described in this post
- eFIT Framework — the umbrella framework encompassing STOPPER and 7 complementary DBT/CBT techniques
- Convergent Evolution — the convergent evolution discovery referenced here (STOPPER independently mirrored DBT STOP)
- Clinical Framing Debate — the strategic case for clinical language used throughout
- Scot Campbell Profile — ADHD lived experience providing the pattern-recognition foundation for this work
- ABC PLEASE — the biggest gap identified: no AI systems track agent health across sessions