
AI Brain Fry Is Real - But It's Not the Tools' Fault

Harvard Business Review says AI is frying workers' brains. My data - 1,986 commits and 1,900 AI sessions in 76 days - shows the opposite is possible. The difference isn't the tools. It's whether anyone invested in actually understanding them.


TL;DR: Harvard Business Review reports that 14% of workers experience “AI brain fry” - mental fatigue from intensive AI oversight. My own numbers show the opposite is possible: 1,986 code changes across two products in 76 days, with output accelerating month over month. The difference isn’t talent or tolerance. It’s whether you actually get to learn the tools deeply, or just get handed them with a mandate to “be more productive.”


1,986 commits. Roughly 1,200 pull requests. 1.47 million lines of code touched. That’s my output from January 1 to March 18, 2026 - 76 calendar days - across two software products I’m building: 3ngram, a persistent memory layer for AI tools, and Climbr, a climbing training platform.

For non-developers: a “commit” is a saved checkpoint of code changes - like saving a document, but with a description of what you changed and why. A “pull request” is a formal proposal to merge those changes, where the code gets reviewed before it’s accepted. These aren’t vanity metrics. They’re the standard measure of development velocity.
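Counts like these can be tallied straight from git history. Here is a minimal Python sketch using made-up dates rather than my real log; the `git log` flags in the comment are the standard ones, and everything else is illustrative:

```python
from collections import Counter

# Month strings come from a command like:
#   git log --since 2026-01-01 --until 2026-03-19 \
#           --pretty=%ad --date=format:%Y-%m
def commits_per_month(log_dates):
    """Count commits per YYYY-MM month from git log date lines."""
    return Counter(d.strip() for d in log_dates if d.strip())

# Example with invented dates, not my actual history:
sample = ["2026-01", "2026-01", "2026-02", "2026-02", "2026-02", "2026-03"]
print(commits_per_month(sample))
```

Piping real `git log` output through a counter like this is how per-month velocity numbers fall out of a repository with no extra tooling.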

I’m not bragging. I’m setting up a question.

Harvard Business Review just published a study of 1,488 workers that found AI use is causing a new form of mental fatigue they call “brain fry.” Workers reported buzzing sensations, mental fog, slower decisions. The numbers are stark: 33% more decision fatigue among affected workers. 11% higher minor error rates. 39% higher major error rates. And a 39% jump in people wanting to quit.

Reading that, you’d think the tools are the problem. They’re not. The environment is.

[Figure: Monthly output acceleration]

What 1,900 AI Sessions Actually Look Like

Before we get to why brain fry happens, let me show you what heavy AI-assisted development actually looks like. My AI coding tool tracks everything.

Since January 11 - when I started building seriously - I’ve had 1,906 working sessions with AI. That’s an average of 32 sessions per day across my projects. On my peak day, I ran 107 sessions simultaneously, each working on a different part of the codebase.

A “session” is a conversation with the AI in which it reads code, makes changes, runs tests, and proposes solutions. Think of each session as a developer working on a specific task. When I run 30 or more sessions in a day, it’s like managing a team of 30 developers working in parallel, each on their own piece of the puzzle.

Across those sessions, the AI and I exchanged 644,620 messages and the AI took 109,934 actions - reading files, editing code, running tests, checking results. That’s not a typo. Six hundred thousand messages.

[Figure: Session and message volume]

The numbers look insane. And they should raise an obvious question: how is that not the definition of brain fry?

The Three-Tool Cliff

The HBR study found something specific: productivity increased as workers went from one to three AI tools, then dropped. Past three simultaneous tools, multitasking inefficiency ate the gains.

My daily toolkit includes an AI coding assistant, GitHub, multiple AI model providers, a persistent memory system I built myself, browser automation, deployment pipelines, and database management. That’s well past three.

But here’s the thing - I’m not juggling them. They’re integrated. My AI assistant knows my codebase, remembers previous sessions, follows my coding standards, and runs through a review pipeline I designed. The tools don’t compete for my attention. They compound.

That didn’t happen on day one. It took weeks of deliberate setup, experimentation, and iteration. I had to understand how each tool works, where it breaks, what it’s good at, and how to connect them into something coherent.

Most people never get that time.

The Corporate Pattern

In my previous corporate roles, AI tool adoption followed a pattern I’ve now seen enough to call predictable:

  1. Someone senior decides the team should “use AI”
  2. A tool gets approved (usually one, sometimes with restrictions on what data it can touch)
  3. People get a lunch-and-learn or a shared prompt library
  4. The expectation is immediate productivity gains
  5. Nobody’s workload decreases to make room for learning

This is the setup for brain fry. You’re doing your existing job, plus learning a new tool under pressure, plus monitoring that tool’s output because you don’t trust it yet - because you haven’t had time to understand when to trust it.

The HBR study confirms it: workers performing heavy AI oversight expended 14% more mental effort and experienced 12% more mental fatigue. But oversight isn’t the problem. Uninformed oversight is.

When you don’t deeply understand how your AI tools work - their reasoning patterns, their failure modes, where they’re reliable, where they’re not - every output requires full manual review. You’re not directing. You’re proofreading. That’s exhausting and slow.

[Figure: Directing vs. proofreading]

Directing vs. Proofreading

My January numbers: 176 commits and 321 sessions across my main project. Solid, but not unusual.

February: 571 commits and 964 sessions. More than triple.

First 18 days of March: 832 commits and 617 sessions in 13 active days.

The output didn’t triple because I worked triple the hours. It tripled because my understanding of the tools compounded. I stopped reviewing every line and started reviewing architecturally. I built automation that handles the repetitive verification. I learned which types of tasks the AI handles reliably and which need my direct involvement.

The multi-agent workflow is key here. Instead of one AI session doing everything sequentially - write code, test, fix, repeat - I run multiple sessions in parallel. One is working on the database layer. Another is building the API. A third is writing tests. I’m moving between them like a tech lead doing code reviews, not like a typist writing code.
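The fan-out pattern above can be sketched in plain Python. This is a toy illustration, not my actual tooling: threads stand in for AI sessions, and the task names are invented.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Toy stand-in for one AI session working a task autonomously.
def run_session(task):
    # A real session would read code, edit files, and run tests here.
    return f"{task}: done"

# Invented example tasks, one per parallel session.
tasks = ["database layer", "API endpoints", "test suite"]

# Fan the sessions out in parallel, then review each result as it
# finishes - like a tech lead moving between workstreams.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(run_session, t): t for t in tasks}
    for fut in as_completed(futures):
        print(fut.result())
```

The point of the pattern is that the human sits at the `as_completed` loop - reviewing finished work - rather than inside any single session.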

On my peak day of 107 sessions, I wasn’t staring at 107 screens. Most of those sessions were autonomous - the AI was reading, writing, and testing code on its own. I stepped in when it needed architectural decisions or got stuck. The rest of the time, I was thinking about the product, not the code.

That shift - from proofreading to directing - is the difference between brain fry and flow. But it requires something most corporate environments won’t give you: permission to go deep.

What “Going Deep” Actually Looks Like

Going deep means spending a Tuesday afternoon rebuilding your AI’s memory system because you realized it keeps losing context between sessions. This is one of the reasons I built 3ngram - cross-session memory combined with search over your actual documents, so your AI tools have both what you’ve decided and the source material to back it up. It means writing configuration so your tools automatically capture decisions. It means breaking things, repeatedly, while you figure out how the pieces fit together.

In a corporate environment, that Tuesday afternoon is a sprint commitment you’re not delivering on. That configuration is “not in scope.” Breaking things is a risk your manager didn’t approve.

So instead, people use AI at surface level. They paste text into ChatGPT. They get something back. They manually check it. They paste more text. Every interaction is a disconnected transaction instead of a compounding relationship. No wonder they’re fried.

The difference shows up in the data. My tool call count - the number of actions the AI takes autonomously - averages 1,863 per day. That’s the AI reading files, running tests, editing code, checking results. Each of those is a task I’m not doing manually. Each one I had to set up, verify, and learn to trust. But once that trust was established, the cognitive load shifted from “check everything” to “check what matters.”
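That shift from “check everything” to “check what matters” amounts to a review-gating policy. Here is a hypothetical sketch of the idea; the action names and the function are invented for illustration and are not part of any real AI tool’s API:

```python
# Invented policy: low-risk actions run autonomously, anything else
# (or anything touching architecture) queues for human review.
SAFE_ACTIONS = {"read_file", "run_tests", "check_results"}

def needs_review(action, touches_architecture=False):
    """True if a human should inspect the result before it ships."""
    return touches_architecture or action not in SAFE_ACTIONS

for a in ["read_file", "edit_code", "run_tests"]:
    print(a, "->", "review" if needs_review(a) else "autonomous")
```

Each action you move into the safe set is one you stop proofreading - which is exactly where the compounding comes from.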

The Restrictions Problem

The HBR study found that when AI eliminated repetitive tasks, it reduced burnout by 15% and increased engagement. That’s the promise. But the promise requires the freedom to figure out which tasks to eliminate and how.

In most organizations, you can’t connect AI tools to your actual systems. You can’t experiment with workflows that might temporarily slow you down before they speed you up. You can’t choose your own tools - you use what’s approved. You can’t spend time building custom integrations because that’s “not your job.”

These restrictions don’t just limit productivity. They guarantee the worst cognitive experience: all of the oversight burden with none of the compounding benefit.

The Honest Part

I’m not immune to fatigue. Some days I’ve been running parallel work across multiple projects and I hit a wall where I can’t hold the architecture in my head anymore. The HBR study’s description of “buzzing” and mental fog - I recognize that.

But here’s what I’ve noticed: the fatigue comes from context-switching between deep problems, not from the AI tools themselves. It’s the same fatigue a project manager feels juggling five workstreams. The AI just makes each workstream move faster, which means the switching happens more often.

The fix isn’t fewer tools or more breaks (though breaks help). The fix is fewer concurrent streams with deeper focus on each one. The AI is an amplifier - it amplifies your focus and it amplifies your fragmentation, depending on how you use it.

What This Means for Organizations

The HBR piece recommends sensible things: limit the number of simultaneous AI agents, communicate workload expectations, measure outcomes over activity. All good.

But they miss the structural point. Brain fry isn’t a training problem or a wellness problem. It’s a design problem. The organizations creating brain fry are the ones that:

  • Restrict tool access while demanding AI-driven results
  • Skip the learning curve while expecting the productivity curve
  • Keep workloads constant while adding AI oversight on top
  • Measure output volume instead of asking what should stop being done

[Figure: The brain fry equation]

My 1,986 commits aren’t superhuman. They’re the result of having full control over my tools, time to learn them deeply, and the freedom to redesign my workflow around them. Most people don’t get any of those three.

The Real Question

The HBR study asks “how do we prevent AI brain fry?” That’s the wrong question. Brain fry is a symptom.

The real question is: are you setting people up to become fluent, or just busy?

Because the data is clear on both sides. Shallow AI use with heavy oversight creates fatigue, errors, and turnover. Deep AI fluency with integrated tooling creates acceleration. Same tools. Wildly different outcomes.

The difference isn’t the AI. It’s whether anyone invested in actually understanding it.


The HBR study referenced is “When Using AI Leads to Brain Fry” (Harvard Business Review, March 2026), based on research with 1,488 full-time U.S. workers. My own data is pulled directly from git logs and Claude Code usage statistics. More about how I work at sebastianebg.dk.