The Hidden Cost of AI at Work

Artificial intelligence has become the co-pilot of modern business. From marketing teams using generative models to draft campaigns, to finance leaders automating forecasting, to operators relying on AI assistants to summarize complex data, adoption is accelerating at an unprecedented pace. At the same time, growing research on AI cognitive effects raises questions about how this shift impacts memory, judgment, and long-term performance.

But what if these tools, while making us more efficient, are quietly reducing our brain’s ability to think, remember, and innovate?

A new study from the MIT Media Lab points in that direction. When participants relied on AI to write essays, their brains showed less connectivity and weaker memory recall than those who worked without external tools (MIT Media Lab, 2025). In other words, AI saved time, but the brain paid the price.

For leaders, the question is straightforward: how do we integrate AI into our workflows without creating long-term cognitive debt in our teams?

What the MIT Media Lab Study Found

Researchers divided participants into three groups:

  1. Brain Only: Participants wrote essays without external tools.
  2. Search Engine Users: Participants used online search to assist.
  3. AI Users: Participants utilized AI language models, such as ChatGPT.

They tracked neural connectivity and memory recall across each group.

Key Results

  • AI users exhibited the lowest brain activity and weakest memory recall (Kosmyna et al., 2025).
  • Search engine users performed better than AI users, but still fell short of the brain-only group.
  • Brain-only participants showed the strongest memory and cognitive engagement.

Another layer of the experiment revealed more.

  • Participants who initially used AI struggled to return to full neural engagement even when later asked to work without it.
  • Participants who began with brain-only work maintained higher neural connectivity even when researchers later introduced AI.

The conclusion: early reliance on AI creates cognitive debt that is hard to reverse.

What Is Cognitive Debt?

Cognitive debt represents thinking power offloaded today at the cost of future capacity.

  • Over-reliance on AI means the brain engages less.
  • Over time, neural connections weaken, memory retention declines, and the ability to think critically without AI crutches fades.
  • Unlike efficiency gains, which appear instantly, cognitive decline is a gradual and more difficult-to-detect process. It shows up as weaker performance, shallow creativity, or poor decision-making.

Leaders must recognize that AI improves outputs in the short term, but unchecked use risks degrading the inputs that matter most: human judgment, creativity, and problem-solving.

Cognitive Debt Visual: a flat-style infographic comparing brain engagement in three states, strongest for brain-only work, moderate when assisted by search, and weakest when fully AI-assisted.

Why Leaders Should Care

1. Innovation at Risk

Innovation requires friction, exploration, and iteration. Teams that skip directly to AI-generated answers lose the muscles of original thought. Over time, the outcome is incremental ideas rather than breakthrough solutions.

2. Learning and Development

Employees learn through practice. If AI takes over first drafts or analyses, the deeper learning moments disappear. The result is a workforce that appears productive but fails to develop long-term capabilities.

3. Strategic Judgment

AI can produce recommendations, but strategy requires context, values, and trade-off decisions. Leaders need people who can weigh ambiguous factors, not teams that outsource decisions to machines.

4. Culture of Dependency

Unchecked AI use risks creating a culture where speed takes precedence over depth. Over time, employees lose confidence in their own thinking.




The Case for Training Critical Thinking to Offset AI Cognitive Effects

Strong critical thinking skills correlate with better decision-making, improved problem-solving, and stronger communication (Forbes Coaches Council, 2023). Deep work practices also strengthen memory, resist distractions, and enhance the ability to process complex information (Timely, 2023). Training critical thinking skills enables employees to identify misinformation, generate creative solutions, and apply reflective judgment (Nichols College, 2023).

These are precisely the capabilities leaders cannot afford to lose in an AI-powered workplace.




Leadership Framework: Managing AI Cognitive Effects With 70–20–10

Leaders need to design workflows that capture AI’s efficiency while protecting human brainpower.

1. Define AI Boundaries

Not every task deserves AI assistance. Draw clear lines:

  • Brain-First: Strategy development, negotiation prep, original ideation.
  • Search-Assisted: Fact-checking, competitive intelligence.
  • AI-Enhanced: Draft refinement, large dataset summarization, speed-based tasks.

2. Apply the 70–20–10 Rule

The original 70–20–10 model comes from research conducted at the Center for Creative Leadership (CCL), which found that 70% of professional development comes from experience, 20% from relationships, and 10% from formal education (CCL, 1980s).

The 70–20–10 AI Workflow Diagram: a donut chart illustrating how leaders can structure team tasks for balance, with 70% brain-first work (strategy, analysis, writing), 20% search-assisted (research, fact-checking, data), and 10% AI-assisted (draft refinement, automation).

Adapting this for AI in the workplace:

  • 70% Brain-First Work: Preserve deep thinking time without AI interference.
  • 20% Search-Assisted Work: Use online research to expand context.
  • 10% AI-Assisted Work: Accelerate repetitive or low-value tasks.
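For teams that already track time in a spreadsheet or tooling, the rebalance above can be sanity-checked with a small script. This is a minimal sketch under my own assumptions: the category names and the 5% tolerance are illustrative choices, not part of the CCL model or the MIT study.

```python
# Targets from the adapted 70-20-10 workflow model described above.
# Category names and tolerance are illustrative assumptions.
TARGETS = {"brain_first": 0.70, "search_assisted": 0.20, "ai_assisted": 0.10}

def audit_allocation(hours_by_category, tolerance=0.05):
    """Return each category's actual share of total hours and whether it
    falls within `tolerance` of its 70-20-10 target."""
    total = sum(hours_by_category.values())
    report = {}
    for category, target in TARGETS.items():
        actual = hours_by_category.get(category, 0) / total
        report[category] = {
            "actual": round(actual, 2),
            "target": target,
            "within_tolerance": abs(actual - target) <= tolerance,
        }
    return report

# Example: one rep's 40-hour week.
week = {"brain_first": 28, "search_assisted": 8, "ai_assisted": 4}
print(audit_allocation(week))
```

A report like this turns the model from a slogan into a weekly check: any category drifting outside tolerance becomes a coaching conversation, not a guess.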

3. Create “First Draft Human” Rituals

Require that first drafts, whether a sales pitch, strategy memo, or marketing campaign, come from human effort. Use AI to refine, not to originate.

  • Preserve Cognitive Load: Writing the first draft forces the brain to engage fully with the problem. That engagement builds stronger memory and sharper reasoning.
  • Protect Creativity: Human first drafts capture tone, nuance, and originality that AI often flattens. Encourage teams to put down raw ideas before handing work to a machine.
  • Promote Ownership: A first draft built by a person creates accountability. People feel more invested in work they originated, even if AI later helps polish it.
  • Blend Strengths: Once the draft exists, AI can accelerate refinement: tightening language, checking consistency, summarizing data, or generating alternative phrasings.
  • Practical Drill: Assign teams to produce a complete draft without AI, then spend a fixed time window using AI for editing only. Compare results to highlight the value of both steps.
  • Cultural Signal: Reinforce that AI is a tool for leverage, not a replacement for thinking. Leaders should model this behavior by sharing their own rough drafts before refining with AI.

4. Train AI Literacy

Instead of banning AI, teach teams when to use it, how to verify its outputs, and how to integrate results without shutting down their own critical thinking.

  • Context Awareness: Show teams that not every task requires AI. For example, encourage them to write outlines or make key decisions before involving a tool.
  • Verification Skills: Teach people to fact-check AI responses against trusted data, official sources, or internal benchmarks. Build a simple checklist for accuracy.
  • Bias Recognition: Train employees to recognize when AI responses may reflect outdated or biased data, and how to flag those risks.
  • Prompt Discipline: Encourage thoughtful prompting. Shortcuts often lead to shallow outputs; better prompts drive better thinking.
  • Ethics & Confidentiality: Remind teams to protect sensitive company and client data when experimenting with AI tools.
  • Practical Drills: Run workshops where teams complete a task with and without AI, then compare outcomes. Discuss when the AI added value and when it weakened judgment.
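The "simple checklist for accuracy" suggested under Verification Skills can be made concrete. Here is a minimal sketch; the specific check items and their wording are my own illustrative assumptions, not an established standard.

```python
# Hypothetical verification checklist for AI-generated output.
# The items below are illustrative; adapt them to your organization.
VERIFICATION_CHECKLIST = [
    "Claims cross-checked against a trusted or official source",
    "Numbers and dates confirmed against internal benchmarks",
    "No sensitive company or client data was shared in the prompt",
    "Output reviewed for outdated or biased framing",
    "A human owner signed off before the output was used",
]

def review_output(completed_checks):
    """Given the set of checks actually performed, return
    (passed, missing) where `missing` lists the unchecked items."""
    missing = [item for item in VERIFICATION_CHECKLIST
               if item not in completed_checks]
    return (len(missing) == 0, missing)

# Example: a reviewer completed all but the final sign-off.
done = set(VERIFICATION_CHECKLIST[:4])
passed, missing = review_output(done)
print(passed, missing)
```

Even as a shared document rather than code, the point is the same: verification only works when the checks are explicit and a pass/fail is recorded.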

5. Build Cognitive Fitness Programs

Promote mental strength with structured practices:

  • Deep work sessions without digital tools
  • Memory recall exercises in meetings
  • Journaling and reflection practices
  • Team rituals such as “no-AI Fridays” or brainstorms with no automation

Case Example: CRE Sales Teams

In my own work leading a commercial real estate sales team, I saw the same temptation to lean too heavily on AI. Reps could auto-generate outreach emails in seconds, but when they did, they lost the ability to read buyer signals, craft nuanced pitches, and negotiate effectively.

To address this, I applied the 70–20–10 model. I required each rep to draft three custom pitches per week without AI. I encouraged them to use AI only for refining templates and summarizing market reports. I also directed the team to use search tools for property and neighborhood research, which helped them expand context without outsourcing judgment.

The result has been clear. The team moves faster with AI, but they also sharpen the human skills that actually close deals.

What Forward-Thinking Leaders Are Doing

The Leader Guardrails Flow Illustration: a flat-style infographic of a leader holding a balanced scale, with AI for Scale (documents, automation, quick output) on one side and Human Judgment (strategy, critical thinking, creativity) on the other.
  • Google trains employees in AI skills through programs like “Google AI Skills,” where they learn both what AI tools can do and where those tools fall short. Google also teaches Google’s AI Principles, which emphasize human oversight and responsibility.
  • McKinsey’s Responsible AI Principles emphasize oversight by humans throughout the entire AI lifecycle, along with diverse stakeholder perspectives, to ensure the strategy remains grounded in judgment (McKinsey, Responsible AI Principles).
  • Progressive startups experiment with human-only sprints to preserve creativity and innovation.

The differentiator is leadership discipline. Companies that thrive in an AI-powered economy will be those that build guardrails around its use.

AI Cognitive Effects Action Plan for Leaders

  1. Audit team workflows and identify tasks that are either AI-heavy or brain-heavy.
  2. Rebalance tasks using the 70–20–10 model.
  3. Protect deep work by carving out blocks where AI is off-limits.
  4. Coach teams to explain the reasoning behind their outputs, not just share the results.
  5. Monitor for dependency and intervene early when employees rely too heavily on AI.

AI Cognitive Effects Frequently Asked Questions

Q1: Does using AI reduce brain activity?

Yes. A recent MIT Media Lab study found that participants who relied on AI tools to write essays showed significantly lower brain activity and weaker memory recall compared to those who worked without external tools (MIT Media Lab, 2025).

Q2: What is cognitive debt?

Cognitive debt refers to the long-term cost of offloading thinking tasks to external tools. While AI can improve efficiency in the short term, overreliance can reduce memory retention, weaken neural connections, and erode critical thinking capacity.

Q3: How can leaders prevent teams from becoming too dependent on AI?

Leaders can set boundaries on AI use, require human-first drafts, promote AI literacy, and encourage deep work rituals without automation. The 70–20–10 model is a helpful framework, comprising 70% brain-first work, 20% search-assisted, and 10% AI-enhanced tasks.

Q4: Why is critical thinking training important in the AI era?

Strong critical thinking skills lead to better decision-making, improved problem-solving, and resilience in ambiguous situations (Forbes Coaches Council, 2023). Without these skills, teams risk becoming overly reliant on AI outputs without applying judgment.

Q5: What role does human oversight play in responsible AI?

According to McKinsey’s Responsible AI Principles, organizations should integrate human oversight at every stage of AI deployment to ensure strategy remains grounded in judgment and ethical decision-making (McKinsey, Responsible AI Principles).

Closing: Protecting the Human Advantage

AI is here to stay, and its capabilities will only grow. Leadership is about more than adoption; it is about designing conditions for long-term performance.

The MIT Media Lab study is a wake-up call. Left unchecked, AI risks weakening the very cognitive muscles that drive innovation, strategy, and resilience. Leaders who act now to balance AI with brain-first work will protect their teams from AI cognitive debt and build organizations that are faster, smarter, and more adaptive.

Question for readers: How are you balancing deep human thinking with AI acceleration in your organization?

Help Support My Writing

Subscribe for weekly articles on leadership, growth, and AI-driven strategy. You’ll get practical frameworks and clear takeaways you can apply right away. Connect with me on LinkedIn or Substack for conversations, resources, and real-world examples that help.

Related Articles

AI Bias in B2B Growth: A Framework for Executives
AI Workflow Process: Practical Solutions Leaders Need
The Truth About Vibe Coding and AI-Assisted Development
Future of AI in Business 2025: Avoid Mistakes, Gain Edge
The Ultimate Guide to Embedding-Based SEO Success
AI Agents Are Here: Unlock Growth, Speed & Scale in 2025
The Truth About AI-Driven SEO Most Pros Miss
Unlock Better MarTech with AI Marketing Automation
Win With AI: A Proven 5-Step Guide for Founders
Remarkable AI-Assisted Development Hack Slashed Dev Time

About the Author

I write about:

📩 Want 1:1 strategic support?
🔗 Connect with me on LinkedIn
📬 Read my playbooks on Substack

