
Using AI to Augment, Not Replace Our Learning

#teenvoices: Willow

Authored by Willow Y.

Member, Digital Wellness Lab 2025-26 Student Advisory Council


Introduction

In this fast-moving era of addictive algorithms and widely accessible information, it feels inevitable that artificial intelligence (AI) is infiltrating every aspect of our lives: our relationships, our information sources, and, the focus of this blog article, our thinking and learning processes. In fact, a report revealed that teens’ most common uses of generative AI involved schoolwork and gathering information, but deciding how much help from AI is too much is where things get complicated (Bickham et al., 2024).

Cognitive Offloading and AI

When students are given access to something so versatile and perpetually responsive, it can be easy to resort to the sentiment “Just put the assignment into [insert trending AI chatbot name].” I’ve heard it too frequently amongst my classmates! But I can’t blame them either, as on the surface the act feels easy, efficient, and innocent. After all, the AI output takes only a few seconds to generate, and it looks polished and competent.

However, when students “just put the assignment into” an AI chatbot, they’re engaging in something called cognitive offloading, which is officially defined as the use of physical actions to reduce cognitive demand by altering a task’s information processing requirements (Risko & Gilbert, 2016). In other words, instead of processing and working through information ourselves, we shift that cognitive load onto something else.

Perceived Issues with Cognitive Offloading

The thing is, cognitive offloading itself isn’t inherently bad; in fact, we do it all the time! Every time you set a reminder on your phone, or tilt a device to read the screen more easily, you’re engaging in cognitive offloading. In this AI context, though, the act becomes a problem when it lets us replace crucial learning processes. You may be thinking, I don’t let AI generate my work THAT often, I’m still able to learn just fine! I would think twice: the research has been saying otherwise.


A study set out to explore this question of how AI can affect learning, using around 1,000 high school students solving math problems as its test subjects (Bastani et al., 2025).

Students were divided into three groups: 

  1. “GPT Base” (access to a chatbot mimicking a standard ChatGPT interface),
  2. “GPT Tutor” (access to a chatbot prompted with safeguards designed to support learning), 
  3. Control group that had no AI assistance. 

The results while the groups had AI assistance were as predicted: GPT Base and GPT Tutor students performed 48% and 127% better than the control group, respectively. It was when AI assistance was taken away that researchers were shocked: the GPT Base students performed 17% worse than the control group, while GPT Tutor students displayed little to no difference! This shows that AI does have the ability to improve our performance, but depending on how it’s used, it can also weaken our independent thinking and learning processes.

How to Avoid AI Harms in Learning

Since most of us don’t have access to a “GPT Tutor” with built-in safeguards, the responsibility falls on us to build our own mental safeguards against this kind of cognitive offloading onto AI. Like I said earlier, I totally understand the appeal of letting AI do your work for you: a simple Google search can, in less than a second, turn into a full-fledged explanation and essentially a whole assignment submission. That temptation is pretty much unavoidable.

Therefore, to set a clearer boundary between cognitive offloading onto AI and intelligence augmentation (using tech to enhance, not replace, our intelligence!), I’ve created a simple 3-step process to use every time you consider approaching AI for help:

  1. Trust Yourself
    First, ensure that you’ve built your own understanding before using AI. Learning foremost requires that you directly engage with and think through the material. A good check is to test if you could explain the topic to a friend.
  2. Challenge Yourself and AI
    After you’ve solidified your baseline knowledge, you can now critically engage with AI to guide your thinking: ask for explanations, hints, or feedback instead of just straight answers.
  3. Be Honest With Yourself
    Finally, the most important step with any kind of learning assistance. Ask yourself, “Who did the heavy lifting?” If AI is doing most of the thinking work, you’re engaging in the kind of offloading that the study showed can harm learning.

In such a fast-paced, digital world, it feels easier than ever to fall into a results-focused, grind-culture mindset. But the lifelong values of creativity, critical thinking, and problem solving that learning builds don’t come from that final result. In fact, it’s the hard effort, the struggles overcome, and the critical thinking and problem-solving processes behind that result that bring the indispensable benefits of real learning.

Use AI to augment your thinking, not replace it!


Willow Y. is a member of the Digital Wellness Lab’s 2025-2026 Student Advisory Council, and is currently a high school junior in Hackensack, New Jersey.

The author of this article is a young person who has been engaging with the Digital Wellness Lab about topics of young people’s safety and wellbeing within digital environments. Here at the Lab, we welcome different viewpoints and perspectives. However, the opinions and ideas expressed here do not necessarily represent the views, research, or recommendations of the Digital Wellness Lab, Boston Children’s Hospital, or affiliates.


References

Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences, 122(26). https://doi.org/10.1073/pnas.2422633122

Bickham, D. S., Powell, N., Chidekel, H., Yue, Z., Schwamm, S., Tiches, K., Izenman, E., Ho, K., Carter, M., & Rich, M. (2024). Optimism and uncertainty: How teens view and use artificial intelligence. Boston Children’s Hospital Digital Wellness Lab. https://digitalwellnesslab.org/pulse-surveys/optimism-and-uncertainty-how-teens-view-and-use-artificial-intelligence/

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002