Ying Xu, PhD
Assistant Professor of Education, Harvard Graduate School of Education
What’s your organization’s mission, and what’s your area of focus?
At the Harvard Graduate School of Education (HGSE), I study how children learn and think in the context of AI and how we can make sure AI helps support their development in positive ways. At the heart of my work is a simple question: how do children acquire knowledge, form beliefs, and interact with social agents—both human and artificial? My research shows that when AI tools are designed to support child-led conversations that spark creativity, curiosity, and critical thinking, they can boost children’s language skills, STEM learning, and motivation. I use these insights to co-design AI tools with children, families, and partners like Sesame Workshop and PBS KIDS, helping them bring thoughtful, child-friendly AI into their programs.
What led you to this work?
I started doing research on AI's impact on education before it became as prominent in the public sphere as it is today. It stemmed from my earlier research on how children learn from educational media in general, including television, interactive apps, and electronic books. At that time, I observed how children interacted with these technologies, measured their learning, and explored whether these technologies brought about meaningful benefits. I walked away feeling that there was still room for improvement. I wondered whether technology could better support the rich, interactive learning that happens in natural social settings, such as with teachers, caregivers, or peers. That's why devices powered by conversational AI, like Siri and Alexa, caught my attention: they introduced technology capable of enabling natural dialogue. What if we could leverage this capability and turn it into focused educational opportunities?
To explore this question, we used evidence-based instructional approaches as our starting point, testing whether AI could deliver those approaches in reading, science learning, and creative activities. Indeed, we found that in many cases, AI could be quite effective.
Then, as you know, the introduction of ChatGPT brought AI to the forefront of public discussion. On top of that, the most widely used AI products are typically designed for general purposes rather than being specifically created for education or children. This has added urgency to the research, as families, educational systems, and policymakers now face many pressing questions. At the same time, it has pushed much of the research beyond lab settings and into real-world environments that are evolving rapidly.
What have you learned in the course of doing this work about young people’s wellness while engaging with tech and interactive media?
When we talk about young children’s well-being in interactions with AI, many people are concerned about children’s tendency to anthropomorphize—treating AI as if it were a person—which raises the worry that they might grow more attached to AI than to the humans around them. While this concern is certainly valid, one potential way to mitigate it is to shift away from framing the issue as AI vs. humans and instead ask: Can AI be used to strengthen children’s social connections? For example, we’ve seen AI designed as a third-party facilitator to support children’s collaborative learning with peers. There are also AI tools specifically developed to enhance parent-child interactions.
What guidance or advice do you have for parents and other caregivers to help kids to build and maintain their wellness when engaging with digital media and technology?
A lot of the time, when we talk about adults' roles in mediating children's technology use, we think of them primarily as supervisors. And of course, guidance is still important in this AI era. But now I think it's also crucial to recognize adults as co-learners, because AI is evolving so quickly that even adults may not have more expertise or experience than children. Instead of just supervising, adults can embrace open-minded, transparent conversations and be willing to navigate this space alongside children.
How would you change or design technology and/or media to be healthier for kids across the developmental span?
To make technology and media healthier for kids across the developmental span, I think we need to focus on both the goals and the process. The goals should be to maximize opportunities for learning, creativity, and meaningful engagement—not just to avoid harm and risks. And the process should be grounded in designing with and for children. That means involving kids in the design process, listening to their voices, and creating experiences that reflect their needs, interests, and developmental stages.
Is there anything else you’d like to share?
As part of The Youth & Interactive Media Coalition at Harvard University, we strive to advance progress on issues impacting youth in the digital age, demonstrating how technology can be harnessed for growth, connection, and well-being.
We know that no one person, organization, or company can successfully address the challenge alone, so it’s imperative that we collaborate to design and maintain a healthier digital experience for all young people and their families. Our Fellow Travelers blog series features colleagues from around the world who focus on digital wellness from a different perspective than the Digital Wellness Lab, enabling us to share expertise in key areas of digital wellness that we don’t explore as deeply.
Here at the Lab, we welcome different viewpoints and perspectives. However, the opinions and ideas expressed here do not necessarily represent the views, research, or recommendations of the Digital Wellness Lab, Boston Children’s Hospital, or affiliates.