Research Brief: Young People, Content Effects, and Current Content Moderation Practices

Content Moderation Policies Overview

The Trust and Safety Professional Association (TSPA) defines content moderation as “the process of reviewing online user-generated content for compliance against a digital platform’s policies regarding what is and what is not allowed to be shared on their platform” (TSPA, n.d.). Different platforms have varying content policies and guidelines, and those policies may differ based on legal requirements or age-based rules. This research brief reviews how young people access certain types of content frequently cited as concerning by regulators and platforms (e.g. violence, sexual content) and how young people respond to harmful content, including youth viewpoints on content moderation practices. To better capture the evolving social and political nature of this topic, this research brief evaluates peer-reviewed research, reports, and news coverage that reflect the current landscape of content moderation practices and viewpoints. This research brief also recognizes that research and discourse have centered on the Global North, and far less is known about content moderation and online expression in the Global South (Elswah, 2024). There are also other types of content likely to be targeted by moderation that are not specifically discussed here; therefore, this is not a comprehensive review of media effects research.

Across the globe, many countries have expressed concerns about young people accessing content deemed inappropriate or unsuitable for youth, but content moderation laws are often challenged on freedom of expression grounds. Therefore, some countries, like Australia, have begun creating and implementing regulations that simply restrict young people’s access to online spaces, like social media platforms, altogether. However, since many of these policies are still in development or in the early stages of implementation, there is little understanding of whether these bans will effectively limit access to the content of concern. Critics of bans, including youth themselves, say that such bans are likely to be ineffective and that limiting all access may only move children and teens into less regulated and more dangerous spaces, where harmful content may circulate with even less oversight (e.g. McAlister et al., 2024; The Learning Network, 2025; Youth Select Committee, 2025). Therefore, more focus should be placed on understanding how to support platforms’ trust and safety structures and incentivize safety; those in the industry have noted that changes they believe are necessary to protect young people are not prioritized, even when those changes would likely benefit all users of a platform (Owens & Lenhart, 2025).


Specific Content Effects

Content Warning: This research brief includes content that may be disturbing or upsetting to some readers, including descriptions of violent events, hateful online content, and sexual harassment/violence.

Violence

Violent content in media has been a concern among researchers for decades (e.g. Council on Communications and Media, 2009), but much of the previous research has focused on programmed content contained within films or video games (e.g. American Psychological Association, 2020; Anderson et al., 2010; Gentile et al., 2004). However, this does not account for violent user-generated content online. On gaming platforms, the ability for users to create playable content and share it with other users provides a unique opportunity for bad actors to create spaces which gamify both mild and extreme violence. Likewise, the speed at which content can be created and disseminated on social media allows violent content or intent to quickly reach a wide audience, including young users.

Social Media

One particular risk for young people is exposure to images and videos of violent crime shared as news among social media users. A survey of teens found that while many young people still engage with traditional news sources like local TV news (46%), a significant portion get news from YouTube (37%) or TikTok (35%) at least weekly (Reger, 2023). While one major concern is that user-generated “news” may spread mis/disinformation (see Family Guide to Disinformation and Misinformation Online), unfiltered images of graphic events may also appear within a user’s social media feed without the user searching for them. While these images or videos may be intended as a call to action, they may also expose viewers to disturbing and distressing content with little warning. As more teenagers have access to smartphones, there is greater ability to capture events that make national headlines in real time, like the school shooting at Marjory Stoneman Douglas High School in Florida. Students captured Snapchat videos, creating a real-time narrative of the horrifying events witnessed by students and staff inside the school; these videos were used by news outlets with the most graphic content blurred or edited, but many viewers had already seen the original footage. While some research has suggested that this new frontier of reporting has become a catalyst for change, similar to videos from the Vietnam War (Eckstein, 2020), there is also evidence that witnessing news on social media about school shooting events can provoke symptoms of PTSD and depression (Abdalla et al., 2021; Martins et al., 2024). On college campuses, acts of gun violence were associated with expressions of acute stress on Reddit and increases in language focused on fear and vigilance (Saha & De Choudhury, 2017). Therefore, it is critical that content moderation limits young people’s exposure to graphic violence while also acknowledging the social and political impact of access to such content.

In addition, understanding how young people interact with content about violence, and how they communicate with others on social media platforms, should be more strongly considered as prevention against further harm in real life. For example, it is critical that threatening behavior on social media can be reported to law enforcement as evidence for robust investigations to protect community members (FBI, 2018; Schildkraut et al., 2024). Previous research has suggested that gang members may use social media to project power and status by posting images of weapons or financial status symbols (e.g., designer clothing), which could appeal to vulnerable young community members at risk of becoming involved in gang activities (Fernández-Planells et al., 2021). For all young people, there is the risk that features of social media such as comments and the ability to tag other accounts may escalate conflicts to the point of threats and violence (Elsaesser et al., 2021). Unfortunately, particularly for gang-affiliated youth, these threats and acts of violence may arise while posting about grief or mourning for loved ones (Patton et al., 2018), suggesting a need for interventions that acknowledge how grief can become aggression and that encourage young people to pause before posting or engaging with aggressors’ content; tools that flag these behaviors and offer resources for support might help young people process difficult emotions instead of resorting to revenge or retribution. However, the design and implementation of possible interventions should also involve young people in affected communities to ensure cultural relevance.

Video Games 

Since many video games that are popular among children and adolescents (e.g., Fortnite, Roblox, Minecraft) feature user-generated content, one concern is that bad actors could recruit young people to violent and extremist causes, using game content and gaming-related servers to perpetuate harm. Users have recreated events of mass violence on gaming platforms, including simulators focused on hate crimes, fascist propaganda, or antisemitic role-playing games (D’Anastasio, 2025; Newhouse & Kowert, 2024). Unfortunately, very few gaming companies have specific anti-extremist policies within their terms of service or codes of conduct (notable exceptions include Activision Blizzard, Minecraft, and Roblox; Anti-Defamation League, 2024), and perpetrators often find ways of evading moderation or recreating profiles after being deplatformed (Newhouse & Kowert, 2024). Experts recommend that governing bodies and platforms themselves take steps like improving awareness of reporting tools, offering more educational initiatives for gamers, and shifting from a reactive to a proactive approach that incentivizes positive gaming behavior from the onset of a game (for full reports and recommendations, see Rosenblat & Barrett, 2023; Wallner et al., 2025).

Hateful Content (e.g. Racism, Sexism, Homophobia)

According to Australia’s eSafety Commissioner, hateful online behavior includes “any hateful posts about a person or a group of people for things they can’t change – like their race, religion, ethnicity, gender, sexual orientation or disability” (eSafety Commissioner, 2023). While anyone might see hateful material online, that content is likely to be particularly harmful to young people within the targeted community. Over half of young social media users (ages 14-22) are at least sometimes exposed to body-shaming, sexist, transphobic, racist, or homophobic comments, and LGBTQ+ young people stand out as significantly more likely to encounter all forms of hateful content, including racism, sexism, homophobia, and transphobia (Madden et al., 2024). Parental mediation may play a role as well, with some evidence suggesting that active and instructive mediation, which discusses online safety and the risk of online hate, may confer more protection than restrictive mediation, in which young people are simply barred from certain spaces or activities and may not be given the skills they need to navigate harmful online experiences (e.g. Wachs et al., 2025).

Exposure to online racism or anti-LGBTQ+ hate among adolescents is associated with mental health concerns, sleep problems, lower school success, higher substance use, and even increased suicide risk (e.g. Dunn et al., 2025; Gámez-Guadix & Incera, 2021; Keighley, 2022; Oshin et al., 2024; Tao & Fisher, 2022; Thomas et al., 2023; Tynes et al., 2019; Volpe et al., 2023). Some LGBTQ+ youth report coping with online hate or cyberbullying by changing their appearance or how they interact with peers (Cooper & Blumenfeld, 2012), possibly increasing tension between their preferred gender or sexual expression and what is deemed “safe” in order to avoid further harassment.

When encountering news online related to racism or racist events, one adolescent described their emotions as feeling like “one hundred bricks were just put on my shoulders” (Cohen et al., 2021, p. 291), but teens across multiple studies also reported finding solace in communicating with peers and finding community support online and in person (Cohen et al., 2021; Liby et al., 2023). During the height of the COVID-19 pandemic, a period of heightened discrimination against Asian people, Asian American adolescents encountered harmful posts but also found strong community, support, and inspiration, often from Asian American celebrities (Atkin et al., 2024). There is therefore evidence of tension in spaces where young people experience both trauma and solidarity. Platforms could be leveraged in ways that support young people in finding communities of similar peers when confronted with harmful content. However, this should be approached with nuance. For example, the rapid expansion of what is frequently called the “manosphere” often leverages social connection and self-improvement tactics while simultaneously espousing misogynistic, racist, or homophobic beliefs; this uses the same strategy of solidarity to promote, rather than protect against, hateful online content.

Unfortunately, even positive online experiences during difficult times may be overshadowed by animosity experienced online. Almost half of Black and Latine youth have reported either taking a break from social media or stopping use of an account entirely because of harassment or other negative experiences, compared to only about a third of their White peers (Madden et al., 2024). This could be particularly damaging because Black and Latine youth are also more likely to use social media to connect with people struggling with similar mental health concerns, express themselves creatively, and learn about professional or academic opportunities (Madden et al., 2024). Therefore, it is possible that online racial harassment could make the very spaces Black and Latine young people use to find support feel less welcoming or even hostile, further amplifying the already deleterious effects of online hate. 

Sexual Content

As is the case with violent content, there have been concerns about how sexual content might affect young people’s behavior for decades (e.g. Collins et al., 2017; Ward, 2003). However, the proliferation of social media and real-time online communication has added a level of interactivity not present in more traditional media formats like magazines or television. Therefore, this research brief will specifically focus on young people’s access to online spaces and social media platforms, with an emphasis on user-generated sexual content and direct communication with other users.

Over half of U.S. teens report having seen pornography by age 13, and many encounter it accidentally (Robb & Mann, 2023). Direct sexual content is also common: 1 in 3 young people (ages 9-17) report having had an online sexual interaction, and a quarter have been asked for nude photos or videos or have been sent sexual messages (Thorn, 2024). While these interactions may be consensual and indicative of sexual exploration, the risks remain high: among adolescents aged 12-16.5, 20.3% have had unwanted exposure to sexual content and 11.5% have been solicited (Madigan et al., 2018). Of particular concern is that exploitation may be commodified, with young people being offered monetary rewards (e.g. gift cards, gaming currency) or social benefits (e.g. more followers, a place to stay) (Thorn, 2025). Since LGBTQ+ youth are more likely to engage in behaviors like sharing sexual images of themselves, they are at higher risk, but they are also less likely to reach out to someone they know in person after an unsafe online experience (Thorn, 2023; Thorn, 2025). This underscores the multifaceted nature of risk for young people online, where their in-person experiences (e.g., lack of support regarding their sexual orientation) may drive them toward online sexual exploration, allowing them to learn about and express themselves but also opening them up to increased risk of harm.

Viewing sexual content online may also change young people’s offline sexual attitudes and behaviors. About half of teens who have seen pornography report seeing content that was aggressive or violent, compared to only about a third who report seeing pornography that depicted someone asking for consent before sexual activity (Robb & Mann, 2023). Previous research has associated viewing violent pornography with higher acceptance of rape myths (e.g. blaming women for sexual assault or excusing men’s violent behavior; Hedrick, 2021). Therefore, content moderation efforts should prioritize pornography that contains violent or degrading content. Unfortunately, internet filters implemented by parents or caregivers may have little effect on whether adolescents are exposed to sexual content, underscoring the need for filtering to occur on a wider, industry- or platform-wide scale (Przybylski & Nash, 2018).

Finally, it is important to acknowledge that many young people use online resources to find sexual health information. Online resources are particularly important for LGBTQ+ youth, as only 7.4% of U.S. students receive inclusive sexual education in school (Kosciw et al., 2022). Often, LGBTQ+ young people find information on social media, through posts detailing the lived experiences of others who share their gender or sexual identity (Delmonaco & Haimson, 2023). While this information can be helpful and affirming, there is little oversight to ensure the accuracy of the information being shared. When considering content moderation, it is critical to ensure that access to harmful sexual content is lessened while sexual health information, both from trusted organizations and from within the LGBTQ+ community, remains available to young people who may not otherwise have access to these resources (Borrás Pérez, 2021).


Young People’s Experiences with Content Moderation

When young people encounter content online that may negatively affect them or others, there are often multiple actions they can take to alert platforms. Among young people who had a potentially harmful online experience, the large majority (86%) used online reporting tools, compared to just 42% who sought offline help (Thorn, 2024). Some research suggests that teens are more likely to block accounts than to report them (Bickham et al., 2023; Thorn, 2024), potentially due to fears of retaliation, stigma about reporting, or a sense that doing something may not make a difference. For example, when teens knew the person who sent a sexual request or images, they were more likely to do nothing because they did not want to damage the relationship (Razi et al., 2020). This fear of retaliation or offline consequences may explain why 77% of young people (ages 9-17) say that anonymized reporting would make them more likely to report someone (Thorn, 2024). Overall, teens are more likely to report content they see as dangerous or a threat to the safety of others than content that threatens their own safety (Bickham et al., 2023). This may be an opportunity for platforms to educate their users on how reporting, rather than blocking, empowers young people to protect both themselves and their community.

There is also a great need to increase transparency throughout the content moderation process. Unfortunately, many young people do not think the platform will do anything about their report (Bickham et al., 2023), and 41% of teens do not trust companies to provide fair resolutions after online harassment or bullying (Schoenebeck et al., 2021). One study found that while over half of LGBTQ+ young people had reported anti-LGBTQ+ harassment, hate speech, or discrimination on social media, only 21% saw action from the platform (Belong To LGBTQ+ Youth Ireland, 2023).

On the other hand, some platforms like Discord and Twitch allow young people to actively participate in community moderation. On Discord, teen moderators reported enjoying the personalized and “human” touch of community moderation, noting the importance of moderation reflecting the specific needs of the community instead of a strict universal policy (Yoon et al., 2025). However, there were also risks associated with this type of moderation, like increased exposure to harmful content or harassment from community members; to mitigate these risks, platforms must provide support for their young moderators and treat them as stakeholders in decision-making processes (Yoon et al., 2025). Overall, there is very little research regarding active teen participation in moderation processes, so more emphasis should be placed on understanding how moderators can better protect themselves and the online communities they serve.


Improving Content Moderation

Social media and video gaming come with both risks and benefits for adolescents, with 58% of U.S. teens reporting that social media has a mix of positive and negative effects (Faverio et al., 2025). However, since a majority of young people are exposed to harmful content online, platforms can improve content moderation to increase the benefits of online spaces while mitigating the risks.

One of the most important steps is ensuring that young people are involved in the design process whenever possible, as well as empowering young people to support moderation within community spaces when appropriate (e.g. Tekinbas et al., 2024; Yoon et al., 2025). Additionally, providing young people with information about existing tools can empower them to take action against content or creators that cause harm; when young people were asked about desired resources from platforms, the top two responses were information on how to report and how to block people (Thorn, 2024). However, there is a clear gap between features being available and features feeling accessible or useful to young people. Features built into gaming or social media platforms may not best support opportunities for youth community regulation and safe conflict resolution (e.g. Tekinbas et al., 2021), so features should be both created and evaluated with youth agency in mind. For example, a Minecraft server designed for children offers moderation that helps users communicate directly with each other to resolve conflicts, while also offering a keyboard shortcut to alert a moderator, providing a scaffolded space for growth with different levels of support instead of relying only on bans or filters (Tekinbas et al., 2021).

Finally, AI-supported content moderation must also account for the linguistic and behavioral characteristics of younger generations to ensure they are protected from harm even as communication styles vary and evolve (Mehta & Giunchiglia, 2025). For platforms and organizations, it is also important to consider how AI-supported content moderation will handle images, videos, and other content generated by AI. While AI-supported moderation is common on both gaming and social media platforms as a way to manage enormous amounts of user-generated content (e.g. Chawki, 2025; Gorwa et al., 2020), there is currently little understanding of how these systems will evaluate AI-generated content. In this new era of AI, platforms, organizations, and researchers must prioritize testing and analysis of content moderation systems to ensure that both existing and new features maintain high standards of protection, even when faced with novel content forms.


Conclusion

Young people are navigating many digital spaces in their everyday lives, and the experiences they have on those platforms are shaped by the content they encounter. Hateful, violent, or sexual content can have negative effects on young people’s mental health and the ways they relate to others. Moderation of harmful content can help protect young people, but it must support their development of digital literacy and autonomy while fostering trust between youth and the platforms they use. To do this, platforms should include young people in the development of new moderation tools and policies and learn from the experiences of young people who use existing tools in order to improve them. As generative AI tools become available for public use, it is also critical to ensure that AI-supported content moderation is prepared to analyze and moderate this novel content. By centering the needs of young people, platforms can protect all users from content harms and encourage healthy digital citizenship.


This research brief was written by Kaitlin Tiches, MLIS, Medical Librarian and Knowledge Manager at the Digital Wellness Lab. For more information, please email us.


References

Abdalla, S. M., Cohen, G. H., Tamrakar, S., Koya, S. F., & Galea, S. (2021). Media exposure and the risk of post-traumatic stress disorder following a mass traumatic event: An in-silico experiment. Frontiers in Psychiatry, 12. https://doi.org/10.3389/fpsyt.2021.674263

American Psychological Association. (2020). APA resolution on violent video games. https://www.apa.org/about/policy/resolution-violent-video-games.pdf

Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., Rothstein, H. R., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, 136(2), 151–173. https://doi.org/10.1037/a0018251

Anti-Defamation League. (2024). Addressing extremism in online games through platform policies. ADL. https://www.adl.org/resources/report/addressing-extremism-online-games-through-platform-policies

Atkin, A. L., Ahn, L. H., Yi, J., & Li, J. (2024). A qualitative study of Asian American adolescents’ experiences of support during the COVID-19 and racism syndemic. Asian American Journal of Psychology, 15(4), 295–307. https://doi.org/10.1037/aap0000342

Belong To LGBTQ+ Youth Ireland. (2023). 87% of LGBTQ+ youth report hate and harassment online. https://www.belongto.org/87-of-lgbtq-youth-report-hate-and-harassment-online/

Bickham, D. S., Hunt, E., Schwamm, S., Yue, Z., & Rich, M. (2023). Adolescent media use: Mediation and online safety features. Boston, MA: Boston Children’s Hospital Digital Wellness Lab. https://digitalwellnesslab.org/wp-content/uploads/Digital_Wellness_Lab_Pulse_Report_Mar-2023.pdf

Borrás Pérez, P. (2021). Facebook Doesn’t Like Sexual Health or Sexual Pleasure: Big Tech’s Ambiguous Content Moderation Policies and Their Impact on the Sexual and Reproductive Health of the Youth. International Journal of Sexual Health, 33(4), 550–554. https://doi.org/10.1080/19317611.2021.2005732

Cohen, A., Ekwueme, P. O., Sacotte, K. A., Bajwa, L., Gilpin, S., & Heard-Garris, N. (2021). “Melanincholy”: A Qualitative Exploration of Youth Media Use, Vicarious Racism, and Perceptions of Health. Journal of Adolescent Health, 69(2), 288–293. https://doi.org/10.1016/j.jadohealth.2020.12.128

Collins, R. L., Strasburger, V. C., Brown, J. D., Donnerstein, E., Lenhart, A., & Ward, L. M. (2017). Sexual Media and Childhood Well-being and Health. Pediatrics, 140(Supplement_2), S162–S166. https://doi.org/10.1542/peds.2016-1758X

Cooper, R. M., & Blumenfeld, W. J. (2012). Responses to cyberbullying: A descriptive analysis of the frequency of and impact on LGBT and allied youth. Journal of LGBT Youth, 9(2), 153–177. https://doi.org/10.1080/19361653.2011.649616

Council on Communications & Media. (2009). Media Violence. Pediatrics, 124(5), 1495–1503. https://doi.org/10.1542/peds.2009-2146

D’Anastasio, C. (2025). Roblox user group re-creates real-life mass shooting events. Bloomberg. https://www.bloomberg.com/news/articles/2025-04-21/roblox-user-group-re-creates-real-life-mass-shooting-events

Delmonaco, D., & Haimson, O. L. (2023). “Nothing that I was specifically looking for”: LGBTQ+ youth and intentional sexual health information seeking. Journal of LGBT Youth, 20(4), 818–835. https://doi.org/10.1080/19361653.2022.2077883

Dunn, C. B., Coleman, J. N., Smith, P. N., & Mehari, K. R. (2025). Associations Between Adolescents’ Exposure to Online Racism and Substance Use. Journal of the American Academy of Child & Adolescent Psychiatry. https://doi.org/10.1016/j.jaac.2025.01.018

Eckstein, J. (2020). Sensing school shootings. Critical Studies in Media Communication, 37(2), 161–173. https://doi.org/10.1080/15295036.2020.1742364

Elswah, M. (2024). Investigating content moderation systems in the Global South. Center for Democracy & Technology. https://cdt.org/insights/investigating-content-moderation-systems-in-the-global-south/

Elsaesser, C., Patton, D. U., Weinstein, E., Santiago, J., Clarke, A., & Eschmann, R. (2021). Small becomes big, fast: Adolescent perceptions of how social media features escalate online conflict to offline violence. Children and Youth Services Review, 122, 105898. https://doi.org/10.1016/j.childyouth.2020.105898

eSafety Commissioner. (2023). Online hate. https://www.esafety.gov.au/young-people/online-hate

Faverio, M., Anderson, M., & Park, E. (2025). Teens, social media and mental health. Pew Research Center. https://www.pewresearch.org/internet/2025/04/22/teens-social-media-and-mental-health/

FBI. (2018). FBI statement on the shooting in Parkland, Florida. Federal Bureau of Investigation. https://www.fbi.gov/news/press-releases/fbi-statement-on-the-shooting-in-parkland-florida

Fernández-Planells, A., Orduña-Malea, E., & Feixa Pàmpols, C. (2021). Gangs and social media: A systematic literature review and an identification of future challenges, risks and recommendations. New Media & Society, 23(7), 2099–2124. https://doi.org/10.1177/1461444821994490

Gámez-Guadix, M., & Incera, D. (2021). Homophobia is online: Sexual victimization and risks on the internet and mental health among bisexual, homosexual, pansexual, asexual, and queer adolescents. Computers in Human Behavior, 119, 106728. https://doi.org/10.1016/j.chb.2021.106728

Gentile, D. A., Lynch, P. J., Linder, J. R., & Walsh, D. A. (2004). The effects of violent video game habits on adolescent hostility, aggressive behaviors, and school performance. Journal of Adolescence, 27(1), 5–22. https://doi.org/10.1016/j.adolescence.2003.10.002

Hedrick, A. (2021). A Meta-analysis of Media Consumption and Rape Myth Acceptance. Journal of Health Communication, 26(9), 645–656. https://doi.org/10.1080/10810730.2021.1986609

Keighley, R. (2022). Hate Hurts: Exploring the Impact of Online Hate on LGBTQ+ Young People. Women & Criminal Justice, 32(1-2), 29–48. https://doi.org/10.1080/08974454.2021.1988034

Kosciw, J. G., Clark, C. M., & Menard, L. (2022). The 2021 National School Climate Survey: The experiences of LGBTQ+ youth in our nation’s schools. New York: GLSEN. https://www.glsen.org/sites/default/files/2022-10/NSCS-2021-Full-Report.pdf

Liby, C., Doty, J. L., Mehari, K. R., Abbas, I., & Su, Y.-W. (2023). Adolescent experiences with online racial discrimination: Implications for prevention and coping. Journal of Research on Adolescence, 33(4), 1281–1294. https://doi.org/10.1111/jora.12875

Madden, M., Calvin, A., Hasse, A., Lenhart, A., & Hopelab. (2024). A double-edged sword: How diverse communities of young people think about the multifaceted relationship between social media and mental health. San Francisco, CA: Common Sense. https://www.commonsensemedia.org/sites/default/files/research/report/2024-double-edged-sword-hopelab-report_final-release-for-web-v2.pdf

Madigan, S., Villani, V., Azzopardi, C., Laut, D., Smith, T., Temple, J. R., Browne, D., & Dimitropoulos, G. (2018). The Prevalence of Unwanted Online Sexual Exposure and Solicitation Among Youth: A Meta-Analysis. Journal of Adolescent Health, 63(2), 133–141. https://doi.org/10.1016/j.jadohealth.2018.03.012

Martins, N., Erica, S., & Riddle, K. (2024). News exposure, depression, and PTSD symptoms among adolescents in the US: A case study of the Uvalde school shooting. Journal of Children and Media. https://doi.org/10.1080/17482798.2024.2443664

McAlister, K. L., Beatty, C. C., Smith-Caswell, J. E., Yourell, J. L., & Huberty, J. L. (2024). Social Media Use in Adolescents: Bans, Benefits, and Emotion Regulation Behaviors. JMIR Mental Health, 11, e64626. https://doi.org/10.2196/64626

Newhouse, A., & Kowert, R. (2024). Digital games as vehicles for extremist recruitment and mobilization. In L. Schlegel & R. Kowert (Eds.), Gaming and extremism: The radicalization of digital playgrounds (1st ed., pp. 72–94). Routledge. https://doi.org/10.4324/9781003388371-5

Oshin, L. A., Boyd, S. I., Jorgensen, S. L., Kleiman, E. M., & Hamilton, J. L. (2024). Exposure to Racism on Social Media and Acute Suicide Risk in Adolescents of Color: Results From an Intensive Monitoring Study. Journal of the American Academy of Child & Adolescent Psychiatry, 63(8), 757–760. https://doi.org/10.1016/j.jaac.2024.03.009

Owens, K., & Lenhart, A. (2025). The unseen teen: Social platforms and accountability for addressing adolescent well-being. Annals of the New York Academy of Sciences. https://doi.org/10.1111/nyas.15375

Patton, D. U., MacBeth, J., Schoenebeck, S., Shear, K., & McKeown, K. (2018). Accommodating Grief on Twitter: An Analysis of Expressions of Grief Among Gang Involved Youth on Twitter Using Qualitative Analysis and Natural Language Processing. Biomedical Informatics Insights, 10, 1178222618763155. https://doi.org/10.1177/1178222618763155

Przybylski, A. K., & Nash, V. (2018). Internet Filtering and Adolescent Exposure to Online Sexual Material. Cyberpsychology, Behavior, and Social Networking, 21(7), 405–410. https://doi.org/10.1089/cyber.2017.0466

Razi, A., Badillo-Urquiola, K., & Wisniewski, P. J. (2020). Let’s Talk about Sext: How Adolescents Seek Support and Advice about Their Online Sexual Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). https://doi.org/10.1145/3313831.3376400

Reger, R. (2023). Teens tuning in: New Medill survey shows higher-than-expected news engagement among young people. Northwestern Medill Local News Initiative. https://localnewsinitiative.northwestern.edu/posts/2023/09/06/medill-teen-news-engagement-survey/

Robb, M. B., & Mann, S. (2023). Teens and pornography. San Francisco, CA: Common Sense. https://www.commonsensemedia.org/sites/default/files/research/report/2022-teens-and-pornography-final-web.pdf

Rosenblat, M. O., & Barrett, P. M. (2023). Gaming the system: How extremists exploit gaming sites and what can be done to counter them. New York, NY: New York University.  https://bhr.stern.nyu.edu/wp-content/uploads/2024/01/NYUCBHRGaming_ONLINEUPDATEDMay16.pdf

Saha, K., & De Choudhury, M. (2017). Modeling stress with social media around incidents of gun violence on college campuses. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), Article 92. https://doi.org/10.1145/3134727

Schoenebeck, S., Scott, C. F., Hurley, E. G., Chang, T., & Selkie, E. (2021). Youth Trust in Social Media Companies and Expectations of Justice: Accountability and Repair After Online Harassment. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 2. https://doi.org/10.1145/3449076

Tao, X., & Fisher, C. B. (2022). Exposure to Social Media Racial Discrimination and Mental Health among Adolescents of Color. Journal of Youth and Adolescence, 51(1), 30–44. https://doi.org/10.1007/s10964-021-01514-z

Tekinbas, K. S., Grace, T., Jagannath, K., & Larson, I. (2024). Designing care(full) online play communities for youth. In C. James & M. Ito (Eds.), Youth wellbeing in a technology rich world. Retrieved from https://wip.mitpress.mit.edu/pub/6kj51uj4/release/1?readingCollection=2c6e5c06

Tekinbas, K. S., Jagannath, K., Lyngs, U., & Slovak, P. (2021). Designing for youth-centered moderation and community governance in Minecraft. ACM Transactions on Computer-Human Interaction (TOCHI), 28(4), Article 24. https://doi.org/10.1145/3450290

The Learning Network. (2025). What teens are saying about barring children under 16 from social media. The New York Times. https://www.nytimes.com/2025/01/16/learning/what-teens-are-saying-about-barring-children-under-16-from-social-media.html

Thomas, A., Jing, M., Chen, H.-Y., & Crawford, E. L. (2023). Taking the good with the bad?: Social Media and Online Racial Discrimination Influences on Psychological and Academic Functioning in Black and Hispanic Youth. Journal of Youth and Adolescence, 52(2), 245–257. https://doi.org/10.1007/s10964-022-01689-z

Thorn. (2023). LGBTQ+ youth perspectives: How LGBTQ+ youth are navigating exploration and risks of sexual exploitation online. Thorn. https://info.thorn.org/hubfs/Research/Thorn_LGBTQ+YouthPerspectives_ExecutiveSummary_June2023.pdf

Thorn. (2024). Youth Perspectives on Online Safety, 2023. Thorn. https://info.thorn.org/hubfs/Research/Thorn_23_YouthMonitoring_Report.pdf

Thorn. (2025). Commodified sexual interaction involving minors. Thorn. https://info.thorn.org/hubfs/Research/Thorn_CommodifiedSexualInteractionsInvolvingMinors_Apr2025.pdf

Trust and Safety Professional Association (TSPA). (n.d.). What is content moderation? Retrieved from https://www.tspa.org/curriculum/ts-fundamentals/content-moderation-and-operations/what-is-content-moderation/

Tynes, B. M., Willis, H. A., Stewart, A. M., & Hamilton, M. W. (2019). Race-Related Traumatic Events Online and Mental Health Among Adolescents of Color. Journal of Adolescent Health, 65(3), 371–377. https://doi.org/10.1016/j.jadohealth.2019.03.006

Volpe, V. V., Benson, G. P., Czoty, L., & Daniel, C. (2023). Not Just Time on Social Media: Experiences of Online Racial/Ethnic Discrimination and Worse Sleep Quality for Black, Latinx, Asian, and Multi-racial Young Adults. Journal of Racial and Ethnic Health Disparities, 10(5), 2312–2319. https://doi.org/10.1007/s40615-022-01410-7

Wallner, C., White, J., & Regeni, P. (2025). Extremism in gaming spaces: Policy for prevention and moderation. RUSI. https://www.rusi.org/explore-our-research/publications/policy-briefs/extremism-gaming-spaces-policy-prevention-and-moderation

Ward, L. M. (2003). Understanding the role of entertainment media in the sexual socialization of American youth: A review of empirical research. Developmental Review, 23(3), 347–388. https://doi.org/10.1016/S0273-2297(03)00013-3

Yoon, J., Zhang, A. X., & Seering, J. (2025). “It’s Great Because It’s Ran By Us”: Empowering Teen Volunteer Discord Moderators to Design Healthy and Engaging Youth-Led Online Communities. Proceedings of the ACM on Human-Computer Interaction, 9(2), Article CSCW121. https://doi.org/10.1145/3711114

Youth Select Committee. (2025). Youth Violence and Social Media. United Kingdom: House of Commons. https://www.parliament.uk/globalassets/documents/youth-select-committee/hc-999—youth-violence-and-social-media-online.pdf