In June 2023, we launched the Inspired Internet Pledge, a commitment by tech companies and the industry at large to work collaboratively toward one common goal: making the internet a healthier place for young people. At its core, the Pledge aims to connect mental health and emotional wellbeing to technology and media in actionable ways.
Signatories to the Pledge commit to three central principles and will design a measurable action plan to implement their commitment in ways that are relevant for the contextual realities of their platform or service. We will convene signatories annually to review progress made and to commit to an updated set of actions for the coming year.
True to our belief that, by following the science, we can create an empathetic and respectful world in which our kids can grow up healthy, smart, and kind, we designed the Pledge to be impactful based on our current understanding of research on young people’s digital wellness. Pinterest, as a founding signatory, worked to ensure that the principles are relevant and actionable within the tech and media industry context. What follows is a brief summary of some of the literature underpinning the Pledge principles.
To learn more and to join the movement, a company decision-maker can submit their interest at inspiredinternet.org.
The Inspired Internet Pledge
As leading designers, producers, and managers of interactive online products, platforms, and services accessed by teens and young adults, signatories commit to taking meaningful, measurable actions to do the following in service of supporting more positive mental and emotional wellbeing outcomes for everyone, and especially young people.
Principle 1: Tune for emotional wellbeing
We commit to understanding which actions and content correlate with wellbeing outcomes on our platform. We will use this to inform how we build and evolve products and services that support healthier experiences on and offline.
The U.S. Surgeon General’s recent advisory on social media and youth mental health made clear that, even at the highest levels of our government, there are deep concerns about how interactive media may affect children, teens, and young adults (Office of the Surgeon General, 2023). While the exact mechanisms that create the conditions for negative mental health outcomes are not yet known, research indicates that some types of use (such as social comparison) and some pre-existing conditions (such as prior diagnoses of anxiety or depression) can increase the likelihood of negative outcomes (Seabrook, Kern, & Rickard, 2016). Some research suggests that media, including social media, may affect individuals very differently (Orben et al., 2022; Beyens et al., 2020; Valkenburg & Peter, 2013).
Recently, there has been heightened concern about constant scrolling. People can spend ever-increasing amounts of time on social media and are often unhappy with how long they have actually spent on an app, even when they only meant to quickly check their content (e.g., Baughan et al., 2022; Tran et al., 2019). For many, this is especially concerning when considering the effect on young people. Some research suggests that adolescents who spend increasing amounts of time with digital media may experience negative effects on important developmental factors such as academics, body image, and sleep (see Adelantado-Renau et al., 2019; Fardouly & Vartanian, 2016; LeBourgeois et al., 2017).
In discussions of what we know, and don’t know, about how interactive media affect young people’s mental health, many cite concerns about the algorithms and application designs built into apps and platforms. These design choices may lead users to view content for longer periods of time and to feel that they lack agency over their choices to remain engaged with that content (Baughan et al., 2022; Cho et al., 2021). Design features that keep people engaged are present even in media used by young children (Radesky et al., 2022).
Better understanding young people’s perceptions of digital application designs and personalized algorithmic content may help tech companies make choices that empower youth to feel in control of their media habits and to understand the choices tech companies are making (Bell et al., 2023; Eg et al., 2023; Perez Vallejos et al., 2021). Similarly, by better understanding the ways each app and platform affects people’s wellbeing, tech companies can make design, marketing, and user support choices that maximize positive outcomes and minimize negative ones.
Principle 2: Listen to and act on insights from people who have experienced harm online
We commit to listening to and learning from those who have experienced harm online and the experts who support these communities, to inform the evolution of our policies and product.
Harassment and harm online are unfortunately commonplace. The ADL reported that, as of 2021, 41% of Americans had experienced harassment online in the past year. A third of respondents (33%) indicated that the harassment was identity-based, and another third (33%) that it was based on their appearance; respondents believed they were targeted for their occupation (14%), disability (12%), or religion (21%). Nearly half (45%) of LGBTQ+ respondents were harassed because of their sexual orientation, and 35% of women because of their gender (ADL, 2021). A fifth (22%) of LGBTQ+ teens and young adults have reported witnessing potentially distressing content, including racist, sexist, or anti-LGBTQ+ speech online (Thorn, 2023).
Understanding how to mitigate harm, especially for those most disproportionately affected, is a core goal of research. Youth who identify as female and/or LGBTQIA+ have been subject to disproportionate harm online in the form of cyberbullying (Abreu & Kenny, 2018; NCES, 2022). Research has shown that both direct and vicarious online racial discrimination can have negative mental health effects on African American and Latinx adolescents (Tynes et al., 2020). Unfortunately, many victims of cyberbullying, and many bystanders to it, explain that they do not report incidents to a trusted adult or to the platform or app because they don’t believe anything will be done or because they fear being socially penalized (Bickham et al., 2023; Biernesser et al., 2023; Blumenfeld & Cooper, 2010; Cassidy et al., 2013).
Currently, there is concern that existing forms of content moderation do not provide adequate support systems for victims of online harm (Schoenebeck et al., 2021). Still, recent efforts seek to include these groups in developing solutions, and to ensure they feel safe discussing their experiences, by integrating methods such as restorative justice and participatory design that center victims’ experiences in order to create meaningful change in safety processes (e.g., Chatlani et al., 2023; Xiao et al., 2022). Young people have ideas for discouraging cyberbullying and harassment online, and researchers have sought their insight and experimental ideas (e.g., Ashktorab & Vitak, 2016; Biernesser et al., 2023), but these ideas have not necessarily been embedded into the systems and products that could energize change.
Principle 3: Share lessons collaboratively across the tech industry
We commit to sharing best practices, key research findings, and creative solutions across the industry to make the internet a healthier place for everyone – especially young people.
There have been recent debates about the value of researchers collaborating with tech companies. Concerns center on the importance of independence and transparency in research, while the debates also shed light on the need for data and on opportunities for more rapid beneficial impact (Livingstone et al., 2023; Shotbolt, 2023).
Anecdotal and consultancy evidence makes the case for collaborative sharing within and across the industry. Smaller, more nimble companies can help larger companies innovate, while larger, more established companies can offer wisdom and connections to their smaller collaborators (Brown et al., 2021). Collaboration within and across companies has been proposed as a way to encourage more ethical, user-focused behaviors and decisions (Banfield, 2020).
Greater cooperation, and a reduction in automatic distrust, between researchers, organizations, and corporations is the most likely path to generating innovative solutions to the tech-enabled challenges facing us all (Stray & Hadfield, 2023).