Trauma-informed social media
Many people around the world experience harm online, especially on social media. This harm can be traumatic. Some examples include:
hate speech
doxxing
stalking
impersonation (pretending to be someone else)
public humiliation
bullying and harassment
Traumatic content affects even those who moderate and design platforms.
A large body of research shows that experiencing or witnessing trauma online can lead to devastating outcomes, such as:
job loss
feeling unsafe
becoming anxious or depressed
and even suicide
Let's imagine a better world, one where social media platforms promote healing and well-being instead of merely reacting to harm or trauma after it occurs. Let's also imagine design and moderation decisions that consider everyone involved and, in doing so, try to prevent trauma and re-traumatization.
This concept is called "Trauma-informed social media."
What is trauma-informed social media?
Trauma-informed social media applies six key principles, which are:
safety (physical and emotional)
trustworthiness and transparency
peer support
collaboration and mutuality
empowerment, voice, and choice
cultural, historical, and gender issues
Trauma-informed social media seeks to make platforms safer and more supportive for all. "For all" includes not only users but also the people who design and moderate the platforms. It also seeks to shift the burden of healing and care away from individual users and onto the platforms and the companies that run them.
Trauma-informed designers, moderators, and companies recognize that most people have a trauma history and that trauma can affect people in many different ways. Because of this, they know it is essential to watch for signs of trauma and to respond in ways that avoid causing further harm and resist re-traumatization.
What are some examples?
Trauma-informed social media design pairs well with other common methods like human-centered design. It can guide the creation of tools that help users track and filter out material they tag as traumatic, much like an email spam filter. Trauma-informed platforms could also offer gentle guidance: reminders that encourage users to (re)create content that is sensitive to other users' needs, helping to prevent re-traumatization.
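To make the spam-filter analogy concrete, here is a minimal sketch of what a user-controlled trauma filter could look like. Everything in it (the Post and TraumaFilter names, the tag-matching logic) is a hypothetical illustration, not any real platform's API.

```python
# Hypothetical sketch: a per-user content filter, analogous to an email
# spam filter. Names and structure are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set[str] = field(default_factory=set)  # labels applied by users

@dataclass
class TraumaFilter:
    """Per-user filter: hides posts carrying tags the user has muted."""
    muted_tags: set[str] = field(default_factory=set)

    def mute(self, tag: str) -> None:
        # The user, not the platform, decides what gets filtered out.
        self.muted_tags.add(tag.lower())

    def allows(self, post: Post) -> bool:
        # Show a post only if none of its tags match the user's muted list.
        return not (self.muted_tags & {t.lower() for t in post.tags})

# Usage: a user mutes a tag, and their feed is filtered before display.
feed = [
    Post("a", "Nice sunset today", tags={"nature"}),
    Post("b", "Graphic footage ...", tags={"violence"}),
]
my_filter = TraumaFilter()
my_filter.mute("violence")
visible = [p for p in feed if my_filter.allows(p)]
print([p.text for p in visible])  # -> ['Nice sunset today']
```

The key design choice in this sketch is that the muted list belongs to the user rather than the platform, which reflects the principle of empowerment, voice, and choice: content is hidden for those who ask, not removed for everyone.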
Trauma-informed content moderation requires trustworthiness and transparency. Moderators can be more transparent by explaining why a post was removed, and they can weigh the cultural, historical, and gender context of posts. Co-creating these policy decisions with users is vital: it fosters collaboration and mutuality and supports empowerment, voice, and choice. Moderators should also receive (peer) support, which can help prevent secondary trauma.
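As one illustration of that transparency, a removal notice could name the specific rule, give a plain-language reason, and offer an appeal route. This sketch is purely hypothetical; the field names are assumptions, not drawn from any real moderation system.

```python
# Hypothetical sketch: a transparent moderation notice.
from dataclasses import dataclass

@dataclass
class RemovalNotice:
    post_id: str
    rule_violated: str  # the specific policy, not just "violated our rules"
    explanation: str    # plain-language reason, written for the author
    appeal_url: str     # a concrete route to contest the decision

def notify_author(notice: RemovalNotice) -> str:
    # Compose a message that explains the decision instead of hiding it.
    return (
        f"Your post {notice.post_id} was removed under the rule "
        f"'{notice.rule_violated}'. {notice.explanation} "
        f"You can appeal here: {notice.appeal_url}"
    )

print(notify_author(RemovalNotice(
    post_id="12345",
    rule_violated="No doxxing",
    explanation="It shared another user's home address.",
    appeal_url="https://example.com/appeals/12345",
)))
```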
Social media companies must adopt these values and practices throughout their organizations. Trauma-informed companies offer a more caring and supportive environment for everyone who works in them, and that care makes the work itself more sustainable.
Trauma-informed companies create a culture that is mindful of and responsive to trauma. They also embrace diversity, equity, inclusion, and accessibility (DEIA), and all employees align with the six principles. The Missouri Model is a helpful framework here: it guides organizations step by step from being trauma pre-aware to building and maintaining a fully trauma-informed approach.
Want to learn more? Read the full paper and watch the YouTube clip below.