Platforms such as Instagram, Facebook, and TikTok have become everyday staples for many people, who use them to communicate, to be entertained, to consume news, and to form (political) opinions. This makes it all the more important that social media platforms do not operate solely according to commercial logic. Currently, attention-grabbing content spreads particularly fast on social media: posts that evoke strong emotions, oversimplify, or polarize are more likely to be amplified – including digital violence and disinformation.
These negative effects are not accidental – and, more importantly, they can be corrected. In its new publication »Safety by Design – Pathways to Safer Social Media Platforms«, HateAid outlines where policymakers and platforms need to take action. The digital human rights organization, which we have supported for years, argues that if social media platforms were designed from the outset according to the principle of »safety by design«, there would be less digital violence, less disinformation, and fewer addictive features that encourage endless scrolling and video consumption. After all, social media platforms are human-made products – much like colored pencils, beta blockers, or e-bikes. Their technical architecture, design, and algorithms are the result of deliberate decisions.
The newly published study is based on expert reports by digital law specialist Michael Denga of the Business & Law School Berlin and researcher Caroline Sinders. It outlines 200 concrete technical and regulatory measures to make social media safer, many of which should be implemented as early as a platform's design stage.
Platform operators generally claim they are already reducing risks, often pointing to the European Digital Services Act; in practice, however, they tend to remove or restrict problematic content only after it has spread. According to the study, it would be far more effective to counter these harmful dynamics at the design stage of social media services – for example, by integrating filters for harmful or disturbing content or by introducing minimum age requirements for platform use.
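To make the idea concrete, here is a minimal Python sketch of what such design-stage safeguards could look like at account creation: a minimum-age gate and protective defaults that are switched on from the start, rather than moderation applied after the fact. The age threshold, setting names, and defaults are hypothetical illustrations, not specifications from the study or any real platform.

```python
# Purely illustrative sketch of »safety by design« defaults at account creation.
# All names, thresholds, and policies are hypothetical examples, not taken from
# the HateAid study or any real platform API.

from dataclasses import dataclass
from datetime import date

MINIMUM_AGE_YEARS = 16  # hypothetical threshold; legal requirements differ by jurisdiction

# Protective settings applied by default, instead of leaving safety opt-in.
SAFE_DEFAULTS = {
    "sensitive_content_filter": True,   # filter harmful or disturbing content by default
    "autoplay_videos": False,           # discourage endless video consumption
    "infinite_scroll": False,           # discourage endless scrolling
}


@dataclass
class Account:
    username: str
    birthdate: date
    settings: dict


def years_between(born: date, today: date) -> int:
    """Full years elapsed between two dates."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))


def create_account(username: str, birthdate: date) -> Account:
    """Refuse underage sign-ups and apply protective defaults up front."""
    if years_between(birthdate, date.today()) < MINIMUM_AGE_YEARS:
        raise PermissionError("User does not meet the minimum age requirement.")
    return Account(username, birthdate, settings=dict(SAFE_DEFAULTS))


if __name__ == "__main__":
    account = create_account("example_user", date(2005, 4, 12))
    print(account.settings)
```

The point of the sketch is the ordering: the safety decision happens before any content is shown, which is what distinguishes design-stage prevention from after-the-fact content removal.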
The study puts forward two key recommendations: safety must become a central component of platform architecture, and the Digital Services Act must be implemented and enforced more consistently. In addition, HateAid calls for platform executives to be held personally liable. At present, for example, Meta CEO Mark Zuckerberg is facing a lawsuit brought by a user who alleges that the company’s platforms were deliberately designed to foster addiction. A ruling is still pending.
Read the study »Safety by Design« by HateAid here.