
Social Media is Failing Us

Social media platforms are sacrificing usability and safety to maximize profits from engagement, a practice known as engagement profiteering.

Engagement profiteering is a business model that social media companies use to generate revenue. It works by building an experience aimed at keeping you using the app as much as possible so that all of your engagement can be tracked. Your time, attention, and perspective are then turned into dollars through targeted ads.

In his blog post for qz.com, Tobias Rose-Stockwell defines engagement as:

  1. The metric by which companies evaluate the number of clicks, likes, shares, and comments associated with their content.

  2. The currency of the attention economy.

The problem with engagement profiteering is that it incentivizes the worst kinds of behavior: spreading misinformation and abuse. That type of content gets the most engagement, and as such gets surfaced by algorithms built to expose you to the most engaging content. Algorithms don't know or care about positive vs. negative engagement - a like is a like, whether it's a picture of a dog or hate speech.
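
To make that concrete, here is a minimal sketch (in TypeScript, with hypothetical names - this is not any platform's actual ranking code) of the kind of engagement-only scoring described above. Nothing in it ever looks at what a post says, so abusive content that draws a flood of replies ranks just as high as a beloved dog photo.

```typescript
// Hypothetical sketch of engagement-based ranking.
// The score is blind to whether the engagement is positive or negative.

interface Post {
  id: string;
  likes: number;
  shares: number;
  comments: number;
}

// A like is a like: nothing here inspects the content of the post.
function engagementScore(post: Post): number {
  return post.likes + post.shares + post.comments;
}

// Surface the most "engaging" content first, regardless of its nature.
function rankFeed(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => engagementScore(b) - engagementScore(a));
}
```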

Social media played a central role in enabling white supremacists to storm the US Capitol Building on January 6th, 2021. Yet social media also acts as a platform that individuals and groups can use to express solidarity and raise awareness of social justice issues.

The question: Are we forced to accept this duality, or is there a better way?

I spoke with experts on social media abuse, moderation, social justice, and accessibility to explore what a safer and more inclusive social media platform might look like.

Control of engagement needs to rest with the user

When we look at Twitter and the control we have over the content we create, it's very clear that Twitter is the real owner. Take the case of one Twitter user who regularly combats white supremacists and white nationalists: it's not uncommon for their tweets to be bombarded with responses from these bigots. That alone is problematic, but it actually gets worse. Because they have no real control over their content, the only action they can take is to report the abuse and block the user. However, this does very little to solve the problem, for a few reasons:

  • The content remains visible to anyone who hasn't blocked the offenders
  • When your content is raided in this manner, you could receive hundreds or even thousands of replies, each of which you would have to report individually
  • Offenders can easily spin up new accounts to bypass blocking

The problem stems from a lack of control over who has access to your content. Because Twitter wants to maximize engagement to maximize profits, it actually benefits from making it nearly impossible to control the conversation around your content.

So what might a safer solution look like then? Well, it all starts with giving content creators more control over their space.

A great example of controlling access to content is Discord's roles and permissions system. Let's take the same scenario of mass abuse on a tweet and apply Discord's feature set to it, with a rough sketch of what such a permission model could look like after the list:

  • By using roles and permissions the content creator could initially limit who could interact with the tweet, preventing mass spam
  • If someone manages to bypass the roles and permissions, the content creator could remove the offending replies entirely, in addition to reporting and blocking the offender
  • By using roles and permissions the content creator could grant moderator roles on their content and get help from the community to remove, report, and block offenders
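
Here is that sketch: a minimal, hypothetical take on tweet-level roles and permissions in the spirit of Discord's system. None of these types or functions correspond to a real Twitter or Discord API.

```typescript
// Hypothetical sketch: Discord-style roles and permissions applied to a tweet.
// All names here are made up for illustration.

type Role = "owner" | "moderator" | "follower" | "everyone";

interface Permissions {
  canReply: boolean;
  canQuote: boolean;
  canRemoveReplies: boolean; // moderation power over the conversation
}

// The content creator decides what each role may do on their post.
const tweetPermissions: Record<Role, Permissions> = {
  owner:     { canReply: true,  canQuote: true,  canRemoveReplies: true },
  moderator: { canReply: true,  canQuote: true,  canRemoveReplies: true },
  follower:  { canReply: true,  canQuote: true,  canRemoveReplies: false },
  everyone:  { canReply: false, canQuote: false, canRemoveReplies: false },
};

// Gate an action before it ever reaches the creator's replies.
function canPerform(role: Role, action: keyof Permissions): boolean {
  return tweetPermissions[role][action];
}

// A stranger trying to mass-reply is stopped up front,
// while a trusted moderator can clean up anything that slips through.
canPerform("everyone", "canReply");          // false
canPerform("moderator", "canRemoveReplies"); // true
```

The point isn't the specific shape of the API; it's that the decision about who can join the conversation sits with the creator rather than with an engagement-maximizing algorithm.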

In order to create a truly safe space the content creator needs to have more control over how people can view, engage with, and share their content.

Abuse must be handled by abuse and trust teams

Facebook boasts a user count of over 1.6 billion, with Twitter coming in at 260 million. Let's assume that only 1% of users fall under one of the 10 Personas Non Grata. That equates to 18,600,000 users who are on these platforms with ill intent.
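
Spelled out, the back-of-envelope math looks like this (the user counts are the figures cited above; the 1% share is this post's assumption):

```typescript
// Back-of-envelope estimate of bad actors across both platforms.
const facebookUsers = 1_600_000_000; // "over 1.6 billion"
const twitterUsers = 260_000_000;    // 260 million
const badActorShare = 0.01;          // assumption: 1% fall under a Persona Non Grata

const badActors = (facebookUsers + twitterUsers) * badActorShare;
console.log(badActors); // 18,600,000
```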

The role of Artificial Intelligence and Machine Learning in reporting abuse

Not every problem can be solved with technology, and social media abuse reporting is definitely one of those problems. As Rua Williams discusses in their post, when it comes to dealing with context and nuance, no amount of model training is going to be able to decipher human emotion or accurately judge harm.

We can't rely on technology to moderate or respond to abuse.

Abuse can take on too many forms to model it well enough to not cause more harm than it prevents. We see the shortcomings of automated abuse reporting daily on Twitter and Facebook.

So if technology can't get us there, can the community?

The role of community sourced abuse management

Another common approach to handling abuse at scale is by allowing community members to flag and sometimes moderate content. Platforms like StackOverflow, Reddit, and now Twitter all have systems like this.

The problem with community-led initiatives is that what is considered abuse can change as your community grows or changes. In many cases these systems actually create a gate that moderators or admins can use to push others out. It's not uncommon to hear stories of how people were treated poorly on StackOverflow or other moderator-led platforms.

Toxic influencers will weaponize a platform's tools for abuse reporting and management and turn it against the very community those tools are supposed to protect. It's often marginalized people who suffer the most from these systems.

You need paid teams to evaluate abuse based on an open standard and within a specific amount of time

The only true way to minimize harm is to pay humans to respond to abuse reports within a given SLA. It's funny how in tech we care about uptime for our services and response times for support issues, but we don't apply that same expectation to social media platforms when it comes to abuse.

And let's be real, not being able to properly handle abuse is a system failure.
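
As an illustration only, here is what a published abuse-response SLA might look like if platforms treated it the way they treat uptime and support targets. Every severity label and number here is hypothetical, not a real policy from any platform.

```typescript
// Hypothetical abuse-response SLA, modeled on the uptime/support SLAs
// tech companies already publish. Numbers are illustrative only.

interface AbuseSla {
  severity: "imminent-harm" | "targeted-harassment" | "hate-speech" | "spam";
  firstHumanReviewWithinHours: number; // reviewed by a paid human, not a model
  resolutionWithinHours: number;
}

const abuseSla: AbuseSla[] = [
  { severity: "imminent-harm",       firstHumanReviewWithinHours: 1,  resolutionWithinHours: 4 },
  { severity: "targeted-harassment", firstHumanReviewWithinHours: 4,  resolutionWithinHours: 24 },
  { severity: "hate-speech",         firstHumanReviewWithinHours: 8,  resolutionWithinHours: 48 },
  { severity: "spam",                firstHumanReviewWithinHours: 24, resolutionWithinHours: 72 },
];
```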

How abuse is handled also needs to be an open and transparent process. It's the only real way to ensure that biases are minimized and people are treated as equally as possible. Ideally this would be an open source standard contributed to by a large community of diverse individuals. A great example of this type of communal definition is selfdefined.app.

From the Self Defined website: "Self-Defined seeks to provide more inclusive, holistic, and fluid definitions to reflect the diverse perspectives of the modern world."

We need an inclusive and holistic policy for dealing with abuse.

Conclusion

Engagement profiteering causes social media platforms to prioritize engagement over the safety of their communities by limiting the control users have over their content and how it's engaged with. Current approaches to dealing with abuse at scale within an engagement profiteering business model have fallen short of providing the safety needed for a truly inclusive space.

This post is part of the Project Forth series. In future posts we'll look at monetization without engagement profiteering, protecting content creators, and accessibility.