
For the last few years, I have worked in different corners of what can generally be dubbed the content moderation industry: first as a contractor, then as an employee for well-known social media platforms, where day-to-day work is intimately linked to crucial social outcomes in different parts of the world. The decisions made in that industry can determine what counts as a real identity, what level of nudity is deemed socially acceptable, and even which images of war will make it into society's collective memory. These decisions are often controversial and, for good reasons, should be openly debated and constantly questioned.

Long before the content moderation practices of Silicon Valley made mainstream headlines, many specialised organisations at the intersection of technology and human rights were already challenging the social media industry on these practices, demanding more transparency and better accountability processes. Despite that, the debate remains framed by pervasive myths that dilute reality and further obfuscate the industry's more questionable practices.

These myths are one reason to continue the push for greater transparency in content moderation, not just in terms of value-driven intentions, but also, and mostly, in terms of processes, labour practices, and implementation. I may be naive, but I still believe there are people in the industry trying their best to support users and keep them safe. Far too often, however, they are caught in the middle of a debate between industry leaders and the public that is somewhat disconnected from the reality of their work. With that in mind, I want to insist on five aspects of content moderation that should remain at the forefront of the mainstream discussion.

Continue reading at GenderIT.org.
