
Abbey's Share: Ye and the Limits of Free Speech Online

Updated: Mar 31


Thank you to our lovely Abbey for sharing this piece. Below are her thoughts & reflections on it.

In recent years, social media companies have themselves engaged in "content moderation", building robust internal monitoring tools with a view to making the online world safer for end-users. However, following the recent US election, social media companies including Meta have scaled back content moderation to tip the scales in favour of freedom of speech, instead relying on tools that ask users themselves to flag or fact-check content. Does relying on users to "flag or fact-check content [shift] the onus onto victims of harassment or misinformation", as the article suggests? Should social media companies continue to "invest in professionals who understand cultural context, language nuances and how threats evolve online"? Or did earlier interventions by social media companies inhibit the exercise of freedom of speech, a right fundamental to civic democracies?

In large part, I agree with the article's premise: content moderation by the platforms themselves was fundamental to the architecture of the online world, and it served an important function that users cannot simply replace. The author, herself a former safety moderator at Meta, also makes the point that moderation is an important use of the resources these platforms have.


