
Is Machine Learning the Future of Content Moderation?

A decade ago, smartphones hit the mainstream with the launch of the first iPhone. That’s around the same time that masses of people started to adopt social media channels as part of their everyday interactions. Facebook was already a big part of how college students, teens and the tech-savvy communicated, but the burgeoning era of the “PC in our pockets” opened up a whole new world of connectivity, for everyone.

Today, social media sees new posts every millisecond, and user-driven content is changing everything about the way people interact: the way they keep in touch, shop, find news stories, check the weather, consume entertainment (the “YouTube Star” is no joke) and so much more. Social media has transformed every small business owner into a CEO, sales executive and marketer, even the ones selling leggings and health shakes to us every day via our news feeds.

The global “wild wild west”

One of our greatest communication challenges today is that this modern world of social interactions still operates, in many ways, like a global “wild wild west.” It’s largely a place where anyone can say just about anything, often with perceived anonymity, and there’s no clear barrier to “the bad stuff” like hate speech, cyberbullying, lewd and lascivious messaging and security threats.

Content moderation is designed to provide that barrier and make social media platforms a safe place for content to be created and consumed. It is still a largely manual process, however, and not sophisticated enough to catch everything.

Facebook, for example, has 7,500 employees whose sole job is to monitor content. This job didn’t exist five years ago, and now entire departments are devoted to making sure the content the public consumes is in line with security and privacy policies.

How advanced content moderation tools can help

In an industry that clearly recognizes the need for greater controls, social media companies are embracing new content moderation technology that will help shield people (especially our children) from harmful content, fake clickbait-style news and hate speech. For example, AI-based text and speech analytics can search for context clues and listen for harmful and illegal content, taking much of the burden off the manual side of policing.
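To make the idea concrete, here is a minimal sketch of how such text analytics might work, assuming a simple statistical classifier that scores posts by their word context and routes uncertain cases to a human reviewer. The toy training examples, thresholds and routing labels are invented for illustration; they are not taken from any real moderation system.

```python
# A minimal sketch of AI-assisted text moderation, assuming a simple
# statistical classifier: posts are scored for risk based on word context,
# high-risk posts are blocked, uncertain ones go to a human reviewer.
# The toy training data and thresholds below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus (1 = harmful, 0 = acceptable); real systems train on
# millions of human-reviewed examples.
posts = [
    "I will find you and hurt you",
    "you are worthless and everyone hates you",
    "great product, shipped fast, would buy again",
    "does anyone know when the next update comes out?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def triage(post: str) -> str:
    """Route a new post based on the model's risk score."""
    risk = model.predict_proba([post])[0][1]   # probability of the "harmful" class
    if risk >= 0.8:
        return "block"          # confident it's harmful
    if risk >= 0.4:
        return "human_review"   # uncertain: escalate to a moderator
    return "publish"            # confident it's fine

# With a corpus this small the scores hover near 0.5, so most posts land in
# human review; the point is the workflow, not the accuracy.
for text in ["you are worthless", "love this phone case"]:
    print(text, "->", triage(text))
```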

Not only does this technology assist in moderating existing content, it also has the potential to prevent unsavory posts from appearing in the first place. Consider an unchecked employee who writes an inappropriate description for a product sold online, or a consumer who posts a review laced with foul language that goes viral before anyone sees it coming. By getting ahead of these potential problems with advanced moderation tools, businesses can save themselves from the kind of brand management blunders that cost revenue and ruin reputations.
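As a loose illustration of getting ahead of the problem, the sketch below shows a hypothetical pre-publication gate: every post is scored before it is stored, and anything over a risk threshold is held back instead of going live. The scoring function, threshold and in-memory store are placeholders, not any vendor's real API.

```python
# Hypothetical pre-publication gate: content is checked *before* it is saved,
# so an offensive product description or review never appears publicly.
# `moderation_score` is a toy stand-in for a real analytics service.
def moderation_score(text: str) -> float:
    """Placeholder risk score in [0, 1]; a real system would call a trained model."""
    flagged = {"idiot", "stupid"}                 # toy word list, illustration only
    return 1.0 if set(text.lower().split()) & flagged else 0.0

def publish(post: str, store: list, threshold: float = 0.5) -> bool:
    """Persist the post only if it clears the moderation check."""
    if moderation_score(post) >= threshold:
        return False                              # held for human review, never shown
    store.append(post)                            # safe: make it visible
    return True

feed: list = []
print(publish("Only an idiot would buy this blender", feed))        # held back
print(publish("Sturdy blender, quiet motor, easy to clean", feed))  # published
```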

Further on the horizon, computer vision will scale the content moderation process even more. Right now, we have sophisticated technologies available to target specific words and problematic images, but the video moderation component still largely relies on intense human intervention.

Because images and video are so nuanced, there will likely be a human element to moderation for a long time to come. However, advances underway will help machines view video footage (even streaming footage) in context to recognize potentially problematic content. That way, overgeneralized moderation won’t inadvertently remove good or important content. On the flip side, harmful content won’t slip through the cracks, because machines will fully understand the content they’re “seeing.” Whereas sweeping censorship would blindly remove content about cancer research simply because it contains words used to describe body parts, intelligent analysis technologies will recognize that this information is safe and important for public consumption.
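To make that contrast concrete, here is a toy comparison of blanket keyword censorship with a context-aware check, in the spirit of the cancer-research example above. The word lists and the rule are invented stand-ins for what would, in practice, be trained language and vision models.

```python
# A toy contrast between blanket keyword censorship and context-aware
# moderation. The keyword list, the medical-context terms and the scoring
# rule are all invented for illustration; real systems use trained models
# rather than hand-written lists.
FLAGGED_TERMS = {"breast", "prostate"}          # anatomical words a naive filter bans
MEDICAL_CONTEXT = {"cancer", "screening", "oncology", "diagnosis", "research"}

def naive_filter(text: str) -> bool:
    """Blanket approach: remove anything containing a flagged word."""
    return bool(set(text.lower().split()) & FLAGGED_TERMS)

def context_aware_filter(text: str) -> bool:
    """Keep content whose flagged words appear in a clearly medical context."""
    words = set(text.lower().split())
    if not words & FLAGGED_TERMS:
        return False                             # nothing sensitive at all
    # Flag only when sensitive words appear *without* medical context.
    return not (words & MEDICAL_CONTEXT)

article = "new research improves early breast cancer screening outcomes"
print(naive_filter(article))          # True  -> the naive filter would remove it
print(context_aware_filter(article))  # False -> context-aware check lets it through
```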

No simple fix

Though the use of AI is growing in the field of content moderation, social media companies have been cautious in the way they adopt these technologies for a variety of reasons, two of which I’ll mention briefly.

For one, the field of content moderation is relatively new. Though social media has been around for over a decade, the volume of user-generated content has never been greater, and it grows every day. Since the 2016 elections, “fake news” has become the moniker of our time, and with it comes greater responsibility than ever before. Gone are the days when social media platforms were viewed simply as channels for communication. Now, increased scrutiny and responsibility are being placed on the likes of Facebook, Twitter and YouTube to weed out misinformation and prevent its viral spread. For some recent insight into these complexities, check out this post from Princeton’s Freedom to Tinker blog.

Another reason for caution is that, when dealing with the nuances of human expression, one can argue that the need for human intervention will never go away. Within this framework, social media companies today are looking for the right mix of smart tools and smart people to moderate the content that comes across their channels as efficiently as possible. With growing evidence of how taxing content moderation work has become, both emotionally and physically, smart social media companies will embrace available and emerging technologies to shoulder the burden of mitigating both content risks and the occupational risks to the moderators themselves.

A safer, more connected future

Advances in content moderation technology are helping to create a safe place for user-driven content online. Though there is no “one size fits all” fix, the use of technology has a valuable role to play. Smart content moderation will help create a safer, more positive social environment for everyone. It will also help organizations protect their brand reputations while keeping an open, omnichannel dialogue with the consumers they serve. 

About the Author

Erica Ong is the Vice President of Customer Experience Management Sales for Conduent. For over 20 years, Erica has successfully worked across all industries to design, implement and operate global Customer Service, Sales, Retention, Collections and Technical Support solutions across multiple channels that increase revenue and drive customer loyalty for her clients. She has a strong background in business development, analytics and operations. During her 19-year tenure with Conduent, Erica has been a General Manager responsible for delivery, growth, client satisfaction and financial performance for Customer Experience solutions provided by Conduent, and has filled many leadership positions in the CX space. Erica is passionate about implementing digital solutions to improve the human experience and building diverse workforces to drive positive change.
