Content moderation programs that prioritize people will be key to the survival of diverse, interactive online communities.

Trust, safety, and content moderation were once largely invisible aspects of the internet. Some large platforms understood the importance of this work, while others implemented it only out of obligation or in response to litigation. Most users rarely gave a second thought to the people or processes that kept them safe and sane during their daily use of technology.

Over the past few years, there has been a growing emphasis on the importance of this work. The conversation has shifted toward trust and safety in nearly every domain: politics, medicine, sports, finance. You may find some of these topics emerge with family and friends at holiday meals, for example, and you may find that you could use some moderation.

As the saying goes, if you build intersections, you will have collisions. Social media is designed for exactly this: it engineers intersections and provides ample space for the collisions that fuel engagement. Application design and targeted content algorithms encourage users to engage on their social channels, and negative posts have been shown to generate significantly more engagement than their positive counterparts.

Gaming platforms have created diverse communities of users who connect far beyond their gaming interests. Video games offer common ground for players to initiate connections, and those connections can grow into friendships outside of gaming altogether. These communities also expose members to nefarious behavior, however, and may require the same mitigation and moderation that is driving social media platforms to rethink their approach to trust and safety.

Similarly, ecommerce platforms have become popular avenues for third-party sellers to mislead users with fake products while fraudulently charging for the real thing. Fake user reviews and fabricated star ratings produced by review farms deceive consumers into believing scam products or services are highly rated. Fraudulent third-party sellers also run elaborate phishing schemes through these platforms, tricking users into submitting private information.

Prioritizing People

Working with software and gaming clients, I have supported a great deal of design, development, and data-related work in recent years. While those areas have expanded tremendously, nothing has come close to the growth we have seen in the trust and safety space. To meet this increasing demand, we recommend a ‘people first’ approach: a human-centered trust and safety practice. Taking care of people and ensuring they have what they need to be successful is the first thing we assess when looking for ways to improve outcomes.

Many trust and safety solution providers focus solely on the AI software and tools that do the initial screening but fail to consider the teams at the heart of the investigations. Our view is that if we put people at the center of these solutions, they can moderate effectively at scale without the personal damage and emotional fallout we have seen result from other models.

Wellness and resiliency are the bedrock of our trust and safety programs and our people-first philosophy. Trust and safety solutions must be designed to scale while protecting the individuals delivering them. At Apex, we do this through best-in-class resiliency training and a comprehensive wellness program that includes:

  • Reactive Wellness – Practicing reframing and state awareness: shifting perspective on stressful experiences to recognize their positive impact and consciously evaluating mental and emotional stress
  • Proactive Wellness – Employing preventative routines and scheduled self-care
  • Preventative Wellness – Prioritizing physical health, emotional health, social connectedness, and sense of self
  • Team Wellness – Building connections and minimizing the impact of the work on the team as a whole

The Importance of Establishing a Policy

Protecting users, moderators, and partners while enabling the business to thrive is quite a balancing act. It starts with policy: no amount of engineering budget can circumvent the fact that an established policy is the backbone of your trust and safety program.

Why does your product or service exist, and who is it for? How does your policy protect your company's core beliefs? How does enforcement protect your customers physically, mentally, and emotionally? The answers to these questions will shape the interactions to come, as well as the technology that enables your work. Some platforms have been shown to make users measurably less happy, contribute to body dysmorphia among teens, and create environments that spill over into bullying in the physical world. Does your policy guard against these outcomes, or contribute to them?

Once the policy is set, our framework for establishing a trust and safety program includes three primary work streams:

  • Machine Learning and Artificial Intelligence (AI)
  • User Reported Content Moderation
  • Proactive Human Review

Framework for Trust and Safety Success

Machine learning and artificial intelligence (AI) technology enables the work our people do. These tools allow investigators to scale, and they are engineered and trained to remove the majority of content that violates a platform's policy. As the machines systematically remove harmful content, wellness improves for all stakeholders.
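To make this concrete, here is a minimal sketch of what such an automated triage step might look like, assuming a trained classifier that returns a probability of a policy violation. The thresholds and the stand-in score_content function are illustrative assumptions, not a description of any particular platform's tooling.

```python
# Hypothetical triage step: a classifier confidence score decides whether
# content is removed automatically, queued for a human moderator, or
# published. Thresholds and the stand-in model are assumptions.

AUTO_REMOVE_THRESHOLD = 0.95   # very likely a policy violation
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain; a person should decide

def score_content(text: str) -> float:
    """Stand-in for a trained policy classifier returning P(violation)."""
    flagged_terms = {"scam", "threat"}            # toy example only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(text: str) -> str:
    score = score_content(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"                      # machines handle clear-cut cases
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"           # people handle ambiguous cases
    return "publish"

print(triage("this looks like a scam and a threat"))  # -> auto_remove
```

Routing only the ambiguous middle band to people is one way automation can reduce the volume of harmful material a moderator ever has to see.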

User-reported or community-flagged moderation is vital to any online community; it is the neighborhood watch of the internet. These policy infractions were missed or incorrectly ruled on by the machine learning tools and need human intervention. Reported items are routed to moderation teams, who investigate and make a ruling. Those rulings are then fed back into the machine learning models and, over time, help train them to make better decisions. Communities differ from platform to platform and game to game, localization is required for different markets, and getting this right requires input from all stakeholders.
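A minimal sketch of that feedback loop, assuming a simple in-memory queue, might look like the following. The Report, TrainingExample, and ModerationQueue names are illustrative, not a real moderation API.

```python
# Hypothetical feedback loop: reported items are reviewed by moderators,
# and each ruling becomes a labeled example for retraining the screening
# model. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Report:
    content_id: str
    text: str
    locale: str                  # communities and policy nuances differ by market

@dataclass
class TrainingExample:
    text: str
    label: str                   # "violation" or "allowed", as ruled by a human

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)
    training_data: list = field(default_factory=list)

    def submit(self, report: Report) -> None:
        self.pending.append(report)              # routed to a moderation team

    def record_ruling(self, report: Report, ruling: str) -> None:
        # The human decision becomes ground truth for the next training cycle.
        self.training_data.append(TrainingExample(report.text, ruling))

queue = ModerationQueue()
report = Report("123", "suspicious giveaway link", locale="en-US")
queue.submit(report)
queue.record_ruling(report, "violation")
print(len(queue.training_data))  # 1 labeled example available for retraining
```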

Proactive human review involves leadership and quality teams who audit and review policy decisions. This includes hash validation as well as reviewing and validating results from machine learning and automation scans. This work stream also covers proactive review of public platforms, moderation of violent or extreme content, and other offenses that have evaded automated interventions.
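As a rough illustration of hash validation, the sketch below checks an item's hash against a list of hashes of previously confirmed violations. Production systems typically rely on perceptual hashing for near-duplicate images and video rather than the exact SHA-256 match assumed here.

```python
# Hypothetical hash validation: known violating material is removed by hash
# match so reviewers only see items automation could not rule on. The hash
# list below is an illustrative assumption.

import hashlib

KNOWN_VIOLATION_HASHES = {
    # SHA-256 of the bytes b"test", used here purely as a placeholder entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_validate(data: bytes) -> str:
    """Return 'known_violation' on a hash match, else route to human review."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_VIOLATION_HASHES:
        return "known_violation"      # removable without exposing a reviewer
    return "needs_human_review"

print(hash_validate(b"test"))         # -> known_violation
```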

Summary

As the need for content moderation continues to grow, establishing the people, processes, and technology to navigate platform ‘collisions’ will be key to longevity. How will trust and safety practices adapt to the metaverse, augmented reality, and massive online events? Moderation programs built on a people-first foundation of wellness and resiliency will be vital to the success of these online communities.