Case studies

Our abuse intelligence system has been proven in real-world industry and government settings.

Safety and integrity

We enable trust & safety teams to identify harmful content and accounts days earlier, and we provide data-driven context as an extra layer of content-agnostic evidence for policy enforcement.

In real-world environments, our technology helped content evaluation teams focus on actionable leads and reduced the total number of leads for review by 67%.

Recommendation and ranking systems

Recommendation systems receive early warnings about content associated with irregular on- or off-platform behavior, preventing them from amplifying harmful accounts or synthetically boosted content in time.

Read our CMU tech demo paper.

Ahead of harmful content

Relevant for policy enforcement & recommendation systems; Platform: YouTube.

Problem

Detecting harmful content, such as previously unknown conspiracy theories and new platform-abuse tactics, is challenging for AI-enabled classifiers and recommendation systems.

Solution

We deployed our alert system on YouTube data and detected irregular cross-platform behavior.

Results

Our systems identified harmful content on average 83 days before it was taken down. 30% of those alerts were related to QAnon or other conspiracy theories. Overall, 42.6% of the alerted videos triggered account suspensions, one of the most severe enforcement actions a platform can take.

Reducing the noise

Relevant for content moderation and policy enforcement; Platform: Twitter.

Problem

Content evaluators are flooded with leads and flagged content. Reviewing every lead is time-consuming, costly, and error-prone.

Solution

Our alert system was tuned to the team’s needs, both the abuse factors of interest and the target volume of leads per month. Over four weeks, our systems processed 80 million off-platform signals related to YouTube content in real time.

Results

Our technology provided actionable leads. The team used our alerts to prioritize leads and focus on evolving threats while receiving on average 48.2% fewer leads per week. Of all alerted videos that later became unavailable, 82% were taken down due to account suspension, one of the most severe enforcement actions a platform can take.

Identifying synthetic engagement

Relevant for recommenders; Platforms: YouTube and Twitter.

Problem

Social media platforms are interconnected: traffic on one platform can drive traffic on another. This can distort the signals recommenders rely on.

Solution

We deployed our technology in a cross-platform setting over three weeks to identify, in real time, videos associated with synthetic engagement and untrustworthy accounts.

Results

Our systems identified 286 Twitter accounts that generated synthetic YouTube traffic of 36,720 referrals, boosting 9,605 videos across 1,939 YouTube channels. Synthetic engagement and untrustworthy accounts related to YouTube videos accounted for more than half (57%) of all YouTube traffic on Twitter during that period.