Twincler's abuse intelligence system has been proven in real-world industry and government settings, protecting more than 3 billion users.
The system detected irregular behavior on user-generated content platforms, including synthetic engagement (later known as coordinated inauthentic behavior, or CIB), disinformation, influence operations, previously unknown conspiracy theories, copyright infringement, spam, scams and deceptive practices, impersonation, harassment and bullying, harmful content, hate speech, violent content, and cross-platform duplicates.
Twincler's key capability was the detection of previously unknown abuse types, giving our customers a crucial lead of a day, a week, a month, or sometimes even several months to stay ahead.
Fully automated detection in real time
No human analyst is needed to review flagged content.
Across regions and languages
Our technology was data-driven and content-agnostic, allowing our customers to use Twincler across languages and regions.
Ahead of novel abuse types
Twincler identified any kind of irregular behavior on user-generated content platforms, including previously unknown and novel abuse types.
Twincler provided data evidence and context in real time, such as the associated narrative, account attribution, off-platform amplification, and past activity and ownership. This enabled human analysts as well as recommendation and ranking systems to process the alerts easily.
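For illustration only, an alert carrying the kind of context described above could be modeled as a simple record. All field names here are hypothetical assumptions, not Twincler's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical alert payload; field names are illustrative,
# not Twincler's actual schema.
@dataclass
class AbuseAlert:
    content_id: str                         # flagged item, e.g. a video ID
    abuse_type: str                         # e.g. "synthetic_engagement"
    narrative: str                          # associated narrative summary
    account_attribution: List[str]          # accounts linked to the activity
    off_platform_amplification: List[str]   # referring posts on other platforms
    past_activity: List[str] = field(default_factory=list)  # prior incidents

alert = AbuseAlert(
    content_id="video-123",
    abuse_type="synthetic_engagement",
    narrative="coordinated boosting of a single channel",
    account_attribution=["acct-1", "acct-2"],
    off_platform_amplification=["tweet-9"],
)
```

Bundling the evidence with the alert is what lets both humans and ranking systems act on it without a separate lookup step.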
Industry
Engineering teams / recommendation and ranking systems
Counter-abuse teams
Ad safety teams
Identity teams
Trust & safety teams
Public sector
Situational awareness teams
Election integrity teams
Counter-terrorism teams
Relevant for
Policy enforcement, recommendation systems
Platform
YouTube
Problem
Detecting harmful content, such as previously unknown conspiracy theories and new tactics of platform abuse, is challenging for AI-enabled classifiers and recommendation systems.
Solution
We deployed our alert system on YouTube streaming data and detected irregular cross-platform behavior.
Results
Our systems identified harmful content on average 83 days before it was taken down. 30% of those alerts were related to QAnon or other conspiracy theories. Overall, 42.6% of the alerted videos triggered account suspensions, one of the most severe enforcement actions a platform can take.
83 days earlier detection on average
30% of flagged content related to conspiracy theories
42% of alerted videos triggered account suspensions
Relevant for
Content moderation, policy enforcement
Platform
Twitter (X), YouTube
Problem
Content evaluators are flooded with leads and flagged content. Reviewing all leads is time-consuming, cost-intensive, and error-prone.
Solution
Twincler was tuned to the team's needs regarding relevant abuse factors and their monthly lead volume. Over 4 weeks, our systems processed 80 million off-platform signals related to YouTube content in real time.
Results
Our technology provided actionable leads. The team used our alerts to prioritize leads and focus on evolving threats while receiving 48.2% fewer leads per week on average. Of all alerted videos that became unavailable, 82% were taken down due to account suspension, one of the most severe enforcement actions a platform can take.
48% fewer leads to review
80M off-platform signals processed
82% of takedowns due to account suspensions
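The prioritization step described above can be sketched as a simple filter over scored leads. This is a minimal illustration, assuming a hypothetical risk score per lead and an arbitrary threshold; it is not Twincler's actual ranking logic:

```python
# Hypothetical lead prioritization: keep only leads whose risk score
# clears a threshold, reducing the volume analysts must review.
def prioritize(leads, threshold=0.7):
    """Return leads scoring at or above threshold, highest score first."""
    kept = [lead for lead in leads if lead["score"] >= threshold]
    return sorted(kept, key=lambda lead: lead["score"], reverse=True)

leads = [
    {"id": "v1", "score": 0.95},
    {"id": "v2", "score": 0.40},
    {"id": "v3", "score": 0.81},
]
print(prioritize(leads))  # v1 and v3 survive; v2 is filtered out
```

Thresholding against a score is one plausible way a team could trade review volume against coverage, mirroring the ~48% weekly lead reduction reported above.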
Relevant for
Recommendation systems
Platform
YouTube, Twitter (X)
Problem
Social media platforms are interconnected: traffic on one platform can drive traffic on another, which may distort the signals that recommenders rely on.
Solution
We deployed our technology in a cross-platform setting over 3 weeks to identify, in real time, videos related to synthetic engagement and untrustworthy accounts.
Results
Our systems identified 286 Twitter accounts that generated synthetic YouTube traffic of 36,720 referrals, boosting 9,605 video IDs across 1,939 YouTube channels. Synthetic engagement and untrustworthy accounts related to YouTube videos accounted for more than half (57%) of the total YouTube traffic on Twitter during that period.
64K tweets processed
286 adversarial accounts identified
9K synthetically boosted videos identified