Real-time AI-powered toxicity detection and content moderation for digital safety.
Perspective is a machine learning tool developed by Jigsaw (a unit within Google) together with Google's Counter Abuse Technology team. Built on deep learning architectures, originally Convolutional Neural Networks (CNNs) and more recently Transformer-based models, Perspective scores a comment across a suite of attributes that estimate its perceived impact on a conversation. As of 2026 it remains core infrastructure for the Trust & Safety sector, offering a free, high-speed alternative to expensive proprietary LLM-based moderation. The API supports real-time interaction, so platforms can warn users about potential community-guideline violations before they post. Its architecture is designed for low-latency inference and processes millions of comments daily across global newsrooms and social platforms. Perspective supports more than 18 languages and returns granular scores for toxicity, threats, insults, and identity attacks. As a leader in the 'public-good' AI space, it serves as a benchmark for algorithmic transparency and bias mitigation in automated moderation, and a go-to option for developers who need high-performance filtering without enterprise-level pricing.
Simultaneously evaluates text across multiple dimensions, including Identity Attack, Insult, Profanity, Threat, and Sexually Explicit levels.
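In the documented REST interface, that multi-attribute evaluation is a single `comments:analyze` call that lists each attribute under `requestedAttributes`. A minimal sketch of the request body; the helper function itself is illustrative, while the field and attribute names follow the public API reference:

```python
def build_analyze_request(text, attributes=("TOXICITY", "IDENTITY_ATTACK",
                                            "INSULT", "PROFANITY", "THREAT",
                                            "SEXUALLY_EXPLICIT")):
    """Build an AnalyzeComment request body asking for several attributes at once."""
    return {
        "comment": {"text": text},
        # Each requested attribute maps to an (optionally empty) config object.
        "requestedAttributes": {name: {} for name in attributes},
    }
```

The body is POSTed to the `comments:analyze` endpoint with an API key; the response carries one summary score per requested attribute.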
Native support for 18+ languages (including Spanish, French, German, and Chinese) with cross-lingual model normalization.
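For non-English text, the request body lets callers pin the language explicitly rather than rely on auto-detection. A sketch of a Spanish-language request, with field names per the public API reference:

```python
# Minimal request body scoring a comment as Spanish rather than auto-detecting.
spanish_request = {
    "comment": {"text": "ejemplo de comentario"},
    "languages": ["es"],  # explicit language hint; omit to let the API detect
    "requestedAttributes": {"TOXICITY": {}},
}
```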
Designed for sub-200ms latency to enable 'nudge' interfaces where users see toxicity scores while typing.
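One way a 'nudge' interface stays inside that latency budget is to debounce scoring so the API fires only when the user pauses typing, not on every keystroke. A minimal sketch; the helper name and the 300 ms window are illustrative choices, not part of the API:

```python
def should_score(last_keystroke_ms, now_ms, debounce_ms=300):
    """Return True once the user has paused typing long enough that a
    sub-200ms scoring round trip can complete before the next edit."""
    return (now_ms - last_keystroke_ms) >= debounce_ms
```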
An endpoint that allows developers to send human-labeled data back to Jigsaw to improve model accuracy for specific community contexts.
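That feedback loop goes through the `comments:suggestscore` endpoint, whose body mirrors the analyze response: the human-assigned score sits under `attributeScores`. A sketch following the shape in the public API reference; the helper function itself is illustrative:

```python
def build_suggest_score_request(text, attribute, human_score, community_id=None):
    """Build a SuggestCommentScore body sending a human label back to Jigsaw."""
    body = {
        "comment": {"text": text},
        "attributeScores": {
            attribute: {"summaryScore": {"value": human_score}},
        },
    }
    if community_id is not None:
        # Optional: ties the label to a specific community context.
        body["communityId"] = community_id
    return body
```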
A privacy flag that prevents Google from storing the text of the comment after the score is generated.
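The privacy flag travels in the same request body; per the public API reference the field is `doNotStore`. A sketch with it set:

```python
# Toxicity request asking the API not to retain the comment text after scoring.
private_request = {
    "comment": {"text": "example comment"},
    "requestedAttributes": {"TOXICITY": {}},
    "doNotStore": True,  # privacy flag: score, then discard the text
}
```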
Ability to adjust sensitivity thresholds per attribute to better suit specific community guidelines.
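Threshold tuning happens on the platform side: the API returns raw probabilities, and each community decides per attribute what crosses the line. A sketch; the threshold values here are purely illustrative defaults:

```python
# Example per-attribute thresholds a community might configure (illustrative).
DEFAULT_THRESHOLDS = {"TOXICITY": 0.8, "THREAT": 0.5, "INSULT": 0.85}

def flagged_attributes(scores, thresholds=DEFAULT_THRESHOLDS):
    """Return the attributes whose summary score meets or exceeds the
    community's configured threshold; unknown attributes are never flagged."""
    return [attr for attr, value in scores.items()
            if value >= thresholds.get(attr, 1.01)]
```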
Access to beta models that detect nuance, such as 'Likely to Reject', which flags content that editors would likely remove.
Comment sections on major news sites are often overwhelmed by toxicity, requiring massive human moderation teams.
Registry Updated: 2/7/2026
Automatically publish comments with toxicity scores below 0.5.
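That auto-publish rule can be sketched as a routing function. The 0.5 publish cutoff comes from the rule above; the 0.9 auto-reject cutoff and the human-review middle band are assumptions added for illustration:

```python
def route_comment(toxicity_score, publish_below=0.5, reject_above=0.9):
    """Route a comment by its toxicity score: publish low scores immediately,
    reject very high scores (assumed cutoff), and queue the rest for review."""
    if toxicity_score < publish_below:
        return "publish"
    if toxicity_score >= reject_above:
        return "reject"
    return "human_review"
```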
Toxic behavior in multiplayer games leads to player churn.
Users often regret toxic comments sent in the heat of the moment.