Google is testing a new artificial intelligence (AI)-powered scam detection feature in its Chrome Canary browser, aiming to enhance user protection against fraudulent websites. The feature, dubbed “Client Side Detection Brand and Intent for Scam Detection,” leverages a Large Language Model (LLM) to analyze web pages in real time.
Spotted by a user named Leo on the platform X, the feature appears to evaluate the brand and intent of web pages to identify potential scams. According to the feature description in Chrome Canary, it uses on-device LLM analysis to detect suspicious content. The feature is compatible with macOS, Windows, and Linux.
While the exact mechanics of the tool remain unclear, it is designed to warn users when they land on scam websites. For instance, if a user navigates to a fake tech support page falsely claiming their device is infected, Chrome’s AI could pick up telltale signs of fraud, such as manufactured urgency or a dubious domain name, and alert the user before they hand over personal information.
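To make the idea concrete, here is a minimal illustrative sketch, in Python, of the kinds of brand-and-intent signals a client-side check might weigh, such as urgency language, brand impersonation, and dubious domain names. This is not Google’s implementation, which reportedly relies on an on-device LLM rather than hand-written rules; the phrase lists, brand names, and scoring weights below are invented purely for illustration.

```python
import re
from urllib.parse import urlparse

# Illustrative only: hand-picked signals standing in for what an on-device
# model might learn. The phrases, brands, weights, and threshold are assumptions.
URGENCY_PHRASES = [
    "your device is infected",
    "call support immediately",
    "your account will be suspended",
    "act now",
]
WELL_KNOWN_BRANDS = ["google", "microsoft", "apple", "paypal"]

def scam_score(url: str, page_text: str) -> float:
    """Return a rough 0-1 score from simple urgency and domain heuristics."""
    text = page_text.lower()
    host = urlparse(url).hostname or ""
    score = 0.0

    # Urgency language: fake tech-support pages lean heavily on it.
    score += 0.25 * sum(phrase in text for phrase in URGENCY_PHRASES)

    # A well-known brand named in the page but absent from the domain
    # suggests impersonation.
    for brand in WELL_KNOWN_BRANDS:
        if brand in text and brand not in host:
            score += 0.3
            break

    # Dubious domains: long hyphenated hosts or raw IP addresses.
    if host.count("-") >= 3 or re.fullmatch(r"[\d.]+", host):
        score += 0.2

    return min(score, 1.0)

if __name__ == "__main__":
    print(scam_score(
        "http://secure-login-support-helpdesk.example",
        "Warning: your device is infected. Call support immediately.",
    ))
```

An LLM-based classifier would presumably infer a page’s claimed brand and intent directly from its content rather than depend on fixed keyword lists like these.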
This new functionality builds on Chrome’s existing Enhanced Protection feature, part of Safe Browsing, which was updated earlier this year to incorporate AI-powered tools. Previously described in settings as offering “proactive protection,” Enhanced Protection now uses real-time AI analysis to shield users from harmful sites, downloads, and extensions. Google likely relies on pre-trained models to interpret web content, which would make scam tactics easier to spot.
The AI-powered scam detection tool is currently in testing, and Google has not announced when it will be rolled out to the stable version of Chrome. However, this development reflects the company’s ongoing commitment to improving online safety by integrating advanced AI capabilities into its browser.
For now, users who want to try the feature can do so in Chrome Canary, Google’s experimental browser build for testing new functionality, where the flag is listed on the chrome://flags page.
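Flags like this can also be forced on at launch using Chromium’s --enable-features command-line switch. The sketch below launches Chrome Canary on macOS that way; the binary path assumes a default install, and the internal feature name ClientSideDetectionBrandAndIntentForScamDetection is an assumption inferred from the flag’s display name, so it may not match the actual Chromium identifier.

```python
import subprocess

# Assumed default install path for Chrome Canary on macOS; adjust for other
# platforms. The feature name is inferred from the flag's display name and
# may differ from the identifier used in the Chromium source.
CANARY = "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary"
FEATURE = "ClientSideDetectionBrandAndIntentForScamDetection"

# Launch Canary with the experimental feature forced on.
subprocess.Popen([CANARY, f"--enable-features={FEATURE}"])
```

Toggling the flag from the chrome://flags page and relaunching the browser should achieve the same result.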