According to a report Monday (Feb. 16) by cybersecurity publication Dark Reading, these extensions claim to offer the capabilities of an artificial intelligence (AI) assistant while secretly stealing users’ personal information.
The report cites research from security firm LayerX, which found 30 Google Chrome extensions that are carbon copies of each other aside from superficial branding differences, many with tens of thousands of downloads each.
While these extensions purport to act as AI assistants, they are in fact designed to steal email content, browser content and anything else the user willingly offers them.
“While we’ve seen [similar tactics] used by malicious extensions in the past, what is new and concerning is how it’s being applied,” says LayerX security researcher Natalie Zargarov. “Instead of spoofing banks or email logins, attackers are now impersonating artificial intelligence interfaces and developer tools, places where users are conditioned to paste application programming interface (API) keys, tokens and sensitive data without hesitation.”
The research identified 30 tools with names like “Gemini AI Sidebar” and “ChatGPT Translate,” along with more generic monikers like “AI Assistant,” which have collectively amassed more than 260,000 downloads.
PYMNTS has contacted Google for comment but has not yet received a reply.
The findings come at a time when—as PYMNTS wrote last month—AI is “pushing intervention earlier in the attack cycle by identifying coordinated behavior and emerging risk signals before fraud scales.”
As PYMNTS has reported, companies are ramping up their use of AI to guard against suspicious activity, even as they contend with growing risk from shadow AI: third-party agents and apps that could expose them to cyber threats.
Research from PYMNTS Intelligence has found a gap between companies’ confidence in their defenses against AI-powered fraud and how prevalent that fraud actually is.
While almost all companies surveyed said they were confident in their protections, nearly 59% reported struggling with bot-driven fraud. The gap is especially pronounced in the financial services sector, where 60.6% of companies have seen bot traffic rise in the past year, according to the PYMNTS Intelligence report “The Hidden Costs of ‘Good Enough’: Identity Verification in the Age of Bots and Agents.”
“Many assume their fraud controls are mature because they’ve passed compliance audits or updated authentication steps,” PYMNTS wrote. “But the report’s findings show that fraudsters’ use of artificial intelligence—from deepfakes to credential stuffing—is evolving faster than those defenses. What looks compliant may, in practice, be porous.”