Google Confirms Major Gmail AI Security Update For 3 Billion Users

Google’s Cloud Next 2024 has drawn to a close, but the news stories keep coming. One that hasn’t made many headlines, however, could well turn out to be the most important, at least from a user security perspective: the use of AI large language models to protect Gmail users from harm.

The main problem being addressed is that generative AI has become so good, so quickly, that it has “dramatically lowered the barrier to attacks,” according to Google, which admitted this has led to “a spike in higher quality phishing at scale.” As you might imagine, access to Gmail and Drive accounts is high on attackers’ agendas, given the goldmine of readily actionable data those accounts contain.

Google Announces AI-Powered Gmail Security Evolution

The solution, Google said, was conceptually simple albeit technically challenging: “We built custom LLMs to help fight back.” First deployed in late 2023, these LLMs are now “yielding big results,” Google said.

These custom LLMs are trained “on a diet of the latest, most terrible spam and phishing” content because what LLMs are uniquely good at is identifying semantically similar content. Given the large Google Workspace user base of 3 billion, “the results are very impactful—and the LLMs will only get better at this as we go,” a Google spokesperson said.

  • 20% more spam is blocked in Gmail using LLMs
  • 1,000% more user-reported Gmail spam is reviewed each day
  • 90% faster response time dealing with new spam and phishing attacks in Drive
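Google has not published implementation details for these custom LLMs, but the core idea it describes, flagging messages that are semantically similar to known spam and phishing examples, can be shown with a deliberately simplified sketch. Everything below is hypothetical: production systems use learned LLM embeddings and far larger corpora, not the toy bag-of-words vectors used here.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a lowercase bag-of-words count vector.
    Real systems use learned LLM embeddings instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus of known-bad messages (illustrative only).
KNOWN_PHISHING = [
    "urgent verify your account password now or it will be suspended",
    "you have won a prize click this link to claim your reward",
]

def looks_like_phishing(message, threshold=0.5):
    """Flag a message if it is semantically close to any known-bad example."""
    m = embed(message)
    return any(cosine(m, embed(p)) >= threshold for p in KNOWN_PHISHING)
```

The appeal of the similarity approach, as Google notes, is that it catches rewordings of known attacks, not just exact matches, and improves automatically as the corpus of “the latest, most terrible spam” grows.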

The Positive Side Of The AI Security Fence

Although there has been plenty of talk in recent months about the Google Gemini LLM, not all of it has been praise; quite the opposite, in fact. My colleague Zak Doffman, a highly respected privacy contributor at Forbes, recently warned of concerns regarding Google’s AI-powered message helpers. While Doffman’s concerns come from the right place, real-world knowledge of AI privacy implications, many commentators have simply jumped on the “AI is evil” bandwagon. It’s comforting, therefore, to be able to report on generative AI LLMs from the positive side of the security fence.

As well as detecting twice as much malware as standard third-party antivirus and security products, according to Google, these AI-powered defenses stop 99.9% of spam. Impressive as that number is, a Google spokesperson told me that “inside Google Workspace, we’re very focused on innovating to tackle that last 0.1%.”

10 Million Paying Customers To Be Offered New AI Security Tooling

In addition to these built-in Gmail and Drive security advances for the more than 3 billion Google Workspace users and 10 million paying customers, Google has also announced an optional new AI security add-on. To address a common Workspace customer request, the protection of confidential information in files, Google has built a tool that automatically classifies and protects such sensitive data.

“Protecting obvious confidential information is straightforward,” Google said, “but safeguarding unexpectedly sensitive data is very hard.”

The reason so many customers request help is that many of them perform the same task manually right now. The new AI tooling will “find these hidden pockets of sensitive data, and make recommendations for added protections, which can automatically be implemented with a few simple clicks,” Google added.
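Google hasn’t described how the add-on’s classifier works. As a purely hypothetical illustration of the “obvious confidential information is straightforward” half of the problem, a rule-based scanner might look like the sketch below; the “unexpectedly sensitive” data Google mentions is precisely what requires a learned model rather than fixed patterns like these.

```python
import re

# Hypothetical patterns for obviously sensitive data (illustrative only).
# Context-dependent, "unexpectedly sensitive" content needs a learned
# classifier; rules like these only cover the easy cases.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_sensitive(text):
    """Return the set of sensitive-data labels found in a document."""
    return {label for label, pat in PATTERNS.items() if pat.search(text)}
```

In a tool like the one Google describes, each detected label would then drive a recommended protection (restricted sharing, a confidentiality label) that an admin could apply with a few clicks.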

When it comes to pricing, I am told that the tooling can be fine-tuned for the needs of every customer at a cost of $10 per user per month and can be added to most Workspace plans.
