Google just deleted Chrome’s privacy promise — and security experts say the timing is suspicious


Google has quietly removed a longstanding privacy assurance from Chrome’s AI feature documentation, prompting security researchers to question whether the company’s on-device processing claims still hold water.

The deletion matters because Chrome’s AI tools—which include features like writing assistance and image analysis—have been marketed as processing data locally on your device rather than sending it to Google’s servers. That promise was a cornerstone of Google’s pitch that these features protect user privacy. Now that the explicit assurance has vanished from official documentation, experts are asking what changed and why Google didn’t announce it publicly.

Key Findings:
  • The Silent Deletion: Google removed explicit on-device processing guarantees from Chrome AI documentation without public announcement or explanation.
  • The Trust Gap: Chrome’s AI features analyze text, images, and browsing behavior that could become valuable training data if processing shifts to Google’s servers.
  • The Regulatory Risk: The change occurs during active FTC scrutiny of Google’s data practices, potentially constituting a material change requiring user notification.

Research on edge AI processing consistently finds that keeping computation on the device improves privacy and security relative to cloud-based alternatives, because sensitive inputs never have to leave the user's hardware. The timing of Google's documentation change compounds concerns: the company faces ongoing regulatory pressure from the Federal Trade Commission and international privacy authorities over how it collects, stores, and uses user data.

Removing a privacy promise—even while claiming the underlying practice hasn’t changed—creates a credibility gap that security researchers say is hard to ignore. If on-device processing truly remains the standard, experts question why Google would remove the explicit guarantee rather than clarify or strengthen it. This pattern mirrors broader concerns about data collection practices where companies maintain surveillance capabilities while adjusting their public commitments.

What Data Could Google Access if Processing Moves to Servers?

The practical stakes are concrete. Chrome’s AI features can analyze text you write, images you upload, and content you interact with. If that processing moved from your device to Google’s servers, the company would gain additional visibility into your browsing behavior, creative work, and personal communications. Even if Google doesn’t deliberately misuse that data, it becomes a new asset in the company’s data infrastructure—valuable for training AI models, refining ad targeting, or responding to government requests.

The Data at Stake:
• Text analysis from writing assistance features
• Image content from uploaded files and screenshots
• Browsing patterns and interaction data
• Usage analytics sent to Google regardless of processing location
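The difference between the two architectures at stake can be made concrete with a minimal sketch. Everything here is a hypothetical stand-in, not Chrome's actual implementation: the point is only that in the server-side path, the provider necessarily ends up holding a copy of the raw input.

```python
from dataclasses import dataclass, field


@dataclass
class LocalModel:
    """Stand-in for an on-device model: input stays in process memory."""

    def run(self, text: str) -> str:
        return text[:20]  # trivial placeholder "summary"


@dataclass
class CloudAPI:
    """Stand-in for a server endpoint: every request body is visible server-side."""

    server_log: list = field(default_factory=list)

    def post(self, path: str, body: str) -> str:
        self.server_log.append(body)  # the provider now holds a copy of the data
        return body[:20]


def summarize_on_device(text: str, model: LocalModel) -> str:
    # The raw text never leaves the caller's machine.
    return model.run(text)


def summarize_via_cloud(text: str, api: CloudAPI) -> str:
    # The raw text is transmitted to the provider, where it can be logged,
    # retained, or reused (e.g., for model training) subject only to policy.
    return api.post("/summarize", body=text)
```

In the on-device path, nothing about the input exists outside the user's process; in the cloud path, deletion and reuse are governed by the provider's promises alone, which is exactly why removing an explicit guarantee matters.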

Google’s statement to The Register reaffirms that processing stays on-device, but the company did not explain why the privacy language was removed or when the change occurred. That silence is precisely what has triggered scrutiny from security researchers who monitor how tech companies communicate about privacy.

Why Does Documentation Matter When Users Can’t Verify Claims?

The incident reflects a broader tension in how AI features are deployed in consumer products. Companies like Google, Apple, and Microsoft have all promoted on-device AI processing as a privacy-protective alternative to cloud-based analysis. Research on on-device AI supports the underlying claim: running models directly on the device reduces how much user data has to be transmitted off it in the first place.

But those claims are difficult for ordinary users to verify. You cannot easily inspect Chrome to confirm where processing happens. You cannot audit the data flows. You depend on the company’s word and its documentation. When that documentation changes without notice, users lose even that thin layer of assurance.
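Partial verification is possible for technically inclined users: an intercepting proxy (mitmproxy, for example) can at least show whether Chrome sends request bodies toward Google hosts while an AI feature runs, even though it cannot prove where inference actually happens. A rough host filter for that kind of review might look like the sketch below; the suffix list is an illustrative assumption, not a vetted inventory of Google endpoints.

```python
# Hypothetical helper for flagging proxy-captured traffic for manual review.
# The suffixes below are illustrative; real Google traffic spans many domains.
GOOGLE_SUFFIXES = (".google.com", ".googleapis.com", ".gstatic.com")


def is_google_host(host: str) -> bool:
    """Return True if a hostname belongs to one of the watched Google domains."""
    host = host.lower().rstrip(".")
    bare_domains = {s.lstrip(".") for s in GOOGLE_SUFFIXES}
    return host in bare_domains or host.endswith(GOOGLE_SUFFIXES)
```

Even a positive match only shows that *some* traffic occurred; distinguishing telemetry from raw user content requires inspecting the payloads themselves, which TLS pinning and opaque encodings can make impractical. That asymmetry is why documentation carries so much weight.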

Security researchers say the deletion also creates a legal and regulatory question: if Google marketed on-device processing as a privacy feature and users enabled Chrome’s AI tools based on that promise, does removing the guarantee constitute a material change that should trigger notification or consent?

Could This Trigger FTC Action Against Google?

The FTC has previously taken action against companies for making privacy claims they couldn't substantiate. In 2023, the agency reached a settlement with Amazon over its handling and retention of Alexa voice recordings. Google itself has faced FTC scrutiny over privacy practices in Gmail, location tracking, and other services.

Regulatory Context:
• FTC has authority to investigate material changes to privacy practices
• Previous settlements establish precedent for misleading AI privacy claims
• Documentation changes during active regulatory scrutiny increase enforcement risk

A deleted privacy promise during an active period of regulatory attention could invite questions about whether Google is backing away from commitments it made to users and regulators alike. Research on federated learning demonstrates that keeping data on local devices provides measurable privacy benefits, making Google’s decision to remove explicit guarantees particularly noteworthy.

What Should Chrome Users Do Now?

For Chrome users, the practical question is whether to trust Google’s reassurance that on-device processing continues despite the removed language. Some security researchers suggest treating the deletion as a signal to disable Chrome’s AI features until Google provides clearer documentation and explanation. Others note that even if processing does remain local, the features still send usage data and feature interactions back to Google for analytics and improvement purposes.

Users concerned about the documentation change can review their Google ad tracking settings and consider disabling Chrome’s AI features entirely until the company provides clearer transparency about its data handling practices.

Google has not announced a timeline for restoring the privacy language or providing additional transparency about why it was removed. The company has also not clarified whether other privacy promises in its product documentation are subject to similar revision without public notice.

Until Google addresses these questions directly, the deletion will likely fuel ongoing debate about whether tech companies’ privacy commitments are binding promises or provisional marketing language subject to quiet revision.
