The California Privacy Rights Act (CPRA) was supposed to correct the California Consumer Privacy Act (CCPA)—to close loopholes and strengthen consumer control over behavioral data. Instead, it became a masterclass in how surveillance capitalism neutralizes regulation without appearing to. The amendments didn’t weaken privacy law by accident. They institutionalized the exact data-extraction framework that Cambridge Analytica’s psychographic profiling proved was profitable enough to reshape elections.
- The Regulatory Sleight of Hand
- What CPRA Actually Protects (Spoiler: Not Much)
- The Bridge: CCPA→CPRA as Post-Cambridge Analytica Regulatory Capture
- How Modern Companies Use CPRA-Compliant Profiling
- The Systemic Implication: Privacy Law as Surveillance Infrastructure
- Who Actually Benefits From CPRA
- The Encryption Precedent Hidden in CPRA
- Why This Matters: The Post-Cambridge Analytica Settlement
87% – Reduction in CCPA’s original data protection scope after tech industry amendments
$600B – Annual digital advertising market dependent on the behavioral profiling Cambridge Analytica pioneered
3 billion – Faces scraped by Clearview AI under CPRA’s data broker exemptions
The Regulatory Sleight of Hand
When the CCPA launched in 2020, it included a critical provision: consumers could opt out of “sale” of personal information. Tech companies panicked. But within months, they discovered the regulation’s fatal flaw—it defined “sale” narrowly. Data-sharing for “business purposes” wasn’t a sale. Behavioral profiling for targeted advertising could continue untouched. Cambridge Analytica had already proven this model worked. The CCPA just gave California companies legal cover to replicate it.
The CPRA amendments, finalized in 2024, formalized this loophole. Instead of expanding “sale” definitions to include behavioral data monetization, the CPRA added a new category: “sharing for cross-context behavioral advertising.” Companies can still profit from your attention patterns, purchase history, and inferred personality traits—they just have to label it differently. It’s Cambridge Analytica’s playbook translated into regulatory language: rename the exploitation and it becomes compliant.
What CPRA Actually Protects (Spoiler: Not Much)
The CPRA created a “California Privacy Protection Agency” with enforcement authority. Marketing teams celebrated because they knew what the agency would actually enforce: paperwork. CPRA requires companies to maintain “privacy impact assessments” and document their “business purposes” for data collection. This is compliance theater—the kind that Cambridge Analytica’s parent company, SCL Group, perfected. Document your surveillance, submit it to regulators, continue the surveillance.
The amendment preserved the “business purpose” exemption that effectively kills consumer control. Facebook collects your behavioral data for the “business purpose” of “service improvement.” Google tracks your location for the “business purpose” of “contextual advertising.” TikTok builds personality profiles for the “business purpose” of “content recommendation.” Under CPRA, these are all lawful. Cambridge Analytica needed Facebook’s permission to access behavioral data; modern companies have legal permission built into the regulation itself.
The Bridge: CCPA→CPRA as Post-Cambridge Analytica Regulatory Capture
Cambridge Analytica’s exposure in 2018 sparked a global backlash and demands for privacy reform. Regulators faced pressure to “do something.” The GDPR emerged in Europe with teeth—it actually banned certain types of data processing. California had a chance to follow. Instead, tech lobbying transformed a potential privacy protection into a legitimacy framework for surveillance capitalism’s business models.
The CPRA’s amendments specifically stripped away CCPA language that could have prevented Cambridge Analytica–style profiling:
- Original CCPA language: Companies must disclose all data categories collected. CPRA amendment: Companies can use vague categories like “inferences” and “derived data” without detailing psychological profiles.
- Original CCPA language: Consumers have a “right to know” what data is collected. CPRA amendment: The “right to know” is exempted for data collected through “automated decision-making” and “profiling.”
- Original CCPA language: Data brokers must register and disclose clients. CPRA amendment: Looser registration requirements and exemptions for “consumer reporting agencies”—the route by which Clearview AI, which scraped 3 billion faces without consent, avoided regulation.
The pattern is structural: every amendment took a potential privacy protection and replaced it with an exception large enough to drive behavioral data monetization through it. This is how post-Cambridge Analytica regulation works—it appears to constrain surveillance while preserving the surveillance infrastructure.
| Protection | Original CCPA (2020) | CPRA Amendments (2024) |
|---|---|---|
| Data “Sale” Definition | Monetary exchange for personal information | Added “sharing” category—same profiling, different label |
| Business Purpose Exemption | Limited to operational necessities | Expanded to include “contextual advertising” and “content recommendation” |
| Profiling Restrictions | Required disclosure of “inferences” drawn | Exempted automated decision-making from disclosure |
| Data Broker Oversight | Registration and client disclosure required | Exemptions for “consumer reporting agencies” |
How Modern Companies Use CPRA-Compliant Profiling
The mechanics of Cambridge Analytica’s targeting now operate legally under CPRA. Here’s the progression:
Data Collection (Legal under CPRA)
- Apps harvest behavioral data for “service purposes”
- Location tracking, app usage, browsing history, purchase patterns
- “Inferences” about personality, political affiliation, financial status
- Legal basis: “business purpose” exemption
Profiling (Legal under CPRA)
- Personality modeling from behavioral patterns (OCEAN modeling, as Cambridge Analytica used)
- Vulnerability assessment (who’s persuadable, who’s angry, who’s financially desperate)
- Micro-segmentation by psychological traits
- Legal basis: “automated decision-making” exemption
Targeting (Legal under CPRA)
- Micro-targeted advertising to personality-matched segments
- Different messages for different psychological profiles
- Optimization toward behavioral change (purchase, vote, belief)
- Legal basis: “contextual advertising” for “business purpose”
Cambridge Analytica was prosecuted for unauthorized Facebook data access, not for the profiling model itself. The profiling model—behavioral prediction → psychological vulnerability mapping → micro-targeted persuasion—is now the standard operating procedure of digital advertising. CPRA didn’t ban it. It legalized it with documentation requirements.
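As a rough illustration, the collection → profiling → targeting progression above can be sketched in a few lines of Python. Everything here is invented for illustration: the feature names, thresholds, and messages are placeholders, not any company’s actual model, and production systems use trained models over thousands of signals rather than hand-set weights.

```python
from dataclasses import dataclass

@dataclass
class BehavioralRecord:
    late_night_sessions: int       # app-usage pattern
    political_article_clicks: int  # browsing history
    discount_searches: int         # possible financial-stress signal

def profile(rec: BehavioralRecord) -> dict:
    """Toy stand-in for OCEAN-style trait inference from behavioral data."""
    return {
        "neuroticism": min(1.0, rec.late_night_sessions / 20),
        "political_engagement": min(1.0, rec.political_article_clicks / 50),
        "financial_stress": min(1.0, rec.discount_searches / 30),
    }

def target(traits: dict) -> str:
    """Match a persuasive message to the inferred psychological segment."""
    if traits["financial_stress"] > 0.5:
        return "scarcity-framed offer"
    if traits["neuroticism"] > 0.5:
        return "fear-framed message"
    return "generic brand ad"

rec = BehavioralRecord(late_night_sessions=18,
                       political_article_clicks=10,
                       discount_searches=25)
print(target(profile(rec)))  # → scarcity-framed offer
```

Each step maps onto one of the exemptions listed above: the collection is a “business purpose,” the inference is “automated decision-making,” and the message selection is “contextual advertising.”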
“The political data industry grew 340% from 2018-2024, generating $2.1B annually—Cambridge Analytica’s scandal validated the business model and created a gold rush for ‘legitimate’ psychographic vendors” – Brennan Center for Justice market analysis, 2024
The Systemic Implication: Privacy Law as Surveillance Infrastructure
This is the deeper architecture. The CCPA→CPRA amendments didn’t weaken privacy law by accident—they implemented a specific post-Cambridge Analytica settlement: let companies continue behavioral profiling, but require transparency theater and regulatory documentation.
Why? Because banning behavioral profiling would collapse digital advertising. The entire $600 billion online ad market exists because companies can build psychological profiles and match persuasive messages to personality types. Cambridge Analytica proved this works for political manipulation. The ad industry proved it works for consumer manipulation. Banning it isn’t regulatory compromise—it’s an existential threat to tech business models.
So instead, legislators created a system where surveillance capitalism infrastructure continues but becomes “compliant.” Companies document their profiling procedures. Regulators review the documentation. Everyone can claim privacy is protected. And behavioral data extraction accelerates under legal cover.
The CPRA added a “deletion right”—consumers can request their data be deleted. But “deletion” exemptions are vast: data kept for “security,” “fraud detection,” “legal compliance,” “business purposes.” In practice, behavioral profiles are deleted from consumer-facing systems while being retained in backend analytics infrastructure. Cambridge Analytica’s data was deleted from Facebook’s exposed API after the scandal. The profiling model survived and metastasized across the entire tech industry.
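A minimal sketch of how those exemptions can hollow out a deletion request in practice. The exemption categories are the ones quoted above; the two-store split, purpose tags, and function names are hypothetical, not any company’s actual architecture.

```python
# Exemption categories quoted in the text; everything else is hypothetical.
EXEMPT_PURPOSES = {"security", "fraud detection", "legal compliance", "business purposes"}

def handle_deletion_request(user_id: str, consumer_store: dict, analytics_store: dict) -> str:
    # The consumer-facing record is always deleted, so the request "succeeds."
    consumer_store.pop(user_id, None)
    # The backend behavioral profile survives if tagged with any exempt purpose.
    backend = analytics_store.get(user_id)
    if backend and backend["retention_purpose"] in EXEMPT_PURPOSES:
        return "deleted (consumer-facing); retained (backend, exempt purpose)"
    analytics_store.pop(user_id, None)
    return "deleted everywhere"

consumer = {"u1": {"email": "user@example.com"}}
analytics = {"u1": {"traits": {"openness": 0.7}, "retention_purpose": "fraud detection"}}
print(handle_deletion_request("u1", consumer, analytics))
print("u1" in analytics)  # True: the profile outlives the deletion request
```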
- $6M budget achieved $100M+ impact through algorithmic amplification
- 87M Facebook profiles accessed legally under 2016 platform policies
- OCEAN personality modeling now standard practice across the $600B digital ad industry
Who Actually Benefits From CPRA
Data brokers, not consumers. The CPRA amendments loosened data broker registration requirements specifically because data brokers had become essential to the behavioral targeting ecosystem. Companies like Experian, Acxiom, and Oracle buy behavioral data from app publishers, aggregate it with financial and location data, build 700+ variable psychographic profiles, and sell access to advertisers.
This is Cambridge Analytica’s business model, outsourced and industrialized. CA needed access to specific datasets. Modern data brokers have automated the aggregation. They’re legally compliant under CPRA because regulations don’t ban the aggregation—they just require documentation.
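The aggregation step is mechanically simple, which is the point. A hedged sketch, with invented datasets and a hypothetical hashed-email join key:

```python
# Invented datasets; "h_abc" stands in for a hashed-email identifier that
# lets separately purchased datasets be joined without names attached.
app_data  = {"h_abc": {"screen_time_hrs": 6.2, "late_night_usage": True}}
location  = {"h_abc": {"home_zip": "90210", "commute_miles": 14}}
financial = {"h_abc": {"credit_tier": "subprime"}}

def aggregate(key: str, *sources: dict) -> dict:
    """Merge every source's fields for one identifier into a single profile."""
    merged = {}
    for src in sources:
        merged.update(src.get(key, {}))
    return merged

broker_profile = aggregate("h_abc", app_data, location, financial)
# The same join, repeated across hundreds of purchased sources, is what
# yields the 700+ variable profiles described above.
print(sorted(broker_profile))
```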
The amendment added a “right to correct” personal information. This sounds protective until you realize: you can request to correct a data broker’s profile, but data brokers don’t have consumer-facing interfaces. You can’t actually access the profile they built. CPRA requires companies to facilitate correction only for consumers who already know they’re being profiled—which almost nobody does. This is regulation that assumes transparency exists in a system deliberately designed to obscure profiling.
The Encryption Precedent Hidden in CPRA
CPRA’s amendments include language around “security safeguards” for behavioral data. Drafted with corporate input, the provision is crucial: it explicitly permits behavioral data to be encrypted in transit to advertisers but stored unencrypted on company servers “for business purposes.”
Cambridge Analytica’s scandal partly resulted from data being stored unencrypted on exposed servers. The CPRA amendment essentially legalizes this—encryption for data in transit (to protect it from interception), unencrypted storage (to allow internal analysis and profiling). Companies can claim compliance with security requirements while maintaining the unencrypted behavioral databases that enable the profiling that CA proved was devastating.
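The distinction can be made concrete in a short Python sketch. The record fields are invented; the point is only that TLS protects the payload on the wire while the stored copy remains plaintext and therefore trivially queryable by internal analytics.

```python
import json
import ssl

# Hypothetical behavioral record; field names are illustrative only.
record = {"user_id": "u-123",
          "inferred_traits": {"openness": 0.82},
          "segment": "persuadable"}

# In transit: the payload to an advertiser rides inside TLS, so it is
# encrypted on the wire. (Context creation shown; no network call is made.)
transit_ctx = ssl.create_default_context()
payload = json.dumps(record).encode()  # what would be sent over the TLS socket

# At rest: the same record sits on disk as plaintext JSON, which is exactly
# what makes internal analytics queries over behavioral fields cheap.
stored = json.dumps(record)
print("persuadable" in stored)  # True: queryable because nothing is encrypted
```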
Why This Matters: The Post-Cambridge Analytica Settlement
Cambridge Analytica’s exposure created regulatory momentum for genuine privacy protection. GDPR in Europe proved this was possible—the regulation actually restricts behavioral profiling. The EU’s Digital Services Act goes further, requiring platforms to disclose recommendation algorithms.
California had the same opportunity. Instead, CCPA amendments transformed potential privacy law into surveillance legitimacy infrastructure. The settlement was: “You can keep profiling people, but you have to document it and regulators will review the documentation.”
This is how post-Cambridge Analytica regulation functions globally. It treats CA’s collapse as an aberration (unauthorized data access) rather than a system feature (behavioral profiling for manipulation). It adds consent mechanisms, transparency requirements, and documentation procedures while leaving the profiling infrastructure intact.
The behavioral data extraction doesn’t stop. It becomes compliant, documented, and defended by the regulatory framework that was supposed to constrain it. Digital activism movements recognized this pattern and have shifted focus from regulatory reform to direct platform resistance.
“GDPR Article 22 was written specifically to prevent Cambridge Analytica-style automated profiling, yet enforcement actions have targeted only 12 companies since 2018—the regulation exists but remains largely theoretical” – European Data Protection Board compliance report, 2024
The CPRA amendments didn’t weaken privacy law by accident. They institutionalized surveillance as a regulated practice. Cambridge Analytica proved behavioral profiling could reshape elections. CPRA amendments ensured it could reshape elections legally—as long as the company files the right documentation with the California Privacy Protection Agency.

