Do ‘Privacy-Preserving’ Technologies Harm Workers? Prof. Seema N. Patel Unpacks Impacts of Emerging AI Tools

Professor Seema N. Patel applies her expertise in workers’ rights to explore how AI technologies are reshaping working conditions and affecting employees.

Faculty Who Lead: Seema N. Patel


  • Professor Seema N. Patel draws on decades of employment law and policy experience to examine how AI is changing workplaces in a new co-authored policy brief.
  • She explains how “privacy-preserving” AI tools can allow employers to monitor and influence workers while technically complying with data privacy laws.
  • The co-authored brief recommends that policymakers focus on outcomes and workers’ rights, instead of trying to keep pace with every emerging technology.


As AI tools reshape workplaces, Professor Seema N. Patel helps break down what these changes mean for workers. Drawing on her background in labor rights enforcement, policy reform, and worker advocacy, Patel has co-authored The “Privacy” Trap: How “Privacy-Preserving AI Techniques” Mask the New Worker Surveillance and Datafication. The new policy brief examines how emerging technologies let employers analyze and influence worker behavior while technically complying with privacy laws, and it outlines principles for strengthening workplace protections.

Co-authored with experts from Data & Society, PowerSwitch Action, Coworker.org, and Pennsylvania State University, the report arrives as states consider new rules for automated systems on the job.

In the following Q&A, Patel discusses what workers and policymakers should understand as these technologies become more widespread.

Q: What should workers and policymakers know about “privacy-preserving” AI?

A: Privacy-preserving technologies claim to protect personal data, but protecting workers’ personal data doesn’t necessarily protect workers themselves. Corporations can use Privacy-Preserving AI Techniques as workarounds to analyze data at scale and make predictions without “touching” personal data, enabling them to technically comply with data privacy laws while still exerting harmful control over workers. The emergence of these technologies underscores the ability of tech companies to continually race one step ahead of privacy protections, perpetuating the familiar business dynamics of worker surveillance and subordination.

Q: Why is it important to address this issue now?

A: With the explosion in AI spending unlocked by commercial generative AI, the market for these technologies is likely to grow dramatically. The global market for privacy-enhancing technologies is projected to expand from $2.4 billion in 2023 to $25.8 billion by 2033. Some consultants believe that synthetic data, which can be used to simulate and predict worker behavior, will “completely overshadow” real data in AI models. Absent proactive intervention, these technologies will be deployed in ways that further obscure accountability, entrench inequality, and strip workers of their voice and agency.

Q: What can individuals and governments do to guard against the misuse of powerful technologies?

A: The critical thing is for government and society to focus more on the underlying dynamics of worker power and agency, and less on chasing the newest technology or the data itself. If policymakers focus strictly on regulating the latest technology, they will inevitably fail to keep pace with new developments and iterations; technology has always outpaced regulation. Instead, as we argue in the brief, we must focus our energy on broader structural forces, including what we call the employer surveillance prerogative, which are more directly tied to actual worker outcomes in the long run. We also must address the tremendous imbalance in bargaining power between workers and employers, and low unionization rates (especially among low-wage workers), by providing more formidable avenues for workers to have a say in their employment relationships, whether through labor unions or through membership in other worker organizations, like workers’ centers.

Q: Can you discuss how this became a research interest for you?

A: As a longtime workers’ rights litigator, policymaker, and advocate, I’ve seen how the rapid rise of workplace technologies — and their often harmful impacts on low-wage workers — has further skewed both the playing field and democracy. Especially here in the Bay Area, the epicenter of technological innovation and progress, I’ve seen firsthand how tech lobbyists, business advocates, and industry groups wield influence in local and state politics. That inspired me to explore the dynamics of low-wage worker power-building in this technological realm. How might workers have a say in the invasive, often hidden ways that technologies like AI affect their work lives (and their non-work lives)? What governance mechanisms and guardrails might we consider as a society so that low-wage workers, in particular, can also enjoy the “fruits” of technological innovation, rather than suffer from data extraction and lower wages because of how these technologies are wielded against their interests? These are some of the questions I felt compelled to investigate.

Patel has worked as a labor rights organizer, U.S. Department of Labor trial attorney and appellate litigator, policy reform coordinator at the White House under President Barack Obama, and deputy director of San Francisco’s labor standards enforcement agency, where she oversaw innovative programs and helped establish the nation’s first parental leave mandate and first predictive scheduling law.