Monday, June 9
 

7:30am PDT

Continental Breakfast
Monday June 9, 2025 7:30am - 9:00am PDT
Mezzanine East/West

7:30am PDT

Badge Pickup
Monday June 9, 2025 7:30am - 5:30pm PDT
Santa Clara Ballroom Foyer

9:00am PDT

Opening Remarks
Monday June 9, 2025 9:00am - 9:15am PDT
Santa Clara Ballroom

9:15am PDT

Privacy Paradigms for Law Enforcement Response
Monday June 9, 2025 9:15am - 9:35am PDT
Lukas Bundonis, Netflix; Ben Ballard, MITRE


The phrase "law enforcement response" is ambiguous. It most often describes the domain of legal engineering that comprises corporate disclosure of data to government authorities in response to a legal request for information. However, this definition oversimplifies an opaque process. Law enforcement requests for information and the legal processes that structure them resist comprehension by design. Wiretap and pen register orders (real-time surveillance), lawful intercept requests, emergency disclosure/data requests, and national security letters all fall into this category. However, basic types of information requests, such as subpoenas and warrants for information, are less opaque, and provide an opportunity for greater standardization of information disclosure. A discussion of law enforcement response systems in the context of data privacy therefore bears merit. This is especially true when viewed through the complementary lenses of a concept Lawfare fellow Alan Rozenshtein coined in 2018—"surveillance intermediaries"—and an increasingly aggressive series of nation-state intelligence operations targeting sensitive corporate infrastructure. This short talk will explore what intermediaries are, why they matter, some of the risks posed to their sensitive systems, and what the speakers believe we all can do as privacy professionals to better defend them.


https://www.usenix.org/conference/pepr25/presentation/bundonis
Speakers
Lukas Bundonis

Netflix
Lukas Bundonis is a Senior Privacy Engineer at Netflix and the program lead for Legal and Privacy Engineering (LEAP), which comprises Subject Access Requests (SAR), data holds, law enforcement response, and other legal engineering services. He previously worked on law enforcement... Read More →
Ben Ballard

MITRE
Ben Ballard is a Senior Cybersecurity Engineer at the MITRE Corporation. Ben has served as a Google Public Policy Fellow at the Electronic Frontier Foundation, an X-Force Fellow with the National Security Innovation Network, and a cybersecurity fellow at the Citizen Lab at the Munk... Read More →
Monday June 9, 2025 9:15am - 9:35am PDT
Santa Clara Ballroom

9:35am PDT

Remediating Systemic Privacy Incidents
Monday June 9, 2025 9:35am - 9:55am PDT
Sam Havron, Meta

When a privacy incident occurs, our incident management process kicks in to quickly identify the root cause, mitigate the issue, and conduct a post-mortem review. While a post-mortem helps ensure the same incident doesn't recur, we want to take a more proactive approach: preventing similar incidents and enhancing our privacy posture. This effort faces challenges stemming from limited visibility and insufficient metrics. Incident owners may not be aware of similar incidents that require joint analysis, leading to missed systemic root causes. Furthermore, without measurements of how frequently similar incidents occur, we cannot assess the effectiveness of our prevention efforts. To address these challenges, we've developed a program, along with tooling, to identify, analyze, and remediate systemic privacy incidents. In this talk, we'll cover our approach to tackling these clusters, including:

  • Automated Cluster Identification: Using heuristic and LLM-based methods to automatically identify clusters
  • Analysis and Remediation: Analyzing prioritized systemic clusters and holding teams accountable for remediation
  • Regression Alerting: Implementing alerting systems to detect regressions and prevent similar incidents from happening again

Join us as we share our experiences and insights on tackling systemic privacy incident clusters and improving incident management processes.
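To make the cluster-identification step concrete, here is a minimal sketch of greedy similarity-based grouping of incident summaries. The Jaccard heuristic, the threshold, and the sample incidents are invented for illustration; the production tooling described in the talk combines heuristic and LLM-based matching and is not shown here.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two incident summaries."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster_incidents(summaries: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: an incident joins the first cluster
    containing a sufficiently similar member, else it starts a new cluster."""
    clusters: list[list[str]] = []
    for summary in summaries:
        for cluster in clusters:
            if any(jaccard(summary, member) >= threshold for member in cluster):
                cluster.append(summary)
                break
        else:
            clusters.append([summary])
    return clusters

incidents = [
    "user emails logged in plaintext by debug pipeline",
    "debug pipeline logged user emails to plaintext table",
    "deletion job skipped rows with null timestamps",
]
print(cluster_incidents(incidents))
# [[<the two email-logging incidents>], [<the deletion incident>]]
```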

Authors: Sam Havron, Meta (Speaker); David Huang, Meta (Not Speaking)

https://www.usenix.org/conference/pepr25/presentation/havron
Speakers
Sam Havron

Meta
Sam Havron is a Privacy Engineer at Meta, with a focus on developing workflows to scale incident investigation and review. Sam has an M.S. in Computer Science from Cornell University, and a B.S. in Computer Science from the University of Virginia.
Monday June 9, 2025 9:35am - 9:55am PDT
Santa Clara Ballroom

9:55am PDT

Observable...Yet Still Private? An Offensive Privacy Perspective on Observability
Monday June 9, 2025 9:55am - 10:15am PDT
Cat Easdon, Dynatrace Research; Patrick Berchtold, Dynatrace


Observability platforms provide development and operations teams with insights into their distributed systems, typically combining logs, metrics, and traces with additional telemetry for use cases such as runtime security monitoring and understanding user behavior. While this data is tremendously useful for troubleshooting and product development, it also poses privacy challenges. In this session, we'll consider these challenges through an offensive privacy lens, presenting our research conducting reconstruction attacks against aggregated user session data. We'll explore how offensive privacy research can be used to support the business case for a new product privacy feature, discuss the unique aspects of privacy threat modeling in a business-to-business (B2B) setting, and consider runtime mitigations to halt reconstruction attacks earlier in the 'privacy kill chain'.


https://www.usenix.org/conference/pepr25/presentation/easdon
Speakers
Cat Easdon

Dynatrace Research
Cat Easdon is an engineer and researcher working at the intersection of privacy, security, and policy. She leads Dynatrace's privacy engineering team, designing product privacy features and building privacy controls into the software development lifecycle.
Patrick Berchtold

Dynatrace
Patrick Berchtold is a software engineer at Dynatrace and a student at TU Graz, researching reconstruction attacks at ISEC in collaboration with Dynatrace in his thesis. His thesis focuses on applying reconstruction attacks in industry scenarios, exploring their risks and implications... Read More →
Monday June 9, 2025 9:55am - 10:15am PDT
Santa Clara Ballroom

10:15am PDT

Using Privacy Infrastructure to Kickstart AI Governance: NIST AI Risk Management Case Studies
Monday June 9, 2025 10:15am - 10:30am PDT
Katharina Koerner, Trace3; Nandita Rao Narla, DoorDash


The NIST AI Risk Management Framework has emerged as a popular choice among US-based organizations aiming to build responsible AI governance programs. However, real-world adoption of this very comprehensive framework is both challenging and onerous—often falling on privacy engineers who are voluntold to lead AI governance efforts. This presentation will explore key lessons learned from implementing the NIST AI RMF across different industries, highlighting how existing privacy infrastructure, policies, and other governance frameworks can serve as a foundation for AI risk management and compliance. We will also uncover common pitfalls and present a lightweight approach to jumpstart adoption of the framework.


https://www.usenix.org/conference/pepr25/presentation/koerner
Speakers
Katharina Koerner

Trace3
Katharina is a seasoned expert in AI governance, tech policy, privacy, and security, with a background spanning law, public policy, and emerging technologies. She is currently a Senior Principal Consultant - AI Governance and Risk at Trace3, a leading technology consulting firm specializing... Read More →
Nandita Rao Narla

DoorDash
Nandita Rao Narla is the Head of Technical Privacy and Governance at DoorDash. Previously, she was a founding team member of a data profiling startup and held various leadership roles at EY, where she helped Fortune 500 companies build and mature privacy, cybersecurity, and data governance... Read More →
Monday June 9, 2025 10:15am - 10:30am PDT
Santa Clara Ballroom

10:30am PDT

Coffee and Tea Break
Monday June 9, 2025 10:30am - 11:00am PDT
Mezzanine East/West

11:00am PDT

UsersFirst: A User-Centric Threat Modeling Framework for Privacy Notice and Choice
Monday June 9, 2025 11:00am - 11:20am PDT
Norman Sadeh and Lorrie Cranor, Carnegie Mellon University


Recent privacy regulations impose increasingly stringent requirements on the collection and use of data. This includes more specific obligations to disclose various data practices and the need to provide data subjects with more comprehensive sets of choices or controls. There is also an increasing emphasis on user-centric criteria. Failure to offer usable notices and choices that people can truly benefit from has become a significant privacy threat, whether one thinks in terms of potential regulatory penalties, consumer trust and brand reputation, or privacy-by-design best practices. This presentation will provide an overview of UsersFirst, a Privacy Threat Modeling framework intended to supplement existing privacy threat modeling frameworks and to support organizations in their analysis and mitigation of risks associated with the absence or ineffectiveness of privacy notices and choices. Rather than treating privacy notices and choices as mere checkboxes, UsersFirst revolves around user-centric interpretations of these requirements. It is intended to reflect an emerging trend in privacy regulations where perfunctory approaches to notices and choices are no longer sufficient, and where instead notices and choices are expected to be noticeable, usable, unambiguous, devoid of deceptive patterns, and more. The presentation will include results of a detailed evaluation of the UsersFirst user-centric threat taxonomy with people working and/or trained in privacy.


https://www.usenix.org/conference/pepr25/presentation/sadeh
Speakers
Norman Sadeh

Carnegie Mellon University
Norman Sadeh is a Professor in the School of Computer Science at Carnegie Mellon University (CMU), where he co-founded and co-directs the Privacy Engineering Program. Norman served as lead principal investigator on two of the largest domestic research projects in privacy, the Usable Privacy... Read More →
Lorrie Cranor

Carnegie Mellon University
Lorrie Faith Cranor is the Director and Bosch Distinguished Professor in Security and Privacy Technologies of CyLab and the FORE Systems University Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University. She directs the CyLab Usable Privacy... Read More →
Monday June 9, 2025 11:00am - 11:20am PDT
Santa Clara Ballroom

11:20am PDT

Enterprise-Scale Privacy for AI: How Canva Scaled Customer Control of Data for AI Training
Monday June 9, 2025 11:20am - 11:40am PDT
Phillip Ward, Canva


Canva's mission is to empower the world to design. A major challenge to that mission is securing the data required to build the AI-powered tools that modern professionals love. This goes beyond the newest and fanciest generative-AI tools: even the humble background remover and template library search require data that represents our user community in order to perform at the level our users expect. To create the best experience for users, this data must be as unique and diverse as our community, and as we scale, data from our growing community is essential to building a better product. However, no one can do their best creative work if they do not feel safe and empowered. We want our users to experience high-quality protection for their personal information and personal creations, so every use of their content must be carefully considered. In this talk, I will outline the AI consent platform that we have built. I will share how Canva built an end-to-end ecosystem to simultaneously empower users to control their data and power the next generation of AI tools. This ecosystem spans from the user experience of providing consent to the controls and platforms that ensure our 100+ models respect user consent every day.


https://www.usenix.org/conference/pepr25/presentation/ward
Speakers
Phillip Ward

Canva
Dr. Phillip Ward is the Lead of Privacy Engineering at Canva, specializing in privacy technology with over a decade of experience in software engineering, data science, and computer science. He leads a team focused on creating privacy-enabling infrastructure for the rapidly growing... Read More →
Monday June 9, 2025 11:20am - 11:40am PDT
Santa Clara Ballroom

11:40am PDT

The "Privacy" Offered by PETs and the "Privacy" That Users Want. Why So Different?
Monday June 9, 2025 11:40am - 12:25pm PDT
Don Marti


The "privacy" offered by "privacy-enhancing technologies" (PETs) on the web is remarkably different from the privacy that users want and expect. People seek out privacy to avoid real-world privacy harms such as fraud and algorithmic discrimination, and PETs, focused on more narrow mathematical goals, can actually make the real privacy problems worse and harder to detect. Can today's PETs be fixed, or should the web move to more productive alternatives?


https://www.usenix.org/conference/pepr25/presentation/marti
Speakers
Don Marti

Don Marti is VP of Ecosystem Innovation at Raptive (the company that used to be CafeMedia), and a former strategist at Mozilla and former editor of Linux Journal. He works on web ecosystem and business issues including collaborative research on the impact of advances in consent management... Read More →
Monday June 9, 2025 11:40am - 12:25pm PDT
Santa Clara Ballroom

12:25pm PDT

Conference Luncheon
Monday June 9, 2025 12:25pm - 2:00pm PDT
Terra Courtyard

2:00pm PDT

My $5MM Differential Privacy Visualizations
Monday June 9, 2025 2:00pm - 2:15pm PDT
Marc-Antoine Paré


Let's face it: explaining that "differential privacy is like blurring an image" doesn't get very far in communicating how and why this technology should be used. This talk breaks down the data visualizations from a three-year differential privacy project for the Department of Energy that helped unlock $5MM in funding from grant administrators, convince regulators of the effectiveness of the privacy protection, and generate excitement for adoption among industry partners. While past work in this sector failed to get past academic discussions, this project culminated in two large-scale data releases, subject to a strong differential privacy guarantee (ε=4.72 and δ=5.06⋅10^−9). Practitioners will walk away with ideas and inspiration for bridging the variety of communication gaps found in real-life privacy projects.
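As a rough illustration of the kind of picture that can land with non-experts (not the project's actual visualizations), the sketch below plots true counts next to a Laplace-noised release with error bars. It uses the pure-ε Laplace mechanism as a simplification of the (ε, δ) guarantee quoted above; the counts, sensitivity, and matplotlib styling are invented for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Invented hourly energy-consumption counts; not the project's data.
true_counts = np.array([120, 95, 180, 210, 160, 140])
labels = [f"Hour {i}" for i in range(len(true_counts))]

epsilon = 4.72                 # the budget quoted in the abstract
sensitivity = 1.0              # assume one person affects one count by at most 1
scale = sensitivity / epsilon  # Laplace mechanism: b = sensitivity / epsilon

noisy_counts = true_counts + rng.laplace(0.0, scale, size=true_counts.shape)

# 95% bound for Laplace noise: P(|X| > t) = exp(-t/b)  =>  t = b * ln(20).
err = scale * np.log(20)

x = np.arange(len(true_counts))
plt.bar(x - 0.2, true_counts, width=0.4, label="True counts")
plt.bar(x + 0.2, noisy_counts, width=0.4, yerr=err, capsize=4,
        label="DP release (95% noise bound)")
plt.xticks(x, labels)
plt.ylabel("Count")
plt.legend()
plt.title("At epsilon = 4.72 the noise is tiny relative to the signal")
plt.show()
```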


https://www.usenix.org/conference/pepr25/presentation/pare
Speakers
Marc-Antoine Paré

Marc-Antoine Paré was most recently a staff software engineer leading Cruise's Privacy Infrastructure team. Previously, he was the technical lead for the Department of Energy's "Energy Data Vault", which brought differential privacy to the energy efficiency sector.
Monday June 9, 2025 2:00pm - 2:15pm PDT
Santa Clara Ballroom

2:15pm PDT

Establishing Privacy Metrics for Genomic Data Analysis
Monday June 9, 2025 2:15pm - 2:35pm PDT
Curtis Mitchell, xD, United States Census Bureau


The ability to work with genomic datasets across institutions is a promising approach to understanding and treating diseases such as rare cancers. However, sharing genomic data raises challenging legal and ethical concerns around patient privacy. In this talk we will present ongoing work between the National Institute of Standards and Technology (NIST), the US Census Bureau, and other organizations to explore metrics and use cases for privacy-preserving machine learning on genomic data. We will discuss the goals of the project, its technical architecture based on privacy-preserving federated learning, and the initial performance and privacy metrics we have obtained using plant genomic data as a stand-in for human genomic data.


Additional authors: Gary Howarth and Justin Wagner, NIST; Jess Stahl, Census; Christine Task and Karan Bhagat, Knexus; Amy Hilla and Rebecca Steinberg, MITRE


https://www.usenix.org/conference/pepr25/presentation/mitchell
Speakers
Curtis Mitchell

xD, United States Census Bureau
Curtis Mitchell is an Emerging Technology Fellow on the xD team at the US Census Bureau where he is contributing to a variety of projects involving privacy-enhancing technologies, artificial intelligence, and modern web applications. He has over 15 years of experience in software... Read More →
Monday June 9, 2025 2:15pm - 2:35pm PDT
Santa Clara Ballroom

2:35pm PDT

Practical Considerations for Differential Privacy
Monday June 9, 2025 2:35pm - 2:55pm PDT
Alex Kulesza


What happens when the philosophical aspirations of differential privacy collide with practical reality? Reflecting on seven years of experience building and deploying differential privacy systems at Google, I will describe in this talk some of the ways in which a focus on worst-case outcomes both enables and discourages an honest accounting of privacy risk.


https://www.usenix.org/conference/pepr25/presentation/kulesza
Speakers
Alex Kulesza

Alex Kulesza is a research scientist at Google NYC.
Monday June 9, 2025 2:35pm - 2:55pm PDT
Santa Clara Ballroom

2:55pm PDT

Unlocking Cross-Organizational Insights: Practical MPC for Cloud-Based Data Analytics
Monday June 9, 2025 2:55pm - 3:15pm PDT
Daniele Romanini, Resolve


In today's data-driven landscape, organizations often seek collaborative analytics to gain cross-organizational insights while upholding stringent privacy standards. This talk introduces a practical approach to adopting a Secure Multi-Party Computation (MPC) system for cloud-based data analytics. Leveraging open-source frameworks such as Carbyne Stack and MP-SPDZ, we have developed features that enable developers who are not cryptography or MPC experts to perform private analytics using intuitive, Python-like code. We focus on the practical features that a real-world MPC solution should have, presenting lessons learned and the key modifications to an existing framework needed to reach a stable deployment.

We explain how we enhanced the usability and functionality of the existing framework (Carbyne Stack with MP-SPDZ), for example by implementing support for a semi-honest security model, which is less expensive and more practical than a malicious-security model in some real-world settings. We also address practical considerations of cost and performance, presenting strategies to optimize infrastructure deployment and algorithm-level enhancements that reduce costs and enable complex analytics. Moreover, we illustrate a practical example of how the platform can be leveraged in the AdTech world. This presentation aims to demonstrate that secure and efficient cross-organizational data analytics are achievable, even for developers without specialized MPC expertise.
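For intuition about what such a platform hides from developers, here is a minimal sketch of the additive secret sharing that underlies these MPC systems. It is plain Python for illustration only, not the Carbyne Stack or MP-SPDZ API; the party count and inputs are invented.

```python
import secrets

PRIME = 2**61 - 1  # MPC engines compute on shares modulo a prime field

def share(value: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Three organizations each hold a private figure they won't reveal.
inputs = {"org_a": 1200, "org_b": 950, "org_c": 2100}

# Each org secret-shares its input; any single share looks uniformly random.
all_shares = {name: share(v, 3) for name, v in inputs.items()}

# Compute party i's local sum of the i-th shares (addition needs no
# interaction in MPC; multiplication would require a protocol round).
partial_sums = [sum(all_shares[name][i] for name in inputs) % PRIME
                for i in range(3)]

# Only the final recombination reveals the aggregate, never the inputs.
print(reconstruct(partial_sums))  # 4250
```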

Authors: Adrián Vaca Humanes, Gerardo González Seco, Daniele Romanini, Goran Stipcich


https://www.usenix.org/conference/pepr25/presentation/romanini-unlocking
Speakers
Daniele Romanini

Resolve
Daniele Romanini is a Senior Privacy Engineer at Resolve, with expertise in both data science and software engineering. His background includes experience in academia, government organizations, and the AdTech industry. Daniele is an advocate for privacy-by-design and a privacy tech... Read More →
Monday June 9, 2025 2:55pm - 3:15pm PDT
Santa Clara Ballroom

2:55pm PDT

Coffee and Tea Break
Monday June 9, 2025 2:55pm - 3:45pm PDT
Mezzanine East/West

3:45pm PDT

Building an End-to-End De-Identification Pipeline for Advertising Activity Data at LinkedIn
Monday June 9, 2025 3:45pm - 4:05pm PDT
Saikrishna Badrinarayanan and Chris Harris, LinkedIn


Advertising platforms rely heavily on activity data to measure and optimize ads performance. Under current privacy regulations and platform requirements, LinkedIn is held to increasingly rigorous standards in the handling of our members' personal data. This is especially acute for our ads business, where strict regulations necessitate stringent measures when handling user data, including data minimization, which is fast becoming a global requirement. These regulations continue to evolve, requiring constant adaptation to new standards, while our data pipelines were originally established at a time when the use of personal data was less regulated.

Motivated by the principle of privacy by design, we undertook a comprehensive project involving numerous stakeholders to address these challenges and built a robust end-to-end pipeline that de-identifies advertising activity data. The goal of this project was to ensure that user information is protected while still enabling processing of the de-identified data to generate valuable analytics and let advertisers gauge the effectiveness of their ad spend. We have onboarded products such as performance reporting and billing as the hero use cases on this pipeline. This talk will cover the design, implementation, and innovative aspects of the pipeline. We will discuss the various privacy-enhancing technologies we applied, our system architecture, and the challenges we faced, such as scalability (processing billions of events a day) and balancing privacy with the needs of the business. Finally, we will highlight the outcomes and practical insights gained from this project.
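As one hedged example of a common de-identification building block (not necessarily the specific pipeline described in this talk), keyed pseudonymization replaces a member identifier with a stable but irreversible token. The key name and event fields below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical rotating secret; real systems keep this in a managed key store.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

def pseudonymize(member_id: str) -> str:
    """Keyed hash: the same member maps to the same token, but the mapping
    cannot be reversed or recomputed without the key."""
    return hmac.new(PSEUDONYMIZATION_KEY, member_id.encode(),
                    hashlib.sha256).hexdigest()

event = {"member_id": "12345", "campaign": "c-789", "action": "click"}
deidentified = {**event, "member_id": pseudonymize(event["member_id"])}
print(deidentified)  # same structure, no raw identifier
```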


https://www.usenix.org/conference/pepr25/presentation/badrinarayanan
Speakers
Saikrishna Badrinarayanan

LinkedIn
Saikrishna Badrinarayanan is a Staff Privacy Engineer at LinkedIn. He has spent the last two years building privacy-preserving systems for problems in ads measurement and responsible AI. Before LinkedIn, he worked on privacy/security teams at Snap and Visa. He is a cryptographer by... Read More →
Chris Harris

LinkedIn
Chris Harris is a Senior Staff Engineer at LinkedIn, where they have spent the past nine years working on ads measurement, privacy, and data governance. Passionate about hands-on coding and system performance optimization, they focus on building scalable, privacy-conscious solutions... Read More →
Monday June 9, 2025 3:45pm - 4:05pm PDT
Santa Clara Ballroom

4:05pm PDT

Network Structure and Privacy: The Re-Identification Risk in Graph Data
Monday June 9, 2025 4:05pm - 4:20pm PDT
Daniele Romanini, Resolve


In graph data, particularly those representing human connections, the structure of relationships can inadvertently expose individuals to privacy risks. Recent research indicates that even when traditional anonymization techniques are applied, the unique patterns within a user's local network—referred to as their "neighborhood"—can be exploited for re-identification. This talk delves into the complexities of anonymizing graph data, emphasizing that connections themselves serve as distinctive features that can compromise user privacy.
This talk examines the relationship between a network's average degree (i.e., the average number of connections per node) and the risk of uniquely identifying a node based solely on the network's structure. We discuss how understanding these risks can inform the design of privacy-aware data collection and anonymization methods, ensuring that the benefits of data sharing are balanced with the imperative to protect individual privacy.
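A minimal sketch of the attack idea, assuming a toy random graph and networkx: a node is structurally re-identifiable when the sorted degree sequence of its neighborhood is unique in the network. The graph parameters are invented for illustration.

```python
from collections import Counter

import networkx as nx

# Toy social graph standing in for "anonymized" connection data.
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)

def neighborhood_signature(G: nx.Graph, node) -> tuple:
    """Structural fingerprint of a node: the sorted degrees of its
    neighbors. No identifiers involved, only the local network shape."""
    return tuple(sorted(G.degree(nbr) for nbr in G.neighbors(node)))

signatures = {node: neighborhood_signature(G, node) for node in G}
counts = Counter(signatures.values())

unique = [node for node, sig in signatures.items() if counts[sig] == 1]
print(f"{len(unique)}/{G.number_of_nodes()} nodes have a unique neighborhood "
      "signature and could be re-identified from structure alone")
```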

Authors: Daniele Romanini and Sune Lehmann, Technical University of Denmark; Mikko Kivelä, Aalto University


https://www.usenix.org/conference/pepr25/presentation/romanini-network
Speakers
Daniele Romanini

Resolve
Daniele Romanini is a Senior Privacy Engineer at Resolve, with expertise in both data science and software engineering. His background includes experience in academia, government organizations, and the AdTech industry. Daniele is an advocate for privacy-by-design and a privacy tech... Read More →
Monday June 9, 2025 4:05pm - 4:20pm PDT
Santa Clara Ballroom

4:20pm PDT

Data Classification at Scale: Taming the Hydra
Monday June 9, 2025 4:20pm - 4:40pm PDT
Daniel Gagne, Meta


This talk goes into detail about the data classification processes at Meta, where we assign metadata about the semantics, actor, and other attributes of the data. We start by defining a taxonomy that supports categorization based on the nature of the data and on regulatory requirements, which is then used to ensure appropriate data usage. This supports a wide variety of privacy policies, such as access control, deletion, and purpose limitation. We then take a bytes-up approach to scan data, extract features, and infer labels from the taxonomy. We also detail challenges with different data storage patterns, classification approaches, and quality measurement.
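To illustrate only the bytes-up idea (Meta's production classifiers are far more sophisticated and not shown here), a hypothetical taxonomy slice and a sampling-based column scanner might look like this:

```python
import re

# Hypothetical slice of a classification taxonomy: label -> detector.
TAXONOMY = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE_NUMBER": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IPV4_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def classify_column(values: list[str], threshold: float = 0.8) -> list[str]:
    """Bytes-up classification: scan sampled values and infer taxonomy
    labels when a detector matches most of the column."""
    labels = []
    for label, pattern in TAXONOMY.items():
        hits = sum(bool(pattern.search(v)) for v in values)
        if hits / max(len(values), 1) >= threshold:
            labels.append(label)
    return labels

sample = ["alice@example.com", "bob@example.org", "carol@example.net"]
print(classify_column(sample))  # ['EMAIL_ADDRESS']
```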

Additional Author: Giuseppe M. Mazzeo


https://www.usenix.org/conference/pepr25/presentation/gagne
Speakers
Daniel Gagne

Meta
Danny Gagne is a Software Engineer on the Privacy Infrastructure team at Meta. He holds a B.S. in Computer Science from Northeastern University. He has worked on large scale data classification at the MITRE Corporation and at the International Atomic Energy Agency.
Monday June 9, 2025 4:20pm - 4:40pm PDT
Santa Clara Ballroom

4:40pm PDT

Harnessing LLMs for Scalable Data Minimization
Monday June 9, 2025 4:40pm - 5:00pm PDT
Charles de Bourcy, OpenAI


This talk explores how Large Language Models can enhance Data Minimization practices compared to traditional methods. Advanced contextual understanding can accelerate data classification across an organization's storage locations, improve de-identification of text corpora, and streamline internal governance mechanics. The talk will propose architectures for combining LLM-based tools of various kinds with other techniques like lineage tracing to facilitate proactive data minimization and prevent data sprawl.
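A minimal sketch of what an LLM-assisted minimization check could look like, with hypothetical names throughout; llm_classify is a stand-in for any chat-completion client (a trivial heuristic keeps the example runnable), and the prompt format and actions are invented, not the talk's proposed architecture.

```python
from dataclasses import dataclass

# Hypothetical stand-in for an LLM call; swap in a real client in practice.
def llm_classify(prompt: str) -> str:
    return "yes,hash" if "@" in prompt else "no,retain"

@dataclass
class ColumnVerdict:
    table: str
    column: str
    contains_personal_data: bool
    suggested_action: str  # "delete" | "hash" | "retain"

def review_column(table: str, column: str, samples: list[str]) -> ColumnVerdict:
    """Ask the model to judge, from schema context plus sampled values,
    whether a column holds personal data and how it could be minimized."""
    prompt = (
        f"Table: {table}\nColumn: {column}\n"
        f"Sample values: {samples[:5]}\n"
        "Does this column contain personal data? If yes and it is not "
        "clearly needed, suggest 'delete' or 'hash'; otherwise 'retain'. "
        "Answer exactly as '<yes|no>,<action>'."
    )
    verdict, action = llm_classify(prompt).split(",", 1)
    return ColumnVerdict(table, column, verdict.strip() == "yes", action.strip())

print(review_column("signup_events", "contact", ["a@example.com", "b@example.org"]))
```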


https://www.usenix.org/conference/pepr25/presentation/bourcy
Speakers
Charles de Bourcy

OpenAI
Charles de Bourcy is a Member of Technical Staff at OpenAI. He enjoys exploring new ways to improve privacy protections. He received his PhD from Stanford University.
Monday June 9, 2025 4:40pm - 5:00pm PDT
Santa Clara Ballroom

5:00pm PDT

Conference Reception
Monday June 9, 2025 5:00pm - 6:30pm PDT
Terra Courtyard

6:30pm PDT

Birds-of-a-Feather Sessions (BoFs)
Monday June 9, 2025 6:30pm - 9:30pm PDT
Alameda Room

6:30pm PDT

Birds-of-a-Feather Sessions (BoFs)
Monday June 9, 2025 6:30pm - 9:30pm PDT
Camino Real Room
 
Tuesday, June 10
 

8:00am PDT

Continental Breakfast
Tuesday June 10, 2025 8:00am - 9:00am PDT
Mezzanine East/West

8:00am PDT

Badge Pickup
Tuesday June 10, 2025 8:00am - 12:00pm PDT
Santa Clara Ballroom Foyer

9:00am PDT

Demystifying the Android Telehealth Ecosystem
Tuesday June 10, 2025 9:00am - 9:20am PDT
Primal Wijesekera, ICSI and UC Berkeley; Mohsin Khan


The regulatory landscape surrounding sharing personal health information is complex and constantly evolving. Given that a host of regulations could be relevant to mobile health applications, it is not surprising that many developers and organizations are confused about or unaware of the applicability of such regulations and how to comply. This misunderstanding may cost consumers privacy protection for highly sensitive health data. We examined the data handling practices of 408 Android telehealth apps from 36 countries. We found that a significant portion deployed event reporting, which exposes highly sensitive health data to domains not equipped to handle health data. Such practices demonstrate a clear gap between the operational, technical, and regulatory realms. In our pool of US-based telehealth apps, 48.09% potentially violate at least one applicable regulation. We also uncover three main patterns of violations among the U.S.-based apps, including the potential culpability of the Android Platform.

Liam Webster contributed significantly to the analysis, helping to analyze apps and understand the legal context of these telehealth apps. This work was supported by the U.S. National Science Foundation (under grants CNS-2055772 and CNS-2217771).


https://www.usenix.org/conference/pepr25/presentation/wijesekera
Speakers
Primal Wijesekera

ICSI & UC Berkeley
Primal Wijesekera is a research scientist in the Usable Security and Privacy Research Group at ICSI and also holds an appointment in the EECS at the University of California, Berkeley. His research focuses on exposing current privacy vulnerabilities and providing systematic solutions... Read More →
Mohsin Khan

Mohsin Khan is a seasoned data privacy expert with a deep focus on applications and data in the healthcare privacy domain. His experience spans implementing enterprise-wide privacy programs at Oscar Health Insurance to addressing critical privacy concerns in cloud computing, IoT... Read More →
Tuesday June 10, 2025 9:00am - 9:20am PDT
Santa Clara Ballroom

9:20am PDT

Safetypedia: Crowdsourcing Privacy Inspections
Tuesday June 10, 2025 9:20am - 9:40am PDT
Lisa LeVasseur and Bryce Simpson, Internet Safety Labs


Internet Safety Labs aims to have current safety labels for all of the more than 5 million mobile apps by 2029. To accomplish this, we've been experimenting with two paths: automation and crowdsourcing. Since October 2024, we've been running a pilot project called Safetypedia to crowdsource mobile app privacy inspections and labels. This presentation will share our findings on the hybrid approach of using both automation and crowdsourcing to increase the volume of safety labels (viewable at https://appmicroscope.org/). We'll discuss the challenges crowdsourcing poses for both participants and ISL, whether the approach warrants a full rollout, and whether the resultant community (and safety labels) can be a viable grassroots approach to accountability and digital product safety.


https://www.usenix.org/conference/pepr25/presentation/levasseur
Speakers
Lisa LeVasseur

Internet Safety Labs
Lisa LeVasseur is the founder, Executive Director and Research Director of Internet Safety Labs, a nonprofit software product safety testing organization. Her technical industry contributions and deep knowledge of consumer software products and connected technologies span more than... Read More →
Bryce Simpson

Internet Safety Labs
Bryce Simpson is a Safety Researcher/Auditor at Internet Safety Labs. He's been performing cyber security and online privacy assessments for five years, with recent focus on educational technology platforms, synthesizing regulatory requirements and industry best practices. Specializing... Read More →
Tuesday June 10, 2025 9:20am - 9:40am PDT
Santa Clara Ballroom

9:40am PDT

Verifying Humanness: Personhood Credentials for the Digital Identity Crisis
Tuesday June 10, 2025 9:40am - 10:00am PDT
Tanusree Sharma, Pennsylvania State University


With the rise of AI-powered deception, identity verification systems are increasingly important for distinguishing between AI and humans. Building on related concepts such as decentralized identifiers (DIDs), proof-of-personhood, and anonymous credentials, personhood credentials (PHCs) have emerged as an alternative approach, enabling individuals to verify that they are a unique person without disclosing additional information. However, new technologies can introduce friction due to users' misunderstandings and mismatched expectations. This talk will discuss how people reason about the unknown privacy and security guarantees of PHCs compared to current methods, and the factors that influence how people would like to manage PHCs. Specifically, it will address critical design considerations, including the role of trusted issuers (e.g., government, private entities), the reliability of data attributes used for credential issuance (e.g., biometrics, physical IDs, selfies), and the trade-offs between centralized and decentralized issuance systems.


https://www.usenix.org/conference/pepr25/presentation/sharma
Speakers
Tanusree Sharma

Pennsylvania State University
Tanusree Sharma is an Assistant Professor at Penn State University and directs the Governance, Privacy, and Security (GPS) Research Lab. Her work, at the intersection of Security, HCI, and blockchain is oriented around answering the question: How can we design secure systems that... Read More →
Tuesday June 10, 2025 9:40am - 10:00am PDT
Santa Clara Ballroom

10:00am PDT

Coffee and Tea Break
Tuesday June 10, 2025 10:00am - 10:30am PDT
Mezzanine East/West

10:30am PDT

Career Advice for Privacy Engineers: From Resume to Interview to Finding the Next Job
Tuesday June 10, 2025 10:30am - 10:50am PDT
Jason A. Novak, Google


We are in the second decade of privacy engineers being hired in industry, and the process has professionalized at all levels: the backgrounds applicants come from, the way candidates are screened, job descriptions, and how interviews are conducted. This talk aims to demystify the privacy engineering job market as it currently exists, to empower privacy engineers (current and prospective) in their career growth, and to educate privacy engineers on hiring processes and practices.


https://www.usenix.org/conference/pepr25/presentation/novak
Speakers
Jason A. Novak

Google
Jason Novak (jasonanovak.com) is a Sr. Staff Privacy Engineer at Google where he works on cross company privacy initiatives including AI data protection. Prior to Google, Jason led the Privacy Engineering teams at Cruise and Apple where he helped teams develop new infrastructure products... Read More →
Tuesday June 10, 2025 10:30am - 10:50am PDT
Santa Clara Ballroom

10:50am PDT

Privacy Engineers on the Front Line: Bridging Technical and Managerial Skills
Tuesday June 10, 2025 10:50am - 11:05am PDT
Ramazan Yener, Muhammad Hassan, and Masooda Bashir, University of Illinois at Urbana-Champaign


Privacy engineers (PEs) play a critical role in managing sensitive data and ensuring responsible system design amid evolving regulations like GDPR and CCPA. Yet, their skills and challenges remain poorly understood. To address this gap, we conducted a mixed-methods study combining survey responses from 28 privacy engineers and semi-structured interviews with 18 practitioners across diverse industries. Our findings reveal that PEs navigate a unique intersection of technical, managerial, and interpersonal skills. Educational backgrounds and certifications influence their career trajectories, equipping them to tackle industry-specific challenges such as navigating complex regulatory frameworks or implementing privacy-by-design principles in domains ranging from healthcare to technology. Beyond technical skills, PEs rely on interpersonal and managerial abilities to collaborate with cross-functional teams, negotiate with stakeholders, and advocate for privacy-first practices. They also perceive their roles as dual mandates: ensuring compliance while driving innovation, a balance that requires strategic approaches to overcome organizational resistance. This research highlights the diverse skill set of PEs and offers recommendations for supporting their growth. By shedding light on their evolving roles, we aim to inspire new professionals to explore this vital field while helping organizations better define and support the PE role.


https://www.usenix.org/conference/pepr25/presentation/yener
Speakers
Muhammad Hassan

University of Illinois at Urbana-Champaign
Muhammad Hassan is a PhD student at the University of Illinois focused on the intersection of security, privacy, and usability within digital healthcare. His work explores the critical security and privacy issues that arise in healthcare technologies, emphasizing the usability constraints... Read More →
Ramazan Yener

University of Illinois at Urbana-Champaign
Ramazan Yener is a PhD student at the University of Illinois Urbana-Champaign, researching user-centered privacy and security, with a focus on policy and governance in AI and IoT-driven systems. He examines apps and online platforms from a user perspective to identify potential privacy... Read More →
Masooda Bashir

University of Illinois at Urbana-Champaign
Dr. Masooda Bashir is an Associate Professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, where she conducts interdisciplinary research that bridges mathematics, computer science, and psychology. Her research sheds new light on digital trust... Read More →
Tuesday June 10, 2025 10:50am - 11:05am PDT
Santa Clara Ballroom

11:05am PDT

Purpose Limitation with Policy Zones: What We Tried That Didn't Work
Tuesday June 10, 2025 11:05am - 11:20am PDT
Diana Marsala, Meta


Purpose limitation is a foundational principle in data privacy, ensuring that the use of data is strictly confined to the explicitly stated purpose(s) disclosed at the time of collection. This presentation is a retrospective analysis on early iterations of Policy Zones, one of Meta's technical solutions designed to enforce purpose limitation.

By examining the evolution of Policy Zones, including lessons learned and key insights into why certain approaches were effective or ineffective, we aim to provide a deeper understanding of the complexities and opportunities in implementing purpose limitation at scale.

In particular, we'll focus on three common purpose limitation paradigms:


  • Set it and forget it: Dynamically propagating policies from service to service, throughout the stack

  • One size fits all: Leveraging a single policy for all purpose-limitation-based privacy requirements

  • Run checks everywhere: Running privacy checks server-side for every data store, in a single chokepoint that covers all callers


These three paradigms were initially adopted by Meta. Although they appeared sound and facilitated a faster system build-out, we later identified several gaps that necessitated a redesign of our systems to enhance operational maturity and make them more viable at scale.


https://www.usenix.org/conference/pepr25/presentation/marsala
Speakers
Diana Marsala

Meta
Diana Marsala is a Software Engineer on Meta's Privacy Infrastructure team, where she plays a pivotal role in shaping the company's approach to privacy. As an early adopter of privacy infrastructure technologies, she has successfully leveraged these tools to uphold critical privacy... Read More →
Tuesday June 10, 2025 11:05am - 11:20am PDT
Santa Clara Ballroom

11:20am PDT

Building Privacy Products: Field Notes
Tuesday June 10, 2025 11:20am - 11:40am PDT
Miguel Guevara, Google


This talk will provide insights from a product manager working on privacy: high-level lessons the author has accumulated over years of experience working on privacy products. These insights shed light on what can make a privacy product successful, and what other considerations to take into account when developing cutting-edge privacy technology with unclear use cases.


The author will also share learnings from unsuccessful privacy experiences that can help others as they embark on a privacy journey.

 
The ultimate goal is to start a conversation that can help privacy practitioners build privacy products and features in a successful manner.


https://www.usenix.org/conference/pepr25/presentation/guevara
Speakers
Miguel Guevara

Google
Miguel Guevara is a Product Manager working in Google’s Data Protection team. His primary focus area is building systems that apply privacy-enhancing technologies at scale. He holds a Master's in Public Policy and a Master's in Computer Science.
Tuesday June 10, 2025 11:20am - 11:40am PDT
Santa Clara Ballroom

11:40am PDT

Short Break
Tuesday June 10, 2025 11:40am - 11:45am PDT
Mezzanine East/West

11:45am PDT

Panel: How Privacy Engineers Can Shape the Coming Wave of AI Governance
Tuesday June 10, 2025 11:45am - 12:30pm PDT
Moderator: Zachary Kilhoffer, Dynatrace;
Panelists: Hoang Bao, Axon; Masooda Bashir, University of Illinois at Urbana-Champaign; Debra Farber, Lumin Digital; Sarah Lewis Cortes, Netflix and NIST; Shoshana Rosenberg, WSP in the U.S. and Women in AI Governance; Akhilesh Srivastava, IOPD
Privacy engineers often work on complex AI systems, and many now find themselves playing AI governance roles alongside their privacy engineering responsibilities. However, PEs are not only seeing their roles shaped by AI governance; they also have an opportunity to shape it. PEs are at the forefront of regulatory compliance, and as AI governance evolves and regulations take shape, they are uniquely positioned to lead in this domain. This panel explores the intersection of privacy engineering practice and the new wave of AI governance. As the EU AI Act ushers in a new era of AI regulation, reminiscent of GDPR's impact on privacy, we examine how privacy engineering practices and skills can inform and enhance AI governance strategies.
https://www.usenix.org/conference/pepr25/presentation/panel-ai-governance
Speakers
Hoang Bao

Axon
Hoang Bao has two decades of experience in building and leading privacy and data governance programs. He is currently serving as Director, Global Head of Privacy and Data Privacy Officer for Axon, where he helps ensure Axon is always at the forefront in fulfilling its commitment to... Read More →
Masooda Bashir

University of Illinois at Urbana-Champaign
Dr. Masooda Bashir is an Associate Professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, where she conducts interdisciplinary research that bridges mathematics, computer science, and psychology. Her research sheds new light on digital trust... Read More →
Debra Farber

Lumin Digital
Debra J. Farber is a seasoned privacy executive and leader with over 20 years of experience operationalizing privacy across complex, data-driven environments. She spent the bulk of her career operationalizing privacy programs at companies large and small before shifting left into... Read More →
Zachary Kilhoffer

Dynatrace
Dr. Zachary (Zak) Kilhoffer is a manager of AI governance at Dynatrace. His research focuses on the ethical, socio-economic, and political implications of emerging technologies, particularly artificial intelligence. With a multidisciplinary background spanning international relations... Read More →
Sarah Lewis Cortes

Netflix and NIST
Dr. Sarah Lewis Cortes (CISSP, FIP, CIPP/E (GDPR), CIPT, CISM, CISA, CRISC) is a leading expert with over 20 years of global-scale technology experience in domains including strategy and execution for Information Security, Privacy Engineering, Privacy Enhancing Technologies (PETs... Read More →
Shoshana Rosenberg

WSP in the U.S. and Women in AI Governance
Shoshana Rosenberg is a distinguished executive and corporate attorney with an exceptionally broad purview and extensive expertise in international data protection and technology law. As a Senior Vice President, Deputy General Counsel, and Chief AI Governance and Privacy Officer... Read More →
Akhilesh Srivastava

IOPD
Akhilesh Srivastava is a strategic senior leader with a wealth of experience as a Product Technical PM in large tech companies like Meta, Amazon, Capital One, and FINRA. Over his 19-year journey, he has been at the forefront of innovation across diverse domains, including Privacy... Read More →
Tuesday June 10, 2025 11:45am - 12:30pm PDT
Santa Clara Ballroom

12:30pm PDT

Conference Luncheon
Tuesday June 10, 2025 12:30pm - 2:00pm PDT
Terra Courtyard

2:00pm PDT

Beyond RAG: Building Reliable AI Systems for Privacy Assessments
Tuesday June 10, 2025 2:00pm - 2:15pm PDT
Emily Choi-Greene, Clearly AI


As organizations explore AI automation for privacy assessments, ensuring reliable and trustworthy output is critical. This talk examines practical challenges in building AI systems that can consistently interpret privacy requirements, process engineering documentation, and produce reliable assessments. We'll set context by discussing which components of privacy assessments are ripe for automation, and which require more human oversight. We'll then explore technical approaches to prevent hallucinations, handle conflicting documentation, normalize AI outputs, and validate assessments against established policies. Drawing from real-world implementation experience, we'll share key patterns for building robust privacy automation systems that maintain high accuracy while scaling across organizations.


https://www.usenix.org/conference/pepr25/presentation/choi-greene
Speakers
Emily Choi-Greene

Clearly AI
Emily Choi-Greene is the CEO and co-founder of Clearly AI, a Y Combinator-backed startup that automates security and privacy reviews. Previously, Emily led data security and privacy at Moveworks, including enterprise-grade privacy-preserving ML, sensitive data detection, and data... Read More →
Tuesday June 10, 2025 2:00pm - 2:15pm PDT
Santa Clara Ballroom

2:15pm PDT

When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study in Synthetic Data
Tuesday June 10, 2025 2:15pm - 2:30pm PDT
Yash Maurya and Aman Priyanshu, Carnegie Mellon University


Modern synthetic data generation with privacy guarantees has become increasingly prevalent. Take real data, create synthetic versions following similar patterns, and ensure privacy through differential privacy mechanisms. But what happens when theoretical privacy guarantees meet real-world data? Even with conservative epsilon values (ε
Speakers
Yash Maurya

Carnegie Mellon University
Yash Maurya is a Privacy Engineer who evaluates empirical guarantees of real-world privacy deployments, having designed privacy-preserving systems at Meta, PwC, BNY Mellon, and Samsung. An IAPP Westin Scholar with a Master's in Privacy Engineering from Carnegie Mellon University... Read More →
Aman Priyanshu

Carnegie Mellon University
Aman Priyanshu is an incoming AI Researcher at Cisco focused on AI safety & privacy. With a Masters from CMU, his research on foundation model vulnerabilities and LLM security has attracted media coverage and led to invitations to OpenAI's Red Teaming Network. His work has earned... Read More →
Tuesday June 10, 2025 2:15pm - 2:30pm PDT
Santa Clara Ballroom

2:30pm PDT

Quantifying Reidentification Risk for ML Models
Tuesday June 10, 2025 2:30pm - 2:50pm PDT
Nitin Agrawal and Laura Book, Snap Inc.


Machine learning models, in particular classification models, are used across a wide spectrum of products and applications. These models may be susceptible to attacks, such as model inversion and attribute inference, that could allow reconstruction of the training data and re-identification of the data subjects. However, not all models are attackable: well-generalized models are less prone to memorizing their training data, and privacy-preserving techniques can help ensure training generalizes rather than memorizes. A key challenge at industrial scale lies in identifying the attackability of a model and calibrating the need for privacy mitigations. Academic literature has established an ordering among attacks, demonstrating that membership inference attacks are a precursor to the reconstruction and re-identification of training data. In this talk we'll discuss a mechanism for repurposing those attacks into a practical, quantifiable metric of ML model attackability. This could be critical in ensuring model privacy and in ongoing monitoring throughout the model deployment lifecycle.
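As a rough sketch of the idea (not the speakers' actual metric), a loss-threshold membership inference attack can be repurposed into an attackability score: the larger the best achievable advantage, the more the model leaks about its training set. The per-example loss distributions below are synthetic.

```python
import numpy as np

def loss_threshold_mia(train_losses: np.ndarray,
                       test_losses: np.ndarray,
                       threshold: float) -> float:
    """Score a loss-threshold membership inference attack: guess 'member'
    when the per-example loss is below the threshold.
    Returns attack advantage = TPR - FPR (0 means no leakage)."""
    tpr = float(np.mean(train_losses < threshold))  # members flagged
    fpr = float(np.mean(test_losses < threshold))   # non-members flagged
    return tpr - fpr

rng = np.random.default_rng(1)
# Synthetic per-example losses: a memorizing model shows visibly lower
# loss on its training data than on unseen data.
train_losses = rng.gamma(shape=1.5, scale=0.2, size=10_000)
test_losses = rng.gamma(shape=1.5, scale=0.5, size=10_000)

# Sweep thresholds; keep the best advantage as the attackability metric.
thresholds = np.quantile(np.concatenate([train_losses, test_losses]),
                         np.linspace(0.01, 0.99, 99))
advantage = max(loss_threshold_mia(train_losses, test_losses, t)
                for t in thresholds)
print(f"attack advantage: {advantage:.2f} (closer to 1 = more attackable)")
```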


https://www.usenix.org/conference/pepr25/presentation/agrawal
Speakers
Nitin Agrawal

Snap Inc.
Nitin Agrawal is currently a Privacy Engineer at Snap Inc., focussing on privacy validation, AI privacy, and data classification. Previously, he worked as an Applied Scientist for Alexa Privacy at Amazon. He holds a Ph.D. in Computer Science from the University of Oxford, where his... Read More →
Laura Book

Snap Inc.
Laura Book is a Privacy Engineer at Snap Inc., where she is currently focusing on validating privacy adherence across the product. Previously, she worked at Google as a software engineer with a focus on monetization, privacy and data governance. She holds a PhD in Physics from the... Read More →
Tuesday June 10, 2025 2:30pm - 2:50pm PDT
Santa Clara Ballroom

2:50pm PDT

Breaking Barriers, Not Privacy: Real-World Split Learning across Healthcare Systems
Tuesday June 10, 2025 2:50pm - 3:10pm PDT
Sravan Kumar Elineni


Regulatory constraints and siloed data often hinder collaborative AI in healthcare. Our project, supported by ONC/ASTP, implements split learning to enable three independent HIEs to jointly train a deep learning model without sharing sensitive patient data. We detail the technical workflow (e.g., partial model hosting and secure exchange of activations) and discuss how we navigated real-world challenges in data integration and quality, network security, and regulatory compliance. Preliminary results show strong model performance and seamless interoperability across participating sites, suggesting a robust blueprint for large-scale privacy-preserving ML in healthcare.
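For readers new to the mechanics, here is a single-step split-learning sketch in PyTorch. The layer sizes, data, server-side label placement, and single shared optimizer are placeholders; the project's actual architecture and secure-transport layer are not shown.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# The client (an HIE) keeps raw features local; only cut-layer
# activations and their gradients cross the wire.
client_half = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_half = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))

opt = torch.optim.SGD(
    list(client_half.parameters()) + list(server_half.parameters()), lr=0.1
)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 32)       # stand-in for local clinical features
labels = torch.randint(0, 2, (8,))  # label placement varies by deployment

opt.zero_grad()

# Client side: forward to the cut layer, then "transmit" activations.
activations = client_half(features)
received = activations.detach().requires_grad_()

# Server side: finish the forward pass and backpropagate to the cut layer.
loss = loss_fn(server_half(received), labels)
loss.backward()

# Client side: apply the returned cut-layer gradient to finish backprop.
activations.backward(received.grad)
opt.step()
print(f"one split-learning step, loss = {loss.item():.3f}")
```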


Authors: Sean Muir, Dave Carlson, Himali Saitwal, David E Taylor, Chit win, Jayme Welty, Adam Wong, Jordon Everson, Keith Salzman, Serafina Versaggi, Lindsay Cushing, Savannah Mueller


https://www.usenix.org/conference/pepr25/presentation/elineni
Speakers
Sravan Kumar Elineni

Sravan Kumar Elineni is a seasoned technologist with over a decade of experience in healthcare data systems and emerging fields such as machine learning, robotics, computer vision, and natural language processing. He recently led a high-impact project implementing a state-of-the-art... Read More →
Tuesday June 10, 2025 2:50pm - 3:10pm PDT
Santa Clara Ballroom

3:10pm PDT

Coffee and Tea Break
Tuesday June 10, 2025 3:10pm - 3:40pm PDT
Mezzanine East/West

3:40pm PDT

OneShield Privacy Guard: Deployable Privacy Solutions for LLMs
Tuesday June 10, 2025 3:40pm - 3:55pm PDT
Shubhi Asthana, IBM Research


The adoption of Large Language Models (LLMs) has revolutionized AI applications but has introduced complex challenges in enforcing scalable and adaptive privacy safeguards. This talk presents OneShield Privacy Guard, a framework engineered to mitigate privacy risks through context-aware guardrails across the LLM lifecycle—input preprocessing, inference, and output sanitization—while leveraging automated risk assessment for continuous refinement.

The talk explores two key deployments of OneShield Privacy Guard. The first deployment focuses on an enterprise-scale multilingual system for data governance, demonstrating enhanced PII detection accuracy and optimized privacy-utility tradeoffs compared to existing solutions. OneShield's integration provided real-time privacy enforcement, improving compliance adherence in high-volume enterprise environments.

The second deployment highlights an open-source implementation for automated privacy risk triaging, where OneShield reduced manual intervention in privacy-sensitive pull requests by 30% while maintaining compliance precision. This deployment demonstrates its adaptability in privacy-first software development workflows, enabling efficient and automated risk mitigation.

These deployments illustrate OneShield's scalability and deployment flexibility in enterprise and open-source ecosystems. Attendees will gain insights into its technical architecture, tradeoff considerations, and deployment challenges, equipping them with strategies for building automated, high-fidelity privacy safeguards for real-world AI applications.
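To make the guardrail lifecycle concrete, here is a minimal sketch of input preprocessing and output sanitization wrapped around a model call. The regexes and the call_model stub are illustrative assumptions, not OneShield's interface; production guardrails use context-aware detectors rather than fixed patterns.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def sanitize(text: str) -> str:
    """Mask obvious PII patterns in either direction of the exchange."""
    return SSN.sub("[SSN]", EMAIL.sub("[EMAIL]", text))

# Hypothetical stand-in for any LLM call.
def call_model(prompt: str) -> str:
    return f"echo: {prompt}"

def guarded_completion(user_input: str) -> str:
    # Guardrails wrap the full lifecycle: input preprocessing,
    # inference, and output sanitization.
    clean_in = sanitize(user_input)
    raw_out = call_model(clean_in)
    return sanitize(raw_out)

print(guarded_completion("Reach me at jane.doe@example.com, SSN 123-45-6789"))
```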


https://www.usenix.org/conference/pepr25/presentation/asthana
Speakers
Shubhi Asthana

IBM Research
Shubhi Asthana is a Senior Research Software Engineer at IBM Almaden Research Center, specializing in AI and machine learning solutions for privacy-preserving technologies, particularly PII detection and management in unstructured data. She has contributed to the development of multimodal... Read More →
Tuesday June 10, 2025 3:40pm - 3:55pm PDT
Santa Clara Ballroom

3:55pm PDT

Scaling Federated Systems at Meta: Innovations in Analytics and Learning
Tuesday June 10, 2025 3:55pm - 4:10pm PDT
Sai Aparna Aketi and Harish Srinivas, Meta
At Meta, we are advancing the scalability and efficiency of federated systems through innovations in both Federated Analytics (FA) and Federated Learning (FL). Our FA system is designed to facilitate privacy-preserving analytics for billions of devices, addressing the key challenges of scalability, resource efficiency, and data privacy. By leveraging one-shot algorithms, batch processing, and predictable query loads, FA achieves efficient large-scale data processing while ensuring robust privacy safeguards through Trusted Execution Environments (TEEs). The system supports flexible ad-hoc querying with rapid iteration cycles, minimizing resource consumption even on constrained devices.
Simultaneously, we have enhanced our internal FL simulation framework, FLSim, to meet the demands of large-scale distributed learning. We addressed previous scalability bottlenecks by integrating FLSim with an asynchronous Remote Procedure Call (RPC) communication protocol. As a result, FLSim can now simulate FL training over 200,000 users, each with 50 samples, at a throughput of 10 million samples per minute for a small three-layer neural network on an 8x8 distributed cluster.
The synergy of FA's scalable architecture and FLSim's optimized FL capabilities have enabled Meta to deploy internal use-cases leveraging federated technologies, with a pipeline of additional applications in development.
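As a simplified picture of the one-shot pattern (not Meta's implementation), each device emits a single clipped contribution and only the combined result is released; in production, aggregation of this kind would run inside a TEE so the server never sees individual values. The metric and bounds are invented.

```python
import random

rng = random.Random(0)

# Hypothetical on-device metric, e.g. minutes of daily app usage.
device_values = [rng.uniform(0, 180) for _ in range(100_000)]

CLIP = 120.0  # bound each device's influence on the aggregate

def one_shot_contribution(value: float) -> tuple[float, int]:
    """Each device reports exactly once: a clipped (sum, count) pair,
    rather than participating in a multi-round protocol."""
    return (min(max(value, 0.0), CLIP), 1)

# Stand-in for TEE aggregation: contributions are combined inside the
# enclave and only the final aggregate leaves it.
total, count = 0.0, 0
for v in device_values:
    s, c = one_shot_contribution(v)
    total += s
    count += c

print(f"released aggregate: mean = {total / count:.1f} minutes over {count} devices")
```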
Authors:
PPML Team: Anjul Tyagi, Othmane Marfoq, Luca Melis, Aparna Aketi
Pytorch Edge Team: Diego Palma Sánchez, Harish Srinivas
https://www.usenix.org/conference/pepr25/presentation/aketi
Speakers
Sai Aparna Aketi

Meta
Sai Aparna Aketi is a Postdoctoral Researcher in the Central Applied Science team at Meta where she works on building Privacy Enhancing Technologies for Machine Learning applications. She received her Ph.D. in Electrical and Computer Engineering from Purdue University in 2024.
Harish Srinivas

Meta
Harish Srinivas is a software engineer in PyTorch Edge team at Meta where he works on building on device AI frameworks and Privacy Enhancing Technologies for Machine Learning applications.
Tuesday June 10, 2025 3:55pm - 4:10pm PDT
Santa Clara Ballroom

4:10pm PDT

Using GenAI to Accelerate Privacy Implementations
Tuesday June 10, 2025 4:10pm - 4:30pm PDT
Rajkishan Gunasekaran and Rituraj Kirti, Meta


Meta has developed several streamlined workflows to enable developers to deploy purpose-limitation technology. Adoption of this technology can, however, often become challenging, both because of the scale at which Meta operates and the deep privacy domain knowledge and platform expertise required for efficiently implementing solutions.

This presentation will walk through how Meta leverages Generative AI and automation for scalable deployment of purpose-limitation technology, guiding developers toward durable privacy solutions. We will discuss the architecture of the tooling behind key privacy workflows at Meta, such as annotating assets, reviewing data flows, and preventing regressions, and how we use Generative AI to accelerate decision-making for developers at every step of the way.


https://www.usenix.org/conference/pepr25/presentation/gunasekaran
Speakers
Rajkishan Gunasekaran

Meta
Rajkishan Gunasekaran is a Software Engineer on the Privacy Infrastructure team at Meta. He has worked on many of the foundational privacy infrastructure technologies such as Policy Zones, Data Lineage and Privacy Developer Tooling. Gunasekaran holds B. Tech and M. Tech degrees in... Read More →
Rituraj Kirti

Meta
Rituraj Kirti is a Software Engineer on the Privacy Infrastructure team at Meta that builds technologies for addressing privacy obligations. Kirti's prior work at Meta includes creating and scaling various products that apply machine learning to improve the effectiveness of advertisers... Read More →
Tuesday June 10, 2025 4:10pm - 4:30pm PDT
Santa Clara Ballroom

4:30pm PDT

From Existential to Existing Risks of Generative AI: A Taxonomy of Who Is at Risk, What Risks Are Prevalent, and How They Arise
Tuesday June 10, 2025 4:30pm - 4:50pm PDT
Megan Li and Wendy Bickersteth, Carnegie Mellon University


Due to its general-purpose nature, Generative AI is applied in an ever-growing set of domains and tasks, leading to an expanding set of risks impacting people, communities, society, and the environment. These risks may arise due to failures during the design and development of the technology, its release, deployment, or downstream usages and appropriations of its outputs. In this paper, building on prior taxonomies of AI risks and failures, we construct both a taxonomy of Generative AI risks and a taxonomy of the sociotechnical failure modes that precipitate them through a systematic analysis of 499 publicly reported incidents. We'll walk through some example incidents and highlight those related to privacy. We describe what risks are reported, how they arose, and who they impact. We report the prevalence of each type of risk, failure mode, and affected human entity in our dataset, as well as their co-occurrences. We find that the majority of reported incidents are caused by use-related issues but pose risks to parties beyond the end user(s) of the Generative AI at fault. We argue that tracing and characterizing Generative AI failure modes to their downstream risks in the real world offers actionable insights for many stakeholders, including policymakers, developers, and Generative AI users. In particular, our results call for the prioritization of non-technical risk mitigation approaches.


https://www.usenix.org/conference/pepr25/presentation/li
Speakers
Megan Li

Carnegie Mellon University
Megan Li is a Societal Computing PhD student at Carnegie Mellon University co-advised by Drs. Lorrie Cranor and Hoda Heidari. She is currently thinking mostly about Generative AI safety.
Wendy Bickersteth

Carnegie Mellon University
Wendy Bickersteth is a Societal Computing PhD student at the Carnegie Mellon CyLab Security and Privacy Institute. Working with Dr. Lorrie Cranor, she conducts usable privacy and security research, focusing on privacy labels and AI use.
Tuesday June 10, 2025 4:30pm - 4:50pm PDT
Santa Clara Ballroom
 