madS&P

March 16, 2026

Security and Privacy of Edge AI Models in Social Media Applications

Abstract

Social media apps use AI models for music recommendation, face filters, and caption generation for user images. These models run on user devices, analyzing images before they are uploaded to cloud services, yet they operate within opaque pipelines that are rarely audited by third parties. In this talk, I will demonstrate how we reverse-engineered TikTok and Instagram to uncover their on-device image-analysis systems and evaluate the models’ performance. We analyze model behavior across demographic groups and uncover systematic disparities in outputs. After identifying the models, we presented them to authentic users to understand how such systems are perceived. Discussions with users revealed that nearly all participants were unaware that edge models processed their data and viewed such processing as privacy-invasive. Notably, users’ perceptions of edge AI hinged on whether data left the device. In the final part of the talk, I will demonstrate how we established that the data in question does, in fact, leave the device. Furthermore, we show how on-device image storage can compromise user privacy: we found that some social media applications store images that users never chose to post publicly at accessible URLs. I will conclude by outlining future directions for evaluating and securing edge image-storage systems.

Biography

Jack is a fourth-year PhD student at the University of Wisconsin-Madison and is advised by Suman Banerjee and Kassem Fawaz. His work focuses on reverse-engineering social media applications to assess the security and privacy of current AI deployments.
