Abstract: Modern machine learning models achieve impressive accuracy on tasks from image classification to natural-language processing, but accuracy does not tell the entire story of what these models have learned. Does a model memorize and leak its training data? Did it “accidentally” learn privacy-violating tasks uncorrelated with its training objective? Could it contain hidden backdoor functionality? In this talk, I will explain why common metrics of model quality may hide potential security, privacy, and fairness issues, and outline recent results and open problems at the intersection of machine learning and privacy research.
Bio: Vitaly Shmatikov is a Professor of Computer Science at Cornell Tech, where he works on security and privacy. He has received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies three times, in 2008, 2014, and 2018, as well as Test-of-Time Awards from both the IEEE Symposium on Security and Privacy (S&P / “Oakland”) and the ACM Conference on Computer and Communications Security (CCS).