Secure, Privacy-Preserving Machine Learning in TensorFlow

Abstract: In the future, our most impactful applications will rely on our most sensitive data, such as medical records, financial statements, location history, and voice transcripts. If this data were to end up in the wrong hands, it could be disastrous for the individual, erode trust in the company, and carry legal consequences under policies such as GDPR. However, with recent advances in computing power and cryptographic techniques, data privacy and machine learning no longer need to be adversaries.

In this talk, we introduce the importance of data privacy for advancing machine learning applications across healthcare, finance, and transportation. We discuss the technologies enabling this future, such as differential privacy, secure multi-party computation, and garbled circuits, and how they can be used to train and deploy secure, privacy-preserving machine learning models. Finally, we demonstrate how you can use these technologies today with tf-encrypted, an open source library built on top of TensorFlow for secure, privacy-preserving machine learning.
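To make the secure multi-party computation idea concrete, here is a minimal additive secret-sharing sketch in plain Python. This is illustrative only; it is not the tf-encrypted API, and the function names and modulus are assumptions chosen for clarity. The core idea: a secret is split into random shares that sum to the secret modulo a fixed prime, so no single share reveals anything, yet parties can add shared values without ever seeing them.

```python
import random

# Illustrative field modulus; real protocols pick this carefully.
PRIME = 2**31 - 1

def share(secret, n_parties=3):
    """Split `secret` into `n_parties` additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    # The last share is chosen so all shares sum to the secret mod PRIME.
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

def add_shared(a_shares, b_shares):
    """Each party adds its own shares locally, yielding shares of a + b,
    without any party learning a or b."""
    return [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
```

Secure addition of model weights or gradients in tf-encrypted builds on this same principle, with TensorFlow ops operating on tensors of shares instead of plaintext values.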

Bio: Yann is a machine learning engineer and privacy researcher at Dropout Labs. He started his career as an actuary at the largest insurance company in Canada, first in reinsurance, then in research and development. He then managed a data science team at Deloitte in San Francisco, working with several Fortune 500 enterprises in the consumer products industry. He holds an MASc in Electrical and Computer Engineering from Institut Superieur d'Electronique de Paris. In his free time, you can find him surfing at Ocean Beach or indoor rock climbing in San Francisco.