Index
RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response - Erlingsson et al., CCS '14
Randomized Response + Bloom Filter
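The randomized-response half of RAPPOR's mechanism can be sketched in a few lines; this is a minimal illustration of classic randomized response (not RAPPOR's full permanent/instantaneous Bloom-filter scheme), with the probability parameter chosen here only for the example:

```python
import random

def randomized_response(true_bit: int, p_truth: float = 0.75) -> int:
    """Report the true bit with probability p_truth, otherwise a uniformly
    random bit. Each response has plausible deniability, but the true
    population rate is still recoverable in aggregate."""
    if random.random() < p_truth:
        return true_bit
    return random.randint(0, 1)

def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
    """Debias the aggregate: E[response] = p_truth * rate + (1 - p_truth) * 0.5,
    so invert that relation to estimate the true rate of 1s."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

RAPPOR layers this idea on top of a Bloom-filter encoding of the reported string, so each client perturbs bits of the filter rather than the raw value.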
Deep Learning with Differential Privacy - Abadi et al., CCS '16 [Summary]
Shows how to train deep neural networks with non-convex objectives under a modest differential-privacy budget
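The core per-step recipe in DP-SGD is to clip each example's gradient and add Gaussian noise before averaging; a minimal NumPy sketch of that aggregation step, with hyperparameter names (`clip_norm`, `noise_multiplier`) assumed for illustration:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy gradient-aggregation step in the style of DP-SGD:
    clip each example's gradient to L2 norm <= clip_norm, sum, add
    Gaussian noise scaled to the clipping bound, then average."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The paper's contribution beyond this step is the moments accountant, which tracks the cumulative privacy loss across many such steps far more tightly than basic composition.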
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning - Hitaj et al., CCS '17
Proposes and implements an active inference attack on deep neural networks in a collaborative learning setting, which underscores the importance of secure aggregation and differential privacy
Opaque: An Oblivious and Encrypted Distributed Analytics Platform - Zheng et al., NSDI '17
Prio: Private, Robust, and Scalable Computation of Aggregate Statistics - Corrigan-Gibbs et al., NSDI '17
Honeycrisp: Large-Scale Differentially Private Aggregation Without a Trusted Core - Roth et al., SOSP '19
Deep Leakage from Gradients - Zhu et al., NIPS '19 [Zhihu]
Shredder: Learning Noise Distributions to Protect Inference Privacy - Mireshghallah et al., ASPLOS '20
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models - Shan et al., Security '20
Orchard: Differentially Private Analytics at Scale - Roth et al., OSDI '20