Three tools for practical differential privacy
Privacy Preserving Machine Learning (NeurIPS 2018 workshop)

Differentially private learning on real-world data poses challenges for standard machine learning practice: privacy guarantees are difficult to interpret, hyperparameter tuning on private data reduces the privacy budget, and ad-hoc privacy attacks are often required to test model privacy. We introduce three tools to make differentially private machine learning more practical:
- simple sanity checks which can be carried out in a centralized manner before training,
- an adaptive clipping bound which reduces the effective number of tuneable privacy parameters (see the sketch after this list), and
- large-batch training, which we show improves model performance.
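
To make the second tool concrete, below is a minimal NumPy sketch of a DP-SGD-style update with an adaptive clipping bound. The adaptation rule shown here (tracking a noisy estimate of the mean per-example gradient norm) and the names `clip_and_noise`, `update_clip_bound`, and `norm_noise_scale` are illustrative assumptions, not the paper's exact method; privacy accounting is also omitted.

```python
import numpy as np

rng = np.random.default_rng(0)


def clip_and_noise(per_example_grads, clip_bound, noise_multiplier):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm <= clip_bound, sum, add Gaussian noise calibrated to the
    clip bound, and return the average."""
    clipped = [
        g * min(1.0, clip_bound / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_bound, size=total.shape)
    return (total + noise) / len(per_example_grads)


def update_clip_bound(per_example_grads, norm_noise_scale):
    """Hypothetical adaptation rule: set the next clip bound to a noisy
    estimate of the mean per-example gradient norm, so the bound tracks
    the gradient scale instead of being hand-tuned."""
    norms = np.array([np.linalg.norm(g) for g in per_example_grads])
    noisy_mean = norms.mean() + rng.normal(0.0, norm_noise_scale / len(norms))
    return max(noisy_mean, 1e-3)  # keep the bound strictly positive


# Toy demonstration: minimize ||w - target||^2, where each "example"
# contributes the gradient 2 * (w - target) plus noise.
target = np.array([3.0, -2.0])
w = np.zeros(2)
clip_bound = 1.0  # initial guess; adapted each step below
lr = 0.1

for step in range(200):
    batch = [2 * (w - target) + rng.normal(0.0, 0.5, size=2) for _ in range(64)]
    w -= lr * clip_and_noise(batch, clip_bound, noise_multiplier=1.0)
    clip_bound = update_clip_bound(batch, norm_noise_scale=1.0)

print("learned:", w, "target:", target)
```

Because the injected noise is scaled to the clip bound, adapting the bound to the observed gradient scale removes one hyperparameter that would otherwise have to be tuned on private data; and since the noise is added once per batch, larger batches (the third tool) shrink its relative effect on the averaged gradient.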