AI Privacy 360 - Tools
AI on Encrypted Data
- HELayers community edition FHE AI SDK (requires a Docker account)
- Python and C++ code (educational non-optimized toolkit)
- API docs
- Tutorial
- Blog
Overview
While encryption allows data to be protected both in transit and in storage, the data typically must be decrypted while it is being accessed for computation and business-critical operations. Fully Homomorphic Encryption (FHE) is a more advanced form of encryption designed to close this gap by allowing data to remain encrypted even during computation. The mathematics behind FHE are designed so that computations can be performed on encrypted data (ciphertext) without the service behind it needing to "see" that data in order to provide accurate results. Thus, using FHE we are able to implement a variety of analytics and AI solutions over encrypted data.
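As a small illustration of what computing on ciphertexts looks like, the sketch below uses the open-source TenSEAL library with the CKKS scheme rather than HELayers; the library choice and the encryption parameters are assumptions made purely for the example.

```python
# Minimal sketch of computing on encrypted data with CKKS, using the
# open-source TenSEAL library (illustrative parameters, not HELayers).
import tenseal as ts

# Key and context generation happens on the data owner's side.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Encrypt two vectors; only ciphertexts leave the owner's machine.
enc_x = ts.ckks_vector(context, [0.5, 1.0, 2.0])
enc_w = ts.ckks_vector(context, [0.1, 0.2, 0.3])

# The (untrusted) server computes a weighted sum without seeing the data.
enc_score = enc_x.dot(enc_w)

# Only the secret-key holder can decrypt the (approximate) result.
print(enc_score.decrypt())  # ~ [0.85]
```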
- Complex Encoded Tile Tensors: Accelerating Encrypted Analytics, Ehud Aharoni, Nir Drucker, Gilad Ezov, Hayim Shaul, and Omri Soceanu, IEEE Security and Privacy, 2022
- Timing Leakage Analysis of Non-constant-time NTT Implementations with Harvey Butterflies, Nir Drucker and Tomer Pelleg, CSCML, 2022
- Privacy-Preserving Record Linkage Using Local Sensitive Hash and Private Set Intersection, Allon Adir, Ehud Aharoni, Nir Drucker, Eyal Kushnir, Ramy Masalha, Michael Mirkin, and Omri Soceanu, Cloud S&P, 2022
- A Methodology for Training Homomorphic Encryption Friendly Neural Networks, Moran Baruch, Nir Drucker, Lev Greenberg, and Guy Moshkowich, SiMLA, 2022
- HeLayers: A Tile Tensors Framework for Large Neural Networks on Encrypted Data, Ehud Aharoni, Allon Adir, Moran Baruch, Nir Drucker, Gilad Ezov, Ariel Farkash, Lev Greenberg, Ramy Masalha, Dov Murik, Hayim Shaul, and Omri Soceanu, PETs, 2023
- HE-PEx: Efficient Machine Learning under Homomorphic Encryption using Pruning, Permutation and Expansion, Ehud Aharoni, Moran Baruch, Pradip Bose, Alper Buyuktosunoglu, Nir Drucker, Subhankar Pal, Tomer Pelleg, Kanthi Sarpatwar, Hayim Shaul, Omri Soceanu, and Roman Vaculin, arXiv, 2022
- BLEACH: Cleaning Errors in Discrete Computations over CKKS, Nir Drucker, Guy Moshkowich, Tomer Pelleg, and Hayim Shaul, IACR ePrint, 2022
- NTT software optimization using an extended Harvey butterfly, Jonathan Bradbury, Nir Drucker, and Marius Hillenbrand, IACR ePrint, 2021
- Efficient Encrypted Inference on Ensembles of Decision Trees, Kanthi Sarpatwar, Karthik Nandakumar, Nalini Ratha, James Rayfield, Karthikeyan Shanmugam, Sharath Pankanti, and Roman Vaculin, arXiv, 2021
- Homomorphic Training of 30,000 Logistic Regression Models, Flavio Bergamaschi, Shai Halevi, Tzipora T. Halevi, and Hamish Hunt, ACNS, 2019
- Poster: Secure SqueezeNet inference in 4 minutes, Ehud Aharoni, Moran Baruch, Nir Drucker, Gilad Ezov, Eyal Kushnir, Guy Moshkowich, and Omri Soceanu, IEEE S&P, 2021
Differential Privacy
Overview
Since its conception in 2006, differential privacy has emerged as the de-facto standard in data privacy, owing to its robust mathematical guarantees, generalised applicability, and rich body of literature. Over the years, researchers have studied differential privacy and its applicability to an ever-widening field of topics.
The IBM Differential Privacy Library is a general-purpose, open-source library for investigating, experimenting with, and developing differential privacy applications in the Python programming language. The library includes a host of mechanisms, the building blocks of differential privacy, alongside a number of applications to machine learning and other data analytics tasks. Simplicity and accessibility have been prioritised in developing the library, making it suitable for a wide audience of users, from those making their first investigations in data privacy to privacy experts looking to contribute their own models and mechanisms for others to use.
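A minimal example of the building blocks the library exposes is sketched below; the epsilon, sensitivity, and bounds values are arbitrary choices for illustration only.

```python
# Minimal diffprivlib example: a Laplace mechanism and a differentially
# private mean (epsilon values chosen arbitrarily for illustration).
import numpy as np
from diffprivlib.mechanisms import Laplace
from diffprivlib.tools import mean

# A basic mechanism: add calibrated Laplace noise to a single value.
mech = Laplace(epsilon=0.5, sensitivity=1.0)
print(mech.randomise(3.0))          # noisy version of 3.0

# A higher-level tool: a differentially private mean over a dataset.
ages = np.random.randint(18, 90, size=1000)
print(mean(ages, epsilon=0.1, bounds=(18, 90)))
```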
- Secure Random Sampling in Differential Privacy, Naoise Holohan and Stefano Braghin, ESORICS 2021
- Diffprivlib: the IBM differential privacy library, Naoise Holohan, Stefano Braghin, Pól Mac Aonghusa, and Killian Levacher, arXiv preprint arXiv:1907.02444, 2019
ML Anonymization
- Python code
- API docs
- Leading paper
- Notebooks
Overview
Organizations often need to train ML models on personal data in a manner that preserves the anonymity of the people whose data was used during training. Learning on anonymized data typically results in a significant degradation in accuracy, while other methods such as those based on differential privacy tend to be much more complex and resource-intensive, requiring that existing training algorithms be replaced with new ones and often relying on several different tools or implementations.
We propose a tool that anonymizes training data in a way that is tailored to a specific model, resulting in anonymized models with much higher accuracy than those obtained by applying traditional anonymization algorithms that do not take the target use of the data into account.
In our paper we demonstrated that this method achieves similar results in its ability to prevent membership inference attacks as alternative approaches based on differential privacy. We also showed that our method is able to defend against other classes of attacks such as attribute inference.
This means that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks such as complexity, performance overhead, and being tied to specific model types. Unlike methods that add noise during training, our approach does not require any modification to the training algorithm itself and works even with “black-box” models where the data owner has no control over the training process. As such, it can be applied in a wide variety of use cases, including ML-as-a-service.
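The sketch below illustrates the general idea of model-guided generalization using only scikit-learn; it is a conceptual toy rather than the toolkit's actual API, and the dataset, quasi-identifier columns, and minimum cell size are assumptions made for the example.

```python
# Conceptual sketch of model-guided anonymization (illustration only, not the
# toolkit's API): use the trained model's own predictions to decide how
# coarsely quasi-identifier values can be generalized.
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in dataset
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The target model whose accuracy we want to preserve.
model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Fit a shallow "guide" tree on the quasi-identifiers, using the target
# model's predictions as labels; its leaves define generalization cells that
# respect the model's decision boundary, each containing at least 50 records.
quasi_ids = [0, 1, 2]   # assumed quasi-identifier columns
guide = DecisionTreeClassifier(min_samples_leaf=50, random_state=0)
guide.fit(X_train[:, quasi_ids], model.predict(X_train))

# Generalize: replace each record's quasi-identifiers with its cell's mean.
cells = guide.apply(X_train[:, quasi_ids])
X_anon = X_train.copy()
for cell in np.unique(cells):
    rows = np.where(cells == cell)[0]
    X_anon[np.ix_(rows, quasi_ids)] = X_train[np.ix_(rows, quasi_ids)].mean(axis=0)

# Compare accuracy of a model retrained on the anonymized data.
retrained = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_anon, y_train)
print("original:", model.score(X_test, y_test),
      "anonymized:", retrained.score(X_test, y_test))
```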
Data Minimization
Overview
The European General Data Protection Regulation (GDPR) dictates that “Personal data shall be: adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. This principle, known as data minimization, requires that organizations and governments collect only data that is needed to achieve the purpose at hand. Organizations are expected to demonstrate that the data they collect is absolutely necessary, by showing concrete measures that were taken to minimize the amount of data used to serve a given purpose. Otherwise, they are at risk of violating privacy regulations, incurring large fines, and facing potential lawsuits.
Advanced machine learning algorithms, such as neural networks, tend to consume large amounts of data to make a prediction or classification. Moreover, these “black box” models make it difficult to derive exactly which data influenced the decision. It is therefore increasingly difficult to show adherence to the data minimization principle.
We propose a tool for data minimization that can reduce the amount and granularity of input data used to perform predictions by machine learning models. It currently supports minimizing newly collected data for analysis (i.e., runtime data), not the data used to train the model, although we may consider extending it in the future. The type of data minimization we perform involves the reduction of the number and/or granularity of features collected for analysis. Features can either be completely suppressed (removed) or generalized. Generalization means replacing a value with a less specific but semantically consistent value. For example, instead of an exact age, represented by the domain of integers between 0 and 120, a generalized age may consist of 10-year ranges.
Our method does not require retraining the model, and does not even assume the availability of the original training data. It therefore provides a simple and practical solution for addressing data minimization in existing systems.
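A toy sketch of such runtime minimization is shown below; the feature names, the suppressed attribute, and the 10-year bin width are hypothetical choices for illustration, not the tool's API.

```python
# Conceptual sketch of runtime data minimization by generalization
# (illustration only). An exact age is replaced by a 10-year range before it
# is sent to the model, represented here by the range midpoint, and a feature
# deemed unnecessary for the prediction is suppressed entirely.
def generalize_age(age: float, bin_width: int = 10) -> float:
    low = int(age // bin_width) * bin_width
    return low + bin_width / 2       # e.g. 37 -> 35.0 (the 30-40 bucket)

def minimize_record(record: dict, suppressed=("zip_code",)) -> dict:
    out = {k: v for k, v in record.items() if k not in suppressed}  # suppress
    out["age"] = generalize_age(out["age"])                         # generalize
    return out

print(minimize_record({"age": 37, "zip_code": "94117", "income": 52000}))
# {'age': 35.0, 'income': 52000}
```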
Privacy Risk Assessment
- Python code
- API docs
- Notebooks
Overview
Recent studies show that a malicious third party with access to a trained ML model, even without access to the training data itself, can still reveal sensitive, personal information about the people whose data was used to train the model. For example, it may be possible to reveal whether or not a person’s data is part of the model’s training set (membership inference), or even infer sensitive attributes about them, such as their salary (attribute inference).
It is therefore crucial to be able to assess the privacy risk of AI models that may contain personal information before they are deployed, allowing time to apply appropriate mitigation strategies. Such assessments also make it possible to compare and choose between ML models based not only on accuracy but also on privacy risk, so that an informed decision can be made about which model is most suitable for a given use case.
Privacy risk assessment can be based on three types of information: (a) empirical results of applying inference attacks to the model, (b) privacy/membership leakage metric scores, and (c) risk factors that have been found to be associated with increased privacy risk. The goal is to eventually be able to compute an overall privacy risk score based on these different dimensions and building blocks.
Example privacy assessment of a mortgage evaluation model: HMDA.
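As a toy example of the first kind of evidence (an empirical attack), the sketch below runs a simple confidence-threshold membership inference baseline against a scikit-learn model; the dataset and threshold are arbitrary stand-ins, and this baseline is far simpler than the attacks referenced below.

```python
# Toy confidence-threshold membership inference baseline (illustration only):
# guess that a sample was in the training set if the model is very confident
# about its true label. The dataset and threshold are arbitrary stand-ins.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def true_label_confidence(model, X, y):
    """The model's predicted probability for each sample's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

threshold = 0.95   # assumed; in practice tuned on shadow models or held-out data
guess_member_train = true_label_confidence(model, X_train, y_train) > threshold
guess_member_test = true_label_confidence(model, X_test, y_test) > threshold

# Balanced attack accuracy: 0.5 means the attack cannot tell members apart.
attack_acc = (guess_member_train.mean() + (1 - guess_member_test.mean())) / 2
print(f"membership inference accuracy: {attack_acc:.2f}")
```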
Types of attacks and metrics
See the references for additional information on each of these attacks:
- Membership inference is a type of attack where, given a trained model and a data sample, one can deduce whether or not that sample was part of the model’s training. This can be considered a privacy violation if the mere participation in training a model may reveal sensitive information, such as in the case of disease progression prediction.
- Attribute inference is an attack where certain sensitive features may be inferred about individuals who participated in training a model. Given a trained model and knowledge about some of the features of a specific person, it may be possible to deduce the value of additional, unknown features of that person.
- Model inversion is an attack that aims to reconstruct representative feature values of the training data by inverting a trained ML model. For example, it may be possible to reconstruct what the average sample for a given class looks like. This can be considered a privacy violation if a class represents a certain person or group of people, such as in facial recognition models.
- DB reconstruction is an attack where, given a trained model and all training samples except one, it is possible to reconstruct the values of the missing record.
- Membership leakage metrics measure the amount of information about a single sample (or a complete dataset) that is leaked by a model. Such leakage can be computed, for example, by comparing the characteristics (e.g., weights) of the model with those of models trained without that sample, or by measuring the mutual information between the training data and the output of the model. The higher the membership leakage of a model, the higher the privacy risk (see the toy sketch after this list).
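As a toy illustration of the leave-one-out intuition behind such metrics (not the tool's actual metric), one can compare a model trained with a given sample against one trained without it and inspect the gap in confidence on that sample:

```python
# Toy leave-one-out illustration of membership leakage (not the tool's metric):
# train a model with and without one target sample and compare the confidence
# each model assigns to that sample's true label; a large gap signals leakage.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
target = 0                                      # index of the sample under test
keep = np.ones(len(y), dtype=bool)
keep[target] = False

with_sample = RandomForestClassifier(random_state=0).fit(X, y)
without_sample = RandomForestClassifier(random_state=0).fit(X[keep], y[keep])

p_in = with_sample.predict_proba(X[[target]])[0, y[target]]
p_out = without_sample.predict_proba(X[[target]])[0, y[target]]
print(f"confidence with sample: {p_in:.3f}, without: {p_out:.3f}, "
      f"gap: {p_in - p_out:+.3f}")
```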
Privacy in Federated Learning
- Python code
- API docs
- Leading paper
- Examples and tutorials
Overview
Federated Learning (FL) is an approach to machine learning in which the training data is not managed centrally. Data are retained by the parties that participate in the FL process and are not shared with any other entity. This makes FL an increasingly popular solution for machine learning tasks in which bringing data together in a centralized data repository is problematic, whether for privacy, regulatory, confidentiality or practical reasons. The approach works for protecting consumers’ data on smartphones as well as data held in data centers in different countries, and everything in between. You can follow a simple demo.
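A minimal sketch of the core mechanic, federated averaging, is shown below in plain NumPy, including the kind of update clipping and noise addition discussed next; the hyper-parameters are assumptions for illustration, and this is not the API of any federated learning framework.

```python
# Toy federated averaging with clipped, noised client updates (illustration
# only; hyper-parameters are arbitrary, and this is not a framework API).
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, clip=1.0, sigma=0.1):
    # One gradient step of least-squares regression on the client's private data.
    grad = X.T @ (X @ global_w - y) / len(y)
    # Clip the update and add Gaussian noise before it leaves the client.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    noisy_grad = grad + rng.normal(0.0, sigma * clip, size=grad.shape)
    return global_w - lr * noisy_grad

# Three clients, each holding its own private data that is never shared.
d = 5
true_w = rng.normal(size=d)
clients = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=200)))

# The aggregator only ever sees the clients' (noisy, clipped) model updates.
w = np.zeros(d)
for _ in range(50):
    w = np.mean([local_update(w, X, y) for X, y in clients], axis=0)

print("distance from true weights:", np.linalg.norm(w - true_w))
```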
However, models trained in a federated way are still vulnerable to inference attacks that reconstruct training data, either during the learning process or from the final model. Combinations of privacy techniques such as differentially private noise, homomorphic encryption and secure multi-party computation improve the privacy of federated learning beyond simply not sharing data. The papers below outline different approaches:
- HybridAlpha: An Efficient Approach for Privacy-Preserving Federated Learning, Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar and Heiko Ludwig, 12th ACM Workshop on Artificial Intelligence and Security (AISec 2019), Nov, 2019, arXiv, Dec, 2019
- A Hybrid Approach to Privacy-Preserving Federated Learning, Stacy Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, and Yi Zhou, arXiv, Dec, 2018, AiSec 2019
- A Syntactic Approach for Privacy-Preserving Federated Learning, O. Choudhury, A. Gkoulalas-Divanis, T. Salonidis, I. Sylla, Y. Park, G. Hsu, A. Das, European Conference on Artificial Intelligence (ECAI)
- Secure Model Fusion for Distributed Learning Using Partial Homomorphic Encryption, Changchang Liu, Supriyo Chakraborty, Dinesh Verma, SpringerLink, Apr, 2019
- Analyzing Federated Learning Through an Adversarial Lens, Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo, ICML 2019, Nov, 2018, arXiv, Nov, 2018
- Towards Federated Graph Learning for Collaborative Financial Crimes Detection, Toyotaro Suzumura, Yi Zhou, Nathalie Baracaldo, Guangnan Ye, Keith Houck, Ryo Kawahara, Ali Anwar, Lucia Larise Stavarache, Yuji Watanabe, Pablo Loyola, Daniel Klyashtorny, Heiko Ludwig, and Kumar Bhaskaran, NeurIPS 2019 Workshop on Robust AI in Financial Services, Dec, 2019, arXiv, Sep, 2019
- Differential Privacy-enabled Federated Learning for Sensitive Health Data, O. Choudhury, A. Gkoulalas-Divanis, T. Salonidis, I. Sylla, Y. Park, G. Hsu, A. Das, NeurIPS ML4H (Machine Learning for Health), 2019, Dec, 2019
- Predicting Adverse Drug Reactions on Distributed Health Data using Federated Learning, O. Choudhury, Y. Park, T. Salonidis, A. Gkoulalas-Divanis, I. Sylla, A. Das, American Medical Informatics Association (AMIA), 2019
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning, Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan, arXiv, Feb, 2021
- Sharing Models or Coresets: A Study based on Membership Inference Attack, H. Lu, C. Liu, T. He, S. Wang, K. S. Chan, FL-ICML Workshop 2020, arXiv, Jul, 2020
- FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data, Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, James Joshi, Heiko Ludwig, arXiv, Mar, 2021
- Accountable Federated Machine Learning in Government: Engineering and Management Insights, Dian Balta, Mahdi Sellami, Peter Kuhn, Ulrich Schöpp, Matthias Buchinger, Nathalie Baracaldo, Ali Anwar, Heiko Ludwig, Mathieu Sinn, Mark Purcell and Bashar Altakrouri, EGOV2021 – IFIP EGOV-CeDEM-EPART, Electronic Participation, 2021