AI Privacy 360 - Tools

AI on Encrypted Data



Overview

While encryption protects data both in transit and at rest, the data typically must be decrypted while it is being used for computation and business-critical operations. Fully Homomorphic Encryption (FHE) is a more advanced form of encryption designed to close this gap: it allows data to remain encrypted even during computation. The mathematics behind FHE make it possible to perform computations directly on encrypted data (ciphertext), without the service performing them ever needing to "see" the underlying data in order to produce accurate results. Using FHE, we are therefore able to implement a range of analytics and AI solutions over encrypted data.
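To make this concrete, the toy sketch below uses the classic Paillier scheme, which is only additively homomorphic rather than fully homomorphic, to show a server adding two numbers it never sees in the clear. The parameters are deliberately tiny and insecure; real FHE schemes, and the libraries built on them, additionally support multiplication on ciphertexts, which is what enables full analytics and AI workloads on encrypted data.

    # Toy Paillier sketch (additively homomorphic only -- NOT full FHE).
    # Parameters are far too small for real security; for illustration only.
    import random
    from math import gcd

    p, q = 1789, 1861                              # demo primes
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                           # modular inverse (Python 3.8+)

    def encrypt(m):
        """Enc(m) = (1+n)^m * r^n mod n^2, with random r coprime to n."""
        r = random.randrange(2, n)
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)/n."""
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # The server multiplies the ciphertexts and obtains an encryption of the
    # sum, without ever seeing 42 or 17 in the clear.
    c_sum = (encrypt(42) * encrypt(17)) % n2
    assert decrypt(c_sum) == 59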

Differential Privacy



Overview

Since its conception in 2006, differential privacy has emerged as the de-facto standard in data privacy, owing to its robust mathematical guarantees, generalised applicability, and rich body of literature. Over the years, researchers have studied differential privacy and its applicability to an ever-widening field of topics.

The IBM Differential Privacy Library is a general-purpose, open-source library for investigating, experimenting with and developing differential privacy applications in the Python programming language. The library includes a host of mechanisms, the building blocks of differential privacy, alongside a number of applications to machine learning and other data analytics tasks. Simplicity and accessibility have been prioritised in developing the library, making it suitable for a wide audience of users, from those making their first investigations into data privacy to privacy experts looking to contribute their own models and mechanisms for others to use.
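As a brief illustration of the library in use, the sketch below trains diffprivlib's differentially private Gaussian naive Bayes classifier on the Iris dataset and applies the Laplace mechanism to a single value; the epsilon values and feature bounds are illustrative choices, and keyword arguments may differ slightly between library versions.

    # Minimal diffprivlib sketch: a DP classifier plus a basic DP mechanism.
    # Epsilon values are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    from diffprivlib.models import GaussianNB
    from diffprivlib.mechanisms import Laplace

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Bounds are supplied explicitly so the model does not have to inspect
    # the (private) data in order to clip it.
    clf = GaussianNB(epsilon=1.0, bounds=(X.min(axis=0), X.max(axis=0)))
    clf.fit(X_train, y_train)
    print("DP model accuracy:", clf.score(X_test, y_test))

    # A building block on its own: the Laplace mechanism applied to a count.
    mech = Laplace(epsilon=0.5, sensitivity=1.0)
    print("Noisy count:", mech.randomise(100))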

ML Anonymization



Overview

Organizations often need to train ML models on personal data while preserving the anonymity of the people whose data is used for training. Learning on anonymized data typically results in a significant degradation in accuracy. Other methods, such as those based on differential privacy, tend to be much more complex and resource-intensive: they require replacing existing training algorithms with new ones, and often require the use of several different tools or implementations.

We propose a tool that enables anonymizing training data in a way that is tailored to a specific model, resulting in anonymized models with much higher accuracy than when applying traditional anonymization algorithms that do not take into account the target use of the data.

In our paper we demonstrated that this method prevents membership inference attacks about as well as alternative approaches based on differential privacy. We also showed that it can defend against other classes of attacks, such as attribute inference.

This means that model-guided anonymization can, in some cases, be a legitimate substitute for such methods, while averting some of their inherent drawbacks, such as complexity, performance overhead and being tied to specific model types. Unlike methods that rely on adding noise during training, our approach does not require any modification of the training algorithm itself, and works even with “black-box” models where the data owner has no control over the training process. As such, it can be applied in a wide variety of use cases, including ML-as-a-service.
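The sketch below is a rough, simplified illustration of this idea rather than the tool's actual interface: a decision tree is trained on the target model's predictions with a minimum leaf size of k, and the quasi-identifier values of all records falling into the same leaf are replaced by a common representative, giving a k-anonymous generalization that is guided by what the model needs in order to predict well.

    # Conceptual sketch of model-guided anonymization (hypothetical helper,
    # not the tool's real interface). Records are grouped so that the target
    # model's predictions stay homogeneous, then quasi-identifiers are
    # generalized within each group of at least k records.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def model_guided_anonymize(X, target_model, quasi_idx, k=10):
        """Return a copy of X with quasi-identifier columns generalized."""
        y_pred = target_model.predict(X)          # labels we want to preserve
        tree = DecisionTreeClassifier(min_samples_leaf=k, random_state=0)
        tree.fit(X[:, quasi_idx], y_pred)
        leaves = tree.apply(X[:, quasi_idx])      # leaf id = anonymization group

        X_anon = X.astype(float)
        for leaf in np.unique(leaves):
            members = np.where(leaves == leaf)[0] # each leaf holds >= k records
            block = np.ix_(members, quasi_idx)
            X_anon[block] = np.median(X_anon[block], axis=0)
        return X_anon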

Data Minimization



Overview

The European General Data Protection Regulation (GDPR) dictates that “Personal data shall be: adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”. This principle, known as data minimization, requires that organizations and governments collect only data that is needed to achieve the purpose at hand. Organizations are expected to demonstrate that the data they collect is absolutely necessary, by showing concrete measures that were taken to minimize the amount of data used to serve a given purpose. Otherwise, they are at risk of violating privacy regulations, incurring large fines, and facing potential lawsuits.

Advanced machine learning algorithms, such as neural networks, tend to consume large amounts of data to make a prediction or classification. Moreover, these “black box” models make it difficult to derive exactly which data influenced the decision. It is therefore increasingly difficult to show adherence to the data minimization principle.

We propose a tool for data minimization that can reduce the amount and granularity of input data used to perform predictions by machine learning models. It currently supports minimizing newly collected data for analysis (i.e., runtime data), not the data used to train the model, although we may consider extending it in the future. The type of data minimization we perform involves the reduction of the number and/or granularity of features collected for analysis. Features can either be completely suppressed (removed) or generalized. Generalization means replacing a value with a less specific but semantically consistent value. For example, instead of an exact age, represented by the domain of integers between 0 and 120, a generalized age may consist of 10-year ranges.

Our method does not require retraining the model, and does not even assume the availability of the original training data. It therefore provides a simple and practical solution for addressing data minimization in existing systems.
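As a hedged sketch of how this plays out at runtime (illustrative code, not the tool's actual interface), the snippet below generalizes an age-like feature into 10-year ranges before it is passed to an already-deployed model, and measures how many predictions are left unchanged; when the agreement stays close to 1.0, only the generalized range needs to be collected.

    # Conceptual sketch of runtime data minimization by generalization.
    import numpy as np

    def generalize_to_ranges(values, width=10):
        """Replace exact values with the midpoint of their range (e.g. age 37 -> 35)."""
        lower = (np.asarray(values, dtype=float) // width) * width
        return lower + width / 2.0

    def prediction_agreement(model, X, feature_idx, width=10):
        """Fraction of records whose prediction is unchanged after generalization."""
        X_gen = np.array(X, dtype=float, copy=True)
        X_gen[:, feature_idx] = generalize_to_ranges(X_gen[:, feature_idx], width)
        return np.mean(model.predict(X) == model.predict(X_gen))

    # No retraining takes place: the same deployed model is queried with the
    # original and the generalized records and its predictions are compared.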

Privacy Risk Assessment



Overview

Recent studies show that a malicious third party with access to a trained ML model, even without access to the training data itself, can still reveal sensitive, personal information about the people whose data was used to train the model. For example, it may be possible to reveal whether or not a person’s data is part of the model’s training set (membership inference), or even infer sensitive attributes about them, such as their salary (attribute inference).

It is therefore crucial to be able to assess the privacy risk of AI models that may contain personal information before they are deployed, allowing time to apply appropriate mitigation strategies. Such assessments also make it possible to compare and choose between different ML models based not only on accuracy but also on privacy risk, and thus to make an informed decision on which model is most suitable for a given use case.

Privacy risk assessment can be based on three types of information: (a) empirical results of applying inference attacks to the model, (b) privacy/membership leakage metric scores, and (c) risk factors that have been found to be associated with increased privacy risk. The goal is eventually to be able to compute an overall privacy risk score based on these different dimensions and building blocks.

Example privacy assessment of a mortgage evaluation model: HMDA.


Types of attacks and metrics

See the references for additional information on each of these attacks:

  • Membership inference is a type of attack where, given a trained model and a data sample, one can deduce whether or not that sample was part of the model’s training set. This can be considered a privacy violation if mere participation in training a model may reveal sensitive information, such as in the case of disease progression prediction (a sketch of such an attack appears after this list).
  • Attribute inference is an attack where certain sensitive features may be inferred about individuals who participated in training a model. Given a trained model and knowledge about some of the features of a specific person, it may be possible to deduce the value of additional, unknown features of that person.
  • Model inversion is an attack that aims to reconstruct representative feature values of the training data by inverting a trained ML model. For example, it may be possible to reconstruct what the average sample for a given class looks like. This can be considered a privacy violation if a class represents a certain person or group of people, such as in facial recognition models.
  • DB reconstruction is an attack where, given a trained model and all training samples except one, it is possible to reconstruct the values of the missing record.
  • Membership leakage metrics measure the amount of information about a single sample (or a complete dataset) that is leaked by a model. It can be computed for example by comparing the characteristics (e.g., weights) of the model with those of models trained without that sample, or by measuring the mutual information between the training data and the output of the model. The higher the membership leakage of a model, the higher the privacy risk.
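To give a flavor of the empirical route, the hedged sketch below runs a black-box membership inference attack from the open-source Adversarial Robustness Toolbox (ART) against a scikit-learn model; argument names may differ slightly between ART versions, and the resulting attack accuracy (close to 0.5 indicates little leakage, close to 1.0 indicates serious leakage) would be one input into a broader risk score.

    # Hedged sketch of an empirical membership inference assessment with ART.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    from art.estimators.classification import SklearnClassifier
    from art.attacks.inference.membership_inference import MembershipInferenceBlackBox

    X, y = load_breast_cancer(return_X_y=True)
    x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(x_train, y_train)
    classifier = SklearnClassifier(model=model)

    # Fit the attack model on known members (train) and non-members (test)...
    attack = MembershipInferenceBlackBox(classifier, attack_model_type="rf")
    attack.fit(x_train[:100], y_train[:100], x_test[:100], y_test[:100])

    # ...then see how well it separates unseen members from non-members.
    members = attack.infer(x_train[100:200], y_train[100:200])
    non_members = attack.infer(x_test[100:200], y_test[100:200])
    attack_acc = 0.5 * (np.mean(members) + np.mean(1 - non_members))
    print("Membership inference accuracy:", attack_acc)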

Privacy in Federated Learning



Overview

Federated Learning (FL) is an approach to machine learning in which the training data is not managed centrally. Data are retained by the parties that participate in the FL process and are not shared with any other entity. This makes FL an increasingly popular solution for machine learning tasks where bringing the data together in a centralized repository is problematic, whether for privacy, regulatory, confidentiality or practical reasons. The approach works for protecting consumers’ data on smartphones as well as data held in data centers in different countries, and everything in between. You can follow a simple demo.
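At the heart of many FL systems is federated averaging; the minimal numpy sketch below (an illustration of the idea, not the API of any particular FL framework) shows three parties fitting a shared linear model on their own private data, while a coordinator aggregates only the resulting parameters and never sees the raw records.

    # Minimal federated averaging sketch (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])

    # Three parties, each holding its own private dataset.
    parties = []
    for _ in range(3):
        X = rng.normal(size=(200, 3))
        y = X @ true_w + rng.normal(scale=0.1, size=200)
        parties.append((X, y))

    def local_update(w, X, y, lr=0.1, epochs=5):
        """A few steps of local gradient descent on one party's private data."""
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    w_global = np.zeros(3)
    for _ in range(10):
        # Each party trains locally; only model weights reach the aggregator.
        local_ws = [local_update(w_global, X, y) for X, y in parties]
        sizes = [len(y) for _, y in parties]
        w_global = np.average(local_ws, axis=0, weights=sizes)

    print("Federated estimate:", w_global)   # close to [2.0, -1.0, 0.5]

In practice the shared updates themselves can still leak information about the underlying data, which is where the additional protections discussed next come in.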

However, models trained in a federated way are still vulnerable to inference attacks that reconstruct training data, either during the learning process or from the final model. Combining privacy techniques such as differentially private noise, homomorphic encryption and secure multi-party computation improves the privacy of federated learning beyond simply not sharing the data. The papers below outline different approaches: