Can machine learning algorithms preserve privacy and be fair? This project examines these two concepts, their interaction, and their trade-offs. On the one hand, we want machine learning models to treat different social groups equally (fairness). On the other hand, an ML algorithm should not pass data on to third parties, and it should not be possible to draw conclusions about the original training data from the algorithm's output (privacy).
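To make the two concepts concrete, here is a minimal illustrative sketch (not taken from the project itself): demographic parity is one common way to quantify whether groups are treated equally, and the Laplace mechanism is one common way to release a statistic while limiting what can be inferred about any single training record. Function names and parameter choices are our own assumptions for illustration.

```python
import math
import random

def demographic_parity_diff(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate
    across groups; 0 means all groups receive positive predictions
    at the same rate (one simple fairness notion)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0):
    """Add Laplace noise to a count query so that any single record
    has limited influence on the released value (differential privacy).
    Noise is sampled via inverse transform from a uniform draw."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    noise = -scale * sign * math.log(1 - 2 * abs(u))
    return true_count + noise
```

A smaller `epsilon` means stronger privacy but noisier outputs, which is exactly the kind of trade-off the project studies; likewise, enforcing a small demographic parity gap can cost predictive accuracy.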

This project is funded by the Carl Zeiss Foundation as part of “TOPML: Trading Off Non-Functional Properties of Machine Learning” (https://topml.uni-mainz.de/).