Conference on robustness and privacy

Modern, huge databases are naturally exposed to corruptions. These may be due to hardware issues, such as servers crashing, or to data deteriorating during storage, compression, or message exchange. Another source is human corruption, which occurs involuntarily during data manipulation, yielding mislabeling or gross errors, or voluntarily, as with fake news. Some corruptions are more subtle, as when experiments are slightly perturbed by cognitive bias. Since anomalies cannot be ignored in practice, it is important to study the statistical performance of machine learning algorithms on databases possibly containing outliers and/or heavy-tailed data.
Robustness has been studied extensively in statistics since the seminal work of Huber, and many algorithms have also been designed in machine learning. However, the subject has witnessed an important renewal during the last ten years, in both the statistical and computer science communities. It involves statistics, optimization, probability, and machine learning as mathematical domains.
In the meantime, privacy has received a lot of attention in the computer science community, because it is a central issue for the security of sensitive data in finance, economics, and administration and, more basically, for keeping customers’ trust in trading. It is, however, possible to randomize data so as to ensure a controlled amount of privacy for individuals while still being able to learn patterns from the data: this is the cornerstone of privacy mechanisms. Only very recently have statisticians started to analyze such mechanisms. In particular, they have discovered several interesting features common to robustness and privacy. That is our main motivation for organizing a joint conference on the two subjects.
We believe that interactions between people working on robustness and privacy may result in fruitful collaborations. Both fields have become mature enough to warrant a three-day workshop on the subject.
Organizing committee: Cristina Butucea, Victor-Emmanuel Brunel, Nicolas Chopin, Arnak Dalalyan, Guillaume Lecué, Matthieu Lerasle, Vianney Perchet and Alexandre Tsybakov.
Support: The conference is supported by the mathematical institute of CNRS and the Médiamétrie Chair.
For more information, see the homepage of Guillaume Lecué: Conference on robustness and privacy (lecueguillaume.github.io)
Table of contents
- Means and K-means: dimension free PAC-Bayesian bounds for some robust estimators
- Robustly Learning any Clusterable Mixture of Gaussians
- Approximate computation of projection depths
- Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms
- Differentially private inference via noisy optimization
- Near Instance-Optimality in Differential Privacy
- A Central Limit Theorem for Differentially Private Query Answering
- Differentially Private Mean and Covariance Estimation
- Lower bounds for high-dimensional estimation under “local” information constraints
- Optimal Mean Estimation without a Variance
- Using VC-dimension in robust estimation
- Robust Regression with Contamination