Description
This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. We then examine the soundness of current Secure Aggregation protocols in Federated Learning, showing that they provide no additional privacy to users. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we should all avoid when devising and implementing "privacy-preserving collaborative machine learning".
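To make the Secure Aggregation setting concrete, here is a minimal, illustrative sketch of the pairwise-masking idea behind many such protocols (not the specific constructions analyzed in the talk; all names and parameters are invented for the example). Each pair of users shares a random mask that one adds and the other subtracts, so the server learns only the sum of the updates, never an individual one:

```python
# Illustrative sketch of pairwise-masking Secure Aggregation (toy version,
# no key agreement or dropout handling; not the speakers' construction).
import random

NUM_USERS = 3
DIM = 4  # length of each model update vector

# Each user's private model update (toy integer vectors).
updates = {i: [random.randint(-5, 5) for _ in range(DIM)] for i in range(NUM_USERS)}

# Pairwise shared masks: one random vector per user pair (i, j) with i < j.
masks = {(i, j): [random.randint(-100, 100) for _ in range(DIM)]
         for i in range(NUM_USERS) for j in range(i + 1, NUM_USERS)}

def masked_update(i):
    """User i uploads its update, adding masks shared with higher-id users
    and subtracting masks shared with lower-id users."""
    out = list(updates[i])
    for (a, b), m in masks.items():
        if a == i:      # i holds the lower id in this pair: add the mask
            out = [x + y for x, y in zip(out, m)]
        elif b == i:    # i holds the higher id: subtract the mask
            out = [x - y for x, y in zip(out, m)]
    return out

# The server sums the masked uploads; every mask appears once with each sign,
# so the masks cancel and only the aggregate of the real updates remains.
aggregate = [sum(col) for col in zip(*(masked_update(i) for i in range(NUM_USERS)))]
expected = [sum(col) for col in zip(*updates.values())]
assert aggregate == expected
```

The talk's point is that the server learning "only the aggregate" is exactly the guarantee whose real-world value is being questioned.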
Upcoming talks
Hardware-Software Co-Designs for Microarchitectural Security
Speaker: Lesly-Ann Daniel - EURECOM
Microarchitectural optimizations, such as caches and speculative out-of-order execution, are essential for achieving high performance. However, these same mechanisms also open the door to attacks that can undermine software-enforced security policies. The current gold standard for defending against such attacks is the constant-time programming discipline, which prohibits secret-dependent control[…]
Tags: SoSysec, Hardware/software co-design, Micro-architectural vulnerabilities
Should I trust or should I go? A deep dive into the (not so reliable) web PKI trust model
Speaker: Romain Laborde - University of Toulouse
The padlock shown in the URL bar of our favorite web browser indicates that we are connected over a secure HTTPS connection, providing some sense of security. Unfortunately, the reality is slightly more complex. The trust model of the underlying Web PKI is invalid, making TLS a colossus with feet of clay. In this talk, we will dive into the trust model of the web PKI ecosystem to understand[…]
Tags: SoSysec, Protocols, Network