Description
This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover several security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. We then examine the soundness of current Secure Aggregation protocols in Federated Learning, showing that they do not provide users with any additional level of privacy. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we should all avoid when devising and implementing "privacy-preserving collaborative machine learning".
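For context, the sketch below illustrates the kind of Secure Aggregation scheme whose privacy guarantees the talk questions: clients blind their model updates with pairwise masks that cancel in the server-side sum, so the server only learns the aggregate. This is a simplified illustration, not material from the talk; the plain RNG standing in for a pairwise key agreement and all variable names are assumptions made for brevity.

```python
# Minimal sketch of pairwise-masked secure aggregation (illustrative only).
# Each pair of clients (i, j), i < j, shares a mask: client i adds it and
# client j subtracts it, so all masks cancel when the server sums the
# contributions and only the aggregate update is revealed.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 8

# Hypothetical local model updates (gradients or weight deltas in a real system).
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks derived from shared randomness (a plain RNG stands in here
# for a proper pairwise key agreement such as Diffie-Hellman).
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """Return client i's update blinded with all of its pairwise masks."""
    blinded = updates[i].copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            blinded += mask
        elif b == i:
            blinded -= mask
    return blinded

# The server only ever sees masked contributions; their sum equals the true sum.
aggregate = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(aggregate, sum(updates))
print(aggregate / n_clients)  # averaged update recovered by the server
```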
Upcoming talks
CHERI: Architectural Support for Memory Protection and Software Compartmentalization
Speaker: Robert Watson - University of Cambridge
CHERI is a processor architecture protection model enabling fine-grained C/C++ memory protection and scalable software compartmentalization. CHERI hybridizes conventional processor, instruction-set, and software designs with an architectural capability model. Originating in DARPA’s CRASH research program in 2010, the work has progressed from FPGA prototypes to the recently released Arm Morello […]
Tags: SoSysec, SemSecuElec, Compartmentalization, Hardware/software co-design, Hardware architecture

CHERI standardization and software ecosystem
Speaker: Carl Shaw - Codasip
This talk will describe the current status of the RISC-V International standardization process to add CHERI as an official extension to RISC-V. It will then explore the current state of CHERI-enabled operating systems, toolchains and software tool development, focusing on the CHERI-RISC-V hardware implementations. It will then go on to give likely future development roadmaps and how the […]
Tags: SoSysec, SemSecuElec, Compartmentalization, Operating system and virtualization, Hardware/software co-design, Hardware architecture

Towards privacy-preserving and fairness-aware federated learning framework
Speaker: Nesrine Kaaniche - Télécom SudParis
Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original approach of FL has significant shortcomings related to privacy and fairness requirements. Specifically, the observation of the model updates may lead to privacy […] (a minimal FedAvg aggregation sketch follows this listing)
Tags: Cryptography, SoSysec, Privacy, Machine learning

Malware Detection with AI Systems: bridging the gap between industry and academia
Speaker: Luca Demetrio - University of Genova
With the abundance of programs developed every day, it is possible to build next-generation antivirus programs that leverage this vast accumulated knowledge. In practice, these technologies are developed with a mixture of established techniques like pattern matching and machine learning algorithms, both tailored to achieve a high detection rate and few false alarms. While companies state the […]
Tags: SoSysec, Intrusion detection, Machine learning
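
As a companion to the federated learning abstract above, the sketch below shows the server-side aggregation step (FedAvg-style weighted averaging) that such a framework orchestrates. It is only an illustration under simplifying assumptions: the update format (plain NumPy arrays) and the weighting by reported local dataset size are choices made here for brevity, not details taken from the talk.

```python
# Minimal sketch of FedAvg-style server aggregation (illustrative only).
# Each client submits a model update plus the number of local samples it
# trained on; the server computes the sample-weighted average.
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    """Weighted average of client updates, weights proportional to dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return sum(w * u for w, u in zip(weights, client_updates))

# Hypothetical round with three clients and a 5-parameter model.
rng = np.random.default_rng(1)
updates = [rng.normal(size=5) for _ in range(3)]
sizes = [100, 400, 500]  # local dataset sizes reported by each client

global_update = fedavg_aggregate(updates, sizes)
print(global_update)
```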