Summary

  • This talk was presented on 25 March 2022.

Description

  • Speaker

    Dario Pasquini (EPFL)

This talk is about inaccurate assumptions, unrealistic trust models, and flawed methodologies affecting current collaborative machine learning techniques. In the presentation, we cover security issues concerning both emerging approaches and well-established solutions in privacy-preserving collaborative machine learning. We start by discussing the inherent insecurity of Split Learning and peer-to-peer collaborative learning. We then examine the soundness of current Secure Aggregation protocols in Federated Learning, showing that they do not provide users with any additional level of privacy. Ultimately, the objective of this talk is to highlight the general errors and flawed approaches we all should avoid when devising and implementing "privacy-preserving collaborative machine learning".
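To make the Secure Aggregation setting concrete, below is a minimal Python sketch of pairwise additive masking, the basic mechanism behind common Secure Aggregation protocols in Federated Learning: each pair of clients agrees on a shared random mask, one adds it and the other subtracts it, so the server learns only the sum of the updates. The function name and the example values are illustrative, not taken from the talk.

```python
import random

def masked_updates(updates, seed=0):
    """Pairwise additive masking (Secure Aggregation sketch).

    For each pair of clients (i, j) with i < j, both derive the same
    random mask; client i adds it to its update and client j subtracts
    it. The masks cancel in the server-side sum, so the server can
    compute the aggregate without seeing any individual update.
    """
    rng = random.Random(seed)  # stands in for a pairwise key agreement
    n = len(updates)
    masks = {(i, j): rng.uniform(-1.0, 1.0)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u
        for j in range(n):
            if i < j:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)
    return masked

updates = [0.5, -1.25, 2.0]
masked = masked_updates(updates)
# Individual masked values look random, but the aggregate is exact.
assert abs(sum(masked) - sum(updates)) < 1e-9
```

The talk's point is that hiding individual updates this way is not sufficient: the aggregate itself, and a malicious server's influence over the protocol, can still leak information about users.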

Practical information

Upcoming talks

  • The Design and Implementation of a Virtual Firmware Monitor

    • 30 January 2026 (11:00 - 12:00)

    • Inria Centre of the University of Rennes - Room Petri/Turing

    Speaker: Charly Castes (EPFL)

    Low-level software is often granted high privilege, yet this need not be the case. Although vendor firmware plays a critical role in the operation and management of the machine, most of its functionality does not require unfettered access to security-critical software and data. In this paper we demonstrate that vendor firmware can be safely and efficiently deprivileged, decoupling its[…]

    • SoSysec

    • Compartmentalization

    • Operating system and virtualization

See past talks