Description
To understand a model's sensitivity to attacks and to develop defense mechanisms, machine-learning model designers craft worst-case adversarial perturbations against the model under evaluation using gradient-based optimization algorithms. However, as more rigorous evaluations have highlighted, many of the proposed defenses provide a false sense of robustness caused by failures of the attacks, rather than by actual improvements in the machine-learning models' robustness. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations systematically. Analyzing failures in the optimization of adversarial attacks is therefore the only valid strategy to avoid repeating the mistakes of the past.
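The gradient-based crafting of worst-case perturbations mentioned above can be sketched on a toy model. The snippet below is a minimal single-step (FGSM-style) example against a logistic-regression model, whose input gradient has a closed form so no autodiff framework is needed; the function name and parameters are illustrative, not taken from any specific library.

```python
import numpy as np

def fgsm_perturbation(x, w, b, y, eps):
    """One-step worst-case perturbation (FGSM-style) for a
    logistic-regression model p(y=1|x) = sigmoid(w.x + b).
    The cross-entropy loss gradient w.r.t. the input is (p - y) * w,
    so the attack simply steps in its sign direction."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model confidence for class 1
    grad_x = (p - y) * w                           # d(loss)/dx, closed form
    return x + eps * np.sign(grad_x)               # ascend the loss within an L-inf ball
```

For a real model the input gradient would come from an autodiff framework, and the step is typically iterated with projection (PGD); the single analytic step above only illustrates the principle, and debugging exactly this optimization loop is what the failure analysis above targets.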
Upcoming talks
CHERI: Architectural Support for Memory Protection and Software Compartmentalization
Speaker: Robert Watson - University of Cambridge
CHERI is a processor architecture protection model enabling fine-grained C/C++ memory protection and scalable software compartmentalization. CHERI hybridizes conventional processor, instruction-set, and software designs with an architectural capability model. Originating in DARPA's CRASH research program in 2010, the work has progressed from FPGA prototypes to the recently released Arm Morello […]
Tags: SoSysec, SemSecuElec, Compartmentalization, Hardware/software co-design, Hardware architecture
CHERI standardization and software ecosystem
Speaker: Carl Shaw - Codasip
This talk will describe the current status of the RISC-V International standardization process to add CHERI as an official extension to RISC-V. It will then explore the current state of CHERI-enabled operating systems, toolchains, and software tool development, focusing on CHERI-RISC-V hardware implementations. It will then go on to give likely future development roadmaps and how the […]
Tags: SoSysec, SemSecuElec, Compartmentalization, Operating system and virtualization, Hardware/software co-design, Hardware architecture
Towards privacy-preserving and fairness-aware federated learning framework
Speaker: Nesrine Kaaniche - Télécom SudParis
Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original FL approach has significant shortcomings with respect to privacy and fairness requirements. Specifically, the observation of the model updates may lead to privacy […]
Tags: Cryptography, SoSysec, Privacy, Machine learning
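The server-side aggregation that the FL abstract describes can be sketched as a weighted average of client models (FedAvg-style). This is a minimal illustrative sketch of the baseline aggregation step only; it does not reflect the privacy-preserving or fairness-aware mechanisms of the framework presented in the talk, and the function name is an assumption.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: the server combines client model
    parameters in proportion to each client's local dataset size.
    client_weights: list of parameter arrays, client_sizes: list of ints."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

In a full FL round, the server would broadcast the aggregated model back to the clients for the next local training step; it is precisely the observation of the per-client updates in this loop that raises the privacy concerns mentioned in the abstract.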