STAGE - Out-of-distribution detection for adversarial attacks evasion H/F

  • Internship
  • Grenoble (Isère)
  • IT development

Job description

Offer details

General information

Parent organization

The CEA is a major research organization working in the service of citizens, the economy, and the State.

It provides concrete solutions to their needs in four main areas: the energy transition, the digital transition, technologies for the medicine of the future, and defence and security, all built on a foundation of fundamental research. For more than 75 years, the CEA has been committed to the scientific, technological and industrial sovereignty of France and Europe, for a present and a future that are better managed and more secure.

Located in the heart of regions equipped with very large research infrastructures, the CEA has a wide range of academic and industrial partners in France, in Europe and internationally.

The CEA's 20,000 employees share three core values:

• A sense of responsibility
• Cooperation
• Curiosity

Reference

2024-32919

Unit description

The French Alternative Energies and Atomic Energy Commission (CEA) is a key player in research, development, and innovation. Drawing on the widely acknowledged expertise of its 16,000 staff spread across 9 research centres, and with a budget of 4.1 billion euros, the CEA actively participates in more than 400 European collaborative projects with numerous academic (notably as a member of Paris-Saclay University) and industrial partners. Within the CEA Technological Research Division, the CEA List institute addresses the challenges of smart digital systems.

Among other activities, CEA List's Software Safety and Security Laboratory (LSL) research teams design and implement automated analyses to make software systems more trustworthy, to exhaustively detect their vulnerabilities, to guarantee conformity to their specifications, and to accelerate their certification. The lab recently extended its activities to the topic of AI trustworthiness and gave birth to a new research group: AISER (Artificial Intelligence Safety, Explainability and Robustness).

Position description

Field

Mathematics, scientific information, software

Contract

Internship

Offer title

STAGE - Out-of-distribution detection for adversarial attacks evasion H/F

Internship subject

During this internship, you will study the use of AISER's OOD-detection method, PARTICUL, to identify whether new inputs were tampered with. You will work with the open-source library CaBRNet (Xu-Darme et al. 2024), developed at CEA List, which provides an implementation of PARTICUL.

Contract duration (in months)

5-6

Offer description

With the recent developments of AI, the use of models produced by machine learning has become widespread, even in industrial settings. However, a growing body of studies shows the dangers that such models can bring in terms of safety, privacy, or even fairness.

To mitigate these dangers and improve trust in AI, one possible avenue of research consists in designing methods that generate explanations of a model's behaviour.

Such methods, grouped under the umbrella term “eXplainable AI” (XAI), empower users by providing them with the relevant information to make an informed decision about whether to trust the model.

Another important topic is assessing the correct domain of operation of a neural network. Indeed, the inputs of a neural network are expected to be drawn from a distribution similar to its training set. To put it bluntly, a model trained to detect pedestrians on a road should not be expected to perform well when presented with pictures of planes.
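
To make this idea concrete, below is a minimal, hedged sketch of one common baseline for spotting such out-of-domain inputs: scoring each input by the classifier's maximum softmax probability. This is only an illustrative baseline, not the PARTICUL method studied in this internship; `model`, `classifier`, `batch` and `threshold` are placeholders.

```python
# Illustrative baseline only (not PARTICUL): flag inputs on which the classifier is unusually
# unconfident, a simple proxy for "this input does not look like the training distribution".
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability per input; low values suggest an out-of-distribution input."""
    with torch.no_grad():
        logits = model(x)                  # shape (batch_size, num_classes)
        probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values

# Hypothetical usage: flag inputs whose score falls below a threshold calibrated on
# held-out in-distribution data, e.g. its 5th percentile.
# is_ood = msp_score(classifier, batch) < threshold
```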

As embedding such a limitation directly in a neural network is unfeasible, there has been a lot of work in the field of “out-of-distribution detection” (OOD detection).
Through several works (Xu-Darme, Girard-Satabin, et al. 2023), the AISER team bridged XAI and OOD detection, using case-based reasoning techniques to detect a distribution shift in an input. This ability can also serve other purposes, for instance monitoring for the presence of maliciously modified samples such as adversarial examples (Szegedy et al. 2014).
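
As an illustration of what such maliciously modified samples may look like, here is a hedged sketch of a standard single-step gradient-sign attack (FGSM), deliberately simpler than the attacks cited in this offer; the model, inputs and the perturbation budget of 8/255 are assumptions for the example.

```python
# Illustrative sketch of a simple adversarial attack (single-step gradient sign), not the
# specific attacks cited in this offer: each pixel is nudged by eps in the direction that
# increases the classification loss, which often flips the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """Return adversarially perturbed copies of `x` (images assumed to lie in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```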


During this internship, you will study the use of AISER's OOD-detection method, PARTICUL, to identify whether new inputs were tampered with. You will work with the open-source library CaBRNet (Xu-Darme et al. 2024), developed at CEA List, which provides an implementation of PARTICUL.


The broad internship goals are:
• familiarization with the state of the art on XAI (Molnar 2022), OOD detection (Tajwar et al. 2021) and adversarial examples (Carlini and Wagner 2016);
• getting started with the PARTICUL implementation in CaBRNet;
• design and implementation of benchmarks involving the tampering of whole datasets with adversarial examples;
• evaluation against other OOD-detection methods, using for instance the OpenOOD benchmark (Yang et al. 2022); a minimal sketch of such an evaluation loop is given after this list.
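
As a rough idea of what such a benchmark could look like, here is a hedged sketch of an evaluation loop that tampers with each batch and reports how well a detector separates clean from tampered inputs (AUROC, in the spirit of OpenOOD-style benchmarks). The `detector_score` and `attack` callables and the data loader are assumptions for illustration, not the CaBRNet API.

```python
# Hypothetical evaluation loop: score clean and adversarially tampered inputs with an
# OOD detector and report the AUROC of separating the two (higher is better).
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_detector(detector_score, attack, model, loader, device="cpu"):
    """AUROC for distinguishing clean inputs (label 0) from tampered ones (label 1)."""
    scores, labels = [], []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(model, x, y)                       # tamper the whole batch
        scores += detector_score(x).tolist() + detector_score(x_adv).tolist()
        labels += [0] * len(x) + [1] * len(x_adv)
    # detector_score is assumed to mean "in-distribution confidence": negate it so that
    # higher values mean "more anomalous" before computing the AUROC.
    return roc_auc_score(labels, -np.asarray(scores))
```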

Resources / Methods / Software

CaBRNet

Desired profile

Candidate profile

The candidate will work at the confluence of several topics: artificial intelligence, machine learning and cybersecurity.

As it is not realistic to be an expert in all of these fields, we encourage candidates who do not meet the full qualification requirements to apply nonetheless. We strive to provide an inclusive and enjoyable workplace. We are aware of discrimination based on gender (especially prevalent in our fields), race or disability, and we are doing our best to fight it. One of our team members has received formal training on handling psychological harassment and sexual abuse.


Minimal
  • Master's student or equivalent (2nd/3rd year of engineering school) in computer science;
  • ability to work in a team;
  • some knowledge of version control.
Preferred
  • formal training in machine learning and/or statistics;
  • experience with machine learning theory.
