INTERNSHIP: In-painting using generative AI for the evaluation of eXplainable AI (XAI) methods (M/F)

  • Internship
  • Grenoble (Isère)
  • IT development

Job description

Vacancy details

General information

Organisation

The French Alternative Energies and Atomic Energy Commission (CEA) is a key player in research, development and innovation in four main areas:
• defence and security,
• nuclear energy (fission and fusion),
• technological research for industry,
• fundamental research in the physical sciences and life sciences.

Drawing on its widely acknowledged expertise, and thanks to its 16,000 technicians, engineers, researchers and staff, the CEA actively participates in collaborative projects with a large number of academic and industrial partners.

The CEA is established in ten centers spread throughout France.

Reference

2024-32910

Unit description

Among other activities, the research teams of the CEA LIST Software Safety and Security Laboratory (LSL) design and implement automated analyses to make software systems more trustworthy, to exhaustively detect their vulnerabilities, to guarantee conformity to their specifications, and to accelerate their certification. The lab recently extended its activities to the topic of AI trustworthiness and created a new research group: AISER (Artificial Intelligence Safety, Explainability and Robustness).

Position description

Category

Mathematics, information, scientific, software

Contract

Internship

Job title

INTERNSHIP: In-painting using generative AI for the evaluation of eXplainable AI (XAI) methods (M/F)

Subject

In-painting using generative AI for the correctness evaluation of eXplainable AI (XAI) methods (M/F)

Contract duration (months)

5-6 months

Job description

With the recent developments in AI, models produced by machine learning have become widespread, even in industrial settings. At the same time, a growing number of studies show the dangers that such models can bring in terms of safety, privacy or even fairness. To mitigate these dangers and improve trust in AI, one possible avenue of research consists in designing methods for generating *explanations* of the model behaviour. Such methods, grouped under the umbrella term "eXplainable AI" (XAI), empower users by providing them with relevant information to make an informed choice about whether to trust the model.
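
To give a concrete flavour of what such an explanation looks like, here is a minimal sketch of one classic XAI technique for vision models, a vanilla gradient saliency map, written in PyTorch (named in the requirements below). The torchvision model and the random input are placeholders for illustration only.

    # Minimal sketch: a vanilla gradient saliency map, one classic XAI
    # technique for vision models. Model and input are placeholders.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

    logits = model(image)
    logits[0, logits.argmax()].backward()  # gradient of the top-class score

    # The explanation: per-pixel gradient magnitude, max over colour channels.
    saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)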

In the field of XAI, multiple metrics have been proposed to evaluate the correctness of an explanation, i.e. how well the explanation reflects the actual behaviour of the AI model. In the particular context of computer vision, most evaluation metrics from the state of the art propose to “de-activate” pixels (e.g. by replacing them with black pixels) and to measure the impact on the model decision. However, recent work has shown that such metrics might not be informative, in the sense that they tend to evaluate the model behaviour on images that do not belong to the training distribution and that can be considered “out of distribution”: indeed, an image with entire regions painted in black can hardly be considered a "normal" input that the model should expect during its lifecycle.
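
As an illustration, this kind of "deletion"-style metric can be sketched in a few lines of PyTorch. The function below is a hypothetical simplification: `model`, `image` (shape 1×3×H×W), `saliency` (shape H×W) and the target class are assumed given, e.g. from the sketch above.

    # Sketch of a deletion-style correctness metric: pixels are ranked by
    # the explanation and progressively replaced with black, recording the
    # drop in the predicted-class score. A hypothetical simplification.
    import torch

    def deletion_curve(model, image, saliency, target, steps=10):
        order = saliency.flatten().argsort(descending=True)  # most important first
        perturbed = image.clone()
        chunk = order.numel() // steps
        scores = []
        for i in range(steps):
            idx = order[i * chunk:(i + 1) * chunk]
            perturbed.view(1, 3, -1)[..., idx] = 0.0  # paint the region black
            with torch.no_grad():
                scores.append(model(perturbed)[0, target].item())
        return scores  # a steep drop suggests a faithful explanation

It is precisely the line painting pixels black that pushes the perturbed images out of distribution, which motivates the in-painting alternative proposed below.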

In this internship, we propose to use generative AI to de-activate pixels in a more subtle way - creating images that resemble the original but with missing features, while remaining “in distribution” - and to study the impact of such a method on the evaluation of the correctness of XAI methods.
More precisely, the internship will be split into the following subtasks:

  • Establish a baseline of existing metrics for evaluating the correctness of XAI methods, using the Quantus framework.
  • Identify a body of existing work on the use of generative AI for in-painting and select a method based on a set of motivated criteria.
  • Implement the selected method and evaluate the advantages and drawbacks of the resulting evaluation metric compared to the state of the art (a rough sketch of this step is given after this list).
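
To make the third task more tangible, here is a rough sketch of how a diffusion-based in-painting model could replace the black-pixel perturbation. The Hugging Face diffusers library and the Stable Diffusion in-painting checkpoint used here are one possible choice among the methods to be surveyed, not a prescribed one; all names are illustrative.

    # Rough sketch: "de-activating" a region with a diffusion in-painting
    # model instead of black pixels, so the perturbed image stays close to
    # the data distribution. Library and checkpoint are one possible choice.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    def inpaint_region(image: Image.Image, mask: Image.Image) -> Image.Image:
        # `mask` is white where the explanation marks important pixels; the
        # pipeline re-synthesises that region from its surroundings.
        # (Inputs are typically resized to the pipeline's native resolution,
        # e.g. 512x512, before calling it.)
        return pipe(prompt="", image=image, mask_image=mask).images[0]

Plugging such a perturbation into a deletion-style metric, and comparing it against the Quantus baseline from the first task, is the kind of evaluation the internship sets out to perform.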

Methods / Means

Explainable AI, Generative AI, Evaluation, Convolutional Neural Networks

Applicant Profile

As it is not realistic to be an expert in machine learning, computer vision and XAI all at once, we encourage candidates who do not meet the full qualification requirements to apply nonetheless. We strive to provide an inclusive and enjoyable workplace. We are aware of discrimination based on gender (especially prevalent in our fields), race or disability, and we are doing our best to fight it.

Minimal qualifications:

  • Master's student or equivalent (2nd/3rd year of engineering school) in computer science
  • Knowledge of Python and the PyTorch framework
  • Ability to work in a team and some familiarity with version control

Preferred:

  • Notions of AI and neural networks
  • Notions of computer vision
  • Notions of explainable AI

Position location

Site

Grenoble

Job location

France, Auvergne-Rhône-Alpes, Isère (38)

Location

Grenoble

Candidate criteria

Languages

  • English (Fluent)
  • French (Fluent)

Prepared diploma

Bac+5 - Master 2

Recommended training

Master's degree in AI, Machine Learning or Data Science

PhD opportunity

Yes

Requester

Position start date

1 February 2025

