I am a French researcher interning at Inria before my PhD, advised by Loucas Pillaud-Vivien and Francis Bach. My research interests are in understanding deep-learning systems through mathematical and theoretical tools, in particular during training and deployment: How does a model train? What computation does a trained model perform? Here is a list of resources illustrating these questions:

  • Singular Learning Theory: wild maths linking algebraic geometry to Bayesian machine learning.
  • Mechanistic interpretability: one can try to recover the algorithms learned by foundation models on tasks such as memorization, translation, and elementary algebra.
  • Superposition, grokking, and related phenomena.
  • Implicit biases: when neural networks interpolate data, many interpolating solutions exist, and I am interested in understanding the properties of the one found by gradient-descent-like algorithms.
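
A classical toy illustration of this implicit bias (my own sketch, not taken from this page): for overparameterized least squares, gradient descent initialized at zero converges to the minimum ℓ2-norm interpolant, i.e. the pseudoinverse solution, even though infinitely many interpolants exist.

```python
# Sketch: gradient descent on an overparameterized least-squares problem
# selects the minimum-norm interpolating solution (assumes zero initialization).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 20                        # fewer data points than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                     # zero initialization: iterates stay in row space of X
lr = 0.01
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)     # gradient of 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y  # minimum l2-norm interpolant

print(np.allclose(X @ w, y, atol=1e-6))       # w interpolates the data
print(np.allclose(w, w_min_norm, atol=1e-4))  # and matches the pseudoinverse solution
```

The geometric reason: the gradient always lies in the row space of X, so starting from zero the iterates never pick up a component orthogonal to it, and the limit is the interpolant of smallest norm.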

About me

I volunteer for important, meaningful and fun associations. I am currently involved in

  • EffiSciences, where I helped develop and promote the idea of Recherche Impliquée (RI, close to "Impactful Research"): the idea that research can and should try to have a positive impact on the world. EffiSciences runs many projects, and since 2022 I have contributed to several in RI, biosecurity, and AI safety. We recently wrote a report on impactful research that you can find here.
  • CeSIA, where I helped teach machine learning at ML4G bootcamps and interpretability research at the Turing seminar (slides), and helped with other field-building events in the AI-safety community.

Other stuff: I like to read and sometimes post on LessWrong. My favorite sport is {biking, running, weight-lifting, BJJ, climbing}. I love to discuss philosophy, with some favorite topics being absurdism (Camus), consciousness, language (Wittgenstein), and morality.

My CV.

Papers