Daniel Scalena

Milan, Italy

Hi! I am Daniel, a second-year PhD student (double degree) at the 🇮🇹 University of Milano-Bicocca and the 🇳🇱 University of Groningen, working on the interpretability, fairness, and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.

My research focuses on using interpretability as a tool to make generative models safer, more reliable, and less toxic, in order to extend and improve their real-world applications.

In my spare time I take pictures and echo "from NL import infrastructure" > Milan.py.

news

May 23, 2025 📢 New paper out! Steering Large Language Models for Machine Translation Personalization, with a short thread about it on X or BSky.
Oct 02, 2024 📜 Multi-property Steering paper accepted to BlackboxNLP 2024 (@EMNLP 2024) and 📜 A Gentle Push Funziona Benissimo accepted at the CLiC-it 2024 conference! 🎉

selected publications

  1. Steering Large Language Models for Machine Translation Personalization
    Daniel Scalena*, Gabriele Sarti*, Arianna Bisazza, and 2 more authors
    2025
  2. A Gentle Push Funziona Benissimo: Making Instructed Models in Italian via Contrastive Activation Steering
    Daniel Scalena, Elisabetta Fersini, and Malvina Nissim
    In Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024), Dec 2024
  3. Multi-property Steering of Large Language Models with Dynamic Activation Composition
    Daniel Scalena, Gabriele Sarti, and Malvina Nissim
    In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Nov 2024