Daniel Scalena

Milan, Italy

Hi! I am Daniel, a third-year (double) PhD student at the 🇮🇹 University of Milano-Bicocca and the 🇳🇱 University of Groningen, working on the interpretability, fairness, and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.

🔬 My research focuses on using interpretability as a tool to make generative models safer, more reliable, and less toxic, extending and improving their real-world applications.

📜 You might find interesting: leveraging model uncertainty to dynamically control computation and improve reasoning efficiency in large models, and personalizing model outputs with lightweight steering methods for machine translation.

Outside of work, I enjoy the usual suspects (travel, photography, …) plus an unhealthy interest in big cities — the usable ones. Convinced most problems improve with better design.

news

Jan 14, 2026 🎉 Steering Large Language Models for Machine Translation Personalization has been accepted to the EACL 2026 main conference. See you in Morocco 🇲🇦!
Oct 14, 2025 🗣️ Want to save some tokens AND improve performance? 📝 New paper: EAGER: Entropy-Aware GEneRation for Adaptive Inference-Time Scaling.

selected publications

  1. EAGER: Entropy-Aware GEneRation for Adaptive Inference-Time Scaling
    Daniel Scalena, Leonidas Zotos, Elisabetta Fersini, and 2 more authors
    2025
  2. Steering Large Language Models for Machine Translation Personalization
    Daniel Scalena*, Gabriele Sarti*, Arianna Bisazza, and 2 more authors
    2025
  3. Multi-property Steering of Large Language Models with Dynamic Activation Composition
    Daniel Scalena, Gabriele Sarti, and Malvina Nissim
    In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Nov 2024