Daniel Scalena
Hi! I am Daniel, a third-year PhD student in a joint double-degree program between the 🇮🇹 University of Milano - Bicocca and the 🇳🇱 University of Groningen, working on interpretability, fairness, and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.
🔬 My research focuses on using interpretability as a tool to make generative models safer, more reliable, and less toxic, thereby extending and improving their real-world applications.
📜 You might find these interesting: leveraging model uncertainty to dynamically control computation and improve reasoning efficiency in large models, and personalizing model outputs with lightweight steering methods in machine translation.
Outside of work, I enjoy the usual suspects (travel, photography, …) plus an unhealthy interest in big cities — the usable ones. Convinced most problems improve with better design.
news
| Date | News |
|---|---|
| Jan 14, 2026 | 🎉 Steering Large Language Models for Machine Translation Personalization has been accepted to the main conference at EACL 2026. See you in Morocco 🇲🇦! |
| Oct 14, 2025 | 🗣️ Want to save some tokens AND improve performance? 📝 New paper: EAGER: Entropy-Aware GEneRation for Adaptive Inference-Time Scaling. |