Daniel Scalena
Milan, Italy
Hi! I am Daniel, a second-year double-degree PhD student at the 🇮🇹 University of Milano-Bicocca and the 🇳🇱 University of Groningen, working on the interpretability, fairness and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.
My research focuses on using interpretability as a tool to make generative models safer, more reliable and less toxic, with the goal of extending and improving their real-world applications.
In my spare time I take pictures and `echo "from NL import infrastructure" > Milan.py`.
news
| Date | News |
|---|---|
| Oct 14, 2025 | 🗣️ Want to save some tokens AND improve performance? 📝 New paper: EAGER: Entropy-Aware GEneRation for Adaptive Inference-Time Scaling. |
| May 23, 2025 | 📢 New paper out! Steering Large Language Models for Machine Translation Personalization, small thread about it on X or BSky. |
| Oct 02, 2024 | 📜 Multi-property Steering paper accepted to BlackBoxNLP 2024 (@ EMNLP 2024) and 📜 A gentle push funziona benissimo accepted @ the CLiC-it conference! 🎉 |