Behrooz Razeghi
Ph.D., Computer Science
M.Sc., Applied Mathematics
M.Sc., Electrical Engineering
B.Sc., Electrical Engineering
Welcome!
I am currently a Postdoctoral Fellow at Harvard University, where I began my appointment in October 2024. Under the mentorship of Professor Flavio P. Calmon, my research focuses on Trust in Generative AI.
From December 2022 to August 2024, I was a Postdoctoral Researcher in the Biometric Security & Privacy group at the Idiap Research Institute, working under the supervision of Professor Sébastien Marcel. I received my Ph.D. in Computer Science from the University of Geneva in 2022, where I was advised by Professor Slava Voloshynovskiy in the Stochastic Information Processing (SIP) group.
During my Ph.D., I spent six months as a Visiting Research Fellow at Harvard University, collaborating with Professor Flavio P. Calmon, and another six months as a Visiting Research Scholar at Imperial College London, working with Professor Deniz Gündüz.
Prior to my Ph.D., I earned an M.Sc. in Mathematics (Numerical Analysis) from Iran University of Science and Technology in 2017, and an M.Sc. in Electrical Engineering (Communications Systems) from Ferdowsi University of Mashhad in 2014, advised by Professor Touraj Nikazad and Professor Ghosheh Abed Hodtani, respectively.
My research interests lie at the intersection of machine learning, information theory, and signal processing, with a focus on data privacy, representation learning, algorithmic fairness, and federated learning. My research addresses the following key questions within these topics:
How can we design and implement robust privacy-preserving mechanisms to protect sensitive data in applications such as biometric systems and digital health?
How can we enhance representation learning techniques to improve the accuracy, fairness, interpretability, privacy, robustness, and generalizability of machine learning models?
What methodologies can effectively identify, quantify, and mitigate biases in both human decisions and model predictions?
How can we optimize federated learning models to balance privacy, computational efficiency, and scalability in distributed systems?
These questions have significant implications for the development of trustworthy AI technologies in domains such as biometrics, healthcare, and medical imaging.