Chirag Agarwal is an assistant professor of data science and leads the Aikyam Lab, which develops trustworthy machine learning frameworks that go beyond training models for specific downstream tasks to also satisfy trustworthiness properties such as explainability, fairness, and robustness.
Before joining UVA, he was a postdoctoral research fellow at Harvard University. He completed his Ph.D. in electrical and computer engineering at the University of Illinois at Chicago and his bachelor's degree in electronics and communication. His Ph.D. thesis was on the "Robustness and Explainability of Deep Neural Networks," and his research spans topics in trustworthy machine learning, such as explainability, fairness, robustness, privacy, and transferability estimation, and their intersection in the age of large-scale models. He developed a first-of-its-kind, large-scale, in-depth study that supports systematic, reproducible, and efficient evaluations of post hoc explanation methods for structured and unstructured data, with the aim of understanding algorithmic decision-making on tasks ranging from bail decisions to loan credit recommendations.
Agarwal has published in top-tier machine learning and computer vision conferences (NeurIPS, ICML, ICLR, UAI, AISTATS, CVPR, SIGIR, ACCV), as well as in top journals covering datasets (Nature Scientific Data) and health care (the Journal of Clinical Sleep Medicine and the Cardiovascular Digital Health Journal). His research has received Spotlight and Oral presentations at NeurIPS, ICML, CVPR, and ICIP, and his work on trustworthy machine learning has been supported by industry grants from Adobe, Microsoft, and Google.
Towards a unified framework for fair and stable graph representation learning (2021). UAI.
OpenXAI: Towards a transparent evaluation of model explanations (2022). NeurIPS.
Evaluating explainability for graph neural networks (2023). Nature Scientific Data.
Explaining image classifiers by removing input features using generative models (2020). ACCV.
Probing GNN explainers: A rigorous theoretical and empirical analysis of GNN explanation methods (2022). AISTATS.
DeAR: Debiasing vision-language models with additive residuals (2023). CVPR.
GNNDelete: A general strategy for unlearning in graph neural networks (2023). ICLR.
Explaining RL decisions with trajectories (2023). ICLR.