Health & Wellbeing Projects
AI for Translational Health (Research Mentor: Don Brown): The integrated Translational Health Research Institute of Virginia (iTHRIV) is a federally funded collaboration between UVA, Virginia Tech, Carilion Clinic, and Inova that facilitates clinical and translational research across the Commonwealth. Together with partner institutions across the country, iTHRIV researchers have participated in the creation of the National COVID Cohort Collaborative, which now contains records from more than 5 million patients and is helping to answer questions critical to understanding the pandemic. Student interns will learn about electronic health records and data privacy, data engineering and integration, data mining and big data analytics, and the use of artificial intelligence tools to discover patterns of interest in the data.
Opioid Data (Research Mentor: Heman Shakeri): Using the All of Us Research Program, a unique database of more than 1 million Americans that closely mirrors US demographics, we study the effects of chronic diseases and opioid use to identify factors that may create barriers to accessing care.
Dynamical Coupling of Physical and Social Infrastructures: Evaluating the Impacts of Social Capital on Access to Safe Well Water (Research Mentor: Jundong Li): This SAI research project examines the availability of potable drinking water to individuals and households in settings where private wells are the predominant source of water for residents. The project uses data on the mobility of cell phone users to characterize the social assistance that residents call upon, applying methods that account for the unequal representation of different groups in such datasets. The analysis also considers other variables that may explain variation in water quality, such as demographic and socioeconomic factors, and places a high priority on sharing important findings with stakeholders, including extension services and health departments. Multiple, complementary datasets are leveraged to examine how advantageous positions in social networks may contribute to better water quality in private wells, particularly in geographic settings affected by recent flooding. Once these networks are constructed, measures of network position are used to predict variation in the contamination of private wells. The algorithmic approaches developed for graph neural network analysis will have broader applications in research that seeks to account for biases in the representativeness of large archival datasets, including biases that disadvantage vulnerable populations.
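To give a sense of the prediction step described above, here is a minimal, hedged sketch of turning network-position measures into predictive features. Everything in it is synthetic and hypothetical: the graph stands in for a mobility-derived social-assistance network, the contamination labels are simulated, and the choice of centrality measures and classifier is illustrative only.

```python
# Minimal sketch (synthetic data): network-position features used to predict
# private-well contamination. A real analysis would build the graph from
# mobility records and use measured water-quality labels.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for a social-assistance network among households with private wells.
G = nx.erdos_renyi_graph(n=300, p=0.03, seed=0)

# Network-position measures as predictors.
features = np.column_stack([
    list(nx.degree_centrality(G).values()),
    list(nx.closeness_centrality(G).values()),
    list(nx.betweenness_centrality(G).values()),
])

# Synthetic labels: better-connected households are slightly less likely to
# have contaminated wells (an illustrative assumption, not a finding).
risk = 1 / (1 + np.exp(5 * (features[:, 0] - features[:, 0].mean())))
labels = rng.binomial(1, risk)

model = LogisticRegression(class_weight="balanced", max_iter=1000)
auc = cross_val_score(model, features, labels, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

The same feature-construction pattern carries over when the simple classifier is replaced by a graph neural network that learns representations directly from the network structure.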
Privacy, Surveillance & Fairness Projects
Privacy and security issues in recent generative AI models (Research Mentor: Tianhao Wang): Recent generative AI models such as Stable Diffusion and ChatGPT are powerful, but a comprehensive analysis of their privacy and security issues is still needed. To prepare for this project, students will first learn to use computing clusters and run large models. The project will then begin with a review of the literature on privacy and security concerns in AI models, focusing on generative models. Next, students will design experiments to evaluate the privacy and security risks associated with these models, such as measuring the extent to which sensitive information can be inferred from model-generated outputs or assessing the potential for model manipulation and adversarial attacks. The results of these experiments will inform recommendations for improving the privacy and security of generative AI models. Overall, the project gives students an opportunity to explore cutting-edge research topics in AI while developing critical thinking and research skills.
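As one concrete illustration of the kind of experiment described above, the sketch below computes a simple loss-based membership signal for a language model. It is an assumption-laden example only: GPT-2 stands in for the much larger models named in the project, the candidate strings are made up, and a real evaluation would calibrate scores against reference data and established attack baselines.

```python
# Minimal sketch: a loss-based membership signal for a small language model.
# Unusually low loss on a candidate string can hint that similar text was
# seen during training; this is only a starting point, not a full attack.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def per_sample_loss(text: str) -> float:
    """Average token-level cross-entropy of the model on the given text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

candidates = [
    "the quick brown fox jumps over the lazy dog",    # common phrase
    "zqv gkr plm 83xx unlikely random token string",  # improbable string
]
for text in candidates:
    print(f"{per_sample_loss(text):.2f}  {text}")
```

A fuller study would compare such scores between known-member and known-non-member samples, and would extend the idea to image generators such as Stable Diffusion with model-appropriate scoring functions.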
Mapping Surveillance Tech (Research Mentor: Renée Cummings): With computational power, access to Big Data, and the support of Big Tech, law enforcement is building an algorithmic arsenal of AI firepower. AI policing is destined to become a superpower. But algorithmic abuse, through AI policing tools and tactics, could destroy lives in real time. Our data is not only being monetized as it is shared with a broad range of law enforcement agencies; it is also being weaponized against us. This brutal algorithmic force, exercised through AI policing and AI surveillance, must be reined in; the overreach is scary and deadly. Yet most citizens are unaware of the power and pervasiveness of AI policing and the ubiquitous algorithmic information-gathering strategies of law enforcement. AI has upgraded the undercover agent: algorithms are the new detectives, undercover officers, and informants, taking covert policing to an unimaginable level. But the legal hurdles are way too low and the ethical guardrails way too weak for the encroachment and penetration of AI policing. We are being algorithmically cornered by the police in this Big Data Policing reality. This project culminates in the design of an interactive risk rating system for surveillance technologies that measures the intensity of algorithmic force deployed by the police. It strives to uphold due process, protect civil rights, and empower citizens with a tool that demands accountability and transparency in policing, reimagining policing within a data justice and social justice framework.
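To make the idea of a risk rating system concrete, here is a toy, hedged sketch of a weighted scoring scheme. The criteria, weights, and example ratings are invented for illustration and are not the project's actual rubric, which students would develop and justify themselves.

```python
# Toy sketch of a weighted risk rating for a surveillance technology.
# Criteria, weights, and the example entry are hypothetical placeholders.
CRITERIA_WEIGHTS = {          # each criterion rated 1 (low concern) .. 5 (high concern)
    "data_scope": 0.30,       # breadth of personal data collected
    "opacity": 0.25,          # lack of public documentation or disclosure
    "oversight_gaps": 0.25,   # absence of legal and ethical guardrails
    "error_harm": 0.20,       # severity of harm from false positives
}

def risk_score(ratings: dict) -> float:
    """Weighted average of criterion ratings, mapped to a 0-100 scale."""
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    return round(total / 5 * 100, 1)

example = {"data_scope": 5, "opacity": 4, "oversight_gaps": 4, "error_harm": 3}
print("risk score:", risk_score(example))  # higher = more algorithmic force
```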
Facial Recognition (Research Mentor: Sheng Li): The objective of our project is to enhance the fairness and robustness of computer vision algorithms, for example by mitigating demographic bias in face recognition.
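A hedged illustration of how such demographic bias might be quantified: the sketch below compares false match rates across two made-up groups using synthetic similarity scores. Real work would use actual face embeddings, identity pairs, and established fairness metrics.

```python
# Minimal sketch (synthetic data): comparing false match rates across
# demographic groups at a fixed similarity threshold. Group names and
# score distributions are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.5

# Synthetic impostor-pair similarity scores, deliberately skewed between
# groups to show how a disparity would surface in the metric.
scores = {
    "group_a": rng.normal(0.30, 0.12, 5000),
    "group_b": rng.normal(0.40, 0.12, 5000),
}

fmr = {g: float(np.mean(s > THRESHOLD)) for g, s in scores.items()}
for group, rate in fmr.items():
    print(f"{group}: false match rate = {rate:.3f}")
print("max/min FMR ratio:", round(max(fmr.values()) / min(fmr.values()), 2))
```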
Urban Data Equity Projects
The Urban Data Equity Lab at the University of Virginia is a space to catalyze projects that interrogate how data is shaping our cities and communities – and whose stories are not being told in the process. Anchored in a commitment to advance responsible data science and understand its limitations, the Lab examines data equity in contemporary urban areas, regardless of their size. Our approach is based on transnational collaborative networks pursuing data justice, questioning skewed power dynamics, and prioritizing digital rights and the public interest. (Research Mentors: Jess Reia, Siri Russell, Claudia Scholz, Rebecca Schmidt)