Mona Sloane is new to the University of Virginia, but if you’ve followed the public debate around artificial intelligence in recent years, hers is likely a familiar voice. A frequent commentator and public speaker on AI issues, Sloane recently joined the UVA faculty as an assistant professor in the School of Data Science and the Department of Media Studies.
She discussed what brought her to UVA from New York University, what she hopes to see with AI in the years ahead, her work running the Sloane Lab, and more.
Q. What drew you to UVA, and what drew you to the School of Data Science?
A. What drew me to UVA in the first place were the really interesting scholars who were here, as well as the new leadership of the College of Arts & Sciences, Dean Christa Acampora, and the dean of the School of Data Science, Phil Bourne – both of whom have really great track records and a very convincing, holistic vision for their respective schools. And those visions really map onto the work that I do, which is truly interdisciplinary and looks at the various concerns, problems, and issues that show up at the intersection of artificial intelligence and society. And then I was invited a couple of times to come and have a look at the campus and the people, and everybody has been absolutely amazing – so I made the big move from the big city to Charlottesville.
Q. You’re a sociologist. Explain to someone who doesn’t know that much about the data science field how it and sociology intersect.
A. That’s a great question. So, sociology and data science intersect in so many ways and are intimately connected, because sociology is really about understanding the bigger dynamics in society. There are various methods by which that can be accomplished – and all of those are about collecting and analyzing data in order to understand society. That really is where sociology connects to data science. I should say that, as a qualitative researcher, my understanding of data science very much includes qualitative data, not just quantitative data. So, for me, that’s really where it comes together: at that intersection of trying to understand the causes of things.
Q. As you travel around discussing artificial intelligence, what are the main questions/concerns you hear from people?
A. The number one concern is bias and harm – concern about how these ubiquitous, effectively black-boxed systems can be biased without us noticing, and how we can experience harm without knowing where it comes from: denied loan applications, high insurance premiums, the loss of benefits, all sorts of things. That is the number one concern.
Second, in the US we are getting ready for the next election, and there is a lot of concern around manipulation, social media, and fake news. That is a really good segue into the third big concern I hear very often, which is about generative AI – generative AI in natural language processing specifically, so your ChatGPT and the explosion of that type of AI into everyday life, into professional practice, into private practice – and a real absence of research into what the potential issues with it could be.
It’s going to be a technology that will, I think, really raise the profile of social scientists, because it is really, really social. In order to understand what it does, good or bad, you have to consider context. And so that’s going to be exciting for folks like me who do interdisciplinary and social science research. The fourth concern, which I don’t hear enough from the public but like to flag, is the complicity of AI in the climate emergency – in other words, the huge amount of energy and resources that AI and related technologies require, and the absence of awareness and discourse about that.
Q. Looking into your crystal ball, where do you see the conversation around AI five years from now?
A. I get the crystal ball question all the time, and I answer with a disclaimer that my crystal ball answer is also an “I wish for” answer. So, I anticipate, and wish for, deeper and more expanded AI literacy among the public and our legislators. I hope for more literacy, and for standard questions that get asked when AI harms occur. Not just ethics – there has been a lot of ethics washing going on, so I actually don’t like to use that term very much. I like to talk about AI accountability, AI governance, and harms. I think those will become standard features of our conversations.
I know for sure that we will see a massive shift from gestures and voluntary AI accountability processes toward compliance, because the EU AI Act will kick in. It has been watered down a lot, which is why it is being critiqued, but it is very explicit about its risk framework and about what it requires from tech companies, pre- and post-market deployment, in terms of impact assessments and audits. So we will see those two AI accountability techniques – impact assessments and audits – become standard features of compliance.
I also hope that the U.S. will get back into the AI regulation game; it is really lagging behind at the moment. And, thinking about the scope of data science and what we do here, I do think the school will grow, and that demand for data science will grow. The vision we have for data science here, as a holistic endeavor – I hope that will become the standard.
Q. Tell me about the Sloane Lab and what you’re trying to achieve.
A. The Sloane Lab is basically a research endeavor that looks to establish and expand social science leadership in AI. The social sciences, and the humanities for that matter, are marginalized in the AI space, but they have a significant amount to contribute to it. My work with the Sloane Lab looks at building that out theoretically and conceptually, but also in a very applied way. I run research projects focused on applied AI audits – for example, in hiring. Currently, I also work on motion capture technology and auditing it, a project that is going to grow. And I work on AI transparency – questions around how we can create meaningful AI transparency that is specific to professions.
Q. Finally, tell us something about yourself that might surprise people about your background.
A. I trained professionally as a ballerina from the age of four to the age of 19 – so, 15 years of dancing every day.
And I love fast cars – you can add that.