Q&A: Rethinking the Inevitability of AI with Mar Hicks

Mar Hicks is an associate professor of data science at UVA.

Artificial intelligence and its potential to reshape the world in the years ahead will be one of the defining debates of our time.

During an online conference hosted by the University of Virginia this summer, experts from a broad spectrum of disciplines submitted research papers and shared their insights about the development of AI and what history tells us about the social and environmental impacts of technological change. A follow-up conference will be held virtually in December.

Mar Hicks, an associate professor of data science at UVA, organized the event along with Erik Linstrum, a history professor at the University. UVA’s Environmental Institute, the University’s Department of History, and the Office of Equity and Inclusion at the School of Data Science all supported their efforts.  

Hicks shared highlights and key takeaways from the wide-ranging discussions during a recent chat. 

Q. How did the idea for this conference come about, and how did you arrive at the overarching theme of “rethinking the inevitability of AI”?  

In 2023, there was a huge push not just to talk about generative AI but to use generative AI. We were all supposed to be beta testers, only they didn’t call it a beta. They said, “This is the next great thing, and you all should start using it immediately.”  

And that message wasn’t coming only from the companies deploying these technologies. It was also coming from school districts to K-12 teachers, and it was coming from university administrators. The messaging was: now students and faculty have to either use, or be conversant with the use of, generative AI.

I felt, and a lot of folks felt, especially in the humanities, that we skipped over critical conversations regarding adoption of these technologies.

At the same time, there were news stories coming out that were incredibly concerning. Microsoft started talking about developing nuclear reactors to power their forays into AI. The energy usage of a lot of these major tech companies started shooting up, and they started either missing or, in fact, publicly going back on their climate goals because they said, “AI is the wave of the future.” So, after years and years of the tech industry trying to be more green and do things with carbon offsets and reduce the power consumption of their data centers, suddenly things were going in the wrong direction.  

And so that was why this particular angle for the conference came about, because of this concern that we were really taking a step backward as we were trying to take a step forward technologically, which is so often the issue when we’re talking about new technologies. And we pitched it to the Environmental Institute here at UVA and asked if they would fund it, and they did, so that was hugely important. 

Q. Can you talk about the process of choosing the topics for the seven panel discussions and what your priorities were in setting that agenda?  

So, one of the ways we set the agenda for the panels was partly to think about the major issues we hoped the conference would address and partly to be led by the papers that we received. And so natural themes started to emerge.  

One of the things that was interesting to me, and a little surprising and really positive, was that the collection of really excellent papers was a lot more international than I assumed it would be. A lot of times when we’re talking about these issues, we’re understandably talking about the United States, and we’re talking about what Silicon Valley is doing, because in a lot of ways they’re leading the conversation. But many of these papers were looking at not just international impacts but also international action in the realm of energy consumption, AI, and the technologies that undergird AI, like cloud technologies, data centers, and so on.  

Q. Mél Hogan, an associate professor in film and media at Queen’s University in Ontario, delivered the keynote address on “the pain of datafication.” What were some of your main takeaways from her remarks?  

Dr. Hogan’s keynote was really interesting because it talked about the embodiment of these problems. The main thrust of her talk was that this process of datafying everything, of making everything and everybody into data, is its own kind of issue — and it’s not necessarily something that we should take for granted.  

The talk was really asking us to contend with the fact that this isn’t a neutral process. When you systematize everything so as to strip people down to what is most useful for computers, for the people who wield particular computerized tools, and for certain sectors of our economy and society, that’s not something that helps or hurts everybody equally. This turn to datafying everything is not going to come without cost.

Lauren Bridges, a new professor at UVA, was the discussant for the keynote, and she is terrific. She works on local and regional issues of energy and water consumption regarding data centers.  

Q. What were some of the highlights for you from other sessions?  

One presentation that was a real standout to me as a historian of technology was by Purdue professor Zachary Loeb. He talked about what we take for granted in terms of what goes right with computing, and he used the history of Y2K to draw some transferable lessons for today.  

A lot of folks think, “Oh, Y2K was nothing, right? It was a media construction that was overhyped.” He talked about how that was not the case at all. The reason it turned out to be “nothing” was because legions of programmers worked really hard throughout the latter half of the 1990s to ensure things would not break down. It’s frustrating for him and for a lot of historians to see folks say, “Don’t worry too much. These things will all work out in the end.” Well, they won’t unless we’ve fixed them.

Another paper that I thought was really great was from UVA professor Ali Fard. He gave a presentation on the materiality of cloud computing, really digging into the specifics of how the cloud has been built in the real world. The reason I think that’s important is that it impresses upon us that there are so many problems and issues that are constantly being solved. We shouldn’t take for granted that this all works and will continue to work. It’s an ongoing problem-solving process.

The last paper I’ll mention was from Ksenia Tatarchenko of Singapore Management University. She gave a really great presentation on technocracy and Soviet cybernetics. It not only told us more about the history but also got us thinking about the ways we may be silently buying into technocratic ideals right now without explicitly naming them as such, and about the way technology is leading governance — much as certain other countries in the past decided on a very strong, top-down model of development. In the U.S., we like to think that we’re not doing that at all, so it’s interesting to see some of the echoes of that history. 

Q. As you reflect on this conference, what is your hope for what might come of these discussions? 

One of the real benefits of doing an online conference is it can be more international. Network building is the key to a lot of conferences. And I think we did help build some interesting networks.  

Another thing that came out of it is that so many people got in touch to say, “We’re so glad you’re doing a conference on rethinking the inevitability of AI because we feel like people aren’t talking about that as much as they should be.” Back when we put out this call, it seemed like AI technologies were going full steam ahead no matter what, even though more recently we’ve been hearing many people say that the AI “bubble” may be in the process of bursting. And so, I think that it’s important to put things out there that can alter the discourse. We’re still in a position where we should think about this development and not just uncritically accept it.  

The other thing that’s going to come out of this conference is a second day: because there were so many terrific people who wanted to present their work, we decided to hold it later in the fall, on December 6. I’m looking forward to that.