Understanding the Age of AI: Key Takeaways from UVA Data Conference
Artificial intelligence experts shared their insights on how AI may prove to be the defining issue of our time during a full-day conference organized by the University of Virginia.
Held in Arlington on Dec. 5, UVA’s Conference on Leadership in Business, Data, and Intelligence also explored the myriad ethical questions that AI tools and their deployment can raise.
Co-hosted by the University’s Darden School of Business and School of Data Science, the gathering featured leaders from diverse sectors who shared their thoughts about how AI will reshape society in the years ahead.
Here are some key takeaways from the conference:
Building comfort and trust with AI is critical
Companies everywhere are now wondering how, and to what extent, they should incorporate AI into their operations. In the first panel discussion of the day, which focused on private sector perspectives on AI, panelists discussed the factors businesses must consider as they think about how to adopt AI tools.
“If you’re building an AI system, the number one thing you’re going to encounter is people who are afraid,” said Andrew Gamino-Cheong, co-founder and chief technology officer of Trustible AI. “And then it’s going to be on you to help build that trust.”
A critical component of establishing trust is ensuring that systems are safe – even when, and perhaps especially when, they’re operating in unanticipated ways.
“It’s not just, how do I design this system safely, securely?” said Jamie Jones, vice president of field services and technical partnerships at GitHub. “But how do I design it so that it’s safe and secure, even when it’s not doing what I expect it to do?”
Even with safeguards in place, though, Jones cautioned against releasing any AI system prematurely.
“We need to be very careful about what things we are actually pushing to market, shipping, and going live with that may not exactly meet what we were trying to do,” he said. “Because, again, the blast radius of AI can become so large.”
The notion of responsibility was a common refrain. In another panel, Alex Pascal, a senior fellow with Harvard’s Ash Center for Democratic Governance and Innovation and a former Biden administration official, urged the private sector to assume that mantle.
“With great power comes great responsibility,” he said. “And right now, that responsibility for the next, I would say, at least a year to two years is in the hands of the private sector.”
How AI is defined affects how it’s governed
AI ethics and governance will be critical areas for policymakers and industry in the near and long term. Defining exactly what is meant by artificial intelligence will be important as guidelines and regulations are developed – and it’s not a settled matter.
“I would say the jury is still out,” said Renée Cummings, an assistant professor of the practice in data science at UVA, explaining that the definition tends to evolve as new technologies are released.
“I think everyone is defining AI in the way that works best for their organization, their agency, or the things they want to do,” she added.
Ron Keesing, director of artificial intelligence and machine learning at Leidos, agreed that clarity on AI’s definition remains elusive.
“I think the definition still remains very fuzzy, and it’s going to be more and more of a problem as we try to introduce governance,” he said, explaining that this is because firms and individuals will prefer their applications not be subject to regulation and will, consequently, argue their work does not constitute AI.
Given the complexities and risks involved with AI, would it be preferable to simply ignore the technology altogether? Marc Ruggiano, director of the Darden–School of Data Science Collaboratory for Applied Data Science at UVA, posed this hypothetical to panelists.
“What would we be missing out on?” he asked.
Cummings noted that many of the red flags raised by AI are not new. “The challenge with AI is the amplification,” she said. “The challenge with AI is the data.” However, she added, “seeing the extraordinary things that AI can do – it makes you realize that we’ve got to work with these tough questions.”
Jepson Taylor, chief AI strategist for Dataiku, said that passing on AI would be the historical equivalent of ignoring the printing press or the internet – only bigger.
“If you decide to skip it, for the people that are not in this room that decide not to skip it, they will run circles around all of us,” he said.
We’re all AI experts
In a session on how AI impacts society, Mona Sloane, an assistant professor of data science and media studies at UVA, put it directly: “We’re all AI experts now, which means we should all have a voice in the conversation.”
Sloane explained how everyone is constantly interacting with AI systems in all facets of life, even if it’s not always visible. “We actually all are building up really important knowledges by way of this experience around AI,” she said.
The common denominator of these AI systems is “they are designed to facilitate decision-making in order to save time, resources, and increase productivity,” she added.
Sloane addressed the notion that AI technology will serve as a proxy for human decisions. “AI does not necessarily replace those,” she said, in reference to personal decision-making. “It shifts how they are being done.”
Sloane illustrated this by describing a research area she has explored, the use of AI in professional recruiting, which can take many forms, from screening applicants to crafting more inclusive job advertisements. She argued that “in order to avoid human bias and machine bias, we need to think about transparency – but in a contextual way.”
This is a period of ‘uncertainty’
In the conference’s closing keynote, attendees heard from Adam Ruttenberg, a partner with the law firm Cooley, who focused on the legal and regulatory environment for AI. Early in his remarks, he issued a disclaimer to the audience.
“When we talk about the tech regulatory landscape for AI, we are guessing,” he said. Why? “It’s changing every single day.”
To illustrate the delicate moment society has reached with generative AI, Ruttenberg cited a test his law firm ran: it prompted ChatGPT to generate 10 cases for a legal scenario in which, to the firm’s knowledge, only one case actually applied. The tool produced all 10 anyway, because that was precisely what it had been asked to do.
“So where does that leave us?” he asked. “It leaves us in a period of fear, uncertainty, and doubt.”
Ruttenberg described how the recent White House executive order – which includes guidelines on transparency, rules on bias, and privacy protections – currently represents the closest thing in the United States to a governing legal framework. However, the time required to implement its provisions could prove problematic.
“The truth of the matter is, the technology will have changed by the time the regulations come out, because the pace of technology development, especially in AI, is huge,” he said.
He went on to explain that, currently, significant uncertainty exists around what will be covered by copyright and patent law, and that a lot will likely happen over the next 12 to 18 months to better define the lay of the land.
So, given this landscape, what should industry, developers, and regulators focus on?
“In the short term, in this period of uncertainty, we have to treat generative AI and AI generally as a tool,” Ruttenberg said.
“As human beings and responsible business owners,” he added, “we need to make sure that we’re using that tool in a responsible way – which means we need to understand how it works, we need to understand its capabilities and its limitations, and we have to be responsible for what it does.”
Looking further down the road, Ruttenberg offered this advice: “All we can do is navigate our best, use the best amount of human oversight, and know that whatever we think we know today will change, and it’s going to change fast.”
And it’s a moment like no other
The discussions throughout the day were bookended by the leaders of the host schools: Phil Bourne, founding dean of the School of Data Science, and Jeanne Liedtka, interim dean of the Darden School.
Kicking off the conference in the morning, Bourne underscored the historic and unprecedented nature of this new AI era, saying we’re in a “Prometheus moment.”
“I don’t think there’s any doubt, as we see it sitting in our respective schools at UVA, that there’s something really game-changing going on right now,” he said.
Bourne added: “I’ve been around academia for a long time, and in government, and I don’t think there’s been quite a moment like this in my whole career.”
Liedtka, in closing the conference, acknowledged that discussions around AI, for now, may ultimately produce more questions than answers, given the complexity of the issues and challenges.
“Having the courage to ask the hard, tough questions is much better than spending your time answering the obvious, easy ones,” she said.
Liedtka also echoed Bourne’s sentiments in summing up the conference and the collaboration between the two schools: “I hope that this is just the first of many great conversations that we have together on the subject that will probably define the time we live in.”