Former Top White House Technology Official Talks AI at UVA
Effective governance of artificial intelligence requires both a “bottom-up” and “top-down” set of policies, norms, and practices, said Alondra Nelson, who led efforts to develop the Biden administration’s Blueprint for an AI Bill of Rights, during an appearance at the University of Virginia’s School of Data Science.
Nelson served as deputy assistant to the president and acting director of the White House Office of Science and Technology Policy when it crafted a blueprint of principles aimed at protecting the rights of the American public amid growing deployment of AI technology.
She left the White House in February 2023 to rejoin the prestigious Institute for Advanced Study in Princeton, New Jersey, where she serves as the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab.
During a wide-ranging conversation, Nelson discussed a host of issues surrounding AI, including the need for a multifaceted approach to governing its use.
“If we are going to get the use and applications of AI to a place that it’s mitigating risk and maximally beneficial for the most people, we need to have a suite of tools and levers,” she said.
This toolbox, Nelson said, should include new laws and regulations from government but also standards and norms for how AI is used and discussed.
“What is a high-capability AI model? What is low? How do we think about those things? How is everyone using the same language, both in the United States and abroad? These are international standards. And so those are not quite laws — those are agreed-upon definitions.”
She added: “And then there are norms. We’ve got these new tools and systems — how are we going to use them?”
As for the role of government, Nelson said that while the executive order on AI issued by President Biden last October was “extraordinary in a lot of ways,” federal policymakers need to do more.
“The United States needs a bit more top down right now,” Nelson said, referring to systematic AI regulation.
The conversation with Nelson was the second event this year organized by the Futures Initiative, a pan-University project aimed at helping UVA prepare for the next decade in higher education.
The initiative is a complement to the strategic vision laid out by President Jim Ryan in the University’s 2030 Plan and was developed by Phil Bourne, founding dean of the School of Data Science; Christa Acampora, dean of the University’s College and Graduate School of Arts & Sciences; and Ken Ono, a mathematics professor and STEM advisor to the provost who also holds an appointment with the School of Data Science.
Melody Barnes, founding executive director of UVA’s Karsh Institute of Democracy, who served as director of the White House Domestic Policy Council under President Barack Obama, and Yael Grushka-Cockayne, professor of business administration at the Darden School of Business and a professor of data science by courtesy, moderated the discussion with Nelson.
Last month, Ryan hosted the first signature event of the Futures Initiative, a panel discussion featuring three national leaders in higher education: Michael Crow, president of Arizona State University; Harriet Nembhard, president of Harvey Mudd College in Claremont, Calif.; and Santa Ono, president of the University of Michigan.
The group discussed a wide range of issues critical to the future of colleges and universities, from the need to prioritize innovation to why institutional diversity remains important.
Nelson also addressed the future of education during her appearance at UVA, specifically the impact of AI on teachers and students.
She said that she wished OpenAI had waited and consulted with educators before releasing ChatGPT in late 2022, noting the tumultuous impact the technology had in classrooms: “This whole thing just did not have to happen.”
“You could have imagined a partnership with teachers where they had different kinds of tools, teaching modules — things that brought it into the world in a way that was less, I think, adversarial and less traumatic,” Nelson said.
With AI technology continuing to expand its presence in schools, she said she’s concerned about the risk of data surveillance and tracking, particularly for students in K-12 classrooms.
“All of their biometric data is being tracked,” Nelson said.
As for how colleges and universities can help shape AI policy, she said higher education has a “huge” role to play.
“There are scores, hundreds of academic research questions around these tools and systems,” Nelson said, particularly, she added, because the developers of this technology sometimes don’t offer clear explanations about how it should be used.
“The thing that drives me crazy is creators of these tools saying, ‘I don’t know. We made them. We have no idea how they work,'” she said.
She added: “I just think that is an unacceptable answer.”
Toward the end of the event, Nelson was asked about her advice to students as they think about their studies and professional aspirations.
While those who want to study coding and computer science should do so, she said, her main message to students is to “study what you want to study” and not to feel constrained by the perception, common over the past 10-15 years, that learning how to code is a professional necessity.
“I hope that this moment of generative AI, for all of its challenges, is also opening up all of the things that we need to work on,” she said, including philosophical questions around new technology and those related to literary theory, communications, and media studies.
“There are huge social science questions, and obviously there are a lot of science and research questions, so my hope for students is that this feels like a moment of broadening out.”