Gianluca Guadagni on the Opportunities and Future of Open-Source AI
As the technological revolution brought about by artificial intelligence continues to take shape, researchers across academia are exploring the many implications of AI’s evolution and expansion across society.
Last November, Gianluca Guadagni, an associate professor of data science at the University of Virginia, joined forces with more than 20 colleagues from universities around the world to assess the risks and opportunities associated with open-source generative AI.
Meta, the parent company of Facebook, provided support and logistical assistance for the group’s efforts but did not otherwise influence the editorial direction of their work.
In a research paper presented at this summer’s prestigious International Conference on Machine Learning, the group argues for the development of responsible, open-source generative AI models.
With an academic background that encompasses mathematics, physics, and engineering, Guadagni has long focused his research on better understanding complex systems.
He recently discussed the many issues involved with open-source AI and whether he is optimistic about how this evolving technology will define the future.
On how the group came together and the approach they took …
“We got the chance to get together at a Meta location in London. We started discussing what is relevant and what we thought was critical in AI.
“And so, we came up with this idea of creating a taxonomy of AI models, mostly in view of how open they are. We were leaning toward the idea that open-source would be beneficial for large language models, and so that was the key idea.
“We list five different levels of openness. We came up with the main models that are available, and for each of them we define how open or closed they are.
“The beauty of this group is that I think I’m the only mathematician. There are several from Oxford, and they are engineers, but there are colleagues who are not necessarily in math-intense sciences. There are colleagues who are interested in bias, in ethics — they are interested in the impact on society.”
On the group’s projected evolution of AI …
“We split the development and evolution of AI into a framework of three steps. We call it the short term, the midterm, and the long term. The short term is now, when all these models are popping up.
“The midterm will be a time when things have kind of settled down. The use will be more widespread, and people will start using it more often in their daily lives — not everybody, but I’m thinking about companies. So, they’re going to integrate this stuff into their production systems. We think this is about three to five years out.
“The next step would be the long term, when there is a jump in capabilities. Long term is defined by technological advances that will create dramatically greater AI capabilities.”
On the benefits of open-source AI models …
“The key point is, most of the fear that people have about these models isn’t really there. When people get something new, they immediately start thinking, ‘Oh, this could kill us all.’ If you think about it, we already have a lot of stuff that could kill us all. But it doesn’t.
“So, we are saying, let’s keep it open so that everybody can use it. This would allow, for instance, people with very few resources to keep playing with these models and come up with new ideas — not necessarily those working at Google. Smart kids at UVA, if it’s open-source, could come up with something that is very effective.
“For equity, we think it is very important because it gives access to countries that don’t have the infrastructure and data centers to run it from scratch. Because if it’s a closed system, you are paying a lot of money to use it. Or you have to build your own, and to build your own is really a significant effort.”
On regulations for open AI and addressing its risks …
“The main risk is malevolent use by bad actors. In the long term, obviously, they could create serious consequences. It turns out that this can happen with closed-source models as well. Our statement is that the differential in benefits and risks between open-source and closed-source AI favors open-source models.
“If we implement a framework now to regulate these models, then even when we jump to the long term, hopefully we will be ready to contain whatever there is because we already have a system in place. If you leave Google, Meta, and Microsoft in charge of it, you will never know how effective they are, because it’s very hard to test a model even if you have hundreds of programmers. When you use an open-source model, you have hundreds of thousands of people looking at it. We’re not saying that risks don’t exist. We are saying that open-source would help control them, and understand them, and possibly mitigate them.
“AI doesn’t have to be all open-source. We’re not saying that. What we think is that having a strong, open-source alternative should be part of the game. And people shouldn’t be afraid of that.”
On whether he’s optimistic about the future …
“Personally, I am. I think that something new is always a challenge. I see it as an amazing opportunity for everybody. I understand concerns of any kind, but that has happened many times before. The example I usually give is the first industrial revolution, when they started to use steam engines. That revolutionized society, but at a big cost. A lot of people lost their jobs, yet society changed in a dramatic way, with a new economic system and new kinds of jobs. Countries and governments will need to manage the transition with specific measures.
“Once we understand how AI works, we may be able to fix biases, and we may be able to tweak it in such a way that it is more effective at doing what it’s supposed to do.”