Q&A: Aaron Martin on the Risks International Organizations Face With AI and Data Protection

September 30, 2024
Aaron Martin is an assistant professor of data science and media studies at the University of Virginia. (Photo by Cody Huff)

Policymakers across the globe are grappling with the vast implications of advances in artificial intelligence technology, including how these tools will be used by international organizations.

Aaron Martin, an assistant professor at the University of Virginia’s School of Data Science, was recently invited to share his insights on this critical topic during a panel discussion at the International Organisations Workshop on Data Protection, which was held at the World Bank in Washington and co-hosted by the European Data Protection Supervisor. This year’s gathering marked the first time the conference took place outside of Europe.

Martin, who joined the School of Data Science faculty in 2023 with a joint appointment in UVA’s Department of Media Studies, specializes in data governance and how international bodies establish transnational policy, particularly as it relates to technology.

At the panel discussion, Martin shared his thoughts on the challenges global institutions face with data protection and why it is vital that they work to address them.

He recently chatted about his experience at the conference and his views on some of the many facets of this rapidly evolving global issue. Additional highlights from the gathering have been published by the European Data Protection Supervisor.

Q. Your panel focused on AI use by international organizations. Broadly speaking, what is your impression of how widely used AI systems are by these agencies? 

Suffice it to say, international organizations — including those that were represented at the workshop in D.C. — are very diverse in terms of their missions and mandates.  

These range from U.N. agencies with development or humanitarian missions to organizations like NATO or Interpol, which facilitate security and law enforcement cooperation internationally. Each of them is exploring the use of AI in different ways, and my impression is that, currently, their approach is a cautious one, which is encouraging.

A key feature of IOs is that they enjoy what are known as legal privileges and immunities — these help ensure their independence and effective functioning. What this means in practice is that national laws and regulations for data protection (like the General Data Protection Regulation) and AI (like the EU AI Act) won’t apply to IOs as they do to government bodies or commercial firms.

This becomes a real governance issue — how do we ensure that IOs are processing data and using new technologies responsibly? Most of these organizations have established policies for privacy and data protection, but AI introduces a new set of challenges that they need to grapple with. The point of this workshop is for the organizations to work together to develop good guidance and practices for data and, increasingly, AI. 

Q. The discussion in part focused on the risks of these systems. What should international organizations prioritize when it comes to mitigating the risks of AI to the many constituencies affected by their work? 

Recently, I’ve been struck by news reports about the challenges AI companies face in terms of their access to new sources of quality data. There’s growing anxiety that AI models will become less useful and less reliable if they aren’t fed with more and more data — these models are “hungry,” as one of my co-panelists described it. There are fears that AI models will begin to collapse if they’re trained on too much synthetic (i.e., fake) or AI-generated data, so AI companies are scrambling for new data partners.

At the workshop, I focused my intervention on raising awareness about the varied risks of IOs oversharing data with AI companies. IOs have incredibly rich and diverse data, for example, about development indicators, global conflict, and humanitarian affairs.

They also have data from parts of the world that are very underrepresented online, which is where AI companies typically go to scrape data. IOs need to think carefully about how to ensure the confidentiality of this data and take steps to protect it from misuse and toxic AI business models.

Panelists at the International Organisations Workshop on Data Protection. (Photo provided by conference organizers)

Q. International organizations, as you mentioned, are not a monolith, and the audience for your panel comprised representatives from diverse groups. To what extent should various types of international organizations be thinking about these issues differently based on their mission?

There will be some common challenges — every IO has a human resources department, for example, so enterprise applications exist in IOs just as they do in any other organization. And many if not most IOs have important budgetary considerations that will shape and possibly limit their use of AI tools, including generative AI.

What I’m particularly interested in, including in my research, is how the use of AI by humanitarian IOs may impact the recipients of aid — so-called beneficiaries. Should IOs rely exclusively on AI to make decisions about who receives food aid, for example? What are the risks of doing so? These are hard questions that require engagement with a range of stakeholders, including those directly impacted by these decisions. 

Q. You’ve done a lot of work looking at technology’s impact on historically marginalized communities, particularly refugees. When it comes specifically to humanitarian organizations and AI, what are your biggest concerns? 

Humanitarian organizations are generally being pretty thoughtful about their approach to AI. “Do no digital harm” is their mantra, which means they’re very sensitive to the potential and actual harms that AI might inflict on refugees and others impacted by conflict and crisis.  

I do worry about what’s been referred to as “AI snake oil” in the aid sector, and organizations being sold technology that simply can’t deliver on the hype. It’s important that we continue engaging with these organizations to help them understand the possibilities and the risks. 

Q. What were some of your main takeaways from the other speakers on your panel and any others you heard at the conference?

Well, the discussion was held under the Chatham House Rule, so I ought to be careful here, but I was quite impressed by the strategic thinking that IOs are undertaking to incorporate AI into their organizations. I’ve attended other conferences where it feels like folks are mindlessly fishing for AI use cases, and that’s usually the wrong approach.

Another panelist explained how his organization is using AI to document human rights abuses around the world, which is a fascinating application and speaks to the potential for AI to have a positive impact in the world.