The University of Virginia's Darden-Data Science Collaboratory for Applied Data Science in Business (DCADS) is sponsoring a research fellowship to explore the challenges that bias and misinformation pose to business and society, and to consider the opportunities afforded by applying data and technology to understand, identify, and mitigate their harmful impacts.
This topic is timely given recent controversies over COVID-19, fake news, and election interference, and the growing debate over how and to what extent businesses should play a more active role in addressing the sources and accelerants at work in the information domain.
In the modern business context, using data and technology to identify biased or inaccurate information, prevent its creation, limit its propagation, and ameliorate its harms at meaningful scale affords unique benefits for individuals, businesses, and society. Specifically, data and technology can enable:
- individuals to recognize biased or inaccurate information and react appropriately to minimize its spread and mitigate its negative consequences,
- businesses to operate in ways that minimize the production and dissemination of biased or inaccurate information through management policy, process, and practice,
- society to sustain the open, productive, and peaceful exchange of ideas and information necessary to nourish and advance a free, just, and equitable society.
The Fellowship in Bias and Misinformation in Business is intended to support multidisciplinary research efforts by scholars and/or practitioners from a variety of disciplines, including data science, business, journalism, communication, law, and media, that address the problem of bias and misinformation.
The work may cover topics such as data science methods for detecting misinformation, the impact of bias and misinformation on business and the economy, and ways to educate people about avoiding false information. Several representative examples of topics of interest are listed below. These examples may be incorporated in DCADS Fellowship proposals, but they are offered primarily as guidance and are not intended to limit the scope or focus of proposed research in any way.
Examples include:
Interventions to counter misinformation
Explore how we can best measure the relative benefits and consequences of interventions to counter misinformation or provide access to authoritative content.
Information processing on social media platforms
Explore the social, psychological, and cognitive variables involved in the consumption of “grey area” content experiences – sensational, provocative, divisive, hateful, misleading, polarizing, or biased information – received and produced on social media platforms.
Violence and incitement, hateful and/or graphic content
Examine how people and organizations are leveraging social media to organize and potentially influence intergroup relations in their constituencies.
Misinformation across formats
Investigate the role of non-textual media (images, videos, audio, etc.) in the effectiveness of misinformation and in people's engagement with it. This area spans basic multimedia such as infographics, memes, and audio, as well as more complex video and emerging technological formats.
Trust, legitimacy, and information quality
Examine social media users’ exposure to, interaction with, and understanding of information quality, especially their attitudes toward and interpretations of quality, trust, and bias in the information they encounter.
Coordinated harm and inauthentic behavior
Examine information practices and flows across multiple communication technologies or media.
Digital literacy, demographics, and misinformation
Explore the relation between digital literacy and vulnerability to misinformation in communication technologies.