
Identifying Bias in Human and Automated Assessment of Research Credibility

August 26, 2022
12:00 PM – 1:00 PM

Elliewood Conference Room, 3 Elliewood Avenue

Automated tools are showing accuracy comparable to that of human reviewers in assessing confidence in research claims, such as predicting whether a finding is likely to replicate. Even so, these tools will inevitably reproduce biases in the scholarly literature that are associated with credibility assessment. For example, papers authored by women are, on average, cited less than papers authored by men. Automation approaches that attend to citation networks will therefore likely associate female authorship with lower credibility. The fact that algorithms are likely to do this is not a threat, however; it is an opportunity. Citation and other scholarly biases by social identity already exist and are discoverable only in retrospect and in the aggregate. Automated tools will make it possible to identify social biases in real time and help isolate their source. Feedback loops from automation to human judgment will provide opportunities to reduce bias in both.
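The mechanism described above can be sketched in miniature: if a credibility score leans on citation counts, and citation counts differ systematically by author group, the score inherits that gap, and comparing group-level scores surfaces the bias as it happens. The following is an illustrative sketch, not anything from the talk; the data, the scoring rule, and all function names are hypothetical.

```python
# Hypothetical sketch: a toy citation-based credibility score and a
# simple real-time check for association with a social-identity label.

from statistics import mean

# Synthetic papers: (author_group, citation_count). A citation gap
# between groups is built in to mimic the bias the abstract describes.
papers = [
    ("A", 120), ("A", 95), ("A", 140), ("A", 80),
    ("B", 60),  ("B", 45), ("B", 70),  ("B", 55),
]

def credibility_score(citations, max_citations):
    """Toy score: credibility proxied purely by relative citation count."""
    return citations / max_citations

def group_score_gap(papers):
    """Difference in mean score between groups -- a simple bias signal."""
    max_c = max(c for _, c in papers)
    scores = {}
    for group, citations in papers:
        scores.setdefault(group, []).append(credibility_score(citations, max_c))
    means = {g: mean(s) for g, s in scores.items()}
    return means["A"] - means["B"]

gap = group_score_gap(papers)
# A persistent nonzero gap flags that the scorer has absorbed the
# citation bias, prompting a review of the score's inputs.
print(f"group score gap: {gap:.2f}")
```

Because the check runs alongside scoring, the association is visible per batch of assessments rather than only in retrospective aggregate studies, which is the opportunity the abstract points to.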

Brian Nosek
Nosek co-developed the Implicit Association Test, a method that advanced research and public interest in implicit bias. He co-founded three non-profit organizations: Project Implicit, to advance research and education about implicit bias; the Society for the Improvement of Psychological Science, to improve the research culture in his home discipline; and the Center for Open Science (COS), to improve rigor, transparency, integrity, and reproducibility across research disciplines. Nosek is executive director of COS and a professor of psychology at the University of Virginia. His research and applied interests are to understand why people and systems produce behaviors that are contrary to intentions and values; to develop, implement, and evaluate solutions that align practices with values; and to improve research credibility and cultures to accelerate progress.