Speakers:

Gaymon Bennett (Arizona State University)

Title: Optimization in the Flesh: Gurus, CRISPR, and the Contradictions of California Algorithms

Abstract:

In this talk, I propose that the biases, exclusions, and harms currently being amplified by algorithmic systems are likely to get exponentially worse as biotech seeks to embed them “in our flesh.” Offering a second look at the seemingly playful stories of hippies-founding-tech that have long been a stock feature of Silicon Valley origin myths, I suggest that the Northern Californian entrepreneurial classes continue to be caught by an ethos of spiritual self-improvement that exchanges critical politics and serious ethical struggle for faith in the inevitable beneficence of innovation. That faith constitutes a major obstacle to the kind of moral realism needed to deal responsibly with the darkness that California algorithms have let loose in the world—a darkness only deepening as biotech comes to embody the dangers of the data-centric economy.

Bio:

Gaymon Bennett is an associate professor of religion, science, and technology and leads the Humane Tech initiative at Arizona State University. His work on the enchantments and entrapments of innovation traces the legacies of West Coast spirituality through its entanglements with science and technology.

Luis Felipe R. Murillo (University of Virginia)

Title: Between anthropology and computing: the question of ambiguity

Abstract:

Natural language processing (NLP) is one of the long-established areas of artificial intelligence research. As such, it is a constitutive part of the grand promises of AI, with the application of large-scale data analytics for information retrieval, machine translation, and “natural language understanding” for personal assistants and question-answering systems. “Ambiguity,” however, persists as one of the hardest problems. In computational linguistics, this perceived "issue" concerning natural languages has been addressed by several research programs with a myriad of formal and statistical approaches over the past 60 years. In this presentation, I describe an early application of NLP for the study of political discourse in order to examine an intersection between anthropology and computing. The goal is to describe the limits of NLP so as to recast questions of meaning and interpretation, charting new terrain for sociotechnical studies of computational analytics.

Bio:

Luis F. R. Murillo is a research associate at the School of Data Science and director of "librelab," a community laboratory for experimental work with Free and Open Source technologies. His research interests include anthropology, computing, and science and technology studies.

Shreeharsh Kelkar (UC Berkeley)

Title: Reinventing Expertise in the Age of Platforms: The Case of Data Science.

Abstract:

“Data scientist,” so says the Harvard Business Review, is “the sexiest job of the 21st century.” What accounts for the prestige that this new professional mode of knowledge production now enjoys across institutions ranging from non-profits to research labs, corporations, hospitals, and schools? Based on a two-year ethnographic study of teams developing Massive Open Online Courses (MOOCs), I show how the power of data scientists hinges on their ability to situate other professionals with relevant expertise simultaneously as collaborators (or competitors), as “users,” and as “domain experts,” and to switch strategically between these three registers. Data scientists thus dictate the terms on which other experts participate in shared projects, incorporating potential rivals and effacing their own power. I argue that “data science” exemplifies a broader, emerging form of expertise that I call “platform expertise,” which is contingent on the instrumentality of digital platforms. Platform expertise emerged in the workplaces of Silicon Valley, but it is being put to use by reformers who seek to reinvent their own institutions; it is thus the vehicle through which Silicon Valley norms around "innovative" work travel into other institutions, thereby setting up debates about the values and ultimate goals of these institutions.

Bio:

Shreeharsh Kelkar is a Lecturer in Interdisciplinary Studies at UC Berkeley. He studies, using historical and ethnographic methods, how the emerging computing infrastructures of humans, algorithms, and data are transforming work and expertise. His in-progress manuscript, Reinventing Expertise: Technology Reformers and the Platformization of Higher Education, describes how technology reformers are drawing on norms from Silicon Valley to reconfigure what it means to be an educational expert. You can read more about his work at http://shreeharshkelkar.net.

Samuel Lengen (University of Virginia)

Title: Social Credit, Trustworthiness, and Algorithmic Governance in China

Abstract:  

The Chinese government plans to establish a national social credit system. While critics have described this plan as a dystopian vision of control, more recent scholarship has challenged this view by pointing to the relative popularity of the project in China as well as to the existence of rating systems in the West. This paper negotiates these positions by analyzing the social credit system in light of a historical effort to modernize China’s population. It argues that social credit represents a technopolitical shift from quality to trustworthiness in moral governance. While the system has significant political implications, I contend that these must be considered in light of deep-running social, cultural, and economic divisions in contemporary China, which are the result of decades of economic reform and marketization. As social credit encodes highly complex issues of trust, insecurity, and personal worth, the paper asks: How will algorithmically generated social credit scores reconfigure moral personhood? And what can the algorithmic assessment of “trustworthiness” tell us about China’s moral governance?

Bio:

Samuel Lengen is a research associate at the Center for Data Ethics and Justice in association with the School of Data Science at the University of Virginia. His research explores the ethics of data and digital infrastructures with a focus on gender, social media, and government policy in China.

Erin McElroy (AI Now)

Title: Property as Data, Data as Property: From Racial Dispossession to Housing Justice

Abstract: 

Following the 2008 recession and subprime mortgage disaster, both data and real estate have become commodified in novel ways. In this talk, I theorize the relationship between the two, looking at new iterations of their historical entanglement as property. In particular, I focus on the emergence of the “proptech” industry, or the post-2008 merger of financial technology, real estate speculation, venture capital, and property management. This industry has sought to “disrupt” real estate with new technologies, ranging from artificial intelligence, machine learning, and big data collection to data brokerage, data mining, biometric surveillance, and predictive modeling. Many of these platforms and tools have been rolled out without tenant consent or adequate testing, often forcing poor and working-class tenants of color to become lab rats in experiments of property accumulation. Meanwhile, many proptech companies operating in the Global North outsource labor to the Global South in order to maximize profits, leading to new configurations of data, labor, race, and real estate. While it can be said that these global configurations are new, the propertizing of data and real estate has been co-constitutive of numerous colonial and imperial formations, sitting upon a thick palimpsest of race and dispossession. Thus, here I consider how the proptech industry recodes historic property relations, while I also trace its current contours and utilization of algorithms, new data sets, and more. At the same time, I consider possibilities of expropriating both data and real estate from the proptech industry, looking at housing and land justice tactics. What might data justice advocates learn from housing struggles, and how might the bringing together of these spaces abet emancipatory spatial futures?

Bio: 

Erin McElroy is a postdoctoral researcher at New York University’s interdisciplinary AI Now Institute, researching the digital platforms used by landlords in order to surveil and racialize tenants. Erin is also cofounder of the Anti-Eviction Mapping Project, a data visualization, critical cartography, and multi-media collective documenting dispossession and resistance struggles upon gentrifying landscapes. Erin earned a doctoral degree in Feminist Studies from the University of California, Santa Cruz, with a focus on the politics of space, race, and technology in Romania and Silicon Valley. More recently, Erin co-launched the Radical Housing Journal to foster housing justice transnationally. 

Emanuel Moss (Data and Society)

Title: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics

Abstract:

Inside Silicon Valley technology companies, a new role has recently emerged: that of the “ethics owner,” tasked with managing the portfolio of ethical concerns that exist inside the industry. Partly in response to a public that has grown wary of the intentions of tech companies, partly due to tech workers pressuring their employers to “do the right thing,” and partly in anticipation of potentially forthcoming government regulation, these ethics owners have been charged with translating such concerns into internal practices that can be integrated into the engineering and product development processes that have long organized work inside Silicon Valley. Ethics owners' work is informed by existing professional ethics frameworks like biomedical research ethics and corporate social responsibility, but must also address ethical concerns unique to data-driven technology development. Similarly, they are leveraging an existing repertoire of organizational practices for engineering and software development to produce new ways of managing ethical concerns. Based on a series of interviews with these ethics owners and extensive fieldwork in the tech sector, this talk identifies a set of logics that pervade work in the Silicon Valley tech industry and that threaten to constrain the capacity of those inside the industry to contend with the ethical challenges they face. These logics also contribute to a set of irresolvable tensions that accompany the work of managing ethics practices across an organization, and they have profound consequences for what can be thought of as ethics inside the tech industry, what can be expected from efforts to govern data-driven technologies, and how to develop new structures of accountability for the tech industry.

Bio:

Emanuel Moss is a doctoral candidate in cultural anthropology at the CUNY Graduate Center, where he is studying the work of machine learning from an ethnographic perspective and the role of data scientists as producers of knowledge. He is also a researcher for the AI on the Ground Initiative at the Data & Society Research Institute. He is particularly interested in how data science, machine learning, and artificial intelligence are shaped by organizational, economic, and ethical prerogatives.

Caitlin Wylie (University of Virginia)

Title: Defining data problems: The Case of Open Government Data in Charlottesville

Abstract:

Making a city’s data publicly available online can serve the democratic ideal of transparency. Advocates argue that open civic data equips stakeholders to achieve such lofty goals as supervising their government, identifying social problems, making evidence-based arguments for reform and social justice, and designing tailored solutions and research projects. As a result of this variety of uses, open data brings together several stakeholder groups, such as residents, elected officials and government staff, and engineering researchers. How these groups understand, interpret, and apply the same datasets offers a valuable comparison of their values, beliefs about knowledge, and conceptions of public good. Based on participant observation with engineers, policymakers, and residents during the first two years of an open data portal in Charlottesville, Virginia, USA, I argue that a fundamental factor in data-based knowledge construction is how each group conceptualizes a problem. Their mindsets and practices of problem definition direct how they think about and work with open data. Understanding these groups’ different epistemic approaches to problem definition is crucial for identifying factors that influence whether and how users succeed in transforming open data into knowledge and policy.

Bio:

Caitlin Wylie is an assistant professor of Science, Technology, and Society in the University of Virginia’s School of Engineering and Applied Science. She studies how non-scientists, such as technicians, students, and volunteers, contribute to research in science and engineering. She teaches undergraduate engineers how to assess the social and ethical dimensions of technology, so that they can design safe, equitable, and successful sociotechnical systems.

Jarrett Zigon (University of Virginia)

Title: Data as Givenness: Relational Ethics and Justice in a Data-Centric Society

Abstract:

In our contemporary data-driven society, it has become commonplace to demand that data-centric practices be carried out ethically. What is too often missing from this demand, however, is the very question of the ethics appropriate to data. This is so, perhaps, because prior to the question of ethics, we have neglected to ask the even more fundamental question of data. That is, we have not yet asked: what is data? For it is precisely our response to this more fundamental question that leads us to our ethical starting point. This is so because all ethics begin with ontology. And the question of data is fundamentally an ontological question. Therefore, if we are to ask the question of data ethics, we must begin by asking the ontological question of data. This lecture is a first step in thinking this question of data, and the kind of ethics appropriate to it.

This lecture will unfold as a query. First, I will ask how data is conceived by what I call the data-centric industry and its practices, and sketch one version of the so-called ethical response – that is, the call for the right to data privacy – to this conception of data. What I hope to show is that the so-called ethical response of privacy is not ethics but rather economics masquerading as ethics, and this is so in large part because of the ontological foundations of the data-centric industry, as well as of those who critique it. As a response, I ask how shifting the way in which we conceive of data might allow for an appropriate (and actual) ethical response. Invoking the Latin origin of data – datum: something given as gift – I draw from Jean-Luc Marion’s phenomenology of the given to sketch a relational ethics of the gift. Such a notion of relational ethics, I contend, is not only more appropriate to the givenness of data but also to the very idea of ethics as such. That is, to ethics as an ongoing process of living together well, the political consequence of which is justice.

Bio:

Jarrett Zigon is the William & Linda Porterfield Chair in Biomedical Ethics and Professor of Anthropology at the University of Virginia. As the Founding Director of the Center for Data Ethics and Justice, Jarrett oversees the training and research program in partnership with the School of Data Science.