Kuba Piwowar

SWPS University of Social Sciences and Humanities.

I am a sociologist and data analyst at Google in Warsaw, where I lead the Insights Team, responsible for providing Polish businesses with knowledge about digital consumers and market trends from both horizontal and vertical perspectives. I am currently working on my PhD at SWPS University of Social Sciences and Humanities, where I research algorithmic bias and fairness and the ethics of A.I. from a cultural studies perspective. I am a member of the Polish Society of Market and Opinion Researchers. I graduated in sociology from SWPS University of Social Sciences and Humanities, hold postgraduate degrees in gender studies from the Polish Academy of Sciences and in art history from Collegium Civitas, and completed the Marketing Academy at the Wharton School of the University of Pennsylvania.

Abstract:
“Sources and ethical implications of bias in A.I.: overview and critical analysis of current research”

In recent years, the discussion about artificial intelligence has become pervasive, bringing to the table practitioners and theorists from different industries and disciplines, yet it has remained far from culturally diverse. The tech industry’s trouble with the disproportionate share of men working on data, algorithms and A.I. has resulted in diversity and inclusion programmes being introduced at Silicon Valley companies, addressing gender, race and sexual orientation, among other topics. A.I. algorithms fuelled by ‘digital footprints’ - data collected from billions of signals every day - face the threat of bias resulting from the interests of those who gather, analyse and utilise these data, as well as from mirroring the simple but pervasive line dividing stable and growth economies from developing countries with scarce technology penetration and almost non-existent internet access.

The purpose of this presentation is to explore the current state of research on the ethical implications of artificial intelligence and algorithms from the perspective of the social sciences and humanities, and to map the current discussion.

First, I identify the most important actors in the field of A.I., in both business and academia. Then, by analysing research papers on the topic, as well as businesses’ efforts to address it, I place the discussion around ethical A.I. in the broader context of bias in decision making, looking at selected sources from which it can stem. Whether it comes from incomplete datasets, spurious assumptions or the cultural background that forms an ever-present context for algorithm designers, bias has a pervasive impact on societies. It can amplify inequalities (as researchers such as Virginia Eubanks and Cathy O’Neil have shown) or hand governments dangerous weapons to oppress minorities (as Stanford researchers Michal Kosinski and Yilun Wang pointed out in their study of deep neural networks being more accurate than humans at detecting sexual orientation). In the presentation I analyse the sources of bias and their implications, pointing out significant gaps in the research conducted so far, as well as the limitations of these studies as perceived by their authors.
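To make one of these mechanisms concrete, the sketch below shows how a decision rule fitted to historically biased labels reproduces the very bias it was trained on. It is a minimal, hypothetical Python illustration only: the groups, rates and the toy “model” are invented for this example and are not drawn from the presentation or the studies cited above.

```python
# Hypothetical sketch: two groups are equally qualified, but historical
# decisions (the training labels) favoured group A. Any model fit to
# these labels inherits and perpetuates the gap.

import random

random.seed(42)

def make_person():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5          # equal base rates by design
    if qualified:
        # Past discrimination encoded in the labels: qualified people
        # from group B were approved far less often.
        approved = random.random() < (0.9 if group == "A" else 0.5)
    else:
        approved = random.random() < 0.1
    return group, qualified, approved

data = [make_person() for _ in range(100_000)]

# Per-group summary: what a model trained on "approved" would learn.
for g in ("A", "B"):
    rows = [(q, a) for grp, q, a in data if grp == g]
    qualified_rate = sum(q for q, _ in rows) / len(rows)
    approval_rate = sum(a for _, a in rows) / len(rows)
    print(f"group {g}: share qualified {qualified_rate:.2f}, "
          f"historical approval rate {approval_rate:.2f}")
```

Running it prints roughly equal qualification rates (about 0.50 for both groups) but unequal approval rates (about 0.50 for A versus 0.30 for B): the inequality lives entirely in the labels, which is precisely why “accurate” models trained on such data can still be unfair.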

Juxtaposing the business and academic takes on ethics in big data and autonomous systems gives a picture of what “fairness” looks like and what is perceived as “ethical”, but also sheds light on underexplored areas. To fill the gap, I review the efforts of selected academic centres, including research and educational projects, as well as the discussion around the role of the humanities in developing autonomous systems, building a case for the idea that only interdisciplinary efforts can make artificial intelligence a successful project.

The proposed analysis will be a starting point for research to be included in my PhD dissertation, aimed at understanding computer scientists’, designers’, product managers’, data practitioners’ and analysts’ take on bias in data, ways of mitigating it and, eventually, its effect on algorithms and autonomous machines, including, but not limited to, artificial intelligence.