Banner Repeater talks.
'A throw of the dice will never abolish chance' was an exhibition at Banner Repeater from September to November 2016, a moving configuration that materialised in several forms throughout the exhibition period. It took Stéphane Mallarmé's text "Un coup de dés jamais n'abolira le hasard" - a throw of the dice will never abolish chance - as a site for considering new ways of thinking through the centuries-old puzzle of code, numbers and language. Material articulations, sited in the project space as parts of a puzzle, developed through the Thinking through the Block workshops and associated talks, and evolved into a digital artefact, found at x-fx.org, that includes text, audio, and visual data inscribed, ascribed, and described through the block.
Ramon Amaro very kindly agreed to talk to us about Machine Learning and the Politics of Data during the exhibition; the recording can be found above, with more details below.
Machine learning and the politics of data with Ramon Amaro
Ramon Amaro will be talking through calculus as a key moment in our cultural understanding of data, leading to further discussion of ethics in the application of various mathematical models in our data-driven society. There is a growing gap between the generation of datasets and our understanding of their potential uses. Data informs the conditions and long-standing interests by which our knowledges about social situations are understood; as Oscar Gandy (2009) suggests, 'most public decisions these days are made on the basis of some analysis of data'.
Ethically, the pervasiveness of data-driven devices and technologies aids in the delay of social and policy decisions about the usefulness of generated information, in favour of incomplete understandings of how learning and prediction methods can and do offer answers to social and political problems. These problems extend across a wide range of concerns, from accurate predictions of weapons targets and haptic responses to more mobile-friendly search engine results. Nonetheless, they are, by the very process of their design, susceptible to social consequences most frequently articulated as biases, segregations and other social restrictions. These concerns become particularly acute if sufficient attention is not given to the ethical issues surrounding machine learning activities. That the answers to these concerns are found in profiling and prediction models, which often themselves initiate political, industry and social actions without specific aim, speaks to the continuation of an intensity to categorise social information and social agents into consistencies of performance-based assessment. These logics are founded in partial ethical debates that necessitate further consideration of the social, technical and political outcomes informing the relationship between humans and machines, where the most prevalent context in which we engage with the machine is perhaps also the most taken for granted.
Ramon Amaro is Associate Lecturer in Digital Media: Critical Theory and Media Philosophy and a PhD researcher at the Digital Culture Unit at the Centre for Cultural Studies, Goldsmiths, University of London. Ramon has an advanced degree in Sociological Research and a BSc in Mechanical Engineering, with a background in Technology and Engineering Programs, Engineering Quality Design, and Sociological Research. Ramon has also been an Assistant Editor for Big Data & Society, a SAGE journal. His research interests are in machine learning, racialisations and difference in social modelling, logics and mathematical reason, and the philosophy of maths.