From September 12th to February 24th, the Osservatorio of Fondazione Prada will host Training Humans, a major photographic exhibition that explores the theme of artificial intelligence from a new perspective. It all started with a two-year research project conducted by Kate Crawford, a New York University professor and co-founder of the AI Now Institute, the first university research institute in the world devoted to the social implications of the increasingly pervasive use of artificial intelligence, and by Trevor Paglen, an artist engaged with current political issues such as mass surveillance and data collection, whose work has often involved scientists and human rights activists.
From this collaboration came the idea of telling the story of the images used to train artificial intelligence systems: the repertoires of photographs from which such systems learn to observe the world, that is, to classify it according to certain parameters. Their social relevance becomes particularly evident in the case of photographs of people. The two artists have in fact focused on how technological systems represent, interpret, and codify human beings, through the data supplied to them during training.
The images presented in the exhibition are those used by government agencies and research laboratories over the last fifty years. They range from the first laboratory experiments funded by the CIA in 1963 to the computer vision systems developed by the United States Department of Defense in the 1990s, which produced a dataset for face recognition research: FERET (Face Recognition Technology), a collection of portraits of more than a thousand people, about fourteen thousand images in all, intended as a "standard benchmark" that would allow algorithms to be developed against a common database of images.

The scenario was radically altered by the spread of the internet. Researchers stopped using government-owned images, such as FBI mug shots of deceased inmates, and began drawing on millions of publicly available photos from social media platforms, without asking permission from either the photographers or the people portrayed. These images are paired with labels assigned by human annotators, whether people working in laboratories or workers on Amazon Mechanical Turk, Amazon's crowdsourcing service. The result is a classification system based on race, gender, age, expressed emotions and, at times, character traits, one that seems to perpetuate demographic segmentation of post-colonial origin and carries heavy social and political implications.
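To make the labeling process concrete, the following is a minimal, purely illustrative sketch in Python. The field names and records are hypothetical and do not reproduce the schema of FERET or any real dataset; they only show how crowdsourced annotation records encode external judgments about people, and how aggregating them produces the demographic segmentation described above.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One hypothetical annotation of a photograph by one human labeler."""
    image_id: str          # identifier of the photograph
    annotator: str         # e.g. a lab worker or a crowd worker ID
    age_range: str         # coarse bucket such as "25-34"
    perceived_gender: str  # a label imposed by the annotator, not self-reported
    perceived_race: str    # likewise an external judgment, not a fact

def label_distribution(annotations, field):
    """Count how often each value of a label appears across annotations.

    Aggregations like this turn individual annotator judgments into the
    population-level categories a model is then trained to reproduce.
    """
    counts = {}
    for a in annotations:
        value = getattr(a, field)
        counts[value] = counts.get(value, 0) + 1
    return counts

# Hypothetical records: two workers labeling the same image can disagree.
records = [
    Annotation("img_001", "worker_a", "25-34", "female", "white"),
    Annotation("img_001", "worker_b", "35-44", "female", "white"),
    Annotation("img_002", "worker_a", "18-24", "male", "black"),
]
print(label_distribution(records, "perceived_gender"))  # → {'female': 2, 'male': 1}
```

Note that nothing in the pipeline checks whether the categories themselves are meaningful: the taxonomy is fixed before a single image is labeled, which is precisely the design choice the exhibition interrogates.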
This racial subdivision, Crawford and Paglen observe, is overlaid with an emotional classification based on the controversial theories of psychologist Paul Ekman, which reduce human emotions to six universal emotional states. Artificial intelligence systems trained to recognize facial expressions can rely on these categories to assess the mental health, reliability, and propensity to crime of the people being scrutinized, for example during job interviews. Since the algorithms are by no means immune to underlying prejudices, the boundaries between science, history, politics, and ideology become blurred. The analysis of the classification criteria behind the photos presented in the exhibition thus becomes an investigation into the asymmetries of power involved in the control of artificial intelligence: a necessary starting point for rethinking such systems, and for truly understanding the filter through which we are observed.
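The reduction the text describes can be made visible in a few lines. This is a hedged sketch, not any real system's code: it assumes only that an emotion-recognition model outputs one score per Ekman category, and the scores here are hard-coded stand-ins for what a model would produce from a face image.

```python
# The six "universal" emotional states of Ekman's theory.
EKMAN_CATEGORIES = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_emotion(scores):
    """Collapse a six-way score vector into a single label (argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EKMAN_CATEGORIES[best]

# A face scoring 0.40 "happiness" and 0.35 "surprise" is flattened into just
# "happiness": the reduction to one of six states is built into the design,
# before any downstream judgment about the person is made.
print(classify_emotion([0.05, 0.02, 0.08, 0.40, 0.10, 0.35]))  # → happiness
```

Whatever inferences a hiring or screening tool then draws from that single label inherit both the narrowness of the six-category taxonomy and the biases of the training labels.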