How discriminatory is artificial intelligence?




We are often unaware of it, but many decisions are already made by machines with the help of so-called "artificial intelligence". Algorithms decide who sees which advertisements, job postings and housing offers, and even whether someone is deemed creditworthy. Although algorithms do not judge by opinion or feeling, they can still discriminate.

From: Katharina Wysocka

As of: May 26th, 2020

A machine makes an objective decision, or so we would like to believe. But algorithms can discriminate. Studies have shown that people with a migration background are classified as less creditworthy, and that social platforms show them advertisements for less qualified jobs and poorer housing offers. Algorithms rate the recidivism risk of immigrant prisoners higher, and police computers are more likely to flag them as suspects.

Scientists at the German Research Center for Artificial Intelligence are also working on making artificial intelligence, or AI for short, fair. The decisive factor is the data the computers are trained with.

"If, so to speak, old white men decide which data is used for training, and certain parts of the population are then not taken into account, or not adequately, then the AI system contains the same prejudices, because it was trained with prejudiced data. These prejudices then simply carry over directly."

Prof. Philipp Slusallek, German Research Center for Artificial Intelligence

Algorithms are creating a new kind of data discrimination. Systems do not question the informative value of data; they only recognize patterns from the past and thus form correlations that can deliver discriminatory results.

Take creditworthiness prediction, for example: the machine does not examine the individual case, but abstract data based on the behavior of other people with similar characteristics, such as gender, school-leaving certificate, mother tongue or place of residence. Since people with a migration background often live in less affluent areas, this alone can lead to a poor credit rating and thus to discrimination.
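The mechanism described above is often called "proxy discrimination". A minimal toy sketch (with entirely hypothetical data and field names, not any real scoring system) shows how a rule learned purely from past approval rates per postal code reproduces historical bias, even though no protected attribute is ever used directly:

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (postal_code, was_approved).
# The postal codes here are invented; in practice, residence can act
# as a stand-in (proxy) for a migration background.
history = [
    ("10115", True), ("10115", True), ("10115", True), ("10115", False),
    ("12045", False), ("12045", False), ("12045", True), ("12045", False),
]

def learn_approval_rates(records):
    """'Learn' P(approved | postal_code) from past decisions."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for code, approved in records:
        totals[code] += 1
        if approved:
            approvals[code] += 1
    return {code: approvals[code] / totals[code] for code in totals}

def score(postal_code, rates):
    """Score a new applicant purely by how others in the same area fared."""
    return rates.get(postal_code, 0.5)  # unknown area: neutral score

rates = learn_approval_rates(history)
# Two otherwise identical applicants who differ only in postal code
# receive very different scores, because the model repeats the past:
print(score("10115", rates))  # 0.75
print(score("12045", rates))  # 0.25
```

The point of the sketch is that the model never "decides" to discriminate; it simply extrapolates a pattern from biased historical data, exactly as the article describes.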

Or in criminal investigations: automatic face recognition programs regularly misidentify non-white faces.

At the EU's external borders, automatic face recognition is even used to assess whether travelers are telling the truth. The project is largely funded by the European Commission, but it has been criticized for reinforcing racist prejudices.

Sarah Chander investigates the discriminatory potential of AI for the NGO "European Digital Rights" in Brussels. It becomes dangerous, she says, because we often believe that decisions made by algorithms are always correct.

"If we do not see the need to question these decisions, we are less likely to challenge the mistakes these systems will make. These systems make decisions based on human-generated data, so they will also make mistakes, for example when predicting where crimes will take place or whether a person has an illness."

Sarah Chander, European Digital Rights

The EU is still working on rules for AI. Prof. Philipp Slusallek is a member of the European "High-Level Expert Group on Artificial Intelligence". In his view, not only technical solutions are required, but also a clear stance by society against racism.

"We need philosophers for the ethical aspects, legal scholars to cover the legal aspects; psychologists are very important because it is also about exactly how the interaction between humans and machines happens. But then, of course, you also need the affected people from across society, who may not be part of every project, but who help to define the criteria that must be considered in such projects."

Prof. Philipp Slusallek, European High-Level Expert Group on Artificial Intelligence

The big problem: those affected often do not even know that they have been discriminated against, because decisions made by an algorithm are incomprehensible from the outside.

"Artificial intelligence systems are completely non-transparent, from their development to their use; many call this the 'black box' of AI. Just as we as citizens do not know how they work, governments or private companies often do not know which factors go into them either. This makes it much harder, first, to even find out that AI was involved in decisions that affect our lives; second, to understand that decision; and third, to counteract it with our laws and the existing legal system."

Sarah Chander, European Digital Rights

But AI is not an unstoppable development that we simply have to put up with. Politics and society must clearly decide which decisions may be made by machines and which may not, and ensure that those who would be most affected by discrimination also have a say in the criteria.