How do I monitor my technical development

Whether Elon Musk can actually lead his company Tesla to success remains to be proven. What he can certainly do is stoke fear of artificial intelligence, or AI for short. The technology is "the greatest danger to mankind" and "much more dangerous than nuclear weapons," he repeats at every opportunity. Musk delivers crisp quotes, the media are happy about the headlines, and readers are left scared of AI.

This alarmism annoys many researchers who have been working on the topic for decades. Australian professor Toby Walsh wrote in the magazine Wired that he could hardly open a newspaper without seeing Musk warning of an AI that would trigger a world war. He does not think we have to fear what people like Musk call the singularity: the point at which machines begin to develop on their own.

Musk is right about one thing: AI research needs to be regulated. But not because robots would otherwise seize power, rather because companies and states rely too heavily on the supposed intelligence of machines. The Terminator will remain science fiction, but without rules, dystopia threatens. The following examples show how technological progress can backfire.

Machines fool people

Google boss Sundar Pichai played a recording of a phone call at a conference in early May. A female voice could be heard reserving a table at a restaurant. The audience on site cheered the seemingly banal conversation. On social media, reactions were evenly split: one half was enthusiastic, the other horrified. "This is terrible and so obviously wrong," wrote sociologist Zeynep Tufekci.

For the first time, an audience witnessed an AI mimicking human speech so perfectly that the restaurant employee on the other end of the line did not notice she was talking to a machine.

Pichai asserted several times that the call had taken place exactly as played. Doubts have since arisen: Google may have spliced the recording together or edited it afterwards. Either way, the debate the call sparked matters more. Must an AI identify itself when communicating directly with people?

The question goes far beyond a Google assistant that can make appointments. What happens when grandchild-scam fraudsters use software that automatically calls retirees en masse? Do bots on social networks need to be labeled so that users understand they are chatting with a computer? AI researchers like Walsh are therefore calling for autonomous systems to be designed so that they cannot be mistaken for humans.

Algorithms decide about freedom of expression

Tech corporations employ tens of thousands of people to fish the dirt out of the net. This digital garbage disposal clicks its way through disturbing images and videos, sifting through and deleting depictions of extreme violence. To spare Facebook and Youtube users the sight, low-paid contract workers in emerging and developing countries risk their mental health.

In addition, they feed databases and train software that could make their jobs superfluous. Facebook boss Mark Zuckerberg keeps talking about "AI tools" that will keep Facebook clean in the future. At a hearing before the US Congress, he referred more than 30 times to AI, which is supposed to delete content independently if it violates Facebook's community standards.

But AI is not only tasked with weeding out terrorist propaganda and child abuse, where the decision is clear. Even lawyers disagree about where the line between freedom of expression and censorship lies. Numerous examples from recent years have shown time and again that letting algorithms decide this is not a good idea. "It was the machine" should not be an excuse for Facebook or Youtube when, say, yet another satire video is blocked because the AI cannot recognize irony.

AI creates deceptively real fake videos

In April, the media portal Buzzfeed published a video in which a man warns of fake videos. He looks like Barack Obama and speaks like Obama, but is not Obama. In fact, it is actor Jordan Peele. The video is a so-called deepfake.


Artificial neural networks, modeled on the natural networks of nerve cells, can now falsify audio and video recordings so perfectly that they can hardly be distinguished from the original. With applications like FakeApp, even ordinary users without special technical skills can create terrifyingly good fake videos.

Many may find it funny when the artificial Obama says: "Trump is a complete idiot. Of course I would never say that - at least not in public." It gets less funny when a manipulated video suddenly circulates on Twitter in which Kim Jong-un announces that he has just launched a nuclear missile at the USA. Will Trump's advisers have enough time to brief him on deepfakes before he pushes the red button? Manipulating public opinion with fake videos is already a reality: on Sunday, a Belgian party spread a Trump fake on social media in which the president supposedly called on Belgium to withdraw from the Paris climate agreement.

Experts warn of an era of disinformation. Aviv Ovadya, who predicted the flood of fake news in the US election campaign, sees humanity heading for an "infocalypse". Fake news has long flooded the web, but videos, at least, were considered forgery-proof. Now you have to distrust your own eyes and ears.

Companies monitor employees

Ordering from Amazon is convenient. Working at Amazon is often the opposite. In the logistics centers, every move is filmed. The monitoring could become even more complete in the future: at the beginning of the year, Amazon was awarded two patents for a wristband that meticulously tracks all of an employee's movements. Thanks to ultrasound and radio technology, the wristband would always know where the wearer's hands are relative to the inventory on the shelves. Vibrations could signal to employees that they are picking the wrong goods.

Amazon denies that the wristband is meant to monitor employees. It is supposed to simplify workflows for logistics workers - if it is ever used at all. Even if you believe Amazon, the potential for abuse remains enormous. Millions of people in emerging countries do piecework building smartphones they will never be able to afford. Trade unions and workers' rights are foreign concepts there. Their employers would find surveillance wristbands useful. That would degrade people a little further toward robots.

A current example shows that countries like China have few inhibitions when it comes to surveillance. A school in the city of Hangzhou in eastern China is testing a facial recognition system: three cameras observe the students and interpret their facial expressions. If the software thinks it has detected an inattentive child, the teacher is notified. "Since the cameras have been hanging in the classroom, I no longer dare to be distracted," said one student. "It's like creepy eyes are watching me all the time."

This text was first published on May 19, 2018.