The discussion of ethics in A.I. first appeared in 1942 in the short story “Runaround” by science-fiction author Isaac Asimov. There Asimov introduced the “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In this short story, the robot Speedy tries to follow these three laws on the planet Mercury but gets stuck in a logical and literal loop, because the laws appear to contradict each other. Only when his human master willingly puts himself in harm’s way does the First Law take precedence, allowing the robot to break out of the loop.
These three famous laws have been the theoretical foundation for many discussions, books and movies for almost 80 years. But since 2011, with the rise of deep learning and the new A.I. wave, the need for trustworthy A.I. has suddenly become very real. Nowadays you read many news articles about “A.I. bias” or “A.I. safety”. No one wants A.I.-induced racism in our systems, or an A.I.-enhanced military drone equipped with heavy bombs that suddenly mixes up friend and foe. How easy it is to turn an innocent A.I. program into a raging Nazi-bot was publicly demonstrated by Microsoft in 2016. Tay, their newest chat bot, started to understand the world by reading gigabytes of Twitter feeds. Within hours, innocent Tay was indoctrinated by malicious people on Twitter, turned into a raging, violent Nazi-bot, and had to be shut down.
Over the last years, the role of A.I. programs has shifted from simple tools to something closer to equal partners in our companies and homes. And thus, like any member of society, they need to start following the pre-determined rules of that society.
To address trustworthiness and ethics in artificial intelligence, the European Union created the European AI Alliance. Its steering group, the High-Level Expert Group on Artificial Intelligence (AI HLEG), released the Ethics Guidelines for Trustworthy AI.
According to the AI HLEG, “Trustworthy A.I.” should meet these three criteria:
- comply with existing laws,
- adhere to ethical principles and human values,
- and be robust, both from a technical and a social perspective.
Microsoft’s Tay was hardly an example of a robust system; it landed rather on the neurotic and highly influenceable side of life.
From these three global criteria, the HLEG derives seven key real-life requirements in the Ethics Guidelines for Trustworthy AI:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing
- Accountability
In the next section we will turn to the first criterion, complying with existing laws.