
Second criterion: Adhering to human, ethical principles (Respect for human autonomy)

The second global criterion of Trustworthy A.I. is “adhering to human, ethical principles”. The HLEG built on the nine basic principles proposed by the European Group on Ethics in Science and New Technologies and derived four ethical principles from them. Like Immanuel Kant, the central philosopher of the Enlightenment, they framed their four principles as “ethical imperatives”, meaning that those who develop and deploy A.I. should continuously strive to fulfil these four principles as fully as possible.

  1. Principle of respect for human autonomy

According to the HLEG, A.I. systems should not “unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans”. What do they mean by that? The most important word in this principle is autonomy – our human autonomy. A.I. systems must not reduce our autonomy and freedom as they progress and evolve. It is the old human fear of losing our role as rulers of the Earth: in Genesis, God gives (Wo)man dominion over the Earth, and humans have feared losing that role ever since. It is a hopeful wish that A.I. systems “should be designed to augment, complement and empower human cognitive, social and cultural skills”.

Speaking as a psychologist, I find this principle very hard to achieve and, in my opinion, it will be the first to be neglected. Just look around and see how people are glued to their phones, how we are all addicted to social media, news, followers and likes. No one who studies human behavior would seriously claim that social media or mobile phones respect human autonomy. And yet we all willingly rearranged our lives around this new technology over the last few years.

Now imagine the software on your phone became much smarter and Siri and Alexa really started to “get you”. Imagine you could have a deeper conversation with Alexa than with your spouse. Imagine Siri were more emotionally intelligent, more understanding and much smarter than your spouse. Do you think this would increase your autonomy or “empower your cognitive, social and cultural skills”? Why would you need a partner if your voice assistant is always there for you – never tired or preoccupied with other things? This would emotionally bind the masses to their A.I. systems – even more than social media does today.

In the next part, we are going to discuss the second principle – the principle of “prevention of harm”.
