After analyzing the four ethical principles (respect for human autonomy, prevention of harm, fairness, and explicability) of the second criterion, we now explore how the HLEG envisions Europe moving forward with AI and how the four ethical principles can be implemented in practice.
The HLEG identifies three groups of stakeholders in the AI lifecycle. At the beginning are the “developers”: the individuals and companies who develop AI systems. The HLEG wants these companies to adhere to the four ethical principles when they design and develop new AI technologies. The second group are the “deployers”: individuals, groups, or companies that use AI technologies in their products and services. I can imagine that, in the future, AI-developing companies will need to produce a declaration of no objection, or a declaration of ethical clearance, to sell their products to deployers, and that deployers might be obliged by legal governance processes to use only AI software that comes with such declarations (analogous to the data-protection processes codified in the GDPR framework).
The third group comprises end-users and the broader society, who should in the future have the right to be automatically informed about the results of the ethical evaluation and the right to be informed about the reasoning of the AI system.
The HLEG compiled a list of concrete requirements that address, on the one hand, the dimension of ethical principles and, on the other, the dimension of stakeholders. Each requirement in this non-exhaustive list stresses a different principle and/or a different stakeholder.
Seven tangible requirements of Trustworthy AI
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing
- Accountability
As you can imagine, the requirements are not equally important to every AI system, and in certain settings they may even contradict each other. That is fine. If you start integrating ethical discussions of AI into “development” or “deployment”, you may not find a perfect solution at the end. Even within your team or company there may be different stakeholders with different viewpoints. What matters most is that your collective reasoning process is transparent and explicable. Own the ethical discourse within your company and use this tension to progress further rather than cancelling out voices. You might work in a field where certain ethical requirements cannot be met fully, whether for regulatory reasons or for purely economic ones, for example because you fear other companies might use your knowledge to develop even better algorithms. Whatever the reason, make your thought process transparent and explicable.
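One lightweight way to make such a reasoning process transparent is to record each trade-off decision in a structured log. The sketch below is purely illustrative, not part of the HLEG guidelines: the `TradeOffRecord` class and its fields are hypothetical names chosen for this example.

```python
from dataclasses import dataclass, field

# Hypothetical record type for documenting one ethical trade-off decision.
# The field names and the example values are assumptions for illustration.
@dataclass
class TradeOffRecord:
    requirement: str                  # the requirement being weighed
    tension_with: str                 # the requirement it conflicts with
    decision: str                     # what the team decided
    rationale: str                    # the reasoning, kept explicable
    stakeholders: list = field(default_factory=list)

# A simple append-only log of decisions, reviewable by all stakeholders.
log = []
log.append(TradeOffRecord(
    requirement="Transparency",
    tension_with="Technical robustness and safety",
    decision="Publish model documentation, withhold raw model weights",
    rationale="Full disclosure could enable misuse; documentation "
              "keeps the reasoning explicable to deployers and end-users.",
    stakeholders=["developers", "deployers", "end-users"],
))
```

Keeping such records does not resolve the tension between requirements, but it makes the collective reasoning auditable later, which is exactly what the guidelines ask for.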
In the next section we will talk about the first requirement, “Human agency and oversight”, and we will see how you as a company can implement it in your everyday work.