
Realizing Trustworthy AI – Fifth requirement: Diversity, non-discrimination and fairness

Before we discuss today’s fifth requirement, we need to define certain terms to avoid confusion.

Discrimination

Discrimination has been defined by EU Council Directive 2000/78/EC of 27 November 2000, which established a general framework for equal treatment in employment and occupation. The Directive’s purpose was “to lay down a general framework for combating discrimination on the grounds of religion or belief, disability, age or sexual orientation as regards employment and occupation”. As we can see, the definition has a narrowly defined scope (only religion or belief, disability, age and sexual orientation), and people should not be treated differently on these grounds (Article 2(2)(a)).

Article 2a & 2b define further direct and non-direct discrimination as follows:

(a) direct discrimination shall be taken to occur where one person is treated less favourably than another is, has been or would be treated in a comparable situation, on any of the grounds referred to in Article 1;

(b) indirect discrimination shall be taken to occur where an apparently neutral provision, criterion or practice would put persons having a particular religion or belief, a particular disability, a particular age, or a particular sexual orientation at a particular disadvantage compared with other persons unless:

(i) that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are appropriate and necessary

As we see above, the HLEG uses the word discrimination not in the broad sense that data science does, but in a very specific legal context. Furthermore, it allows indirect discrimination if it is “objectively justified by a legitimate aim”. Thus denying someone a €500,000 loan because the person is 85 would be objectively justified by risk management and would therefore not violate the principle of non-discrimination.

Avoidance of unfair bias

There is a real risk that the use of historic data to train an AI system can lead to unwanted bias and thus to unintended indirect prejudice and discrimination (as defined above). Interestingly, the HLEG also extends unfair bias to the exploitation of the user’s biases by “engaging in unfair competition, such as the homogenisation of prices”. For further reading I suggest the following paper from the European Union: Big Data: Discrimination in data-supported decision making
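To make the risk of unwanted bias in historic training data more concrete, here is a minimal sketch of how one might screen a dataset for large gaps in favourable outcomes between groups before training a model. This is not a method prescribed by the HLEG; the column names (“approved”, “age_group”) and the toy data are purely illustrative assumptions.

```python
import pandas as pd

# Minimal sketch: checking a historic training dataset for potential
# indirect disadvantage of a protected group before training on it.
# Column names ("approved", "age_group") are hypothetical.

def selection_rates(df: pd.DataFrame, outcome: str, protected: str) -> pd.Series:
    """Share of favourable outcomes per group of the protected attribute."""
    return df.groupby(protected)[outcome].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate.
    Values well below 1.0 hint that one group may be put at a
    'particular disadvantage' and warrant closer review."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy historic loan data (illustrative only).
    data = pd.DataFrame({
        "age_group": ["<60", "<60", "<60", "60+", "60+", "60+"],
        "approved":  [1,     1,     0,     0,     0,     1],
    })
    rates = selection_rates(data, outcome="approved", protected="age_group")
    print(rates)
    print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
```

Such a check only flags statistical disparities; whether a disparity is “objectively justified by a legitimate aim”, as the Directive allows, remains a human judgement.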

Accessibility and universal design

This addresses the fact that systems should be designed to be usable by people of all ages, genders, abilities and characteristics. Specifically, people with disabilities should not be excluded when designing AI software.

Stakeholder Participation

As in any good project management framework, stakeholder management is key to ensuring broad acceptance and ethical validation from all groups. Thus stakeholders from all walks of life who are potentially affected by the AI system should be consulted and heard.

In the next chapter we are going to discuss the sixth requirement, “Societal and environmental well-being”.
