
Second criterion: Adhering to human, ethical principles (the principle of fairness)

In the last section we discussed the principle of prevention of harm. Today we are looking at the next principle:

The principle of fairness

The HLEG suggests that “the development, deployment and use of AI systems must be fair”. Since the word fair is a true bucket word into which you can put literally hundreds of definitions, they define two distinct dimensions: substantive and procedural.

The substantive dimension demands that an A.I. system shall “ensure equal and just distribution of both benefits and costs” and shall “ensure that individuals and groups are free from unfair bias, discrimination and stigmatization”.

Let us look at the first part of the substantive dimension. The HLEG demands an equal distribution of both benefits and costs. What does that mean? Does it mean that the negative impact of the A.I. should be as high as its positive impact? Or should the negative costs to society be offset by an even higher benefit for the company? As mentioned above, the word fair can be defined in many ways. Merriam-Webster defines it as free from self-interest, prejudice, or favoritism. Free from self-interest, prejudice, or favoritism? Well, every company building A.I. systems has a clear self-interest: increasing customer satisfaction, business value and employee satisfaction. Does that mean companies can never be fair? And in capitalism, should companies really distribute benefits and costs equally, when that would mean they never make any profit? By the pure definition of a business, then, no business could be fair according to the HLEG. What do you think?

Let me furthermore dwell a little on the negatively connoted word “favoritism”. Is favoritism really bad? Imagine you have two customers: one who brings you one million Euro in revenue per month and one who brings you 10 Euro per month. Which customer will be treated with more care and service? Of course the one who brings in one million Euro. Every hour your employees work generates costs, and, unless you are a charity, you cannot justify spending more money on a customer than that customer generates in revenue.

Let me come back to the second part of their definition: “ensure that individuals and groups are free from unfair bias, discrimination and stigmatization”.

As you can see, in this article I truly try to understand and challenge their ideas, so that we can figure out how to actually use these ethical principles in real life.

I fully support the second part of the definition: we should ensure that individuals and groups are free from unfair bias, discrimination and stigmatization. I like how they quietly placed the word “unfair” in front of bias, discrimination and stigmatization. As we have discussed before, every A.I. system is biased by default, so the bias should just be fairly distributed. Furthermore, discrimination in the original sense is actually one of the big strengths of A.I. systems, as it simply means that individuals or data points can be separated into different groups, for example when you try to discriminate between bagels and puppies, or cats and dogs. One group of classification algorithms is literally called discriminant analysis. So the HLEG is not saying that an A.I. system should not be biased or discriminating, only that it should not be unfairly biased or discriminating. I think you see the point I am trying to make: these principles need to be discussed and elaborated for each A.I. system separately, and there is surely no easy “good and bad” in this discussion.
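To make the technical meaning of “discrimination” concrete, here is a minimal sketch of a discriminant analysis classifier separating two made-up groups of data points. The data, labels and library usage are illustrative assumptions on my part, not anything the HLEG prescribes.

```python
# Minimal sketch: "discrimination" in the statistical sense, i.e. separating
# data points into groups. Uses scikit-learn's LinearDiscriminantAnalysis
# on synthetic data (all values here are made up for illustration).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

# Two synthetic groups, e.g. "cats" (label 0) and "dogs" (label 1),
# described by two arbitrary numeric features.
cats = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[4.0, 5.0], scale=0.5, size=(100, 2))

X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)

# The classifier literally learns to discriminate between the two groups.
clf = LinearDiscriminantAnalysis().fit(X, y)

print(clf.predict([[2.1, 3.2], [4.2, 4.9]]))  # expected: [0 1]
```

In this sense, discriminating well is exactly what we want from the model; the ethical question only starts with which attributes it is allowed to discriminate on.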

If you are building a fraud detection system for a bank, discrimination (in the literal sense) should be your strong suit and your unique selling proposition, because you want to discriminate between normal and fraudulent transactions. However, imagine the system starts to deny certain people just because of their zip code, age, gender or heritage; then the (company’s) benefits might no longer equal the (societal and individual) costs.
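One way to make “unfairly biased” measurable is to compare how often the system denies transactions across groups defined by a sensitive attribute. The following is a rough sketch of such a check; the column names, data and the 10-percentage-point tolerance are hypothetical assumptions for illustration, not a prescribed method.

```python
# Rough sketch: comparing denial rates of a (hypothetical) fraud model across
# groups of a sensitive attribute, e.g. zip-code region. All names and data
# below are illustrative assumptions.
import pandas as pd

# Hypothetical model decisions: 1 = transaction denied, 0 = approved.
decisions = pd.DataFrame({
    "zip_region": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "denied":     [0,   1,   0,   1,   1,   0,   1,   0],
})

# Denial rate per group (a simple "demographic parity" style check).
rates = decisions.groupby("zip_region")["denied"].mean()
print(rates)

# Flag a potentially unfair gap if denial rates differ by more than an
# (arbitrarily chosen) tolerance of 10 percentage points.
gap = rates.max() - rates.min()
if gap > 0.10:
    print(f"Warning: denial rates differ by {gap:.0%} across zip regions")
```

Such a check does not settle what is fair, but it forces the discussion onto concrete numbers instead of gut feeling.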

Procedural Dimension

I remember, many years back, when one of my relatives was rejected for a membership card at a big American office supply company. He had never been in debt, had always been employed, had no criminal convictions, owned a house and a car, was married with two kids, and still got rejected. He felt very bad, because no one could tell him why they did not want to give him a membership card. He did not like the idea of the rejection and started to investigate in his spare time; after months of back and forth with that company he finally found out that their internal system had flagged him as a potential “B-client” in terms of future revenue, and that they were focusing only on potential “A-clients” at that point in their customer acquisition strategy. For the company this was fully acceptable behavior, but for the individual on the other side it was very painful.

Thus the procedural dimension of fairness entails the “ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them”. In order to contest decisions made by A.I. systems, the company responsible for the decision must be “identifiable, and the decision-making processes should be explicable”. In my relative’s case, this would have put his mind at ease much sooner.

As we can see, the explainability of A.I. decisions to the affected individual or group is very important. In the next section we will therefore discuss the last principle: the principle of explainability.
