
Realizing Trustworthy AI – First requirement: Human agency and oversight

The first requirement for a trustworthy AI system is human agency and oversight. First and foremost, AI systems should adhere to fundamental rights as defined in the EU Charter of Fundamental Rights.

Fundamental rights

To make an informed decision, I looked at the fundamental rights as defined by the EU Charter from 2012. It is broken down into six titles: dignity, freedoms, equality, solidarity, citizens’ rights and justice. I would advise AI developers to pay special attention to Title III, Article 21, which deals with non-discrimination. As we discussed in previous chapters, “discrimination” is defined differently in data science than in everyday language, and I would advise that the employees responsible for ethical governance have the legal department look into this article and establish a clear definition of discrimination.

The article says that “any discrimination (or group building) based on any ground as …language, …political or any other opinion (!),..property,…, age… shall be prohibited.” Just imagine you are a bank and use AI to determine whether somebody should be given a loan. Then it would definitely make a difference whether this person owns ten houses or none, or whether they are 40 or 80 years old. Furthermore, “any other opinion” is a very broad term. Just imagine you are assessing the personality of an applicant as an automated step in your job assessment software. The basic idea behind developing personality profiles is that teams with matching personality profiles work better together. But a personality profile is developed by getting the applicant’s opinion on different topics. If that person then doesn’t match the target profile, the AI system would reject the person from advancing in the assessment process. According to the HLEG, this would already violate fundamental rights. Is that realistic? I would advise reading the Charter of Fundamental Rights of the European Union to make sure that this aspect of the first principle is complied with.
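
To make the data-science meaning of “discrimination” concrete, here is a minimal, entirely hypothetical sketch of a loan-scoring model. The feature names, weights and threshold are invented assumptions on my part, not any real bank’s system; the point is only that weighting features such as age or property ownership is exactly the kind of “group building” that Article 21 could be read to prohibit.

```python
# Hypothetical sketch: a toy loan model that "discriminates" in the
# data-science sense, i.e. it groups applicants by features such as age
# and property ownership. All weights and thresholds are illustrative.

def loan_score(age: int, houses_owned: int, income: float) -> float:
    """Toy linear score: higher means more likely to be approved."""
    # Each weighted term is a "discrimination" on that feature.
    return 0.5 * income / 10_000 + 2.0 * houses_owned - 0.05 * abs(age - 45)

def approve(age: int, houses_owned: int, income: float) -> bool:
    return loan_score(age, houses_owned, income) > 3.0

# Same income, but different age and property, yields different outcomes:
print(approve(age=40, houses_owned=10, income=50_000))  # True
print(approve(age=80, houses_owned=0, income=50_000))   # False
```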

Human agency

The HLEG defines human agency as users being able to:

“be able to make informed autonomous decisions regarding AI systems. [Users] should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system.”

That is a very ambitious goal, and I am curious how it will play out. I would love to see a court force the social media platforms to open their algorithms to the public, so we could see why certain stories are not being pushed or why certain articles or people are suggested. Furthermore, I find it extremely challenging to build a system that is comprehensible to the user. I think only time will tell how these general terms will live up to reality. What is a “satisfactory degree”, what is “reasonably”, what is meant by “self-assess or challenge”? Should everyone be able to self-assess the AI algorithms of Facebook? It would be lovely to see full transparency in the AI algorithms of banks and insurers. But again, time will tell whether ethical desires will beat business needs in reality.
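
What could the “knowledge and tools to comprehend” a decision look like in practice? One reading is that the system should at least show users which factors drove a decision, so they have something concrete to challenge. The sketch below assumes a simple linear model with invented feature names and weights; it illustrates one possible approach, not the HLEG’s prescription.

```python
# Hypothetical sketch: exposing per-feature contributions of a linear
# model so a user can see why a decision was made and challenge it.
# Model, feature names and weights are invented for illustration.

WEIGHTS = {"income": 0.8, "debt": -1.2, "years_at_job": 0.3}
BIAS = -0.5

def explain(applicant: dict) -> dict:
    """Return each feature's contribution to the final score."""
    return {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}

applicant = {"income": 4.0, "debt": 2.5, "years_at_job": 6.0}
contributions = explain(applicant)
score = BIAS + sum(contributions.values())

print(f"score = {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name:>12}: {value:+.2f}")  # user sees what drove the decision
```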

Human oversight

Human oversight can be placed on a continuum, ranging from “human-in-the-loop” (HITL), where a human can step into every decision cycle of the AI system, to “human-in-command” (HIC), where humans merely oversee the general outcome or activity of the system, regardless of the activities “under the hood”. In my opinion, HITL contradicts the very point of AI systems, but HIC should be approached in most cases. HIC would allow humans to override the decisions of the system in specific cases, or it could mean not using the system at all in particular situations. Insurers already use the HIC principle when separating simple claims, handled by AI systems, from complex claims, handled by humans.
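
The insurers’ split between simple and complex claims can be sketched as a small routing function. The thresholds, fields and confidence measure below are illustrative assumptions on my part; the point is that a human stays in command via the escalation and override paths.

```python
# Hypothetical sketch of HIC-style routing: simple, high-confidence claims
# are settled automatically, everything else is escalated to a human.
# Thresholds and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    model_confidence: float  # 0.0 - 1.0, from some upstream AI system

def route(claim: Claim, human_override: bool = False) -> str:
    """Human-in-command: the AI decides routine cases, but a human can
    always override or carve whole classes of claims out of automation."""
    if human_override:
        return "human"                    # HIC: bypass the AI entirely
    if claim.amount > 10_000:
        return "human"                    # complex claims go to humans
    if claim.model_confidence < 0.9:
        return "human"                    # low confidence -> escalate
    return "auto"                         # simple claim, settle automatically

print(route(Claim(amount=800, model_confidence=0.97)))     # auto
print(route(Claim(amount=25_000, model_confidence=0.99)))  # human
```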

In the next chapter we will look at the second requirement, “Technical robustness and safety”.
