
Realizing Trustworthy AI – Fourth requirement: Transparency

Today we are talking about the fourth requirement: Transparency. Transparency can be directly linked to the principle of explicability, which we talked about earlier.

Transparency of AI should not only cover the model itself, but also the complete data stack and data processing pipeline (data engineering, filtering, aggregation), the setup of the underlying system and, most importantly, the business model: How does the company generate money? Which assets of a user are they selling?

Traceability

I very much support the HLEG’s idea to extend transparency towards data and data processing. It would be very interesting to know which data companies are actually using and how they are “enriching” the data they collect from user behavior. There is a whole ecosystem of data-enriching companies out there that many people have never heard of. It is not enough for a company to tell people which data it records on its website; it should also disclose which data it enriches it with, because it is the combination of data points that makes inferred information so critical.
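To illustrate why the combination matters, here is a minimal, hypothetical sketch of such an enrichment step; all column names, values, and the idea of a simple table join are my own assumptions for illustration, not a description of any real provider.

```python
# Hypothetical sketch of data enrichment: first-party behavioral data
# is joined with records obtained from a third-party data broker.
# All column names and values are invented for illustration.
import pandas as pd

# Data a company records on its own website
first_party = pd.DataFrame({
    "user_id": [1, 2],
    "pages_visited": [14, 3],
    "time_on_site_min": [22.5, 4.1],
})

# Data purchased from an enrichment provider
third_party = pd.DataFrame({
    "user_id": [1, 2],
    "estimated_income": [72000, 31000],
    "household_size": [4, 1],
})

# The combination is what makes inferences critical: neither table
# alone reveals as much as the merged profile does.
enriched = first_party.merge(third_party, on="user_id")
print(enriched)
```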

Furthermore, it is important to know how data is filtered and aggregated before it is used in the AI model. Imagine a bank aggregates credit scores into just two groups before feeding them into the model: a person with a credit score above 100 (a fictitious value) gets a 1, and a person with a credit score below 100 gets a 0. This is a highly consequential decision by the bank, and it happens entirely outside of the actual AI model, as the sketch below illustrates.
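Here is a minimal sketch of that aggregation step, using the fictitious threshold of 100 from above; the function name and values are my own illustration, not any actual bank's logic.

```python
# Minimal sketch of the aggregation described above: credit scores are
# collapsed into two groups *before* the AI model ever sees them.
def bin_credit_score(score: float, threshold: float = 100.0) -> int:
    """Return 1 for scores above the (fictitious) threshold, 0 otherwise."""
    return 1 if score > threshold else 0

scores = [87.0, 100.0, 142.5]
model_inputs = [bin_credit_score(s) for s in scores]
print(model_inputs)  # [0, 0, 1]

# The choice of threshold is a consequential business decision, yet it
# happens outside the trained model and stays invisible if transparency
# covers only the model itself.
```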

Explainability

Explainability does not just cover the explainability and interpretability of the AI model; it covers the complete pipeline. Furthermore, and this is a wishful thought, it should also cover the non-technical parts of the AI system: Which decisions do humans take based on the suggestions of the AI model? Are there intermediary human steps; are humans in or on the loop? Looking at AI-driven application screening software, I would love to know how that actually works in a company. Does a human actually check which applicants are being rejected, and why? A sketch of such a human review step follows below.
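Here is a hypothetical sketch of a “human in the loop” step in an application screening pipeline: the model only suggests an outcome, and a human reviewer records the final decision plus a rationale. All names and structures are invented for illustration.

```python
# Hypothetical sketch: the AI model suggests an outcome, a human makes
# and logs the final call. The logged rationale is what makes the
# non-technical part of the system explainable later on.
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    applicant_id: str
    model_suggestion: str   # e.g. "reject" or "proceed"
    human_decision: str     # the final call, made by a person
    reviewer: str
    rationale: str          # recorded so the decision can be explained

def review(applicant_id: str, model_suggestion: str, reviewer: str,
           human_decision: str, rationale: str) -> ReviewedDecision:
    # The human may confirm or override the model; either way the
    # reasoning is logged.
    return ReviewedDecision(applicant_id, model_suggestion,
                            human_decision, reviewer, rationale)

# A reviewer overrides the model's suggested rejection and records why.
entry = review("A-1042", "reject", "j.doe", "proceed",
               "Employment gap explained by parental leave.")
print(entry)
```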

The most important aspect of explainability is that the people who need to understand it (compliance officers, the board, stakeholders, users) are presented with a format that is actually understandable for them. AI-inherent language needs to be translated into a format that is written for everybody; a toy example of such a translation follows below. I am excited to see which new job profiles will emerge focusing solely on this topic.
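The sketch below turns feature contributions into plain-language sentences; the feature names, weights, and wording are my own assumptions, and in practice the contributions might come from an explanation method such as model coefficients or SHAP values.

```python
# Toy translation of model-internal language into plain language.
# The contribution values are invented for illustration.
contributions = {
    "credit_score": -0.8,
    "years_at_employer": +0.3,
    "existing_loans": -0.4,
}

LABELS = {
    "credit_score": "your credit score",
    "years_at_employer": "how long you have been with your employer",
    "existing_loans": "the number of loans you already have",
}

# Present the strongest influences first, in everyday wording.
for feature, weight in sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True):
    direction = "lowered" if weight < 0 else "raised"
    print(f"{LABELS[feature].capitalize()} {direction} the recommendation.")
```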

Communication

As AI systems become more and more humanlike, it is important that they clearly identify as AI systems. Many website chats do not make clear whether they are AI- or human-operated. The HLEG makes clear that users should not only be informed but also given the choice of whether they want to communicate with an AI system or would rather prefer a real human.
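A minimal sketch of what that disclosure and choice could look like in a chat flow; the wording and function are invented, not a prescribed implementation.

```python
# Hypothetical chat opener: disclose the AI up front and offer a human.
def start_chat(user_wants_human: bool) -> str:
    disclosure = "Hi! You are chatting with an automated AI assistant."
    if user_wants_human:
        return disclosure + " Connecting you to a human agent now."
    return disclosure + " How can I help? (Type 'human' at any time to switch.)"

print(start_chat(user_wants_human=False))
```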

In the next chapter, we are looking into the fifth requirement: “diversity, non-discrimination and fairness”.
