
Realizing Trustworthy AI – Technical Methods

As we have seen in the sections above, there are four fundamental principles of Trustworthy AI (respect for human autonomy, prevention of harm, fairness and explicability).

From these principles, the seven key requirements are derived (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability).

The HLEG suggests both technical and non-technical methods to fulfill these requirements. In this chapter, we will look at a few technical methods. This list, like everything in this document, is not exhaustive and will develop over time.

Architectures for Trustworthy AI

An architecture for Trustworthy AI could try to “translate” the above-mentioned requirements into processes or procedures. These processes and procedures could be labeled ethically desirable or ethically undesirable, and with business process engines you could, for example, use process heat maps to determine to what extent your AI system is operating in an ethical manner.

As AI systems are learning systems, we can try to implement guard points along the learning process. In general, a learning system (human, animal or artificial) uses senses or sensors to acquire data (light, touch, smell, sound), then uses cognitive components to plan a certain behaviour based on the sensor data, and then acts upon it. An architecture designed for trustworthiness could place guard processes around sensing, planning and acting to ensure an overall ethically desired behaviour, as sketched below. The important point here is to implement the solution at a concrete level of detail.
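As an illustration only (the HLEG does not prescribe any concrete architecture), the following Python sketch shows how hypothetical guard functions could wrap each stage of a sense-plan-act loop and fall back to a safe action whenever a check fails; every function name and rule in it is a made-up placeholder.

```python
# Minimal sketch of guard checkpoints around a sense-plan-act loop.
# All functions and rules below are hypothetical placeholders, not an
# official HLEG reference architecture.

def sense(raw_reading: dict) -> dict:
    """Turn a raw sensor reading into an observation."""
    return {"speed": raw_reading.get("speed"), "obstacle": raw_reading.get("obstacle")}


def sense_guard(observation: dict) -> bool:
    """Reject implausible or incomplete sensor data."""
    return observation["speed"] is not None and 0 <= observation["speed"] <= 250


def plan(observation: dict) -> str:
    """Decide on an action based on the observation."""
    return "brake" if observation["obstacle"] else "continue"


def plan_guard(action: str) -> bool:
    """Only allow actions from an explicitly approved set."""
    return action in {"brake", "continue", "stop"}


def act(action: str) -> None:
    print(f"executing: {action}")


def step(raw_reading: dict) -> None:
    observation = sense(raw_reading)
    if not sense_guard(observation):
        act("stop")  # sensing guard fired: fall back to a safe action
        return
    action = plan(observation)
    if not plan_guard(action):
        act("stop")  # planning guard fired: fall back to a safe action
        return
    act(action)


step({"speed": 50, "obstacle": True})    # -> executing: brake
step({"speed": 999, "obstacle": False})  # -> executing: stop (sensing guard fired)
```

The pattern, not the specific rules, is the point: each stage of the loop has an explicit checkpoint and a predefined safe fallback.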

Ethics and rule of law by design

Nowadays companies already use “by-design” architectures when they develop privacy-by-design or security-by-design systems. The HLEG suggests taking this idea and extending it to the concept of Trustworthy AI. What could that look like? Imagine we want to fulfill the requirement of technical robustness and safety. We could then build in fail-safe switches that trigger if the system is compromised, much like the fail-safe switch that cuts power to a hair dryer when it falls into water.
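To make the analogy concrete, here is a minimal, hypothetical sketch of a software “fail-safe switch”: before serving a prediction, the system verifies the integrity of its deployed model and drops into a conservative safe state if the check fails. The checksum mechanism and the chosen safe state are assumptions for illustration, not an HLEG recommendation.

```python
# Hypothetical fail-safe switch: if an integrity check fails, the system
# drops into a predefined safe state instead of continuing to operate.

import hashlib


def model_is_intact(model_bytes: bytes, expected_sha256: str) -> bool:
    """Compare the deployed model artifact against a known-good checksum."""
    return hashlib.sha256(model_bytes).hexdigest() == expected_sha256


def predict_or_fail_safe(model_bytes: bytes, expected_sha256: str, predict, x):
    """Run the model only if it passes the integrity check; otherwise
    return a conservative default decision (the 'fail-safe' position)."""
    if not model_is_intact(model_bytes, expected_sha256):
        return "REFUSE_AND_ALERT"  # safe state, like a tripped fail-safe switch
    return predict(x)


dummy_model = b"model-weights-v1"
checksum = hashlib.sha256(dummy_model).hexdigest()
print(predict_or_fail_safe(dummy_model, checksum, lambda x: "GO", x=None))  # GO
print(predict_or_fail_safe(b"tampered", checksum, lambda x: "GO", x=None))  # REFUSE_AND_ALERT
```

The important design choice is that the safe state is defined up front, so the system always has somewhere harmless to go when something looks wrong.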

Explanation methods

The field of Explainable AI, or XAI, is dedicated to answering the question of how to explain the sensing, planning and acting processes within AI systems. It is cute when an AI system mistakes a puppy for a bagel, but far less cute when a self-driving car ignores a moving truck because its tarpaulin shows an image of an open sky. All AI developers are well advised to follow XAI research closely and learn from its findings.
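One widely used explanation technique (among many in XAI) is permutation importance, which measures how much a model's score drops when a single input feature is shuffled. The sketch below uses scikit-learn and a public dataset purely for illustration; it is one possible method, not a method the HLEG prescribes.

```python
# Sketch of permutation importance with scikit-learn (assumed installed):
# features whose shuffling hurts the score the most explain the model best.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```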

Testing and validating

It is important to mention that software testing and AI testing are not the same. Many companies try to develop AI systems with classic software-development methods, but that does not always lead to success, and the same applies to testing. Just because you have a test engineer in house does not mean he or she is able to test AI systems. AI systems need to be tested continuously and at all stages. A sometimes fun task (though not for the AI engineers) is to create adversarial or “red” teams within your company whose only purpose is to break the system. That way you are not relying on biased testers; a small sketch of the idea follows below.
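As a small, hypothetical illustration of the red-team mindset, the sketch below trains a simple model and then deliberately perturbs its inputs with random noise to see how often predictions change; the model, dataset and noise level are arbitrary choices for demonstration only.

```python
# Red-team style robustness probe: instead of testing only the happy path,
# deliberately perturb inputs and measure how often predictions change.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)


def red_team_noise_probe(model, X, noise_scale=0.05, trials=100, seed=0):
    """Add small random noise and return the average fraction of flipped predictions."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flipped = 0.0
    for _ in range(trials):
        X_noisy = X + rng.normal(0, noise_scale, size=X.shape)
        flipped += np.mean(model.predict(X_noisy) != baseline)
    return flipped / trials


flip_rate = red_team_noise_probe(model, X)
print(f"average fraction of predictions flipped by small noise: {flip_rate:.3f}")
```

In a real pipeline, the red team would agree on a threshold for such robustness metrics and fail the build whenever it is exceeded.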

In the next chapter we are going to discuss non-technical methods to meet the requirements of Trustworthy AI.
