
Realizing Trustworthy AI – Seventh Requirement: Accountability

The last requirement, “Accountability”, is derived from the meta-principle of fairness. It is probably the most important requirement for business leaders and governance officers, because it delivers clear KPIs for AI systems.

It is important to note that accountability covers the complete life cycle of the AI system, from development to deployment and use. And as in blockchain technologies, accountability should be retraceable at every point in time. This means that a company which deploys a third-party AI technology will still be held responsible for the complete accountability life cycle.
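To make the blockchain analogy concrete, here is a minimal sketch of a hash-chained, append-only accountability log in Python. All names (AccountabilityLog, record, verify) and the event fields are my own hypothetical illustrations, not part of the HLEG guidelines; the point is only that each life-cycle event references the hash of the previous one, so later tampering with the record is detectable.

```python
import hashlib
import json
import time

class AccountabilityLog:
    """Append-only log of AI life-cycle events (development,
    deployment, use). Each entry is chained to the previous one
    via a SHA-256 hash, so any later tampering breaks the chain.
    Illustrative sketch only; names and fields are hypothetical."""

    def __init__(self):
        self.entries = []

    def record(self, phase: str, actor: str, description: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "timestamp": time.time(),
            "phase": phase,            # e.g. "development", "deployment", "use"
            "actor": actor,            # who is accountable for this step
            "description": description,
            "prev_hash": prev_hash,    # link to the previous entry
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(payload)
        return payload

    def verify(self) -> bool:
        """Re-compute the chain to confirm no entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

log = AccountabilityLog()
log.record("development", "data-science team", "model v1 trained on dataset D")
log.record("deployment", "ops team", "model v1 released to production")
assert log.verify()
```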

Honestly, I am curious to see what happens once the first liability cases hit the courtrooms. And I am sure that companies will try to pass the liability down to the maker of the system. But for the sake of the user, each company which deploys a system should be held responsible before the user and the law. Otherwise, citizens in the EU would have to battle companies all over the world.

The requirement of accountability is divided into four categories:

Auditability

Minimisation and reporting of negative impacts

Trade-offs

Redress

Auditability

Today, companies offering and running critical infrastructure or critical products are audited all the time. Medical and pharma companies are governed by the so-called GxP framework and are tightly watched by governmental institutions. Publicly traded companies are also audited continuously and are legally required to be. Hence, once time has passed and society has caught up with AI, these organizational and governmental structures and procedures will probably also apply to those AI systems which touch “fundamental rights, including safety-critical applications”. The auditing will go beyond auditing the algorithm alone and will probably also include audits of the data, the design process, and other system- and organization-related entities. I would assume that specialized AI auditing companies will see the light of day within the next few years.

Minimisation and reporting of negative impacts

This category focuses on two abilities needed to minimize negative impacts: the ability to report on AI systems and the ability to respond to risks and threats. On the one hand, the ability to report requires mechanisms that ensure employees can report issues without fear of retaliation from the organization (e.g. whistle-blower hotlines); on the other hand, the ability to respond requires giving employees the actual means to fix the problem. The latter can be harder at times and is tightly connected to other requirements such as transparency and technical robustness and safety.

However, the HLEG notes that “[risk] assessments must be proportionate to the risk that the AI system pose[s]”. That means (luckily) that small, AI-driven Excel add-ons used by SMEs will probably not be assessed, while AI-supported elevators or plane guidance systems will probably feel the full force of governmental control.
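As a toy illustration of risk-proportionate assessment, the sketch below maps a few system attributes to an assessment depth. The tiers, thresholds, and function name are entirely my own assumptions for illustration; the HLEG does not prescribe such a triage.

```python
def assessment_depth(safety_critical: bool,
                     affects_fundamental_rights: bool,
                     user_count: int) -> str:
    """Toy triage: the riskier the system, the deeper the assessment.
    Tiers and thresholds are illustrative, not from the HLEG guidelines."""
    if safety_critical:
        return "full external audit"
    if affects_fundamental_rights or user_count > 100_000:
        return "internal audit plus documented impact assessment"
    return "lightweight self-assessment"

# An AI-driven Excel add-on used by an SME vs. a safety-critical controller:
assert assessment_depth(False, False, 200) == "lightweight self-assessment"
assert assessment_depth(True, False, 50) == "full external audit"
```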

Trade-offs

As we have all learned in life: we cannot please everyone. Thus, when trying to implement the seven requirements, some might not be fully met by the AI system. This is not bad per se, but all trade-offs need to be well documented and properly discussed. Furthermore, company leaders will have to acknowledge them in writing and decide whether the AI system should be run despite the existing trade-offs. This document will then become part of the auditing process as well.
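Here is a minimal sketch of what one entry in such a trade-off register could look like. All field names and the sign_off step are hypothetical assumptions of mine; the written acknowledgment the guidelines call for is captured in the sign-off.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TradeOff:
    """One documented trade-off between Trustworthy AI requirements.
    All field names are illustrative, not prescribed by the HLEG."""
    requirement_a: str         # e.g. "transparency"
    requirement_b: str         # e.g. "privacy and data governance"
    description: str           # what is not fully met, and why
    mitigation: str            # how the negative impact is reduced
    acknowledged_by: str = ""  # accountable leader, filled at sign-off
    acknowledged_on: Optional[date] = None
    approved: bool = False     # may the system run despite the trade-off?

    def sign_off(self, leader: str, approved: bool) -> None:
        """Record the written acknowledgment required before go-live."""
        self.acknowledged_by = leader
        self.acknowledged_on = date.today()
        self.approved = approved

entry = TradeOff(
    requirement_a="transparency",
    requirement_b="privacy and data governance",
    description="Full model explanations would expose personal training data.",
    mitigation="Provide aggregate feature importances instead.",
)
entry.sign_off(leader="Chief AI Officer", approved=True)
```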

Redress

Redress is closely tied to the ability to respond. Redressing means to “set right again”. So, if errors are reported, employees must actually be able to fix a faulty AI system.

And this concludes the discussion of the four principles and seven requirements of Trustworthy AI. In the next chapters, we will discuss which technical and non-technical methods the HLEG suggests to “realise Trustworthy AI”.
