In the last chapter we discussed the first requirement, “Human agency and oversight”. Today we will look into the second requirement, “Technical robustness and safety”, which is concerned with risk assessment and the prevention of harm.
Resilience to attack and security
As with any other system, an AI system needs to be resilient to hostile attacks from the outside and the inside. AI security extends the normal range of cyber-security measures, because the model and the incoming data also need to be protected. Protection against data poisoning and model leakage is usually unfamiliar territory for cyber-security teams, so they need to actively extend their processes and procedures to defend AI systems against these AI-specific threats. We already discussed Microsoft’s chat bot, which was deliberately attacked through data poisoning. Furthermore, note that this requirement contradicts the first requirement in some respects: since model leakage can lead to threats to the system, it should be avoided; but viewed as model transparency, it should be sought after.
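To make the data-poisoning threat concrete, here is a minimal sketch of one simple defence: screening incoming training samples against the statistics of a trusted baseline before they ever reach the model. The feature values, the z-score threshold, and the function names are illustrative assumptions, not a complete or recommended defence.

```python
# Toy data-poisoning screen (illustrative only): reject incoming training
# samples that lie far outside the distribution of trusted baseline data.

def mean_and_std(values):
    """Plain mean and (population) standard deviation of a list of floats."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var ** 0.5

def filter_suspicious(baseline, incoming, z_threshold=3.0):
    """Keep only incoming samples within z_threshold standard
    deviations of the trusted baseline distribution."""
    mean, std = mean_and_std(baseline)
    return [x for x in incoming
            if std == 0 or abs(x - mean) / std <= z_threshold]

baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]   # data we trust
incoming = [1.0, 0.98, 9.5, 1.03]              # 9.5 looks like a poisoned sample
clean = filter_suspicious(baseline, incoming)  # 9.5 is rejected
```

Real poisoning attacks are of course subtler than a single extreme value, but the principle, validating training data against a trusted reference before it shapes the model, is the same.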
Risk Management / General Safety
As AI systems are used more and more in critical tools and products, risk management needs to be woven into the general project management of AI development and should be enforced by general company governance.
Accuracy, reliability and reproducibility
Psychological tests, medicine and AI systems have a lot in common: they are all measured and judged by accuracy and reliability. Accuracy describes how close the final result is to the true value; it measures how big or small your overall error is. Reliability, on the other hand, measures how widely or narrowly the errors are spread.
I know this can be confusing, so let’s look at the famous archery example that every first-semester student in experimental-design classes is tortured with. Imagine a famous archer who shoots ten arrows at the target.
- If all ten hit the bull’s eye, the trial run was both accurate and reliable.
- If the archer had instead put all ten arrows in the top left corner of the target, his trial run would still be reliable, because he consistently reproduced the same result. However, it would not be accurate.
- On the other hand, if he had spread them unevenly but all close around the bull’s eye, he would have an accurate but unreliable result.
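The archery example can be sketched in a few lines of code: accuracy is the distance of the average hit from the bull’s eye, reliability is the spread of the hits around their own centre. The coordinates and function names below are made up purely for illustration.

```python
# Quantifying the archery example: accuracy = distance of the mean hit
# from the target; reliability = spread of hits around their own mean.

def mean_point(shots):
    """Centre of the group of hits."""
    n = len(shots)
    return (sum(x for x, _ in shots) / n,
            sum(y for _, y in shots) / n)

def accuracy_error(shots, target=(0.0, 0.0)):
    """Distance of the average hit from the target (smaller = more accurate)."""
    mx, my = mean_point(shots)
    return ((mx - target[0]) ** 2 + (my - target[1]) ** 2) ** 0.5

def spread(shots):
    """Average distance of each hit from the group's centre (smaller = more reliable)."""
    mx, my = mean_point(shots)
    return sum(((x - mx) ** 2 + (y - my) ** 2) ** 0.5 for x, y in shots) / len(shots)

on_target = [(0.0, 0.1), (-0.1, 0.0), (0.1, -0.1), (0.0, 0.0)]    # accurate and reliable
top_left  = [(-5.0, 5.1), (-5.1, 5.0), (-4.9, 4.9), (-5.0, 5.0)]  # reliable, not accurate
```

Here `on_target` scores well on both measures, while `top_left` has exactly the same spread (reliable) but a large accuracy error, just like the archer who groups all his arrows in the corner.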
Highly reliable and reproducible AI systems help developers, scientists and lawmakers to accurately describe what the AI system does. Replication files, as suggested by the HLEG, can “facilitate the process of testing and reproducing behavior”.
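One everyday reproducibility practice that such replication files rely on is pinning random seeds, so that a rerun of the same script produces bit-identical results. The training routine below is a toy stand-in for a real stochastic training run, used only to show the principle.

```python
# Reproducibility sketch: fix the random seed so a "replication file"
# can recreate exactly the same run. train_run is a toy stand-in for
# a stochastic AI training process.

import random

def train_run(seed):
    """Returns a 'result' that depends only on the seed,
    so reruns with the same seed are identical."""
    rng = random.Random(seed)   # isolated RNG; does not touch global state
    return [rng.random() for _ in range(5)]

first = train_run(42)
second = train_run(42)
# first == second: same seed, same run; a different seed gives a different run
```

In real systems, reproducibility also requires pinning library versions, data snapshots and hardware-dependent behaviour, but seeding is the usual first step.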
In the next section, we will look into the requirement of “privacy and data