
First criterion: Complying with existing laws

Before we come to the seven key requirements of Trustworthy A.I., I would like to elaborate on the complexity of the three global criteria of Trustworthy A.I.:

  • comply with existing laws,
  • adhere to human ethical principles,
  • and be robust, both from a technical and a social perspective.

When you read the three criteria above, they seem very straightforward, but on closer examination even these three criteria are hard to meet. Let me explain this a little further.

Imagine you are the project manager of an A.I. project and you want to make sure that your A.I. application complies with existing law – not just today, but also tomorrow and next year. By definition, however, A.I. systems are expected to evolve over time.

So how do you make sure that your A.I. system is not going to go to the dark side over time? And how do you make sure that you keep an eye on the changing legal landscape? What is allowed today might be forbidden tomorrow.

Microsoft’s team developing Tay, for example, chained Tay’s dark side by literally forbidding her to talk about anything of a sensitive nature (a crude illustration of such a rule-based guardrail is sketched below). But is using unnaturally restrictive rules for an evolving entity really the best solution for the future?
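To make the idea of such restrictive rules a bit more concrete, here is a minimal, hypothetical Python sketch of a keyword blocklist placed in front of a chat model. The topic list, the refusal text and the generate_reply callback are my own assumptions for illustration, not Microsoft’s actual implementation.

```python
# Hypothetical, minimal guardrail: refuse any prompt that mentions a blocked topic.
# Topic list and refusal text are invented for this illustration.
BLOCKED_TOPICS = {"politics", "religion", "violence"}

def guarded_reply(prompt, generate_reply):
    """Return a refusal for 'sensitive' prompts, otherwise delegate to the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I'd rather not talk about that."
    return generate_reply(prompt)

# Example usage with a stand-in "model":
echo_model = lambda p: f"Model answer to: {p}"
print(guarded_reply("What do you think about politics?", echo_model))  # refused
print(guarded_reply("How is the weather today?", echo_model))          # answered
```

The brittleness is obvious: a synonym, a misspelling or a new topic slips straight through the filter, which is exactly why hard-coded restrictions rarely keep an evolving system on the right side of the law for long.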

Additionally, liability for damage caused by A.I. is also very complicated from a legal standpoint. If you build a car with faulty brakes, cause and effect can be clearly demonstrated and argued. But can you truly be held responsible for the creation of a system which evolved into something unforeseeable? If you compare A.I. systems to children and their makers, you might argue with “parental” liability. For example, in all 50 US states, parents are responsible for all malicious or willful property damage done by their children. But this parental liability is never criminal: the parents are obligated only to financially compensate the party harmed by their child’s actions. If you follow this line of thought, would damage done by A.I. systems then never trigger any criminal charges for companies? Think about this a little when you turn on the self-driving mode of your car tomorrow, or when you imagine how many military drones are flying over people’s heads.

I am looking forward to the upcoming legal cases, as these issues will definitely be debated in courts all over the world in the near future.

To clarify the first criterion, the AI HLEG spells it out by suggesting five rough legal guardrails:

  • 1. Respect for human dignity.
    The EU document states that all people should be “treated with respect due to them as moral subjects, rather than merely as objects to be sifted, sorted, scored, herded, conditioned or manipulated”. I do not know how you feel, but if I let the multitude of existing A.I. applications on the internet pass before my mind’s eye, I have the feeling that many companies use A.I. solely to sort, score and manipulate us. Just take the recommendation engine of a movie streaming company or an e-commerce retailer. Isn’t manipulation (into buying or watching something) exactly the sole use case of a recommendation engine?
  • 2. Freedom of the individual
    The document defines freedom as “enabling individuals to wield even higher control over their lives, including […] the right to private life and privacy.” Well, I do not want to blow the same whistle again, but I do not really see the right to privacy improving with the masses of data collected through social media, voice assistants, smart household items and search engines. What is your opinion? Do you see us moving in the direction of more freedom, or less?
  • 3. Respect for democracy, justice and the rule of law
  • 4. Equality, non-discrimination and solidarity
    This point is very prominent in the news these days. The HLEG stresses that it is “more than non-discrimination”. The A.I. system should already be trained with data that balances all necessary cultural variables and “should be as inclusive as possible”.
    When I was a young psychology major at the University of Goettingen, our statistics professor told us every week that (psychological) human experiments can never be unbiased. From a statistical and practical standpoint, psychological experiments and A.I. systems are very similar: both use data from a subset of the population and then try to generalize the results to the rest of the population. If you tried to be truly unbiased, you would have to find a subset of people which represents the whole population 100% in EVERY attribute. The attributes we are looking at are not just age and gender, but also height, weight, hair color or any psychological or social variable you can think of (introversion, emotional stability, job experience, financial wisdom, place of living, living situation, tendency to gamble …). And since the list of attributes is literally endless, the chance of finding a truly “representative” subset is zero. We can keep trying, but we will never fully succeed. That means that every system is and always will be biased by default. But it also means that, by definition, “being biased” is not automatically the same as “being racist”. (A small sketch of this sampling problem follows after this list.)
  • 5. Citizens’ rights
    The HLEG states that A.I. should improve and not undermine existing citizens’ rights in areas such as voting, good administration or access to public documents.
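To make the representativeness problem from point 4 a bit more tangible, here is a minimal, hypothetical Python sketch (my own illustration, not taken from the HLEG document): a population with two groups, and a data-collection channel that systematically under-samples one of them. All group names and numbers are invented for the example.

```python
# Hypothetical illustration of sampling bias: a skewed collection channel
# produces a systematically wrong estimate, no matter how much data we gather.
import random
import statistics

random.seed(42)

# Invented population: two groups with different average outcomes ("scores").
population = (
    [("group_a", random.gauss(70, 10)) for _ in range(8_000)]
    + [("group_b", random.gauss(55, 10)) for _ in range(2_000)]
)
true_mean = statistics.mean(score for _, score in population)

# Biased data collection: group_b members are only reached 5% of the time,
# e.g. because the data comes from a channel that group_b rarely uses.
biased_sample = [
    (group, score)
    for group, score in random.sample(population, 1_000)
    if group == "group_a" or random.random() < 0.05
]
biased_mean = statistics.mean(score for _, score in biased_sample)

print(f"population mean:    {true_mean:.1f}")
print(f"biased-sample mean: {biased_mean:.1f}  <- systematically off")
```

The point is not the concrete numbers, which are made up, but the mechanism: as long as the sampling channel itself is skewed, collecting more data does not remove the bias from the resulting model.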

In the next section we will describe the second criterion, “adhere to human ethical principles”.
