By 2030, AI will be a ubiquitous technology controlling almost every aspect of our lives. Autonomous vehicles will be mainstream. AI will be diagnosing our diseases and delivering new drugs. Facial recognition systems will be in every shopping mall and on every high street. Most of the changes AI brings about will be benign. Indeed, they will be a boon to humanity. However, AI has great potential to do harm. As a result, anyone embarking on an AI transformation should understand the importance of AI ethics.
What is AI ethics?
Most people have an innate understanding of basic ethics. It’s a concept encapsulated in the physician’s oath to “first, do no harm”. It underpins much of our legal system. It drives people to perform acts of charity and philanthropy. But now we are introducing a new technology that very few people understand. And if you can’t properly understand something, it becomes hard to know how to apply ethics to it.
Science fiction has long imagined what would happen in a world populated by artificial intelligence. The Terminator pictures a world where an AI has risen against humanity and is waging a war of total annihilation. Isaac Asimov set out three laws of robotics, the first of which begins “A robot may not injure a human being”. Now that AI has become mainstream, it’s time to think more clearly about AI ethics.
Europe is leading the way
As with data privacy, the EU is leading the way with AI ethics. In 2019, it published a seminal report, “Ethics Guidelines for Trustworthy AI”. More recently, it published the draft AI Act, set to become binding law across the EU over the next few years. The ethics guidelines are simple and clear, providing a straightforward framework for anyone to assess whether an AI is trustworthy and ethical. The report sets out the following ethical principles:
- Human agency and oversight. A human must always be involved somewhere in the loop.
- Technical robustness and safety. The AI must be resilient, robust, and fail-safe. This includes being robust to attacks.
- Privacy and data governance. The AI must ensure the privacy of all users and follow data privacy best practices.
- Transparency. This includes traceability, explainable AI, and good communication, to ensure all users are aware of what the AI is doing.
- Diversity, non-discrimination, and fairness. Any AI must avoid both conscious and unconscious bias against any specific group of people.
- Societal and environmental wellbeing. All the actions of the AI should impact society as a whole in a positive, democratic manner.
- Accountability. It must be possible to audit the AI’s actions and hold those responsible for it to account. This includes the need for monitoring and reporting.
In addition, they explain that delivering trustworthy AI is a combination of lawfulness, ethics, and robustness.
The dark side of AI
Over the past five years, we have already seen numerous cases where AI has been used unethically or has produced unethical outcomes. Here are three examples.
Racial bias in facial recognition systems
Facial recognition can be used for all manner of applications, from tracking visitors in a mall to identifying known hooligans at sports matches. In 2018, Amazon’s facial recognition platform, Rekognition, hit the headlines for all the wrong reasons. The American Civil Liberties Union compared photos of serving Members of Congress against a database of police mugshots, and 28 were incorrectly matched with people who had been arrested. That was bad enough, highlighting that the technology was fundamentally flawed. But far worse was the fact that the false matches disproportionately involved people of color. This sort of bias is all too common in AI systems trained on unrepresentative datasets.
Gender bias in credit scoring systems
One of the most common use cases for AI is in decision support systems. This includes helping identify fraudulent loan applications and providing credit scoring decisions. These sorts of decisions can have a huge impact on people. Being denied credit could result in failure to get a mortgage or could severely curtail your freedom to purchase goods. In 2019, Apple found itself on the wrong side of the news headlines when it was reported that their new credit card was biased against women. Specifically, it had a tendency to offer women much lower credit limits than men, even when they applied using a joint bank account or earned more than their husbands. Ultimately, the system wasn’t deliberately biased against women. A much more subtle bias was at work: the model had learned that, on average, women earn less and so may appear to be worse credit risks. The really subtle part is that gender wasn’t even explicitly used in the algorithm. But AIs are so good at identifying patterns that attributes like gender are often learned through proxy data correlations.
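To see how a proxy correlation can smuggle a protected attribute back into a model, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the feature names are invented; the point is simply that a model trained without any gender column can still produce gender-skewed decisions when other features correlate with gender.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population. Gender is never shown to the model...
gender = rng.integers(0, 2, n)                    # 0 or 1, illustrative only
# ...but income and a second, seemingly neutral feature both correlate with it.
income = rng.normal(60 - 10 * gender, 15, n)      # structural pay gap, in thousands
proxy = gender + rng.normal(0, 0.3, n)            # e.g. a spending-pattern feature

# Historical credit decisions were driven largely by income.
approved = income + rng.normal(0, 5, n) > 55

# Train only on features that look gender-neutral.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print("approval rate, gender 0:", pred[gender == 0].mean().round(3))
print("approval rate, gender 1:", pred[gender == 1].mean().round(3))
# The two rates differ sharply even though gender was never a feature.
```

Dropping the gender column does nothing here, because the remaining features carry the same signal. That is exactly the trap the Apple Card case illustrates.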
Reinforcement bias in the criminal justice system
AI is often viewed as a perfect tool for making complex decisions and predictions. While this is true in many cases, there are occasions when AI gets it badly wrong. AI has always had something of a rocky relationship with the criminal justice system. As far back as 2016, it was reported that COMPAS, an AI system used to risk-assess offenders in the US, was strongly biased against Black people. That resulted in more Black people being given custodial sentences, which in turn taught the algorithm that Black people presented a higher risk. Nowadays, predictive policing is becoming ever more popular: police forces use AI systems to identify areas with a higher crime risk. Unfortunately, these systems are prone to the same reinforcement bias as COMPAS. If an area is perceived to be at higher risk of crime, more police resources are sent there. All too often, this means the police identify and solve crimes more readily in those areas. Since the model is driven by solved-crime statistics, the result is a self-reinforcing bias.
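The feedback loop is easier to see with a toy simulation. Everything below is invented and deliberately simplistic: two districts have identical true crime rates, but patrols are allocated from recorded crime, and how much crime gets recorded depends on where the patrols are.

```python
import numpy as np

# Toy model of the predictive-policing feedback loop (illustrative only).
true_rate = np.array([10.0, 10.0])      # real incidents per period, per district
recorded = np.array([12.0, 8.0])        # a small historical skew in the records

for period in range(30):
    patrol_share = recorded / recorded.sum()          # model-driven allocation
    detection = np.minimum(1.0, 1.6 * patrol_share)   # more patrols, more detections
    recorded = 0.7 * recorded + true_rate * detection # rolling recorded-crime stats

print(recorded.round(1))   # roughly [32.  21.3]: district 0 still "looks" riskier
```

The initial skew in the records never corrects itself: the over-patrolled district keeps generating more recorded crime, which keeps justifying the extra patrols, even though the underlying crime rates are identical.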
How you can ensure your AI is responsible
If you are looking to develop an AI solution for your business, it’s important to ensure it will be responsible. That requires your team to do three things:
- Fully understand the data, including any potential sources of bias within it. This includes understanding how good AI is at spotting correlations, even if you suppress certain features in the data.
- Once you have trained the model, validate and test it against real data, checking for bias and other issues. Check that personal data isn’t leaking and make sure the results make sense (see the sketch after this list for one simple group-wise check).
- When your model is in production, monitor and maintain it continuously. Be aware that bias can creep into a model purely as a result of that model existing and being used.
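As a flavour of what the second step can look like in practice, here is a minimal sketch of a group-wise check on a held-out set. The data, column names, and the 80% threshold are purely illustrative; the idea is simply to compare outcome rates and error rates across groups before a model ever reaches production.

```python
import numpy as np
import pandas as pd

# Purely illustrative: a held-out set with model decisions, true outcomes,
# and a sensitive attribute recorded solely for auditing purposes.
rng = np.random.default_rng(42)
holdout = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=2000),
    "approved": rng.random(2000) < 0.5,   # stand-in for model.predict(X)
    "actual": rng.random(2000) < 0.5,     # ground-truth outcome
})

# Compare outcomes across groups.
holdout["error"] = holdout["approved"] != holdout["actual"]
by_group = holdout.groupby("group")[["approved", "error"]].mean()
by_group.columns = ["approval_rate", "error_rate"]
print(by_group)

# A simple disparate-impact style alert, using the classic "four-fifths"
# rule of thumb as an arbitrary illustrative threshold.
rates = by_group["approval_rate"]
if rates.min() / rates.max() < 0.8:
    print("WARNING: approval rates differ substantially between groups")
```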
Delivering these requirements needs highly skilled data scientists, top-notch AI engineers, and a dedicated team of specialist SysAdmins or DevOps engineers. In the current market, finding such a team is not only expensive, it might even be impossible. Alternatively, you could rely on Sonasoft’s end-to-end AI expertise. We believe that all AI should be responsible and ethical. A key part of that is improving visibility into what your AI is doing and the data it is receiving. That’s why we have put smart monitoring right at the heart of our AI platform, SAIBRE. We also provide a complete end-to-end process for creating your AI application, starting with data discovery and ending with a fully functional AI application deployed to production. If you want to learn more, contact us today.