“It is all a black box!” is no longer good enough. If AI drives your business, you need to be able to explain what is going on.
As Artificial Intelligence (AI) plays an increasingly important role in all of our lives, the need to explain what is going on in simple, user-friendly terms gets ever more urgent. Stakeholders, from media to regulators, are increasingly focused on ethical and legal concerns, and commentary about “black boxes” or “mutant algorithms” does not help. Especially not when people’s life outcomes are at stake.
A practical answer is to produce an AI Explainability Statement. This is a public-facing document that tells end-users and consumers why, when and how AI is being used. It provides transparency on data sourcing and tagging, how algorithms are trained, the processes in place to spot and respond to bias, and how governance works. It should also show that the wider implications and impact of AI deployment have been thought through and reviewed.
It sounds like — and can be — quite a lot of work. So why should you prepare an AI Explainability Statement?
1. In Europe, it is expected
Under the GDPR, fully automated decisions with legal or similarly significant effects need to be explainable. You need to be able to explain how, where and why you are using AI in ways that affect individuals. Clearly this matters most where decisions carry legal consequences, but it is increasingly best practice everywhere.
2. It will help you get your stuff together internally
Does your right hand know what your left hand is doing? Many organisations have not yet had the joined-up conversation between those building the AI, those creating the data, those worrying about the regulations, those explaining it to customers and those ultimately responsible for good governance and oversight. Creating an AI Explainability Statement brings all these stakeholders together — you would be surprised what might have slipped through the cracks.
3. It is good for your customers
Customers like to be treated as adults — especially if, because you are a B2B supplier, they are using your algorithms with their own customers. Not everyone is interested in the details, but most like to know that the information is there — especially before it turns into a crisis. (You might appreciate that too.)
4. It may protect you in court
Because this can turn into a crisis. Courts in Europe — ruling on Uber and Ola in the Netherlands, and on Deliveroo and Glovo in Italy — have already been clear that if your AI is going to affect individuals (in these cases, their employment rights) then you had better be able to explain what is going on. These cases are setting clear precedents.
5. And this goes well beyond Europe
China, New York and California are all moving in the same legal direction. Transparency is at the heart of emerging regulation everywhere. Meanwhile, Europe is gearing up to introduce further AI regulation, and this will be based on the principle of enhanced transparency.
There is a bonus…
6. We can make it easy
At Best Practice AI, working with colleagues at Fountain Court and Simmons & Simmons, we have already created AI Explainability Statements which have been reviewed by the relevant UK regulator, the Information Commissioner’s Office (ICO).
If you want help making this happen, or just want to know more, then do get in touch.