How to reduce the risks from AI's original sin — the hallucination problem
Our op-ed in the Washington Post on the fundamental problem with large language models — and what organisations need to do to protect themselves from AI that confidently produces false information.
Dawn of the AI Election
Roosevelt was the first radio president, Kennedy mastered TV, Trump cracked Twitter. By the end of 2024 we may have our first AI president. What does that mean for democracy and truth?
How to handle AI in UK elections
Many expect US Big Tech to solve the problem of AI in elections. That is not good enough. This piece argues for a structured regulatory response grounded in transparency and accountability.
ChatGPT's threat to Google is that it makes search worse, not better
Consumer demand switching is less of a risk than the degradation of content supply. The real danger of generative AI to search is what it does to the quality of information on the internet.
Why "AI Bias" is a good thing
When data-driven algorithms expose a problem that cannot be put down to a few bad apples, the opportunity is there to fix it structurally. AI bias, used correctly, is a diagnostic tool.
Humans and machines should both up their game
Published in the FT in response to Sarah O'Connor's column on AI and hiring. Argues that we hold AI to a standard we would never apply to human interviewers — who can be biased, lazy and unsympathetic.
Artificial intelligence is an unforgiving mirror
Published in the FT in response to John Thornhill's column on AI bias. Argues that AI bias is a reflection of human bias encoded in data — and that this forces an uncomfortable reckoning.
HireVue Launches AI Explainability Statement — HR Industry First
Reviewed by the UK's Information Commissioner's Office (ICO), supported by Best Practice AI, Simmons & Simmons, and Jacob Turner of Fountain Court Chambers. The world's first AI explainability statement reviewed by a regulator.
C-Suite Toolkit Helps Executives Navigate the AI Landscape
The WEF published its AI C-Suite Toolkit co-authored with Best Practice AI — helping senior executives take advantage of AI opportunities while understanding and managing the associated risks.
Healthily and Best Practice AI publish world's first AI Explainability Statement reviewed by the ICO
A landmark moment in AI governance: the world's first AI Explainability Statement to have been reviewed by the UK Information Commissioner's Office — developed with Simmons & Simmons and Fountain Court Chambers.
Landmark Legal Case Highlights Corporate Accountability For Automated Decisions
The Amsterdam District Court's ruling on Uber and Ola — the first time a court anywhere in the world has addressed the issue of explainable AI — and its major implications for corporates using automated decision systems.