AI Intelligence Brief

Thu 2 April 2026

Daily Brief — Curated and contextualised by Best Practice AI

38 articles

AI Startups Raise $297 Billion, France Buys Supercomputer Firm, and UK Teachers Report Pupil Skill Decline

TL;DR OpenAI, Anthropic, Waymo and other AI companies raised $297 billion in funding during the first three months of 2026, shattering previous records amid accelerating investor enthusiasm. France acquired supercomputer maker Bull to bolster national AI independence, while California's Governor Newsom issued an executive order mandating safety and privacy measures for state AI contracts. The Bank of England warned that AI adoption risks in UK finance will likely rise, as two-thirds of English secondary teachers report students losing core thinking skills like writing and problem-solving due to AI reliance.

Editor's highlights

The stories that matter most

Selected and contextualised by the Best Practice AI team

10 of 38 articles
Lead story
Editor's pick · PAYWALL
NYT · Yesterday

A.I. Companies Shatter Fund-Raising Records, as Boom Accelerates

OpenAI, Anthropic, Waymo and other artificial intelligence companies hauled in $297 billion in funding in the first three months of the year.

Editor's pick · Defense & National Security
theregister · Today

Live and Let AI: Former CIA officer says human spies matter more in the LLM age

AI is eroding trust in digital communications and data, giving old-school spycraft fresh relevance for modern agents.

BPAI context

Thomas Mulligan, a former CIA case officer and RAND researcher, argues persuasively that the proliferation of generative AI, particularly large language models, will paradoxically elevate the role of human intelligence (HUMINT) in espionage. As AI democratizes disinformation—enabling agents to fabricate convincing backstories or false intelligence—it erodes trust in digital channels, pushing operatives toward analog methods like dead drops and face-to-face encounters. While AI offers tools for pattern recognition in surveillance or psychological manipulation, Mulligan warns it also heightens risks for human handlers, making counterintelligence more vital. This perspective revives Cold War-era spycraft, but skeptics might question whether it underplays AI's potential to automate even interpersonal deceptions, potentially rendering human elements vulnerable relics in an increasingly synthetic intelligence landscape. Overall, Mulligan's analysis underscores a timely shift: in an era of deepfakes, the human touch may indeed be the ultimate verifier of truth.

Key points:
• AI enables easier fabrication of false intelligence, increasing the need for human verification in HUMINT.
• Digital communications lose trustworthiness due to AI-generated disinformation, favoring old-school spycraft like dead drops.
• AI tools can aid spies in manipulation and data analysis but also pose detection risks for operatives.
• Human intelligence officers will remain essential as AI complicates distinguishing fact from fiction.

Expert question (counterfactual): What if rapid advancements in AI-driven behavioral prediction and real-time deepfake countermeasures ultimately automate HUMINT tasks, making human spies as obsolete as they were feared to be?

Editor's pick · Technology
siliconrepublic · Yesterday

France buys supercomputer maker Bull in tech sovereignty push

‘By supporting the emergence of Bull, we are choosing strategic independence,’ said France’s minister delegate for artificial intelligence and digital affairs.

BPAI context

Sovereignty is the key question in AI. Can a country or region realistically achieve it when US Big Tech's infrastructure investment runs to $500B a year? Scale in compute, data, talent and distribution seems to be what matters, and that is not something easily replicated.

Editor's pick · Technology
siliconangle · Today

ServiceNow says agentic AI will redefine work — but only if companies get governance right

Autonomous workflows are fast becoming the next enterprise AI battleground as companies move beyond copilots and enter the agentic AI platform era, where software can act on its own.

BPAI context

ServiceNow's push into agentic AI frames autonomous workflows as the next frontier beyond copilots, promising to redefine enterprise work through self-acting software, but only with rigorous governance to manage risks like data access and outcome drift. Executives advocate treating AI agents as a distinct identity class—neither fully human nor machine—equipped with least-privilege permissions, starting implementations in controlled IT domains like email triage to demonstrate tangible ROI, such as slashing resolution times by 13%. Tools like the AI Control Tower enable runtime monitoring and traceability, ostensibly safeguarding PII and ensuring alignment with intent. While this narrative aligns with ServiceNow's platform-centric strengths and addresses real security fault lines, it warrants skepticism: the hype around productivity gains may overlook integration complexities and the historical struggle of enterprises to monetize AI, potentially positioning governance as a compliance burden rather than a velocity booster.

Key points:
• Treat AI agents as a new identity class with distinct, scoped permissions for data access.
• Begin agentic AI deployments in narrow IT use cases to measure ROI, like reducing task times from days to minutes.
• Employ AI Control Tower for logging actions, tracing outcomes, and monitoring drift to maintain security and traceability.
• Prioritize governance and security to enable scaling, as lapses could stall AI programs amid CIO pressures for value realization.

Expert question (counterfactual): What if robust governance, while mitigating risks, inadvertently creates bureaucratic hurdles that hinder the rapid experimentation needed for agentic AI to truly outpace human workflows in diverse enterprise domains?
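The "agents as a distinct identity class" idea can be sketched in a few lines of Python. This is our own illustrative code, not ServiceNow's API: the AgentIdentity class and the scope strings are hypothetical, and real platforms handle permissions and audit trails with far more machinery.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent as its own identity class with least-privilege scopes."""
    name: str
    scopes: frozenset  # e.g. {"email:read", "ticket:update"}
    audit_log: list = field(default_factory=list)

    def act(self, action: str, resource: str):
        scope = f"{resource}:{action}"
        allowed = scope in self.scopes
        # Every attempt is logged, allowed or denied, for traceability.
        self.audit_log.append((scope, allowed))
        if not allowed:
            raise PermissionError(f"{self.name} lacks scope {scope}")
        return f"{self.name} performed {scope}"

# Start narrow, per the article: an email-triage agent that can read
# mail and update tickets, but cannot touch a (hypothetical) PII store.
triage = AgentIdentity("email-triage",
                       frozenset({"email:read", "ticket:update"}))
triage.act("read", "email")                 # permitted and logged
try:
    triage.act("read", "customer_pii")      # denied and logged
except PermissionError:
    pass
```

The point of the sketch is the shape of the control: scopes are granted per agent rather than inherited from a human user, and the log captures denied attempts as well as successes, which is what drift monitoring would consume.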

Editor's pick · Financial Services
Artificial Intelligence Newsletter · Today

Risks from AI adoption in UK financial sector are likely to increase

The Bank of England said on Wednesday that artificial intelligence risks are likely to increase, as UK financial firms expand their deployment of the advanced technology.

Editor's pick
arXiv · Yesterday

When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods

Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depend on can shrink. We build a dynamic model of public good provision where agents contribute by solving problems that other agents posted on a public platform, and the accumulated solutions form a depreciating public archive.
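One minimal way to formalise a "depreciating public archive" (our illustrative notation; the paper's actual model may differ): let $A_t$ be the archive stock and $s_t$ the flow of solutions posted publicly in period $t$, with depreciation rate $\delta$:

```latex
A_{t+1} = (1-\delta)\,A_t + s_t
```

With a constant flow $s$, the archive settles at a steady state $A^* = s/\delta$. The abstract's mechanism then reads off directly: if generative AI diverts problem-solving into private channels, $s_t$ falls, and the long-run archive that future problem-solvers depend on shrinks.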

Editor's pick
MIT Sloan Management Review · 3 days ago

When Not to Use AI

BPAI context

Benjamin Laker's column astutely delineates the boundaries of AI in managerial practice, advocating its use for accelerating rote tasks like data synthesis and drafting while cautioning against outsourcing human-centric judgments on values, relationships, and trust. This balanced perspective counters the hype surrounding AI as a panacea for productivity, emphasizing that overreliance risks eroding critical thinking and emotional intelligence—core to effective leadership. However, Laker's framework assumes managers possess the self-awareness to discern these boundaries consistently, a skepticism warranted given evidence of cognitive biases in tech adoption. By positioning AI as a tool rather than a proxy, the piece promotes deliberate integration, potentially fostering more resilient organizational cultures amid rapid technological shifts, though it underplays the need for systemic training to prevent unintended delegation of authority.

Key points:
• Use AI for mechanical tasks like summarizing reports or outlining agendas to free up time for strategic sensemaking.
• Avoid AI in decisions involving trust, values, or relationships, such as delivering feedback or assessing candidate resilience.
• Treat AI outputs as raw material requiring human editing to maintain judgment and accountability.
• Instruct AI to generate counterarguments to challenge biases and broaden decision-making perspectives.

Expert question (counterfactual): What if advancing AI capabilities in emotional intelligence and contextual nuance render the distinction between mechanical and human tasks obsolete, forcing managers to redefine leadership entirely?

Editor's pick
Reuters · Yesterday

Does the AI business model have a fatal flaw?

The AI tools that power ChatGPT and its rivals - known as large language models, or LLMs - are a genuine productivity-enhancing innovation.

Editor's pick · Education
Guardian · Today

Pupils in England are losing their thinking skills because of AI, survey suggests

Two-thirds of secondary school teachers report a decline in core abilities such as writing and problem-solving.

Editor's pick · PAYWALL · Government & Public Sector
The New York Times · Yesterday

What to Know About California's Executive Order on A.I.

Gov. Gavin Newsom, a Democrat, issued an order requiring safety and privacy guardrails for artificial intelligence companies contracting with the state.

Economics & Markets

Investment in AI continues to climb: Microsoft is investing $5.5 billion in Singapore, Nvidia $2 billion in Marvell, and OpenAI has secured $122 billion in funding. The boom is expected to continue, with big tech firms spending at least $630 billion on AI infrastructure this year.

7 articles

Labor & Society

The increasing use of AI is raising concerns about job displacement and the need for workers to develop new skills. Oracle is cutting thousands of jobs in a round of layoffs, while the UK is seeking to improve law enforcement with AI. There are also concerns about the ethics and safety of AI, with some experts warning about the risks of AI agents.

10 articles
AI Ethics & Safety · 2 articles
Editor's pick · Defense & National Security
theregister · Today

Live and Let AI: Former CIA officer says human spies matter more in the LLM age

Technology & Infrastructure

The development of AI technology is driving innovation in various fields, including chip design, data centers, and cybersecurity. OpenAI and Anthropic are preparing to unleash the next leap in AI, with a focus on agentic AI and autonomous systems. Google's TurboQuant reduces AI's memory use, but won't save us from DRAM-pricing hell.

9 articles

Adoption & Impact

The adoption of AI is having a significant impact on various industries, including healthcare, finance, and education. Companies such as Intuit and Salesforce are using AI to improve their services, while others are investing in AI-powered solutions to enhance their operations.

9 articles
AI Adoption & Diffusion · 3 articles
Editor's pick
MIT Sloan Management Review · 3 days ago

When Not to Use AI

AI Applications · 6 articles
Editor's pick · Technology
siliconangle · Today

ServiceNow says agentic AI will redefine work — but only if companies get governance right

Editor's pick · Technology
Artificial Intelligence Newsletter · Yesterday

Having Humans Check AI Agents May Not Be Necessary

Contrary to popular opinion, having a 'human in the loop' when companies deploy new agentic artificial intelligence tools may not be necessary in the future.

Editor's pick · Transportation & Logistics
Wired · Yesterday

Robotaxi Outage in China Leaves Passengers Stranded on Highways

Geopolitics

2 articles

Latest arXiv Papers

1 article
© 2026 Best Practice AI Ltd. All rights reserved.

Get the full executive brief

Receive curated insights with practical implications for strategy, operations, and governance.

AI Daily Brief — leaders actually read it.

Free email — not hiring or booking. Optional BPAI updates for company news. Unsubscribe anytime.


No spam. Unsubscribe anytime. Privacy policy.