Thu 2 April 2026
Daily Brief — Curated and contextualised by Best Practice AI
AI Startups Raise $297 Billion, France Buys Supercomputer Firm, and UK Teachers Report Pupil Skill Decline
TL;DR OpenAI, Anthropic, Waymo and other AI companies raised $297 billion in funding during the first three months of 2026, shattering previous records amid accelerating investor enthusiasm. France acquired supercomputer maker Bull to bolster national AI independence, while California's Governor Newsom issued an executive order mandating safety and privacy measures for state AI contracts. The Bank of England warned that AI adoption risks in UK finance will likely rise, as two-thirds of English secondary teachers report students losing core thinking skills like writing and problem-solving due to AI reliance.
The stories that matter most
Selected and contextualised by the Best Practice AI team
A.I. Companies Shatter Fund-Raising Records, as Boom Accelerates
OpenAI, Anthropic, Waymo and other artificial intelligence companies hauled in $297 billion in funding in the first three months of the year.
Live and Let AI: Former CIA officer says human spies matter more in the LLM age
AI is eroding trust in digital communications and data, giving old-school spycraft fresh relevance for modern agents.
Thomas Mulligan, a former CIA case officer and RAND researcher, argues persuasively that the proliferation of generative AI, particularly large language models, will paradoxically elevate the role of human intelligence (HUMINT) in espionage. As AI democratizes disinformation, enabling agents to fabricate convincing backstories or false intelligence, it erodes trust in digital channels and pushes operatives toward analog methods like dead drops and face-to-face encounters. While AI offers tools for pattern recognition in surveillance or psychological manipulation, Mulligan warns it also heightens risks for human handlers, making counterintelligence more vital. This perspective revives Cold War-era spycraft, but skeptics might ask whether it underplays AI's potential to automate even interpersonal deception, rendering the human element a vulnerable relic in an increasingly synthetic intelligence landscape. Overall, Mulligan's analysis underscores a timely shift: in an era of deepfakes, the human touch may be the ultimate verifier of truth.

Key points:
• AI enables easier fabrication of false intelligence, increasing the need for human verification in HUMINT.
• Digital communications lose trustworthiness due to AI-generated disinformation, favoring old-school spycraft like dead drops.
• AI tools can aid spies in manipulation and data analysis but also pose detection risks for operatives.
• Human intelligence officers will remain essential as AI complicates distinguishing fact from fiction.

Expert question (counterfactual): What if rapid advancements in AI-driven behavioral prediction and real-time deepfake countermeasures ultimately automate HUMINT tasks, making human spies as obsolete as some feared they would become?
France buys supercomputer maker Bull in tech sovereignty push
‘By supporting the emergence of Bull, we are choosing strategic independence,’ said France’s minister delegate for artificial intelligence and digital affairs.
Sovereignty is the key question in AI. Can we realistically achieve sovereignty at the level of a country or region when US Big Tech's infrastructure investment runs to $500B a year? Scale in compute, data, talent, and distribution appears decisive, and that is not something easily addressed.
ServiceNow says agentic AI will redefine work — but only if companies get governance right
Autonomous workflows are fast becoming the next enterprise AI battleground as companies move beyond copilots and enter the agentic AI platform era, where software can act on its own.
ServiceNow's push into agentic AI frames autonomous workflows as the next frontier beyond copilots, promising to redefine enterprise work through self-acting software, but only with rigorous governance to manage risks like data access and outcome drift. Executives advocate treating AI agents as a distinct identity class, neither fully human nor machine, equipped with least-privilege permissions, and starting implementations in controlled IT domains like email triage to demonstrate tangible ROI, such as cutting resolution times by 13%. Tools like the AI Control Tower enable runtime monitoring and traceability, ostensibly safeguarding PII and ensuring alignment with intent. While this narrative plays to ServiceNow's platform-centric strengths and addresses real security fault lines, it warrants skepticism: the hype around productivity gains may overlook integration complexity and enterprises' historical struggle to monetize AI, potentially positioning governance as a compliance burden rather than a velocity booster.

Key points:
• Treat AI agents as a new identity class with distinct, scoped permissions for data access.
• Begin agentic AI deployments in narrow IT use cases to measure ROI, like reducing task times from days to minutes.
• Employ the AI Control Tower for logging actions, tracing outcomes, and monitoring drift to maintain security and traceability.
• Prioritize governance and security to enable scaling, as lapses could stall AI programs amid CIO pressure for value realization.

Expert question (counterfactual): What if robust governance, while mitigating risks, inadvertently creates bureaucratic hurdles that hinder the rapid experimentation needed for agentic AI to truly outpace human workflows in diverse enterprise domains?
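The "agents as a distinct identity class with least-privilege, scoped permissions" idea can be sketched in a few lines of Python. This is a minimal illustration only; the class, scope names, and agent name below are hypothetical and do not reflect ServiceNow's actual API.

```python
# Sketch of treating an AI agent as its own identity class with
# least-privilege scopes and an audit log for traceability.
# All names here are hypothetical illustrations, not a platform API.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    allowed_scopes: frozenset  # e.g. {"email:read", "ticket:update"}
    audit_log: list = field(default_factory=list)

    def act(self, scope, action):
        """Permit the action only if its scope was explicitly granted,
        and record every attempt so outcomes stay traceable."""
        permitted = scope in self.allowed_scopes
        self.audit_log.append((scope, action, "allowed" if permitted else "denied"))
        if not permitted:
            raise PermissionError(f"{self.name} lacks scope {scope!r}")
        return f"{self.name} performed {action}"

# A triage agent scoped to email only: updating tickets is denied and logged.
triage = AgentIdentity("email-triage-bot", frozenset({"email:read"}))
triage.act("email:read", "classify inbox")
try:
    triage.act("ticket:update", "close incident")
except PermissionError:
    pass
print([entry[2] for entry in triage.audit_log])  # ['allowed', 'denied']
```

The point of the sketch is that denial and logging happen in one place: every action, allowed or not, leaves a trace that a monitoring layer can inspect for drift.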
Risks from AI adoption in UK financial sector are likely to increase
The Bank of England said on Wednesday that artificial intelligence risks are likely to increase, as UK financial firms expand their deployment of the advanced technology.
When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods
Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depend on can shrink. We build a dynamic model of public good provision where agents contribute by solving problems that other agents posted on a public platform, and the accumulated solutions form a depreciating public archive.
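The paper's core dynamic, private AI answers leave no public trace while the archive depreciates, can be illustrated with a toy simulation. This is a sketch under assumed parameters (the depreciation rate, problem volume, and AI-adoption share are invented for illustration, not taken from the paper):

```python
# Toy sketch of a depreciating public archive: each period, some share
# of problems is solved privately via AI and leaves no public trace.
# Archive update: A[t+1] = (1 - delta) * A[t] + public_contributions[t]

def simulate_archive(periods, delta, problems_per_period, ai_share):
    """Return archive size per period. `ai_share` is the fraction of
    problems solved privately with AI (never posted publicly)."""
    archive = 0.0
    history = []
    for _ in range(periods):
        public_contributions = problems_per_period * (1 - ai_share)
        archive = (1 - delta) * archive + public_contributions
        history.append(archive)
    return history

# Higher AI adoption shrinks the long-run archive: the steady state is
# problems_per_period * (1 - ai_share) / delta.
low_ai = simulate_archive(periods=200, delta=0.1, problems_per_period=10, ai_share=0.2)
high_ai = simulate_archive(periods=200, delta=0.1, problems_per_period=10, ai_share=0.8)
print(round(low_ai[-1], 1), round(high_ai[-1], 1))  # archive settles near 80.0 vs 20.0
```

Even in this crude version, the mechanism the abstract describes is visible: individual problem-solving gets faster, but the stock of public solutions that future solvers depend on converges to a smaller steady state.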
When Not to Use AI
Benjamin Laker's column astutely delineates the boundaries of AI in managerial practice, advocating its use for accelerating rote tasks like data synthesis and drafting while cautioning against outsourcing human-centric judgments on values, relationships, and trust. This balanced perspective counters the hype around AI as a productivity panacea, emphasizing that overreliance risks eroding critical thinking and emotional intelligence, both core to effective leadership. However, Laker's framework assumes managers have the self-awareness to discern these boundaries consistently, a point worth skepticism given the evidence of cognitive biases in technology adoption. By positioning AI as a tool rather than a proxy, the piece promotes deliberate integration that could foster more resilient organizational cultures amid rapid technological shifts, though it underplays the need for systemic training to prevent unintended delegation of authority.

Key points:
• Use AI for mechanical tasks like summarizing reports or outlining agendas to free up time for strategic sensemaking.
• Avoid AI in decisions involving trust, values, or relationships, such as delivering feedback or assessing a candidate's resilience.
• Treat AI outputs as raw material requiring human editing to maintain judgment and accountability.
• Instruct AI to generate counterarguments to challenge biases and broaden decision-making perspectives.

Expert question (counterfactual): What if advancing AI capabilities in emotional intelligence and contextual nuance render the distinction between mechanical and human tasks obsolete, forcing managers to redefine leadership entirely?
Does the AI business model have a fatal flaw?
The AI tools that power ChatGPT and its rivals - known as large language models, or LLMs - are a genuine productivity-enhancing innovation.
Pupils in England are losing their thinking skills because of AI, survey suggests
Two-thirds of secondary school teachers report a decline in core abilities such as writing and problem-solving.
What to Know About California's Executive Order on A.I.
Gov. Gavin Newsom, a Democrat, issued an order requiring safety and privacy guardrails for artificial intelligence companies contracting with the state.
Economics & Markets
Investments in AI are on the rise, with Microsoft investing $5.5 billion in Singapore and Nvidia investing $2 billion in Marvell. OpenAI has also secured $122 billion in funding. The AI boom is expected to continue, with big tech firms spending at least $630 billion to build AI infrastructure this year.
A.I. Companies Shatter Fund-Raising Records, as Boom Accelerates
OpenAI, Anthropic, Waymo and other artificial intelligence companies hauled in $297 billion in funding in the first three months of the year.
Musk’s SpaceX Files to Go Public in One of the Biggest IPOs Ever
The company, which launches satellites and is building an AI business, is aiming to raise between $40 billion and $80 billion in an offering.
OpenAI gets $122B to 'just build things' as the world blows them up
War, oil shocks, and market nerves could yet knock the AI boom off course.
AI Boom Fuels $172 Billion Semiconductor Investment
Global semiconductor equipment spending is set to jump 18% to $133 billion in 2026, driven by AI and data-center demand.
Three out of four global leaders will prioritize AI investment despite economic uncertainty
OpenAI raises $3bn from retail investors as part of record funding haul
ChatGPT maker taps individuals for first time as it pulls in up to $122bn
Labor & Society
The increasing use of AI is raising concerns about job displacement and the need for workers to develop new skills. Oracle is cutting thousands of jobs in a round of layoffs, while China's market regulator credits AI-driven digital tools with improving law enforcement. There are also concerns about the ethics and safety of AI, with some experts warning about the risks of AI agents.
Humanoid Data Training Gig Economy
Named a breakthrough technology for 2026.
Does it really make sense to retrain as a plumber?
It’s become the go-to advice for anyone whose white-collar job could be eliminated by AI — but it’s not very realistic.
Women Receive Less Recognition for Using AI
Women are less likely to use AI at work and receive less recognition for doing so, according to a survey by LeanIn.
One in Seven Americans Ready for AI Boss
One in seven Americans are ready for an AI boss, but they might not trust it.
From Hierarchy to Intelligence
This article explores how AI could reshape organizational structures, moving from hierarchical models to more fluid, intelligence-driven systems.
Technology & Infrastructure
The development of AI technology is driving innovation in various fields, including chip design, data centers, and cybersecurity. OpenAI and Anthropic are preparing to unleash the next leap in AI, with a focus on agentic AI and autonomous systems. Google's TurboQuant reduces AI's memory use, but won't save us from DRAM-pricing hell.
China's AI Chip Market Booms
In 2025, China's AI accelerator card market witnessed significant growth with around 4 million units shipped, dominated by Nvidia and rising local manufacturers.
Intel Reclaims Full Ownership of Irish Fab
Intel will buy back a 49% stake in its Ireland Fab 34 from Apollo Global Management for $14.2 billion, regaining full ownership.
Alibaba Unveils Third Closed-Source AI Model in Focus on Profit
Alibaba Group Holding Ltd. has released its third proprietary AI model in as many days, reinforcing the company’s intent to focus on profiting off its flagship artificial intelligence services.
Can Science Predict When a Study Won’t Hold Up?
Conducting research is hard; confirming the results is, too. And artificial intelligence isn’t yet ready to help, a major new study finds.
Adoption & Impact
The adoption of AI is having a significant impact on various industries, including healthcare, finance, and education. Companies such as Intuit and Salesforce are using AI to improve their services, while others are investing in AI-powered solutions to enhance their operations.
Risks from AI adoption in UK financial sector are likely to increase
The Bank of England said on Wednesday that artificial intelligence risks are likely to increase, as UK financial firms expand their deployment of the advanced technology.
Intuit's AI agents hit 85% repeat usage. The secret was keeping humans involved
When Intuit shipped AI agents to 3 million customers, 85% came back. The reason, according to the company's EVP and GM: combining AI with human expertise turned out to matter more than anyone expected.
Will.i.am's AI-Powered Vehicle
Will.i.am's Trinity three-wheeled electric vehicle aims to be an AI agent, helping its driver with tasks such as email and strategy.
Employees Rely on AI for Support
This article examines how employees increasingly use AI tools like ChatGPT for emotional support, career advice, and personal growth, raising concerns around AI ethics and workplace dynamics.
China Improves Law Enforcement with AI
China's market authority said its use of digital tools over the past three years has improved law enforcement, with enhanced data-sharing on its national case-handling platform boosting speed and efficiency.
ServiceNow says agentic AI will redefine work — but only if companies get governance right
Autonomous workflows are fast becoming the next enterprise AI battleground as companies move beyond copilots and enter the agentic AI platform era, where software can act on its own.
Having Humans Check AI Agents May Not Be Necessary
Contrary to popular opinion, having a 'human in the loop' when companies deploy new agentic artificial intelligence tools may not be necessary in the future.
Robotaxi Outage in China Leaves Passengers Stranded on Highways
Geopolitics
Latest arXiv Papers
Get the full executive brief
Receive curated insights with practical implications for strategy, operations, and governance.