Wed 1 April 2026
Daily Brief — Curated and contextualised by Best Practice AI
Bessemer Charts Infrastructure Path, Microsoft Diversifies Models, and Anthropic Leaks Source Code
TL;DR Bessemer Venture Partners released an AI infrastructure roadmap for 2026, emphasizing harnesses, reinforcement learning platforms, inference optimization, and world models to move beyond scaling laws. Microsoft updated its Copilot Researcher tool to integrate models from both OpenAI and Anthropic, signaling a shift to multi-model strategies in enterprise AI. Anthropic accidentally exposed 500,000 lines of its Claude Code source code via an npm packaging error, following a prior leak of details on its upcoming Mythos model. In China, former Baidu president Zhang Yaqin reported explosive growth in AI tokenization beyond OpenAI's earlier examples. MIT researchers developed an AI model to detect atomic defects, aiming to enhance materials' strength and efficiency.
The stories that matter most
Selected and contextualised by the Best Practice AI team
Former Baidu President on AI Tokenization in China
The former President of Baidu says AI tokenization is exploding in China, far beyond what OpenClaw illustrated earlier this year. Zhang Yaqin, who runs China's Institute of AI Industry Research at Beijing's Tsinghua University, speaks to Bloomberg's Chief North Asia Correspondent Stephen Engle in Beijing. (Source: Bloomberg)
Anthropic accidentally exposes Claude Code source code in npm packaging error
Anthropic PBC has accidentally exposed the source code for its Claude Code command-line interface tool through a packaging error that led to the inclusion of sensitive files in a publicly distributed node package manager (npm) release. Claude Code is Anthropic’s command-line tool that lets developers interact with its Claude artificial intelligence models directly from […] (Source: SiliconANGLE)
Oops. So what was found?
Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos
Hundreds of thousands of lines of code were exposed, giving researchers insight into upcoming models and internal architecture.
Anthropic's back-to-back leaks, first a draft blog post unveiling the potent 'Mythos' or 'Capybara' model with heightened cybersecurity risks, now 500,000 lines of Claude Code source code, expose troubling lapses in operational security at a firm pioneering AI safety. While Anthropic downplays the code exposure as mere 'human error' in packaging, with no sensitive data loss, experts highlight its value: the agentic harness code reveals proprietary guardrails, tool integrations, and internal APIs, potentially enabling competitors to reverse-engineer enhancements or adversaries to probe safeguards. This pattern, echoing a February 2025 incident, undermines Anthropic's safety-first ethos, especially as Capybara promises unprecedented capabilities, such as zero-day vulnerability detection, that could be dual-use. Viewed skeptically, Anthropic's assurances of robust processes ring hollow amid repeated misconfigurations, signalling a need for stricter release protocols in an industry racing toward AGI amid escalating risks.
Key points:
• Anthropic leaked 500,000 lines of Claude Code source code via npm due to human error, exposing agentic harness details.
• The leak follows a prior exposure of 'Mythos/Capybara' model info, described as more advanced than Opus with major cybersecurity implications.
• Experts warn the code could aid competitors in replicating features or attackers in bypassing safeguards.
• No customer data was compromised, but internal architecture insights were revealed, potentially informing open-source alternatives.
Expert question (counterfactual): What if these leaks were not mere accidents but subtle strategic disclosures to accelerate industry-wide safety discussions on dual-use AI risks?
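Packaging leaks like this typically happen when a publish step sweeps up files that fall outside the intended allowlist. As an illustrative sketch (not Anthropic's actual tooling; the patterns and file names below are hypothetical), a pre-publish check can diff the packed file list against an allowlist of glob patterns and flag anything unexpected:

```python
import fnmatch

# Hypothetical allowlist of path patterns that are safe to publish.
ALLOWED = ["package.json", "README.md", "dist/*"]

def unexpected_files(packed):
    """Return packed paths that match no allowlist pattern."""
    return [p for p in packed
            if not any(fnmatch.fnmatch(p, pat) for pat in ALLOWED)]

# A packed file list containing one internal source file by mistake.
packed = ["package.json", "dist/cli.js", "src/internal/agent_harness.ts"]
print(unexpected_files(packed))  # flags the internal source file
```

Wiring a check like this into CI so a release aborts on any flagged path is a cheap guard against exactly this class of human error.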
The jobs AI can’t do – and the young adults doing them
For many young people entering the workforce, the stigma of hands-on jobs is fading. There's a competitive appeal – and they all require human expertise. Gib and Michelle Mouser are proud of their son's career – just not in the way they once imagined. Only 23 years old, Cale Mouser already earns well over six figures, and he'll end up making substantially more.
Palantir’s UK boss criticises ‘ideological’ groups as ministers move to scrap NHS contract
Louis Mosley says government should resist calls to trigger break clause in £330m deal with US analytics company. Palantir’s UK boss has urged the government not to give in to “ideologically motivated campaigners” as government ministers explore a way out of a £330m NHS contract with the tech company. Ministers have sought advice on triggering a break clause in Palantir’s deal to deliver the Federated Data Platform (FDP), amid questions over the company’s presence in the public sector.
AI Infrastructure Roadmap: Five frontiers for 2026
The first generation of AI was built for a world where the model was the product, and progress meant bigger weights, more data, and stellar benchmarks. AI infrastructure mirrored this reality, fueling the rise of giants in foundation models, compute capacity, training techniques, and data ops.
Microsoft Revamps AI Tool
Microsoft has revamped one of its AI research tools to use models from both OpenAI and Anthropic, the clearest sign yet that the future of AI may be multi-model. The software giant is taking advantage of multiple models within its Microsoft 365 Copilot Researcher.
A clear admission that a multi-horse betting strategy is required for the underlying foundation models.
AI Infrastructure Roadmap 2026
This Bessemer Venture Partners article outlines a forward-looking AI infrastructure roadmap, identifying key areas such as harnesses, reinforcement learning platforms, inference optimization, and world models development.
MIT Researchers Use AI to Uncover Atomic Defects
A new model measures defects that can be leveraged to improve materials' mechanical strength, heat transfer, and energy-conversion efficiency.
Meta's new structured prompting technique makes LLMs significantly better at code review — boosting accuracy to 93% in some cases
Deploying AI agents for repository-scale tasks like bug detection, patch verification, and code review requires overcoming significant technical hurdles. One major bottleneck: the need to set up dynamic execution sandboxes for every repository, which are expensive and computationally heavy. Using large language model (LLM) reasoning instead of executing the code is rising in popularity to bypass this overhead, yet it frequently leads to unsupported guesses and hallucinations.
Meta has spent billions upon billions on AI infrastructure, talent and companies. It is in catch-up mode.
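The structured-prompting idea described above can be sketched as a template that forces the model to quote evidence before stating findings, so conclusions cannot float free of the patch text. The section names and schema below are illustrative, not Meta's published template:

```python
def build_review_prompt(diff: str) -> str:
    """Build a structured, evidence-first code-review prompt.

    Illustrative sketch only: the sections approximate the idea of
    constraining LLM reasoning so findings must cite quoted lines
    rather than rest on unsupported guesses or hallucinations.
    """
    sections = [
        "1. CONTEXT: restate what the change claims to do.",
        "2. EVIDENCE: quote the exact lines relevant to each finding.",
        "3. FINDINGS: report a bug only if step 2 quotes support it.",
        "4. VERDICT: 'approve' or 'request-changes' with a one-line rationale.",
    ]
    header = ("You are reviewing a patch without executing it. "
              "Respond in four sections:\n")
    return header + "\n".join(sections) + "\n\nPATCH:\n" + diff

# Example: a one-line patch that flips a sign.
prompt = build_review_prompt("- retries = 3\n+ retries = -3")
```

The payoff of the fixed schema is that downstream tooling can parse the VERDICT section mechanically, and the EVIDENCE requirement gives reviewers a way to audit whether a reported bug actually appears in the diff.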
Economics & Markets
China’s New AI Stocks Drive Volatility in Asia as Hype Grows
Chinese artificial‑intelligence firms have emerged as one of the most volatile pockets of Asia’s equity markets, with shares of newly listed model developers and chip designers swinging on retail flows.
If OpenAI is to float on the stock market this year, it needs to start turning a profit
The poster child of the AI boom, valued at $850bn, needs to show strategic discipline after ‘casting its net too wide’. If OpenAI is going to float this year, it has to get serious about its business model. The wow factor around the US company – the poster child of an AI industry boom that has stoked fears of a stock market bubble – has been long established, but when will the profits come? The party can’t go on for ever.
US banks raise borrowing costs for private credit funds as AI fears pummel valuations
US banks are raising borrowing costs for private credit funds as AI fears pummel valuations.
OpenAI raises $3bn from retail investors as part of record funding haul
ChatGPT maker taps individuals for first time as it pulls in up to $122bn
Oracle Lays Off Workers Amid Heavy AI Investment
Investors see the database firm as a barometer of the financial prospects for artificial intelligence. Oracle’s stock was up 5%.
Nebius furthers European expansion with $10 billion AI data centre in Finland | Reuters
Nebius has recently won supply contracts worth more than $40 billion in total with U.S. software giants Microsoft and Meta.
Labor & Society
The jobs AI can’t do – and the young adults doing them
For many young people entering the workforce, the stigma of hands-on jobs is fading. There's a competitive appeal – and they all require human expertise. Gib and Michelle Mouser are proud of their son's career – just not in the way they once imagined. Only 23 years old, Cale Mouser already earns well over six figures, and he'll end up making substantially more.
15% of Americans Say They'd Be Willing to Work for an AI Boss
According to a Quinnipiac University poll, 15% of Americans say they'd be willing to have a job where their direct supervisor was an AI program that assigned tasks and set schedules.
Americans Souring on AI
A new Quinnipiac poll of 1,400 Americans finds 55% believe AI does more harm than good, up 11 percentage points year-over-year. The rest of the data in the study doesn't paint a pretty picture either.
Counsel for Anthropic and OpenAI on AI Safety
The tension between product safety and user privacy is one of the “hardest questions that we have to grapple with on a daily basis,” Anthropic product counsel Mengyi Xu said Monday, while OpenAI Senior Counsel Daniel Kehl said the issues are currently “converging” in technology policy conversations focused on youth.
Agentic AI Cyber Attacks Growing
Anthropic, Amazon and Meta Platforms have all learned the hard way that the cybersecurity risks posed by AI agents are no longer theoretical. Along with other companies, they've recently reported security breaches carried out by AI agents, and the incidents were a key topic at the annual RSA conference last week.
Google Legal Chief on AI Assistants and Privacy
More capable and personalized AI assistants are pushing companies to be “innovative” with privacy frameworks and adapt to consumers’ expectations for seamless experiences, Google legal chief Kent Walker said Monday, adding that market forces will drive competition on privacy safeguards as much as product quality.
Firms Must Get Back to Data Governance Basics
As artificial intelligence adds cyber risk, human training and getting back to the basics of data security training will be key safeguards for companies, Irish data protection authorities said Monday.
OpenClaw has 500,000 instances and no enterprise kill switch
“Your AI? It’s my AI now.” The line came from Etay Maor, VP of Threat Intelligence at Cato Networks, in an exclusive interview with VentureBeat at RSAC 2026, and it describes exactly what happened to a U.
Beijing Court Rejects AI Defense in Defamation Case
A Beijing court ruled that users of generative artificial intelligence tools remain legally responsible for verifying the accuracy of content they publish, rejecting a defendant’s attempt to use AI authorship as a defense in a defamation case.
Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos
Hundreds of thousands of lines of code were exposed, giving researchers insight into upcoming models and internal architecture.
Anthropic's back-to-back leaks, first a draft blog post unveiling the potent 'Mythos' or 'Capybara' model with heightened cybersecurity risks, now 500,000 lines of Claude Code source code, expose troubling lapses in operational security at a firm pioneering AI safety. While Anthropic downplays the code exposure as mere 'human error' in packaging, with no sensitive data loss, experts highlight its value: the agentic harness code reveals proprietary guardrails, tool integrations, and internal APIs, potentially enabling competitors to reverse-engineer enhancements or adversaries to probe safeguards. This pattern, echoing a February 2025 incident, undermines Anthropic's safety-first ethos, especially as Capybara promises unprecedented capabilities, such as zero-day vulnerability detection, that could be dual-use. Viewed skeptically, Anthropic's assurances of robust processes ring hollow amid repeated misconfigurations, signalling a need for stricter release protocols in an industry racing toward AGI amid escalating risks.
Key points:
• Anthropic leaked 500,000 lines of Claude Code source code via npm due to human error, exposing agentic harness details.
• The leak follows a prior exposure of 'Mythos/Capybara' model info, described as more advanced than Opus with major cybersecurity implications.
• Experts warn the code could aid competitors in replicating features or attackers in bypassing safeguards.
• No customer data was compromised, but internal architecture insights were revealed, potentially informing open-source alternatives.
Expert question (counterfactual): What if these leaks were not mere accidents but subtle strategic disclosures to accelerate industry-wide safety discussions on dual-use AI risks?
Anthropic accidentally exposes Claude Code source code in npm packaging error
Anthropic PBC has accidentally exposed the source code for its Claude Code command-line interface tool through a packaging error that led to the inclusion of sensitive files in a publicly distributed node package manager (npm) release. Claude Code is Anthropic’s command-line tool that lets developers interact with its Claude artificial intelligence models directly from […] (Source: SiliconANGLE)
Oops. So what was found?
Palantir’s UK boss criticises ‘ideological’ groups as ministers move to scrap NHS contract
Louis Mosley says government should resist calls to trigger break clause in £330m deal with US analytics company. Palantir’s UK boss has urged the government not to give in to “ideologically motivated campaigners” as government ministers explore a way out of a £330m NHS contract with the tech company. Ministers have sought advice on triggering a break clause in Palantir’s deal to deliver the Federated Data Platform (FDP), amid questions over the company’s presence in the public sector.
Next US State Regulatory Trend and Agentic AI
Agentic AI will likely be the next area of focus for state legislators, and there could be more excitement for AI regulation in states after the mid-term elections, Connecticut state Senator James Maroney said Monday at the world’s largest gathering of privacy professionals in the US.
Americans Want AI Rules
Nearly two-thirds of Americans now use AI regularly and want stronger oversight, but are conflicted on how far regulation should go, according to a new national survey from AI governance nonprofit Fathom shared exclusively with Axios.
China's Cybersecurity Standards Body Issues Draft Guidance
China's TC260, the nation's cybersecurity standards body, on Tuesday issued a draft practice guide outlining security requirements for deploying and using open-source OpenClaw class AI agents, to mitigate cybersecurity risks and strengthen governance of the systems.
Technology & Infrastructure
Raspberry Pi profit surges as AI boom lifts demand
Sales growth for UK computer company particularly strong in China and US
Meta unveils two $499 Ray-Ban smart glasses for prescription users
Meta has unveiled two new Ray-Ban smart glasses for prescription users, priced at $499.
Microsoft Revamps AI Tool
Microsoft has revamped one of its AI research tools to use models from both OpenAI and Anthropic, the clearest sign yet that the future of AI may be multi-model. The software giant is taking advantage of multiple models within its Microsoft 365 Copilot Researcher.
A clear admission that a multi-horse betting strategy is required for the underlying foundation models.
Former Baidu President on AI Tokenization in China
The former President of Baidu says AI tokenization is exploding in China, far beyond what OpenClaw illustrated earlier this year. Zhang Yaqin, who runs China's Institute of AI Industry Research at Beijing's Tsinghua University, speaks to Bloomberg's Chief North Asia Correspondent Stephen Engle in Beijing. (Source: Bloomberg)
Claude Dispatch and the Power of Interfaces
We often lack the tools for the job, even if the AI is capable enough
Meta's new structured prompting technique makes LLMs significantly better at code review — boosting accuracy to 93% in some cases
Deploying AI agents for repository-scale tasks like bug detection, patch verification, and code review requires overcoming significant technical hurdles. One major bottleneck: the need to set up dynamic execution sandboxes for every repository, which are expensive and computationally heavy. Using large language model (LLM) reasoning instead of executing the code is rising in popularity to bypass this overhead, yet it frequently leads to unsupported guesses and hallucinations.
Meta has spent billions upon billions on AI infrastructure, talent and companies. It is in catch-up mode.
Adoption & Impact
China Improves Law Enforcement with AI
China's market authority said its use of digital tools over the past three years has improved law enforcement, with enhanced data-sharing on its national case-handling platform boosting speed and efficiency.
CrowdStrike and IBM bet on the agentic SOC as AI reshapes security operations
As AI compresses threat response windows to mere seconds, the security operations center is undergoing a fundamental transformation, and the agentic SOC is emerging as a solution that combines autonomous, machine-speed investigation and containment with essential human governance. The industry is confronting a hard reality: AI is now a force multiplier for attackers as […] (Source: SiliconANGLE)
Academic Papers
Get the full executive brief
Receive curated insights with practical implications for strategy, operations, and governance.