AI Intelligence Brief

Sun 5 April 2026

Weekly Brief — Curated and contextualised by Best Practice AI

65 articles
Editor's Highlights

Khosla Proposes Tax Overhaul, CEOs Blame AI for Layoffs, and Executives Report Unmeasured Gains

TL;DR Vinod Khosla proposed exempting US earners under $100,000 from federal income tax by raising capital gains rates to offset AI-driven job losses. Tech CEOs at Meta, Amazon, and Google attributed recent layoffs to AI productivity gains while planning $650 billion in AI investments. An NBER survey of 750 executives found over 50% of firms have invested in AI, with positive labor productivity gains varying by sector and projected to strengthen in 2026. US workers using AI reported 6% time savings, or 2.5 hours weekly, outpacing most EU countries. Mistral raised $830 million in debt to expand AI data centers, while OpenAI shut down Sora after spending $15 million daily against $2.1 million in revenue. AI companies secured $297 billion in funding during Q1, amid US adoption gaps with Europe documented in a new NBER paper.

Economics & Markets

15 articles
AI Macroeconomics · 2 articles
Editor's pick
LinkedIn· 7 days ago

The Xero + Anthropic Announcement Nobody Is Talking About

The Xero–Anthropic integration is not a product enhancement—it is a structural shift in competitive advantage. By embedding a frontier reasoning model like Claude directly into a system of record with deep, proprietary financial data, Xero is collapsing entire categories of adjacent software and services into its core platform. We have seen this playbook before: when distribution, data, and intelligence converge, standalone tools become features.

BPAI context

AI Is Collapsing the Software Stack: Platforms Are Becoming the Entire Value Chain - The Xero–Anthropic integration is a structural shift in competitive advantage. By embedding a frontier reasoning model like Claude directly into a system of record with deep, proprietary financial data, Xero is collapsing entire categories of adjacent software and services into its core platform. We have seen this playbook before: when distribution, data, and intelligence converge, standalone tools become features, and features become invisible. The implication is stark—competitive boundaries are no longer defined by functionality, but by control of data ecosystems and the ability to operationalise AI natively within them. For incumbents and challengers alike, this reframes strategy. You cannot compete horizontally against platforms that combine scale, data, and embedded intelligence. The defensible position shifts to vertical depth—owning highly specialised workflows, edge cases, and domain-specific contexts that generalised AI layers will not prioritise. Crucially, AI is no longer a differentiating feature; it is table stakes infrastructure. The winners in this next phase will be those who move fastest to deploy specialised, deeply integrated AI agents within their niche—effectively building micro-platforms of expertise that sit beyond the reach of horizontal consolidation.

AI Market Competition · 1 article
Editor's pick
LinkedIn· 7 days ago

Who Keeps the Margin? Five AI Companies, Fourteen Layers Deep

The article argues that in AI, value creation is spreading across a deeply layered stack—often 10–15 layers from raw models to end-user applications—but margins are not distributed evenly. Instead, they concentrate in a small number of positions: typically foundation model providers, platforms with proprietary data, and customer-facing applications that control distribution and workflow. Everything in between is exposed to rapid commoditisation.

BPAI analysis

“Who Captures the Margin in AI? Why Most of the Stack Is Becoming Strategically Irrelevant” Mark Musson’s core insight is that while AI creates value across an increasingly deep and complex stack, margin does not follow evenly. Instead, it concentrates in a small number of positions—those that control the model, the data, or the customer relationship. Everything in between—tools, orchestration layers, wrappers—is exposed to rapid commoditisation. Many companies will participate in delivering AI-driven outcomes, but only a few will retain pricing power. For strategy, this reframes the competitive question entirely. It is not enough to “be in AI” or even to add value within the stack—you must occupy a position that captures margin. That means either owning the interface and workflow, building defensible proprietary data advantages, or creating capabilities that are sufficiently specialised to resist substitution. Otherwise, you risk becoming part of a long value chain where value is created—but not captured.

AI Productivity · 8 articles
Editor's pick · Technology
Reddit· 7 days ago

Coding with AI Creates Real Addiction

Coding with AI is already creating real addiction, with founders hooked on the 'magic' of instant code.

BPAI analysis

I include this Reddit post as this feels way too close to the bone. I have spent the past nine months down the rabbit hole of vibe coding and it has been a scary addiction as the dopamine high of building or debugging a feature compounds. The studies will be written about this...

Editor's pick · Technology
VentureBeat· 7 days ago

When Product Managers Ship Code

When product managers ship code, AI has broken the software org chart, and this is changing the way companies approach development.

BPAI context

AI vibe coding tools are subversive in that they empower non-IT staff to build and ship code. But that can lead to a lot of challenges for security, privacy and maintainability. Filev's account at Zencoder illustrates a pivotal shift in software development, where AI agents have drastically reduced implementation costs, empowering product managers and designers to bypass traditional handoffs and directly ship features—evident in a PM building a waiting-game widget in a day or a designer tweaking IDE plugins in real time. This disrupts the org chart by eliminating coordination bottlenecks once justified to safeguard engineering bandwidth, fostering faster decision velocity and compounding effects like sharper specifications and heightened ownership. While the narrative aligns with broader AI-driven democratization trends, it warrants skepticism: Filev's optimism for scaling in complex brownfield environments overlooks potential pitfalls, such as code quality inconsistencies, security vulnerabilities from non-engineer contributions, or the risk of siloed innovations that fragment enterprise coherence, especially in regulated industries where accountability remains paramount.

Key points:
• AI agents collapsed implementation costs, shifting bottlenecks from engineering to decision-making.
• PMs and designers now ship directly, reducing tickets, handoffs, and translation layers.
• This enables pursuit of low-priority but high-impact ideas, like personality-adding features.
• Compounding effects include sharper specs, fewer iterations, and a culture of universal building.
• Implications extend to larger organizations, closing the gap between intent-holders and builders.

Expert question (counterfactual): What if the surge in non-engineer code contributions leads to unmanageable technical debt or compliance risks in highly regulated sectors, undermining the promised velocity gains?

Editor's pick
@emollick· 7 days ago

The average American worker using AI reports time savings of 6%, or 2.5 hours in a work week. Those are similar to the UK & Netherlands, and slightly more than other EU countries.

Editor's pick
@emollick· 7 days ago

Because US workers use AI more, and gain more from it, the US currently is benefitting most from AI adoption: https://www.nber.org/papers/w34995

Editor's pick
Stanford HAI· 7 days ago

Training AI to be Better Collaborators

Training AI to be better collaborators is essential for effective human-AI collaboration.

Editor's pick
Reuters· 5 days ago

Does the AI business model have a fatal flaw?

The AI tools that power ChatGPT and its rivals - known as large language models, or LLMs - are a genuine productivity-enhancing innovation.

Editor's pick
arXiv· 5 days ago

When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods

Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depend on can shrink. We build a dynamic model of public good provision where agents contribute by solving problems that other agents posted on a public platform, and the accumulated solutions form a de

Editor's pick
NBER· 7 days ago

Artificial Intelligence, Productivity, and the Workforce: Evidence from Corporate Executives

Survey of nearly 750 corporate executives shows substantial heterogeneity in AI adoption across firms, with more than half having already invested. Labor productivity gains are positive, vary across sectors, and expected to strengthen in 2026. Productivity paradox: perceived gains larger than measured gains. Little evidence of near-term aggregate employment declines.

BPAI context

This NBER working paper, drawing on a survey of nearly 750 executives, reveals a mixed landscape for AI's impact on productivity and labor markets, underscoring significant firm-level heterogeneity in adoption rates—over half have invested, yet smaller entities lag behind. Productivity gains are evident and sector-specific, particularly in high-skill services and finance, driven by total factor productivity enhancements via innovation and demand rather than mere capital intensification, with projections for acceleration in 2026. However, a notable productivity paradox emerges, where executives' perceptions of gains outpace measurable outcomes, possibly attributable to lagged revenue effects—a claim that warrants scrutiny amid potential optimism bias in self-reported data. On employment, aggregate near-term declines appear minimal, though reallocation is underway, favoring skilled technical roles over routine clerical ones, with larger firms eyeing reductions while smaller ones anticipate growth; this suggests transitional disruptions rather than outright catastrophe, but long-term dynamics remain uncertain.

Key points:
• Over 50% of firms have invested in AI, with heterogeneity favoring larger entities.
• Productivity gains are positive, sector-varying, and expected to rise in 2026, led by high-skill services and finance.
• Perceived productivity benefits exceed measured ones, linked to delayed revenue realization.
• Minimal aggregate job losses anticipated short-term, but shifts from clerical to technical roles are evident.

Expert question (counterfactual): What if the productivity paradox stems not from revenue lags but from executives overestimating AI's standalone contributions, conflating them with concurrent digital transformations or market recoveries?

Labor & Society

24 articles
AI & Employment · 13 articles
Editor's pick · Technology
feeds· 7 days ago

Tech CEOs suddenly love blaming AI for mass job cuts. Why?

More tech leaders are pointing to job cuts caused by AI tools, and a need for more investment cash.

BPAI analysis

Tech CEOs' pivot to blaming AI for mass layoffs represents a strategic narrative shift, framing workforce reductions as forward-thinking adaptations to productivity-enhancing tools rather than prosaic cost-cutting amid economic headwinds. While genuine AI advancements, such as 25-75% AI-generated code, are boosting efficiency in roles like software development, the timing aligns suspiciously with ballooning AI investment plans totaling $650 billion across Amazon, Meta, Google, and Microsoft. Executives like Zuckerberg and Dorsey tout smaller teams achieving more, yet prior layoff rounds omitted AI mentions, suggesting spin to appease investors wary of unchecked spending. This rhetoric signals 'discipline' but risks underplaying broader factors like over-hiring corrections and shareholder pressures, potentially masking a more cyclical industry purge than an AI revolution.

Key points:
• Tech giants like Meta, Amazon, and Google are announcing job cuts while planning $650bn in AI investments.
• CEOs frame layoffs as AI-enabled productivity gains to improve public and investor perceptions over mere cost savings.
• Real AI tools are increasing coding efficiency by 25-75%, threatening stable tech jobs.
• Past layoffs at firms like Block were not attributed to AI, indicating a change in explanatory narrative.

Expert question (counterfactual): If AI truly revolutionizes productivity as claimed, why do companies continue hiring in 'priority areas' while imposing broad freezes, suggesting cuts may stem more from financial balancing than technological necessity?

Editor's pick
fortune· 7 days ago

Top leadership experts sound the alarm on the AI doomsday: bosses are choosing tech over people

Companies are spending 93% of their AI budgets on tech and only 7% on people. It's already backfiring.

BPAI context

Deloitte and Wharton analyses point to a critical imbalance in corporate AI strategies: 93% of budgets are allocated to technology and a mere 7% to workforce preparation, potentially creating bottlenecks and resistance that undermine adoption. Experts like Eric Bradlow and Lara Abrash warn that this tech-centric approach exacerbates organizational weaknesses, particularly among middle managers, while neglecting human skills like curiosity, emotional intelligence, and divergent thinking essential for AI augmentation. Leadership must evolve from 'pathfinding' to 'wayfinding' amid uncertainty, fostering 'bridger' roles to integrate tech and people. However, the narrative's doomsday tone may overstate risks; historical tech shifts suggest adaptive firms could thrive, though the revenue upside from co-intelligence—estimated at $6 billion for a $60 billion company—remains speculative without broader evidence of successful human-AI integration.

Key points:
• Companies allocate 93% of AI budgets to tech and only 7% to workforce development, leading to adoption failures.
• Middle managers often resist AI due to lack of preparation, creating bottlenecks in workflows.
• Human strengths like curiosity, emotional intelligence, and divergent thinking are irreplaceable for effective AI use.
• New leadership requires 'wayfinding' and 'bridger' roles to navigate AI uncertainties.
• AI could drive $6 billion in annual revenue growth for a $60 billion firm through enhanced productivity and innovation.

Expert question (counterfactual): What if the projected revenue gains from AI-human collaboration fail to materialize due to persistent workforce resistance, forcing companies to revert to cost-cutting and further erode employee trust?

Editor's pick
Axios AI+· 4 days ago

AI and Labor Market

Goldman Sachs Research estimates that 300 million jobs globally are exposed to automation.

Editor's pick
Guardian· 5 days ago

The jobs AI can’t do – and the young adults doing them

For many young people entering the workforce, the stigma of hands-on jobs is fading. There is a competitive appeal, and they all require human expertise. Gib and Michelle Mouser are proud of their son's career – just not in the way they once imagined. Only 23 years old, Cale Mouser already earns well over six figures, and he'll end up making substantially more.

Editor's pick · PAYWALL
NYT· 3 days ago

Economists Are Drawing Stronger Connections Between A.I. and Jobs

Artificial intelligence hasn’t disrupted the labor market, economists say, but they are increasingly convinced that it will — and that policymakers are unprepared.

Editor's pick · PAYWALL
feeds· 3 days ago

The New Jobs Being Created by AI

AI is raising big fears about employment losses, but it is also giving rise to new engineering and training jobs.

BPAI context

WSJ reporting AI is creating new jobs - AI is proving to be a job creator as well as a disruptor. LinkedIn data show about 640,000 new U.S. jobs tied to AI between 2023 and 2025, including roles such as Head of AI, AI Engineer, AI Consultant, AI/ML Researcher, Forward-Deployed Engineer, and Data Annotator, not counting the surge in data center construction work. These jobs span technical, strategic, and operational work across industries including finance, healthcare, and manufacturing, where people both help train AI systems and help others use them effectively.

Editor's pick · PAYWALL
Financial Times· 3 days ago

Will AI make it harder for non-graduates to climb the jobs ladder?

Gateway roles to white-collar work appear particularly exposed to disruption

Editor's pick
fortune· 2 days ago

A Yale economist says AGI won’t automate most jobs—because they’re not worth the trouble

Pascual Restrepo's new NBER paper argues it's not about what AI can do. It's about what AI will bother doing—and most human work doesn't make the cut.

Editor's pick
Daily Brew· 2 days ago

MIT Study Challenges AI Job Apocalypse Narrative

A new study from MIT challenges the prevailing narrative that AI will cause massive job losses. The research provides a more nuanced view of AI's impact on the workforce.

BPAI context

https://arxiv.org/abs/2604.01363

Editor's pick
Daily AI News April 3, 2026: Inside Shopify’s AI-first Engineering Playbook· 3 days ago

Preserving Competitive Skills in an AI-driven Economy

This HBR article warns that AI can standardize work so aggressively that companies risk eroding the distinctive capabilities, judgment, and institutional know-how that actually make them competitive.

Editor's pick
@erikbryn· 3 days ago

There has been lots of discussion about how AI will affect jobs and the broader economy, including an article in The New York Times today. My friend @amcafee pulls together a bunch of the evidence in a nice Substack post: https://geekway.substack.

Editor's pick
Fortune· 6 days ago

Is the org chart dead in the age of AI? LinkedIn's chief economic opportunity officer thinks so

Companies need to let that go, as it is going to hold back innovation

Editor's pick
Brookings Institution· 3 days ago

How AI May Reshape Career Pathways to Better Jobs

Brookings research finds that AI will erode the pathway by which blue-collar workers move from low-paying jobs to higher-paying ones. Fifteen million workers hold gateway occupations with high AI exposure, and for nearly 11 million of them the career pathway is exposed to AI.

AI Ethics & Safety · 6 articles
Editor's pick · Technology
siliconangle· 5 days ago

Anthropic accidentally exposes Claude Code source code in npm packaging error

Anthropic PBC has accidentally exposed the source code for its Claude Code command-line interface tool through a packaging error that led to the inclusion of sensitive files in a publicly distributed node package manager (npm) release. Claude Code is Anthropic's command-line tool that lets developers interact with its Claude artificial intelligence models directly from […]

BPAI analysis

Oops… so what was found?

Editor's pick · Technology
fortune· 5 days ago

Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos

Hundreds of thousands of lines of code were exposed, giving researchers insight into upcoming models and internal architecture.

BPAI context

Anthropic's back-to-back leaks—first a draft blog post unveiling the potent 'Mythos' or 'Capybara' model with heightened cybersecurity risks, now 500,000 lines of Claude Code source code—expose troubling lapses in operational security at a firm pioneering AI safety. While Anthropic downplays the code exposure as mere 'human error' in packaging, without sensitive data loss, experts rightly highlight its value: the agentic harness code reveals proprietary guardrails, tool integrations, and internal APIs, potentially enabling competitors to reverse-engineer enhancements or adversaries to probe safeguards. This pattern, echoing a February 2025 incident, undermines Anthropic's safety-first ethos, especially as Capybara promises unprecedented capabilities like zero-day vulnerability detection that could be dual-use. Skeptically, their assurances of robust processes ring hollow amid repeated misconfigurations, signaling a need for stricter release protocols in an industry racing toward AGI amid escalating risks.

Key points:
• Anthropic leaked 500,000 lines of Claude Code source code via NPM due to human error, exposing agentic harness details.
• The leak follows a prior exposure of 'Mythos/Capybara' model info, described as more advanced than Opus with major cybersecurity implications.
• Experts warn the code could aid competitors in replicating features or attackers in bypassing safeguards.
• No customer data was compromised, but internal architecture insights were revealed, potentially informing open-source alternatives.

Expert question (counterfactual): What if these leaks were not mere accidents but subtle strategic disclosures to accelerate industry-wide safety discussions on dual-use AI risks?

Editor's pick · Defense & National Security
theregister· 4 days ago

Live and Let AI: Former CIA officer says human spies matter more in the LLM age

AI is eroding trust in digital communications and data, giving old-school spycraft fresh relevance for modern agents.

BPAI context

Thomas Mulligan, a former CIA case officer and RAND researcher, argues persuasively that the proliferation of generative AI, particularly large language models, will paradoxically elevate the role of human intelligence (HUMINT) in espionage. As AI democratizes disinformation—enabling agents to fabricate convincing backstories or false intelligence—it erodes trust in digital channels, pushing operatives toward analog methods like dead drops and face-to-face encounters. While AI offers tools for pattern recognition in surveillance or psychological manipulation, Mulligan warns it also heightens risks for human handlers, making counterintelligence more vital. This perspective revives Cold War-era spycraft, but skeptics might question whether it underplays AI's potential to automate even interpersonal deceptions, potentially rendering human elements as vulnerable relics in an increasingly synthetic intelligence landscape. Overall, Mulligan's analysis underscores a timely shift: in an era of deepfakes, the human touch may indeed be the ultimate verifier of truth.

Key points:
• AI enables easier fabrication of false intelligence, increasing the need for human verification in HUMINT.
• Digital communications lose trustworthiness due to AI-generated disinformation, favoring old-school spycraft like dead drops.
• AI tools can aid spies in manipulation and data analysis but also pose detection risks for operatives.
• Human intelligence officers will remain essential as AI complicates distinguishing fact from fiction.

Expert question (counterfactual): What if rapid advancements in AI-driven behavioral prediction and real-time deepfake countermeasures ultimately automate HUMINT tasks, making human spies as obsolete as they were feared to be?

Editor's pick · Government & Public Sector
Top Daily Headlines· 3 days ago

AI Search is Atomizing Our Information

A government digital designer has warned that AI search is atomizing our information, making it difficult to control how our content is interpreted and used.

BPAI context

AI-mediated search is fragmenting public information ecosystems, compelling governments to overhaul digital strategies to preserve policy efficacy and equity in an era of uncontrolled summarization. The UK Department for Education reports surging AI-driven traffic alongside declining direct visits, highlighting risks of narrowed user understanding and disadvantaged access for less confident individuals. "We now need to design with the expectation that much of what we publish will be read indirectly, atomised, summarised or reinterpreted by systems we don't control," writes Mark Edwards, head of design for the UK government's Department for Education.

Editor's pick
@emollick· 4 days ago

My piece in the Economist where I argue against de-weirding AI. It is a strange technology with both risks & opportunities that need to be discovered. Pretending AI works like normal IT automation can result in bad outcomes for companies & their employees.

BPAI context

We always need to listen to Ethan. He is bang on: firms must treat AI as a fundamentally novel technology rather than routine IT automation to mitigate risks and harness opportunities, avoiding adverse impacts on operations and workforce stability. 'It is a strange technology with both risks & opportunities that need to be discovered. Pretending AI works like normal IT automation can result in bad outcomes for companies & their employees.'

Editor's pick · Technology
siliconrepublic· 3 days ago

What issues arise when code has the ability to write and review itself?

Agustin Huerta discusses Anthropic’s new Code Review feature and the importance of AI governance.

BPAI context

As AI agents increasingly automate code writing and review, businesses face heightened risks from overreliance without robust governance, potentially amplifying security vulnerabilities and operational blind spots in software development. Anthropic's new Code Review feature employs specialized AI agents to detect bugs and prioritize issues, but experts stress the need for adapted workflows, human oversight, and observability to maintain accountability and prevent compounded errors.

Technology & Infrastructure

14 articles
AI Infrastructure & Compute · 6 articles
AI Models & Capabilities · 6 articles
Editor's pick · PAYWALL · Technology
feeds· 5 days ago

Former Baidu President on AI Tokenization in China

The former President of Baidu says AI tokenization is exploding in China, far beyond what OpenClaw illustrated earlier this year. Zhang Yaqin, who runs China's Institute of AI Industry Research at Beijing's Tsinghua University speaks to Bloomberg's Chief North Asia Correspondent Stephen Engle in Beijing. (Source: Bloomberg)

Editor's pick · Technology
Hacker News· 3 days ago

Qwen3.6-Plus: Towards Real World Agents

Qwen3.6-Plus is a new AI model aimed at real-world agents.

BPAI context

Alibaba's Qwen3.6-Plus advances China's AI competitiveness by targeting practical real-world agents, intensifying global rivalry and accelerating automation in business operations. Building on prior iterations, the model focuses on autonomous agents capable of handling complex, real-world tasks, potentially reshaping labor markets, enterprise efficiency, and investment priorities in AI infrastructure. Its release signals accelerating Chinese innovation in AI agents, heightening geopolitical competition and pressuring Western firms to advance practical AI deployment.

Editor's pick · PAYWALL · Technology
Financial Times· 3 days ago

Microsoft launches 'mid-class' AI model as compute limits bite

Tech giant’s AI chief says it will have the resources to build frontier systems later this year

BPAI context

Interesting: Microsoft is behind in building leading frontier models as it lacks the necessary compute! "Microsoft has unveiled its latest midsized AI model as it seeks to gain a foothold in the technology, but chief Mustafa Suleyman said the tech giant still lacks the computing power needed to build cutting-edge systems. The software giant on Thursday released a speech transcription model that Suleyman called the most advanced of its kind, as it steps up efforts to close in on rivals and reduce its reliance on OpenAI. Microsoft has yet to release large language models capable of competing in more sophisticated areas such as coding and text generation, where it lags market leaders Anthropic, Google and OpenAI."

Adoption & Impact

10 articles
AI Adoption & Diffusion · 4 articles
Editor's pick
@emollick· 7 days ago

Sorry, I wasn't clear. I think this is an excellent paper with consistent methodology, I just mean people's views of what using AI means are changing over time, and the measurement jitter is because they view what AI is quite differently now than they did a year ago.

Editor's pick
nber· 6 days ago

Mind the Gap: AI Adoption in Europe and the U.S. -- by Alexander Bick, Adam Blandin, David J. Deming, Nicola Fuchs-Schündeln, Jonas Jessen

This paper combines international evidence from worker and firm surveys conducted in 2025 and 2026 to document large gaps in AI adoption, both between the US and Europe and across European countries. Cross-country differences in worker demographics and firm composition account for an important share of these gaps. AI adoption, within and across countries, is also closely linked to firm personnel management practices and whether firms actively encourage AI use by workers. Micro-level evidence suggests that AI generates meaningful time savings for many workers. At the macro level, in recent years industries with higher AI adoption rates have experienced faster productivity growth. While we do not establish causality, this relationship is statistically significant and similar in magnitude in Europe and the US. We do not find clear evidence that industry-level AI adoption is associated with employment changes. We discuss limitations of existing data and outline priorities for future data collection to better assess the productivity and labor market effects of AI.

Editor's pick
MIT Sloan Management Review· 6 days ago

When Not to Use AI

BPAI context

Benjamin Laker's column astutely delineates the boundaries of AI in managerial practice, advocating its use for accelerating rote tasks like data synthesis and drafting while cautioning against outsourcing human-centric judgments on values, relationships, and trust. This balanced perspective counters the hype surrounding AI as a panacea for productivity, emphasizing that overreliance risks eroding critical thinking and emotional intelligence—core to effective leadership. However, Laker's framework assumes managers possess the self-awareness to discern these boundaries consistently, a skepticism warranted given evidence of cognitive biases in tech adoption. By positioning AI as a tool rather than a proxy, the piece promotes deliberate integration, potentially fostering more resilient organizational cultures amid rapid technological shifts, though it underplays the need for systemic training to prevent unintended delegation of authority.

Key points:
• Use AI for mechanical tasks like summarizing reports or outlining agendas to free up time for strategic sensemaking.
• Avoid AI in decisions involving trust, values, or relationships, such as delivering feedback or assessing candidate resilience.
• Treat AI outputs as raw material requiring human editing to maintain judgment and accountability.
• Instruct AI to generate counterarguments to challenge biases and broaden decision-making perspectives.

Expert question (counterfactual): What if advancing AI capabilities in emotional intelligence and contextual nuance render the distinction between mechanical and human tasks obsolete, forcing managers to redefine leadership entirely?

Editor's pick · Professional Services
Stanford Digital Economy Lab· 2 days ago

The Enterprise AI Playbook: Lessons from 51 Successful Deployments

Stanford Digital Economy Lab analyzed 51 enterprise AI deployments over five months. It found that outcomes for the same technology varied from weeks to years; the difference was organizational readiness, not the AI model.

BPAI context

We see this all the time: the limiting factor in AI adoption is people. Organizational readiness, not AI model quality, determines the speed and success of enterprise AI deployments, positioning firms that invest in adaptive processes and leadership for competitive advantage in the AI economy. Stanford's Digital Economy Lab analysis of 51 cases shows identical technologies delivering outcomes ranging from weeks to years, underscoring the need for internal change to capture AI value.

AI Applications · 6 articles
Editor's pick · Energy & Utilities
MIT News· 2 days ago

Working to Advance the Nuclear Renaissance

Dean Price, assistant professor in the Department of Nuclear Science and Engineering, sees a bright future for nuclear power, and believes AI can help us realize that vision.

Editor's pick · Technology
siliconangle· 4 days ago

ServiceNow says agentic AI will redefine work — but only if companies get governance right

Autonomous workflows are fast becoming the next enterprise AI battleground as companies move beyond copilots and enter the agentic AI platform era, where software can act on its own.

BPAI context

ServiceNow's push into agentic AI frames autonomous workflows as the next frontier beyond copilots, promising to redefine enterprise work through self-acting software, but only with rigorous governance to manage risks like data access and outcome drift. Executives advocate treating AI agents as a distinct identity class—neither fully human nor machine—equipped with least-privilege permissions, starting implementations in controlled IT domains like email triage to demonstrate tangible ROI, such as slashing resolution times by 13%. Tools like the AI Control Tower enable runtime monitoring and traceability, ostensibly safeguarding PII and ensuring alignment with intent. While this narrative aligns with ServiceNow's platform-centric strengths and addresses real security fault lines, it warrants skepticism: the hype around productivity gains may overlook integration complexities and the historical struggle of enterprises to monetize AI, potentially positioning governance as a compliance burden rather than a velocity booster.

Key points:
• Treat AI agents as a new identity class with distinct, scoped permissions for data access.
• Begin agentic AI deployments in narrow IT use cases to measure ROI, like reducing task times from days to minutes.
• Employ the AI Control Tower for logging actions, tracing outcomes, and monitoring drift to maintain security and traceability.
• Prioritize governance and security to enable scaling, as lapses could stall AI programs amid CIO pressures for value realization.

Expert question (counterfactual): What if robust governance, while mitigating risks, inadvertently creates bureaucratic hurdles that hinder the rapid experimentation needed for agentic AI to truly outpace human workflows in diverse enterprise domains?

Editor's pick · Government & Public Sector
Artificial Intelligence Newsletter· 5 days ago

China Improves Law Enforcement with AI

China's market authority said its use of digital tools over the past three years has improved law enforcement, with enhanced data-sharing on its national case-handling platform boosting speed and efficiency.

Editor's pick · Healthcare
@emollick· 3 days ago

This new Nature paper (using old models) illustrates the point of my latest Substack post on AI interfaces. AI did a good job diagnosing medical issues, but when users had to interact with chatbots the interface led to confusion & worse answers

My post: https://www.oneusefulthing.

Editor's pick · PAYWALL · Technology
New York Times· 4 days ago

How A.I. Helped One Man (and His Brother) Build a $1.8 Billion Company

Perhaps the first two-person company with $1 billion in revenue, powered by AI workflows.

Editor's pick · Professional Services
GAI Insights· 9 days ago

Agentic Scenarios Every Marketer Must Prepare For

BCG lays out four possible agentic-commerce futures: an open bazaar, brand resurgence through data ecosystems, super apps, and creator-led authenticity. Every scenario collapses to two requirements: machine discoverability and brand desirability. Brands need to shift from SEO to answer-engine optimization.

BPAI context

BCG's exploration of agentic-commerce futures presents a forward-looking framework for marketers navigating AI-driven shopping agents, outlining four scenarios: an open bazaar of seamless transactions, brand resurgence via proprietary data ecosystems, super apps consolidating services, and creator-led authenticity emphasizing human touch. While the analysis astutely reduces these to core imperatives—machine discoverability and brand desirability—it risks oversimplifying the interplay of regulatory hurdles, privacy concerns, and technological fragmentation that could derail such visions. The pivot from SEO to 'answer-engine optimization' is pragmatic, urging brands to prioritize structured data and conversational relevance, yet it assumes agents will uniformly favor discoverable, desirable content without accounting for biases in AI training or monopolistic platform dominance, potentially favoring incumbents over innovators.

Key points:
• Four agentic-commerce scenarios: open bazaar, data ecosystems for brands, super apps, and creator authenticity.
• Core requirements: enhance machine discoverability and brand desirability.
• Shift strategy from SEO to answer-engine optimization for AI interactions.

Expert question (counterfactual): What if AI agents prioritize user privacy and ethical sourcing over discoverability, forcing brands to invest more in transparent, verifiable desirability rather than optimized data feeds?

Geopolitics

2 articles
AI Geopolitics · 1 article
Editor's pick · Defense & National Security
IBT International· 7 days ago

US sees AI race with China as strategic battle for dominance

US lawmakers and Big Tech agree the AI threat from China is existential, and are pushing for tighter export controls and for keeping critical infrastructure domestic.

BPAI analysis

China is really good at applied AI at scale. They are also burning more tokens than the US, suggesting a huge amount of AI activity is happening. And there is a burgeoning consensus between US policymakers and Silicon Valley elites framing the AI competition with China as an existential geopolitical and economic imperative, evidenced by forums like the Hill and Valley Forum where figures such as Senator Rick Scott invoke dire stakes. This alignment drives advocacy for stringent measures, including the proposed GAIN AI Act mandating domestic prioritization of AI chips and export licensing to 'countries of concern,' alongside scrutiny of Nvidia's shipments amid smuggling fears. House Speaker Mike Johnson's call to onshore critical infrastructure like data centers reflects a protectionist pivot, yet underlying frictions persist over regulatory overreach versus industry autonomy. While the US leads in AI innovation, China's emphasis on deployment poses a tangible challenge; however, the narrative's alarmist tone warrants skepticism, as it may amplify bipartisan unity at the expense of nuanced international collaboration or overstate short-term threats to justify expansive controls.

Key points:
• US lawmakers and tech leaders unite in viewing AI rivalry with China as critical to national security and global dominance.
• Calls intensify for tighter export controls on AI chips, targeting firms like Nvidia amid smuggling concerns.
• Proposed GAIN AI Act would require licenses for exporting advanced AI tech to adversarial nations.
• Emphasis on keeping US critical AI infrastructure, such as data centers, domestic to safeguard strategic assets.
• Experts advocate coordinated government-industry efforts to counter China's practical AI deployment advantages.
Expert question (counterfactual): What if enhanced US-China AI collaboration, rather than isolationist controls, could accelerate global innovation and mitigate mutual security risks more effectively than zero-sum competition?

Best Practice AI© 2026 Best Practice AI Ltd. All rights reserved.
