Sun 5 April 2026
Weekly Brief — Curated and contextualised by Best Practice AI
Khosla Proposes Tax Overhaul, CEOs Blame AI for Layoffs, and Executives Report Unmeasured Gains
TL;DR Vinod Khosla proposed exempting US earners under $100,000 from federal income tax by raising capital gains rates to offset AI-driven job losses. Tech CEOs at Meta, Amazon, and Google attributed recent layoffs to AI productivity gains while planning $650 billion in AI investments. An NBER survey of nearly 750 executives found over 50% of firms have invested in AI, with positive labor productivity gains varying by sector and projected to strengthen in 2026. US workers using AI reported 6% time savings, or 2.5 hours weekly, outpacing most EU countries. Mistral raised $830 million in debt to expand AI data centers, while OpenAI shut down Sora after spending $15 million daily against $2.1 million in total revenue. AI companies secured $297 billion in funding during Q1, amid US adoption gaps with Europe documented in a new NBER paper.
Economics & Markets
France's Mistral Raises $830 Million in Debt for AI Data Centre Build-Up
Mistral has raised $830 million in debt to build up its AI data centre capabilities.
The $15 Million-a-Day Mistake, and Other Lessons in AI Economics
OpenAI spent roughly $15 million a day keeping Sora alive and made $2.1 million total before pulling the plug this week, a ratio so lopsided it almost reads as parody. But the Sora shutdown is less an embarrassment and more a signal: when the economics of an AI product do not work, even the biggest labs will walk away fast and redirect that compute toward something that might.
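To make the lopsidedness concrete, a quick back-of-the-envelope calculation (using only the figures reported above, and assuming the $15 million is a daily rate while the $2.1 million is lifetime revenue) shows Sora's entire revenue would have covered only a few hours of one day's compute bill:

```python
# Back-of-the-envelope check on the reported Sora economics.
daily_burn = 15_000_000      # reported compute spend per day, USD
total_revenue = 2_100_000    # reported lifetime revenue, USD

# Fraction of a single day's burn that all revenue ever earned would cover
days_covered = total_revenue / daily_burn
hours_covered = days_covered * 24

print(f"Lifetime revenue covers {days_covered:.2f} days "
      f"(~{hours_covered:.1f} hours) of compute spend")
```

On those numbers, the ratio works out to roughly 0.14 days, or about three and a half hours of burn, which is why "parody" is not much of an exaggeration.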
A.I. Companies Shatter Fund-Raising Records, as Boom Accelerates
OpenAI, Anthropic, Waymo and other artificial intelligence companies hauled in $297 billion in funding in the first three months of the year.
These AI Whiz Kids Dropped Out of College and Got Investors to Pay Their Bills
Venture capitalists are stepping in to cover expenses like rent while dropouts from Harvard to Stanford chase their startup dreams.
Coding with AI Creates Real Addiction
Coding with AI is already creating real addiction, with founders hooked on the 'magic' of instant code.
I include this Reddit post because it feels way too close to the bone. I have spent the past nine months down the rabbit hole of vibe coding and it has been a scary addiction as the dopamine high of building or debugging a feature compounds. Studies will be written about this...
When Product Managers Ship Code
When product managers ship code, AI just broke the software org chart, and this is changing the way companies approach development.
AI vibe-coding tools are subversive in that they empower non-IT staff to build and ship code. But that can lead to a lot of challenges for security, privacy and maintainability. Filev's account at Zencoder illustrates a pivotal shift in software development, where AI agents have drastically reduced implementation costs, empowering product managers and designers to bypass traditional handoffs and directly ship features—evident in a PM building a waiting-game widget in a day or a designer tweaking IDE plugins in real time. This disrupts the org chart by eliminating coordination bottlenecks once justified to safeguard engineering bandwidth, fostering faster decision velocity and compounding effects like sharper specifications and heightened ownership. While the narrative aligns with broader AI-driven democratization trends, it warrants skepticism: Filev's optimism for scaling in complex brownfield environments overlooks potential pitfalls, such as code quality inconsistencies, security vulnerabilities from non-engineer contributions, or the risk of siloed innovations that fragment enterprise coherence, especially in regulated industries where accountability remains paramount.
Key points:
• AI agents collapsed implementation costs, shifting bottlenecks from engineering to decision-making.
• PMs and designers now ship directly, reducing tickets, handoffs, and translation layers.
• This enables pursuit of low-priority but high-impact ideas, like personality-adding features.
• Compounding effects include sharper specs, fewer iterations, and a culture of universal building.
• Implications extend to larger organizations, closing the gap between intent-holders and builders.
Expert question (counterfactual): What if the surge in non-engineer code contributions leads to unmanageable technical debt or compliance risks in highly regulated sectors, undermining the promised velocity gains?
The average American worker using AI reports time savings of 6%, or 2.5 hours in a work week. Those are similar to the UK & Netherlands, and slightly more than other EU countries.
Because US workers use AI more, and gain more from it, the US currently is benefitting most from AI adoption: https://www.nber.org/papers/w34995
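As a sanity check on the headline numbers (assuming a standard 40-hour work week, which the survey summary does not state explicitly), the two reported figures are mutually consistent:

```python
# Cross-check the reported time savings: 2.5 hours/week vs. "6%".
hours_saved = 2.5
work_week = 40.0  # assumed standard full-time week

share_saved = hours_saved / work_week  # 0.0625
print(f"{share_saved:.1%} of the week saved")  # ~6%, matching the reported figure
```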
Training AI to be Better Collaborators
Training AI to be better collaborators is essential for effective human-AI collaboration.
Does the AI business model have a fatal flaw?
The AI tools that power ChatGPT and its rivals - known as large language models, or LLMs - are a genuine productivity-enhancing innovation.
When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods
Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depend on can shrink. We build a dynamic model of public good provision where agents contribute by solving problems that other agents posted on a public platform, and the accumulated solutions form a de
Artificial Intelligence, Productivity, and the Workforce: Evidence from Corporate Executives
Survey of nearly 750 corporate executives shows substantial heterogeneity in AI adoption across firms, with more than half having already invested. Labor productivity gains are positive, vary across sectors, and expected to strengthen in 2026. Productivity paradox: perceived gains larger than measured gains. Little evidence of near-term aggregate employment declines.
This NBER working paper, drawing on a survey of nearly 750 executives, reveals a mixed landscape for AI's impact on productivity and labor markets, underscoring significant firm-level heterogeneity in adoption rates—over half have invested, yet smaller entities lag behind. Productivity gains are evident and sector-specific, particularly in high-skill services and finance, driven by total factor productivity enhancements via innovation and demand rather than mere capital intensification, with projections for acceleration in 2026. However, a notable productivity paradox emerges, where executives' perceptions of gains outpace measurable outcomes, possibly attributable to lagged revenue effects—a claim that warrants scrutiny amid potential optimism bias in self-reported data. On employment, aggregate near-term declines appear minimal, though reallocation is underway, favoring skilled technical roles over routine clerical ones, with larger firms eyeing reductions while smaller ones anticipate growth; this suggests transitional disruptions rather than outright catastrophe, but long-term dynamics remain uncertain.
Key points:
• Over 50% of firms have invested in AI, with heterogeneity favoring larger entities.
• Productivity gains are positive, sector-varying, and expected to rise in 2026, led by high-skill services and finance.
• Perceived productivity benefits exceed measured ones, linked to delayed revenue realization.
• Minimal aggregate job losses anticipated short-term, but shifts from clerical to technical roles are evident.
Expert question (counterfactual): What if the productivity paradox stems not from revenue lags but from executives overestimating AI's standalone contributions, conflating them with concurrent digital transformations or market recoveries?
Labor & Society
Tech CEOs suddenly love blaming AI for mass job cuts. Why?
More tech leaders are pointing to job cuts caused by AI tools, and a need for more investment cash.
Tech CEOs' pivot to blaming AI for mass layoffs represents a strategic narrative shift, framing workforce reductions as forward-thinking adaptations to productivity-enhancing tools rather than prosaic cost-cutting amid economic headwinds. While genuine AI advancements, such as 25-75% AI-generated code, are boosting efficiency in roles like software development, the timing aligns suspiciously with ballooning AI investment plans totaling $650 billion across Amazon, Meta, Google, and Microsoft. Executives like Zuckerberg and Dorsey tout smaller teams achieving more, yet prior layoff rounds omitted AI mentions, suggesting spin to appease investors wary of unchecked spending. This rhetoric signals 'discipline' but risks underplaying broader factors like over-hiring corrections and shareholder pressures, potentially masking a more cyclical industry purge than an AI revolution.
Key points:
• Tech giants like Meta, Amazon, and Google are announcing job cuts while planning $650bn in AI investments.
• CEOs frame layoffs as AI-enabled productivity gains to improve public and investor perceptions over mere cost savings.
• Real AI tools are increasing coding efficiency by 25-75%, threatening stable tech jobs.
• Past layoffs at firms like Block were not attributed to AI, indicating a change in explanatory narrative.
Expert question (counterfactual): If AI truly revolutionizes productivity as claimed, why do companies continue hiring in 'priority areas' while imposing broad freezes, suggesting cuts may stem more from financial balancing than technological necessity?
Top leadership experts sound the alarm on the AI doomsday: bosses are choosing tech over people
Companies are spending 93% of their AI budgets on tech and only 7% on people. It's already backfiring.
The piece highlights a critical imbalance in corporate AI strategies, with 93% of budgets allocated to technology and a mere 7% to workforce preparation, as per Deloitte and Wharton analyses, potentially creating bottlenecks and resistance that undermine adoption. Experts like Eric Bradlow and Lara Abrash warn that this tech-centric approach exacerbates organizational weaknesses, particularly among middle managers, while neglecting human skills like curiosity, emotional intelligence, and divergent thinking essential for AI augmentation. Leadership must evolve from 'pathfinding' to 'wayfinding' amid uncertainty, fostering 'bridger' roles to integrate tech and people. However, the narrative's doomsday tone may overstate risks; historical tech shifts suggest adaptive firms could thrive, though the revenue upside from co-intelligence—estimated at $6 billion for a $60 billion company—remains speculative without broader evidence of successful human-AI integration.
Key points:
• Companies allocate 93% of AI budgets to tech and only 7% to workforce development, leading to adoption failures.
• Middle managers often resist AI due to lack of preparation, creating bottlenecks in workflows.
• Human strengths like curiosity, emotional intelligence, and divergent thinking are irreplaceable for effective AI use.
• New leadership requires 'wayfinding' and 'bridger' roles to navigate AI uncertainties.
• AI could drive $6 billion in annual revenue growth for a $60 billion firm through enhanced productivity and innovation.
Expert question (counterfactual): What if the projected revenue gains from AI-human collaboration fail to materialize due to persistent workforce resistance, forcing companies to revert to cost-cutting and further erode employee trust?
AI and Labor Market
Goldman Sachs Research estimates that 300 million jobs globally are exposed to automation.
The jobs AI can’t do – and the young adults doing them
For many young people entering the workforce, the stigma of hands-on jobs is fading. There's a competitive appeal – and they all require human expertise. Gib and Michelle Mouser are proud of their son's career – just not in the way they once imagined. Only 23 years old, Cale Mouser already earns well over six figures, and he'll end up making substantially more.
Economists Are Drawing Stronger Connections Between A.I. and Jobs
Artificial intelligence hasn’t disrupted the labor market, economists say, but they are increasingly convinced that it will — and that policymakers are unprepared.
The New Jobs Being Created by AI
AI is raising big fears about employment losses, but it is also giving rise to new engineering and training jobs.
WSJ reports that AI is proving to be a job creator as well as a disruptor. LinkedIn data show about 640,000 new U.S. jobs tied to AI between 2023 and 2025, including roles such as Head of AI, AI Engineer, AI Consultant, AI/ML Researcher, Forward-Deployed Engineer, and Data Annotator, not counting the surge in data center construction work. These jobs span technical, strategic, and operational work across industries including finance, healthcare, and manufacturing, where people are helping both to train AI systems and to help others use them effectively.
Will AI make it harder for non-graduates to climb the jobs ladder?
Gateway roles to white-collar work appear particularly exposed to disruption
A Yale economist says AGI won’t automate most jobs—because they’re not worth the trouble
Pascual Restrepo's new NBER paper argues it's not about what AI can do. It's about what AI will bother doing—and most human work doesn't make the cut.
MIT Study Challenges AI Job Apocalypse Narrative
A new study from MIT challenges the prevailing narrative that AI will cause massive job losses. The research provides a more nuanced view of AI's impact on the workforce.
https://arxiv.org/abs/2604.01363
Preserving Competitive Skills in an AI-driven Economy
This HBR article warns that AI can standardize work so aggressively that companies risk eroding the distinctive capabilities, judgment, and institutional know-how that actually make them competitive.
There has been lots of discussion about how AI will affect jobs and the broader economy, including an article in The New York Times today. My friend @amcafee pulls together a bunch of the evidence in a nice Substack post: https://geekway.substack.
Is the org chart dead in the age of AI? LinkedIn's chief economic opportunity officer thinks so
Companies need to let that go, as it is going to hold back innovation
How AI May Reshape Career Pathways to Better Jobs
Brookings research finds that AI will erode the pathways by which blue-collar workers move from lower-paying to higher-paying jobs. Fifteen million workers hold gateway occupations with high exposure to AI, and for nearly 11 million of them the career pathway onward is exposed to AI.
Anthropic accidentally exposes Claude Code source code in npm packaging error
Anthropic PBC has accidentally exposed the source code for its Claude Code command-line interface tool through a packaging error that led to the inclusion of sensitive files in a publicly distributed node package manager (npm) release. Claude Code is Anthropic's command-line tool that lets developers interact with its Claude artificial intelligence models directly from […]
Oops… so what was found?
Anthropic mistakenly leaks its own AI coding tool’s source code, just days after accidentally revealing an upcoming model known as Mythos
Hundreds of thousands of lines of code were exposed, giving researchers insight into upcoming models and internal architecture.
Anthropic's back-to-back leaks—first a draft blog post unveiling the potent 'Mythos' or 'Capybara' model with heightened cybersecurity risks, now 500,000 lines of Claude Code source code—expose troubling lapses in operational security at a firm pioneering AI safety. While Anthropic downplays the code exposure as mere 'human error' in packaging, without sensitive data loss, experts rightly highlight its value: the agentic harness code reveals proprietary guardrails, tool integrations, and internal APIs, potentially enabling competitors to reverse-engineer enhancements or adversaries to probe safeguards. This pattern, echoing a February 2025 incident, undermines Anthropic's safety-first ethos, especially as Capybara promises unprecedented capabilities like zero-day vulnerability detection that could be dual-use. Skeptically, their assurances of robust processes ring hollow amid repeated misconfigurations, signaling a need for stricter release protocols in an industry racing toward AGI amid escalating risks.
Key points:
• Anthropic leaked 500,000 lines of Claude Code source code via NPM due to human error, exposing agentic harness details.
• The leak follows a prior exposure of 'Mythos/Capybara' model info, described as more advanced than Opus with major cybersecurity implications.
• Experts warn the code could aid competitors in replicating features or attackers in bypassing safeguards.
• No customer data was compromised, but internal architecture insights were revealed, potentially informing open-source alternatives.
Expert question (counterfactual): What if these leaks were not mere accidents but subtle strategic disclosures to accelerate industry-wide safety discussions on dual-use AI risks?
Live and Let AI: Former CIA officer says human spies matter more in the LLM age
AI is eroding trust in digital communications and data, giving old-school spycraft fresh relevance for modern agents.
Thomas Mulligan, a former CIA case officer and RAND researcher, argues persuasively that the proliferation of generative AI, particularly large language models, will paradoxically elevate the role of human intelligence (HUMINT) in espionage. As AI democratizes disinformation—enabling agents to fabricate convincing backstories or false intelligence—it erodes trust in digital channels, pushing operatives toward analog methods like dead drops and face-to-face encounters. While AI offers tools for pattern recognition in surveillance or psychological manipulation, Mulligan warns it also heightens risks for human handlers, making counterintelligence more vital. This perspective revives Cold War-era spycraft, but skeptics might question whether it underplays AI's potential to automate even interpersonal deceptions, potentially rendering human elements as vulnerable relics in an increasingly synthetic intelligence landscape. Overall, Mulligan's analysis underscores a timely shift: in an era of deepfakes, the human touch may indeed be the ultimate verifier of truth.
Key points:
• AI enables easier fabrication of false intelligence, increasing the need for human verification in HUMINT.
• Digital communications lose trustworthiness due to AI-generated disinformation, favoring old-school spycraft like dead drops.
• AI tools can aid spies in manipulation and data analysis but also pose detection risks for operatives.
• Human intelligence officers will remain essential as AI complicates distinguishing fact from fiction.
Expert question (counterfactual): What if rapid advancements in AI-driven behavioral prediction and real-time deepfake countermeasures ultimately automate HUMINT tasks, making human spies as obsolete as they were feared to be?
AI Search is Atomizing Our Information
A government digital designer has warned that AI search is atomizing our information, making it difficult to control how our content is interpreted and used.
AI-mediated search is fragmenting public information ecosystems, compelling governments to overhaul digital strategies to preserve policy efficacy and equity in an era of uncontrolled summarization. The UK Department for Education reports surging AI-driven traffic alongside declining direct visits, highlighting risks of narrowed user understanding and disadvantaged access for less confident individuals. "We now need to design with the expectation that much of what we publish will be read indirectly, atomised, summarised or reinterpreted by systems we don't control," writes Mark Edwards, head of design for the UK government's Department for Education.
My piece in the Economist where I argue against de-weirding AI. It is a strange technology with both risks & opportunities that need to be discovered. Pretending AI works like normal IT automation can result in bad outcomes for companies & their employees.
We always need to listen to Ethan. He is bang on: firms must treat AI as a fundamentally novel technology rather than routine IT automation to mitigate risks and harness opportunities, avoiding adverse impacts on operations and workforce stability. 'It is a strange technology with both risks & opportunities that need to be discovered. Pretending AI works like normal IT automation can result in bad outcomes for companies & their employees.'
What issues arise when code has the ability to write and review itself?
Agustin Huerta discusses Anthropic’s new Code Review feature and the importance of AI governance.
As AI agents increasingly automate code writing and review, businesses face heightened risks from overreliance without robust governance, potentially amplifying security vulnerabilities and operational blind spots in software development. Anthropic's new Code Review feature employs specialized AI agents to detect bugs and prioritize issues, but experts stress the need for adapted workflows, human oversight, and observability to maintain accountability and prevent compounded errors.
Risks from AI Adoption in UK Financial Sector are Likely to Increase, BOE Says
The Bank of England said on Wednesday that artificial intelligence risks are likely to increase, as UK financial firms expand their deployment of the advanced technology.
Palantir’s UK boss criticises ‘ideological’ groups as ministers move to scrap NHS contract
Louis Mosley says government should resist calls to trigger break clause in £330m deal with US analytics company. Palantir's UK boss has urged the government not to give in to "ideologically motivated campaigners" as government ministers explore a way out of a £330m NHS contract with the tech company. Ministers have sought advice on triggering a break clause in P
Technology & Infrastructure
Data to start your week: The AI buildout
The AI economy is scaling exponentially, but its physical dependencies – power, water, land, permits – do not. 89% of data center capacity under construction is pre-leased, grid connection queues leave two-thirds of the pipeline stuck on paper, and local communities have blocked nearly $100 billion in projects.
AI Infrastructure Roadmap 2026
This Bessemer Venture Partners article outlines a forward-looking AI infrastructure roadmap, identifying key areas such as harnesses, reinforcement learning platforms, inference optimization, and world models development.
AI Infrastructure Roadmap: Five frontiers for 2026
The first generation of AI was built for a world where the model was the product, and progress meant bigger weights, more data, and stellar benchmarks. AI infrastructure mirrored this reality, fueling the rise of giants in foundation models, compute capacity, training techniques, and data ops.
DeepMind's Secret Weapon
The AI race won't be won by who builds the best model, but by who can afford to keep the lights on. DeepMind CEO Demis Hassabis knew that when he sold his AI lab to Google, according to a new biography by Sebastian Mallaby, senior fellow at the Council on Foreign Relations.
Insurers turn to catastrophe bonds to offload data centre risks
Industry explores raising capital from alternative investors to cover AI mega-projects
Four things we’d need to put data centers in space
In January, Elon Musk's SpaceX filed an application with the US Federal Communications Commission to launch up to one million data centers into Earth's orbit.
MIT Researchers Use AI to Uncover Atomic Defects
A new model measures defects that can be leveraged to improve materials' mechanical strength, heat transfer, and energy-conversion efficiency.
The Mirage of Visual Understanding
The mirage of visual understanding in current frontier models is a topic of discussion in the AI community.
Microsoft Revamps AI Tool
Microsoft has revamped one of its AI research tools to use models from both OpenAI and Anthropic, the clearest sign yet that the future of AI may be multi-model. The software giant is taking advantage of multiple models within its Microsoft 365 Copilot Researcher.
A clear admission that a multi-horse betting strategy is required for underlying foundational models.
Former Baidu President on AI Tokenization in China
The former President of Baidu says AI tokenization is exploding in China, far beyond what OpenClaw illustrated earlier this year. Zhang Yaqin, who runs China's Institute of AI Industry Research at Beijing's Tsinghua University speaks to Bloomberg's Chief North Asia Correspondent Stephen Engle in Beijing. (Source: Bloomberg)
Qwen3.6-Plus: Towards Real World Agents
Qwen3.6-Plus is a new AI model aimed at real-world agents.
Alibaba's release of Qwen3.6-Plus advances China's AI competitiveness by targeting practical real-world agents, intensifying geopolitical rivalry and pressuring Western firms to accelerate practical AI deployment. The model builds on prior iterations to enable more autonomous agents capable of handling complex, real-world tasks, potentially reshaping labor markets, enterprise efficiency, and investment priorities in AI infrastructure.
Microsoft launches 'mid-class' AI model as compute limits bite
Tech giant’s AI chief says it will have the resources to build frontier systems later this year
Interesting that Microsoft is behind in building leading frontier models because it lacks the necessary compute! "Microsoft has unveiled its latest midsized AI model as it seeks to gain a foothold in the technology, but chief Mustafa Suleyman said the tech giant still lacks the computing power needed to build cutting-edge systems. The software giant on Thursday released a speech transcription model that Suleyman called the most advanced of its kind, as it steps up efforts to close in on rivals and reduce its reliance on OpenAI. Microsoft has yet to release large language models capable of competing in more sophisticated areas such as coding and text generation, where it lags market leaders Anthropic, Google and OpenAI."
Adoption & Impact
Sorry, I wasn't clear. I think this is an excellent paper with consistent methodology, I just mean people's views of what using AI means are changing over time, and the measurement jitter is because they view what AI is quite differently now than they did a year ago.
Mind the Gap: AI Adoption in Europe and the U.S. -- by Alexander Bick, Adam Blandin, David J. Deming, Nicola Fuchs-Schündeln, Jonas Jessen
This paper combines international evidence from worker and firm surveys conducted in 2025 and 2026 to document large gaps in AI adoption, both between the US and Europe and across European countries. Cross-country differences in worker demographics and firm composition account for an important share of these gaps. AI adoption, within and across countries, is also closely linked to firm personnel management practices and whether firms actively encourage AI use by workers. Micro-level evidence suggests that AI generates meaningful time savings for many workers. At the macro level, in recent years industries with higher AI adoption rates have experienced faster productivity growth. While we do not establish causality, this relationship is statistically significant and similar in magnitude in Europe and the US. We do not find clear evidence that industry-level AI adoption is associated with employment changes. We discuss limitations of existing data and outline priorities for future data collection to better assess the productivity and labor market effects of AI.
When Not to Use AI
Benjamin Laker's column astutely delineates the boundaries of AI in managerial practice, advocating its use for accelerating rote tasks like data synthesis and drafting while cautioning against outsourcing human-centric judgments on values, relationships, and trust. This balanced perspective counters the hype surrounding AI as a panacea for productivity, emphasizing that overreliance risks eroding critical thinking and emotional intelligence—core to effective leadership. However, Laker's framework assumes managers possess the self-awareness to discern these boundaries consistently, a skepticism warranted given evidence of cognitive biases in tech adoption. By positioning AI as a tool rather than a proxy, the piece promotes deliberate integration, potentially fostering more resilient organizational cultures amid rapid technological shifts, though it underplays the need for systemic training to prevent unintended delegation of authority.
Key points:
• Use AI for mechanical tasks like summarizing reports or outlining agendas to free up time for strategic sensemaking.
• Avoid AI in decisions involving trust, values, or relationships, such as delivering feedback or assessing candidate resilience.
• Treat AI outputs as raw material requiring human editing to maintain judgment and accountability.
• Instruct AI to generate counterarguments to challenge biases and broaden decision-making perspectives.
Expert question (counterfactual): What if advancing AI capabilities in emotional intelligence and contextual nuance render the distinction between mechanical and human tasks obsolete, forcing managers to redefine leadership entirely?
The Enterprise AI Playbook: Lessons from 51 Successful Deployments
Stanford Digital Economy Lab analyzed 51 enterprise AI deployments over 5 months. Found that outcomes varied from weeks to years for same technology - difference was organization readiness, not the AI model.
We find this all the time - the limiter to AI adoption is people. Organizational readiness, not AI model quality, determines the speed and success of enterprise AI deployments, positioning firms that invest in adaptive processes and leadership for competitive advantage in the AI economy. Stanford's Digital Economy Lab analysis of 51 cases shows identical technologies delivering outcomes from weeks to years, underscoring the need for internal change to capture AI value.
Working to Advance the Nuclear Renaissance
Dean Price, assistant professor in the Department of Nuclear Science and Engineering, sees a bright future for nuclear power, and believes AI can help us realize that vision.
ServiceNow says agentic AI will redefine work — but only if companies get governance right
Autonomous workflows are fast becoming the next enterprise AI battleground as companies move beyond copilots and enter the agentic AI platform era, where software can act on its own.
ServiceNow's push into agentic AI frames autonomous workflows as the next frontier beyond copilots, promising to redefine enterprise work through self-acting software, but only with rigorous governance to manage risks like data access and outcome drift. Executives advocate treating AI agents as a distinct identity class, neither fully human nor machine, equipped with least-privilege permissions, starting implementations in controlled IT domains like email triage to demonstrate tangible ROI, such as slashing resolution times by 13%. Tools like the AI Control Tower enable runtime monitoring and traceability, ostensibly safeguarding PII and ensuring alignment with intent. While this narrative aligns with ServiceNow's platform-centric strengths and addresses real security fault lines, it warrants skepticism: the hype around productivity gains may overlook integration complexities and the historical struggle of enterprises to monetize AI, potentially positioning governance as a compliance burden rather than a velocity booster.
Key points:
• Treat AI agents as a new identity class with distinct, scoped permissions for data access.
• Begin agentic AI deployments in narrow IT use cases to measure ROI, like reducing task times from days to minutes.
• Employ AI Control Tower for logging actions, tracing outcomes, and monitoring drift to maintain security and traceability.
• Prioritize governance and security to enable scaling, as lapses could stall AI programs amid CIO pressures for value realization.
Expert question (counterfactual): What if robust governance, while mitigating risks, inadvertently creates bureaucratic hurdles that hinder the rapid experimentation needed for agentic AI to truly outpace human workflows in diverse enterprise domains?
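The "agents as a distinct identity class with least-privilege permissions" idea can be sketched in a few lines. This is a generic illustration of the pattern, not ServiceNow's actual model; the identity fields, scope names, and audit format are assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent as its own identity class: neither a human user nor a
    plain service account, with explicitly scoped permissions."""
    agent_id: str
    scopes: frozenset = frozenset()


def authorize(agent: AgentIdentity, action: str, audit_log: list) -> bool:
    """Least-privilege check: deny by default, allow only explicitly
    granted scopes, and log every decision so outcomes stay traceable
    (the role the article assigns to tools like AI Control Tower)."""
    allowed = action in agent.scopes
    audit_log.append((agent.agent_id, action, "allow" if allowed else "deny"))
    return allowed


# An email-triage agent, the kind of narrow IT starting domain the article suggests:
log: list = []
triage_bot = AgentIdentity("email-triage-01", frozenset({"mail:read", "ticket:create"}))
authorize(triage_bot, "mail:read", log)           # explicitly granted scope
authorize(triage_bot, "customer_db:export", log)  # outside scope: denied and logged
```

Deny-by-default plus an append-only decision log is the minimal shape of the governance the piece describes; everything else (runtime drift monitoring, PII redaction) layers on top of it.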
China Improves Law Enforcement with AI
China's market authority said its use of digital tools over the past three years has improved law enforcement, with enhanced data-sharing on its national case-handling platform boosting speed and efficiency.
This new Nature paper (using old models) illustrates the point of my latest Substack post on AI interfaces: AI did a good job diagnosing medical issues, but when users had to interact with chatbots, the interface led to confusion and worse answers. My post: https://www.oneusefulthing.
How A.I. Helped One Man (and His Brother) Build a $1.8 Billion Company
Perhaps the first 2-person company with $1B in revenue, powered by AI workflows.
Agentic Scenarios Every Marketer Must Prepare For
BCG lays out four possible agentic-commerce futures: an open bazaar, brand resurgence through data ecosystems, super apps, and creator-led authenticity. Every scenario collapses to two requirements: machine discoverability and brand desirability. Brands need to shift from SEO to answer-engine optimization.
BCG's exploration of agentic-commerce futures presents a forward-looking framework for marketers navigating AI-driven shopping agents, outlining four scenarios: an open bazaar of seamless transactions, brand resurgence via proprietary data ecosystems, super apps consolidating services, and creator-led authenticity emphasizing human touch. While the analysis astutely reduces these to core imperatives, machine discoverability and brand desirability, it risks oversimplifying the interplay of regulatory hurdles, privacy concerns, and technological fragmentation that could derail such visions. The pivot from SEO to 'answer-engine optimization' is pragmatic, urging brands to prioritize structured data and conversational relevance, yet it assumes agents will uniformly favor discoverable, desirable content without accounting for biases in AI training or monopolistic platform dominance, potentially favoring incumbents over innovators.
Key points:
• Four agentic-commerce scenarios: open bazaar, data ecosystems for brands, super apps, and creator authenticity.
• Core requirements: enhance machine discoverability and brand desirability.
• Shift strategy from SEO to answer-engine optimization for AI interactions.
Expert question (counterfactual): What if AI agents prioritize user privacy and ethical sourcing over discoverability, forcing brands to invest more in transparent, verifiable desirability rather than optimized data feeds?
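In practice, much of "answer-engine optimization" comes down to publishing machine-readable facts an agent can quote rather than prose it must infer from. A minimal sketch emitting schema.org Product JSON-LD; the product values are invented, and BCG's piece names the goal, not this specific markup:

```python
import json


def product_jsonld(name: str, brand: str, price: float, currency: str = "USD") -> str:
    """Emit schema.org Product markup that a shopping agent can parse
    directly: unambiguous name, brand, price, and availability."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "brand": {"@type": "Brand", "name": brand},
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }
    return json.dumps(data, indent=2)


# Embedded in a page as <script type="application/ld+json">...</script>,
# this gives an agent a structured record instead of marketing copy to parse.
snippet = product_jsonld("Trail Runner 3", "ExampleBrand", 129.0)
```

This covers the discoverability half of BCG's two requirements; desirability (reviews, provenance, creator endorsements) still has to be earned and then marked up the same way.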
Geopolitics
Get the full executive brief
Receive curated insights with practical implications for strategy, operations, and governance.