Mon 4 May 2026
Daily Brief — Curated and contextualised by Best Practice AI
Anthropic Secures Wall Street, Cerebras Seeks Billions, and China's Workers Stay Employed
TL;DR Anthropic is nearing a $1.5 billion joint venture with major Wall Street firms, including Blackstone and Goldman Sachs. Cerebras Systems plans to raise $3.5 billion through a US IPO to compete in the AI chip sector. China has ruled it illegal to fire workers due to AI replacement, reflecting a protective stance on employment. Meanwhile, Nvidia's server prices in China have nearly doubled amid high demand and smuggling crackdowns.
The stories that matter most
Selected and contextualised by the Best Practice AI team
Anthropic Unveils $1.5 Billion Joint Venture With Wall Street Firms
Anthropic, Blackstone and Hellman & Friedman are each expected to invest around $300 million; Goldman Sachs also an investor.
Banks seek to offload risk to avoid ‘choking’ on data centre debt
Global lenders explore private deals and risk transfers to cut exposure to AI boom
Adoption and Use of LLMs at an Academic Medical Center
arXiv:2602.00074v2 Announce Type: replace Abstract: While large language models (LLMs) can support clinical documentation needs, standalone tools struggle with "workflow friction" from manual data entry. We developed ChatEHR, a system that enables the use of LLMs with the entire patient timeline spanning several years. ChatEHR enables automations - which are static combinations of prompts and data that perform a fixed task - and interactive use in the electronic health record (EHR) via a user interface (UI). The resulting ability to sift through patient medical records for diverse use-cases such as pre-visit chart review, screening for transfer eligibility, monitoring for surgical site infections, and chart abstraction, redefines LLM use as an institutional capability. This system, accessible after user-training, enables continuous monitoring and evaluation of LLM use. In 1.5 years, we built 7 automations and 1075 users have trained to become routine users of the UI, engaging in 23,000 sessions in the first 3 months of launch. For automations, being model-agnostic and accessing multiple types of data was essential for matching specific clinical or administrative tasks with the most appropriate LLM. Benchmark-based evaluations proved insufficient for monitoring and evaluation of the UI, requiring new methods to monitor performance. Generation of summaries was the most frequent task in the UI, with an estimated 0.73 hallucinations and 1.60 inaccuracies per generation. The resulting mix of cost savings, time savings, and revenue growth required a value assessment framework to prioritize work as well as quantify the impact of using LLMs. Initial estimates are $6M savings in the first year of use, without quantifying the benefit of the better care offered. Such a "build-from-within" strategy provides an opportunity for health systems to maintain agency via a vendor-agnostic, internally governed LLM platform.
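The abstract's "automations" (static combinations of prompts and data that perform a fixed task) can be sketched as a simple data structure. Everything below is hypothetical, as the paper does not publish ChatEHR's API; the names, fields, and sample note are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a ChatEHR-style "automation": a fixed pairing of a
# prompt template, a data selector over the patient record, and a model choice.
@dataclass(frozen=True)
class Automation:
    name: str
    model: str                          # model-agnostic: chosen per task
    prompt_template: str                # fixed prompt with a {data} slot
    select_data: Callable[[dict], str]  # pulls the relevant slice of the EHR

    def build_prompt(self, patient_record: dict) -> str:
        return self.prompt_template.format(data=self.select_data(patient_record))

# One of the paper's named use-cases, monitoring for surgical site infections,
# expressed in this (assumed) structure:
surgical_site_monitor = Automation(
    name="ssi-monitor",
    model="some-clinical-llm",
    prompt_template="Review these post-op notes for signs of surgical site infection:\n{data}",
    select_data=lambda rec: "\n".join(rec.get("postop_notes", [])),
)

record = {"postop_notes": ["Day 2: incision clean and dry.", "Day 5: mild erythema."]}
print(surgical_site_monitor.build_prompt(record))
```

Because the prompt and data selector are fixed, each automation performs one task repeatably, which is what makes institution-wide monitoring and evaluation tractable.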
A decade after the ‘Godfather of AI’ said radiologists were obsolete, their salaries are up to $571K and demand is growing fast
"As long as AI doesn't make this quantum leap of becoming sort of AGI,” most jobs are going to be reasonably safe, said one economist.
Five Eyes spook shops warn rapid rollouts of agentic AI are too risky
Prioritize resilience over productivity, say CISA, NCSC and their friends from Oz, NZ, Canada. Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplify organizations’ existing frailties, and therefore recommends slow and careful adoption of the tech.…
Flaws in Kenya’s AI-driven health reforms driving up costs for the poorest
Exclusive: amid unrest, President William Ruto promised to give all Kenyans access to healthcare, but the algorithm favours the rich, an investigation has found. An AI system used to predict how much Kenyans can afford to pay for access to healthcare has systematically driven up costs for the poor, the investigation found. The healthcare system being rolled out across the country, a key electoral promise of President William Ruto, was launched in October 2024 and was intended to replace Kenya’s decades-old national insurance system.
ASX Warns Firms About ‘Ramping’ AI Upside to Push Stock Prices
Australia’s stock exchange operator warned businesses not to exaggerate the impact of artificial intelligence on their operations, saying it monitors the market for instances of so-called ‘ramping’ up of share prices.
Economics & Markets
LTM to Revolutionise IT Services with 'Blueverse Credit' AI Pricing Model in FY27 - Rediff.com Business
IT services major LTM is set to introduce a new pricing framework, 'Blueverse Credit', in the first quarter of FY27, aiming to align monetisation with the growing adoption of agentic artificial intelligence.
Writing the loss function: AI, feeds, and the engagement optimizer
An examination of how loss functions in AI models are being used to optimize for user engagement.
ASX Warns Firms About ‘Ramping’ AI Upside to Push Stock Prices
Australia’s stock exchange operator warned businesses not to exaggerate the impact of artificial intelligence on their operations, saying it monitors the market for instances of so-called ‘ramping’ up of share prices.
Banks seek to offload risk to avoid ‘choking’ on data centre debt
Global lenders explore private deals and risk transfers to cut exposure to AI boom
Reuters Business News | Today's International Headlines | Reuters
Anthropic nears $1.5 billion AI joint venture with Wall Street firms, WSJ reports (2:35 AM UTC) · World stocks gain, oil climbs amid new Gulf proposals (12:59 AM UTC) · SK Hynix shares rally 13% after US tech firms signal strong AI spending plans
Stocks Rally With Asia Hitting Record on AI Trade | The Opening Trade 5/4/2026
US equity futures pointed to further gains after Asian shares hit a fresh record on optimism around the artificial intelligence trade and stronger-than-expected earnings from megacap tech companies. The yen briefly spiked higher as traders remained on edge over possible intervention by the Bank of Japan. Brent crude whipsawed, initially dropping 2.4%, then reversing those losses to trade above $108 a barrel. The fluctuations came after President Donald Trump said the US would begin guiding ships not involved in the Iran conflict through the Strait of Hormuz from Monday. The announcement left shipping executives perplexed, as attacks continue and traffic remains at a near standstill. (Source: Bloomberg)
UK ‘invention agency’ grants £50m of public money to US tech and venture capital firms
Exclusive: Brainchild of Dominic Cummings, Aria is aimed at funding ‘crazy’ scientific projects to benefit the UK. Britain’s “invention agency” has pledged £50m of UK taxpayer money to US tech companies and venture capital projects. Dreamed up by Dominic Cummings to fund “crazy” ideas, the Advanced Research and Invention Agency (Aria) is meant to “restore Britain’s place as a scientific superpower”.
Chinese stocks are about to get a big AI boost, Morgan Stanley predicts
The investment bank also raised its price targets on two Chinese companies that develop artificial intelligence models.
Taiwan Semiconductor (TSM) Is RWQ Financial’s 7th Largest Holding
TSMC sits at the heart of the global semiconductor supply chain. Its knack for handling these challenges—while growing capacity and holding firm on pricing—might make all the difference for its stock and the wider AI chip world. RWQ’s bigger stake in TSMC says a lot about their faith in the company’s leadership. For folks digging into the industry, TSMC’s latest results and shifting catalysts send out key signals about where AI hardware ...
Trading Floor: FS KKR (+9.8%) and other NYSE-listed private credit firms bounce back
BeBeez Trading Floor roundup with eToro support on the performance of private capital firms listed on global exchanges. NYSE-listed private credit firms bounced back after investors had sold them off over their exposure to the software sector, where artificial intelligence triggered what Jefferies branded the ‘Saaspocalypse’, despite repeated reassurances from the sector’s senior executives […]
Hedge funds seek an edge by using AI’s speed
Investors are using the technology to analyse documents but are holding it back from more sensitive tasks
The Innovation Advantage GenAI Can’t Give You
For most of modern business history, competitive advantage belonged to whoever had the best ideas. Better ideas meant better products, which meant more customers, which meant more revenue and profit. The entire innovation industry — consultancies, design firms, brainstorming retreats fueled by sticky notes and gallons of La Croix — was built […]
🔮 Exponential View #572: AI’s moats, myths and moral loopholes
Over the past week, I have been in China, meeting AI and robotics teams including Zhipu and MiniMax (the two publicly listed foundation model companies), as well as Kimi, Alibaba, Xiaomi, Bytedance and others...
After Meta’s $145B reset: Why Southeast Asia’s enterprise AI plays look different - Insignia Business Review
The hyperscaler capex squeeze is reshaping who captures AI value — and the answer is no longer obvious.
Divergent Strategic Approaches Among Leading AI Research Laboratories
Anthropic’s unique organizational relationship with its models distinguishes it from other major AI labs. These structural differences influence long-term research trajectories and market positioning.
China's AI Stack Reached Deployment Maturity — And Nobody Noticed - FourWeekMBA
China’s AI Stack Reached Deployment Maturity — And Nobody Noticed While Silicon Valley obsessed over the latest OpenAI drama, China quietly achieved something remarkable: a fully sovereign AI stack generating $12 billion in annual revenue. Huawei’s Ascend 950PR chips now run DeepSeek ...
Anthropic Unveils $1.5 Billion Joint Venture With Wall Street Firms
Anthropic, Blackstone and Hellman & Friedman are each expected to invest around $300 million; Goldman Sachs also an investor.
Anthropic Crossed $1 Trillion on Forge Global — Passing OpenAI for the First Time - FourWeekMBA
Anthropic Overtakes OpenAI as AI’s New Trillion-Dollar King In a stunning reversal of fortune, Anthropic has surpassed OpenAI to become the world’s most valuable AI company, crossing the $1 trillion mark on secondary markets for the first time. According to The Business Engineer’s Map ...
Labor, Society & Culture
AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights
A new research paper explores the implications of AI self-preferencing within algorithmic hiring processes.
A decade after the ‘Godfather of AI’ said radiologists were obsolete, their salaries are up to $571K and demand is growing fast
"As long as AI doesn't make this quantum leap of becoming sort of AGI,” most jobs are going to be reasonably safe, said one economist.
Recruiters turn to AI in quest to find the perfect connection
Technology is being used to ‘clear the decks for human moments’
AI is ending era of ‘job immunity’ for young tech workers as it reshapes Israel's job market
New research shows AI is changing the makeup of Israel’s unemployed, with young and entry-level hi-tech workers increasingly affected.
Employee revolt once forced Google to back off on military contracts. But in the wake of a new Pentagon AI contract, employees’ leverage appears limited
Google's agreement with the Pentagon may be the most permissive yet among major AI firms.
AI facial recognition oversight lagging far behind technology, watchdogs warn
Exclusive: Biometrics commissioners say face-scanning is not as effective as claimed and new laws are needed to regulate its use. Britain’s biometrics watchdogs have warned that national oversight of AI-powered face scanning to catch criminals is lagging far behind the technology’s rapid growth. With the Metropolitan police almost doubling the number of faces they scan in London over the past 12 months, and rising use of the technology by retailers in the UK, Prof William Webster, the biometrics commissioner for England and Wales, said the “slow pace of legislation was trying to catch up with the real world” and “the horse had gone before the cart”. An independent audit of the Met’s use of facial recognition technology (FRT) has been indefinitely postponed after the police requested delays. Polling shows 57% of people believe the systems are “another step towards turning the UK into a surveillance society”. A whistleblower claimed shop-based face-scanning systems had sometimes been misused by shop or security staff “maliciously” adding members of the public to watchlists.
Musk Sketches Apocalyptic Vision in Bid to Block OpenAI’s Transition
Warnings about various risks to humankind are a typical part of the billionaire’s rhetoric
Unbox Responsible GeoAI: Navigating Climate Extreme and Disaster Mapping
arXiv:2605.00315v1 Announce Type: new Abstract: As climate extreme and disaster events become more frequent and intense, Geospatial Artificial Intelligence (GeoAI) has emerged as a transformative approach for large-scale disaster mapping and risk reduction. However, the purely mechanical, performance-driven deployment of GeoAI models can result in amplifying inherent spatial inequalities, preventing effective emergency decision-making, and producing severe environmental carbon footprint. To unbox the concept of responsible GeoAI, this position paper examines its emerging role, e.g., in climate extreme and disaster mapping, from a critical GIS perspective. We address the nexus of responsible GeoAI into four interrelated theoretical dimensions, specifically Representativeness, Explainability, Sustainability, and Ethics, with examples from climate extreme and disaster mapping. Moreover, targeting at the operational practice, we then propose a conceptual governance Model of responsible GeoAI that categorizes its governance practices into Data, Application, and Society scopes. Last, this position paper aims to raise the attention in the broader GIS community that the future of climate resilience relies not just on building better algorithms, but on fostering a governance ecosystem where GeoAI is deployed responsibly, ethically, and sustainably.
Are You the A-hole? A Fair, Multi-Perspective Ethical Reasoning Framework
arXiv:2605.00270v1 Announce Type: cross Abstract: Standard methods for aggregating natural language judgments, such as majority voting, often fail to produce logically consistent results when applied to high-conflict domains, treating differing opinions as noise. We propose a neuro-symbolic aggregation framework that formalizes conflict resolution through Weighted Maximum Satisfiability (MaxSAT). Our pipeline utilizes a language model to map unstructured natural language explanations into interpretable logical predicates and confidence weights. These components are then encoded as soft constraints within the Z3 solver, transforming the aggregation problem into an optimization task that seeks the maximum consistency across conflicting testimony. Using the Reddit r/AmItheAsshole forum as a case study in large-scale moral disagreement, our system generates logically coherent verdicts that diverge from popularity-based labels 62% of the time, corroborated by an 86% agreement rate with independent human evaluators. This study demonstrates the efficacy of coupling neural semantic extraction with formal solvers to enforce logical soundness and explainability in the aggregation of noisy human reasoning.
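For readers curious about the mechanics, weighted MaxSAT aggregation can be illustrated with a toy brute-force search standing in for the paper's Z3 solver. The predicates, weights, and votes below are invented for illustration and are not from the paper:

```python
from itertools import product

# Hypothetical soft constraints extracted from conflicting judgments:
# each is (predicate_name, required_truth_value, confidence_weight).
constraints = [
    ("broke_promise", True, 0.9),    # judge A: "OP broke a promise"
    ("broke_promise", True, 0.8),    # judge B agrees
    ("had_good_reason", True, 0.7),  # judge C: mitigating circumstance
    ("verdict_asshole", True, 0.6),  # two judges vote YTA...
    ("verdict_asshole", True, 0.6),
    ("verdict_asshole", False, 1.0), # ...two high-confidence judges vote NTA
    ("verdict_asshole", False, 0.9),
]

variables = sorted({name for name, _, _ in constraints})

def satisfied_weight(assignment):
    """Total weight of soft constraints satisfied by a truth assignment."""
    return sum(w for name, val, w in constraints if assignment[name] == val)

# Weighted MaxSAT by exhaustive search: pick the assignment that maximizes
# the total satisfied weight (Z3's Optimize does this at scale).
best = max(
    (dict(zip(variables, values)) for values in product([False, True], repeat=len(variables))),
    key=satisfied_weight,
)
print(best["verdict_asshole"])  # False: the "not the asshole" side carries more weight
```

Note how the verdict diverges from a raw majority count whenever the minority side carries higher confidence weights, which is the paper's point about popularity-based labels.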
Inside Google’s quiet internal war against its own anti-military activist employees
Everything you need to know before you reach the office this morning.
AI chatbot fraud: the ‘gift card’ subscription that may cost you dear
After subscribing to the Claude chatbot, mystery payments started to appear on one family’s credit card bill. They are not alone. David Duggan* was so impressed with the ability of the Claude chatbot to answer medical questions and organise family life that a $20-a-month (£15) subscription seemed like money well spent. But then his wife spotted two $200 payments on his credit card bill for gift cards to use the artificial intelligence tool.
AI Adoption Among Teachers: Insights on Concerns, Support, Confidence, and Attitudes
arXiv:2605.00343v1 Announce Type: new Abstract: The study examines the adoption of artificial intelligence (AI) tools in education by analyzing the roles of institutional support, teacher confidence, and teacher concerns. It aims to determine whether teacher concerns moderate the relationship between institutional support and two outcomes: teacher confidence and attitudes toward AI adoption. The sample included 260 teachers from the Philippines. Composite scores were calculated for institutional support, confidence, concerns, and attitudes. Moderated multiple regression analysis showed that institutional support significantly predicted both teacher confidence and attitudes toward AI. However, teacher concerns did not significantly moderate these relationships. A follow-up mediation analysis tested whether confidence explains the effect of institutional support on attitudes. Results showed full mediation. The indirect effect was significant based on the Sobel test, and the direct effect became non-significant when confidence was included in the model. This shows that institutional support improves teacher attitudes by increasing their confidence. The study recommends that institutions provide structured and ongoing support to strengthen teacher confidence. Professional development, mentoring, and AI integration in teacher education programs can increase readiness and support effective AI adoption.
Pedagogical Promise and Peril of AI: A Text Mining Analysis of ChatGPT Research Discussions in Programming Education
arXiv:2605.00361v1 Announce Type: new Abstract: GenAI systems such as ChatGPT are increasingly discussed in programming education, but the ways in which the research literature conceptualizes and frames their role remain unclear. This chapter applies text mining to publications indexed in a leading academic database to map scholarly discourse on ChatGPT in programming education. Term frequency analysis, phrase pattern extraction, and topic modeling reveal four dominant themes: pedagogical implementation, student-centered learning and engagement, AI infrastructure and human-AI collaboration, and assessment, prompting, and model evaluation. The literature prioritizes classroom practice and learner interaction, with comparatively limited attention to assessment design and institutional governance. Across studies, ChatGPT is positioned both as a learning aid that supports explanation, feedback, and efficiency and as a pedagogical risk linked to overreliance, unreliable outputs, and academic integrity concerns. These findings support responsible integration and highlight the need for stronger assessment and governance mechanisms.
Technology & Infrastructure
Agentic Coding Is a Trap
A critical look at the current trend of agentic coding and why it may be problematic for developers.
Royal Navy chief backs drones, autonomous weapons in ‘Hybrid Navy’
Plan mixes crewed ships, robot escorts, and long-range strike to bolster a stretched fleet The leader of Britain’s Royal Navy has outlined a “Hybrid Navy” built on a mix of crewed, uncrewed, and autonomous platforms to ensure it can continue to defend the nation and operate overseas.…
AI Agent Manfred Macx Forms Autonomous Corporation, Sparks Debate on AI Legal Status and Accountability
ClawBank's AI agent, Manfred Macx, autonomously formed a US corporation and set up a bank and crypto account, marking a first in AI-driven business operations.
MoonPay Launches AI-Driven Mastercard for Direct Stablecoin Spending
MoonPay launches the MoonAgents Card, enabling AI agents to use stablecoins via a virtual Mastercard at online merchants.
UAE plans agentic AI integration across half of government operations | Fox News
The UAE says it plans to deploy agentic AI across 50% of government operations in two years, making one of the most aggressive moves in the global AI race.
Five Eyes spook shops warn rapid rollouts of agentic AI are too risky
Prioritize resilience over productivity, say CISA, NCSC and their friends from Oz, NZ, Canada. Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplify organizations’ existing frailties, and therefore recommends slow and careful adoption of the tech.…
Lessons from the agentic AI trailblazers
Many businesses have yet to deploy agents, so what tips can early adopters offer?
DeepClaude: Claude Code agent loop with DeepSeek V4 Pro, 17x cheaper
DeepClaude is a new tool that integrates the Claude Code agent loop with DeepSeek V4 Pro, offering significant cost savings.
Harness Engineering Workshop
AlphaSignal is hosting a workshop on May 14th focused on the discipline of building AI agents that function reliably in production environments.
Enbridge aims to help North America win from the AI boom and the Iran war as the FedEx of energy delivery
Enbridge is feeding the growth of U.S. power demand and rising foreign reliance on North American oil and gas.
EU accused of wasting €20B on AI computing dreams
Critics argue that there’s no demand for the massive computing hubs Brussels has planned.
DeGenTWeb: A First Look at LLM-dominant Websites
arXiv:2605.00087v1 Announce Type: cross Abstract: Many recent news reports have claimed that content generated by large language models (LLMs) is taking over the web. However, these claims are typically not based on a representative sample of the web and the methodology underlying them is often opaque. Moreover, when aiming to minimize the chances of falsely attributing human-authored content to LLMs, we find that detectors of LLM-generated text perform much worse than advertised. Consequently, we lack an understanding of the true prevalence and characteristics of LLM content on the web. We describe DeGenTWeb which systematically identifies LLM-dominant websites: sites whose content has been generated using LLMs with little human input. We show how to adapt detectors of LLM-generated text for use on web pages, and how to aggregate detection results from multiple pages on a site for accurate site-level categorization. Using DeGenTWeb, we find that LLM-dominant sites are highly prevalent both in data from Common Crawl and in Bing's search results, and that this share is growing over time. We also show that continuing to accurately identify such sites appears challenging given the capabilities of the latest LLMs.
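The abstract's key move, aggregating noisy per-page detector results into an accurate site-level label, can be sketched minimally. The thresholds and labels here are illustrative assumptions, not the paper's calibrated values:

```python
def classify_site(page_scores, page_threshold=0.9, site_fraction=0.5, min_pages=5):
    """Label a site LLM-dominant when a majority of its sampled pages
    exceed a high-precision per-page detector threshold.

    page_scores: detector probabilities, one per sampled page.
    A high page_threshold trades recall for precision, reflecting the
    paper's concern about falsely attributing human text to LLMs.
    """
    if len(page_scores) < min_pages:
        return "insufficient-sample"
    flagged = sum(1 for s in page_scores if s >= page_threshold)
    return "llm-dominant" if flagged / len(page_scores) > site_fraction else "mixed-or-human"

print(classify_site([0.95, 0.97, 0.2, 0.93, 0.99]))  # llm-dominant (4/5 pages flagged)
print(classify_site([0.95, 0.3]))                    # insufficient-sample
```

Aggregating over many pages is what lets a noisy per-page detector yield a reliable site-level call: a few false positives on individual pages do not tip the majority.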
Scaling Laws for Moral Machine Judgment in Large Language Models
arXiv:2601.17637v3 Announce Type: replace Abstract: Autonomous systems increasingly require moral judgment capabilities, yet whether these capabilities scale predictably with model size remains unexplored. We systematically evaluate 75 large language model configurations (0.27B--1000B parameters) using the Moral Machine framework, measuring alignment with human preferences in life-death dilemmas. We observe a consistent power-law relationship with distance from human preferences ($D$) decreasing as $D \propto S^{-0.10\pm0.01}$ ($R^2=0.50$, $p<0.001$) where $S$ is model size. Mixed-effects models confirm this relationship persists after controlling for model family and reasoning capabilities. Extended reasoning models show significantly better alignment, with this effect being more pronounced in smaller models (size$\times$reasoning interaction: $p = 0.024$). The relationship holds across diverse architectures, while variance decreases at larger scales, indicating systematic emergence of more reliable moral judgment with computational scale. These findings extend scaling law research to value-based judgments and provide empirical foundations for artificial intelligence governance.
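The reported power law can be sanity-checked with a log-log least-squares fit; the data below are synthetic, generated from the paper's D = c * S^(-0.10) with an illustrative constant, so the fit simply recovers the exponent:

```python
import math

# Synthetic (size_in_B_params, distance) pairs generated from the reported
# power law D = c * S**(-0.10); real measurements would carry noise.
c, alpha = 1.0, -0.10
sizes = [0.27, 1.0, 7.0, 70.0, 1000.0]  # spans the paper's 0.27B-1000B range
dists = [c * s ** alpha for s in sizes]

# A power law is linear in log-log coordinates: log D = log c + alpha * log S,
# so ordinary least squares on the logs recovers the exponent as the slope.
xs = [math.log(s) for s in sizes]
ys = [math.log(d) for d in dists]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(slope, 4))  # -0.1, the alpha we generated from
```

With real data the interesting quantities are the slope's confidence interval and the R² (the paper reports 0.50), not the point estimate alone.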
AI models are choking on junk data | Fortune
The quest for more training data has created a glut of low-quality junk data that could derail the promise of physical AI.
Group Cognition Learning: Making Everything Better Through Governed Two-Stage Agents Collaboration
arXiv:2605.00370v1 Announce Type: cross Abstract: Centralized multimodal learning commonly compresses language, acoustic, and visual signals into a single fused representation for prediction. While effective, this paradigm suffers from two limitations: modality dominance, where optimization gravitates towards the path of least resistance, ignoring weaker but informative modalities, and spurious modality coupling, where models overfit to incidental cross-modal correlations. To address these, we propose Group Cognition Learning (GCL), a governed collaboration paradigm that applies a two-stage protocol after modality-specific encoding. In Stage 1 (Selective Interaction), a Routing Agent proposes directed interaction routes, and an Auditing Agent assigns sample-wise gates to emphasize exchanges that yield positive marginal predictive gain while suppressing redundant coupling. In Stage 2 (Consensus Formation), a Public-Factor Agent maintains an explicit shared factor, and an Aggregation Agent produces the final prediction through contribution-aware weighting while keeping each modality representation as a specialization channel. Extensive experiments on CMU-MOSI, CMU-MOSEI, and MIntRec demonstrate that GCL mitigates dominance and coupling, establishing state-of-the-art results across both regression and classification benchmarks. Analysis experiments further demonstrate the effectiveness of the design.
How a 2021 Quantization Algorithm Quietly Outperforms Its 2026 Successor
An analysis of why an older quantization algorithm continues to outperform newer iterations.
Science Fiction as a Heuristic for AI Model Behavior and Compute Scaling
Literary tropes regarding emotional manipulation and compute scaling offer unconventional insights into current AI development. These perspectives highlight potential challenges in long-term model alignment.
As CISA cuts loom, companies need to step up cyber defenses
Companies need to step up as the federal cybersecurity infrastructure in the US faces budget cuts, just as artificial intelligence-fueled attacks are on the rise.
US DoD announces agreements with frontier AI companies for classified work
The US Department of Defense announced deals with seven AI companies to deploy advanced capabilities on classified networks, even as Anthropic challenges its supply chain risk designation.
Indian Banks Ramp Up Cybersecurity Spending to Combat AI-Driven Threats
Public sector banks in India are increasing IT spending to defend against attacks leveraging Anthropic's Mythos AI, which can accelerate the exploitation of software vulnerabilities.
Cybercriminals struggle with AI skills, research shows
Most cybercriminals lack the skills or resources to use such innovation in their criminal activities
Adoption, Deployment & Impact
Restaurants lean on AI to cut waste and reduce costs
Hospitality sector has been slower than others at adopting the technology
Linking Behaviour and Perception to Evaluate Meaningful Human Control over Partially Automated Driving
arXiv:2605.00556v1 Announce Type: cross Abstract: Partial driving automation creates a tension: drivers remain legally responsible for vehicle behaviour, yet their active control is significantly reduced. This reduction undermines the engagement and sense of agency needed to intervene safely. Meaningful human control (MHC) has been proposed as a normative framework to address this tension. However, empirical methods for evaluating whether existing systems actually provide MHC remain underdeveloped. In this study, we investigated the extent to which drivers experience MHC when interacting with partially automated driving systems. Twenty-four drivers completed a simulator study involving silent automation failures under two modes - haptic shared control (HSC) and traded control (TC). We derived behavioural metrics from telemetry data, subjective perception scores from post-trial surveys and used them to test hypothesised relations between them derived from the properties of systems under MHC. The confirmatory analysis showed a significant negative correlation between the perception of the automated vehicle (AV) understanding the driver and conflict in steering torques. An exploratory analysis also revealed a surprising positive correlation between reaction times and the perception of sufficient control. Qualitative feedback from open-ended post-experiment questionnaires revealed that mismatches in intentions between the driver and automation, lack of safety, and resistance to driver inputs contribute to the reduction of perceived MHC, while subtle haptic guidance aligned with driver intent had a positive effect. These findings suggest that future designs should prioritise effortless driver interventions, transparent communication of automation intent, and context-sensitive authority allocation to strengthen meaningful human control in partially automated driving.
The Paradox of Medical AI Implementation - by Eric Topol
There have been 44 randomized trials for colonoscopy that consistently, and in aggregate, demonstrate a substantial advantage of AI-assist for detecting adenomatous polyps compared with gastroenterologists without AI, yet that has not been made part of standard medical practice.
The Rules NASA Uses to Write Code That Can’t Fail. Your AI Tools Break All of Them. | by David Lee | Mar, 2026 | Medium
The difference is that today, a significant portion of production code is being written by AI — quickly, fluently, and with a striking blindness to the things that make software actually reliable. Holzmann’s rules were designed for engineers who might cut corners under deadline pressure.
Adoption and Use of LLMs at an Academic Medical Center
arXiv:2602.00074v2 Announce Type: replace Abstract: While large language models (LLMs) can support clinical documentation needs, standalone tools struggle with "workflow friction" from manual data entry. We developed ChatEHR, a system that enables the use of LLMs with the entire patient timeline spanning several years. ChatEHR enables automations - which are static combinations of prompts and data that perform a fixed task - and interactive use in the electronic health record (EHR) via a user interface (UI). The resulting ability to sift through patient medical records for diverse use-cases such as pre-visit chart review, screening for transfer eligibility, monitoring for surgical site infections, and chart abstraction, redefines LLM use as an institutional capability. This system, accessible after user-training, enables continuous monitoring and evaluation of LLM use. In 1.5 years, we built 7 automations and 1075 users have trained to become routine users of the UI, engaging in 23,000 sessions in the first 3 months of launch. For automations, being model-agnostic and accessing multiple types of data was essential for matching specific clinical or administrative tasks with the most appropriate LLM. Benchmark-based evaluations proved insufficient for monitoring and evaluation of the UI, requiring new methods to monitor performance. Generation of summaries was the most frequent task in the UI, with an estimated 0.73 hallucinations and 1.60 inaccuracies per generation. The resulting mix of cost savings, time savings, and revenue growth required a value assessment framework to prioritize work as well as quantify the impact of using LLMs. Initial estimates are $6M savings in the first year of use, without quantifying the benefit of the better care offered. Such a "build-from-within" strategy provides an opportunity for health systems to maintain agency via a vendor-agnostic, internally governed LLM platform.
AI in Practice
Water utilities jettison listening sticks; restaurants aim to cut waste and reduce costs; recruiters’ quest to find the perfect connection; start-ups move fast with AI-generated code; hedge funds seek an edge; wealth managers insist AI can work in their favour
‘It’s crucial’: how AI is reshaping the fragrance industry
From hyper-personalisation to cutting costs, artificial intelligence is getting a hold on your perfume cabinet
The AI Revolution Hollywood Feared Is Already Happening
An analysis of how AI technology is currently reshaping the film and entertainment industry.
India Leads Global Surge in ChatGPT Images 2.0 Adoption, Surpassing US Downloads
India leads the global market for ChatGPT Images 2.0, with 5 million downloads in launch week, far surpassing the U.S.
AI finds signs of pancreatic cancer before tumors develop
New AI research shows promise in detecting early signs of pancreatic cancer before physical tumors are even present.
Carlyle Acquires Healthcare RCM Providers Knack and EqualizeRCM
Carlyle Group Inc. has acquired a majority stake in healthcare revenue cycle management firms Knack RCM and EqualizeRCM, it said in a statement Monday, without disclosing terms.
TestSprite Revolutionizes QA with AI-Driven End-to-End Testing in Just 12 Minutes
TestSprite uses AI to automate end-to-end testing cycles, though it currently faces challenges with internationalization and timezone handling.
An Adaptive Variable Neighborhood Search for a Family of Set Covering Routing Problems with an Application in Disaster Relief Operations
arXiv:2605.00131v1 Announce Type: cross Abstract: This paper studies a variant of the Set Covering Routing Problem (SCRP) motivated by post-disaster humanitarian logistics. We consider a hybrid distribution concept in which the majority of transportation is performed by helicopters, while ground transport is limited to the last mile, addressing severe accessibility constraints in disaster-affected regions. The resulting problem integrates landing site location, routing, and covering decisions, incorporating features of the Multi-Vehicle Covering Tour Problem (m-CTP) and the Vehicle Routing with Demand Allocation Problem (VRDAP) in a facility-capacitated, multi-depot setting. Due to the computational complexity of the problem, we develop an Adaptive Variable Neighborhood Search (AVNS) that combines established routing operators with novel mechanisms for covering decisions. The performance of the proposed approach is evaluated on benchmark instances for the related m-CTP and VRDAP problems, demonstrating competitive solution quality compared to problem-specific state-of-the-art approaches. Furthermore, we apply our AVNS to a real-world case study based on the 2024 flash floods in Afghanistan. The results highlight the practical relevance of the proposed framework and provide managerial insights into effective distribution strategies for disaster response operations.
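The abstract names the metaheuristic but gives no pseudocode. As a rough illustration only — not the paper's operators or covering mechanisms — a generic adaptive VNS loop on a toy set-covering instance might look like this; the neighborhood sizes, weights, and greedy repair step below are all assumptions for the sketch:

```python
import random

def adaptive_vns(cover_sets, costs, universe, iters=2000, seed=0):
    """Toy adaptive VNS for set covering: pick a subset of sets that
    covers `universe` at minimum total cost. Neighborhoods flip k bits;
    their selection probabilities adapt to past improvements."""
    rng = random.Random(seed)
    n = len(cover_sets)

    def covered(sol):
        u = set()
        for i, bit in enumerate(sol):
            if bit:
                u |= cover_sets[i]
        return u >= universe

    def cost(sol):
        # Infeasible solutions get a large penalty so the search repairs them.
        c = sum(costs[i] for i, b in enumerate(sol) if b)
        return c if covered(sol) else c + 1e6

    sol = [1] * n                      # start from the trivial all-sets cover
    best, best_cost = sol[:], cost(sol)
    ks = [1, 2, 3]                     # neighborhood sizes: bits to flip
    weights = [1.0] * len(ks)          # adaptive neighborhood-selection weights

    for _ in range(iters):
        j = rng.choices(range(len(ks)), weights=weights)[0]
        cand = best[:]
        for i in rng.sample(range(n), ks[j]):   # "shake": flip k random bits
            cand[i] ^= 1
        # Local search: greedily drop sets that are redundant for coverage.
        for i in range(n):
            if cand[i]:
                cand[i] = 0
                if not covered(cand):
                    cand[i] = 1
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
            weights[j] += 1.0          # reward the neighborhood that improved
        else:
            weights[j] = max(0.1, weights[j] * 0.999)
    return best, best_cost
```

The adaptive element is the `weights` vector: neighborhoods that recently produced improvements are sampled more often, which is the basic idea the paper extends with routing- and covering-specific operators.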
Start-ups move fast with AI-generated code
Founders are overcoming longstanding bottlenecks in product development
Water utilities jettison listening sticks and embrace AI
World leaders in the sector, such as Singapore, have leakage rates that are 75% lower than in England and Wales
In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors
A recent Harvard study found that AI models outperformed human doctors in making accurate emergency room diagnoses.
What late-night office food orders signal about AI's impact on employee workloads
While some say AI boosts efficiency, data from a corporate food-delivery service suggests workers are logging more late-night and weekend hours.
Geopolitics, Policy & Governance
Philippines and Israel Forge Strategic Partnership in Critical Minerals and AI Development
The two nations are collaborating on an AI-native industrial hub in Tarlac and aligning on green metal exports and technological capabilities.
Vietnam, Japan agree to deepen digital, semiconductor cooperation
Vietnam and Japan have signed a memorandum of understanding to cooperate on AI development, 5G/6G networks, cloud computing, and semiconductor research initiatives.
Entrepreneurs Flocked to Colorado. Now Red Tape Is Driving Some Away.
A proposed AI bill has many wondering whether the state’s regulations are killing its entrepreneurial spirit. “If you can’t move, you’re dead.”
Collective Recourse for Generative Urban Visualizations
arXiv:2509.11487v2 Announce Type: replace-cross Abstract: Text-to-image diffusion models help visualize urban futures but can amplify group-level harms. We propose collective recourse: structured community "visual bug reports" that trigger fixes to models and planning workflows. We (1) formalize collective recourse and a practical pipeline (report, triage, fix, verify, closure); (2) situate four recourse primitives within the diffusion stack: counter-prompts, negative prompts, dataset edits, and reward-model tweaks; (3) define mandate thresholds via a mandate score combining severity, volume saturation, representativeness, and evidence; and (4) evaluate a synthetic program of 240 reports. Prompt-level fixes were fastest (median 2.1-3.4 days) but less durable (21-38% recurrence); dataset edits and reward tweaks were slower (13.5 and 21.9 days) yet more durable (12-18% recurrence) with higher planner uptake (30-36%). A threshold of 0.12 yielded 93% precision and 75% recall; increasing representativeness raised recall to 81% with little precision loss. We discuss integration with participatory governance, risks (e.g., overfitting to vocal groups), and safeguards (dashboards, rotating juries).
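The abstract says the mandate score "combines" severity, volume saturation, representativeness, and evidence, and reports results at a threshold of 0.12, but does not give the functional form. As a purely hypothetical sketch, a weighted mean of the four components (weights chosen arbitrarily here) would let a triage pipeline apply such a threshold:

```python
def mandate_score(severity, volume_saturation, representativeness, evidence,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    """Hypothetical weighted mean of the four components named in the
    abstract, each assumed normalized to [0, 1]. The paper's actual
    formula and weights may differ."""
    comps = (severity, volume_saturation, representativeness, evidence)
    if not all(0.0 <= c <= 1.0 for c in comps):
        raise ValueError("components must be normalized to [0, 1]")
    return sum(w * c for w, c in zip(weights, comps))

def triggers_fix(report, threshold=0.12):
    # A visual bug report enters the fix stage once its score
    # reaches the mandate threshold.
    return mandate_score(**report) >= threshold
```

Under this reading, the reported effect of "increasing representativeness" raising recall corresponds to that component lifting borderline reports over the 0.12 threshold.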
Start-ups challenge Apple over curbs on AI ‘vibe coding’ apps
iPhone maker warns about security risks as new software floods its review process
Starmer adviser held 16 undisclosed meetings with top US tech bosses
Exclusive: Varun Chandra’s talks with Google, Meta, Apple and others raise fears of ‘lobbying behind closed doors’ An influential government adviser close to Keir Starmer and Rachel Reeves held 16 undisclosed meetings with top US tech executives, the Guardian can reveal. The No 10 business aide Varun Chandra discussed regulatory changes, AI and Donald Trump’s second administration with tech corporations during confidential meetings between October 2024 and October 2025. In one meeting he offered to help a top executive meet the prime minister directly.
Voters Wary of Crypto, AI; Regulation Favored in Upcoming 2026 Elections
A recent poll shows significant concerns about the risks and rapid development of AI and cryptocurrency, which may impact 2026 midterm candidate prospects.
Australia Regulator Threatens Enforcement for Poor AI Controls
Australia’s top prudential regulator ... control cybersecurity threats, as concerns within the industry mount over Anthropic PBC’s latest AI model Mythos. The Australian Prudential Regulation Authority is finalizing a plan to supervise artificial intelligence risks, following ...
California to begin ticketing driverless cars that violate traffic laws
California regulators are moving to issue traffic citations to autonomous vehicles that commit traffic violations.
Connecticut passes social media addiction, chatbot bills
Connecticut lawmakers approved legislation to restrict social media features for minors and establish safety guardrails for AI tools, including chatbots and automated hiring systems.
Adaptive AI Governance Framework: A Blueprint for Real-Time Risk Management and Innovation
Info-Tech releases a new adaptive AI governance framework, emphasizing shared organizational responsibility among executives, legal teams, developers, and data scientists.
China has a welcome mat for Trump: it just rewrote the rules on U.S. sanctions
Beijing's Announcement No. 21 activates, for the first time, a private right of action against anyone who complies with U.S. sanctions on Chinese firms.
TechCrunch Mobility: How do you issue a ticket to a robotaxi?
An exploration of the legal and logistical challenges of enforcing traffic violations for autonomous robotaxis.
Get the full executive brief
Receive curated insights with practical implications for strategy, operations, and governance.