AI Intelligence Brief

Wed 15 April 2026

Daily Brief — Curated and contextualised by Best Practice AI

125 articles

ASML Raises Forecasts, TSMC Beats Earnings, and Snap Cuts 1,000 Jobs

TL;DR ASML raised its 2026 revenue forecast to €36-40 billion, citing surging demand for AI chipmaking tools. TSMC reported Q1 2026 revenue of $35.71 billion, up 35% year-over-year, with $6 billion in planned capex. Snap announced layoffs of about 1,000 employees, or 16% of its workforce, citing rapid advances in artificial intelligence. Maine passed the first US state ban on new data center construction, potentially influencing other regions. The IMF stated that rapid AI advances have yet to boost global productivity.

Editor's highlights

The stories that matter most

Selected and contextualised by the Best Practice AI team

4 of 125 articles

Economics & Markets

15 articles
AI Investment & Valuations (11 articles)
Editor's pick · PAYWALL · Consumer & Retail
FT · 5 days ago

Allbirds is turning into an AI compute provider, because of course it is

Flipping the ‘Bird

Editor's pick · Technology
machine-learning-made-simple.medium.com · 6 days ago

March 2026 Changed how we will Price AI. | by Devansh | Apr, 2026 | Medium

AI Market Report March 2026: Google’s price war, America’s shadow-banking energy play, and why the whole AI stack just fractured into four industries — each with different economics. Every month, I break down the most important developments in AI. Usually, they cluster around a theme — a new model race, a pricing shift, a regulatory wave. March was different. March was the month where AI stopped behaving like a software industry. Let me explain…

Editor's pick · Technology
agenticaihype.substack.com · 6 days ago

ANTHROPIC CROWNED #1, OPENAI GRABS $122B, AND MULTI-AGENT SYSTEMS EXPLODE 1,445%

Stanford's 2026 AI Index drops, Claude Mythos leaks from Project Glasswing, humanoid robots eye a $20B feeding frenzy, and enterprise agents officially go mainstream. Apr 14, 2026. Happy Tuesday, agents. If you felt the tectonic plates of AI shift overnight, you were not imagining things. Stanford just published its 2026 AI Index and it reads like a victory lap for the entire industry. OpenAI banked the single largest private round in history. Anthropic is wearing the crown. Multi-agent systems are going vertical on enterprise dashboards…

Editor's pick · Manufacturing & Industrials
VentureBeat · 5 days ago

Traza raises $2.1 million led by Base10 to automate procurement workflows with AI

For decades, procurement has been the back office that enterprise software forgot. Billions of dollars flow through vendor negotiations, purchase orders, and supplier communications every year at the largest manufacturers and construction companies in the country — and the vast majority of that work still runs on email threads, spreadsheets, and phone calls. Traza, a newly launched startup headquartered in New York, believes the moment has arrived to change that. The company announced today the close of a $2.1 million pre-seed round led by Base10 Partners, with participation from Kfund, a16z scouts, Clara Ventures, Masia Ventures, and a roster of angel investors including Pepe Agell, who scaled Chartboost to 700 million monthly users before its acquisition by Zynga. The funding is modest by Silicon Valley standards. But Traza's pitch is anything but incremental: the company deploys AI agents that don't just recommend procurement actions — they execute them autonomously, handling vendor outreach, request-for-quote generation, order tracking, supplier communications, and invoice processing without continuous human supervision. "AI is redesigning the procurement category from the ground up," said Silvestre Jara Montes, Traza's CEO and co-founder, in an exclusive interview with VentureBeat. "This wave of AI won't just build procurement software — it will rebuild how procurement works."

Why procurement contracts silently lose millions after the ink dries

The market Traza is targeting is enormous and, by the company's framing, spectacularly underserved. The procurement software market alone exceeds $8 billion and grows at roughly 10% annually. But the real cost sits in the labor — the armies of people, agencies, and ad hoc workarounds required to actually run procurement operations at scale. Most enterprises meaningfully engage with only their top 20% of suppliers.
The remaining 80% — the vendor outreach, order tracking, invoice reconciliation, and compliance monitoring — goes largely unmanaged. Research from World Commerce & Contracting and Ironclad finds that organizations lose an average of 11% of total contract value after agreements are signed, a phenomenon described as "post-signature value leakage." As Tim Cummins, President of WorldCC, put it: "The research shows that the 11% value gap is not caused by poor negotiation, but by how contracts are managed after signature." For a large enterprise with $500 million in annual contracted spend, that represents $55 million vanishing each year — not from bad deals, but from the operational void between what gets agreed at the negotiating table and what actually gets executed on the ground. Missed savings, unauthorized changes, and poor renewal planning are responsible for the biggest losses. Jara Montes argues that Traza sits precisely in this gap. "The 11% spans commercial, operational, and compliance leakage. We own the operational layer — and that's where the most recoverable value sits," he said. "Supplier tail management that never happens, RFQ processes skipped because someone ran out of bandwidth, invoice discrepancies that slip through unnoticed. That's where contracts bleed value after signing, and that's exactly what we automate." The numbers from Traza's early deployments, while nascent, are striking: the company claims a 70% reduction in human hours spent on procurement tasks and procurement cycles running three times faster than manual baselines.

How AI agents crossed the line from procurement copilot to autonomous worker

To understand what makes Traza's approach different, it helps to understand what "AI for procurement" has meant until now. For the past several years, the term largely described dashboards, analytics layers, and recommendation engines that surfaced insights but left every decision and action in a human's hands.
Products from incumbents like SAP Ariba and Coupa — as well as newer entrants like Zip, Fairmarkit, and Tonkean — have layered AI capabilities on top of existing systems of record. But the gap between piloting AI and achieving production-scale impact remains stark, with 49 percent of procurement teams running pilots but only 4 percent reaching meaningful deployment. Traza's bet is that 2026 represents an inflection point. AI agents now possess the multi-step reasoning, tool use, and contextual memory required to execute full procurement workflows autonomously — from vendor discovery through invoice processing. The company frames this not as an upgrade to existing procurement software, but as an entirely new product category. "The incumbents built systems of record. They organize procurement data and they've never executed procurement work — and their AI additions don't fundamentally change that," Jara Montes said. "What they're shipping is a recommendation layer on the same underlying architecture. A human still has to act on every suggestion. We replace the operational layer entirely." Industry data supports the thesis that enterprises are hungry for this shift. According to the 2025 Global CPO Survey from EY, 80 percent of global chief procurement officers plan to deploy generative AI in some capacity over the next three years, and 66 percent consider it a high priority over the next 12 months. A 2025 ABI Research survey found that 76% of supply chain professionals already see autonomous AI agents as ready to handle core tasks like reordering, supplier outreach, and shipment rerouting without human intervention — and early deployments are demonstrably reducing supply chain operational costs by 20 to 35%.

Inside the workflow: what Traza's AI does and where humans still make the call

In a typical deployment, Traza's AI agent takes over the operational labor that currently lives in inboxes, spreadsheets, and manual follow-up chains.
In a standard RFQ workflow, the agent identifies suitable suppliers, drafts and sends the request for quotes, monitors supplier responses, follows up automatically when responses lag, parses incoming quotes regardless of their format, and builds a structured comparison table ready for a human decision-maker. The key design principle is deliberate: humans remain in the loop at critical junctures. "At critical steps — approving a purchase order, flagging a compliance issue, committing spend above a threshold — a human is always in the loop," Jara Montes explained. "That's not a limitation, it's the design. It's how you maintain the auditability enterprises require while moving faster than any manual process could. You earn expanded autonomy over time, as trust is built and results compound." When asked about the risk of AI errors — a wrong purchase order or a missed compliance check that could prove costly — Jara Montes was direct: "Anything with meaningful financial or compliance exposure requires human approval before it executes — that's non-negotiable and baked into the architecture. Below those thresholds, the agent acts autonomously and logs everything." He added a point that reveals a subtler product insight: "Most procurement operations today are a black box — nobody has a clear picture of what's happening across the supplier tail. We make it legible." In other words, the transparency the AI agent provides may itself be a product — giving procurement leaders visibility they have never had into the long tail of supplier relationships that most enterprises simply ignore.

How Traza plugs into legacy enterprise systems without ripping them out

One of the recurring challenges for any enterprise AI startup is the integration question: How do you plug into the deeply entrenched, often decades-old technology stacks that large manufacturers and construction companies rely on? Traza's answer is to sit on top of existing systems rather than replace them.
"We connect via API or direct integration into whatever the customer already runs — ERPs, email, supplier portals. We have reach across more than 200 enterprise tools," Jara Montes said. "We don't rip out their system, we sit on top of them." The go-to-market motion mirrors this pragmatism. Instead of attempting a big-bang deployment, Traza runs a two-to-three-month proof of value focused on a single, specific workflow. Integrations are built at the key steps that matter for that particular use case, then expanded as the scope of the engagement grows. "We don't try to connect everything upfront — we compound integrations as we expand scope within each account," Jara Montes said. "And every integration we build compounds across customers too. Each new deployment makes the next one faster." Throughout the process, the company works side by side with the customer's team, managing complexity and helping them transition into a new way of operating. It is a notably high-touch approach for a company selling automation. The company is already working with large manufacturers and construction companies and says they are paying, though it declines to name them publicly. "We want to earn the right to grow inside each account, not land a pilot that goes nowhere," Jara Montes said. "That's how you build something that actually sticks in enterprise."

Traza bets that vertical depth in physical industry will beat horizontal AI platforms

Traza enters a market that is rapidly heating up. The leading AI procurement solutions include platforms from Coupa, Ivalua, SAP Ariba, Zip, Zycus, and Fairmarkit. Keelvar provides autonomous sourcing bots capable of launching RFQs, collecting bids, and recommending optimal awards, while Tonkean offers a no-code orchestration platform using NLP and generative AI to streamline procurement intake and tail-spend management.
Against this crowded field, Jara Montes draws a sharp distinction between horizontal automation tools and Traza's focus on physical industry. "We're built specifically for the physical industry, where supplier relationships, compliance requirements, and workflow complexity are categorically different from software procurement," he said. "A generic agent doesn't survive contact with how procurement actually works in manufacturing or construction. Specificity is the moat." The competitive dynamics with major incumbents are perhaps even more consequential. SAP Ariba, Coupa, and their peers have massive installed bases and deep enterprise relationships. Jara Montes frames their AI initiatives as surface-level additions to legacy architectures — but whether Traza can convert that framing into market share at scale, especially given the gravitational pull of existing vendor relationships, remains the central strategic question. Beneath Traza's product pitch sits a deeper strategic thesis about compounding data advantages. The company describes a two-layered learning architecture: at the agent level, Traza gets smarter across every deployment by absorbing supplier behavior patterns, RFQ response dynamics, pricing anomalies, and workflow edge cases. At the data level, each customer's information stays fully isolated. "What we're building is deep operational knowledge of how procurement actually runs in the physical industry — not how it's supposed to run according to an RFP, but how it really runs, with all the exceptions and workarounds," Jara Montes said. "That's extraordinarily hard to replicate if you're starting from scratch, and it gets harder to catch up with the more deployments we have." 
Three Spanish founders, one fellowship, and a plan to rewire industrial procurement

Traza was co-founded by three Spanish entrepreneurs — Silvestre Jara Montes, Santiago Martínez Bragado, and Sergio Ayala Miñano — who came to the United States through the Exponential Fellowship, a program that brings Europe's top technical talent to the U.S. to build companies at the frontier of AI. Their backgrounds span both sides of the problem Traza is trying to solve. Jara Montes worked at Amazon and CMA CGM — one of the world's largest shipping groups — at the intersection of operations strategy and supply chain optimization. Martínez Bragado built and deployed agentic AI at Clarity AI before joining Concourse (backed by a16z, Y Combinator, and CRV) as Founding AI Engineer. Ayala Miñano comes from StackAI, one of the fastest-growing enterprise AI platforms in San Francisco, where he was a Founding Engineer. None of the founders carry the title of Chief Procurement Officer, a gap that the company acknowledges has occasionally surfaced in buyer conversations. Jara Montes's response is characteristically direct: "Our work is the answer. The results we're generating move that conversation quickly." He noted that the company has senior procurement leaders serving as advisors who have run procurement at the scale of its target customers. Base10 Partners, the lead investor, is a San Francisco-based venture capital firm that invests in companies automating sectors of what it calls "the Real Economy." Its portfolio includes Notion, Figma, Nubank, Stripe, and Aurora Solar. Rexhi Dollaku, General Partner at Base10, framed the investment in emphatic terms: "Supply chain and procurement is one of the largest, most underautomated markets in the Real Economy. AI agents are finally capable of doing the work, not just assisting with it." The supporting cast of investors reinforces the immigrant-founder narrative.
Clara Ventures — founded by the executives behind Olapic's $130 million exit — specifically invests in driven foreign founders building in the United States, and Agell adds operational credibility from building Chartboost into a $100 million revenue business in under three years as a Spanish founder in Silicon Valley.

Why $2.1 million may stretch further than it looks for an enterprise AI startup

At $2.1 million, this is a deliberately small round for a company selling to large enterprises with notoriously long procurement cycles. Jara Montes argues it goes further than it appears for structural reasons. "We leverage Europe as a tech talent hub, where we have a deep network of exceptional engineers — people who want to work at the frontier of AI but have far fewer opportunities to do so than their US counterparts," he said. "We're not just lean — we're built to outcompete on capital efficiency while others are burning through runway trying to hire in San Francisco." The go-to-market motion is designed for speed to revenue. Proofs of value are scoped, time-bounded, and converted to paying partnerships. The company says it is not running 18-month enterprise sales cycles before seeing a dollar. The milestone for the next raise is explicit: more paying customers, meaningfully stronger annual recurring revenue, and a repeatable sales motion that makes the seed round, as Jara Montes put it, "an obvious conversation." Looking ahead, he outlined an ambitious three-year target: 20 to 30 large industrial enterprises in the U.S. and Europe running Traza across their procurement operations, with over a billion dollars in procurement spend flowing through the platform.
Whether that vision is achievable depends on several interlocking variables — the pace at which AI agent capabilities continue to improve, the speed of enterprise adoption in a traditionally conservative buyer segment, and Traza's ability to navigate the competitive gauntlet of incumbents adding AI features and well-funded startups attacking adjacent workflows. But the underlying math may be on Traza's side. In procurement, the money that disappears does not look like waste. It vanishes into inefficiency, missed obligations, unmanaged risks, and forgotten commitments — the kind of silent losses that no one tracks because no one has the bandwidth to track them. The traditional mandate of procurement, as currently configured, ends where the value gap begins: at signature. Traza is building an AI workforce that picks up where the humans leave off. For an industry that has spent decades losing $55 million at a time to the back office nobody watches, that might be precisely the point.
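The value-leakage arithmetic the article leans on (11% of contracted spend, hence $55 million on a $500 million book) is easy to sanity-check. A minimal sketch: the figures are the article's, while the function name and structure are ours, not anything Traza or WorldCC publishes.

```python
def post_signature_leakage(contracted_spend: float, leakage_rate: float = 0.11) -> float:
    """Estimate annual contract value lost after signature.

    The 0.11 default is the average post-signature value leakage
    reported by World Commerce & Contracting and Ironclad.
    """
    return contracted_spend * leakage_rate

# The article's worked example: $500M in annual contracted spend.
loss = post_signature_leakage(500_000_000)
print(f"${loss:,.0f} lost per year")  # → $55,000,000 lost per year
```

Scaling the same rate across a portfolio of contracts is a single multiplication per contract, which is why the article treats the gap as a recoverable, quantifiable pool rather than diffuse waste.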

Editor's pick
en.oninvest.com · 6 days ago

Who will be out of work and how much money is in AI: six questions from the Stanford report – Oninvest

AI is conquering the market, but it still has a long way to go to master the physical world, says the report. Stanford University has published the AI Index Report, a summary of how AI is changing the economy, the labor market, and everyday life. In 423 pages, the authors analyze data for the year 2025 and conclude that artificial intelligence has rapidly become a mass technology, and money, infrastructure, and influence are concentrating even faster around a narrow group of players. This year's report centers on the nearly disappearing gap between the U.S. and China in terms of model quality and the rapid growth of the AI economy. The report's authors write that since 2013, investment in AI has grown…

Editor's pick · Financial Services
Daily Brew · 7 days ago

OpenAI has bought AI personal finance startup Hiro

OpenAI has acquired Hiro, a startup specializing in AI-driven personal finance tools.

Editor's pick · Technology
Daily Brew · 5 days ago

Humyn Labs Invests $20M in Global AI Data Expansion

Humyn Labs invests $20 million to expand its infrastructure for Physical AI, targeting an AI training data market projected to reach $25 billion by 2030.

Editor's pick · Technology
finance.yahoo.com · 6 days ago

nEye.ai Secures $80 Million Series C to Scale Optical Circuit Switching for AI Infrastructure

Investment led by Sutter Hill Ventures enables next-generation optical interconnects for AI data centers. SANTA CLARA, Calif., April 14, 2026--(BUSINESS WIRE)--nEye.ai, a pioneer in integrated optical interconnects, today announced it has raised $80 million in Series C financing led by Sutter Hill Ventures with participation from existing investors including Alphabet’s independent growth fund, CapitalG; Microsoft’s Venture Fund, M12; and Socratic Partners, among others. This latest infusion of capital brings nEye’s total funding to $152 million…

Editor's pick · Technology
futuredigestnews.substack.com · 6 days ago

You’re Paying $200/Month for AI. The Same Models Are Now Free.

Apr 14, 2026 ∙ Paid. Last month, I got my Anthropic invoice. $200. Again. Claude Max 20x. I’ve been on it since January. Before that, I was paying $100 for Max 5x, plus $20 for ChatGPT Plus on the side, plus $20 for Perplexity Pro…

Editor's pick · Manufacturing & Industrials
benzinga.com · 6 days ago

AI Arms Race Triggers $143 Billion Explosion In Chip Gear Spending - Applied Materials (NASDAQ:AMAT) - Benzinga


Editor's pick · Technology
Siliconrepublic · 5 days ago

Dublin start-up Otel AI raises €2m to expand hotel AI platform

The start-up plans to use the funding to expand its product and engineering team, and accelerate expansion into the UK, Europe, the US and the UAE.

AI Macroeconomics (2 articles)

Labor & Society

41 articles
AI & Employment (10 articles)
Editor's pick · Professional Services
Fortune · 5 days ago

The org chart isn’t ready: How AI exposed the hidden crisis inside the American corporation

A landmark KPMG index finds boards demanding transformation, execs spending billions on tech, and the people caught in the middle burning out.

Editor's pick · Technology
Guardian · 5 days ago

Snap Inc blames AI as it lays off 1,000 workers

Cuts by Snapchat’s parent company come in response to a declining stock price and pressure from an activist investor Snapchat’s parent company plans to lay off 16% of its employees, around 1,000 people, citing “rapid advancements in artificial intelligence”, the social media company told staff on Wednesday in an internal memo. The staff reduction is part of a wave of tech industry layoffs in the past year, with many firms blaming AI for the cuts. Snap Inc’s layoffs follow demands last month from Irenic Capital Management, an activist investor whose portfolio manager wrote a letter to the Snap Inc CEO, Evan Spiegel, calling on him to reduce costs and headcount while criticizing the company’s current strategy. In Spiegel’s memo to staff, he claimed that the layoffs would move Snap towards profitability and suggested that artificial intelligence could fill the lack of human labor.

Editor's pick · PAYWALL
NYT · 5 days ago

What Is ‘Jagged Intelligence’ and How Can It Reframe the AI Debate?

A.I. has always been compared to human intelligence, but that may not be the right way to think about it. What it does well can help predict what jobs it may replace.

Editor's pick · Professional Services
Arxiv · 5 days ago

Identifying and Mitigating Gender Cues in Academic Recommendation Letters: An Interpretability Case Study

arXiv:2604.12337v1 Announce Type: cross Abstract: Letters of recommendation (LoRs) can carry patterns of implicitly gendered language that can inadvertently influence downstream decisions, e.g. in hiring and admissions. In this work, we investigate the extent to which Transformer-based encoder models as well as Large Language Models (LLMs) can infer the gender of applicants in academic LoRs submitted to a U.S. medical-residency program after explicit identifiers like names and pronouns are de-gendered. While using three models (DistilBERT, RoBERTa, and Llama 2) to classify the gender of anonymized and de-gendered LoRs, significant gender leakage was observed, as evident from up to 68% classification accuracy. Text interpretation methods, like TF-IDF and SHAP, demonstrate that certain linguistic patterns are strong proxies for gender; e.g. "emotional" and "humanitarian" are commonly associated with LoRs from female applicants. As an experiment in creating truly gender-neutral LoRs, these implicit gender cues were removed, resulting in a drop of up to 5.5% accuracy and 2.7% macro $F_1$ score on re-training the classifiers. However, applicant gender prediction still remains better than chance. In this case study, our findings highlight that 1) LoRs contain gender-identifying cues that are hard to remove and may activate bias in decision-making, and 2) while our technical framework may be a concrete step toward fairer academic and professional evaluations, future work is needed to interrogate the role that gender plays in LoR review. Taken together, our findings motivate upstream auditing of evaluative text in real-world academic letters of recommendation as a necessary complement to model-level fairness interventions.

Editor's pick · PAYWALL
NYT · 5 days ago

That Meeting You Hate May Keep A.I. From Stealing Your Job

As artificial intelligence makes many tasks easier, the human work of cajoling, arm-twisting and reassuring appears to be rising in importance.

Editor's pick · Professional Services
Fortune · 5 days ago

A16z’s Ben Horowitz sees ‘AI anxiety’ consuming Silicon Valley founders. Workers’ fear of something else is killing adoption

The a16z cofounder said the “laws of physics” have changed for founders—and “AI anxiety” is real. Workers are experiencing something much darker.

Editor's pick
ca.finance.yahoo.com · 6 days ago

Workers who want AI training the most are using it the least

When it comes to using generative AI in the workplace, adoption skews highest among currently employed college graduates, big earners, and those with full-time jobs, according to a new analysis from New York Federal Reserve researchers. That raises questions about whether artificial intelligence might worsen labor market inequities — especially given that 62% of workers believe AI will lead to higher unemployment, the analysis found. Workers who earn less than $50,000 have an AI adoption rate of 15.9%, for example, while those who earn over $200,000 a year have an adoption rate of 66.3%…

Editor's pick · Professional Services
prnewswire.com · 6 days ago

New Research Finds Workers Are Leveraging AI for Career Mobility as Employers Struggle to Keep Pace

New University of Phoenix Career Institute® Career Optimism Index® study points to an emerging shift in workforce power dynamics: while employees "job hug" in a stabilizing but constrained labor market, many are quietly using AI to build skills, boost confidence, and explore career mobility – creating new talent retention risks for employers. PHOENIX, April 14, 2026 /PRNewswire/ -- University of Phoenix Career Institute® today released its sixth annual Career Optimism Index® recurring national workforce research study of 5,000 U.S. working adults and 1,000 employers fielded January 21–February 6, 2026. The study found that while workers appear to be "job hugging" in a stabilizing labor market…

Editor's pick · Professional Services
medium.com · 6 days ago

AI Is No Longer Optional at Work. For a while, AI felt like an advantage. | by Yevgeniy Maliyev | Apr, 2026 | Medium

For a while, AI felt like an advantage. Something extra. Something useful. Something curious people experimented with before everyone else caught up. If you used it well, you moved faster. You wrote quicker. You summarized better. You researched more efficiently. You automated things other people still did manually. That was the first phase. In that phase, AI looked like a skill. Now it’s starting to look like something else. A requirement. And that changes everything. Because the moment a tool stops being optional, it stops feeling like a personal advantage and starts becoming part of the new baseline…

Editor's pick
uctoday.com · 6 days ago

Enterprise AI Adoption Is Failing. Here's Why - UC Today

44% of Gen Z Workers Are Sabotaging Your Enterprise AI Rollout. The Problem Isn’t Gen Z. Worker resistance is undermining enterprise AI strategies at scale, and the data suggests the real failure is in how organisations are deploying these tools, not who they're deploying them to. Published: April 14, 2026. Almost every enterprise AI deployment in 2025 and 2026 has started the same way: executives sign off on the investment, the tools get provisioned, and the announcement goes out. What happens after that is a different story. A survey of 1,200 employees and 1,200 C-suite executives by Writer and Workplace Intelligence found…

AI Ethics & Safety10 articles
Editor's pick · PAYWALL · Government & Public Sector
washingtonpost.com · 5 days ago

Woman jailed for 6 months after facial recognition flagged her as a ...

HERMITAGE, Pa. — To investigators, the case against Kimberlee Williams appeared straightforward. “It’s very obvious it’s you,” an officer in Montgomery County, Maryland, said to Williams, who was handcuffed to a table in the police department.

Editor's pick · Technology
Guardian · 5 days ago

‘Misogyny with a marketing budget’: UK AI firm accused of sexist advert

Narwhal Labs ad for ‘AI employee’ contains strapline: ‘She outworks everyone. And she’ll never ask for a raise’ A British AI company that recently secured millions of pounds of investment has been accused of running a misogynistic and sexist advertising campaign. The Advertising Standards Authority (ASA) has received at least seven complaints about the campaign by Narwhal Labs, which includes an advert depicting a woman next to the strapline: “She outworks everyone. And she’ll never ask for a raise.”

Editor's pickPAYWALLTechnology
washingtonpost.com· 6 days ago

Attempted fire-bombing has tech titans worried about AI backlash

By Gerrit De Vynck. SAN FRANCISCO — In the early morning hours on Friday, a 20-year-old man threw a firebomb at the home of OpenAI CEO Sam Altman, then tried to set fire to the company’s headquarters, according to federal charges filed Monday.

Editor's pickTechnology
Theregister· 5 days ago

Bad teacher bots can leave hidden marks on model students

Study finds LLMs will smuggle biases into others even if they're scrubbed from training data. New research warns about the dangers of teaching LLMs on the output of other models, showing that undesirable traits can be transmitted "subliminally" from teacher to student, even when they are scrubbed from training data.…

Editor's pickFinancial Services
🎓 $10K AI college· 6 days ago

AI's threat to your money

AI's rapid advance is increasing risks of large-scale financial infrastructure attacks and targeted fraud. Experts suggest rethinking how money is stored, potentially using strategies like cold wallets.

Editor's pick
Arxiv· 5 days ago

Deepfakes at Face Value: Image and Authority

arXiv:2604.12490v1 Announce Type: new Abstract: Deepfakes are synthetic media that superimpose or generate someone's likeness onto pre-existing sound, images, or videos using deep learning methods. Existing accounts of the wrongs involved in creating and distributing deepfakes focus on the harms they cause or the non-normative interests they violate. However, these approaches do not explain how deepfakes can be wrongful even when they cause no harm or set back any other non-normative interest. To address this issue, this paper identifies a neglected reason why deepfakes are wrong: they can subvert our legitimate interests in having authority over the permissible uses of our image and the governance of our identity. We argue that deepfakes are wrong when they usurp our authority to determine the provenance of our own agency by exploiting our biometric features as a generative resource. In particular, we have a specific right against the algorithmic conscription of our identity. We refine the scope of this interest by distinguishing permissible forms of appropriation, such as artistic depiction, from wrongful algorithmic simulation.

Editor's pickTechnology
Arxiv· 5 days ago

BiasIG: Benchmarking Multi-dimensional Social Biases in Text-to-Image Models

arXiv:2604.11934v1 Announce Type: new Abstract: Text-to-Image (T2I) generative models have revolutionized content creation, yet they inherently risk amplifying societal biases. While sociological research provides systematic classifications of bias, existing T2I benchmarks largely conflate these nuances or focus narrowly on occupational stereotypes, leaving the multi-dimensional nature of generative bias inadequately measured. In this paper, we introduce BiasIG, a unified benchmark that quantifies social biases across a curated dataset of 47,040 prompts. Grounded in sociological and machine ethics frameworks, BiasIG disentangles biases across 4 dimensions to enable fine-grained diagnosis. To facilitate scalable and reliable evaluation, we propose a fully automated pipeline powered by a fine-tuned multi-modal large language model, achieving high alignment accuracy comparable to human experts. Extensive experiments on 8 T2I models and 3 debiasing methods not only validate BiasIG as a robust diagnostic tool, but also reveal critical insights: interventions on protected attributes often trigger unintended confounding effects on unrelated demographics, and debiasing methods exhibit a persistent tendency toward discrimination rather than mere ignorance. Our work advocates for a precise, taxonomy-driven approach to fairness in AIGC, providing a theoretical framework for using BiasIG's metrics as feedback signals in future closed-loop mitigation. The benchmark is openly available at https://github.com/Astarojth/BiasIG.

Editor's pick
Arxiv· 5 days ago

Detecting and Enhancing Intellectual Humility in Online Political Discourse

arXiv:2604.12821v1 Announce Type: new Abstract: Intellectual humility (IH), a recognition of one's own intellectual limitations, can reduce polarization and foster more understanding across lines of difference. Yet little work explores how IH can be systematically defined, measured, evaluated, and enhanced in spaces that often lack it the most: online political discussions. In this paper, we seek to bridge these gaps by exploring two questions: 1) how might preexisting levels of IH influence future expressions of IH during online political discourse? and 2) can online interventions enhance IH across different political topics and conversational environments? To pursue these questions, we define a codebook characterizing different dimensions of IH and intellectual arrogance (IA) and have researchers use it to annotate several hundred Reddit posts, which we then use to develop and validate a classifier to support IH analysis at scale. These tools subsequently enable two key contributions: i) an observational data analysis of how IH varies across different political discussions on Reddit, which reveals that environments with more (or less) IH tend to contain future posts of a similar nature, and ii) a randomized controlled trial evaluating strategies for nudging discussion participants to demonstrate more IH in their posts, which reveals the possibility of enhancing IH in online discussions across a range of contentious topics. Our findings highlight the possibility of measuring and increasing IH online without necessarily reducing engagement.

Editor's pickPAYWALL
Bloomberg· 5 days ago

The Dark Art of AI Marketing

When tech CEOs like Sam Altman and Dario Amodei acknowledge the risks of AI, they’re selling you a product, Bloomberg Opinion columnist Parmy Olson says. (Source: Bloomberg)

Editor's pickTechnology
Arxiv· 5 days ago

Characterizing Resource Sharing Practices on Underground Internet Forum Synthetic Non-Consensual Intimate Image Content Creation Communities

arXiv:2604.12190v1 Announce Type: new Abstract: Many malicious actors responsible for disseminating synthetic non-consensual intimate imagery (SNCII) operate within internet forums to exchange resources, strategies, and generated content across multiple platforms. Technically-sophisticated actors gravitate toward certain communities (e.g., 4chan), while lower-sophistication end-users are more active on others (e.g., Reddit). To characterize key stakeholders in the broader ecosystem, we perform an integrated analysis of multiple communities, analyzing 282,154 4chan comments and 78,308 Reddit submissions spanning 165 days between June and November 2025 to characterize involved actors, actions, and resources. We find: (a) that users with differing levels of technical sophistication employ and share a wide range of primary resources facilitating SNCII content creation as well as numerous secondary resources facilitating dissemination; and (b) that knowledge transfer between experts and newcomers facilitates propagation of these illicit resources. Based on our empirical analysis, we identify gaps in existing SNCII regulatory infrastructure and synthesize several critical intervention points for bolstering deterrence.

AI Policy & Regulation15 articles
Editor's pickPAYWALLTechnology
FT· 5 days ago

Big Tech’s $300mn election war chest rattles Democrats

Pro-industry campaign groups deploy millions amid growing public support for tighter regulation

Editor's pickTechnology
news.microsoft.com· 6 days ago

Why Digital Sovereignty Is About Control, Not Location


Editor's pickPAYWALL
Bloomberg· 5 days ago

Andreessen, Horowitz Boost AI Super PAC Cash to Over $50 Million

Venture capitalists Marc Andreessen and Ben Horowitz poured $25 million into a pro-artificial intelligence super political action committee, boosting the industry’s war chest ahead of the November midterm elections.

Editor's pickGovernment & Public Sector
ad-hoc-news.de· 6 days ago

Biden's AI Strategy: From Safety to National Dominance

The US government wants to use a new federal law to unify America's fragmented, state-by-state AI regulation and to secure the country's technological leadership through massive infrastructure investments. The goal: cementing America's technological dominance. ## A Unified Law Instead of a Patchwork. On 20 March 2026…

Editor's pickTechnology
Arxiv· 5 days ago

The Enforcement and Feasibility of Hate Speech Moderation on Twitter

arXiv:2604.12289v1 Announce Type: new Abstract: Online hate speech is associated with substantial social harms, yet it remains unclear how consistently platforms enforce hate speech policies or whether enforcement is feasible at scale. We address these questions through a global audit of hate speech moderation on Twitter (now X). Using a complete 24-hour snapshot of public tweets, we construct representative samples comprising 540,000 tweets annotated for hate speech by trained annotators across eight major languages. Five months after posting, 80% of hateful tweets remain online, including explicitly violent hate speech. Such tweets are no more likely to be removed than non-hateful tweets, with neither severity nor visibility increasing the likelihood of removal. We then examine whether these enforcement gaps reflect technical limits of large-scale moderation systems. While fully automated detection systems cannot reliably identify hate speech without generating large numbers of false positives, they effectively prioritize likely violations for human review. Simulations of a human-AI moderation pipeline indicate that substantially reducing user exposure to hate speech is economically feasible at a cost below existing regulatory penalties. These results suggest that the persistence of online hate cannot be explained by technical constraints alone but also reflects institutional choices in the allocation of moderation resources.

Editor's pickGovernment & Public Sector
Arxiv· 5 days ago

PolicyLLM: Towards Excellent Comprehension of Public Policy for Large Language Models

arXiv:2604.12995v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly integrated into real-world decision-making, including in the domain of public policy. Yet, their ability to comprehend and reason about policy-related content remains underexplored. To fill this gap, we present PolicyBench, the first large-scale cross-system benchmark (US-China) evaluating policy comprehension, comprising 21K cases across a broad spectrum of policy areas, capturing the diversity and complexity of real-world governance. Following Bloom's taxonomy, the benchmark assesses three core capabilities: (1) Memorization: factual recall of policy knowledge, (2) Understanding: conceptual and contextual reasoning, and (3) Application: problem-solving in real-life policy scenarios. Building on this benchmark, we further propose PolicyMoE, a domain-specialized Mixture-of-Experts (MoE) model with expert modules aligned to each cognitive level. The proposed model demonstrates stronger performance on application-oriented policy tasks than on memorization or conceptual understanding, and yields the highest accuracy on structured reasoning tasks. Our results reveal key limitations of current LLMs in policy understanding and suggest paths toward more reliable, policy-focused models.

Editor's pickTechnology
Arxiv· 5 days ago

The A-R Behavioral Space: Execution-Level Profiling of Tool-Using Language Model Agents in Organizational Deployment

arXiv:2604.12116v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed as tool-augmented agents capable of executing system-level operations. While existing benchmarks primarily assess textual alignment or task success, less attention has been paid to the structural relationship between linguistic signaling and executable behavior under varying autonomy scaffolds. This study introduces an execution-layer behavioral measurement approach based on a two-dimensional A-R space defined by Action Rate (A) and Refusal Signal (R), with Divergence (D) capturing coordination between the two. Models are evaluated across four normative regimes (Control, Gray, Dilemma, and Malicious) and three autonomy configurations (direct execution, planning, and reflection). Rather than assigning aggregate safety scores, the method characterizes how execution and refusal redistribute across contextual framing and scaffold depth. Empirical results show that execution and refusal constitute separable behavioral dimensions whose joint distribution varies systematically across regimes and autonomy levels. Reflection-based scaffolding often shifts configurations toward higher refusal in risk-laden contexts, but redistribution patterns differ structurally across models. The A-R representation makes cross-sectional behavioral profiles, scaffold-induced transitions, and coordination variability directly observable. By foregrounding execution-layer characterization over scalar ranking, this work provides a deployment-oriented lens for analyzing and selecting tool-enabled LLM agents in organizational settings where execution privileges and risk tolerance vary.

Editor's pickGovernment & Public Sector
executivegov.com· 6 days ago

GAO Finds AI Acquisition Challenges in Government

Federal agencies face barriers in AI procurement, from talent gaps to cost complexity, according to a new report from the Government Accountability Office. Federal agencies are expanding their use of artificial intelligence, but a new Government Accountability Office report identified key challenges in buying and deploying the technology. Leaders from GAO, the Pentagon and other agencies will explore the challenges in deploying AI in federal and mission-critical systems at the Potomac Officers Club’s 2026 Digital Transformation Summit on April 22.

Editor's pickFinancial Services
Artificial Intelligence Newsletter | April 15, 2026· 6 days ago

Insurers seek to exclude their risk-assessment models from EU's AI Act

The European Insurance and Occupational Pensions Authority has requested that insurers' risk-assessment models be excluded from the EU's AI Act, arguing they are not autonomous.

Editor's pickGovernment & Public Sector
bradleytusk.substack.com· 6 days ago

Realistically, Where Can and Can’t You Regulate AI?

Apr 14, 2026. The title of this column may be misleading because realistic and thoughtful are typically not the two words that drive most legislation and policy (there’s only one phrase that matters 99% of the time — politically advantageous). However, for the first time since I’ve worked in politics, we have something that is societally, technologically and economically transformational, has the ability to solve existential problems and also the ability to destroy humanity itself. Clearly, thoughtful regulation is needed. But because AI is also so new and is developing so quickly, exactly what you even could regulate right now in a way that makes sense and does more than propel a tweet and a press release is unclear at best.

Editor's pickGovernment & Public Sector
Arxiv· 5 days ago

Cross-Cultural Simulation of Citizen Emotional Responses to Bureaucratic Red Tape Using LLM Agents

arXiv:2604.12545v1 Announce Type: cross Abstract: Improving policymaking is a central concern in public administration. Prior human subject studies reveal substantial cross-cultural differences in citizens' emotional responses to red tape during policy implementation. While LLM agents offer opportunities to simulate human-like responses and reduce experimental costs, their ability to generate culturally appropriate emotional responses to red tape remains unverified. To address this gap, we propose an evaluation framework for assessing LLMs' emotional responses to red tape across diverse cultural contexts. As a pilot study, we apply this framework to a single red-tape scenario. Our results show that all models exhibit limited alignment with human emotional responses, with notably weaker performance in Eastern cultures. Cultural prompting strategies prove largely ineffective in improving alignment. We further introduce RAMO, an interactive interface for simulating citizens' emotional responses to red tape and for collecting human data to improve models. The interface is publicly available at https://ramo-chi.ivia.ch.

Editor's pickGovernment & Public Sector
itp.net· 6 days ago

White House Ramps up Efforts to Counter Risks from Powerful AI Tools as Models Grow More Capable - ITP.net

The White House is accelerating efforts to address emerging risks from powerful artificial intelligence systems, as the next wave of advanced AI models raises concerns around cybersecurity and critical infrastructure protection. Reports indicate that senior U.S. officials are working across agencies and with private sector leaders to identify vulnerabilities that could be exploited by increasingly capable AI tools. At the centre of this push is National Cyber Director Sean Cairncross, who is leading a coordinated government response aimed at strengthening defences across key systems, including energy, finance, and digital infrastructure. The urgency reflects a broader shift: AI models are evolving from tools that assist with tasks…

Editor's pickGovernment & Public Sector
Daily Brew· 5 days ago

RED ALERT: Tennessee is about to make building chatbots a Class A felony

Proposed legislation in Tennessee could classify the development of certain chatbots as a Class A felony, carrying severe prison sentences.

Editor's pickGovernment & Public Sector
Siliconrepublic· 6 days ago

Anthropic’s Mythos a game-changer, NCSC chief tells Oireachtas

'We’re in a race whether we choose to accept it or not,' Richard Browne said.

Editor's pickPAYWALLMedia & Entertainment
washingtonpost.com· 6 days ago

Opinion | Can you spot a fake political ad? AI is making it harder.

By Darrell M. West, a senior fellow at the Brookings Institution’s Center for Technology Innovation and co-editor in chief of TechTank.

AI Skills & Education6 articles
Editor's pickEducation
Arxiv· 5 days ago

Mathematics Teachers Interactions with a Multi-Agent System for Personalized Problem Generation

arXiv:2604.12066v1 Announce Type: new Abstract: Large language models can increasingly adapt educational tasks to learners' characteristics. In the present study, we examine a multi-agent teacher-in-the-loop system for personalizing middle school math problems. The teacher enters a base problem and desired topic, the LLM generates the problem, and then four AI agents evaluate the problem using criteria that each specializes in (mathematical accuracy, authenticity, readability, and realism). Eight middle school mathematics teachers created 212 problems in ASSISTments using the system and assigned these problems to their students. We find that both teachers and students wanted to modify the fine-grained personalized elements of the real-world context of the problems, signaling issues with authenticity and fit. Although the agents detected many issues with realism as the problems were being written, there were few realism issues noted by teachers and students in the final versions. Issues with readability and mathematical hallucinations were also somewhat rare. Implications for multi-agent systems for personalization that support teacher control are given.

Editor's pickEducation
Arxiv· 5 days ago

GoodPoint: Learning Constructive Scientific Paper Feedback from Author Responses

arXiv:2604.11924v1 Announce Type: new Abstract: While LLMs hold significant potential to transform scientific research, we advocate for their use to augment and empower researchers rather than to automate research without human oversight. To this end, we study constructive feedback generation, the task of producing targeted, actionable feedback that helps authors improve both their research and its presentation. In this work, we operationalize the effectiveness of feedback along two author-centric axes: validity and author action. We first curate GoodPoint-ICLR, a dataset of 19K ICLR papers with reviewer feedback annotated along both dimensions using author responses. Building on this, we introduce GoodPoint, a training recipe that leverages success signals from author responses through fine-tuning on valid and actionable feedback, together with preference optimization on both real and synthetic preference pairs. Our evaluation on a benchmark of 1.2K ICLR papers shows that a GoodPoint-trained Qwen3-8B improves the predicted success rate by 83.7% over the base model and sets a new state-of-the-art among LLMs of similar size in feedback matching on a golden human feedback set, even surpassing Gemini-3-flash in precision. We further validate these findings through an expert human study, demonstrating that GoodPoint consistently delivers higher practical value as perceived by authors.

Editor's pickProfessional Services
Siliconrepublic· 5 days ago

Do data and AI talent needs conflict with a workforce seeking stability?

A new report shows that many organisations plan to increase the size of their data teams – however, many professionals say they are unlikely to change employers in 2026. Read more: Do data and AI talent needs conflict with a workforce seeking stability?

Editor's pickEducation
Arxiv· 5 days ago

Design and Deployment of a Course-Aware AI Tutor in an Introductory Programming Course

arXiv:2604.11836v1 Announce Type: new Abstract: Large Language Models (LLMs) have become part of how students solve programming tasks, offering immediate explanations and even full solutions. Previous work has highlighted that novice programmers often heavily rely on LLMs, thereby neglecting their own problem-solving skills. To address this challenge, we designed a course-specific online Python tutor that provides retrieval-augmented, course-aligned guidance without generating complete solutions. The tutor integrates a web-based programming environment with a conversational agent that offers hints, Socratic questions, and explanations grounded in course materials. Students used the system during self-study to work on homework assignments, and the tutor also supported questions about the broader course material. We collected structured student feedback and analyzed interaction logs to investigate how they engaged with the tutor's guidance. We observed that students used the tutor primarily for conceptual understanding, implementation guidance, and debugging, and perceived it as a course-aligned, context-aware learning support that encourages engagement rather than direct solution copying.

Editor's pickEducation
Daily Brew· 6 days ago

Q&A: MIT SHASS and the future of education in the age of AI

An exploration of how the School of Humanities, Arts, and Social Sciences at MIT is adapting educational frameworks to address the challenges and opportunities presented by AI.

Editor's pickEducation
hiringlab.org· 6 days ago

Workers Want Training but Employers Don’t Always Deliver. Can Policy Help? - Indeed Hiring Lab

Workers across countries see skill development as essential, but many feel employers treat it as less of a priority. Recent evidence from Spain shows that incentivizing companies to offer permanent contracts can lead them to offer more training. By Carlos Victoria Lanzón, Jonas Jessen & Pawel Adrjan, April 14, 2026. Key points: Workers worldwide say skills development is essential, but many also say they feel employers treat it as less of a priority…

Technology & Infrastructure

34 articles
AI Hardware3 articles
Editor's pickTechnology
financial-world.org· 6 days ago

Intel pushes advanced packaging to win a bigger role in the AI chip market

Intel’s attempt to rebuild relevance in the AI supply chain may depend less on winning the most glamorous manufacturing race and more on selling something subtler: a place in the middle of everyone else’s. That is the strategic significance of the company’s advanced packaging push, now centered in part on a once-dormant New Mexico fab that has been revived with billions of dollars in investment and support from the US CHIPS Act. The wager is notable because packaging sits at the intersection of technology, customer trust, and industrial policy. For Intel, it offers a route back into growth that does not require immediately matching Taiwan Semiconductor Manufacturing Co. at the f…

Editor's pickTechnology
semiconductor-digest.com· 6 days ago

Lighting the Path for AI: Co‑Packaged Optics Moves from Promise to Production - Semiconductor Digest

Artificial intelligence has pushed the semiconductor industry into a new performance regime—one where data movement, not transistor scaling, has become the defining constraint. As trillion‑parameter models scale across thousands of GPUs, the limits of electrical interconnects are no longer theoretical; they are active barriers to system performance, energy efficiency, and operational cost. At the 22nd IMAPS Device Packaging Conference (DPC 2026), held March 2–5, 2026, in Phoenix, Arizona, two major presentations—a keynote from Sandeep Razdan, Director of Process Development Engineering at NVIDIA, and a Global Business Council Plenary Session talk from Jean Trewhella, Director of SiPh Packaging Development at G…

AI Infrastructure & Compute15 articles
Editor's pickTechnology
Bebeez· 5 days ago

Savills: European data centers to drive surge in logistics space demand over the next three years

Development in European data center markets will create an additional 8.462 million sq ft (786,130 sqm) of logistics space over the next three years, according to real estate services firm Savills. The firm said it had observed recent trends in logistics and data center activity in Ireland and found that, in Dublin, occupiers connected to […]

Editor's pickTechnology
Artificial Intelligence Newsletter | April 15, 2026· 6 days ago

South Korea draws five proposals for $1.4 billion GPU infrastructure push

South Korea's science ministry received five proposals for a 2.08 trillion won government project to build and operate advanced GPU infrastructure to support its AI ambitions.

Editor's pickPAYWALLEnergy & Utilities
washingtonpost.com· 5 days ago

Maine lawmakers pass nation's first statewide ban on large data centers

Legislators in Maine on Tuesday passed the nation’s first statewide ban on large data centers, part of a growing backlash to the energy-intensive facilities that fuel the rise of artificial intelligence.

Editor's pickTechnology
Guardian· 6 days ago

Amazon to buy satellite firm Globalstar for $11.57bn in challenge to Musk’s Starlink

Deal, subject to regulatory approval, would give the Bezos firm access to Globalstar’s network of two dozen satellites. Amazon said on Tuesday it would acquire the satellite company in an $11.57bn deal, bolstering its own fledgling space business as it looks to take on its bigger, Elon Musk-led rival Starlink. The deal gives Amazon access to Globalstar’s network of two dozen satellites, boosting the tech giant’s ambitions to challenge the SpaceX unit, which currently has about 10,000 units in orbit.

Editor's pickTelecommunications
Theregister· 5 days ago

Not all networks can handle AI traffic – and experts are sounding alarms

Y'all been focusing on compute and forgot about how the data moves around. AI is reshaping the demands on network infrastructure, and many organizations are not prepared – including some of the so-called neocloud providers offering AI services.…

Editor's pickTechnology
Arxiv· 5 days ago

Spatial Atlas: Compute-Grounded Reasoning for Spatial-Aware Research Agent Benchmarks

arXiv:2604.12102v1 Announce Type: new Abstract: We introduce compute-grounded reasoning (CGR), a design paradigm for spatial-aware research agents in which every answerable sub-problem is resolved by deterministic computation before a language model is asked to generate. Spatial Atlas instantiates CGR as a single Agent-to-Agent (A2A) server that handles two challenging benchmarks: FieldWorkArena, a multimodal spatial question-answering benchmark spanning factory, warehouse, and retail environments, and MLE-Bench, a suite of 75 Kaggle machine learning competitions requiring end-to-end ML engineering. A structured spatial scene graph engine extracts entities and relations from vision descriptions, computes distances and safety violations deterministically, then feeds computed facts to large language models, thereby avoiding hallucinated spatial reasoning. Entropy-guided action selection maximizes information gain per step and routes queries across a three-tier frontier model stack (OpenAI + Anthropic). A self-healing ML pipeline with strategy-aware code generation, a score-driven iterative refinement loop, and a prompt-based leak audit registry round out the system. We evaluate across both benchmarks and show that CGR yields competitive accuracy while maintaining interpretability through structured intermediate representations and deterministic spatial computations.

Editor's pickTechnology
Fortune· 5 days ago

The Bezos-Musk space rivalry is shooting for the moon and the winner will not just dominate the cosmos—but the future of AI infrastructure

SpaceX and Blue Origin are racing to land on the moon, and both have filed plans to put AI satellites into orbit.

Editor's pickTechnology
Bebeez· 5 days ago

CamGraPhIC lands €211 million to tackle AI’s data bottleneck – in one of Italy’s…

Pisa-based CamGraPhIC, the research arm of 2D Photonics, has received European Commission approval for €211 million in Italian state funding to build a new class of optical technology designed to address how data moves inside advanced computing systems. Today's announcement is reportedly one of the largest single public investments ever […]

Editor's pickTechnology
algeriatech.news· 6 days ago

NVIDIA GB300 NVL72: 72 GPUs, 1.44 Exaflops in One Rack

Published April 14, 2026. Key takeaways: NVIDIA’s GB300 NVL72 packs 72 Blackwell Ultra GPUs into a single liquid-cooled rack delivering 1.44 exaflops of FP4 performance and 37TB of fast memory. Microsoft has deployed over 4,600 racks for OpenAI, with cloud pricing starting at $2.90/hour per GPU. The system delivers 50x the output of Hopper platforms while requiring mandatory liquid cooling at 132-140kW per rack. Bottom Line: Cloud architects should evaluate GB300-class instances from Azure, AWS, or CoreWeave for

Editor's pickTechnology
ai.via.news· 6 days ago

Semiconductor Industry Races to Build AI Infrastructure as Intel Joins Mega-Fab Project | Via News

Intel joined a major semiconductor fabrication partnership focused on AI infrastructure capacity, part of an industry-wide race to build production capabilities for AI training and deployment chips. The collaboration reflects growing demand for specialized silicon as AI models require increasingly powerful hardware. ARM is positioning for significant growth in the AI chip market, targeting substantial revenue from processor designs. The company's architecture powers many mobile and edge AI applications, making it central to the physical AI infrastructure buildout. Meanwhile, component manufacturers are securing positions in the AI supply chain—Camtek landed or

Editor's pickTechnology
edgeir.com· 6 days ago

Akamai pushes AI inference to the edge with orchestrated GPU grid across 4,400 sites | Edge Infrastructure Review

Akamai launched the Akamai Inference Cloud late last year, the first global-scale implementation of NVIDIA AI Grid, enabling distributed AI inference across 4,400 edge locations. The platform delivers AI systems on NVIDIA AI infrastructure and routes workloads over Akamai’s network for the best possible latency, cost, and performance. Intelligent orchestration optimizes the cost-efficiency and response time of AI applications via improved “tokenomics” in Akamai’s AI Grid, resulting in throughput gains. “AI

Editor's pickTechnology
👀 Claude complaints· 4 days ago

AI scaling now depends on efficiency

Scaling AI requires efficient compute that can deliver both performance and scale. As demand grows, data centers must do more within fixed power and space constraints.

Editor's pickEnergy & Utilities
Arxiv· 5 days ago

When Forecast Accuracy Fails: Rank Correlation and Decision Quality in Multi-Market Battery Storage Optimization

arXiv:2604.12082v1 Announce Type: cross Abstract: Battery energy storage systems (BESS) participating in multi-market electricity trading require price forecasts to optimize dispatch decisions. A widely held assumption is that forecast accuracy, measured by standard metrics such as mean absolute error (MAE), drives trading performance. We challenge this assumption using a hierarchical three-layer optimization system trading simultaneously on frequency containment reserve (FCR), automatic frequency restoration reserve (aFRR), day-ahead, and continuous intraday (XBID) markets in Germany and Switzerland over 2020-2025, with real market data from Regelleistung.net and Swissgrid. We find that rank correlation (Kendall tau), rather than MAE, is the primary predictor of intraday dispatch value: forecasts above an empirical threshold of tau approximately 0.85-0.95 capture up to 97-100% of perfect-foresight revenue, while persistence forecasts with near-zero tau capture only 33%. This threshold is stable across market regimes and volatility levels, and reflects the ordinal structure of the dispatch problem. Furthermore, under reserve market constraints, FCR capacity revenue exceeds XBID by 6.5x per MW, making capacity allocation -- not forecast accuracy -- the primary driver of total revenue. In the Swiss market, hydrological surplus anomalies are significantly associated with balancing market revenue (p = 0.0005), a mechanism absent from existing German-focused literature. These findings reframe forecast evaluation for BESS operators: the relevant question is not what the MAE is, but whether the forecast achieves tau-sufficiency.
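The paper's central claim – that rank order matters more than absolute error for dispatch value – can be illustrated with a toy example (all prices and numbers below are hypothetical, not from the paper): a uniformly biased forecast has a large MAE but perfect Kendall tau and still captures the full perfect-foresight spread in a one-cycle dispatch, while a flatter forecast with lower MAE scrambles the ranks and loses most of it.

```python
from itertools import combinations

def kendall_tau(a, b):
    # Concordant minus discordant pairs over all n*(n-1)/2 pairs (no tie handling).
    conc = disc = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (len(a) * (len(a) - 1) / 2)

def mae(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def dispatch_revenue(forecast, actual):
    # Toy one-cycle dispatch: buy 1 MWh in the forecast-cheapest hour,
    # sell it in the forecast-priciest hour, settle at actual prices.
    buy = min(range(len(forecast)), key=lambda h: forecast[h])
    sell = max(range(len(forecast)), key=lambda h: forecast[h])
    return actual[sell] - actual[buy]

actual    = [30.0, 55.0, 80.0, 45.0, 20.0, 95.0]  # hypothetical EUR/MWh prices
biased    = [p + 40.0 for p in actual]            # MAE = 40, but tau = 1.0 (ranks intact)
scrambled = [50.0, 52.0, 48.0, 51.0, 49.0, 53.0]  # MAE = 22, but ranks are wrong

# The biased forecast has worse MAE yet captures the full 75 EUR/MWh spread;
# the lower-MAE scrambled forecast buys in the wrong hour and earns only 15.
print(mae(biased, actual), kendall_tau(biased, actual), dispatch_revenue(biased, actual))
print(mae(scrambled, actual), kendall_tau(scrambled, actual), dispatch_revenue(scrambled, actual))
```

This is the tau-sufficiency intuition in miniature: the dispatch optimizer only needs the ordering of prices, not their levels.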

Editor's pickEnergy & Utilities
Bebeez· 5 days ago

Delska Launches One of the Baltics’ Most Advanced and Sustainable Data Centers in Riga

RIGA, Latvia, April 15, 2026 /PRNewswire/ — Today, Delska, one of the leading data center operators in the Baltics, officially launched EU North Riga LV DC1 – a 10 MW data center designed for artificial intelligence (AI) and high-performance computing (HPC). The project has received the Latvian Construction Annual Award (1st place in the “Production […]

Editor's pickTechnology
eenewseurope.com· 6 days ago

Intel and Google expand AI infrastructure partnership ...

News, April 14, 2026. By Asma Adhimi. Intel and Google are deepening their long-running partnership with a multiyear collaboration aimed at advancing next-generation AI and cloud infrastructure. The agreement focuses on improving performance, efficiency and scalability across Google’s global data center footprint as AI workloads continue to grow in scale and complexity. The companies say the expanded collaboration will combine Intel’s Xeon CPUs with custom infrastructure processing units (IPUs) to support modern heterogeneous computing systems. For engineers and system architects following AI infrastructure trends,

AI Models & Capabilities14 articles
Editor's pickEducation
Arxiv· 5 days ago

Can Persona-Prompted LLMs Emulate Subgroup Values? An Empirical Analysis of Generalisability and Fairness in Cultural Alignment

arXiv:2604.12851v1 Announce Type: new Abstract: Despite their global prevalence, many Large Language Models (LLMs) are aligned to a monolithic, often Western-centric set of values. This paper investigates the more challenging task of fine-grained value alignment: examining whether LLMs can emulate the distinct cultural values of demographic subgroups. Using Singapore as a case study and the World Values Survey (WVS), we examine the value landscape and show that even state-of-the-art models like GPT-4.1 achieve only 57.4% accuracy in predicting subgroup modal preferences. We construct a dataset of over 20,000 samples to train and evaluate a range of models. We demonstrate that simple fine-tuning on structured numerical preferences yields substantial gains, improving accuracy on unseen, out-of-distribution subgroups by an average of 17.4%. These gains partially transfer to open-ended generation. However, we find significant pre-existing performance biases, where models better emulate young, male, Chinese, and Christian personas. Furthermore, while fine-tuning improves average performance, it widens the disparity between subgroups when measured by distance-aware metrics. Our work offers insights into the limits and fairness implications of subgroup-level cultural alignment.

Editor's pickPAYWALLTechnology
NYT· 5 days ago

We Don’t Really Know How A.I. Works. That’s a Problem.

For us to trust it on certain subjects, researchers in the growing field of interpretability might need to learn how to open the black box of its brain.

Editor's pickTechnology
reuters.com· 6 days ago

OpenAI unveils GPT-5.4-Cyber a week after rival's ... - Reuters

April 14 (Reuters) - OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model fine-tuned specifically for defensive cybersecurity work, following rival Anthropic's announcement of frontier AI model Mythos. Mythos, announced on April 7, is being deployed as part of Anthropic's "[Project Glasswing](https://www.r

Editor's pick
Arxiv· 5 days ago

The Long-Horizon Task Mirage? Diagnosing Where and Why Agentic Systems Break

arXiv:2604.11978v1 Announce Type: new Abstract: Large language model (LLM) agents perform strongly on short- and mid-horizon tasks, but often break down on long-horizon tasks that require extended, interdependent action sequences. Despite rapid progress in agentic systems, these long-horizon failures remain poorly characterized, hindering principled diagnosis and comparison across domains. To address this gap, we introduce HORIZON, an initial cross-domain diagnostic benchmark for systematically constructing tasks and analyzing long-horizon failure behaviors in LLM-based agents. Using HORIZON, we evaluate state-of-the-art (SOTA) agents from multiple model families (GPT-5 variants and Claude models), collecting 3100+ trajectories across four representative agentic domains to study horizon-dependent degradation patterns. We further propose a trajectory-grounded LLM-as-a-Judge pipeline for scalable and reproducible failure attribution, and validate it with human annotation on trajectories, achieving strong agreement (inter-annotator kappa = 0.61; human-judge kappa = 0.84). Our findings offer an initial methodological step toward systematic, cross-domain analysis of long-horizon agent failures and offer practical guidance for building more reliable long-horizon agents. We release our project website at the HORIZON Leaderboard (https://xwang2775.github.io/horizon-leaderboard/) and welcome contributions from the community.
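Cohen's kappa, the agreement statistic the paper reports, corrects raw agreement for the agreement expected by chance. A minimal computation, using made-up failure-mode labels rather than the paper's actual taxonomy, looks like this:

```python
def cohens_kappa(a, b):
    # Observed agreement corrected for the agreement expected by chance
    # from each annotator's marginal label frequencies.
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical failure-attribution labels from a human annotator and an LLM judge.
human = ["tool_error", "planning", "planning", "tool_error", "context_loss", "planning"]
judge = ["tool_error", "planning", "context_loss", "tool_error", "context_loss", "planning"]

print(cohens_kappa(human, judge))  # 0.75: strong agreement beyond chance
```

A kappa of 0.84 between human and judge, as reported, therefore indicates that the LLM-as-a-Judge pipeline agrees with humans well beyond what label frequencies alone would produce.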

Editor's pickTechnology
Daily Brew· 4 days ago

Meta researchers introduce Hyperagents to unlock self-improving AI for non-coding tasks

Meta has unveiled 'Hyperagents,' a new approach designed to enable AI to improve itself across various non-programming domains.

Editor's pickTechnology
VentureBeat· 5 days ago

AI's next bottleneck isn't the models — it's whether agents can think together

AI agents can connect together, but they cannot think together. That’s a huge difference and a bottleneck for next-gen systems, says Outshift by Cisco’s SVP and GM Vijoy Pandey. As he describes the current state of AI: Agents can be stitched together in a workflow or plug into a supervisor model — but there's no semantic alignment, no shared context. They’re essentially working from scratch each go-around.  This calls for next-level infrastructure, or what Pandey describes as the "internet of cognition."  “Agents are not able to think together because connection is not cognition,” he said. “We need to get to a point where you are sharing cognition. That is the greater unlock.” Creating new protocols to support next-gen agent communication So what is shared cognition? It’s when AI agents or entities can meaningfully work together to solve for something net new that they weren’t trained for, and do it “100% without human intervention,” Pandey said on the latest episode of Beyond the Pilot. The Cisco exec analogizes it to human intelligence. Humans evolved over hundreds of thousands of years, first becoming intelligent individually, then communicating on a basic level (with gestures or drawings). That communication improved over time, eventually unlocking a ‘cognitive revolution’ and collective intelligence that allowed for shared intent and the ability to coordinate, negotiate, and ground and discover information.  “Shared intent, shared context, collective innovation: That's the exact trajectory that's playing out in silicon today,” Pandey said.  His team sees it as a “horizontal distributed assistance problem.” They are pursuing “distributed super intelligence” by codifying intent, context, and collective innovation as a set of rules, APIs, and capabilities within the infrastructure itself.  Their approach is a set of new protocols: Semantic State Transfer Protocol (SSTP); Latent Space Transfer Protocol (LSTP); and Compressed State Transfer Protocol (CSTP).  
SSTP operates at the language level, analyzing semantic communication so systems can infer the right tool or task. Pandey's team recently collaborated with MIT on a related piece called the Ripple Effect Protocol. LSTP can be used to transfer the “entire latent space” of one agent to another, Pandey explained. “Can we just take the KV cache and send it over as an example?” he said. “Because that would be the most efficient way: instead of going through the tax of tokenizing it, going to a natural language, then going back the stack on the other side.”  CSTP handles compression — grounding only the targeted variants while compressing everything else. Pandey says it's particularly well-suited for edge deployments where you need to send large amounts of state accurately. Ultimately, Pandey’s team is building a fabric to scale out intelligence and ensure that cognition states are synchronized across endpoints. Further, they are developing what they call “cognition engines” that provide guardrails and accelerate systems.  “Protocols, fabric, cognition engines: These are the three layers that we are building out in the pursuit of distributed super intelligence,” Pandey said.  How Cisco solved a big pain point Stepping back from these advanced, next-level systems, Cisco has achieved tangible results with existing AI capabilities. Pandey described a specific pain point with the company’s site reliability engineering (SRE) team.  While they were churning out more and more products and code, the team itself wasn’t growing, and was feeling pressure to improve efficiency. Pandey introduced AI agents that automated more than a dozen end-to-end workflows, including continuous integration/continuous delivery (CI/CD) pipelines, EC2 instance spin-ups and Kubernetes cluster deployments.  Now, more than 20 agents — some built in-house, some third-party — have access to 100-plus tools via frameworks like Model Context Protocol (MCP), while also plugging into Cisco’s security platforms. 
The result: A decrease from “hours and hours to seconds” for certain deployments; further, agents have eliminated 80% of the issues the SRE team was seeing within Kubernetes workflows. Still, as Pandey noted, AI is a tool like any other. “It does not mean that I have a new hammer and I'm just gonna go around looking for nails,” he said. “You still have deterministic code. You need to marry these two worlds to get the best outcome for the problem that you're solving.” Listen to the podcast to hear more about: how we are now enabling a new paradigm of non-deterministic computing; how Cisco bumped error detection capabilities in large networks from 10% to 100%; how Pandey named his own AI agent “Arnold Layne” after an early Pink Floyd song; why the “internet of cognition” must be an open, interoperable effort; and how Cisco’s open source project Agntcy addresses discovery, identity and access management (IAM), observability, and evaluation. You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.

Editor's pick
Arxiv· 5 days ago

When to Forget: A Memory Governance Primitive

arXiv:2604.12007v1 Announce Type: new Abstract: Agent memory systems accumulate experience but currently lack a principled operational metric for memory quality governance -- deciding which memories to trust, suppress, or deprecate as the agent's task distribution shifts. Write-time importance scores are static; dynamic management systems use LLM judgment or structural heuristics rather than outcome feedback. This paper proposes Memory Worth (MW): a two-counter per-memory signal that tracks how often a memory co-occurs with successful versus failed outcomes, providing a lightweight, theoretically grounded foundation for staleness detection, retrieval suppression, and deprecation decisions. We prove that MW converges almost surely to the conditional success probability p+(m) = Pr[y_t = +1 | m in M_t] -- the probability of task success given that memory m is retrieved -- under a stationary retrieval regime with a minimum exploration condition. Importantly, p+(m) is an associational quantity, not a causal one: it measures outcome co-occurrence rather than causal contribution. We argue this is still a useful operational signal for memory governance, and we validate it empirically in a controlled synthetic environment where ground-truth utility is known: after 10,000 episodes, the Spearman rank-correlation between Memory Worth and true utilities reaches rho = 0.89 +/- 0.02 across 20 independent seeds, compared to rho = 0.00 for systems that never update their assessments. A retrieval-realistic micro-experiment with real text and neural embedding retrieval (all-MiniLM-L6-v2) further shows stale memories crossing the low-value threshold (MW = 0.17) while specialist memories remain high-value (MW = 0.77) across 3,000 episodes. The estimator requires only two scalar counters per memory unit and can be added to architectures that already log retrievals and episode outcomes.
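The two-counter estimator is simple enough to sketch. The following is an illustrative simulation under the paper's stationary-retrieval assumption; the class and variable names are ours, not the authors'. A memory that co-occurs with success 80% of the time should see its MW converge toward 0.8.

```python
import random

class MemoryWorth:
    """Two scalar counters per memory; MW estimates Pr[success | memory retrieved]."""
    def __init__(self):
        self.successes = 0
        self.failures = 0

    def update(self, success: bool):
        # Called once per episode in which this memory was retrieved.
        if success:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def worth(self) -> float:
        total = self.successes + self.failures
        return self.successes / total if total else 0.5  # uninformative prior

random.seed(0)
p_true = 0.8  # hypothetical ground-truth success rate when this memory is retrieved
mw = MemoryWorth()
for _ in range(10_000):
    mw.update(random.random() < p_true)

print(round(mw.worth, 2))  # converges toward p_true as episodes accumulate
```

As the abstract stresses, this estimates an associational quantity (outcome co-occurrence), not a causal contribution, but it costs only two integers per memory unit.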

Editor's pickTechnology
medium.com· 6 days ago

The Last Human Stronghold Falls: Inside the GrandCode Multi-Agent System | by ArXiv In-depth Analysis | Apr, 2026 | GoPenAI

Discover how GrandCode, a groundbreaking multi-agent reinforcement learning system, shattered the final AI bottleneck to achieve Grandmaster level in live Codeforces competitive programming. Introduction: The Hook & The Historical Context. For years, the artificial intelligence community has tracked the exponential rise of Large Language Models (LLMs) with a mix of awe and calculated skepticism. We watched as AI conquered conversational fluency, aced the bar exam, and even gene

Editor's pick
Arxiv· 5 days ago

Self-Monitoring Benefits from Structural Integration: Lessons from Metacognition in Continuous-Time Multi-Timescale Agents

arXiv:2604.11914v1 Announce Type: new Abstract: Self-monitoring capabilities -- metacognition, self-prediction, and subjective duration -- are often proposed as useful additions to reinforcement learning agents. But do they actually help? We investigate this question in a continuous-time multi-timescale agent operating in predator-prey survival environments of varying complexity, including a 2D partially observable variant. We first show that three self-monitoring modules, implemented as auxiliary-loss add-ons to a multi-timescale cortical hierarchy, provide no statistically significant benefit across 20 random seeds, 1D and 2D predator-prey environments with standard and non-stationary variants, and training horizons up to 50,000 steps. Diagnosing the failure, we find the modules collapse to near-constant outputs (confidence std < 0.006, attention allocation std < 0.011) and the subjective duration mechanism shifts the discount factor by less than 0.03%. Policy sensitivity analysis confirms the agent's decisions are unaffected by module outputs in this design. We then show that structurally integrating the module outputs -- using confidence to gate exploration, surprise to trigger workspace broadcasts, and self-model predictions as policy input -- produces a medium-large improvement over the add-on approach (Cohen's d = 0.62, p = 0.06, paired) in a non-stationary environment. Component-wise ablations reveal that the TSM-to-policy pathway contributes most of this gain. However, structural integration does not significantly outperform a baseline with no self-monitoring (d = 0.15, p = 0.67), and a parameter-matched control without modules performs comparably, so the benefit may lie in recovering from the trend-level harm of ignored modules rather than in self-monitoring content. The architectural implication is that self-monitoring should sit on the decision pathway, not beside it.

Editor's pickTechnology
Daily AI News April 15, 2026: AI transformation - A BBVA Playbook· 5 days ago

Databricks tested a stronger model against its multi-step agent on hybrid queries

The article discusses how multi-step agentic systems can outperform single models on hybrid enterprise queries by separating data access from reasoning.

Editor's pickTechnology
medium.com· 6 days ago

Your AI Isn’t Dumb: It Just Doesn’t Have a Harness | by Ben Chen | Apr, 2026 | Medium

There’s a small experiment that perfectly captures where AI engineering is headed. Take Gemma 4, a compact 2-billion-parameter open-source model from Google, and give it a debugging task: fix a buggy email-parsing function in `parser.py` so that the tests in `verify.py` all pass. Both files are sitting right next to it. The model’s first move? It ignores the files entirely, writes an imaginary `parser.py` from scratch, pretends to verify it, and declares itself done. That sounds like a failure of intelligence. It isn’t. The model understood the task completely. It knew what the function should look like, and it could write correct code for it. What it failed to do was look down and notice

Editor's pickPharma & Biotech
Daily AI News April 15, 2026: AI transformation - A BBVA Playbook· 5 days ago

Evaluating agents for scientific discovery

This piece examines benchmark design for scientific discovery agents, testing how well AI systems handle science-like tasks within simulated environments.

Editor's pickTechnology
Arxiv· 5 days ago

Identity as Attractor: Geometric Evidence for Persistent Agent Architecture in LLM Activation Space

arXiv:2604.12016v1 Announce Type: new Abstract: Large language models map semantically related prompts to similar internal representations -- a phenomenon interpretable as attractor-like dynamics. We ask whether the identity document of a persistent cognitive agent (its cognitive_core) exhibits analogous attractor-like behavior. We present a controlled experiment on Llama 3.1 8B Instruct, comparing hidden states of an original cognitive_core (Condition A), seven paraphrases (Condition B), and seven structurally matched controls (Condition C). Mean-pooled states at layers 8, 16, and 24 show that paraphrases converge to a tighter cluster than controls (Cohen's d > 1.88, p < 10^{-27}, Bonferroni-corrected). Replication on Gemma 2 9B confirms cross-architecture generalizability. Ablations suggest the effect is primarily semantic rather than structural, and that structural completeness appears necessary to reach the attractor region. An exploratory experiment shows that reading a scientific description of the agent shifts internal state toward the attractor -- closer than a sham preprint -- distinguishing knowing about an identity from operating as that identity. These results provide representational evidence that agent identity documents induce attractor-like geometry in LLM activation space.

Editor's pickManufacturing & Industrials
Daily AI News April 15, 2026: AI transformation - A BBVA Playbook· 5 days ago

Gemini Robotics-ER 1.6: Powering real-world robotics tasks through enhanced embodied reasoning

Google DeepMind has updated its robotics stack to improve embodied AI performance on real-world tasks like instrument reading and environment interpretation.

Adoption & Impact

26 articles
AI Adoption & Diffusion17 articles
Editor's pickPAYWALLManufacturing & Industrials
Bloomberg· 5 days ago

Skild AI Acquires Zebra Technologies’ Robotics Automation Business

Skild AI, a fast-rising startup that makes software to help robots learn to complete tasks, has bought the robotics automation division of Zebra Technologies Corp., marking its latest effort to broaden its reach in a hot segment.

Editor's pickPAYWALLProfessional Services
Bloomberg· 5 days ago

Thoma Bravo Signs Multiyear Deal With Google for AI Adoption

Software investor Thoma Bravo struck a strategic partnership with Alphabet Inc.’s Google Cloud to help the private equity firm’s portfolio companies accelerate their adoption of artificial intelligence.

Editor's pickTechnology
Theregister· 5 days ago

Headless 360: Salesforce's latest pitch to let AI do the dev work

Here comes 'enterprise vibe coding' as the CRM giant aims to open development to anyone on the platform. Salesforce has introduced what it calls Headless 360 at its developer event TDX, which starts today in San Francisco, designed to expand the reach of its app-building tools beyond traditional developers.…

Editor's pickMedia & Entertainment
VentureBeat· 5 days ago

Adobe’s new Firefly AI Assistant wants to run Photoshop, Premiere, Illustrator and more from one prompt

Adobe today launched its most ambitious AI offensive to date, unveiling the Firefly AI Assistant — a new agentic creative tool that can orchestrate complex, multi-step workflows across the company's entire Creative Cloud suite from a single conversational interface — alongside a raft of new video, image, and collaboration features designed to position the company at the center of the rapidly evolving AI-powered content creation landscape. The announcements, which also include a new Color Mode for Premiere Pro, the addition of Kling 3.0 video models to Firefly's growing roster of third-party AI engines, and Frame.io Drive — a virtual filesystem that lets distributed teams work with cloud-stored media as though it lived on their local machines — represent Adobe's clearest signal yet that it views agentic AI not as a feature upgrade but as a fundamental reshaping of how creative work gets done. "We want creators to tell us the destination and let the Firefly assistant — with its deep understanding of all the Adobe professional tools and generative tools — bring the tools to you right in the conversation," Alexandru Costin, Vice President of AI & Innovation at Adobe, told VentureBeat in an exclusive interview ahead of the launch. The stakes could hardly be higher. Adobe is fighting to convince Wall Street, creative professionals, and a wave of well-funded AI-native competitors that its decades-old software empire can not only survive the generative AI revolution but lead it. How Adobe turned a research prototype into a 100-tool creative agent The centerpiece of today's announcement is the Firefly AI Assistant, which Adobe describes as a fundamentally new way to interact with its creative tools. Rather than requiring users to manually navigate between Photoshop, Premiere, Illustrator, Lightroom, Express, and other apps — selecting the right tool for each step of a complex project — the assistant lets creators describe an outcome in natural language. 
The agent then figures out which tools to invoke, in what order, and executes the workflow. The assistant is the productized version of Project Moonlight, a research prototype Adobe first previewed at its annual MAX conference in the fall of 2025 and subsequently refined through a private beta. "This is basically [Project] Moonlight," Costin confirmed to VentureBeat. "We started with all the learnings from Moonlight, and we engaged with customers. We looked internally. We evolved that architecture to make it more ambitious." Under the hood, Adobe says it has assembled roughly 100 tools and skills that the assistant can call upon, spanning generative image and video creation, precision photo editing, layout adaptation, and even stakeholder review through Frame.io. The system is built around a single conversational interface inside the Firefly web app where users describe what they want and the assistant maintains context across sessions. Pre-built Creative Skills — purpose-built, multi-step workflow templates such as portrait retouching or social media asset generation — can be run from a single prompt and customized to match a creator's own style. The assistant also learns a creator's preferred tools, workflows, and aesthetic choices over time, and understands the content type being worked on — image, video, vector, brand assets — to make context-aware decisions. Crucially, outputs use native Adobe file formats — PSD, AI, PRPROJ — meaning users can take any result into the corresponding flagship app for manual, pixel-level refinement at any point. "We always imagine this continuum where you can have complete conversational edits and pixel-perfect edits, and you can decide, as a creative, where you want to land," Costin said. The Firefly AI Assistant will enter public beta in the coming weeks, though Adobe did not specify an exact date. 
Why Wall Street is watching Adobe's AI pricing model so closely For a company whose AI monetization story has faced persistent skepticism from investors, the pricing structure of the Firefly AI Assistant will be closely watched. Costin told VentureBeat that, at launch, using the assistant will require an active Adobe subscription that includes the relevant apps — meaning users who want the agent to invoke Photoshop cloud capabilities, for instance, will need an entitlement that includes the Photoshop SKU. Generative actions will consume the user's existing pool of generative credits, consistent with how Firefly credits work across the rest of Adobe's platform. "To use some of these cloud capabilities from Photoshop and other apps, you need to have a subscription that includes access to the Photoshop SKU," Costin explained. "You'll be consuming your credits when you use generative features." He acknowledged, however, that the model could evolve: "As we better understand the value of this — and the costs of operating the brain, the conversation engine — things might change." The question of whether Adobe can convert AI enthusiasm into meaningful revenue growth is anything but theoretical. When Adobe reported its most recent quarterly results in March, it touted 10% year-over-year revenue growth to $6.4 billion and disclosed that annual recurring revenue from AI standalone and add-on products had reached $125 million — a figure CEO Shantanu Narayen projected would double within nine months. Adobe adds Chinese AI video models to Firefly, raising commercial safety questions Alongside the assistant, Adobe is expanding Firefly's roster of third-party AI models to include Kling 3.0 and Kling 3.0 Omni, two video generation models developed by Kuaishou, the Chinese technology company. 
Kling 3.0 focuses on fast, high-quality production with smart storyboarding and audio-visual sync, while the Omni variant adds professional controls for shot duration, camera angle, and character movement across multi-shot sequences. The additions bring Firefly's model count to more than 30, joining Google's Nano Banana 2 and Veo 3.1, Runway's Gen-4.5, Luma AI's Ray3.14, Black Forest Labs' FLUX.2[pro], ElevenLabs' Multilingual v2, and others. When asked whether Adobe had concerns about integrating a model from a Chinese tech company given the current geopolitical climate, Costin was direct: "We think choice is what we want to offer our customers." He explained that Adobe's strategy distinguishes between its own commercially safe, first-party Firefly models — trained on licensed Adobe Stock imagery and public domain content — and third-party partner models, which carry different commercial safety profiles. "For some use cases, like ideation, non-production use cases, we got requests from customers to support some external models," Costin said. "If I'm in ideation, I might be more flexible with commercial safety. When I go into production, I’d want to have a model that gives you more confidence." This raises an important nuance for the agentic era. When the Firefly AI Assistant autonomously selects which model to use for a given task, the commercial safety guarantees may vary depending on which engine it invokes. Costin pointed to Adobe's Content Credentials system — the metadata-and-fingerprinting framework developed through the Content Authenticity Initiative — as the mechanism for maintaining transparency. "The agentic power — and the fact that the assistant has access to all of those models — means it could decide to use a model that carries different content credentials," he acknowledged. "But with the transparency of content credentials, the user will know how a particular piece of content was created and can decide whether that's commercially safe or not." 
Adobe offers commercial indemnity for its first-party Firefly models but applies different indemnity levels for third-party models — a distinction that enterprise buyers, in particular, will need to carefully evaluate.

Inside Adobe's active collaboration with Nvidia on long-running AI agent infrastructure

Adobe's agentic ambitions also intersect with its strategic partnership with Nvidia, announced earlier this year at Nvidia’s GTC conference. When asked whether the Firefly AI Assistant's agentic capabilities are built on Nvidia's agent toolkit and NeMo infrastructure, Costin revealed that the collaboration is active but has not yet made it into a shipping product. "We're in active discussions — investigating not only Nemotron," Costin said. "They have this technology called Open Shell and Nemo Claw, which give us the ability to efficiently run long-running agentic workflows in a sandboxed environment." He said the technology would become increasingly important as Adobe pushes the assistant to handle longer, more autonomous creative tasks — but cautioned that "it's not shipping yet. It's being actively explored." For Nvidia, which is building an ecosystem of enterprise AI agent platforms with partners like Adobe, Salesforce, and SAP, the partnership could eventually serve as a high-profile proof point for its agent infrastructure stack in the creative vertical. For Adobe, the ability to run complex, long-duration agentic workflows efficiently and securely in sandboxed environments could be the technical foundation that separates the Firefly AI Assistant from lighter-weight chatbot integrations offered by competitors. The partnership also signals Adobe's recognition that the computational demands of agentic AI — where a single user request may trigger dozens of model calls and tool invocations — require infrastructure partnerships that go well beyond what a software company can build alone.
Premiere Pro's new color grading mode and the tools Adobe is shipping today

Beyond the headline AI assistant announcement, Adobe's broader set of updates reflects a company trying to strengthen its position across every phase of the content creation pipeline. Color Mode in Premiere Pro may be the most significant near-term upgrade for working editors. Entering public beta today, Color Mode is described as a first-of-its-kind color grading experience built specifically for the way editors — rather than dedicated colorists — think and work. Adobe notes that it was developed through an extensive private beta with hundreds of working editors, and that participants reported they "actually enjoy color grading" — a sentiment suggesting Adobe may have found a way to democratize one of post-production's most intimidating disciplines. General availability is expected later in 2026. The Firefly Video Editor gains audio upgrades including the Enhance Speech feature migrated from Premiere and Adobe Podcast, direct Adobe Stock integration with access to more than 800 million licensed assets, and simple color adjustment controls with intuitive sliders and one-click looks. On the image editing front, Adobe introduced Precision Flow, which generates a range of semantic variations from a single prompt and lets users browse them via an interactive slider — a novel approach that Costin described as "the best slider-based control mixed with the best semantic understanding of not only the existing scene, but what the scene could be." AI Markup complements this by letting users draw directly on images to specify where and how edits should be applied. After Effects 26.2 adds an AI-powered Object Matte tool that dramatically accelerates rotoscoping and masking — create accurate mattes of moving subjects with a hover and click, refine with a Quick Selection brush, and perfect edges with a Refine Edge tool.
Frame.io Drive wants to kill the shipped hard drive and make cloud media feel local

Rounding out the announcements, Frame.io Drive addresses one of the most persistent pain points in distributed video production: getting media from point A to point B without losing hours — or days — to downloads, syncing, and shipped hard drives. Frame.io Drive is a desktop application that mounts Frame.io projects to a user's computer so media appears in Finder or Explorer and behaves like local files. The underlying technology, called Frame.io Mounted Storage, streams media on demand as applications request it, while local caching ensures smooth playback. The product builds on streaming technology provided by Suite Studios, and the real-time file access capability is included with every Frame.io account. Adobe emphasized that all content lives solely within Frame.io and is never shared with third parties. The move positions Frame.io not just as a review-and-approval tool at the end of the production pipeline but as the central media layer from the very beginning of a project — from first capture through final delivery. If successful, the strategy could significantly deepen Adobe's lock-in with professional video teams by making Frame.io the single source of truth for distributed productions. Frame.io Drive and Mounted Storage will roll out in phases, with Enterprise customers gaining access starting today and accounts on other plans following shortly. Others can join a waitlist.

Adobe's biggest challenge isn't building the AI — it's convincing creators to trust it

Taken together, today's announcements paint a picture of a company executing aggressively across multiple fronts — but also one that is navigating a complex moment. Adobe first introduced Firefly in March 2023 as a family of generative AI models focused on image and text effects, with a strong emphasis on commercial safety through training on licensed Adobe Stock content.
In the three years since, the company has rapidly expanded into video generation, multi-model access, and now agentic workflows — a trajectory that mirrors the broader industry's shift from standalone AI features to AI-native systems. But the competitive field has grown dramatically. Runway, Pika, and a host of AI-native video generation startups have captured mindshare among creators. Canva has aggressively integrated AI into its design platform. And the emergence of powerful foundation models from OpenAI, Google, and Anthropic — the latter of which Adobe says it will integrate with Firefly AI Assistant capabilities — means the barrier to building creative AI tools has never been lower. Adobe is also navigating these product ambitions against a complex corporate backdrop: the impending departure of CEO Shantanu Narayen, an actively exploited zero-day vulnerability in Acrobat Reader (CVE-2026-34621) that had been used by hackers for months before being patched this week, a U.K. antitrust investigation over cancellation fees, and a recent $75 million lawsuit settlement. Adobe's response, articulated clearly through today's launches, is to lean into what it believes is its deepest moat: the integration of AI into a set of professional-grade, category-leading applications that no startup can replicate overnight. Costin framed the agentic transition as empowering rather than threatening to creative professionals, comparing Creative Skills to a next-generation version of Photoshop Actions — the macro-recording feature that has long allowed power users to automate repetitive tasks. "We want to help our customers become — from the ones doing all the work — to be creative directors, doing some of the work, but most importantly, guiding the assistant in executing some of those creative visions," he said. It is a compelling pitch — and, in its own way, a revealing one.
For three decades, Adobe made its fortune by selling the tools that turned creative vision into finished pixels. Now it is asking its customers to let an AI agent handle more of that translation, trusting that the human role will shift from operating the tools to directing the outcome. Whether creators embrace that bargain — and whether Wall Street rewards it — will determine not just Adobe's trajectory but the shape of an entire industry learning to create alongside machines.

Editor's pickPAYWALLMedia & Entertainment
FT· 5 days ago

Can the traditional British tabloid survive the digital age?

Social media feeds, AI summaries and influencer-driven content are luring readers away

Editor's pickTechnology
medium.com· 6 days ago

The Rise of AI Coding Agents: Why 2026 Is the Year Software Writes Itself

*Published: April 14, 2026 · 8 min read* **Picture this:** You file a GitHub issue at 9 PM describing a bug. By the time you wake up, there's a pull request waiting — code written, tests passing, edge cases handled. You didn't prompt an AI. You didn't babysit a chatbot. An agent just… did it. This isn't science fiction. It's Tuesday. ## From Autocomplete to Autonomy: The 18-Month Leap Let's rewind to mid-2023. GitHub Copilot was the bleeding edge — a fancy autocomplete that could save you a few keystrokes. Impressive? Absolutely. But it was still a passenger. You drove, it suggested turns. Then something broke loose. In the

Editor's pickFinancial Services
Bebeez· 5 days ago

Swedish SolvaPay raises €2.4 million in a pre-seed to build infrastructure for autonomous digital commerce

SolvaPay has raised €2.4 million in a pre-seed round to build what it positions as the first payment infrastructure layer for the emerging agentic economy, where AI agents can independently transact across digital services. The round was led by European fintech VC Redstone and Silicon Valley–based MS&AD Ventures, with participation from Antler […]

Editor's pickFinancial Services
Daily AI News April 15, 2026: AI transformation - A BBVA Playbook· 5 days ago

The Hidden Demand for AI Inside Your Company

This HBR case study explains how BBVA transformed unmanaged employee LLM use into a secure, governed ChatGPT Enterprise rollout, serving as a practical playbook for enterprise AI adoption.

Editor's pickProfessional Services
Arxiv· 5 days ago

Memory as Metabolism: A Design for Companion Knowledge Systems

arXiv:2604.12034v1 Announce Type: new Abstract: Retrieval-Augmented Generation remains the dominant pattern for giving LLMs persistent memory, but a visible cluster of personal wiki-style memory architectures emerged in April 2026 -- design proposals from Karpathy, MemPalace, and LLM Wiki v2 that compile knowledge into an interlinked artifact for long-term use by a single user. They sit alongside production memory systems that the major labs have shipped for over a year, and an active academic lineage including MemGPT, Generative Agents, Mem0, Zep, A-Mem, MemMachine, SleepGate, and Second Me. Within a 2026 landscape of emerging governance frameworks for agent context and memory -- including Context Cartography and MemOS -- this paper proposes a companion-specific governance profile: a set of normative obligations, a time-structured procedural rule, and testable conformance invariants for the specific failure mode of entrenchment under user-coupled drift in single-user knowledge wikis built on the LLM wiki pattern. The design principle is that personal LLM memory is a companion system: its job is to mirror the user on operational dimensions (working vocabulary, load-bearing structure, continuity of context) and compensate on epistemic failure modes (entrenchment, suppression of contradicting evidence, Kuhnian ossification). Five operations implement this split -- TRIAGE, DECAY, CONTEXTUALIZE, CONSOLIDATE, AUDIT -- supported by memory gravity and minority-hypothesis retention. The sharpest prediction: accumulated contradictory evidence should have a structural path to updating a centrality-protected dominant interpretation through multi-cycle buffer pressure accumulation, a failure mode no existing benchmark captures. The safety story at the single-agent level is partial, and the paper is explicit about what it does and does not solve.

Editor's pickFinancial Services
Bebeez· 5 days ago

Aryza Acquires Umbrella Tech, advancing Collection Intelligence with voice-based Agentic AI for the global Collections & Recoveries industry

LONDON, April 15, 2026 /PRNewswire/ — Aryza, a global provider of E2E AI-enabled automation software across the credit & debt lifecycle, has acquired Umbrella Tech, a Toronto based leader in voice-based agentic AI for the financial services industry.   The acquisition significantly expands Aryza’s Agentic Collections & Recoveries intelligence, integrating human-like conversations into its existing […]

Editor's pickFinancial Services
ibsintelligence.com· 6 days ago

FinTech bets big on AI agents, but deployment gaps persist

By Puja Sharma - Over 88% of UK enterprises are already deploying AI agents, but over half face barriers to further deployments – including skills gaps, business cases, and lack of quality data or technology partner - Speed and customer experience are seen as primary benefits of AI agent adoption - Around 73% of UK enterprises say the ability to deploy AI agents wi

Editor's pickTransportation & Logistics
Daily Brew· 7 days ago

Uber, Nuro, San Francisco testing premium robotaxi service

Uber and Nuro are collaborating on a pilot program in San Francisco to test a new premium robotaxi service.

Editor's pickTransportation & Logistics
Theregister· 5 days ago

Waymo's self-driving cars face their toughest test yet: London

Google sibling takes on the Big Smoke – with a human hand on the wheel

Waymo has started letting its software take the wheel on London streets, with trained specialists on standby as it gradually accelerates toward a fully driverless ride-hailing launch.

Editor's pickProfessional Services
MIT Technology Review· 5 days ago

Building trust in the AI era with privacy-led UX

The practice of privacy-led user experience (UX) is a design philosophy that treats transparency around data collection and usage as an integral part of the customer relationship. An undertapped opportunity in digital marketing, privacy-led UX treats user consent not as a tick-box compliance exercise, but rather as the first overture in an ongoing customer relationship.…

Editor's pickTechnology
medium.com· 6 days ago

Navigating AI Technical Debt. Why the rush to ship Generative AI is…

Why the rush to ship Generative AI is creating a multi-generational maintenance nightmare. We’ve all seen this movie before. A new technology arrives, the FOMO (Fear Of Missing Out) reaches a fever pitch, and suddenly, best practices are treated like optional suggestions. In the software world, we call this Technical Debt. It’s like a high-interest credit card: you get the features now, but you’ll be paying for them — with interest — for the rest of the product’s life. But with AI, we aren’t just dealing with messy code. We’re being squeezed by a pincer movement of reckless code engineering and unstable data management. As an architect, I’ve spent my career cleaning up legacy m

Editor's pick
michaelparekh.substack.com· 6 days ago

AI: Three Things can be true about AI at the same time? AI-RTZ 1056

...the realities around AI ‘power users, doubters and resisters’ (Apr 14, 2026) I’ve long said in these pages that ‘two opposite things can be true at the same time’. Especially in technology waves. And this AI Tech Wave may be expanding that to three. Let me explain. Axios explains the current reality in AI as [“The Three Re

Editor's pickTechnology
itwire.com· 6 days ago

77% of global enterprises shift AI strategies from efficiency to growth

Tuesday, 14 April 2026 12:34 By Thoughtworks GUEST RESEARCH: - Nearly half of business leaders expect more than 15% revenue uplift from AI within 10 years - Agentic AI is emerging as a priority with 35% of organisations calling it a top focus The era of efficiency is over and the growth race has begun. According to new research from Thoughtworks, a global technology consultancy that integrates design, engineering and AI to drive digital innovation, 77% of business leaders have shifted their AI strategies from cost savings to growth and innovation. Among large enterprises the shift rises to 92%. The research surv

AI Applications9 articles
Editor's pickPharma & Biotech
reuters.com· 6 days ago

Amazon launches AI research tool to speed early-stage drug discovery

April 14 (Reuters) - Amazon's (AMZN.O) cloud unit on Tuesday launched Amazon Bio Discovery, an ar

Editor's pickManufacturing & Industrials
reuters.com· 5 days ago

Korean AI chip startup DEEPX, Hyundai work on robots powered by ...

SEOUL, April 15 (Reuters) - South Korean AI chip startup DEEPX will expand its partnership with Hyundai Motor Group to develop a computing platform for generative AI robots using its second generation of low-power chips, its top executive said, as it gets set for an

Editor's pickPAYWALLManufacturing & Industrials
washingtonpost.com· 6 days ago

Tesla leader believes Shanghai factory operations will play a role in ...

By Andy Wong and Kanis Leung | AP SHANGHAI — A Tesla Inc. leader said Tuesday he believes its Shanghai factory operations will help resolve the challenges in achieving mass production of the company’s humanoid robots as the U.S. electric vehicle giant pivots to robotics.

Editor's pickHealthcare
Arxiv· 5 days ago

A longitudinal health agent framework

arXiv:2604.12019v1 Announce Type: new Abstract: Although artificial intelligence (AI) agents are increasingly proposed to support potentially longitudinal health tasks, such as symptom management, behavior change, and patient support, most current implementations fall short of facilitating user intent and fostering accountability. This contrasts with prior work on supporting longitudinal needs, where follow-up, coherent reasoning, and sustained alignment with individuals' goals are critical for both effectiveness and safety. In this paper, we draw on established clinical and personal health informatics frameworks to define what it would mean to orchestrate longitudinal health interactions with AI agents. We propose a multi-layer framework and corresponding agent architecture that operationalizes adaptation, coherence, continuity, and agency across repeated interactions. Through representative use cases, we demonstrate how longitudinal agents can maintain meaningful engagement, adapt to evolving goals, and support safe, personalized decision-making over time. Our findings underscore both the promise and the complexity of designing systems capable of supporting health trajectories beyond isolated interactions, and we offer guidance for future research and development in multi-session, user-centered health AI.

Editor's pickTechnology
cheekypint.substack.com· 6 days ago

The world of voice AI, with Mati Staniszewski of ElevenLabs

Apr 14, 2026 Mati Staniszewski is the co-founder of ElevenLabs, the research company making audio accessible across languages and voices. He sits down with John to discuss the “voice Turing Test” and why AI has conquered text but still struggles with conversational speech. They discuss the future of human-computer interaction, including why we still can’t get our phones to read a PDF properly and the massive potential for voice agents in everything from farming to healthcare. Mati also opens up about ElevenLabs’ rapid ascent to an $11 billion valuation and gives a behind-the-scenes look at how Ukraine is using their tech for digital government services. Timestamps: 00:00:27 How audio models work · 00:08:52 ElevenLabs busin

Editor's pickGovernment & Public Sector
Daily Brew· 5 days ago

IRS considers using Palantir for smarter audits

The IRS is exploring the use of Palantir's technology to help decide which taxpayers should be flagged for audits.

Editor's pickDefense & National Security
Bebeez· 5 days ago

Finnish deeptech startup Kelluu secures €15 million Series A to scale autonomous airship surveillance platform

– Advertisement – Finnish deeptech startup Kelluu has raised €15 million in a Series A round led by the NATO Innovation Fund, marking the fund’s first investment in a Finnish company, with participation from Keen Venture Partners, Gungnir Capital, and Tesi. The company develops and operates autonomous, hydrogen-powered airships designed to provide persistent intelligence, surveillance, […]

Editor's pickProfessional Services
Arxiv· 5 days ago

Narrative-Driven Paper-to-Slide Generation via ArcDeck

arXiv:2604.11969v1 Announce Type: new Abstract: We introduce ArcDeck, a multi-agent framework that formulates paper-to-slide generation as a structured narrative reconstruction task. Unlike existing methods that directly summarize raw text into slides, ArcDeck explicitly models the source paper's logical flow. It first parses the input to construct a discourse tree and establish a global commitment document, ensuring the high-level intent is preserved. These structural priors then guide an iterative multi-agent refinement process, where specialized agents iteratively critique and revise the presentation outline before rendering the final visual layouts and designs. To evaluate our approach, we also introduce ArcBench, a newly curated benchmark of academic paper-slide pairs. Experimental results demonstrate that explicit discourse modeling, combined with role-specific agent coordination, significantly improves the narrative flow and logical coherence of the generated presentations.

Editor's pickGovernment & Public Sector
Fortune· 5 days ago

Exclusive: Jeremy Renner bets on the tech that could have saved his life faster. ‘There’s 150 people that are responsible for me not dying’

Renner says he hates AI himself, but then again, he’s an actor: “When AI is used as a tool ... this is the most powerful, most magnificent tool.”

Geopolitics

9 articles
AI Geopolitics7 articles
Editor's pickGovernment & Public Sector
Guardian· 6 days ago

China now the ‘good guy’ on AI as Trump takes ‘wild west’ approach, MPs told

Experts say China is backing attempts at global governance, while US has set up race between profit-hungry companies China is now the “good guy” on AI rather than Donald Trump’s US, where the technology is being pursued in a dangerous “wild west” manner, a former UN and UK government adviser has told MPs. Prof Dame Wendy Hall, who was a member of the UN’s AI advisory board and co-wrote a review of

Editor's pickDefense & National Security
Exponentialview· 5 days ago

🔮 The classified frontier

The US won’t lose control of frontier AI – it will choose who else gets access to it.

Editor's pickGovernment & Public Sector
Theregister· 5 days ago

UK told its Big Tech habit is now a national security risk

Open Rights Group says years of reliance on US giants have left Britain exposed Britain has spent years wiring its public sector into US Big Tech, and a new report says that dependence could quickly become a national security headache.…

Editor's pickTechnology
medium.com· 6 days ago

AI Geopolitics: Infrastructure, Sovereignty, and Allied Enforcement

In the weekly sensemaking memo at www.geopoliticsofai.com we noted that AI geopolitics is now centered on infrastructure governance (compute, power, hosting) rather than model releases. US leadership relies on allied enforcement and domestic capacity, while Europe gains hosted capacity without full sovereignty, and India actively accumulates sovereign compute. Executive Takeaways - The week ending 2026–04–10 confirmed that the decisive arena in AI geopolitics is no longer model release cycles but [infrastructure governance](https://www.geopoliticsofai.com/research/ai-power-isn-t-just-about-better-models-it

Editor's pickTechnology
Fortune· 5 days ago

Silicon Valley has no monopoly on AI brainpower. That’s why Demis Hassabis is very happy to stay in London

Silicon Valley may lead the AI race, but Demis Hassabis is betting London can rival it—proving world-class talent, and the future of AI, aren’t confined to one place.

Editor's pickTechnology
@emollick· 6 days ago

The idea of sovereign frontier models is only viable as long as the main Chinese model makers keep shipping open weights models that anyone can build on. I am not sure how long that will continue, but if it ends, there seem to be few viable alternatives (until LLMs hit a wall)

The idea of sovereign frontier models is only viable as long as the main Chinese model makers keep shipping open weights models that anyone can build on. I am not sure how long that will continue, but if it ends, there seem to be few viable alternatives (until LLMs hit a wall)

Editor's pickTechnology
@emollick· 6 days ago

And this is a very generous definition of notable. If we are talking frontier models, only the US and China are even in the race. And that obscures the fact that the Big Three US labs really do seem to be holding a durable lead, albeit one measured in months, not years.

And this is a very generous definition of notable. If we are talking frontier models, only the US and China are even in the race. And that obscures the fact that the Big Three US labs really do seem to be holding a durable lead, albeit one measured in months, not years.

AI Regional Development2 articles
Best Practice AI© 2026 Best Practice AI Ltd. All rights reserved.

Get the full executive brief

Receive curated insights with practical implications for strategy, operations, and governance.

AI Daily Brief — leaders actually read it.

Free email — not hiring or booking. Optional BPAI updates for company news. Unsubscribe anytime.


No spam. Unsubscribe anytime. Privacy policy.