AI Intelligence Brief

Wed 29 April 2026

Daily Brief — Curated and contextualised by Best Practice AI

107 articles
Editor's Highlights

China Blocks Meta, Microsoft Frees OpenAI, and Big Tech Faces AI Reckoning

TL;DR China has forced Meta to unwind its $2 billion acquisition of Manus, marking a significant setback for Chinese tech entrepreneurs. Microsoft and OpenAI have ended their exclusive partnership, allowing OpenAI to expand its cloud services. Big Tech is set to invest $600 billion in AI this year, testing investor patience. Meanwhile, AI-driven efficiencies are putting pressure on revenue for India's tech services firms, and global regulators lag behind banks in AI oversight.

Editor's highlights

The stories that matter most

Selected and contextualised by the Best Practice AI team

2 of 107 articles

Economics & Markets

23 articles
AI Market Competition · 7 articles
AI Startups & Venture · 5 articles

Labor, Society & Culture

13 articles
AI & Employment · 6 articles
AI Skills & Education · 1 article
Editor's pick · Education
arXiv · Today

Coasting Through Class: Learning Opportunity Loss from Practice Avoidance During Individual Seatwork

arXiv:2604.25014v1 Abstract: Measures of disengagement provide insights into unproductive use of learning opportunities. Although measures of active disengagement, such as gaming the system and mind-wandering, are well studied, loss of practice time due to outright task avoidance remains relatively understudied. The current study addresses this gap by extending existing within-task measures (idle time) with two new session-level measures (delayed start and early stop) to capture loss of practice time due to task avoidance. We characterize the combined lost time as coasted time and the associated behavior as coasting behavior. Using ASSISTments logs (N = 1,425), we find that students dedicate only 40% of available classwork time to math practice and coast through the remaining 60%. Of the coasted time, 36% resulted from delayed starts, 2% from mid-practice idling, and 62% from stopping early. Delayed start and early stop showed moderate temporal stability (G = 0.73 and 0.71, respectively), suggesting that coasting is a consistent behavioral pattern. Even after excluding early stops attributable to assignment completion (i.e., early stop = 0), coasted time remained substantial at 32%. While we observe significant differences in coasting by gender and IEP status, we do not observe them by other demographic factors or school locale. Critically, students who continued working beyond the first assignment completion ("extra effort") performed significantly better on standardized tests. For research, coasting offers a new lens on opportunity loss by combining session-level disengagement with within-task disengagement. For practitioners, our results highlight the need for platform affordances that support sustained engagement and more productive use of available practice time.
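The paper's decomposition of lost practice time can be made concrete with a short sketch. The function and session values below are hypothetical, not taken from the ASSISTments data or the authors' code; they simply show how delayed start, mid-practice idling, and early stop combine into "coasted time":

```python
# Illustrative sketch (not the paper's code): split a seatwork session's
# available time into its three "coasting" components, per the definitions
# above. All timestamps are minutes from the start of class and hypothetical.

def coasting_components(session_start, first_action, last_action, session_end, idle_within):
    """Return (delayed_start, idle, early_stop) in minutes."""
    delayed_start = first_action - session_start   # time before the student begins
    early_stop = session_end - last_action         # time left unused at the end
    return delayed_start, idle_within, early_stop

# A hypothetical 40-minute seatwork session:
delayed, idle, early = coasting_components(
    session_start=0, first_action=9, last_action=25, session_end=40, idle_within=0.5
)
total_coasted = delayed + idle + early   # 24.5 of the 40 available minutes
```

Dividing each component by `total_coasted` gives the per-source shares the study reports (e.g. its 36% / 2% / 62% split across delayed starts, idling, and early stops).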

Technology & Infrastructure

28 articles
AI Agents & Automation · 10 articles
Editor's pick · Technology
Ethan Mollick · Yesterday

Competitive Analysis of Enterprise AI Agent Integration in Productivity Software

A critique of Microsoft's Outlook agent implementation suggests that current enterprise AI integrations often suffer from poor UX and limited cross-platform visibility. The analysis compares these shortcomings against more agile, third-party agentic alternatives.

Editor's pick · Professional Services
Ethan Mollick · Yesterday

Leveraging Real-Time Generative AI Coding for Accelerated Organizational Decision-Making

Integrating AI coding tools directly into team meetings can significantly compress project timelines and improve engagement. This approach shifts the focus from discussion to immediate, iterative prototyping during collaborative sessions.

Editor's pick · Technology
SiliconANGLE · Yesterday

Amazon Connect’s second act: From contact center to agentic AI suite - SiliconANGLE

The company loaded Connect up with ... pricing model to disrupt, and it rapidly gained traction. Today, it’s the primary customer experience platform for many major brands, including Capital One, Hilton Hotels, State Farm and Air Canada. AWS is repositioning the Connect brand as a family of agentic AI solutions that integrate into business workflows, ...

Editor's pick · Government & Public Sector
ZDNET · Yesterday

Over 80% of US government agencies already use AI agents - and it's only the beginning | ZDNET

Operational orchestration refers to agent-driven systems that coordinate multi-step workflows across departments, improving service delivery speed and scale. Citizen service delivery is enhanced with agents that can deliver proactive, context-aware, and personalized interactions. Agentic AI can also use synthetic data and model ...

Editor's pick · Manufacturing & Industrials
Supply Chain Dive · Yesterday

Amazon Web Services unveils agentic AI supply chain tool | Supply Chain Dive

Amazon Web Services today launched ... that aims to help businesses improve planning, data analysis and decision making. The product combines more than 25 specialized supply chain tools into a set of artificial intelligence agents, called “teammates,” to make calculations on behalf of the user, VP of AWS Supply Chain Ozgur Dogan told Supply Chain Dive. “Every variance, invisible patterns — the system learns and ...

Editor's pick · Technology
Presidio · Yesterday

Enterprise AI Adoption: What I Learned Building an Agentic AI Operating System - Presidio

Build faster with AI: turn meeting transcripts into working prototypes and scale adoption by focusing on shared context, not model choice.

Editor's pick · Professional Services
Daily AI News April 28, 2026: 65 Countries. One AI Brain. · Yesterday

How SAP Concur Automates Expense Reporting with Agentic AI

SAP Concur is utilizing agentic AI to streamline expense reporting, specifically focusing on automating the completion of incomplete receipt data.

Editor's pick · Manufacturing & Industrials
Daily Brew · Yesterday

Open source Xiaomi MiMo-V2.5 and V2.5-Pro are among the most efficient (and affordable) at agentic 'claw' tasks

Xiaomi's new open-source MiMo-V2.5 models are setting new benchmarks for efficiency and affordability in agentic robotic tasks.

Editor's pick
Daily AI News April 28, 2026: 65 Countries. One AI Brain. · Yesterday

Learning to Orchestrate Agents in Natural Language with the Conductor

Researchers have introduced a 'Conductor' model trained via reinforcement learning to coordinate multiple AI agents using natural-language instructions, favoring decentralized orchestration.

Editor's pick · Financial Services
WIRED · Yesterday

The Bloomberg Terminal Is Getting an AI Makeover, Like It or Not | WIRED

Amid rapid enterprise growth, Anthropic ... for businesses to build AI agents with Claude. ... Researchers at the company found representations inside of Claude that perform functions similar to human feelings. ... A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey ...

AI Infrastructure & Compute · 5 articles
AI Models & Capabilities · 5 articles
Editor's pick · Technology
VentureBeat · Yesterday

American AI startup Poolside launches free, high-performing open model Laguna XS.2 for local agentic coding

The AI race lately has felt a bit like a game of tennis: first, Anthropic releases a new, pricey state-of-the-art proprietary model for general users (Claude Opus 4.7), then, a week or so later, its rival OpenAI volleys back with one of its own (GPT-5.5). And all the while, Chinese companies like DeepSeek and even Xiaomi are seeking to appeal to users by playing a different game: nearing the frontier, but with open licensing and far lower costs. So it's a big surprise when a new, affordable, highly performant open source contender from the U.S. emerges.

Today, we got one from the smaller, lesser-known U.S. AI startup Poolside, founded in San Francisco in 2023. The company launched its two new Laguna large language models, both of which offer affordable intelligence optimized for agentic workflows (AI that does more than just chat or generate content, but can, in this case, write code, use third-party tools, and take actions autonomously), as well as a new coding agent harness called (fittingly) "pool" and a new web-based, mobile-optimized agentic coding development and interactive preview environment, "shimmer," which lets you write code with the Laguna models on the go.

The new AI models that Poolside released today include:

- Laguna M.1: a proprietary 225-billion parameter Mixture of Experts (MoE) model with 23 billion active parameters. This flagship model is optimized for high-consequence enterprise and government environments, designed to solve complex, long-horizon software engineering problems that require maximum reasoning and planning capabilities.
- Laguna XS.2: an Apache 2.0 open-licensed 33-billion parameter MoE with 3 billion active parameters. Engineered for efficiency and community innovation, this model is designed for local agentic coding tasks and provides a versatile foundation for developers looking to fine-tune, quantize, or serve powerful agents on a single GPU.
In other words, developers can download and run Laguna XS.2 on their desktop or even laptop computers without an internet connection — completely private and secure. Notably, as mentioned above, only the smaller of the two models, XS.2, is available now under an open source Apache 2.0 license (on Hugging Face) — yet Poolside is offering even the larger M.1 for free temporarily through its API and third-party distribution partners OpenRouter, Ollama, and Baseten, making it easy for developers to test it out.

Also noteworthy: the two new Lagunas were trained from scratch — not fine-tuned or post-trained from Chinese giant Alibaba's Qwen series of base models, as some other U.S. labs have done lately (*cough cough* Cursor *cough*).

As Poolside wrote in a blog post today, it's spent the last few years "focused on serving our government and public sector clients with capable models deployable into the highest-security environments," yet is now going open source "to support builders and the wider research community."

When I asked on X why government agencies would seek to use Poolside instead of leading proprietary U.S. labs like Anthropic, OpenAI and Google, Poolside post-training engineer George Grigorev told me in a reply that: "we think that we can be faster to deploy our models to enterprise customers, and we can literally ship weights in fully isolated environments on-prem, so it can work offline. which might be critical for gov/public sectors :) but ofc anthropic enterprise is hard to beat"

How Poolside's Laguna M.1 and Laguna XS.2 were trained

Poolside constructs its AI models within a specialized digital environment called the "Model Factory". At the heart of this process is Titan, the company's powerful internal software that serves as the "furnace" for training. To help the AI learn as efficiently as possible, Poolside uses a unique tool called the Muon optimizer.
Think of Muon as a high-speed tutor; it helps the model master new information approximately 15% faster than standard industry methods, a critical gain when training at the 30-trillion-token scale. It achieves this by ensuring that every update to the model's "brain" is mathematically balanced and pointing in the right direction, which prevents the AI from getting confused or stuck during its intensive training sessions.

The information used to train these models—a staggering 30 trillion "tokens" or pieces of data—is carefully selected using a system called AutoMixer. Rather than just feeding the AI everything it finds on the internet, AutoMixer leverages a "swarm" of sixty proxy models run on different data mixes to determine scientifically which combination of code, math, and general web data produces the best reasoning capabilities. In this way, it acts like a master chef, testing thousands of different "recipes" to find the perfect balance. While much of this data comes from the public web, about 13% of it is "synthetic data": high-quality, custom-made practice material created by other AIs to teach the models specific skills that are difficult to find in the real world.

Once the model has finished its basic "schooling," it enters a virtual gym for Reinforcement Learning. In this stage, the AI practices solving real software engineering problems in a safe, isolated digital playground. It learns through trial and error, receiving a "reward" or positive signal every time it successfully fixes a bug or writes a working piece of code. This constant cycle of practice and feedback is what transforms the AI from a simple text generator into a capable "agent" that can plan and execute complex, multi-step projects just like a human software engineer.

While M.1 represents the peak of Poolside's current research, the smaller Laguna XS.2 may be the more disruptive entry.
At just 33 billion total parameters (3 billion activated), XS.2 is a "second-generation" MoE model that incorporates everything the team learned from training M.1.

Benchmarks show Poolside's Laguna models punch far above their weight class

Laguna M.1 reached 46.9% on SWE-bench Pro—a benchmark designed to test an AI's ability to solve real-world software issues—nearing the performance of the far-larger Qwen-3.5 and DeepSeek V4-Flash. Despite being a fraction of the size, Laguna XS.2 achieves a 44.5% score on the same benchmark, nearly matching its larger sibling.

On the SWE-bench Verified track, M.1 scored 72.5%, outperforming the dense Devstral 2 (72.2%) but trailing Claude Sonnet 4.6, which leads the category at 79.6%. These results highlight M.1's specialization in long-horizon software tasks, particularly those involving complex planning across interconnected files.

The smaller Laguna XS.2 exhibits remarkable efficiency, nearly matching the performance of its much larger sibling on high-consequence tasks. Despite having only 3B active parameters, XS.2 surpasses Claude Haiku 4.5 (39.5%) and the significantly larger Gemma 4 31B dense model (35.7%) on SWE-bench Pro. In terminal-based reasoning, XS.2's 30.1% on Terminal-Bench 2.0 also edges out Haiku 4.5's 29.8%, although it remains behind specialized "nano" models such as GPT-5.4 Nano, which reached 46.3% on the same benchmark.

Collectively, these benchmarks suggest that Poolside's focus on agentic RL and synthetic data curation has allowed its smaller models to "punch up" into weight classes typically reserved for far denser architectures. While top-tier proprietary models like Claude Sonnet 4.6 maintain a lead in overall success rates, the Laguna family—particularly the open-weight XS.2—offers a competitive alternative for developers who prioritize local execution and customizable agent workflows.
All benchmarking was conducted using the Harbor Framework with sandboxed execution, ensuring that the results reflect the models' ability to function in realistic, resource-constrained environments.

Running Laguna XS.2 locally

To run the Laguna XS.2 (33B) model locally, your hardware must accommodate its 33 billion total parameters. On Apple Silicon, the baseline requirement is 36 GB of unified memory. For PC and Linux users, while the standard weights would typically require over 60 GB of VRAM, the model's support for 4-bit quantization (Q4) allows it to run on consumer-grade GPUs with at least 24 GB to 32 GB of VRAM, such as the newly released RTX 5090. Storage is also a factor; you should reserve at least 70 GB for the full model or roughly 20–35 GB for a compressed version suitable for local "agent" tasks.

For the most seamless experience, Poolside recommends utilizing Ollama or its own terminal-based agent, pool, which are designed to manage the model's native reasoning and tool-calling capabilities on consumer hardware. You can find the full technical requirements, including specific quantization configurations and code execution sandboxing details, on the official Hugging Face model page and the Poolside release blog.

Some sample suggested hardware is listed below:

Mac
- MacBook Pro (14-inch or 16-inch): look for models equipped with the M5 Max chip, which supports a starting configuration of 36 GB of unified memory. While the M5 Pro is available, you would need to custom-configure it to exceed its base memory to meet the 36 GB threshold.
- Mac Studio / Mac Mini: a Mac Mini (M4 or M5 Pro) configured with at least 48 GB or 64 GB of RAM is an excellent desktop alternative.
- Not suitable: the "MacBook Neo". Released in early 2026 as a budget-friendly option, it is capped at 8 GB of non-upgradable memory, which is insufficient for a 33B parameter model.
PC
- Single-GPU setup: the NVIDIA GeForce RTX 5090 is the premier choice for 2026, offering 32 GB of GDDR7 VRAM, which can handle the Laguna XS.2 at high speeds (approximately 45 tokens/sec) using Q4 quantization.
- Pro-grade setup: for professional developers running complex, long-horizon agents, the RTX PRO 6000 Blackwell (96 GB VRAM) or a dual RTX 5090 configuration allows the model to run without any compression loss.
- Minimum PC spec: an RTX 4090 (24 GB) can run the model with heavier quantization, though performance may be slower during complex reasoning tasks.

pool (agent) and shimmer (IDE)

Models are only as useful as the environments they inhabit, and Poolside has released two "preview" products to house the Laguna series: pool and shimmer.

pool is a terminal-based coding agent designed for the developer's local environment. It acts as an Agent Client Protocol (ACP) server, the same harness the team uses internally for reinforcement learning (RL) training. By bringing the researchers' own tools to the general public, Poolside is effectively inviting the developer community to participate in the "real-world gym" that trains their future models.

shimmer represents a vision for the cloud-native future of development. It is an instant-on Virtual Machine (VM) sandbox where developers can iterate on web apps, APIs, and CLIs in seconds. Unlike traditional integrated developer environments (IDEs) such as Microsoft Visual Studio, shimmer integrates the Poolside Agent directly into the workspace, allowing it to push changes to GitHub or import existing repositories with ease.

Perhaps the most surprising feature of shimmer is its portability. Poolside Founding Designer Alasdair Monk shared a demonstration showing shimmer running entirely on a smartphone. In the demo, a split-screen interface shows the Poolside Agent generating a "Happy New Year 2026!" animation while a dev environment runs below.
As Monk noted, it offers an instant-on VM with the Poolside Agent in split screen and a full dev environment on a mobile device. This suggests a future where high-consequence engineering isn't tethered to a desktop, but can happen wherever an engineer has a screen.

Why release Laguna XS.2 as Apache 2.0 open weights?

The most significant strategic move in this release is the licensing of Laguna XS.2. Poolside has released the weights of XS.2 under the Apache 2.0 license, a highly permissive license that allows users to use, distribute, and modify the software for any purpose, including commercial use, without royalties. This is a stark contrast to the "closed" models of many competitors or even the more restrictive "open-ish" licenses used by some other labs.

Poolside's leadership is explicit about why it chose this path. Poolside's blog post states its conviction that "the West needs strong open-weight models" and that releasing the weights is the fastest way for the team to improve their work through community evaluation and fine-tuning. By putting the weights of a highly capable, 33B-parameter agentic model in the hands of researchers and startups, Poolside is positioning itself as a cornerstone of the open-AI ecosystem. While Laguna M.1 remains primarily behind an API, the open release of XS.2 ensures that Poolside's technology will be baked into the next generation of third-party tools.

Poolside's philosophy and approach

The core thesis behind Poolside's work is that software development serves as the ultimate proxy for general intelligence. Creating software requires long-horizon planning, complex reasoning, and the ability to manipulate abstract systems—all traits central to human cognition. While most current AI "agents" are restricted to tool-calling via pre-defined interfaces, Poolside's agents are designed to write and execute their own code to solve problems.
This shift from using tools to building systems marks a fundamental evolution in how AI interacts with the digital world. The team of roughly 60 people in the Applied Research organization spent three years and conducted tens of thousands of experiments to reach this point.

Their vision of AGI is not just about intelligence, but about "abundance for humanity". By focusing on software engineering—a domain with verifiable rewards like test passes and compilation results—they have created a self-improving feedback loop. As the team puts it, they are building a "fusion reactor" for data: extracting every last drop of intelligence from existing human knowledge while using RL to harvest the "wind energy" of new, fresh experiences.

Poolside's journey is just beginning, but the Laguna release sets a high bar for what "agentic" AI should look like in 2026. By combining frontier-level performance with a commitment to open weights and novel developer surfaces, they are charting a path to AGI that is as much about the way we build as about what we build. For the enterprise and the individual developer alike, the message is clear: the future of work is agentic, and the language of that future is code.
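The local-hardware figures in the article can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 2 bytes per parameter for FP16 weights and about 0.5 bytes per parameter at 4-bit quantization (approximations chosen by us, not Poolside's official numbers):

```python
# Rough memory math for a 33B-parameter model. Bytes-per-parameter values are
# illustrative approximations: ~2 bytes for FP16 weights, ~0.5 bytes at Q4.

def weight_footprint_gb(params_billions, bytes_per_param):
    """Approximate size of the raw weights in GiB."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

fp16_gb = weight_footprint_gb(33, 2.0)   # ~61 GB, matching the ">60 GB of VRAM" figure
q4_gb = weight_footprint_gb(33, 0.5)     # ~15 GB of weights alone; KV cache and
                                         # activations push practical needs to 24-32 GB
```

The gap between the ~15 GB of Q4 weights and the 24–32 GB the article recommends reflects runtime overhead (KV cache, activations, framework buffers), which grows with context length.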

Editor's pick · Technology
Daily Brew · Today

Enabling privacy-preserving AI training on everyday devices

Researchers are developing methods to enable AI model training directly on personal devices while maintaining user privacy.

Editor's pick · Technology
Daily Brew · Yesterday

New AI framework autonomously optimizes training data, architectures and algorithms — outperforming human baselines

A new AI framework has been developed that autonomously optimizes its own training data and architecture, surpassing human-designed baselines.

AI Security & Cybersecurity · 6 articles

Adoption, Deployment & Impact

20 articles
AI Adoption Barriers & Enablers · 9 articles
Editor's pick · Technology
VentureBeat · Yesterday

Mistral AI launches Workflows, a Temporal-powered orchestration engine already running millions of daily executions

Mistral AI, the Paris-based artificial intelligence company valued at €11.7 billion ($13.8 billion), today released Workflows in public preview — a production-grade orchestration layer designed to move enterprise AI systems out of proofs of concept and into the business processes that generate revenue. The product, which launches as part of Mistral's Studio platform, is the company's clearest articulation yet of a thesis that is quietly reshaping the enterprise AI market: that the bottleneck for organizations adopting AI is no longer the model itself, but the infrastructure required to run it reliably at scale.

"What we're seeing today is that organizations are struggling to go beyond isolated proofs of concept," Elisa Salamanca, head of product at Mistral AI, told VentureBeat in an exclusive interview ahead of the launch. "The gap is operational. Workflows is the infrastructure to run AI systems reliably across business-critical processes."

The release arrives at a pivotal moment for both Mistral and the broader AI industry. The dedicated agentic AI market has been valued at approximately $10.9 billion in 2026 and is projected to reach $199 billion by 2034. Yet despite that staggering growth trajectory, industry research points to a stark reality: over 40% of agentic AI projects will be aborted by 2027 due to high costs, unclear value, and complexity. Mistral is betting that Workflows can help its enterprise customers avoid becoming one of those statistics.

Mistral's new orchestration layer separates execution from control to keep enterprise data private

At its core, Workflows provides a structured system for defining, executing, and monitoring multi-step AI processes — from simple sequential tasks to complex, stateful operations that blend deterministic business rules with the probabilistic outputs of large language models. Salamanca described Workflows as containing several key components.
The first is a development kit that allows engineers to build orchestration logic in just a few lines of Python code. "We have also been able to expose MCP servers," she explained, referring to the Model Context Protocol standard for connecting AI systems to external tools, "so that they can actually do this with agent authoring."

The second — and arguably more technically significant — component is an architecture that separates orchestration from execution. "We're decorrelating the orchestration from the execution," Salamanca said. "Execution can happen close to the customer's data — their critical systems — and orchestration can happen on the cloud or wherever they want to run it." This means the data never has to leave the customer's perimeter, a design decision with enormous implications for regulated industries where data sovereignty is non-negotiable. "Enterprises do not have to worry about us having access to the data," she added.

The third pillar is observability. According to Mistral's blog post announcing the release, every branch, retry, and state change within a workflow is recorded in Studio with native support for OpenTelemetry. Salamanca noted that this is not an afterthought: "You can easily see what decisions have been taken by the workflow, by the agent, and you can deep dive into where problems are happening."

Workflows is fully customizable across models — engineers can select which model handles which step and can inject arbitrary code, allowing them to blend deterministic pipelines with agentic sections. The system also supports connectors that integrate directly with CRMs, ticketing systems, support platforms, and other enterprise tools, with built-in authentication and secrets management.

Why Mistral chose a code-first approach over low-code drag-and-drop builders

Unlike some competitors offering drag-and-drop workflow builders, Mistral has deliberately targeted developers and engineers rather than business users.
"There are a couple of solutions out there that have click-and-drag, drag-and-drop solutions for workflows," Salamanca acknowledged. "This is not the approach that we've been taking. We've been really focused towards developers and critical systems that will not scale if you're doing these drag-and-drop workflows."

The decision is part of a broader philosophy at Mistral: that enterprise AI systems handling mission-critical operations — cargo releases, compliance reviews, financial transactions — require the precision and version control that only code can provide. Business users are not excluded from the picture, but their role is downstream. Once engineers write a workflow in Python, it can be published to Le Chat, Mistral's chatbot platform, so anyone in the organization can trigger it. Every step remains tracked and auditable in Studio.

Under the hood, Workflows runs on Temporal's durable execution engine — a platform whose $5 billion valuation reflects how its durable execution capabilities, originally built for cloud workflow orchestration, have become essential infrastructure for AI agents requiring reliable, long-running, stateful processes. Temporal's customers include OpenAI, Snap, Netflix, and JPMorgan Chase, and its technology powers orchestration at companies like Stripe and Salesforce.

Mistral extended Temporal's core engine for AI-specific workloads by adding streaming, payload handling, multi-tenancy, and observability that the base engine does not provide out of the box. "Workflows is built on top of Temporal," Salamanca confirmed. "We added all the AI requirements to make these AI workflows reliable. It provides out of the box durability, retries, state management. Whenever there's a failure, it starts again wherever it stopped." Originally spun out of Uber's Cadence project, Temporal transparently handles retries, state persistence, and timeouts, providing durable execution across failures.
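The durable-execution behavior described here, checkpointing each completed step so a failed run resumes where it stopped rather than starting over, can be sketched in plain Python. This is a conceptual toy, not Temporal's or Mistral's actual SDK; names like `durable_run` and the in-memory `checkpoints` store are invented for illustration:

```python
# Conceptual sketch of durable execution (not the Temporal SDK): each completed
# step's result is checkpointed, so after a crash the run resumes at the failed
# step instead of re-running finished ones.

checkpoints = {}  # stands in for a persisted state store

def durable_run(run_id, steps, inputs):
    state = checkpoints.setdefault(run_id, {"done": [], "value": inputs})
    for name, fn in steps:
        if name in state["done"]:
            continue                    # already completed before the failure
        state["value"] = fn(state["value"])
        state["done"].append(name)      # checkpoint after each step
    return state["value"]

calls = []
def extract(x):
    calls.append("extract")
    return x + ["extracted"]

def flaky_classify(x):
    calls.append("classify")
    if len(calls) == 2:                 # first attempt fails mid-workflow
        raise RuntimeError("worker crashed")
    return x + ["classified"]

steps = [("extract", extract), ("classify", flaky_classify)]
try:
    durable_run("run-1", steps, [])     # crashes during the classify step
except RuntimeError:
    pass
result = durable_run("run-1", steps, [])  # resumes: extract is not re-run
```

The key property is visible in `calls`: the extract step runs exactly once across both attempts, which is what "it starts again wherever it stopped" means in practice.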
In late 2025, Temporal joined the newly formed Agentic AI Foundation as a Gold Member and announced an official OpenAI Agents SDK integration. By building on this infrastructure rather than creating a proprietary alternative, Mistral inherits battle-tested reliability while focusing its own engineering efforts on the AI-specific layer that sits above it.

From cargo ships to KYC reviews, customers are already running millions of daily executions

Mistral is not launching Workflows as a concept — the company says customers are already running the product in production, processing millions of executions daily across three primary use cases.

The first is cargo release automation in the logistics sector. Global shipping still runs on paperwork, and a single cargo release can involve customs declarations, dangerous goods classifications, safety inspections, and regulatory checks spanning multiple jurisdictions. Salamanca described the scope of the problem: "Their global shipping today runs on paperwork. They have to involve customs declaration, Dangerous Goods classification, safety inspections, regulatory checks, and Workflows is now powering that with our models and business rules inside."

Critically, the system keeps humans in the loop at the right moments. According to Mistral's blog, the human approval step in a workflow is a single line of code — wait_for_input() — that pauses the workflow indefinitely with no compute consumption, notifies the reviewer, and resumes exactly where it left off once approval is given. "Humans are still in the loop, but they're in the loop at the right time," Salamanca said. "They just get the validation — I don't have to go into multiple tools — and the shipment gets released."

The second production use case is document compliance checking for financial institutions, specifically Know Your Customer reviews. These reviews are manual, repetitive, and traditionally require hours of analyst time per case.
Salamanca said Workflows now processes these reviews in minutes and provides outputs in an auditable manner — a requirement for meeting regulatory obligations.

The third example involves customer support in the banking sector. "You'd have millions of users actually asking to have credit cards blocked, or feedbacks on their account situation, on their credit feedbacks," Salamanca said. With Workflows, incoming support tickets are analyzed, categorized by intent and urgency, and routed automatically. Each routing decision is visible and traceable in Studio, and when the system gets a categorization wrong, the team can correct it at the workflow level without retraining the model.

How Workflows fits into Mistral's three-layer enterprise AI platform strategy

Workflows does not exist in isolation. It is the middle layer of a three-part enterprise platform that Mistral has been assembling at a rapid clip throughout 2026. At the bottom sits Forge, the custom model training platform Mistral launched in March at Nvidia's GTC conference. Forge allows organizations to build, customize, and continuously improve AI models using their own proprietary data. At the top sits Vibe, Mistral's coding agent platform that provides the user-facing interaction layer — available on web, mobile, or desktop.

Salamanca connected the three explicitly: "We just released Forge. It enables you to create your own models. But the question is, how do you put these models to do valuable work for your enterprise? That's where Workflows comes in, because this is the orchestration piece — how you blend in deterministic rules and agentic capabilities. And then if you really want to have your end users interact with these AI patterns, it's where Vibe comes into play."

Forge is already seeing strong traction, Salamanca said, across two distinct patterns of enterprise demand.
"First, they wanted to really build completely dedicated models to solve unique problems — transformers-based architecture for time series in the financial sector, adding new types of modalities to the LLMs," she explained. "And the second motion was about customers with really specific tasks they want to solve. Reinforcement learning really caught their attention as to how they can use Forge and Forge RL to actually have models do these tasks very well."

This layered architecture — model customization, workflow orchestration, and end-user interfaces — positions Mistral as something more ambitious than a model provider. It is building a full-stack enterprise AI platform, a strategy that pits it directly against not just other AI labs like OpenAI and Anthropic, but also against the hyperscale cloud providers. The company's product portfolio now ranges, as Salamanca put it, "from compute to end-user interfaces," including data centers in Europe, document processing with its OCR model, and audio capabilities through its Voxtral models.

Mistral's aggressive scaling campaign and the $14 billion valuation powering it

The Workflows launch comes as Mistral executes one of the most aggressive scaling campaigns in the history of the European technology industry. The French AI startup has increased its revenue twentyfold within a year, with co-founder and CEO Arthur Mensch putting the company's annualized revenue run rate at over $400 million, compared to just $20 million the previous year. The Paris-based company aims to achieve recurring annual revenue of more than $1 billion by year-end.

The company's fundraising trajectory has been equally dramatic. Bloomberg reported in September 2025 that the company was finalizing a €2 billion investment valuing it at €12 billion ($14 billion); the round Mistral ultimately announced that month was a €1.7 billion ($1.9 billion) Series C at a €11.7 billion ($12.8 billion) valuation.
ASML led the round and contributed €1.3 billion, a landmark investment that aligned chip manufacturing expertise with frontier AI development and underscored European industrial capital's commitment to building a sovereign AI ecosystem. Mistral then secured $830 million in debt in March 2026 to buy 13,800 Nvidia chips for a new data center near Paris.

The financial picture illustrates why Workflows matters strategically. Mistral's revenue growth is being driven primarily by enterprise adoption, with approximately 60% of revenue coming from Europe, according to CEO Mensch's public statements. Those enterprise customers are not buying Mistral's models for casual chatbot applications — they are deploying them in regulated, mission-critical environments where reliability and data sovereignty are table stakes. Workflows gives those customers the production infrastructure they need to actually deploy AI systems that matter.

In May 2025, Mistral released Mistral Medium 3, priced at $0.40 per million input tokens and $2 per million output tokens. The company said clients in financial services, energy, and healthcare had been beta testing it for customer service, workflow automation, and analyzing complex datasets. That model now becomes one of many that can be plugged into Workflows, creating a flywheel where better models drive more workflow adoption, which in turn drives more inference revenue.

Where Mistral's orchestration play fits in an increasingly crowded competitive landscape

Mistral's entry into workflow orchestration arrives in a crowded field. AI orchestration platforms are quickly becoming the backbone of enterprise AI systems in 2026, and as businesses deploy multiple AI agents, tools, and LLMs, the need for unified control, oversight, and efficiency has never been greater.
Major cloud providers — Amazon with Bedrock AgentCore, Microsoft with Copilot Studio, Google with Vertex AI's agent tools, and IBM with WatsonX — all offer some form of workflow or agent orchestration. Open-source frameworks like LangChain, LlamaIndex, and Microsoft AutoGen provide developer-level building blocks. And dedicated orchestration startups are proliferating.

Mistral's differentiation rests on three pillars. First, vertical integration: because Workflows is native to Studio, the orchestration layer and the components it orchestrates — models, agents, connectors, observability — are built to work together, eliminating the integration tax that enterprises pay when stitching together disparate tools. Second, deployment flexibility: the split control-plane/data-plane architecture means customers in regulated industries can run execution workers in their own environments while still benefiting from managed orchestration. Third, data sovereignty: Mistral's European roots and infrastructure investments give it a natural advantage with organizations wary of routing sensitive data through U.S.-headquartered cloud providers — a concern that has intensified amid ongoing geopolitical tensions and growing European anxiety about relying on foreign providers for over 80% of digital services and infrastructure.

Still, the challenges are real. OpenAI and Anthropic both have significantly larger model ecosystems and developer communities. The hyperscalers control the cloud infrastructure where most enterprise workloads actually run. And the enterprise sales cycles for production-grade AI deployments remain long and complex, requiring deep technical integration work that even well-funded startups can struggle to staff.

What comes next for Workflows — and why Mistral thinks orchestration is the real AI battleground

Salamanca outlined three areas of near-term development.
First, Mistral plans to release a more managed version of Workflows that abstracts deployment logic for developers who don't need granular control over worker placement. "Whenever you want to have this flexibility, you can, but if you want to be able to have this on a managed infrastructure, even if it's running in your own VPC, this is something that we're adding," she said.

Second, the company intends to make Workflows accessible to business users, not just engineers. "With Vibe code, you can actually author a workflow. This can be executed at scale, and any end user, in the end, can actually do that with Workflows," Salamanca explained.

The third area is enterprise guardrails and safety controls for agentic applications — ensuring agents use the correct tools, run with appropriate permissions, and that administrators can enforce policies at scale. "Making sure that we have all these enterprise controls to be able to scale the authoring and the building of these workflows is something we're actively working on," she said.

The Python SDK for Workflows (v3.0) is now publicly available. Developers can try the product in Studio and access documentation and demo templates immediately. Mistral will host its inaugural AI Now Summit in Paris on May 27–28, where the company is expected to provide additional details on its platform roadmap.

For three years, the AI industry has been captivated by a single question: who can build the most powerful model? Mistral's Workflows launch suggests the company has moved on to a different question entirely — one that may prove far more consequential for the enterprises writing the checks. It's not about which model is smartest. It's about which one can actually show up for work.
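For readers who want a concrete picture of the orchestration pattern described in this piece (deterministic business rules wrapped around a model call, with every routing decision logged for audit), here is a generic Python sketch of the support-ticket triage flow. All names are invented for illustration; Mistral's documentation and demo templates describe the actual SDK.

```python
# Illustrative only: a toy triage workflow blending deterministic rules
# with a stubbed model call. Invented names; not Mistral's actual SDK.

def classify_intent(ticket_text):
    """Stand-in for a model call that labels intent and urgency."""
    text = ticket_text.lower()
    if "blocked" in text or "fraud" in text:
        return {"intent": "card_block", "urgency": "high"}
    return {"intent": "general_query", "urgency": "normal"}

# Deterministic routing table: corrections happen here, at the workflow
# level, without retraining the underlying model.
ROUTES = {
    ("card_block", "high"): "fraud_desk",
    ("general_query", "normal"): "support_queue",
}

def triage(ticket_text, trace):
    label = classify_intent(ticket_text)
    queue = ROUTES.get((label["intent"], label["urgency"]), "human_review")
    # Every decision is logged, so routing stays visible and traceable.
    trace.append({"ticket": ticket_text, "label": label, "queue": queue})
    return queue

trace = []
q1 = triage("My card was blocked abroad", trace)   # -> "fraud_desk"
q2 = triage("How do I update my address?", trace)  # -> "support_queue"
```

Because the routing table is plain data rather than model weights, a miscategorized ticket can be fixed by editing one entry, which is the workflow-level correction the article describes.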

Editor's pick · Transportation & Logistics
Daily Brew· Yesterday

Why supply chains are the proving ground for automation-led iPaaS

Supply chains are becoming the primary testing ground for automation-driven integration platforms.

Editor's pick · Education
Forbes· Yesterday

How Higher Ed Can Put The Student AI Bill Of Rights To Work

94% of higher ed staff use AI at work. 46% can't name a governing policy. The Student AI Bill of Rights closes the gap if institutions wire it into procurement.

Editor's pick · Healthcare
Joplin Globe· Yesterday

HTEC Research: Only One in Three Healthcare Organizations is Ready to Scale AI

PALO ALTO, Calif. (Business Wire), Apr 28, 2026.

Editor's pick · Professional Services
PR Newswire· Yesterday

Nearly 90 percent of companies believe people will determine AI success, but far fewer are investing in related people strategies, Inaugural Aon Study Finds

Aon plc (NYSE: AON), a leading global professional services firm, today released its inaugural Human Capital Trends Study, revealing a critical...

Geopolitics, Policy & Governance

23 articles
AI Geopolitics: 6 articles
AI Policy & Regulation: 11 articles
Editor's pick · Government & Public Sector
Reuters· Yesterday

EU countries, lawmakers fail to reach deal on watered-down AI rules

EU countries and European Parliament lawmakers failed to reach a deal on watered-down landmark artificial intelligence rules after 12 hours of negotiations on Tuesday and will resume talks next month.

Editor's pick · Technology
Top Daily Headlines· Today

Australia threatens tech companies with 2.25 percent tax if they don’t pay publishers

Australia is proposing a new tax on tech companies that fail to compensate publishers for content.

Editor's pick · Technology
Artificial Intelligence Newsletter | April 28, 2026· 2 days ago

Google sees EU publish draft plans on DMA compliance for Android's AI features

The European Commission has published draft instructions on how Android's AI-relevant features can comply with the EU's Digital Markets Act.

Editor's pick · Government & Public Sector
Daily Brew· Yesterday

CGI Unveils High-Security AI Platform in Finland, Boosting Data Sovereignty and Enterprise Efficiency

CGI has unveiled a high-security AI and data platform in Finland, aligning with KATAKRI standards to ensure data sovereignty for enterprises and the public sector.

Editor's pick · Technology
Artificial Intelligence Newsletter | April 29, 2026· Yesterday

Diverging US Senate bills show bipartisan momentum to rein in chatbots

US senators are pitching diverging approaches to chatbot regulation this week, with one bill that combines parental controls with app design standards and another that would bar youth from the platforms altogether.

Editor's pick · Technology
Artificial Intelligence Newsletter | April 29, 2026· Yesterday

Data moats, copyright gaps cloud India's AI plans for machine learning

Legal experts warn of a "legislative paradox" in India’s artificial intelligence regulation push, saying the proposed framework could favor large technology companies.

Editor's pick · Technology
Artificial Intelligence Newsletter | April 28, 2026· Yesterday

US appeal of loss to Anthropic stayed by US Ninth Circuit

The US Court of Appeals for the Ninth Circuit has stayed the government's appeal against Anthropic pending resolution of related proceedings in the D.C. Circuit.

Editor's pick · Technology
Artificial Intelligence Newsletter | April 28, 2026· 2 days ago

Musk, OpenAI engage in social media feud on eve of high-stakes US trial

Elon Musk and OpenAI executives exchanged barbs on X as jury selection began for a US trial regarding Musk's lawsuit against the company.

Editor's pick · Government & Public Sector
Artificial Intelligence Newsletter | April 29, 2026· Yesterday

Florida House again tanks AI proposal backed by Gov. DeSantis

Florida Governor Ron DeSantis’ last-ditch attempt to pass artificial intelligence regulations during a special legislative session died after the state House speaker refused to take up the proposal.

Editor's pick
AgFunderNews· Yesterday

A Brussels moat: Can European regulation work for founders?

Against this backdrop, startups ... fears of regulatory burden and turning compliance into a competitive advantage. To meet the ongoing due diligence requirements of the CSDDD, companies need AI-enabled systems that aggregate fragmented value-chain data, standardize it, and improve ...

Editor's pick · Government & Public Sector
Digital Watch Observatory· Yesterday

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The Global Dialogue on AI governance aims to create an inclusive forum to connect existing initiatives and address challenges in regulating AI.

Best Practice AI © 2026 Best Practice AI Ltd. All rights reserved.

Get the full executive brief

Receive curated insights with practical implications for strategy, operations, and governance.

AI Daily Brief — leaders actually read it.

Free email — not hiring or booking. Optional BPAI updates for company news. Unsubscribe anytime.

