AI Daily Brief

Technology

Latest Intelligence

The latest AI stories, analysis and developments relevant to Technology — curated daily by Best Practice AI.

Use Case Library

Use Cases for Technology

200 articles

Technology
Theregister· Today

Mythos found 271 Firefox flaws – but none a human couldn’t spot

Mozilla CTO says AI means developers finally have a chance to get on top of security The Mozilla Foundation has revealed it tested Anthropic’s bug-finding “Mythos” AI model and says the results represent a watershed moment for software defenders.…

Technology
Theregister· Today

Magnificent irony as Meta staff unhappy about running surveillance software on work PCs

Zuck reportedly needs to capture workers’ keystrokes to build AI Meta, the company built on watching everything its billions of users do online so it can keep them clicking on ragebait and targeted ads, is reportedly now installing surveillance software on employees’ work computers.…

Paywall · Technology
FT· Today

SpaceX obtains right to buy AI start-up Cursor for $60bn

Elon Musk’s rocket and AI conglomerate is seeking to catch up to rivals OpenAI and Anthropic

Technology
Daily Brew· Today

BlackBerry Partners with NVIDIA for Edge AI

BlackBerry partners with NVIDIA to integrate its QNX OS for Safety 8.0 with NVIDIA's IGX Thor platform, enhancing safety-critical edge AI in robotics and industrial settings. Despite recent stock momentum and quarterly revenue growth, analysts remain cautious, citing a high valuation and potential risks ahead.

Technology
Daily Brew· Today

Meta Will Record Employees' Keystrokes and Use for AI Training

Meta will start recording employees' keystrokes and use the data to train its AI models.

Technology
Daily Brew· Today

Google Expands AI-Powered Gemini in Chrome

Google expands its AI-powered Gemini in Chrome to seven new APAC markets, enhancing browser functionalities with features like scheduling and content summarization. The rollout prioritizes user privacy with on-device processing and regional adjustments, although Europe remains excluded due to regulatory constraints.

Technology
Daily Brew· Today

Novita AI Emerges as Top Inference Provider

Novita AI emerges as a leading AI inference provider, supporting over 120 LLMs via a unified API compatible with OpenAI and Anthropic. Trusted by industry giants, it ranks top in scientific reasoning and excels in advanced mathematics, offering competitive features without tiered restrictions.

Technology
Daily Brew· Today

Amazon to Invest Up to $25 Billion in Anthropic

Amazon is set to invest up to $25 billion in Anthropic, a significant move in the AI landscape.

Technology
Top Daily Headlines· Today

Apple Opportunity to Rediscover Humanity in AI

John Ternus can remake Apple the way it should have been, with a push toward AI

Technology
Top Daily Headlines· Today

AI-Assisted Intruders Breach Vercel

AI-assisted intruders pwned Vercel via OAuth abuse and a pilfered employee account, with stolen data being shopped for $2M

Technology
Daily Brew· Today

SpaceX Cuts Deal to Buy Cursor for $60 Billion

SpaceX has cut a deal to buy Cursor for $60 billion, a significant move in the AI and space industries.

Technology
Reuters· Today

This week at SpaceX: AI bets, losses and push for control

This week at SpaceX: AI bets, losses and push for control as mega IPO looms

Paywall · Technology
FT· Today

Anthropic investigating unauthorised access of powerful Mythos AI model

Start-up has limited the release of the new tool because of concerns about its hacking abilities

Technology
Theregister· Today

Apple has an opportunity to rediscover humanity in its push toward AI

John Ternus can remake Apple the way it should have been OPINION Apple's pending leadership transition affords the company a rare opportunity to return to its roots and once again serve as a source of inspiration instead of frustration.…

Technology
VentureBeat· Today

OpenAI's ChatGPT Images 2.0 is here and it does multilingual text, full infographics, slides, maps, even manga — seemingly flawlessly

It's been only a few months since OpenAI released its last big improvement to AI image generation in ChatGPT and through its application programming interface (API) — namely, a new image generation model known as GPT-Image-1.5, released in December 2025, which brought improved instruction following, colors, and lighting. Now, after weeks of testing, the company that kicked off the generative AI boom is unveiling a far more dramatic and even more impressive update: ChatGPT Images 2.0.

Technology
Substack· Today

AI Isn’t a Tool Anymore - Exploring ChatGPT

Recent developments in AI agents show systems that can execute multi-step workflows, interact with external tools, and continue tasks without constant user input (Stanford HAI, 2025).

Technology
Guardian· Today

Diplomatic duties for Tim Cook after stepping down as Apple CEO

John Ternus ascends the throne – but Cook will stay on to manage the tech giant’s foreign policy as executive chair. Hello, and welcome to TechScape. I’m your host, Blake Montgomery, US tech editor at the Guardian, writing to you after seeing The Jellicle Ball, a revival of Cats that I found fabulous and which the Guardian called “thrillingly new”. Shares in Allbirds surge after the maker of wool sneakers announces a pivot to AI.

Technology
Substack· Yesterday

Stop Writing User Stories the Old Way! (How AI Fixes Backlog Refinement)

In this video, we break down exactly why manual refinement is broken, and how to use AI to turn a messy stakeholder message into a sprint-ready, INVEST-compliant user story in under 10 minutes.

Technology
Reuters· Yesterday

Apple turns to hardware veteran Ternus as CEO to succeed Cook in AI age | Reuters

Despite introducing a form of AI to the public imagination in 2011 with Siri, Apple has not yet scored a hardware or software product hit centered on new AI technologies, while emerging rivals such as OpenAI's ChatGPT have attracted hundreds of millions of users.

Technology
LinkedIn· Yesterday

AI Engineer (Python Coding) - J2B Global LLC


Technology
Medium· Yesterday

Best AI Writing Tools: I Tested 8 AI Writers With Detection Software (Nov 2025) | Medium

Best AI writing tools 2025: Tested 8 AI writers with detection software. Claude vs ChatGPT vs Jasper. Honest reviews, no affiliates

Technology
Reddit· Yesterday

Artificial Intelligence (AI)

That's where you upload your corpus, your writing samples, your personality profile, your style rules, your domain expertise. By the time you're typing a message in a session, the heavy lift is already done. The AI isn't generating text in a void, it's reflecting back an organized version of what you've already fed it.

Paywall · Technology
FT· Yesterday

Apple CEO Tim Cook to hand over to John Ternus in September

Head of hardware to become next chief executive while Cook will become chair of the iPhone maker

Technology
Guardian· Yesterday

Mythos: are fears over new AI model panic or PR? – podcast

Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and exploiting vulnerabilities in software, and could pose a severe risk to economies, public safety and national security. But is this the whole story?

Paywall · Technology
Bloomberg· Yesterday

Jeff Bezos Nears $10 Billion Funding for AI Lab, FT Says

Jeff Bezos is close to finalizing a $10 billion funding round for his AI startup that’s developing models with the capability of understanding the physical world, the Financial Times reported.

Paywall · Technology
Bloomberg· Yesterday

‘Smooth Transition’ For Apple: Technalysis

Bob O'Donnell, President and Chief Analyst at Technalysis Research, says Apple will maintain its AI focus during the leadership change. He notes that incoming CEO John Ternus is well-positioned to build on his extensive hardware experience. (Source: Bloomberg)

Technology
Fortune· Yesterday

This Apple doesn’t fall far from the tree: Tim Cook is leaving at a peak and John Ternus is exactly the right CEO for the AI era

The market's muted reaction to Apple's leadership handoff is shortsighted. Cook's deliberate, Steve Jobs–approved succession puts Apple ahead of AI rivals.

Paywall · Technology
WSJ· Yesterday

Tim Cook to Step Down as Apple Names New CEO

John Ternus, head of the company’s hardware division, will succeed Cook, who will become executive chairman.

Technology · Labor & Society
Daily AI News April 21, 2026: Kimi K2.6: Open weights get heavyweight· Yesterday

Automated Alignment Researchers: Using large language models to scale scalable oversight

Research indicates that AI systems can assist in testing and governing advanced models, though the process remains dependent on human oversight.

Technology · Technology & Infrastructure
📲 Apple's next act· Yesterday

Expanding choice across the AI compute ecosystem

Arm's compute platform, including IP and silicon, provides partners with more options to build across cloud, edge, and physical devices. This choice drives competition and innovation across the technology stack.

Technology · Economics & Markets
Reuters· Yesterday

'AI or die' - but can Big Tech's profits survive the energy squeeze? | Reuters

Any downward shift in sky-high AI sentiment could have an outsized impact on the wider market.

Technology
Reuters· Yesterday

France's Atos Narrows Revenue Forecast

Atos, the French tech group, narrowed its annual revenue forecast after organic revenue fell 11% in the first quarter due to market softness.

Paywall · Technology
FT· Yesterday

Anthropic and Amazon agree $100bn AI infrastructure deal

Start-up behind Claude tool seeks to bulk up on chips and computing power after suffering outages this year

Technology
Import AI· Yesterday

Import AI 454

Import AI 454: Automating alignment research; safety study of a Chinese model; HiFloat4

Paywall · Technology
FT· Yesterday

Bezos’s Project Prometheus AI lab nears $38bn valuation in funding deal

Company code-named Project Prometheus is working on models for industrial applications

Technology
Reuters· Yesterday

Jeff Bezos' AI Lab Nears $38 Billion Valuation

Jeff Bezos' AI lab is nearing a $38 billion valuation in a funding deal, according to a report by the Financial Times.

Technology
TechCrunch· Yesterday

Anthropic Takes $5B from Amazon and Pledges $100B in Cloud Spending

Amazon has made another circular AI deal: It's investing another $5 billion in Anthropic. Anthropic has agreed to spend $100 billion on AWS in return.

Technology · Technology & Infrastructure
Artificial Intelligence Newsletter | April 21, 2026· Yesterday

South Korea, Nvidia co-host Nemotron Developer Days in Seoul

South Korea's Science and ICT Ministry and Nvidia are hosting a four-day event in Seoul to provide local developers with hands-on access to the Nemotron AI model and training in model optimization.

Technology
Towards Data Science· Yesterday

The LLM Gamble

Why it tickles your brain to use an LLM, and what that means for the AI industry

Technology
r/artificial· Yesterday

The AI Layoff Trap, The Future of Everything Is Lies

Hey everyone, I just sent the 28th issue of AI Hacker Newsletter, a weekly roundup of the best AI links and the discussions around them. Here are some links included in this email: Write less code, be more responsible (orhun.dev) -- comments The Future of Everything Is Lies, I Guess: …

Technology
ChinAI Newsletter· Yesterday

ChinAI #355

ChinAI #355: An Alliance for AI's 'Harness Era' -MiniMax + Alibaba Cloud

Technology
Top Daily Headlines· Yesterday

Context.ai Incident Compromises Customer Credentials

Next.js developer Vercel warns of customer credential compromise. Blames outfit called Context.

Technology
r/ArtificialInteligence· Yesterday

Does AI Really Make Everyone 'Good' at Design?

Saw this piece from Canva’s co-founder arguing that AI makes everyone 'good' at design, but 'greatness' still comes from human judgment and empathy. What I think is, it's not making everyone good. It's making it easier for people with zero design sense to generate something passable, which just raises the …

Technology
r/artificial· Yesterday

Honest Opinion About AI

I'm a developer by profession, and I've used AI to generate stuff that I know how to do myself and also stuff I have no idea about. Coding for my day to day using AI, I know exactly what to do and how to do it so I end up …

Technology
Siliconrepublic· Yesterday

Amazon investing up to $25bn in Anthropic AI infrastructure deal

This latest investment is in addition to the $8bn Amazon has already invested in the AI company.

Technology
VentureBeat· Yesterday

Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

A security researcher, working with colleagues at Johns Hopkins University, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Security Review action post its own API key as a comment. The same prompt injection worked on Google’s Gemini CLI Action and GitHub’s Copilot Agent (Microsoft). No external infrastructure required.

Aonan Guan, the researcher who discovered the vulnerability, alongside Johns Hopkins colleagues Zhengyu Liu and Gavin Zhong, published the full technical disclosure last week, calling it “Comment and Control.”

GitHub Actions does not expose secrets to fork pull requests by default when using the pull_request trigger, but workflows using pull_request_target, which most AI agent integrations require for secret access, do inject secrets into the runner environment. This limits the practical attack surface but does not eliminate it: collaborators, comment fields, and any repo using pull_request_target with an AI coding agent are exposed.

Per Guan’s disclosure timeline: Anthropic classified it as CVSS 9.4 Critical ($100 bounty), Google paid a $1,337 bounty, and GitHub awarded $500 through the Copilot Bounty Program. The $100 amount is notably low relative to the CVSS 9.4 rating; Anthropic’s HackerOne program scopes agent-tooling findings separately from model-safety vulnerabilities. All three patched quietly, and none had issued CVEs in the NVD or published security advisories through GitHub Security Advisories as of Saturday.

Comment and Control exploited a prompt injection vulnerability in Claude Code Security Review, a specific GitHub Action feature that Anthropic’s own system card acknowledged is “not hardened against prompt injection.” The feature is designed to process trusted first-party inputs by default; users who opt into processing untrusted external PRs and issues accept additional risk and are responsible for restricting agent permissions. Anthropic updated its documentation to clarify this operating model after the disclosure.

The same class of attack operates beneath OpenAI’s safeguard layer at the agent runtime, based on what their system card does not document — not a demonstrated exploit. The exploit is the proof case, but the story is what the three system cards reveal about the gap between what vendors document and what they protect. OpenAI and Google did not respond to requests for comment by publication time.

“At the action boundary, not the model boundary,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat when asked where protection actually needs to sit. “The runtime is the blast radius.”

What the system cards tell you

Anthropic’s Opus 4.7 system card runs 232 pages, with quantified hack rates and injection resistance metrics. It discloses a restricted model strategy (Mythos held back as a capability preview) and states directly that Claude Code Security Review is “not hardened against prompt injection.” The system card told readers the runtime was exposed; Comment and Control proved it. Anthropic does gate certain agent actions outside the system card’s scope — Claude Code Auto Mode, for example, applies runtime-level protections — but the system card itself does not document these runtime safeguards or their coverage.

OpenAI’s GPT-5.4 system card documents extensive red teaming and publishes model-layer injection evals, but not agent-runtime or tool-execution resistance metrics. Trusted Access for Cyber scales access to thousands. The system card tells you what red teamers tested; it does not tell you how resistant the model is to the attacks they found.

Google’s Gemini 3.1 Pro model card, shipped in February, defers most safety methodology to older documentation, a VentureBeat review of the card found. Google’s Automated Red Teaming program remains internal only. No external cyber program.
How the three vendors compare, dimension by dimension, across Anthropic (Opus 4.7), OpenAI (GPT-5.4), and Google (Gemini 3.1 Pro):

System card depth
Anthropic: 232 pages. Quantified hack rates, classifier scores, and injection resistance metrics.
OpenAI: Extensive. Red teaming hours documented. No injection resistance rates published.
Google: Few pages. Defers to the older Gemini 3 Pro card. No quantified results.

Cyber verification program
Anthropic: CVP. Removes cyber safeguards for vetted pentesters and red teamers doing authorized offensive work. Does not address prompt injection defense. Platform and data-retention exclusions not yet publicly documented.
OpenAI: TAC. Scaled to thousands. Constrains ZDR.
Google: None. No external defender pathway.

Restricted model strategy
Anthropic: Yes. Mythos held back as a capability preview. Opus 4.7 is the testbed.
OpenAI: No restricted model. Full capability released, access gated.
Google: No restricted model. No stated plan for one.

Runtime agent safeguards
Anthropic: Claude Code Security Review: the system card states it is not hardened against prompt injection. The feature is designed for trusted first-party inputs. Anthropic applies additional runtime protections (e.g., Claude Code Auto Mode) not documented in the system card.
OpenAI: Not documented. TAC governs access, not agent operations.
Google: Not documented. ART internal only.

Exploit response (Comment and Control)
Anthropic: CVSS 9.4 Critical. $100 bounty. Patched. No CVE.
OpenAI: Not directly exploited. Structural gap inferred from TAC design, not demonstrated.
Google: $1,337 bounty per Guan’s disclosure. Patched. No CVE.

Injection resistance data
Anthropic: Published. Quantified rates in the system card.
OpenAI: Model-layer injection evals published. No agent-runtime or tool-execution resistance rates.
Google: Not published. No quantified data available.

Baer offered specific procurement questions. “For Anthropic, ask how safety results actually transfer across capability jumps,” she told VentureBeat. “For OpenAI, ask what ‘trusted’ means under compromise.” For both, she said, directors need to “demand clarity on whether safeguards extend into tool execution, not just prompt filtering.”

Seven threat classes neither safeguard approach closes

Each entry names what breaks, why your controls miss it, what Comment and Control proved, and the recommended action for the week ahead.

1. Deployment surface mismatch
What breaks: CVP is designed for authorized offensive security research, not prompt injection defense. It does not extend to Bedrock, Vertex, or ZDR tenants. TAC constrains ZDR. Google has no program. Your team may be running a verified model on an unverified surface.
Why your controls miss it: Launch announcements describe the program. Support documentation lists the exclusions. Security teams read the announcement. Procurement reads neither.
What Comment and Control proved: The exploit targets the agent runtime, not the deployment platform. A team running Claude Code on Bedrock is outside CVP coverage, but CVP was not designed to address this class of vulnerability in the first place.
Recommended action: Email your Anthropic and OpenAI reps today. One question, in writing: “Confirm whether [your platform] and [your data retention config] are covered by your runtime-level prompt injection protections, and describe what those protections include.” File the response in your vendor risk register.

2. CI secrets exposed to AI agents
What breaks: ANTHROPIC_API_KEY, GEMINI_API_KEY, GITHUB_TOKEN, and any production secret stored as a GitHub Actions env var are readable by every workflow step, including AI coding agents.
Why your controls miss it: The default GitHub Actions config does not scope secrets to individual steps. Repo-level and org-level secrets propagate to all workflows. Most teams never audit which steps access which secrets.
What Comment and Control proved: The agent read the API key from the runner env var, encoded it in a PR comment body, and posted it through GitHub’s API. No attacker-controlled infrastructure required. Exfiltration ran through GitHub’s own API — the platform itself became the C2 channel.
Recommended action: Run grep -r 'secrets\.' .github/workflows/ across every repo with an AI agent. List every secret the agent can access. Rotate all exposed credentials. Migrate to short-lived OIDC tokens (GitHub, GitLab, CircleCI).

3. Over-permissioned agent runtimes
What breaks: AI agents granted bash execution, git push, and API write access at setup. Permissions never scoped down. No periodic least-privilege review. Agents accumulate access in the same way service accounts do.
Why your controls miss it: Agents are configured once during onboarding and inherited across repos. No tooling flags unused permissions. The Comment and Control agent had bash, write, and env-read access for a code review task.
What Comment and Control proved: The agent had bash access it did not need for code review. It used that access to read env vars and post exfiltrated data. Stripping bash would have blocked the attack chain entirely.
Recommended action: Audit agent permissions repo by repo. Strip bash from code review agents. Set repo access to read-only. Gate write access (PR comments, commits, merges) behind a human approval step.

4. No CVE signal for AI agent vulnerabilities
What breaks: CVSS 9.4 Critical. Anthropic, Google, and GitHub patched. Zero CVE entries in NVD. Zero advisories. Your vulnerability scanner, SIEM, and GRC tool all show green.
Why your controls miss it: No CNA has yet issued a CVE for a coding agent prompt injection, and current CVE practices have not captured this class of failure mode. Vendors patch through version bumps. Qualys, Tenable, and Rapid7 have nothing to scan for.
What Comment and Control proved: A SOC analyst running a full scan on Monday morning would find zero entries for a Critical vulnerability that hit Claude Code Security Review, Gemini CLI Action, and Copilot simultaneously.
Recommended action: Create a new category in your supply chain risk register: “AI agent runtime.” Assign a 48-hour check-in cadence with each vendor’s security contact. Do not wait for CVEs. None have come yet, and the taxonomy gap makes them unlikely without industry pressure.

5. Model safeguards do not govern agent actions
What breaks: Opus 4.7 blocks a phishing email prompt. It does not block an agent from reading $ANTHROPIC_API_KEY and posting it as a PR comment. Safeguards gate generation, not operation.
Why your controls miss it: Safeguards filter model outputs (text). Agent operations (bash, git push, curl, API POST) bypass safeguard evaluation entirely. The runtime is outside the safeguard perimeter. Anthropic applies some runtime-level protections in features like Claude Code Auto Mode, but these are not documented in the system card and their scope is not publicly defined.
What Comment and Control proved: The agent never generated prohibited content. It performed a legitimate operation (post a PR comment) containing exfiltrated data. Safeguards never triggered.
Recommended action: Map every operation your AI agents perform: bash, git, API calls, file writes. For each, ask the vendor in writing: does your safeguard layer evaluate this action before execution? Document the answer.

6. Untrusted input parsed as instructions
What breaks: PR titles, PR body text, issue comments, code review comments, and commit messages are all parsed by AI coding agents as context. Any can contain injected instructions.
Why your controls miss it: No input sanitization layer sits between GitHub and the agent instruction set. The agent cannot distinguish developer intent from attacker injection in untrusted fields. Claude Code GitHub Action is designed for trusted first-party inputs by default; users who opt into processing untrusted external PRs accept additional risk.
What Comment and Control proved: A single malicious PR title became a complete exfiltration command. The agent treated it as a legitimate instruction and executed it without validation or confirmation.
Recommended action: Implement input sanitization as defense-in-depth, but do not rely on traditional WAF-style regex patterns. LLM prompt injections are non-deterministic and will evade static pattern matching. Restrict agent context to approved workflow configs and combine with least-privilege permissions.

7. No comparable injection resistance data across vendors
What breaks: Anthropic publishes quantified injection resistance rates in 232 pages. OpenAI publishes model-layer injection evals but no agent-runtime resistance rates. Google publishes a few-page card referencing an older model.
Why your controls miss it: No industry standard exists for AI safety metric disclosure. Vendors may have internal metrics and red-team programs, but published disclosures are not comparable. Procurement has no baseline and no framework to require one.
What Comment and Control proved: Anthropic, OpenAI, and Google were all approved for enterprise use without comparable injection resistance data. The exploit exposed what unmeasured risk looks like in production.
Recommended action: Write one sentence for your next vendor meeting: “Show me your quantified injection resistance rate for my model version on my platform.” Document refusals for EU AI Act high-risk compliance. Deadline: August 2026.

OpenAI’s GPT-5.4 was not directly exploited in the Comment and Control disclosure. The gaps identified in the OpenAI and Google entries above are inferred from what their system cards and program documentation do not publish, not from demonstrated exploits. That distinction matters. Absence of published runtime metrics is a transparency gap, not proof of a vulnerability. It does mean procurement teams cannot verify what they cannot measure.

Eligibility requirements for Anthropic’s Cyber Verification Program and OpenAI’s Trusted Access for Cyber are still evolving, as are platform coverage and program scope, so security teams should validate current vendor docs before treating any coverage described here as definitive. Anthropic’s CVP is designed for authorized offensive security research — removing cyber safeguards for vetted actors — and is not a prompt injection defense program.
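The CI-secrets audit the article recommends can be sketched programmatically as well as with grep. A minimal, hypothetical Python version (the workflow file and secret name below are synthetic, not from the disclosure) flags any workflow that pairs the pull_request_target trigger with a secrets.* reference:

```python
# Hypothetical audit sketch, not an official tool: flag GitHub Actions
# workflows that combine the pull_request_target trigger (which injects
# repository secrets into the runner) with secrets.* references -- the
# pattern the Comment and Control disclosure exploited.
import os
import re
import tempfile

def risky_workflows(workflow_dir):
    """Return (filename, [secret names]) for workflows that use
    pull_request_target and also reference secrets.*."""
    flagged = []
    for name in sorted(os.listdir(workflow_dir)):
        if not name.endswith((".yml", ".yaml")):
            continue
        with open(os.path.join(workflow_dir, name)) as f:
            text = f.read()
        if "pull_request_target" in text:
            secrets = sorted(set(re.findall(r"secrets\.(\w+)", text)))
            if secrets:
                flagged.append((name, secrets))
    return flagged

# Demo against a synthetic workflow file (names are illustrative only).
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "review.yml"), "w") as f:
    f.write(
        "on: pull_request_target\n"
        "jobs:\n"
        "  review:\n"
        "    steps:\n"
        "      - run: ./agent-review\n"
        "        env:\n"
        "          API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}\n"
    )
for name, secrets in risky_workflows(demo_dir):
    print(f"RISK: {name} exposes {', '.join(secrets)} to external PR context")
```

Running the sketch against the synthetic workflow prints a single RISK line; in practice you would point it at every repository that wires an AI coding agent into CI, then rotate whatever it lists.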
Security leaders mapping these gaps to existing frameworks can align threat classes 1–3 with NIST CSF 2.0 GV.SC (Supply Chain Risk Management), threat class 4 with ID.RA (Risk Assessment), and threat classes 5–7 with PR.DS (Data Security).

Comment and Control focuses on GitHub Actions today, but the seven threat classes generalize to most CI/CD runtimes where AI agents execute with access to secrets, including GitHub Actions, GitLab CI, CircleCI, and custom runners. Safety metric disclosure formats are in flux across all three vendors; Anthropic currently leads on published quantification in its system card documentation, but norms are likely to converge as EU AI Act obligations come into force.

Comment and Control targeted Claude Code GitHub Action, a specific product feature, not Anthropic’s models broadly. The vulnerability class, however, applies to any AI coding agent operating in a CI/CD runtime with access to secrets.

What to do before your next vendor renewal

“Don’t standardize on a model. Standardize on a control architecture,” Baer told VentureBeat. “The risk is systemic to agent design, not vendor-specific. Maintain portability so you can swap models without reworking your security posture.”

Build a deployment map. Confirm your platform qualifies for the runtime protections you think cover you. If you run Opus 4.7 on Bedrock, ask your Anthropic account rep what runtime-level prompt injection protections apply to your deployment surface. Email your account rep today. (Anthropic Cyber Verification Program)

Audit every runner for secret exposure. Run grep -r 'secrets\.' .github/workflows/ across every repo with an AI coding agent. List every secret the agent can access. Rotate all exposed credentials. (GitHub Actions secrets documentation)

Start migrating credentials now. Switch stored secrets to short-lived OIDC token issuance. GitHub Actions, GitLab CI, and CircleCI all support OIDC federation. Set token lifetimes to minutes, not hours. Plan full rollout over one to two quarters, starting with repos running AI agents. (GitHub OIDC docs | GitLab OIDC docs | CircleCI OIDC docs)

Fix agent permissions repo by repo. Strip bash execution from every AI agent doing code review. Set repository access to read-only. Gate write access behind a human approval step. (GitHub Actions permissions documentation)

Add input sanitization as one layer, not the only layer. Filter pull request titles, comments, and review threads for instruction patterns before they reach agents. Combine with least-privilege permissions and OIDC. Static regex will not catch non-deterministic prompt injections on its own.

Add “AI agent runtime” to your supply chain risk register. Assign a 48-hour patch verification cadence with each vendor’s security contact. Do not wait for CVEs. None have come yet for this class of vulnerability.

Check which hardened GitHub Actions mitigations you already have in place. Hardened GitHub Actions configurations block this attack class today: the permissions key restricts GITHUB_TOKEN scope, environment protection rules require approval before secrets are injected, and first-time-contributor gates prevent external pull requests from triggering agent workflows. (GitHub Actions security hardening guide)

Prepare one procurement question per vendor before your next renewal. Write one sentence: “Show me your quantified injection resistance rate for the model version I run on the platform I deploy to.” Document refusals for EU AI Act high-risk compliance. The deadline is August 2026.

“Raw zero-days aren’t how most systems get compromised. Composability is,” Baer said. “It’s the glue code, the tokens in CI, the over-permissioned agents. When you wire a powerful model into a permissive runtime, you’ve already done most of the attacker’s work for them.”
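The hardened-configuration mitigations the article lists can be combined in one workflow file. A sketch, assuming GitHub Actions defaults; the workflow name, environment name, and agent command below are placeholders for illustration, not any vendor's documented setup:

```yaml
# Hypothetical hardened workflow for an AI code-review agent.
# Combines the mitigations above: least-privilege GITHUB_TOKEN,
# the safer pull_request trigger, and a human approval gate.
name: ai-code-review          # placeholder name
on: pull_request              # not pull_request_target: fork PRs get no secrets
permissions:
  contents: read              # permissions key restricts GITHUB_TOKEN to read-only
jobs:
  review:
    runs-on: ubuntu-latest
    environment: agent-approval   # protection rule: a human approves before secrets inject
    steps:
      - uses: actions/checkout@v4
      - name: Run review agent
        run: ./run-review-agent    # placeholder command; no general bash access
        env:
          AGENT_API_KEY: ${{ secrets.AGENT_API_KEY }}   # scoped to this step only
```

Environment protection rules and first-time-contributor gates are configured in repository settings rather than in the workflow file itself.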

Technology
Daily AI News April 21, 2026· Yesterday

Modular Post-Training with Mixture-of-Experts

This article shows a more modular way to update AI models, letting teams improve specific capabilities like code, math, tool use, or safety without retraining the entire model from scratch.

Technology · Technology & Infrastructure
VentureBeat· Yesterday

Kimi K2.6 runs agents for days — and exposes the limits of enterprise orchestration

Most orchestration frameworks were built for agents that run for seconds or minutes. Now that agents are running for hours — and in some cases days — those frameworks are starting to crack. Several model providers, such as Anthropic with Claude Code and OpenAI with Codex, introduced early support for long-horizon agents through multi-session tasks, subagents and background execution. However, these systems sometimes assume agents are still operating within bounded-time workflows even when they run for extended periods.  Open-source model provider Moonshot AI wants to push beyond that with its new model, Kimi K2.6.  Moonshot says the model is designed for continuous execution, with internal use cases including agents that ran for hours and, in one case, five straight days, handling monitoring and incident response autonomously. But this growing use of this type of agent is exposing a critical gap in orchestration: most orchestration frameworks were not designed for this type of continuous, stateful execution. Open-source models, such as Kimi K2.6, that rely on agent swarms are making the case that their orchestration approach comes close to managing stateful agents.  The difficulties of orchestrating long-running agents While it is true that some enterprises would rather bring their own orchestration frameworks to their agentic ecosystem, model providers and agent platforms recognize that offering agent management remains a competitive advantage.  Other model providers have begun exploring long-running agents, many through multi-session tasks and background execution. For example, Anthropic’s Claude Code orchestrates agents with a lead agent that directs other agents based on a set of user-instructed definitions. OpenAI’s Codex runs similarly.  Kimi K2.6 approaches orchestration with an improved version of its Agent Swarms, capable of managing up to 300 sub-agents “executing across 4,000 coordinated steps simultaneously,” Moonshot AI wrote in a blog post. 
Compared to both Claude Code and Codex, K2.6 relies on the model, rather than pre-defined roles, to determine orchestration. Kimi K2.6 is now available on Hugging Face, through its API, Kimi Code and the Kimi app.

Practitioners experimenting with long-horizon agents say the brittleness runs deeper than prompting can fix. As one practitioner, Maxim Saplin, put it in a blog post, “That does not mean subagents are useless. It means orchestration is still fragile. Right now, it feels more like a product and training problem than something you can solve by writing a sufficiently stern prompt.”

The core problem long-running agents pose is state: it is difficult to maintain an agent’s state while its environment keeps changing mid-task. A long-running agent constantly calls different tools and APIs or taps into different databases during its runtime. Most current agents also call different tools, but they run for one or two executions lasting a minute at most.

Mark Lambert, chief product officer at ArmorCode, which builds an autonomous security platform for enterprises, told VentureBeat in an email that the governance gap is already outpacing deployment. "These agentic systems can now generate code and system changes faster than most organizations can review, remediate, or govern them. This will require more than just additional scanning. Organizations will need stronger AI governance that provides the context, prioritization, and accountability teams need to manage Kimi and other AI-generated risk before they turn into accumulated exposure," Lambert said.

Long-running agents also risk failing without a clear rollback path. Most importantly, these agents often lack a set of well-defined tasks and dynamically adjust their plans as they run.

Kunal Anand, chief product officer at F5, told VentureBeat in an email that long-horizon agents represent a much bigger architectural shift than most companies were prepared for.
“We went from scripts to services to containers to functions, and now to agents as persistent infrastructure. That creates categories we do not yet have good names for: agent runtime, agent gateway, agent identity provider, agent mesh. The API gateway pattern is morphing into something that has to understand goals and workflows, not just endpoints and verbs,” Anand said.

Running for 13 hours and even five days

Understanding how to orchestrate agents matters because model capabilities have begun to outpace orchestration innovations, even as enterprises start to look at long-horizon agents. Moonshot AI says the model is built for tasks that reflect "real-world challenges that typically demand weeks or months of collective human effort." In a separate technical document provided to VentureBeat, Moonshot claims K2.6 built a full SysY compiler from scratch in 10 hours, work it characterized as equivalent to a team of four engineers over two months, and passed all 140 functional tests without human intervention. The team deployed K2.6 on complex engineering tasks, including overhauling an eight-year-old open-source financial matching engine. Moonshot's engineers described a 13-hour execution that “iterated through 12 optimization strategies, initiating over 1,000 tool calls to modify more than 4,000 lines of code precisely.” Moonshot said one of its teams used K2.6 to build an agent that ran autonomously for five days, managing monitoring, incident response and system operations.
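One common mitigation for the state problem that long-running agents face is durable checkpointing, so a crashed or restarted agent resumes where it left off rather than replaying days of work. A minimal sketch; the file name, plan contents and step loop are hypothetical illustrations, not part of K2.6 or any framework named above:

```python
import json
import os
import tempfile

CHECKPOINT = "agent_state.json"  # hypothetical path, not from any vendor's stack

def load_state() -> dict:
    """Resume from the last durable checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "plan": ["monitor", "triage", "remediate"], "results": []}

def save_state(state: dict) -> None:
    """Write atomically so a crash mid-write cannot corrupt the checkpoint."""
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename

def run(max_steps: int = 3) -> dict:
    state = load_state()
    while state["step"] < max_steps:
        task = state["plan"][state["step"] % len(state["plan"])]
        state["results"].append(f"done:{task}")  # stand-in for a tool/API call
        state["step"] += 1
        save_state(state)  # persist after every step, not only at the end
    return state
```

The atomic-rename write matters for long-horizon agents: a crash mid-checkpoint must never leave a half-written state file behind.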

TechnologyTechnology & Infrastructure
Forbes· Yesterday

Dell Technologies BrandVoice: AI Isn’t Limited By Innovation—It’s Limited By Infrastructure

The organizations pulling ahead are not necessarily the ones with the most advanced AI models. They are the ones that can move data faster, process it efficiently and deploy AI consistently at scale

Technology
Reuters· Yesterday

Apple's Ternus era hints at renewed hardware focus, AI growth push

TechnologyLabor & Society
VentureBeat· Yesterday

Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with an infostealer. That combination created a walk-in path to Vercel’s production environments through an OAuth grant that nobody had reviewed.

Vercel, the cloud platform behind Next.js and its millions of weekly npm downloads, confirmed on Sunday that attackers gained unauthorized access to internal systems. Mandiant was brought in. Law enforcement was notified. Investigations remain active. An update on Monday confirmed that Vercel collaborated with GitHub, Microsoft, npm, and Socket on a coordinated audit verifying that Next.js, Turbopack, AI SDK, and all Vercel-published npm packages remain uncompromised. Vercel also announced it is now defaulting environment variable creation to “sensitive.”

Context.ai was the entry point. OX Security’s analysis found that a Vercel employee installed the Context.ai browser extension and signed into it using a corporate Google Workspace account, granting broad OAuth permissions. When Context.ai was breached, the attacker inherited that employee’s Workspace access, pivoted into Vercel environments, and escalated privileges by sifting through environment variables not marked as “sensitive.” Vercel’s bulletin states that variables marked sensitive are stored in a manner that prevents them from being read; variables without that designation were accessible in plaintext through the dashboard and API, and the attacker used them as the escalation path.

CEO Guillermo Rauch described the attacker as “highly sophisticated and, I strongly suspect, significantly accelerated by AI.” Jaime Blasco, CTO of Nudge Security, independently surfaced a second OAuth grant tied to Context.ai’s Chrome extension, matching the client ID from Vercel’s published IOC to Context.ai’s Google account before Rauch’s public statement.
The Hacker News reported that Google removed Context.ai’s Chrome extension from the Chrome Web Store on March 27. Per The Hacker News and Nudge Security, that extension embedded a second OAuth grant enabling read access to users’ Google Drive files.

Patient zero: a Roblox cheat and a Lumma Stealer infection

Hudson Rock published forensic evidence on Monday, reporting that the breach origin traces to a February 2026 Lumma Stealer infection on a Context.ai employee’s machine. According to Hudson Rock, browser history showed the employee downloading Roblox auto-farm scripts and game exploit executors. Harvested credentials included Google Workspace logins, Supabase keys, Datadog tokens, Authkit credentials, and the support@context.ai account. Hudson Rock identified the infected user as a core member of “context-inc,” Context.ai’s tenant on the Vercel platform, with administrative access to production environment variable dashboards.

Context.ai published its own bulletin on Sunday (updated Monday), disclosing that the breach affects its deprecated AI Office Suite consumer product, not its enterprise Bedrock offering (Context.ai’s agent infrastructure product, unrelated to AWS Bedrock). Context.ai says it detected unauthorized access to its AWS environment in March, hired CrowdStrike to investigate, and shut down the environment. Its updated bulletin then disclosed that the scope was broader than initially understood: the attacker also compromised OAuth tokens for consumer users, and one of those tokens opened the door to Vercel’s Google Workspace.

Dwell time is the detail that should concern security directors. Nearly a month separated Context.ai’s March detection from the Vercel disclosure on Sunday. A separate Trend Micro analysis references an intrusion beginning as early as June 2024, a finding that, if confirmed, would extend the dwell time to roughly 22 months.
VentureBeat could not independently reconcile that timeline with Hudson Rock's February 2026 dating; Trend Micro did not respond to a request for comment before publication.

Where detection goes blind

Security directors can use this breakdown to benchmark their own detection stack against the four-hop kill chain this breach exploited.

1. Infostealer on employee device. What happened: a Context.ai employee downloaded Roblox cheat scripts; Lumma Stealer harvested Workspace creds and Supabase/Datadog/Authkit keys. Who should detect: EDR on the endpoint; credential exposure monitoring. Coverage gap: low. The device was likely under-monitored, and most enterprises do not subscribe to infostealer intelligence feeds or correlate stealer logs against employee email domains.

2. AWS compromise at Context.ai. What happened: the attacker used harvested credentials to access Context.ai’s AWS; detected in March. Who should detect: Context.ai cloud security; AWS CloudTrail. Coverage gap: partially detected. Context.ai stopped AWS access but the initial investigation did not identify OAuth token exfiltration; scope was underestimated until the Vercel disclosure.

3. OAuth token theft into Vercel Workspace. What happened: a compromised OAuth token was used to access a Vercel employee’s Google Workspace; the employee had granted “Allow All” permissions via the Chrome extension. Who should detect: Google Workspace audit logs; OAuth app monitoring; CASB. Coverage gap: very low. Most orgs do not monitor third-party OAuth token usage patterns; no approval workflow intercepted the grant, and no anomaly detection flagged OAuth token use from a compromised third party. This is the hop no one saw.

4. Lateral movement into Vercel production. What happened: the attacker enumerated non-sensitive env vars (accessible via dashboard/API) and harvested customer credentials. Who should detect: Vercel platform audit logs; behavioral analytics. Coverage gap: moderate. Vercel detected the intrusion after the attacker accessed customer credentials; detection occurred after exfiltration, not before, and env var access by a compromised Workspace account did not trigger real-time alerting.

What’s confirmed vs. what’s claimed

Vercel’s bulletin confirms unauthorized access to internal systems, a limited subset of affected customers, and two IOCs tied to Context.ai’s Google Workspace OAuth apps. Rauch confirmed that Next.js, Turbopack, and Vercel’s open-source projects are unaffected. Separately, a threat actor using the ShinyHunters name posted on BreachForums claiming to hold Vercel’s internal database, employee accounts, and GitHub and NPM tokens, with a $2M asking price. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as “likely an imposter.” Actors previously linked to ShinyHunters have denied involvement. None of these claims has been independently verified.

Six governance failures the Vercel breach exposed

1. AI tool OAuth scopes go unaudited. Context.ai’s own bulletin states that a Vercel employee granted “Allow All” permissions using a corporate account. Most security teams have no inventory of which AI tools their employees have granted OAuth access to. CrowdStrike CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.” Jeff Pollard, VP and principal analyst at Forrester, told Cybersecurity Dive that the attack is a reminder about third-party risk management concerns and AI tool permissions.

2. Environment variable classification is doing real security work. Vercel distinguishes between variables marked “sensitive” (stored in a manner that prevents reading) and those without that designation (accessible in plaintext through the dashboard and API). Attackers used the accessible variables as the escalation path. A developer convenience toggle determined the blast radius. Vercel has since changed its default: new environment variables now default to sensitive.
“Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat.

3. Infostealer-to-SaaS-to-supply-chain escalation chains lack detection coverage. Hudson Rock’s reporting reveals a kill chain that crossed four organizational boundaries. No single detection layer covers that chain. Context.ai’s updated bulletin acknowledged that the scope extended beyond what was initially identified during its CrowdStrike-led investigation.

4. Dwell time between vendor detection and customer notification exceeds attacker timelines. Context.ai detected the AWS compromise in March. Vercel disclosed on Sunday. Every CISO should ask their vendors: what is your contractual notification window after detecting unauthorized access that could affect downstream customers?

5. Third-party AI tools are the new shadow IT. Vercel’s bulletin describes Context.ai as “a small, third-party AI tool.” Grip Security’s March 2026 analysis of 23,000 SaaS environments found a 490% year-over-year increase in AI-related attacks. Vercel is the latest enterprise to learn this the hard way.

6. AI-accelerated attackers compress response timelines. Rauch’s assessment of AI acceleration comes from what his IR team observed. CrowdStrike’s 2026 Global Threat Report puts the baseline at a 29-minute average eCrime breakout time, 65% faster than 2024.

Security director action plan

1. OAuth governance (owner: identity/IAM). What failed: Context.ai held broad “Allow All” Workspace permissions, and no approval workflow intercepted the grant. Recommended action: inventory every AI tool OAuth grant org-wide, revoke scopes exceeding least privilege, and check both Vercel IOCs now.

2. Env var classification (owner: platform engineering + security). What failed: variables not marked “sensitive” remained accessible, and that accessibility became the escalation path. Recommended action: default to non-readable, and require a security sign-off to downgrade any variable to accessible.

3. Infostealer-to-supply-chain (owner: threat intel + SOC). What failed: the kill chain spanned Lumma Stealer, Context.ai AWS, OAuth tokens, Vercel Workspace, and production environments. Recommended action: correlate infostealer intel feeds against employee domains, and automate credential rotation when creds surface in stealer logs.

4. Vendor notification lag (owner: third-party risk/legal). What failed: nearly a month passed between Context.ai detection and Vercel disclosure. Recommended action: require 72-hour notification clauses in all contracts involving OAuth or identity integration.

5. Shadow AI adoption (owner: security ops + procurement). What failed: one employee’s unapproved AI tool became the breach vector for hundreds of orgs. Recommended action: extend shadow IT discovery to AI agent platforms, and treat unapproved adoption as a security event.

6. Lateral movement speed (owner: SOC + IR team). What failed: Rauch suspects AI acceleration, and the attacker compressed the access-to-escalation window. Recommended action: cut detection-to-containment SLAs below the 29-minute eCrime average.

Run both IoC checks today

Search your Google Workspace admin console (Security > API Controls > Manage Third-Party App Access) for two OAuth App IDs. The first is 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com, tied to Context.ai’s Office Suite. The second is 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com, tied to Context.ai’s Chrome extension and granting Google Drive read access. If either touched your environment, you are in the blast radius regardless of what Vercel discloses next.

What this means for security directors

Forget the Vercel brand name for a moment. What happened here is the first major proof case that AI agent OAuth integrations create a breach class that most enterprise security programs cannot detect, scope, or contain. A Roblox cheat download in February led to production infrastructure access in April. Four organizational boundaries, two cloud providers, and one identity perimeter. No zero-day required.
For most enterprises, employees have connected AI tools to corporate Google Workspace, Microsoft 365 or Slack instances with broad OAuth scopes — without security teams knowing. The Vercel breach is the case study for what that exposure looks like when an attacker finds it first.
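The two IoC checks described above can also be scripted against an export of Workspace third-party app grants. A minimal sketch, assuming a hypothetical CSV export with `user_email` and `client_id` columns (the export format and column names are assumptions, not part of Vercel's or Google's guidance; the two client IDs are the IOCs published in the article):

```python
import csv
import io

# The two OAuth client IDs published as IOCs, per Vercel's bulletin.
CONTEXT_AI_IOCS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",  # Office Suite
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",  # Chrome extension
}

def find_ioc_grants(grants_csv: str) -> list[dict]:
    """Return rows from a hypothetical app-grants export whose client_id matches an IOC."""
    reader = csv.DictReader(io.StringIO(grants_csv))
    return [row for row in reader if row.get("client_id") in CONTEXT_AI_IOCS]

# Usage with a tiny inline example export.
sample = """user_email,client_id
alice@example.com,123456-benign.apps.googleusercontent.com
bob@example.com,110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
"""
hits = find_ioc_grants(sample)
for row in hits:
    print(f"IOC match: {row['user_email']} granted {row['client_id']}")
```

Any match means that grant needs immediate revocation and the account's sessions and tokens need rotation, regardless of further vendor disclosure.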

TechnologyTechnology & Infrastructure
Bebeez· Yesterday

Bull announces €30m, five-year contract for Mimer supercomputer in Sweden

Former Atos subsidiary Bull has announced a €30 million ($35m) five-year contract to provide AI-optimized infrastructure at Sweden’s Mimer AI factory. Housed at Linköping University’s National Academic Infrastructure for Supercomputing in Sweden (NAISS), in partnership with the Research Institutes of Sweden (RISE), the supercomputer will be based on Bull’s BullSequana XH3500 architecture. The system was […]

Technology
MIT Technology Review· Yesterday

China's open-source bet

The country's top AI labs are undercutting US competitors and winning over developers by making their best models free.

Technology
MIT Technology Review· Yesterday

World models

Today's AI is still unreliable. Some researchers think solving that problem requires teaching AI systems to understand the world around them.

Technology
Reuters· Yesterday

Exclusive: Meta to start capturing employee mouse movements, keystrokes for AI training data | Reuters

Meta is installing new tracking software on U.S.-based employees’ computers to capture mouse movements, clicks and keystrokes for use in training its artificial intelligence models, part of a broad initiative to build AI agents that can ...

TechnologyLabor & Society
BBC· Yesterday

Meta to track workers' clicks and keystrokes to train AI

The firm will take data from the way employees work for its artificial intelligence models.

TechnologyTechnology & Infrastructure
Reuters· Yesterday

AI chipmaker Forge Nano to list via $1.6 billion SPAC deal | Reuters

The deal comes amid booming demand for AI chips in recent years as companies ramp up spending on data centers and high-performance computing to support generative AI applications, benefiting chip firms and equipment makers across the supply chain.

Technology
⚙️ Apple chooses its AI path with Ternus as CEO· Yesterday

Apple Chooses AI Path with Ternus as CEO

Apple named John Ternus as its next CEO, signaling a clearer bet on hardware, devices, and tightly integrated experiences as its path for the AI era.

Technology
Exponentialview· Yesterday

🍏 Apple’s AI bet got a CEO

Apple’s board picked John Ternus, senior vice president of Hardware Engineering, to succeed Tim Cook on September 1.

Technology
WIRED· Yesterday

Mozilla Used Anthropic’s Mythos to Find and Fix 271 Bugs in Firefox | WIRED

The Firefox team doesn’t think emerging AI capabilities will upend cybersecurity long term, but they warn that software developers are likely in for a rocky transition.

Technology
Reuters· Yesterday

Adobe announces $25 billion stock buyback amid AI disruption fears | Reuters

Fears were compounded when top AI firm Anthropic unveiled Claude Design last week, which allows users to create designs, prototypes and presentations using its chatbot.

Technology
Reuters· Yesterday

SpaceX says it has option to acquire startup Cursor for $60 billion | Reuters

Along with OpenAI and Anthropic, Cursor is one of several Silicon Valley startups that have drawn waves of developers by using artificial intelligence to automate coding, a business where AI companies have found early commercial traction.

PaywallTechnologyEconomics & Markets
FT· Yesterday

Is Ternus the leader Apple needs for the AI era?

Tim Cook’s replacement must lead iPhone maker through industry shift

TechnologyEconomics & Markets
Reuters· Yesterday

Apple's new CEO is a product perfectionist taking on the AI age | Reuters

While software rivals at Microsoft (MSFT.O), opens new tab and Google are spending hundreds of billions to push artificial intelligence into every corner of their businesses, the man set to lead one of the world's most iconic companies appears ...

Technology
Axios AI+· Yesterday

Apple Enters Post-Cook Era Chasing Next Hit

Apple CEO Tim Cook is stepping down, and the company is chasing its next big hit after the iPhone. Apple has struggled to break into new categories, including AI, and has stumbled in its efforts to expand beyond the iPhone.

Technology
Product Hunt· Yesterday

The New Waydev

Measure the full AI SDLC. From token to production.

Technology
Product Hunt· Yesterday

Pegasus 1.5 by TwelveLabs

AI model for transforming video into Time-Based Metadata

TechnologyTechnology & Infrastructure
Reuters· Yesterday

Exclusive: SpaceX says unproven AI space data centers may not be commercially viable, filing shows | Reuters

In February, after announcing a merger between SpaceX and his social media and artificial intelligence firm xAI, Elon Musk said "space-based AI is obviously the only way to scale".

Technology
Reuters· Yesterday

Intel results to show if supply chain issues are dimming its AI ambitions

Technology
Reuters· Yesterday

Australia's Syenta raises $26 million to ease AI chip bottleneck

Australia's Syenta raises $26 million to ease AI chip bottleneck, former Intel CEO joins board

Technology
MIT Technology Review· Yesterday

Humanoid data

Robotics companies want tremendous amounts of data on how we move our hands and limbs, and their tactics are getting strange.

Technology
Reuters· Yesterday

Anthropic's Mythos model accessed by unauthorized users

Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports

Technology
Reuters· Yesterday

Meta breaks ground on over $1 billion data center in Oklahoma's Tulsa

Technology
MIT Technology Review· Yesterday

Roundtables: Unveiling The 10 Things That Matter in AI Right Now

Watch a special edition of Roundtables simulcast live from EmTech AI, MIT Technology Review’s signature conference for AI leadership. Subscribers got an exclusive first look at a new list capturing 10 key technologies, emerging trends, bold ideas, and powerful movements in AI that you need to know about…

TechnologyAdoption & Impact
GlobeNewswire· Yesterday

AI Is Hitting Operational Limits as Companies Rush to Scale, Datadog Report Finds

New data from Datadog reveals operational complexity – not model intelligence – is becoming the primary barrier to reliable AI at scale....

PaywallTechnology
Bloomberg· Yesterday

SpaceX Has Deal for Right to Acquire Cursor for $60 Billion

SpaceX said it has an agreement giving it the right to acquire artificial intelligence startup Cursor for $60 billion later this year or to pay $10 billion for the companies’ work together, part of the Elon Musk-run firm’s efforts to catch up with rivals in AI coding tools.

TechnologyEconomics & Markets
IT Pro· Yesterday

UK firms ponder offshoring AI workloads as energy costs surge | IT Pro

Energy costs are already holding back AI, with one in five UK firms moving their AI operations overseas to dodge surging energy prices.

Technology
Ainvest· Yesterday

AI Adoption Unlocks 11.5% Productivity Gains—Early Adopters Like Snowflake and Shopify Are Decoupling Growth from Headcount

TechnologyEconomics & Markets
IndexBox· Yesterday

AI Tech Stocks Gain as Oil Prices Fall on Geopolitical News - News and Statistics - IndexBox

The article details a market pivot where easing Middle East tensions lower oil prices, redirecting capital toward AI. It highlights major gains and strategic moves by Credo, Oracle, and CoreWeave in AI infrastructure.

TechnologyEconomics & Markets
The Motley Fool· Yesterday

5 AI Cloud Stocks That Will Make Investors a Fortune Over the Long Run | The Motley Fool

Each company's cloud computing division is its fastest-growing segment. With all three companies spending hundreds of billions of dollars on infrastructure to capitalize on massive demand, these segments are likely to grow rapidly for years to come. ... All three of these stocks look primed for success, and I believe they will outperform the market due to their high exposure to the AI ...

TechnologyEconomics & Markets
🧑‍💻 Sergey Brin's plan to out-code Anthropic· Yesterday

Sergey Brin's plan to out-code Anthropic

Sergey Brin is reportedly committing Google DeepMind to a strategy aimed at catching up with Anthropic's Claude.

TechnologyEconomics & Markets
Investing.com· Yesterday

Muted Growth, Rising AI Disruption: IT Services Sector Faces Uncertain 2026 By Investing.com

Analysts expect 2026 to be characterized by steady but unspectacular growth. While AI presents a transformative opportunity, its disruptive potential and unclear near-term impact continue to cloud the sector’s outlook.

Technology
International Business Times· Yesterday

Alphabet Stock Rises Modestly as Investors Await Q1 Earnings on AI Momentum

NEW YORK — Alphabet Inc. Class C shares edged higher in morning trading Tuesday, climbing 0.26% to $336.28 as Wall Street prepared for the tech giant's first-quarter 2026 earnings report scheduled for April 29, with focus on Google Cloud growth, AI advancements and the impact of massive capital

TechnologyEconomics & Markets
Calcalist Tech· Yesterday

The 50 most promising Israeli startups - 2026 | Ctech

The round was led by Qumra Capital, with participation from investors including Peter Welinder of OpenAI and Clara Shih of Meta, bringing the company’s total funding to $120 million. Today, with a team of 115 employees across Israel, the United States and Europe, Qodo is positioning itself at the forefront of what the industry increasingly refers to as “code governance,” AI systems that not only generate ...

TechnologyGeopolitics
Top Daily Headlines: NASA working on ‘Big Bang’ upgrade to keep the Voyagers alive for longer· Yesterday

One of Europe's sovereign cloud picks may not be so-sovereign after all

US-based cloud providers could have to disclose certain data under American legal orders.

TechnologyGeopolitics
Devex· Yesterday

If AI becomes infrastructure, who should control its future? | Devex

But for many others, AI is rapidly ... as core infrastructure in countries across the world. “It is as valuable as water, electricity, roads, anything else,” said Calista Redmond, vice president of global AI initiatives at NVIDIA, a technology company that designs the chips behind much of today’s artificial intelligence. “It is something that can accelerate economic progress for a country or an individual, ...

Technology
South China Morning Post· Yesterday

Opinion | US-China AI race must strike a balance between security and openness | South China Morning Post

Raising barriers to entry in tech in the name of national security could stifle global AI development.

TechnologyLabor & Society
Top Daily Headlines: NASA working on ‘Big Bang’ upgrade to keep the Voyagers alive for longer· Yesterday

Claude Desktop changes app access settings for browsers you don't even have installed yet

Installation and pre-approval without consent looks dubious under EU law.

TechnologyTechnology & Infrastructure
MIT Technology Review· Yesterday

2026 10 Things That Matter in AI Right Now | MIT Technology Review

MIT Technology Review's authoritative overview of the 10 technologies, emerging trends, bold ideas, and powerful movements in AI in 2026.

TechnologyTechnology & Infrastructure
Investing.com· Yesterday

Factbox-From OpenAI to Nvidia, firms channel billions into AI infrastructure as demand booms By Reuters

Technology
DataCenterKnowledge· Yesterday

Data Center World: As AI Scale Surges, a Call to Build for Legacy

Industry leaders gather in Washington as AI-driven growth is pushing data centers to new scale, reshaping demands on power, design and sustainability.

TechnologyTechnology & Infrastructure
SiliconANGLE· Yesterday

Cloud infrastructure powers global AI at scale - SiliconANGLE

As AI matures, open cloud infrastructure is emerging as the critical foundation for compliant, performant agentic applications worldwide.

TechnologyTechnology & Infrastructure
Knowledge Hub Media· Yesterday

AI Is Driving an Energy Crisis… And Sustainability Is Struggling to Keep Up | Knowledge Hub Media

Artificial intelligence is scaling at a pace that few industries were prepared for. What started as a wave of experimentation has quickly turned into full-scale deployment across nearly every sector. Enterprises are no longer asking if they should adopt AI. They are asking how fast they can ...

TechnologyTechnology & Infrastructure
Daily AI News April 21, 2026: Kimi K2.6: Open weights get heavyweight· Yesterday

Kimi K2.6: Advancing Open-Source Coding

Moonshot's new open-source model, Kimi K2.6, is designed for complex coding, long-horizon execution, and agent swarms, supporting up to 300 sub-agents.

TechnologyLabor & Society
Guardian· Yesterday

‘I’ll key your car’: ChatGPT can become abusive when fed real-life arguments, study finds

Researchers find model starts to mirror tone when exposed to impoliteness – sometimes escalating into explicit threats

ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study. Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.

TechnologyAdoption & Impact
🔄 Claude Live Artifacts: auto-refresh dashboards + Amazon $5B compute· Yesterday

Amazon doubles down on Anthropic with $5B now and 5 gigawatts of compute

Anthropic has secured a new agreement with Amazon for up to 5 gigawatts of computing capacity to support the training and deployment of Claude models.

PaywallTechnology
WSJ· Yesterday

Apple Hardware Exec to Succeed Tim Cook as CEO

Plus, an Iran cease-fire extension looks unlikely, and a $150 train ride to the World Cup might feel red-card worthy.

PaywallTechnology
Bloomberg· Yesterday

Amazon to Invest an Additional $5 Billion in Anthropic

Amazon.com Inc. is investing an additional $5 billion in Anthropic PBC and may inject $20 billion more over time, a deal that strengthens ties in an increasingly competitive artificial intelligence race.

Technology
Substack· Yesterday

Opus 4.7 Part 1: The Model Card - by Askwho Casts AI

Technology
arXiv· Yesterday

Benchmarking System Dynamics AI Assistants: Cloud Versus Local LLMs on CLD Extraction and Discussion

Technology
Bebeez· Yesterday

Anthropic seeks data center leasing deals in Europe and Australia

Anthropic plans to sign data center capacity deals in Europe and Australia, job listings reveal. The previously unreported transaction principal roles come as the generative AI company increases its internal data center team, after previously working solely with cloud providers. The company has only publicly announced data center plans in the US.

Technology
Reddit· Yesterday

r/pennystocks on Reddit: $DGXX - Early AI Revenue. Hyperscalers in Talks. Pay Attention

$GX AI (Gaxos.ai) - Explosive AI Growth in Health & Gaming at Market Cap ~$13M

Technology
Reddit· Yesterday

r/AI_Agents on Reddit: Best AI by category

But in actual use, I’ve found that the “best” AI seems to depend heavily on the task.

Technology
Fortune· Yesterday

Nvidia CEO says that AI agents will make workers busier than ever—they’ll ‘harass’ and ‘micromanage’ you, instead of take your job

Jensen Huang, CEO of $4.8 trillion tech giant Nvidia, says AI will help humans explore space, get better at their jobs, and live more cost-effectively.

Technology
arXiv· Yesterday

On the Importance and Evaluation of Narrativity in Natural Language AI Explanations

Explainable AI (XAI) aims to make the behaviour of machine learning models interpretable, yet many explanation methods remain difficult to understand. The integration of Natural Language Generation into XAI aims to deliver explanations in textual form, making them more accessible to practitioners. Current approaches, however, largely yield static lists of feature importances.

Technology
Bebeez· Yesterday

Lithuania’s Outcraft AI secures €2M pre-seed to build autonomous sales agents

Vilnius-based Outcraft AI has raised €2 million in pre-seed funding led by Practica Capital, with early backing from venture builder Lost Astronaut, to advance its push into autonomous revenue execution. The startup is building AI agents that independently handle sales and customer engagement across voice, SMS, email, and messaging platforms, targeting a […]

Technology
arXiv· Yesterday

Aether: Network Validation Using Agentic AI and Digital Twin

Network change validation remains a critical yet predominantly manual, time-consuming, and error-prone process in modern network operations. While formal network verification has made substantial progress in proving correctness properties, it is typically applied in offline, pre-deployment settings and faces challenges in accommodating continuous changes and validating live production behavior. Current operational approaches typically involve scattered testing tools, resulting in partial coverag...

TechnologyLabor & Society
MIT Technology Review· Yesterday

The Download: murderous ‘mirror’ bacteria, and Chinese workers fighting AI doubles

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. No one’s sure if synthetic mirror life will kill us all. In February 2019, a group of scientists proposed a high-risk, cutting-edge, irresistibly exciting idea that the National Science Foundation should…

Technology
arXiv· 2d ago

The Collaboration Gap in Human-AI Work

LLMs are increasingly presented as collaborators in programming, design, writing, and analysis. Yet the practical experience of working with them often falls short of this promise. In many settings, users must diagnose misunderstandings, reconstruct missing assumptions, and repeatedly repair misaligned responses.

Technology
arXiv· 2d ago

Architectural Design Decisions in AI Agent Harnesses

AI agent systems increasingly rely on reusable non-LLM engineering infrastructure that packages tool mediation, context handling, delegation, safety control, and orchestration. Yet the architectural design decisions in this surrounding infrastructure remain understudied. This paper presents a protocol-guided, source-grounded empirical study of 70 publicly available agent-system projects, addressing three questions: which design-decision dimensions recur across projects, which co-occurrences stru...

TechnologyEconomics & Markets
Siliconrepublic· 2d ago

Irish co-founded AI start-up Lua raises $5.8m

The company said it will use the new funding to continue to build out its developer community and the ‘Lua Implementation Network’. Read more: Irish co-founded AI start-up Lua raises $5.8m

Technology
TimesTech· 2d ago

Powering the AI Era: Rethinking Chips, Data Centres, and Infrastructure Efficiency at Scale

The intersection of efficiency ... of AI infrastructure. Looking ahead, several emerging technologies have the potential to reshape this landscape.

Technology
LinkedIn· 2d ago

Kannanware hiring AI Engineer in Chennai, Tamil Nadu ...

Technology
Guardian· 2d ago

Sonos Play review: a great jack-of-all-trades portable speaker for home or away

Quality wifi bookshelf speaker can go mobile with Bluetooth, long battery life and water resistance, in a return to form. The Play is a new portable wifi and Bluetooth home speaker that packs the best of Sonos into a jack of all trades intended to be a reset point in the company’s recovery from its app debacle, which lost it faith, favour and a chief executive. It is the first truly new music speaker since Sonos launched its new app in May 2024, which junked fan-favourite features while causing stability and usage problems for new and old customers alike. The company has spent the best part of two years fixing mistakes, bringing back core features and ensuring the system actually works.

PaywallTechnologyEconomics & Markets
FT· 2d ago

A retro rally is coming from e-merging markets

For now South Korea and Taiwan are the biggest beneficiaries of the AI wave

Technology
arXiv· 2d ago

Reverse Constitutional AI: A Framework for Controllable Toxic Data Generation via Probability-Clamped RLAIF

Ensuring the safety of large language models (LLMs) requires robust red teaming, yet the systematic synthesis of high-quality toxic data remains under-explored. We propose Reverse Constitutional AI (R-CAI), a framework for automated and controllable adversarial data generation that moves beyond isolated jailbreak prompts. By inverting a harmless constitution into a constitution of toxicity and iteratively refining model outputs through a critique--revision pipeline, R-CAI enables scalable synthe...

TechnologyTechnology & Infrastructure
OpenPR· 2d ago

AI Computing Power Industry Analysis: Navigating the Supply Crunch, Chip Innovation, and Infrastructure Gold Rush

Press release - QY Research Inc. - AI Computing Power Industry Analysis: Navigating the Supply Crunch, Chip Innovation, and Infrastructure Gold Rush - published on openPR.com

Technology
Top Daily Headlines· 2d ago

Atlassian's Data Collection Policy Shift

Atlassian's new data collection policy protects rich customers while AI eats the rest. From August 17, the outfit will collect customer metadata by default unless you pay for the top tier.

Technology
TechCrunch· 2d ago

The 12-Month Window

A lot of AI startups exist partly because the foundation models haven't expanded into their category yet.

Technology
TheSequence· 2d ago

The Sequence Radar #845

The Sequence Radar #845: Last Week in AI: Anthropic and OpenAI Enter a New Phase.

Technology
The Information· 2d ago

Anthropic's Pricing Shift as AI Use Surges

Technology
Daily Brew· 2d ago

Google Partners with Marvell for Custom AI Chips

Google is in talks with Marvell Technology to develop custom AI chips, aiming to diversify its silicon supply chain.

Technology
Daily Brew· 2d ago

Tech industry lays off nearly 80,000 employees in the first quarter of 2026

Nearly 80,000 tech workers were laid off in Q1 2026, with approximately 50% of these job cuts attributed to AI integration.

Technology
The Information· 2d ago

Google in Talks With Marvell to Build New AI Chips

Google is in talks with Marvell to build new AI chips for inference.

Technology
r/ArtificialInteligence· 2d ago

AI Writes Most of My Code Now

AI writes most of my code now. Honest thoughts after a year of this.

Technology
Product Hunt· 2d ago

Verdent 2.0

Your AI Technical Cofounder.

TechnologyLabor & Society
Artificial Intelligence Newsletter | April 20, 2026· 2d ago

Singapore proposes global standard for generative-AI testing

Singapore has proposed a new international standard, ISO/IEC 42119-8, to establish consistent testing methodologies for generative AI systems, focusing on benchmarking and red teaming.

TechnologyEconomics & Markets
Arxiv· 2d ago

Convergence to collusion in algorithmic pricing

arXiv:2604.15825v1 Announce Type: new Abstract: Artificial intelligence algorithms are increasingly used by firms to set prices. Previous research shows that they can exhibit collusive behaviour, but how quickly they can do so has so far remained an open question. I show that a modern deep reinforcement learning model deployed to price goods in a repeated oligopolistic competition game with conti
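As a loose, hypothetical illustration of the repeated-pricing setup the abstract describes (not the paper's deep reinforcement learning model — and note that stateless learners like these generally cannot sustain genuine collusion, which requires memory of rivals' past prices), a minimal ε-greedy pricing duopoly might look like:

```python
import random

PRICES = [1, 2, 3]  # hypothetical price grid: 1 ~ competitive, 3 ~ monopoly-like


def profit(p_own, p_rival):
    # Toy demand: the cheaper firm takes the whole market, ties split it.
    if p_own < p_rival:
        return float(p_own)
    if p_own == p_rival:
        return p_own / 2.0
    return 0.0


def train(episodes=5000, eps=0.1, alpha=0.1, seed=1):
    rng = random.Random(seed)
    # One stateless Q-table (bandit) per firm: estimated profit per price.
    q = [{p: 0.0 for p in PRICES} for _ in range(2)]
    for _ in range(episodes):
        # Each firm explores with probability eps, otherwise prices greedily.
        acts = [
            rng.choice(PRICES) if rng.random() < eps else max(q[i], key=q[i].get)
            for i in range(2)
        ]
        for i in range(2):
            r = profit(acts[i], acts[1 - i])
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return q


tables = train()
greedy_prices = [max(t, key=t.get) for t in tables]
```

The research question in the abstract — how quickly such independently learning agents drift toward supra-competitive prices — would be probed by tracking `greedy_prices` over training and by giving the learners state (each rival's last price), which is what enables the collusive equilibria reported in this literature.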

TechnologyEconomics & Markets
Fortune· 2d ago

Cisco’s John Chambers lived through the dot-com crash. He says the AI bubble is harder to navigate

Also: All the news and watercooler chat from Fortune.

Technology
Theregister· 2d ago

AI quota inflation is no token effort. It's baked in

We've been here before. This time, we may not get out. Opinion: Fans of the creative arts often find out where creators gather to talk among themselves, then sneak in to eavesdrop on what those masters of the art talk about. Golden insights, daring concepts, cutting-edge thinking? Not a bit. Gossip, if you're lucky. Travel miseries, if you're not. Mostly, they talk about money.…

TechnologyLabor & Society
MIT Technology Review· 2d ago

Chinese tech workers train AI doubles

A viral GitHub project that claims to clone coworkers into a reusable AI skill is forcing Chinese tech workers to confront deeper fears.

TechnologyLabor & Society
Arxiv· 2d ago

Towards A Framework for Levels of Anthropomorphic Deception in Robots and AI

arXiv:2604.15418v1 Announce Type: cross Abstract: This paper presents a preliminary draft of a framework around the use of anthropomorphic deception, defined here as misleading users towards humanlike affordances in the design of autonomous systems. The goal is to promote reflection among HCI and HRI researchers, as well as industry practitioners, to think about levels of anthropomorphic design that are: a) functionally necessary, b) socially appropriate, and c) ethically permissible for their use case. By reviewing the relevant literature on deception in HCI and HRI, we propose a framework with four levels of anthropomorphic deception. These levels are defined and distinguished by three factors: humanlikeness, agency, and selfhood. Example use cases at each level illustrate considerations around their functional, social, and ethical permissibility. We then present how this framework is applicable to previous work on persuasive robots. We hope to promote a balanced view on anthropomorphic deception by design that should be neither naïve (e.g., as a default) nor exploitive (e.g., for economic benefit).

PaywallTechnology
Bloomberg· 2d ago

Google Eyes New Chips to Speed Up AI Results, Challenging Nvidia

The company aims to build on its momentum after inking deals with Meta and Anthropic.

TechnologyAdoption & Impact
Arxiv· 2d ago

GIST: Multimodal Knowledge Extraction and Spatial Grounding via Intelligent Semantic Topology

arXiv:2604.15495v1 Announce Type: new Abstract: Navigating complex, densely packed environments like retail stores, warehouses, and hospitals poses a significant spatial grounding challenge for humans and embodied AI. In these spaces, dense visual features quickly become stale given the quasi-static nature of items, and long-tail semantic distributions challenge traditional computer vision. While Vision-Language Models (VLMs) help assistive systems navigate semantically-rich spaces, they still struggle with spatial grounding in cluttered environments. We present GIST (Grounded Intelligent Semantic Topology), a multimodal knowledge extraction pipeline that transforms a consumer-grade mobile point cloud into a semantically annotated navigation topology. Our architecture distills the scene into a 2D occupancy map, extracts its topological layout, and overlays a lightweight semantic layer via intelligent keyframe and semantic selection. We demonstrate the versatility of this structured spatial knowledge through critical downstream Human-AI interaction tasks: (1) an intent-driven Semantic Search engine that actively infers categorical alternatives and zones when exact matches fail; (2) a one-shot Semantic Localizer achieving a 1.04 m top-5 mean translation error; (3) a Zone Classification module that segments the walkable floor plan into high-level semantic regions; and (4) a Visually-Grounded Instruction Generator that synthesizes optimal paths into egocentric, landmark-rich natural language routing. In multi-criteria LLM evaluations, GIST outperforms sequence-based instruction generation baselines. Finally, an in-situ formative evaluation (N=5) yields an 80% navigation success rate relying solely on verbal cues, validating the system's capacity for universal design.

PaywallTechnologyTechnology & Infrastructure
Bloomberg· 2d ago

AirTrunk to Acquire Lumina CloudInfra to Expand in India

AirTrunk, a data center operator backed by Blackstone Inc., is buying Lumina CloudInfra to expand in India.

TechnologyTechnology & Infrastructure
Arxiv· 2d ago

Experience Compression Spectrum: Unifying Memory, Skills, and Rules in LLM Agents

arXiv:2604.15877v1 Announce Type: new Abstract: As LLM agents scale to long-horizon, multi-session deployments, efficiently managing accumulated experience becomes a critical bottleneck. Agent memory systems and agent skill discovery both address this challenge – extracting reusable knowledge from interaction traces – yet a citation analysis of 1,136 references across 22 primary papers reveals a cross-community citation rate below 1%. We propose the "Experience Compression Spectrum", a unifying framework that positions memory, skills, and rules as points along a single axis of increasing compression (5–20× for episodic memory, 50–500× for procedural skills, 1,000×+ for declarative rules), directly reducing context consumption, retrieval latency, and compute overhead. Mapping 20+ systems onto this spectrum reveals that every system operates at a fixed, predetermined compression level – none supports adaptive cross-level compression, a gap we term the "missing diagonal". We further show that specialization alone is insufficient – both communities independently solve shared sub-problems without exchanging solutions – that evaluation methods are tightly coupled to compression levels, that transferability increases with compression at the cost of specificity, and that knowledge lifecycle management remains largely neglected. We articulate open problems and design principles for scalable, full-spectrum agent learning systems.

TechnologyTechnology & Infrastructure
Arxiv· 2d ago

LACE: Lattice Attention for Cross-thread Exploration

arXiv:2604.15529v1 Announce Type: new Abstract: Current large language models reason in isolation. Although it is common to sample multiple reasoning paths in parallel, these trajectories do not interact, and often fail in the same redundant ways. We introduce LACE, a framework that transforms reasoning from a collection of independent trials into a coordinated, parallel process. By repurposing the model architecture to enable cross-thread attention, LACE allows concurrent reasoning paths to share intermediate insights and correct one another during inference. A central challenge is the absence of natural training data that exhibits such collaborative behavior. We address this gap with a synthetic data pipeline that explicitly teaches models to communicate and error-correct across threads. Experiments show that this unified exploration substantially outperforms standard parallel search, improving reasoning accuracy by over 7 points. Our results suggest that large language models can be more effective when parallel reasoning paths are allowed to interact.

PaywallTechnology
Bloomberg· 2d ago

AST Drops as Blue Origin Rocket Places Satellite in Wrong Orbit

AST SpaceMobile Inc. shares plunged in early trading on Monday after Blue Origin’s flagship New Glenn rocket failed to correctly place a payload it was carrying for the Texas-based satellite networking company into its intended orbit.

TechnologyAdoption & Impact
IT Pro· 2d ago

UK firms are grappling with mismatched AI productivity gains – employees are more efficient, but business performance is stagnating | IT Pro

AI productivity gains are largely "localized" to individual workers, according to Accenture, with IT leaders failing to scale adoption across the business.

Technology
Daily AI News April 20, 2026: Salesforce just blew up its own pricing model· 2d ago

Introducing Salesforce Headless 360. No Browser Required

Salesforce Headless 360 reframes CRM access around APIs and agent-facing workflows, allowing AI systems to interact with customer data without relying on the browser as the primary interface.

Technology
Reuters· 2d ago

Marvell Shares Gain on Report of Deal Talks with Google

Marvell Technology's shares have gained after a report that Google is in talks with the chip designer to develop two new AI chips.

TechnologyEconomics & Markets
🎨 Anthropic goes for Figma's crown· 2d ago

Anthropic goes for Figma's crown

Anthropic is expanding its capabilities, positioning its AI tools to compete with design platforms like Figma.

Technology
Daily AI News April 20, 2026: Salesforce just blew up its own pricing model· 2d ago

Introducing Claude Design by Anthropic Labs

Claude Design extends Anthropic’s product surface into visual and product design workflows, allowing users to move from idea to PRD and design artifacts within the interface.

TechnologyEconomics & Markets
Artificial Intelligence Newsletter | April 20, 2026· 2d ago

Musk ordered by US judge to file pledge to not benefit from OpenAI disgorgement

US District Judge Yvonne Gonzalez Rogers ordered Elon Musk to file a verified waiver stating he will not seek personal financial benefit if OpenAI is required to disgorge profits from its for-profit conversion.

Technology
DIGITIMES· 2d ago

DeepSeek reportedly weighs first external fundraising as AI competition intensifies

Meanwhile, other industry coverage in 2026 has pointed to heightened geopolitical sensitivity toward advanced AI systems developed in China, particularly as global governments tighten scrutiny of AI model capabilities, data use, and infrastructure dependencies. Axios has also reported ongoing policy discussions in the US regarding the competitive ...

Technology
TechRadar· 2d ago

How EU organizations can turn sovereign cloud theory into action | TechRadar

From sovereign cloud theory to practical EU strategy

TechnologyGeopolitics
Bisi· 2d ago

Supply Chain Vulnerabilities in Europe's Semiconductor Industry Amid the Iran Conflict — Bloomsbury Intelligence and Security Institute (BISI)

As disruption in the Middle East ... China to offset longer-term shortages through supply chain diversification or as a market for European manufacturers. ... Chips Act, which centred around strengthening the EU’s semiconductor production capacity, improving security of supply, ...

PaywallTechnologyTechnology & Infrastructure
Bloomberg· 2d ago

AI Junk-Debt Offering Wave Rolls On as Edged Compute Sells Bonds

Data center developers are returning to the junk-debt market to fund artificial intelligence infrastructure, adding to a wave of new issuance.

Technology
Top Daily Headlines: Claude Opus wrote a Chrome exploit for $2,283· 2d ago

Users complain that UK Azure is having capacity problems

Reports suggest users are experiencing capacity issues with UK Azure workloads.

Technology
Top Daily Headlines: Claude Opus wrote a Chrome exploit for $2,283· 2d ago

Claude Opus wrote a Chrome exploit for $2,283

Mainstream AI models are already capable of identifying vulnerabilities in popular software, as demonstrated by Claude Opus writing a Chrome exploit.

Technology
Daily Brew· 2d ago

Google Partners with Marvell for Custom AI Chips Amid Broadcom Deal Extension

Google is in talks with Marvell Technology to develop custom AI chips, aiming to diversify its silicon supply chain and enhance AI inference capabilities.

TechnologyTechnology & Infrastructure
DIGITIMES· 2d ago

AI boom lifts Taiwan's chip testing firms to record 1Q26

As artificial intelligence chips increasingly migrate to advanced manufacturing nodes, the complexity — and duration — of semiconductor testing is rising sharply. That shift is fueling surging demand for Taiwan's test interface suppliers, whose businesses are climbing in tandem with the AI boom.

Technology
Daily AI News April 20, 2026: Salesforce just blew up its own pricing model· 2d ago

Codex for (almost) everything

OpenAI's updated Codex adds computer use, image generation, and deeper integrations with tools like Atlassian and Microsoft to compete in agentic developer workflows.

TechnologyTechnology & Infrastructure
Daily AI News April 20, 2026: Salesforce just blew up its own pricing model· 2d ago

Agentic Engineering: How Swarms of AI Agents Are Redefining Software Engineering

LangChain’s piece explores multi-agent software development, focusing on leader and worker agents, orchestration, and emerging engineering patterns.

Technology
IndexBox· 2d ago

TSMC's AI Chip Dominance: Record Growth, Expansion Plans, and Supply Chain Outlook - News and Statistics - IndexBox

Analysis of TSMC's record performance fueled by AI chip demand, its strategic expansion, and the ongoing challenges of supply chain risks and component shortages.

TechnologyTechnology & Infrastructure
TechRadar· 2d ago

Beyond the hype: The critical role of security in responsible AI development | TechRadar

AI pipelines need zero trust principles and continuous human oversight

TechnologyTechnology & Infrastructure
Arxiv· 2d ago

Discover and Prove: An Open-source Agentic Framework for Hard Mode Automated Theorem Proving in Lean 4

arXiv:2604.15839v1 Announce Type: new Abstract: Most ATP benchmarks embed the final answer within the formal statement -- a convention we call "Easy Mode" -- a design that simplifies the task relative to what human competitors face and may lead to optimistic estimates of model capability. We call the stricter, more realistic setting "Hard Mode": the system must independently discover the answer before constructing a formal proof. To enable Hard Mode research, we make two contributions. First, we release MiniF2F-Hard and FIMO-Hard, expert-reannotated Hard Mode variants of two widely-used ATP benchmarks. Second, we introduce Discover And Prove (DAP), an agentic framework that uses LLM natural-language reasoning with explicit self-reflection to discover answers, then rewrites Hard Mode statements into Easy Mode ones for existing ATP provers. DAP sets the state of the art: on CombiBench it raises solved problems from 7 (previous SOTA, Pass@16) to 10; on PutnamBench it is the first system to formally prove 36 theorems in Hard Mode -- while simultaneously revealing that state-of-the-art LLMs exceed 80% answer accuracy on the same problems where formal provers manage under 10%, exposing a substantial gap that Hard Mode benchmarks are uniquely suited to measure.

TechnologyTechnology & Infrastructure
Arxiv· 2d ago

Bilevel Optimization of Agent Skills via Monte Carlo Tree Search

arXiv:2604.15709v1 Announce Type: new Abstract: Agent skills are structured collections of instructions, tools, and supporting resources that help large language model (LLM) agents perform particular classes of tasks. Empirical evidence shows that the design of skills can materially affect agent task performance, yet systematically optimizing skills remains challenging. Since a skill comprises instructions, tools, and supporting resources in a structured way, optimizing it requires jointly determining both the structure of these components and the content each component contains. This gives rise to a complex decision space with strong interdependence across structure and components. We therefore represent these two coupled decisions as skill structure and component content, and formulate skill optimization as a bilevel optimization problem. We propose a bilevel optimization framework in which an outer loop employs Monte Carlo Tree Search to determine the skill structure, while an inner loop refines the component content within the structure selected by the outer loop. In both loops, we employ LLMs to assist the optimization procedure. We evaluate the proposed framework on an open-source Operations Research Question Answering dataset, and the experimental results suggest that the bilevel optimization framework improves the performance of the agents with the optimized skill.
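A hypothetical miniature of the bilevel idea the abstract describes — an outer search over skill structures paired with an inner loop refining component content — where a toy scoring function stands in for real agent evaluation, exhaustive enumeration stands in for Monte Carlo Tree Search, and simple string mutation stands in for LLM-assisted rewriting:

```python
import itertools
import random

COMPONENTS = ["instructions", "tools", "resources"]


def task_score(structure, contents):
    # Toy stand-in for evaluating an agent equipped with the skill:
    # reward structures that include tools, and richer instruction content.
    score = 1.0 if "tools" in structure else 0.0
    score += 0.1 * len(contents.get("instructions", ""))
    return score


def inner_refine(structure, steps=20, seed=0):
    # Inner loop: hill-climb component content under a fixed structure.
    rng = random.Random(seed)
    contents = {c: "" for c in structure}
    best = task_score(structure, contents)
    for _ in range(steps):
        cand = dict(contents)
        comp = rng.choice(list(structure))
        cand[comp] += "x"  # stand-in for an LLM rewriting that component
        s = task_score(structure, cand)
        if s > best:  # keep only improving rewrites
            best, contents = s, cand
    return best, contents


def outer_search():
    # Outer loop: enumerate the tiny structure space (component subsets);
    # the paper applies Monte Carlo Tree Search to this level instead.
    best_score, best_structure = float("-inf"), None
    for k in range(1, len(COMPONENTS) + 1):
        for structure in itertools.combinations(COMPONENTS, k):
            score, _ = inner_refine(structure)
            if score > best_score:
                best_score, best_structure = score, structure
    return best_score, best_structure


best_score, best_structure = outer_search()
```

The coupling the abstract highlights shows up even here: the value of a structure can only be judged after its content has been refined, which is why the two decisions form a bilevel rather than a flat search problem.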

TechnologyLabor & Society
Tech Comparison Pro· 2d ago

Secure Coding with Generative AI: Best Practices for 2026 & Beyond

EU AI Act Article 15 classifies certain AI systems as high-risk, including those used in critical infrastructure, law enforcement, and—relevant to us—AI systems that generate code for safety-critical applications. High-risk systems must implement risk management, data governance, and human oversight. If your AI coding assistant falls under this classification, you need documented compliance ...

TechnologyLabor & Society
Arxiv· 2d ago

Access Over Deception: Fighting Deceptive Patterns through Accessibility

arXiv:2604.15338v1 Announce Type: cross Abstract: Deceptive patterns, dark patterns, and manipulative user interfaces (UI) are a widely used design strategy that manipulates users to act against their own interests in pursuit of shareholder aims. These patterns may particularly affect people with less education, visual impairments, and older adults. Yet, access is a critical feature of the user experience (UX), development standards, and law. We considered whether and how the Web Content Accessibility Guidelines (WCAG) and related legislation, like the European Accessibility Act (EAA), could act as a tool against deceptive patterns. We used heuristic evaluation to analyze whether and how deceptive patterns violate or conform to these guidelines and legal statutes. Although statistical analysis revealed no significant differences by pattern type, we identified three patterns implicated by the WCAG guidelines: Countdown Timer, Auto-Play, and Hidden Information. We offer this approach as one tool in the fight against UI-based deception and in support of inclusive design.

TechnologyLabor & Society
Arxiv· 2d ago

Subliminal Transfer of Unsafe Behaviors in AI Agent Distillation

arXiv:2604.15559v1 Announce Type: new Abstract: Recent work on subliminal learning demonstrates that language models can transmit semantic traits through data that is semantically unrelated to those traits. However, it remains unclear whether behavioral traits can transfer in agentic systems, where policies are learned from trajectories rather than static text. In this work, we provide the first empirical evidence that unsafe agent behaviors can transfer subliminally through model distillation across two complementary experimental settings. In our primary setting, we construct a teacher agent exhibiting a strong deletion bias, a tendency to perform destructive file-system actions via an API-style tool interface, and distill it into a student using only trajectories from ostensibly safe tasks, with all explicit deletion keywords rigorously filtered. In our secondary setting, we replicate the threat model in a native Bash environment, replacing API tool calls with shell commands and operationalizing the bias as a preference for issuing chmod as the first permission-related command over semantically equivalent alternatives such as chown or setfacl. Despite full keyword sanitation in both settings, students inherit measurable behavioral biases. In the API setting the student's deletion rate reaches 100% (versus a 5% baseline) under homogeneous distillation; in the Bash setting the student's chmod-first rate reaches 30%-55% (versus a 0%-10% baseline), with the strongest transfer observed in large-to-small distillation. Our results demonstrate that explicit data sanitation is an insufficient defense, and behavioral biases are encoded implicitly in trajectory dynamics regardless of the tool interface.

TechnologyEconomics & Markets
Arxiv· 2d ago

Stochastic wage suppression on gig platforms and how to organize against it

arXiv:2604.15962v1 Announce Type: cross Abstract: Digital labor platforms are increasingly used to procure human input, ranging from annotating data and red-teaming AI models, to ride-sharing and food delivery. A central concern in such markets is the ability of platforms to suppress wages by exploiting the abundance of low-cost labor. To study this exploitation pattern, we introduce a novel post

TechnologyTechnology & Infrastructure
CNBC· 2d ago

Marvell pops on report it will help Google with custom AI chips. Broadcom shares sink

Marvell saw a $2 billion investment from Nvidia in March, as AI demand continues to surge.

TechnologyTechnology & Infrastructure
Brussels Morning· 2d ago

AI Infrastructure Investment News: 5 Powerful Signals

AI infrastructure investment news surges as Marvell and Google reportedly explore AI chip deals, signaling major shifts in the tech market.

Technology
Reuters· 2d ago

Morgan Stanley Sees Agentic AI Widening Chip Spending

Morgan Stanley believes that increasingly autonomous artificial intelligence could boost demand for central processing units (CPUs).

TechnologyAdoption & Impact
Reuters· 2d ago

Adobe launches AI suite for corporate clients as competition heats up | Reuters

Adobe launched a suite of artificial intelligence tools on Monday to help corporate clients automate and personalize digital marketing functions, in a bid to fend off competition from autonomous tools offered by startups such as Anthropic.

Technology
Axios AI+· 2d ago

CEO's AI Automation Experiment

The CEO of $10.2 billion blockchain infrastructure company Alchemy has an AI assistant that tracks his health data, scores his habits, and assigns new goals. The AI agent, named Dave, is built with OpenClaw and has already gone rogue, ordering Indian food without prompting and turning off the lights when the CEO failed to go to sleep on time.

Technology
Top Daily Headlines· 2d ago

AI Vendors Shrug Off Responsibility for Vulnerabilities

AI vendors shrug off responsibility for vulnerabilities. Passing the buck, and the blame, down the road shows a lack of maturity among AI companies.

Technology
Reuters· 2d ago

AI Company Deletes OKCupid User Photos, Data

An AI company has deleted OKCupid user photos and data after facing scrutiny from the FTC.

Technology
AlphaSignal· 2d ago

Wispr Flow Partnership

Wispr Flow works at the system level across Mac, Windows, iPhone, and Android. Speak naturally in any app and get clean, polished text. No filler words, no cleanup needed.

Technology
Reuters· 2d ago

Who is John Ternus, Apple's New CEO?

John Ternus has been appointed as the new CEO of Apple, succeeding Tim Cook.

Technology
SD Times· 2d ago

How AI’s Productivity Promise Can Finally Start Paying Off - SD Times

The conversation around AI in software development has shifted from "if" it will be used to "how much" it is already producing. As of early 2026, the volume of machine-generated contributions has hit a critical mass that traditional manual workflows can no longer sustain.

Technology
CNBC· 2d ago

Amazon to invest up to another $25 billion in Anthropic as part of AI infrastructure deal

The two AI companies have been racing to convince investors of their strengthening positions ahead of potential IPOs that could land as soon as this year. OpenAI executives have been criticizing Anthropic in recent months for making a "strategic misstep to not acquire enough compute." Anthropic said on Monday that enterprise and developer demand for Claude, as well as a "sharp rise" in consumer usage, has led to "inevitable strain" on its infrastructure ...

Technology
Globaldatacenterhub· 2d ago

Where Is Power Reshaping the Global AI Data Center Map?

March 2026 global data center transactions reveal how power availability, capital depth, and infrastructure platforms are determining where the next wave of AI compute capacity will scale.

Technology
Visual Capitalist· 2d ago

Big Tech AI Spending Over Time (2022-2025)

See how Big Tech AI spending changed from 2022 to 2025, as Alphabet, Amazon, Meta, Microsoft, and Oracle ramped up capex.

TechnologyEconomics & Markets
Proactiveinvestors NA· 2d ago

Amazon faces margin questions despite expected Q1 revenue beat | NASDAQ:AMZN

Bank of America noted that while demand indicators remain supportive, margin variability is likely to remain a key focus for investors. Capital expenditure levels are also expected to remain elevated, with continued investment in cloud infrastructure and AI capacity, including partnerships ...

Technology
Artificial Intelligence Newsletter | April 21, 2026· 2d ago

Musk faces down US judge's demand to affirm no financial gain from OpenAI trial

A judge has ordered Elon Musk to file a binding promise that he will not receive any of the $130 billion in damages his expert claims OpenAI and Microsoft should pay if they are found to have illegally converted to for-profit status.

TechnologyEconomics & Markets
The Hindu BusinessLine· 2d ago

AI’s token economy revolution creates new China tech winners

China's AI token economy boosts niche startups, attracting investors and reshaping the tech landscape amid traditional giants' losses.

TechnologyEconomics & Markets
Parameter· 2d ago

5 Leading Cloud and AI Infrastructure Stocks Drawing Analyst Attention in 2026

Discover the 5 best cloud infrastructure stocks for 2026, including Microsoft, Nvidia, Amazon, Broadcom, and Arista, backed by analyst ratings.

Technology
DW· 2d ago

How China is reshaping the global chip industry

US curbs on advanced chips pushed China to focus on its own semiconductor ecosystem. And while it trails at the very cutting edge, China's "good‑enough" technology is fast powering much of the global economy.

TechnologyGeopolitics
Futurism· 2d ago

Nvidia CEO Loses His Cool at Tough Question

During a taping of tech podcaster Dwarkesh Patel's show, Nvidia CEO Jensen Huang lost his cool when questioned about whether selling advanced AI chips to China poses national security risks to the United States. Playing devil’s advocate, Patel referenced Anthropic’s Claude Mythos model as evidence that giving China access to ...

TechnologyLabor & Society
Fortune· 2d ago

This talent CEO says laid-off tech workers are ignoring a $300K ‘white-collar trade job’ with 81K openings a year

As AI layoffs threaten the white-collar workforce, the $700 billion data center build-out is minting six-figure technician roles, and employers can’t fill them fast enough.

Technology
The Free Press Journal· 2d ago

AI Transition Triggers 73,000+ Layoffs As Tech Firms Restructure Globally

Global tech layoffs surged in early 2026, with over 73,200 job cuts across 95 firms as companies pivot toward AI-driven operations. Major players like Snap Inc., Meta Platforms, Oracle Corporation and Amazon are reducing headcount to cut costs and automate tasks.

Technology
DEV Community· 2d ago

What Developers Need to Know About the EU AI Act Before August 2026

If you're building AI systems that touch European users, the EU AI Act is no longer a future problem ...

TechnologyTechnology & Infrastructure
Manufacturing Dive· 2d ago

The great data center delay: Why your AI chips are stuck in 2026

Bruce Bateman, a chief analyst at Omdia, outlines why semiconductor supplies are being affected by a perfect storm of physical and geopolitical constraints as AI demand is increasing.

Technology
Business Standard· 2d ago

Google eyes new chips to speed up AI results, challenging Nvidia

The company plans to announce its new generation of custom-designed chips, known as tensor processing units, or TPUs, at the Google Cloud Next conference in Las Vegas this week. Amin Vahdat, who oversees Google’s AI infrastructure and chip work, declined to comment on plans for an inference ...

TechnologyTechnology & Infrastructure
Insider Monkey· 2d ago

12 Best AI Data Center Stocks to Buy Right Now

The firm has successfully pivoted to high-complexity data engineering for the Magnificent Seven and other frontier model builders. This provides it with a deep technical moat. Unlike competitors that use crowdsourced workers, Innodata utilizes subject-matter experts for Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). In early 2026, Innodata secured a major ...

Technology
OpenPR· 2d ago

AI Computing Power Industry Analysis: Navigating the Supply Crunch, Chip Innovation, and Infrastructure Gold Rush

Global leading market research publisher QYResearch announces the release of its latest report, "AI Computing Power: Global Market Share and Ranking, Overall Sales and Demand Forecast 2026-2032". The modern digital economy is being fundamentally restructured by a singular, insatiable ...

TechnologyTechnology & Infrastructure
Palo Alto Networks· 2d ago

Fracturing Software Security With Frontier AI Models

Unit 42 finds frontier AI models enhance vulnerability discovery, acting as full-spectrum security researchers. They enable autonomous zero-day discovery and faster N-day patching.

PaywallTechnology
NYT· 3d ago

Cerebras, an A.I. Chip Maker, Files to Go Public as Tech Offerings Ramp Up

The Silicon Valley chip maker filed a prospectus just as SpaceX, Anthropic and OpenAI prepared for their own listings, in what is shaping up to be a wave of enormous initial public offerings.

TechnologyAdoption & Impact
Daily Brew· 3d ago

Google's Gemini AI Surpasses 1 Billion Monthly Visits, Revolutionizing Digital Interaction and Productivity

Google's Gemini AI now garners over 1 billion monthly visits, significantly enhancing tasks like coding, writing, and research across its ecosystem.

TechnologyAdoption & Impact
Daily Brew· 3d ago

Google Enhances Ad Protections with Real-Time Reviews, Android 17 Boosts Privacy Controls

Google unveils real-time ad protections and enhanced privacy controls with Android 17, aiming to block harmful ads at submission and tighten data access.

TechnologyAdoption & Impact
Daily Brew· 3d ago

Is Your Site Agent-Ready? by Cloudflare

Scan your website to see how ready it is for AI agents.

Technology
Yahoo! Finance· 3d ago

Nvidia’s AI Foundation Portfolio Links Equity Bets And Policy Risks

Nvidia CEO Jensen Huang has acknowledged missing early investment chances in OpenAI and Anthropic, while outlining current multi-billion-dollar stakes in these and other AI firms. Huang has described an AI foundation portfolio approach that spreads Nvidia’s capital across a wide range of ...

Technology
Tech Insider· 3d ago

Q1 2026 Venture Capital Hits $297B: AI Captures 81% of Record Funding

Q1 2026 venture capital hit $297 billion, with AI startups capturing 81% of all funding. OpenAI raised $122B, Anthropic $30B, xAI $20B in record mega-rounds.

Technology
Simply Wall St· 3d ago

Cisco’s Expanded AI and Cyber Moves Might Change The Case For Investing In Cisco Systems (CSCO)

Cisco Systems has recently moved to deepen its role in AI and cybersecurity, joining Project Glasswing, supporting the UALink Consortium’s new AI infrastructure specifications, and reportedly entering talks to acquire Israeli cybersecurity firm Astrix Security for about US$250 million to ...

Technology
Daily Brew· 3d ago

OpenAI's existential questions

An overview of the existential challenges and strategic questions currently facing OpenAI.

TechnologyGeopolitics
The Economic Times· 3d ago

Volatile geopolitics slows GCC in March - The Economic Times

Global geopolitical tensions are impacting new technology center openings in India. While the number of new centers slowed, existing ones are expanding. Experts predict continued growth in greenfield centers if uncertainties ease. The GCC ecosystem is steadily rising, driven by new capabilities ...
