AI Daily Brief
Government Public Sector
Latest Intelligence
The latest AI stories, analysis and developments relevant to Government Public Sector — curated daily by Best Practice AI.
Use Cases for Government Public Sector
200 articles
Malaysian City Johor Bahru Pilots 5G-Enabled AI Uses for Smart City Rollout
The Malaysian city of Johor Bahru is piloting 5G-enabled AI use cases under Malaysia's Digital Ministry grant, focusing on real-world urban applications rather than policy concepts.
China Signals Tailored Approach to AI Law as New Risks Mount
A senior Chinese legislative official has offered a fresh glimpse into Beijing's evolving approach to legislation on artificial intelligence, signaling new priorities while maintaining continuity with earlier policy thinking.
Growing AI Power Consumption Prompts Inquiry
Growing AI power consumption prompts MPs to examine low-energy computing, as a committee launches an inquiry into emerging chip designs to curb datacenter energy use.
Australian Federal Court Allows Generative AI Use with Strict Guidelines
The Federal Court of Australia will allow generative artificial intelligence to be used in court, but Chief Justice Debra Mortimer warned that inaccurate or misleading statements could result in financial or legal consequences.
Singapore Pushes for Faster Global AI Standards Framework
Singapore is pushing for faster global AI governance by promoting earlier standard-setting and stronger testing frameworks, as officials warn standards must keep pace with rapid technological change.
EU Countries Asked for Flexibility on AI Act Changes
EU governments are being asked to weigh in on compromise proposals and indicate their flexibility on five key unresolved issues in the AI Act amendments, ahead of a final deal expected on April 28.
Japan's Justice Ministry Sets Civil-Law Panel on AI-Era Voice, Portrait Rights
Japan's justice ministry is moving to clarify when generative artificial intelligence users may face civil liability for copying a person's voice or likeness without consent, setting up an expert panel that could become one of the government's clearest public statements yet on how existing civil-law doctrines apply to deepfakes and voice cloning.
EU Governments, Lawmakers Agree on Less Controversial AI Law Changes
EU governments and lawmakers have agreed on some of the less controversial aspects of a legislative package to amend the EU's AI law, including the provisions on AI literacy, the legal basis for processing sensitive data, and the reinstatement of the registration requirement for AI systems the providers don't deem high-risk.
Singapore Proposes Global AI Standard
Singapore is spearheading a global initiative with the proposed ISO/IEC 42119-8 standard to enhance testing and reliability of generative AI systems.
Community-Led AI Integration for Wildfire Risk Assessment: A Participatory AI Literacy and Explainability Integration (PALEI) Framework in Los Angeles, CA
Climate-driven wildfires are intensifying, particularly in urban regions such as Southern California. Yet, traditional fire risk communication tools often fail to gain public trust due to inaccessible design, non-transparent outputs, and limited contextual relevance. These challenges are especially critical in high-risk communities, where trust depends on how clearly and locally information is presented.
China Unveils Draft Plan for AI Development
China’s top data regulator has released a draft action plan to build industry-grade, high-quality datasets to support artificial intelligence development, marking Beijing’s latest effort to address a key bottleneck hindering AI development and deployment across sectors from manufacturing to smart driving.
Singapore Proposes Global Standard for Generative-AI Testing
Singapore has proposed a new international standard, ISO/IEC 42119-8, to establish consistent testing methodologies for generative AI systems, focusing on benchmarking and red teaming.
UK.gov kicks off half-a-billion quid sovereign AI venture with £80M invite
Companies get to keep IP developed for government projects. The UK government is opening £80 million in AI procurement talks with tech firms, drawing on its £500 million sovereign capability fund.…
UK announces Sovereign AI fund to fast-track domestic AI startups
The new scheme will offer funding, supercomputing access, and regulatory support to selected British AI firms.
AI Sovereign Compute Infrastructure Program (Canada) - fundsforNGOs
Deadline: 01-Jun-2026 The Government of Canada has launched the AI Sovereign Compute Infrastructure Program to build a national AI supercomputing system. With approximately $890 million in funding, the initiative aims to boost compute capacity, strengthen AI innovation, and ensure secure, ...
India's AI Future: Building a Global Powerhouse with a Humanity-Centric Approach
India is striving to build a self-sufficient AI ecosystem leveraging Digital Public Infrastructure to drive economic growth and avoid dependency on global AI providers.
Alex Bores rolls out "AI dividend" plan
Democratic House candidate Alex Bores is proposing an "AI dividend" to address potential job displacement, aiming to share the economic gains of AI productivity with the American public.
Stop Asking Whether to Regulate AI—Start Asking What It’s For | Opinion - Newsweek
"Well-designed governance does not suppress innovation ... it shapes the direction of innovation in socially beneficial ways."
Estimating Government Worker Skills
arXiv:2604.15819v1 Announce Type: new Abstract: We propose a new approach to estimate government worker skills, a setting where output is hard to observe and wages may be uninformative about skills. The approach uses wages in comparable jobs in the private sector and machine learning tools to link skills to skill-related observables. We apply the approach to rich Indonesian household-level panel data from 1988-2014, showing two main applications. First, government skills have continuously declined relative to the private sector, driven by the most skilled workers ending up in the private sector. Second, the Indonesian government pays a wage premium of 43% conditional on skills.
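The paper's two-step idea (fit a wage model on private-sector workers, then use it to impute a skill index for government workers and measure the conditional pay premium) can be sketched in a few lines. All numbers and the single "education" regressor below are invented for illustration; the paper uses rich panel data and machine learning tools rather than this plain one-variable least squares:

```python
# Illustrative sketch: (1) fit a wage model on private-sector workers,
# (2) impute a skill index for government workers from the same observables,
# (3) compare actual government pay with skill-predicted pay.
# Data and the lone "education" feature are made up for illustration.

def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x with one regressor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Private-sector sample: (years of education, log hourly wage)
priv_edu = [12, 14, 16, 16, 18]
priv_logw = [2.0, 2.3, 2.6, 2.7, 3.0]
a, b = fit_simple_ols(priv_edu, priv_logw)

# Government sample: same observables, plus actual government log wages
gov_edu = [14, 16]
gov_logw = [2.6, 2.9]

# Skill index = the log wage the private sector would pay for the observables.
gov_skill = [a + b * e for e in gov_edu]

# Conditional premium = actual government pay minus skill-predicted pay.
premium = sum(w - s for w, s in zip(gov_logw, gov_skill)) / len(gov_logw)
print(f"government log-wage premium conditional on skill: {premium:.3f}")
```

A positive premium here would correspond to the paper's finding of government pay above what skills alone predict.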
Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures
arXiv:2604.15514v1 Announce Type: new Abstract: In November 2025, the Government of Canada operationalized its commitment to transparency by releasing its first Federal AI Register. In this paper, we argue that such registers are not neutral mirrors of government activity, but active instruments of ontological design that configure the boundaries of accountability. We analyzed the Register's complete dataset of 409 systems using the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework, combining quantitative mapping with deductive qualitative coding. Our findings reveal a sharp divergence between the rhetoric of "sovereign AI" and the reality of bureaucratic practice: while 86% of systems are deployed internally for efficiency, the Register systematically obscures the human discretion, training, and uncertainty management required to operate them. By privileging technical descriptions over sociotechnical context, the Register constructs an ontology of AI as "reliable tooling" rather than "contestable decision-making." We conclude that without a shift in design, such transparency artifacts risk automating accountability into a performative compliance exercise, offering visibility without contestability.
Improving Recycling Accuracy across UK Local Authorities: A Prototype for Citizen Engagement
arXiv:2604.15345v1 Announce Type: cross Abstract: Despite public motivation to recycle, significant barriers hinder effective household recycling in the UK. Decentralised local authority waste management creates citizen confusion and "wishcycling" (disposing of non-recyclable items in recycling bins). The recent Simpler Recycling Policy further complicates this landscape by mandating new identification, sorting, and cleaning requirements that will require citizen guidance to ensure they understand how these will impact their recycling practices. This mixed-methods study (surveys n=50, expert interviews, design activities) used the Value Proposition Canvas to identify citizen pain points: confusion about logos, logistical constraints, and information gaps about local requirements. We then developed an interactive prototype application providing location-specific guidance, visual sorting aids, and material-specific information to address these pain points. Focus group evaluation showed the prototype improved recycling accuracy by 60 percent, with marked improvements in packaging assessment. Technology-enabled solutions grounded in user-centred design can measurably improve recycling behaviours and reduce contamination. However, such solutions are most effective when complementing (rather than substituting for) systemic improvements in local authority communication and service design.
Alex Bores' AI Dividend Plan
Alex Bores, a Democratic House candidate in New York, is rolling out a plan to create an 'AI dividend' in response to potential large-scale job displacement from artificial intelligence. The dividend would fund direct payments to Americans and be invested into workforce training and education.
Germany's Merz says industrial AI needs less stringent EU regulation | Reuters
Germany's Chancellor Friedrich Merz said on Sunday that artificial intelligence for industrial use will require more regulatory freedom in the European Union than other AI areas such as consumer use to boost productivity.
It Takes 2 Minutes to Hack the EU's New Age-Verification App
Security researchers demonstrate that the EU's new age-verification app can be compromised in minutes.
Anthropic CEO visits White House amid hacking fears over new AI model
White House and Anthropic hold 'productive' meeting amid fears over Mythos model
The discussion is a sign the AI firm's technology may be too critical for even the US government to do without.
China Central Bank’s Pan Flags AI Risks, Opportunities at IMF
China’s central bank governor Pan Gongsheng said artificial intelligence is driving a new wave of technological and industrial transformation that brings both opportunities and risks to the global economy.
UK launches $675M Sovereign AI fund to break dependence on US tech giants
The UK government has announced a new fund aimed at developing domestic AI capabilities to reduce reliance on major U.S. technology firms.
EU in Critical Talks with Anthropic Over Cybersecurity AI Regulations and Deployment
The European Commission is negotiating with Anthropic over the integration of its cybersecurity AI, Claude Mythos, within EU regulations due to its dual-use concerns.
It is time to ban the sale of precise geolocation
An argument for legislative action to prohibit the commercial sale of precise geolocation data to protect individual privacy.
Anthropic's relationship with the Trump administration seems to be thawing
Signs suggest a warming of relations between AI company Anthropic and the Trump administration.
Illinois is OpenAI and Anthropic’s latest battleground as the state tries to assess liability for catastrophes caused by AI
OpenAI is backing SB 3444, under which frontier AI developers would not be liable for the death or serious injury of 100 or more people or more than $1 billion in property damage.
Britain puts €573 million on the table as Sovereign AI names its first startup cohort
The UK government has formally launched Sovereign AI, a new €573 million (£500 million) initiative designed to back British AI startups working in areas seen as strategically important for both economic growth and national security. Positioned as a more agile, state-backed effort, the programme is intended to support homegrown AI companies with not only capital, […]
China outlines AI-led investment push and industrial upgrading under new five-year plan framework
China is preparing a new round of large-scale investment and industrial upgrading centered on AI infrastructure, advanced manufacturing, and strategic emerging industries, according to a briefing by the National Development and Reform Commission (NDRC) at a State Council Information Office ...
The Labor Department wants to teach you to use AI more. Here's what we found
The Labor Department's course also links to at least one risky piece of advice, directing students to check out a video titled "101 ways to use AI."
Lawmakers gathered quietly to talk about AI. Angst and fears of 'destruction' followed - Newsday
Lawmakers on Capitol Hill conducted a roundtable with leading AI executives and academics to discuss the potential transformative impacts of the technology on American society.
GeoAgentBench: A Dynamic Execution Benchmark for Tool-Augmented Agents in Spatial Analysis
arXiv:2604.13888v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into Geographic Information Systems (GIS) marks a paradigm shift toward autonomous spatial analysis. However, evaluating these LLM-based agents remains challenging due to the complex, multi-step nature of geospatial workflows. Existing benchmarks primarily rely on static text or code matching, neglecting dynamic runtime feedback and the multimodal nature of spatial outputs. To address this gap, we introduce GeoAgentBench (GABench), a dynamic and interactive evaluation benchmark tailored for tool-augmented GIS agents. GABench provides a realistic execution sandbox integrating 117 atomic GIS tools, encompassing 53 typical spatial analysis tasks across 6 core GIS domains. Recognizing that precise parameter configuration is the primary determinant of execution success in dynamic GIS environments, we designed the Parameter Execution Accuracy (PEA) metric, which utilizes a "Last-Attempt Alignment" strategy to quantify the fidelity of implicit parameter inference. Complementing this, a Vision-Language Model (VLM) based verification is proposed to assess data-spatial accuracy and cartographic style adherence. Furthermore, to address the frequent task failures caused by parameter misalignments and runtime anomalies, we developed a novel agent architecture, Plan-and-React, that mimics expert cognitive workflows by decoupling global orchestration from step-wise reactive execution. Extensive experiments with seven representative LLMs demonstrate that the Plan-and-React paradigm significantly outperforms traditional frameworks, achieving the optimal balance between logical rigor and execution robustness, particularly in multi-step reasoning and error recovery. Our findings highlight current capability boundaries and establish a robust standard for assessing and advancing the next generation of autonomous GeoAI.
Anthropic CEO met White House chief of staff as US seeks access to Mythos model
Dario Amodei met with Susie Wiles despite lawsuits over whether AI lab is a national security threat
Trump Seeks More Time in Lawsuit Against IRS Over His Tax Returns
The Justice Department has not responded to the president’s suit, which has created a conflict of interest for the government’s lawyers.
White House and Anthropic Hold ‘Productive’ Meeting, Aiming for a Compromise
Friday’s meeting at the White House followed the introduction of Anthropic’s powerful new artificial intelligence model, Mythos, which U.S. officials believe could be critical for security.
Landmark Canada AI Supercomputer: $890M Federal Initiative
Canada AI Supercomputer gets $890 million through the AI Sovereign Compute Infrastructure Program. Applications open until June 1, 2026.
Finance Minister eyes Korea as UN AI hub - The Korea Herald
Finance Minister Koo Yun-cheol set out to turn South Korea into a global center for artificial intelligence, saying Seoul could eventually host the AI arms of UN organizations.
Data centers have staying power as a Wisconsin election issue | Government | captimes.com
Massive projects to support A.I. are becoming a more important issue for Wisconsin voters, and for candidates running for governor and the Legislature.
Finance minister stresses commitment to making S. Korea global AI hub
WASHINGTON - Finance Minister Koo Yun-cheol on Thursday underscored his commitment to making South Korea a global hub for artificial intelligence (AI), pointing to international organizations' plans to install their key AI platforms in the Asian country.
Centre Sets Up AI Governance Body To Align Policy Across Ministries
AIGEG will act as the government’s top decision-making group on AI policy, bringing together key ministries, regulators and advisors under one framework.
The Labor Department wants to teach you to use AI more. Here's what we found
The short course provides solid basics for using AI. But it also misidentifies AI products, links out to bad advice and raises ethical concerns about the products it promotes
Energy secretary sees ‘growing’ opposition to AI as 'very real' risk | FedScoop
The agency, as well as the broader administration, is trying to change the public perception of AI, according to DOE’s Chris Wright.
Liz Kendall urges UK public to embrace AI as government makes first £500m fund investment
Technology secretary plays down fears over jobs and cyber security as stake taken in British startup. The UK technology secretary has urged the country to "make AI work for Britain", brushing off fears about its impact on jobs and cybersecurity as the government announced its first investment under a £500m sovereign AI fund. Liz Kendall said the UK had to "seize" the opportunity offered by AI despite concerns underlined this month when US startup Anthropic revealed it had developed an AI model that posed a potentially significant cyber threat. Asked how the government makes the case for embracing a technology that could disrupt jobs and now cybersecurity, Kendall said: "We have to seize this to make it work, for Britain, for our jobs, for solving the biggest challenges we face as a world." Speaking on Thursday as the government unveiled its first investment in a UK company as part of the £500m sovereign AI fund, Kendall acknowledged "people are worried about the risks and what it means for their jobs", but said AI entrepreneurs also believed they can "make it work … they can create jobs".
European consumer protection chief brings digital regulation message to Silicon Valley
Michael McGrath, the European Commission's consumer protection chief, visited Silicon Valley to discuss the EU's planned Digital Fairness Act and potential cooperation with the California Privacy Protection Agency.
Cyber rules shift as geopolitics & AI reshape policy
NCC Group says geopolitics, digital sovereignty and AI are driving tougher cyber rules, with boards facing greater accountability and scrutiny.
Research reveals that EU AI rules stop at its borders with little accountability for human rights impacts abroad · Global Voices
On paper, many of these technologies are described as civilian. In practice, the line between civilian and military use is almost always blurred, particularly in authoritarian contexts and conflict zones.
States can—and should—regulate AI in criminal justice | Brookings
States should set guardrails around AI use in criminal justice, but an executive order from President Trump threatens to chill such efforts.
AI agents covered by EU rules 'mainly' as general-purpose AI systems, Virkkunen says
EU technology chief Henna Virkkunen stated that AI agents fall under the AI Act as general-purpose AI systems, with the European Commission continuing to assess associated risks.
Geographic Blind Spots in AI Control Monitors: A Cross-National Audit of Claude Opus 4.6
arXiv:2604.13069v1 Announce Type: new Abstract: Artificial intelligence (AI) control protocols assume that trusted large language model (LLM) monitors reliably assess proposed actions across all deployment contexts. This paper tests that assumption in the geographic dimension. We audit Claude Opus 4.6-the monitor specified in Apart Research's AI Control Hackathon Track 3 benchmark-for systematic gaps in its factual knowledge of the global AI landscape. We develop the AI Control Knowledge Framework (ACKF), a six-dimension thematic scheme, and operationalise it with 17 verified indicators drawn from the Global AI Dataset v2 (GAID v2): 24,453 indicators across 227 countries published on Harvard Dataverse. A five-category response classification scheme distinguishes verifiable fabrication (VF) from honest refusal (HR); logistic regression with country-clustered standard errors combined with difference-in-differences (DiD) estimation quantifies geographic disparities in monitor accuracy across 2,820 country-metric-year observations. Contrary to our initial hypothesis, Claude Opus 4.6 produces higher fabrication rates for Global North queries than for Global South counterparts-a pattern consistent with a partial-knowledge mechanism in which the model attempts answers more frequently for Global North contexts but commits to incorrect values. This fabrication profile constitutes an exploitable vulnerability, where an adversarial AI system could frame harmful actions in governance or public attitude terms to reduce the probability of detection. This study provides the first cross-national, multi-domain audit of an AI control monitor's geographic knowledge gaps, with direct implications for the design of control protocols.
South Korea moves to rationalize regulatory system under presidential body
South Korea is overhauling its regulatory framework by elevating its decision-making body to a presidential-level committee to better manage AI and other strategic industries.
Making AI Compliance Evidence Machine-Readable
arXiv:2604.13767v1 Announce Type: new Abstract: AI Assurance -- producing the machine-readable evidence required to demonstrate compliance with AI governance frameworks -- has mature policy scaffolding but lacks the infrastructure to operationalize it. Organizations building high-risk AI systems under the EU AI Act face a gap: frameworks such as the EU AI Act, ISO/IEC 42001, and NIST AI RMF specify what to assure but provide no executable format for how. This paper proposes OSCAL -- the NIST standard adopted for FedRAMP cybersecurity compliance -- as a candidate interchange format for AI governance, complementing rather than replacing the emerging JTC21 standards stack. We define 16 property extensions covering lifecycle phases, enforcement semantics, risk traceability, and risk-acceptance justification, and present a three-layer Compliance-as-Code architecture (policy, evidence, enforcement) that generates assurance evidence as a byproduct of model training. The SDK produces native OSCAL Assessment Results validated against the NIST JSON schema. We test the approach on two Annex III high-risk systems: a credit scoring model and a medical imaging segmentation system. The architecture and reference implementation are open-source under Apache 2.0.
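The "evidence as a byproduct of training" idea can be illustrated with a toy emitter that wraps run metrics in an OSCAL-flavoured assessment-results document. The field names below follow the general shape of OSCAL JSON but are simplified assumptions for illustration, not the paper's SDK, its 16 property extensions, or the full NIST schema:

```python
# Toy "Compliance-as-Code" emitter: wrap training-run findings in a minimal,
# OSCAL-flavoured assessment-results JSON. Keys are simplified assumptions,
# not the full NIST OSCAL schema.
import json
import uuid
from datetime import datetime, timezone

def emit_assessment_results(system_name, findings):
    """Build a simplified assessment-results document from run findings."""
    return {
        "assessment-results": {
            "uuid": str(uuid.uuid4()),
            "metadata": {
                "title": f"Assurance evidence for {system_name}",
                "last-modified": datetime.now(timezone.utc).isoformat(),
                "version": "0.1.0",
            },
            "results": [
                {
                    "title": f.get("control", "unknown-control"),
                    "description": f["observation"],
                    # Hypothetical property extension: lifecycle phase tag
                    "props": [{"name": "lifecycle-phase", "value": f["phase"]}],
                }
                for f in findings
            ],
        }
    }

findings = [
    {"control": "risk-traceability", "phase": "training",
     "observation": "validation metric logged against risk register entry R-12"},
]
doc = emit_assessment_results("credit-scoring-model", findings)
print(json.dumps(doc, indent=2)[:80])
```

The point of the pattern is that the evidence document is generated by the pipeline itself, so it stays in sync with what actually ran rather than being written up after the fact.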
Make crappy moves around AI and face voter backlash, govts warned
When the taxpayers are wondering whose side you are on... Britain's government faces a public backlash against AI unless it can show ordinary people that they stand to benefit from its push to inject the technology into every area of the UK in the name of growth.…
Making AI operational in constrained public sector environments
The AI boom has hit across industries, and public sector organizations are facing pressure to accelerate adoption. At the same time, government institutions face distinct constraints around security, governance, and operations that set them apart from their business counterparts. For this reason, purpose-built small language models (SLMs) offer a promising path to operationalize AI in…
Digital platforms called to chip in on EU pledges on AI usage in elections
The European Commission has begun preparing guidance on the use of AI in elections, launching a targeted stakeholder consultation with a closed-door meeting on April 23.
Exclusive: Jeremy Renner bets on the tech that could have saved his life faster. ‘There’s 150 people that are responsible for me not dying’
Renner says he hates AI himself, but then again, he’s an actor: “When AI is used as a tool ... this is the most powerful, most magnificent tool.”
IRS considers using Palantir for smarter audits
The IRS is exploring the use of Palantir's technology to help decide which taxpayers should be flagged for audits.
US states can't account for datacenter tax breaks. Literally
Report says authorities are flouting rules by failing to disclose revenue lost to server farm subsidies Many US states and local authorities are violating generally accepted accounting principles (GAAP) by failing to disclose revenue lost to datacenter tax subsidy schemes, according to Good Jobs First.…
Federal agencies skirt Trump's Anthropic ban to test its advanced AI model, Politico reports
April 14 (Reuters) - Federal agencies and government officials are quietly sidestepping U.S. President Donald Trump's ban on Anthropic to test its advanced AI model, Politico reports.
UK told its Big Tech habit is now a national security risk
Open Rights Group says years of reliance on US giants have left Britain exposed. Britain has spent years wiring its public sector into US Big Tech, and a new report says that dependence could quickly become a national security headache.…
Woman jailed for 6 months after facial recognition flagged her as a suspect
HERMITAGE, Pa. — To investigators, the case against Kimberlee Williams appeared straightforward. "It's very obvious it's you," an officer in Montgomery County, Maryland, said to Williams, who was handcuffed to a table in the police department.
Cross-Cultural Simulation of Citizen Emotional Responses to Bureaucratic Red Tape Using LLM Agents
arXiv:2604.12545v1 Announce Type: cross Abstract: Improving policymaking is a central concern in public administration. Prior human subject studies reveal substantial cross-cultural differences in citizens' emotional responses to red tape during policy implementation. While LLM agents offer opportunities to simulate human-like responses and reduce experimental costs, their ability to generate culturally appropriate emotional responses to red tape remains unverified. To address this gap, we propose an evaluation framework for assessing LLMs' emotional responses to red tape across diverse cultural contexts. As a pilot study, we apply this framework to a single red-tape scenario. Our results show that all models exhibit limited alignment with human emotional responses, with notably weaker performance in Eastern cultures. Cultural prompting strategies prove largely ineffective in improving alignment. We further introduce RAMO, an interactive interface for simulating citizens' emotional responses to red tape and for collecting human data to improve models. The interface is publicly available at https://ramo-chi.ivia.ch.
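The alignment measurement the abstract describes (comparing an LLM's emotional responses against human baselines across cultures) could, in its simplest form, score the distance between two emotion distributions. The emotion categories and all numbers below are invented for illustration; the paper's actual framework and metrics may differ:

```python
# Minimal alignment score between a model's emotion distribution and a
# human baseline for one red-tape scenario (categories and values invented).

def alignment(human, model):
    """1 minus total variation distance over the human emotion categories."""
    tvd = 0.5 * sum(abs(p - model.get(cat, 0.0)) for cat, p in human.items())
    return 1.0 - tvd

human_east = {"frustration": 0.55, "resignation": 0.30, "anger": 0.15}
model_east = {"frustration": 0.40, "resignation": 0.15, "anger": 0.45}
human_west = {"frustration": 0.45, "resignation": 0.10, "anger": 0.45}
model_west = {"frustration": 0.40, "resignation": 0.15, "anger": 0.45}

print(f"East alignment: {alignment(human_east, model_east):.2f}")
print(f"West alignment: {alignment(human_west, model_west):.2f}")
```

In this made-up example the East score comes out lower than the West score, mirroring the abstract's finding of weaker alignment in Eastern cultures.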
RED ALERT: Tennessee is about to make building chatbots a Class A felony
Proposed legislation in Tennessee could classify the development of certain chatbots as a Class A felony, carrying severe prison sentences.
PolicyLLM: Towards Excellent Comprehension of Public Policy for Large Language Models
arXiv:2604.12995v1 Announce Type: cross Abstract: Large Language Models (LLMs) are increasingly integrated into real-world decision-making, including in the domain of public policy. Yet, their ability to comprehend and reason about policy-related content remains underexplored. To fill this gap, we present PolicyBench, the first large-scale cross-system benchmark (US-China) evaluating policy comprehension, comprising 21K cases across a broad spectrum of policy areas, capturing the diversity and complexity of real-world governance. Following Bloom's taxonomy, the benchmark assesses three core capabilities: (1) Memorization: factual recall of policy knowledge, (2) Understanding: conceptual and contextual reasoning, and (3) Application: problem-solving in real-life policy scenarios. Building on this benchmark, we further propose PolicyMoE, a domain-specialized Mixture-of-Experts (MoE) model with expert modules aligned to each cognitive level. The proposed models demonstrate stronger performance on application-oriented policy tasks than on memorization or conceptual understanding, and yield the highest accuracy on structured reasoning tasks. Our results reveal key limitations of current LLMs in policy understanding and suggest paths toward more reliable, policy-focused models.
US FTC shouldn’t be lead AI enforcer, Chairman Ferguson says
US Federal Trade Commission Chairman Andrew Ferguson stated the agency should not become the general all-purpose AI regulator.
Cooperate to detect ‘LLM grooming,’ EU official tells AI companies
EU official Chiara Zannini warned AI companies to cooperate on detecting 'LLM grooming,' a practice of manipulating chatbot outputs. Reports show Russia and China are increasingly using AI for manipulation.
Hong Kong's Judiciary To Consult Legal Profession On AI Guidelines
The Hong Kong Judiciary is drafting guidelines on the use of generative artificial intelligence for legal practitioners and court users, with plans to consult the legal profession later this year.
Philippines Launches Initiative To Counter Disinformation And Deepfakes
The Philippines on Monday launched an initiative to combat online disinformation and AI-generated deepfakes, formalizing coordination among justice, communications and technology agencies.
Veterans Affairs has lost track of software licenses amid $985M bill
Department putting systems in place to manage 'restrictive licensing practices'.
France’s digital directorate dumping Windows desktops, adopting Linux instead
Plans call for a move away from American software and hardware.
US AI Framework and the Politics of Deregulation — Bloomsbury Intelligence and Security Institute (BISI)
On 20 March 2026, the Trump Administration released its National Policy Framework for Artificial Intelligence, outlining priorities including innovation, free speech, national security, and workforce development. This approach risks leaving the US AI regulatory environment increasingly fragmented and politically contested.
Ireland to invest €17m in leading facilities for AI, medtech and more
The new facilities will serve projects in advanced materials, semiconductors and chips, high-speed communications, medical devices, geoengineering, and AI and high-performance computing.
Biden's AI Strategy: From Safety to National Dominance
The US government wants to end the fragmented patchwork of state-level AI regulation with a new federal legal framework and massively expand infrastructure, securing its technological leadership through major investment. The goal: safeguarding America's technological dominance. A unified law instead of a patchwork: on 20 March 2026, the…
Anthropic briefed Trump administration on Mythos
An Anthropic co-founder confirmed that the company provided a briefing to the Trump administration regarding its Mythos project.
Anthropic’s Mythos a game-changer, NCSC chief tells Oireachtas
'We’re in a race whether we choose to accept it or not,' Richard Browne said.
Google hosts AI for the Economy Forum in Washington D.C.
At the AI for the Economy Forum in Washington D.C. (Apr 14, 2026), Google said it will share new investments in research to better understand how AI will change the economy, and training to help workers prepare, wrote James Manyika, SVP, Research, Labs, Technology & Society. Google is investing in research to help governments and researchers make smart decisions about AI, providing training to give people the skills they need in a changing economy, and funding programs to train healthcare workers and create apprenticeships in high-demand fields. The company says it is committed to partnerships and investments to ensure everyone benefits.
White House Ramps up Efforts to Counter Risks from Powerful AI Tools as Models Grow More Capable - ITP.net
The White House is accelerating efforts to address emerging risks from powerful artificial intelligence systems, as the next wave of advanced AI models raises concerns around cybersecurity and critical infrastructure protection. Reports indicate that senior U.S. officials are working across agencies and with private-sector leaders to identify vulnerabilities that could be exploited by increasingly capable AI tools. At the centre of this push is National Cyber Director Sean Cairncross, who is leading a coordinated government response aimed at strengthening defences across key systems, including energy, finance, and digital infrastructure. The urgency reflects a broader shift: AI models are evolving from tools that assist with tasks…
Realistically, Where Can and Can’t You Regulate AI?
From Bradley Tusk's Substack (Apr 14, 2026): The title of this column may be misleading, because "realistic" and "thoughtful" are typically not the words that drive most legislation and policy (there's only one phrase that matters 99% of the time: politically advantageous). However, for the first time since I've worked in politics, we have something that is societally, technologically and economically transformational, that has the ability to solve existential problems, and that also has the ability to destroy humanity itself. Clearly, thoughtful regulation is needed. But because AI is so new and developing so quickly, exactly what you could even regulate right now in a way that makes sense, and does more than propel a tweet and a press release, is unclear at best.
GAO Finds AI Acquisition Challenges in Government
Federal agencies face barriers in AI procurement, from talent gaps to cost complexity, according to a new report from the Government Accountability Office. Federal agencies are expanding their use of artificial intelligence, but the report identified key challenges in buying and deploying the technology. Leaders from GAO, the Pentagon and other agencies will explore the challenges of deploying AI in federal and mission-critical systems at the Potomac Officers Club's 2026 Digital Transformation Summit on April 22.
China now the ‘good guy’ on AI as Trump takes ‘wild west’ approach, MPs told
Experts say China is backing attempts at global governance, while the US has set up a race between profit-hungry companies. China is now the "good guy" on AI rather than Donald Trump's US, where the technology is being pursued in a dangerous "wild west" manner, a former UN and UK government adviser has told MPs. Prof Dame Wendy Hall, who was a member of the UN's AI advisory board and co-wrote a review of…
AI & Tech Brief: Radical activists and AI safety - The Washington Post
By Benjamin Guggenheim, The Washington Post.
- AI safety advocates and industry groups are casting blame on each other as the nation reckons with bouts of violence related to AI and data centers.
- While Congress remains preoccupied, the fight over AI regulation is spilling out into states including Colorado and Illinois. (Plus, Anthropic hires Ballard Partners to lobby.)
- Should the outputs of AI chatbots be treated like free speech? Court decisions will determine whether AI companies are shielded from liability associated with the outputs of their models.
Tom Duff Gordon takes charge of OpenAI’s policy efforts across EMEA - CXO Digitalpulse
April 14, 2026. OpenAI has appointed Tom Duff Gordon, formerly Vice President of International Policy at Coinbase, as its new Head of Policy for Europe, the Middle East, and Africa. The move reflects OpenAI's growing focus on navigating complex regulatory environments as artificial intelligence adoption accelerates globally. Gordon spent nearly four years at Coinbase, where he led international policy efforts and worked closely with…
Sarawak Unveils AI-Powered Public Services
Sarawak is advancing its digital economy with AI, launching a virtual assistant, Dayang, and the Sarawak Artificial Intelligence Centre to guide policy and adoption. The initiative aims to improve public services while emphasizing inclusive design and local capability-building, with a focus on rural communities and data sovereignty.
France’s digital directorate dumping Windows desktops, adopting Linux instead
Après ça, le déluge, as plans call for a move away from plenty more American software and hardware. France's Interministerial Directorate for Digital Affairs (DINUM) will drop Windows desktops and adopt Linux instead…
Approvals for Nvidia and AMD AI chip exports to China stall under government bottleneck — 20% staff turnover hobbles Bureau of Industry and Security | Tom's Hardware
Under Secretary Jeffrey Kessler is personally signing off on nearly every license.
Lessons from America's AI policy | The Express Tribune
In March 2026, the White House published a new policy framework for Artificial Intelligence (AI), comprising a set of legislative recommendations for the US Congress. It reflects the current administration's philosophy that AI regulation should not impede competitiveness, and aims to strike a balance between innovation, economic growth and safety. Meanwhile, in March 2026, Punjab approved its first Artificial Intelligence (AI) Roadmap after the federal government ...
Europe is dismantling its own rulebook to compete with America
The EU’s Digital Omnibus weakens the AI Act and GDPR in the name of competitiveness. But deregulation won’t fix Europe’s real problems.
UN Panel On AI Begins Work On Global Impact Study
The UN's Independent International Scientific Panel on Artificial Intelligence - the first global body of its kind - is gearing up for its inaugural in-person summit.
Sweden's privacy watchdog to see budget increase for AI Act duties
The Swedish government proposed a SEK 1 million budget increase for its Data Protection Authority to support new market surveillance duties under the EU's AI Act.
States help buoy US’s ranking in global AI policy index
The US sits in the middle of the pack in a prominent annual report on AI policy, with state-level lawmaking potentially offsetting a federal shift toward deregulation.
AI-Powered Cyberattack Exposes Records
A sophisticated AI-assisted cyberattack compromised nine Mexican government agencies, exfiltrating hundreds of millions of citizen records between December 2025 and February 2026. Despite the advanced tools, traditional vulnerabilities like outdated software and weak credentials were exploited, underscoring the need for fundamental security practices.
Policy and Governance | The 2026 AI Index Report | Stanford HAI
Congress should require analytic integrity and AI literacy across government intelligence
Artificial intelligence raises the stakes. Generative tools can draft quickly, summarize broadly, and sound authoritative even when they are wrong. They can hallucinate facts, flatten nuance, and conceal analytical weakness behind polished language.
Artificial Intelligence: Impact on Employment - Hansard
Debated on Monday 13 April 2026. Question (3.04pm): To ask His Majesty's Government what assessment they have made of the impact of developments in artificial intelligence on current levels of employment. This debate is sourced from the uncorrected (rolling) version of Hansard and is subject to correction.
The missing work of co-ordination in Canada’s skills ecosystem
Filling Canada's skills gap requires better co-ordination, stronger intermediaries and long-term workforce planning.
COUNTERPOINT: AI needs rules — and states cannot be forced to wait
By J.B. Branch (InsideSources/TNS), April 13, 2026. Congress has not enacted meaningful artificial intelligence legislation, yet some in Washington insist that states should be blocked from legislating on AI. This argument asks Americans to accept that the federal government will do nothing to regulate AI and that, meanwhile, states must also do nothing. For some, this passes as "sound governance." In reality, it is a failure of leadership. States have often served as the country's first responders when new products begin…
EU Appoints Former Von der Leyen Aide to Top Competition Job
The European Commission has appointed Anthony Whelan, a former aide to President Ursula von der Leyen, to lead its competition policy arm amid growing clashes with the US over regulating Big Tech and a push for merger rules that boost European giants.
UK tech minister could face challenge over AI use in online safety consultation
Campaigners are threatening legal action against the UK government over the use of AI to process responses in a consultation on children's online safety, citing privacy and bias concerns.
Challenges linked to agentic AI outpace existing oversight, say UK regulators
The Digital Regulation Cooperation Forum stated that autonomous systems are blurring lines of liability and control, prompting calls for cross-regulator coordination and outcome-based rules.
AI Can Be Efficient But Isn't Likely to Replace Judicial Decision Making
Artificial intelligence is unlikely to replace judicial decision making but the technology may help judges by creating efficiencies, and the judiciary should be open to using it, US District Judge Amit Mehta said Friday.
POINT: Congress must embrace sensible federal guidelines
April 13, 2026. "The main thing is to keep the main thing the main thing," famously said Stephen Covey, the renowned organizational consultant. With AI legislation, what matters most is common sense. That means first not killing, or stagnating, the benefits of AI through 50 states' cumbersome and contradictory laws regarding AI model development and how it runs. Accompanying this should be related, sensible federal guidelines.
U.S. White House releases National Policy AI Framework - InfoGovANZ
On 26 March 2026, the White House released its National Policy Framework for Artificial Intelligence together with legislative recommendations, which builds on America's AI Action Plan released in July 2025 and the 11 December 2025 Executive Order, Ensuring a National Policy Framework for Artificial Intelligence. Taken together, these documents set out the […]
AI companies know they have an image problem. Will funding policy papers and thinktanks dig them out?
The aggressive effort by major players aims to reshape the narrative as polls show increasing public disapproval of AI. OpenAI made a surprise announcement this week – not an update to ChatGPT or another multibillion-dollar datacenter – but a policy paper that called for a reimagining of the social contract based around "a slate of people-first ideas". It's the latest move in an aggressive effort by the major AI players to reshape the narrative around their industry, as polls show public disapproval of AI increasing. OpenAI's 13-page paper, titled Industrial Policy for the Intelligence Age, follows its surprise acquisition of tech-friendly podcast TBPN and its announcement of plans to open a Washington DC office that will feature a dedicated space called the OpenAI workshop for non-profits and policymakers to learn about and discuss the company's technology.
UN Panel Advocates for Human-Centric AI Governance
The UN's new Independent International Scientific Panel on AI is set to release its first report, emphasizing the importance of augmented intelligence over full automation. The panel will focus on ethics, governance, and the need for AI models that reflect diverse cultures, avoiding biases and ensuring transparency.
The $500 Billion Question: Can Small Nations Afford Their Own AI? | by Matpcul | Apr, 2026 | Medium
Sovereign AI sounds great in theory, but the math tells a different story. By Matthew Culwell, Chickasaw Citizen, Doctoral Candidate in Strategic Leadership (8 min read). Let me throw some numbers at you. Training GPT-4 reportedly cost over $100 million. Building the data center infrastructure to support it cost billions more. Microsoft has committed over $80 billion to AI infrastructure in a single year. Google, Amazon, and Meta are spending at similar scales. The global AI infrastructure buildout is projected to exceed $500 billion by 2027. Now imagine you're the technology minister of a country with a GDP of $50 billion. Or…
AI data centers may become a very good midterm issue
…Blake Moore in September 2025, when he described proposed legislation, the Unleashing Low-Cost Rural AI Act, which would require the U.S. Departments of Energy, Interior, and Agriculture to study the impact…
AI could vastly streamline policing. Skeptics urge caution. - The Washington Post
Police agencies are increasingly using artificial intelligence to help their criminal investigations. The results can be dramatic, but skeptics urge caution.
Both parties in Congress want to regulate AI. Here's where they differ.
Putting humans at the centre: UN AI panel begins work on global impact study | UN News
By Conor Lennon and Khaled Mohamed, UN News, 11 April 2026. The UN's Independent International Scientific Panel on AI – the first global body of its kind – is gearing up for its inaugural in-person summit. Tasked with navigating the volatile intersection of innovation and ethics, this group of world-leading experts is launching a landmark study…
Labor Secretary Faces Civil Rights Complaints From Department Staff
Three employees described a hostile work environment under Labor Secretary Lori Chavez-DeRemer.
South Korea to map public-sector AI training data in first nationwide survey
South Korea's science ministry is launching a government-wide survey of AI training data held by public bodies to map assets and improve their use for AI development.
UK Invests in AI-Powered Crime Mapping
UK to spend £15M on AI-powered crime mapping in knife violence crackdown. Home Office hopes tech will help cops target hotspots as ministers push to halve offenses.
We Need Strong Preconditions For Using Simulations In Policy
arXiv:2604.07838v1 Announce Type: new Abstract: Simulations, and more recently LLM agent simulations, have been adopted as useful tools for policymakers to explore interventions, rehearse potential scenarios, and forecast outcomes. While LLM simulations have enormous potential, two critical challenges remain understudied: the dual-use potential of accurate models of individual or population-level human behavior and the difficulty of validating simulation outputs. In light of these limitations, we must define boundaries for both simulation developers and decision-makers to ensure responsible development and ethical use. We propose and discuss three preconditions for societal-scale LLM agent simulations: 1) do not treat simulations of marginalized populations as neutral technical outputs, 2) do not simulate populations without their participation, and 3) do not simulate without accountability. We believe that these guardrails, combined with our call for simulation development and deployment reports, will help build trust among policymakers while promoting responsible development and use of societal-scale LLM agent simulations for the public benefit.
OpenAI Supports Bill Limiting AI Liability in Mass Deaths or Financial Crises - El-Balad.com
OpenAI has recently expressed its support for a new Illinois bill aimed at limiting liability for AI developers in severe incidents. This legislation, known as SB 3444, focuses on cases involving serious societal harms, such as mass casualties or substantial financial loss.
Florida launches investigation into OpenAI
The Florida Attorney General has initiated an investigation into OpenAI regarding safety and security concerns.
South Korea begins nationwide AI data census to unlock public datasets
South Korea's science ministry is launching the first government-wide census of AI training data to secure a steady pipeline of high-quality inputs for AI development.
AI Is the New Front Door to Government. The Bots Need Help.
If artificial intelligence tools struggle to find official guidance, too often the answers they generate are wrong. Governments need to make their information readable by machines as well as humans.
France to ditch Windows for Linux to reduce reliance on US tech
France is moving to replace Windows with Linux on government desktops to decrease its dependence on American technology providers.
Cyber Security In Focus As Bank CEOs Race to DC
US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to give them an urgent warning: an artificial intelligence tool from Anthropic PBC marks the beginning of a new era of cybersecurity. The April 7 meeting in Washington was focused on Mythos, a new AI model that Anthropic says is so good at finding vulnerabilities in software and computer systems…
South Africa unveils draft AI policy, proposes new institutions and incentives | Reuters
Summary:
- Policy proposes new institutions for AI governance and oversight
- Plans incentives like tax breaks and grants
- Prioritises investment in data centres and networks
- Notes concerns over reliance on foreign infrastructure
JOHANNESBURG, April 10 (Reuters) - South Africa on Friday unveiled a draft national AI policy, seeking public comment on sweeping proposals to regulate and accelerate AI adoption. The policy, published by the Department of Communications and Digital Technologies, aims to position South Africa as a continental leader in AI innovation while addressing ethical, social and economic challenges. It also marks a significant step in…
Putin: the AI commission must become a headquarters for developing this technology - RIA Novosti, 10.04.2026
MOSCOW, 10 April (RIA Novosti) - The commission on artificial intelligence must become a genuine headquarters for the development of this technology in the country, Russian President Vladimir Putin said. Putin is holding a meeting on Friday…
82% of U.S. government agencies have adopted AI agents, IDC survey finds
82% of U.S. government agencies now use AI agents, per an April 2026 IDC study of 118 officials, and 60% believe they are ahead of the private sector in adoption. Published Apr 10, 2026.
Vance, Bessent questioned tech giants on AI security before Anthropic's Mythos release, CNBC reports - Reuters
April 10 (Reuters) - U.S. Vice President JD Vance and Treasury Secretary Scott Bessent questioned leading tech CEOs about AI model security and how to respond to cyber attacks a week before…
Assessing the Feasibility of a Video-Based Conversational Chatbot Survey for Measuring Perceived Cycling Safety: A Pilot Study in New York City
arXiv:2604.07375v1 Announce Type: new Abstract: Bicycle safety is important for bikeability and transportation efficiency. However, conventional surveys often fall short in capturing how people actually perceive cycling environments because they rely heavily on respondents' recall rather than in-the-moment experience. By leveraging large language models (LLMs), this study proposes a new method of combining video-based surveys with a conversational AI chatbot to collect human perceptions of cycling safety and the reasons behind these perceptions. The paper developed the AI chatbot using a modular LLM architecture, integrating prompt engineering, state management, and rule-based control to support the structure of human-AI interaction. This paper evaluates the feasibility of the proposed video-based conversational chatbot using complete responses from sixteen participants to the pilot survey across nine street segments in New York City. The method feasibility was assessed using a seven-point scale rating for user experience (i.e., ease of use, supportiveness, efficiency) and a five-point scale for chatbot usability (i.e., personality, roboticness, friendliness), yielding positive results with mean scores of 5.00 out of 7 (standard deviation = 1.6) and 3.47 out of 5 (standard deviation = 0.43), respectively. The data feasibility was assessed using multiple techniques: (1) Natural language processing (NLP), such as KeyBERT, for overall safety and feature analysis to extract built-environment attributes; (2) K-means clustering for semantic analysis to identify reasons and suggestions; and (3) regression to estimate the effects of built-environment and demographic variables on perceived safety outcomes. The results show the potential of AI chatbots as a novel approach to collecting data on human perception, behavior, and future visions for transport planning.
Hungary's election is flooded with AI deepfakes — and nobody is stopping them - EU Perspectives
10.04.2026. Fear travels faster than fact — and in Hungary, it arrives on your phone. A groom is dragged from the altar straight to the battlefield, where he dies — one of many AI-generated videos flooding…
At David Sacks’s Behest, White House Barrels Forward on Industry-Friendly AI Policy
In the run-up to the midterms, the Trump administration is planning to emphasize AI’s economic benefits, but some allies are warning of political blowback.
From experimentation to engagement: on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises
arXiv:2604.06219v1 Announce Type: new Abstract: Across the Global North, calls for participatory artificial intelligence (AI) to improve the responsible, safe, and ethical use of AI have increased, particularly efforts that engage citizens and communities whose well-being and safety may be directly impacted by AI and other algorithmic tools. These initiatives include surveys, community consultations, citizens' councils and assemblies, and co-designing AI models and projects. Far fewer efforts, however, have been made in the Global South, particularly in contexts related to humanitarian crises and forced displacement, where the deployment of AI and algorithmic tools is accelerating. In this paper, we critically examine participatory AI methods and their limitations in these contexts and explore the opinions and perceptions of AI held by displaced and crisis-affected communities. Based on a pilot exercise with communities living in Kakuma Refugee Camp in northwestern Kenya, we find important limitations in some participatory AI approaches which, if used in humanitarian contexts, could increase risks of so-called 'participation washing' and algorithmic harm. We argue that these risks are not predominantly driven by varying levels of understanding and awareness of AI but more closely linked to the fundamental power dynamics embedded within the humanitarian sector: between humanitarian aid recipients, service providers, donor governments, and host nations, as well as the power differentials and incentives that exist between AI companies and humanitarian actors. These structural conditions make the case not only for more rigorous participatory methods, but for independent governance architecture capable of holding humanitarian AI to account.
Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’
Opting out of personal data use won't be an option because the Minister says that's a 'very big obstacle' to AI adoption.
UK's AI Development Faces Uphill Battle
UK's grand plan to fuel AI with public data faces uphill battle. Agents will look for info elsewhere unless official sources sharpen up.
AI pioneer Stephen Thaler sues Indian government over copyright deadlock
Stephen Thaler has filed a petition in the Delhi High Court challenging India's stance on non-human authorship, seeking copyright recognition for art created by his DABUS system.
Florida AG opens probe into OpenAI ahead of potential IPO
Florida Attorney General James Uthmeier on Thursday launched an investigation into OpenAI and its chatbot ChatGPT, as the artificial intelligence firm prepares for an IPO that could value it at up to $1 trillion.
Market regulator in Chinese city of Hangzhou explores AI-agent use, flags risk
The market supervising bureau in the Chinese city of Hangzhou is exploring the use of artificial intelligence agents in its work while warning of associated risks.
Governing frontier general-purpose AI in the public sector: adaptive risk management and policy capacity under uncertainty through 2030
arXiv:2604.06215v1 Announce Type: new Abstract: The governance of frontier general-purpose artificial intelligence has become a public-sector problem of institutional design, not merely a technical issue of model performance. Recent evidence indicates that AI capabilities are advancing rapidly, though unevenly, while knowledge about harms, safeguards, and effective interventions remains partial and lagged. This combination creates a difficult policy condition: governments must decide under uncertainty, across multiple plausible trajectories of progress through 2030, and in environments where adoption outcomes depend on organizational routines, data arrangements, accountability structures, and public values. This article argues that public governance for frontier AI should be based on adaptive risk management, scenario-aware regulation, and sociotechnical transformation rather than static compliance models. Drawing on the International AI Safety Report 2026, OECD foresight and policy documents, and recent scholarship in digital government, the article first reconstructs the conceptual foundations of the 'evidence dilemma', differentiated AI risk categories, and the limits of prediction. It then examines how AI adoption in government depends on organizational redesign, public-sector institutional dynamics, and data collaboration capacity. On that basis, it proposes an adaptive governance framework for public institutions that integrates capability monitoring, risk tiering, conditional controls, institutional learning, and standards-based interoperability. The article concludes that effective AI governance requires stronger policy capacity, clearer allocation of responsibility, and governance mechanisms that remain robust across divergent technological futures.
OpenAI made economic proposals — here’s what DC thinks of them
An analysis of how Washington D.C. is reacting to the economic policy proposals put forward by OpenAI.
London mayor takes aim at social media companies over disinformation
Research published by City Hall found that accounts aligned with the extreme right are among those smearing the capital online.
OpenAI pauses UK data centre project over regulation, costs - Reuters
ChatGPT-maker OpenAI is pausing its main data centre project in Britain over regulation and costs, Reuters reported exclusively on April 9. OpenAI says it will continue to explore Stargate UK, while the UK government says it is continuing to work with OpenAI and other leading AI companies. The PM has pledged to make the UK an AI superpower, and OpenAI has been boosting its global investment in data centres.
China issues guidelines for AI ethics governance | The Manila Times
CHINA has issued a trial guideline on ethics reviews and services for AI technology, the Ministry of Industry and Information Technology said on April 3.
Viksit Bharat 2047: The role of AI in India’s growth strategy- The Week
In this context, we recommend the development of a new technology ecosystem for AI innovation for a Viksit Bharat by 2047, consisting of three elements: 30 new Global Research Universities, supporting 30 new Research Parks that house technology enterprises, all located in 30 new University Townships, or Viksit Bharat Nagars. Together, these will become AI innovation hubs.
AI Will Test Governments on Jobs, Training, and Public Trust | Stanford Graduate School of Business
Researchers and former policymakers discuss how governments can manage workplace disruption that's arriving with unprecedented speed.
Ireland Ai Job Losses Report as the AI Shift Reaches a Turning Point - El-Balad.com
Ireland's AI job losses report is now more than a headline about future disruption; it marks a turning point for how Ireland thinks about work, income, and public finances. A new ESRI and Department of Finance research scenario suggests AI could affect up to 7% of current jobs.
AI's sinister takeover of British politics - New Statesman
When James Phillips met with a group of MPs in 2021 to warn them that AI models could be imbued with someone else's politics, he was asked: "What's AI?" After ChatGPT launched the following year, the British government preoccupied itself with a new question: who gets to be in charge of this new policy area? One person who was in No 10 at the time told me that a "tense" competition began.
AI & Job Creation: World Bank on Productive Use: Rediff Moneynews
World Bank official Franziska Ohnsorge says countries should remove AI adoption obstacles to boost business productivity and job creation. New Delhi, Apr 9 (PTI): Countries should work towards removing obstacles to the adoption of artificial intelligence so that businesses can make productive use of AI and create more jobs, World Bank South Asia Chief Economist Franziska Ohnsorge said.
The ATOM Project
The ATOM Project aims to bolster U.S. open-source AI development to remain competitive with international advancements, focusing on transparency and innovation.
The shift to containment in modern AI cybersecurity
Whitepaper, April 9, 2026, by Eve Goode. ISJ hears exclusively from Trevor Dearing, Director of Critical Infrastructure at Illumio. The European Parliament recently announced that it was disabling the AI features on tablets it provides to lawmakers. Tools such as writing aids and virtual assistants were blocked because the AI used cloud services, sending data off the device to perform tasks that could be handled locally. The Parliament's cybersecurity and personal data protection teams said they were assessing the full extent of the data shared with service providers.
Transatlantic cooperation on AI and national security - Atlantic Council
Bottom lines up front: The links between AI and national security are a key test of the transatlantic relationship. Barriers to closer cooperation on AI include the unpredictability of US policy, European countries' positions on China, and the limited flexibility in Washington's priorities for accommodating European interests. Other tech sectors such as legacy chips, where EU and US positions and interests are more closely aligned than on AI, offer potential for pragmatic cooperation. US export controls on AI, while largely targeted at China, have profound implications for Europe.
White House Releases National AI Policy Framework - FABBS
The Trump administration proposed a national framework for artificial intelligence (AI) that encourages Congress to establish a single, unified set of rules rather than leaving AI governance to the states. The framework seeks to minimize barriers to innovation, leverage existing federal agencies for oversight, support widespread access to AI technologies, and protect children. Many stakeholders question whether the guidelines provide sufficient safeguards to address the risks and challenges posed by AI.
UAE Workshop on AI
The UAE showcased its leadership in AI governance at a workshop, emphasizing the role of specialized infrastructure and regulatory frameworks in driving AI adoption and transforming employment.
Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’
Opting out of personal data use won't be an option because Minister says that's a 'very big obstacle' to AI adoption Japan’s Minister for Digital Transformation Hisashi Matsumoto has declared the nation will become the easiest place in the world to develop AI apps, thanks to legal changes that mean organizations won’t need to secure consent to use some personal information.…
[AI DAILY NEWS RUNDOWN] The Liability Shift: AI Psych-Meds, Target's Chatbot Trap, and the OpenAI Expose
OpenAI just published a 13-page policy document with ideas to help society navigate superintelligence and its societal impacts, asking Washington to tax AI-driven profits, create a wealth fund, implement a 4-day workweek, and more.
Governance and Regulation of Artificial Intelligence in Developing Countries: A Case Study of Nigeria
arXiv:2604.06018v1 Announce Type: new Abstract: This study examines the perception of legal professionals on the governance of AI in developing countries, using Nigeria as a case study. The study focused on ethical risks, regulatory gaps, and institutional readiness. The study adopted a qualitative case study design. Data were collected through 27 semi-structured interviews with legal practitioners in Nigeria. A focus group discussion was also held with seven additional legal practitioners across sectors such as finance, insurance, and corporate law. Thematic analysis was employed to identify key patterns in participant responses. Findings showed that there were concerns about data privacy risks and the lack of enforceable legal frameworks. Participants expressed limited confidence in institutional capacity and emphasized the need for locally adapted governance models rather than direct adoption of foreign frameworks. While some expressed optimism about AI's potential, this was conditional on the presence of strong legal oversight and public accountability. The study contributes to the growing discourse on AI governance in developing countries by focusing on the perspectives of legal professionals. It highlights the importance of regulatory approaches that are context-specific, inclusive, and capable of bridging the gap between global ethical principles and local realities. These insights offer practical guidance for policymakers, regulators, and scholars working to shape responsible AI governance in similar environments.
The Politics of AI Regulation: Federal Government v. the States | Polsinelli - JDSupra
Key Takeaways: On December 11, 2025, President Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence."
Two AI Strategies, One Day: Challenging What We Think About East vs West
By Adam Chalmers and Elise Antoine, Global Policy Journal, 08 April 2026. Adam Chalmers and Elise Antoine introduce research that challenges the tidy story of western democracies converging in opposition to Chinese AI exceptionalism. On 20 March 2026, Scotland published its much-anticipated and updated national AI strategy. A few hours later, the United States released a National Policy Framework for Artificial Intelligence under the Presidential seal. Two anglophone democracies, the same technology, the same day. We ran both documents through x.Machina.
Industrial policy for the Intelligence Age
This paper lays out OpenAI’s early policy ideas for keeping people first as AI advances, focusing on prosperity, institutions, and social impact.
AI Policy at the Frontier: A Conversation with Nathan Calvin (Encode) and Fin Moorhouse (Forethought) - Harvard Law School
What does it take to govern a technology that might reshape the world within the decade? Answering that requires both big-picture thinking about where AI is heading and close engagement with the policy fights shaping it in the present. This event brings together two speakers who sit on opposite ends of that spectrum: Nathan Calvin is shaping frontier AI legislation in statehouses today, while Fin Moorhouse is thinking through what rapid AI progress could mean for the century ahead. Each will give a short talk, followed by a joint Q&A. In-person event (Harvard ID holders only); lunch will be served. Part of the AI Governance Speaker Series co-sponsored by AISST.
AI regulation is stalling as governments clash over control
Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time
Four Pages That Could Reshape American AI Policy | TechPolicy.Press
Perspective by Neil Chilson, head of AI policy at the Abundance Institute, Apr 8, 2026. Late last month the White House released its National Policy Framework for Artificial Intelligence. It's four pages long. Critics immediately dismissed it as empty.
Ohio man becomes first to be convicted under new AI statute for sexually explicit images
An Ohio man has been convicted under new legislation targeting the creation of sexually explicit AI-generated content.
Nigeria moves toward comprehensive AI regulation | IAPP
Nigeria is a strong proponent of AI regulation, as shown by its various attempts at regulating AI and the governance proposals in its National AI Strategy.
Exclusive: Inside OpenAI's case for an AI New Deal
The lead researcher of OpenAI's new policy blueprint argues that society needs a New Deal-like rethink of AI's role in the world before major disruptions occur.
Bridging India's AI Adoption Gap for National Resilience - The Times of India
AI is fast emerging as a foundational layer for national resilience, enabling governments and enterprises to anticipate disruptions, respond faster, and operate at scale. But while India has built some of the world's strongest digital public infrastructure (DPI), panelists at a discussion we had last week said the country's adoption of AI remains uneven, with government often ahead of industry.
EU AI simplification package risks 'loophole' in AI Act scope, civil society warns
A coalition of organizations warned that proposed changes to the EU's AI omnibus package could exclude key systems from high-risk rules, potentially weakening consumer protection.
The implementation of the EU AI Act with a focus on general-purpose AI models | Digital Watch Observatory
The European Union is progressing into the implementation phase of its Artificial Intelligence Act, with emerging obligations for providers of general-purpose AI models. Guidance from the European Commission and the AI Office outlines compliance expectations.
Big Tech shouldn't be writing the rules for AI - The Korea Times
MEXICO CITY—The ongoing dispute between Anthropic and U.S. President Donald Trump’s administration reveals something deeply troubling about the cur...
European Commission consultation closes on draft AI Act procedure rules | Digital Watch Observatory
Draft AI Act implementing rules on model access and procedural safeguards are moving forward as the Commission consultation closes.
Commentary: Trump’s AI Framework: What Does It Accomplish? | Redmond Spokesman
The White House’s National Policy Framework for Artificial Intelligence: what it means and what comes next | Consumer Finance Monitor
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence. This Framework contains a sweeping set of…
UAE Workshop: AI Revolutionizes Jobs, Calls for Skill Upskilling and Strategic Investment
The UAE showcased its leadership in AI governance at a workshop, emphasizing the role of specialized infrastructure and regulatory frameworks in driving AI adoption.
AI data centers a boon for US lifestyle and technology | Opinion - AOL
Opinion, AOL, April 8, 2026. As AI heats up globally, several state legislatures and U.S. Sen. Bernie Sanders are calling for a pause on AI data center development, citing risks of job loss, superintelligence concerns and risks to working people. Their concerns aren't baseless, but a moratorium is not a solution: such a proposal would put the U.S. at a disadvantage to China. If the United States is to win the AI race, global adoption is critical. While hardware and model innovation for top performance is significant, it is arguably more important that the development ecosystem is accessible and widespread enough to encourage AI developers to choose U.S.-based platforms.
Something to consider as some in Congress propose a data center moratorium, like this: https://www.sanders.senate.gov/press-releases/news-sanders-ocasio-cortez-announce-ai-data-center-moratorium-act/
British GovTech scale-up Fivium secures BGF investment to scale public sector software offering
London-based GovTech scale-up Fivium, a public sector software provider, has raised a "multi-million pound" minority investment from growth capital investor BGF to back its next phase of expansion. The funding will enable Fivium to further scale its SaaS platform and digital solutions offering, strengthening its go-to-market and product development capabilities.
It's Here: CHT's Robust Set of AI Solutions
The report identifies three domains where pressure actually shifts behavior in the tech ecosystem: norms (what society considers acceptable), laws (the rules that create accountability), and product design (the choices that determine how AI impacts people). Each principle in the report maps interventions across all three, because no single reform is sufficient on its own.
Speech by Vice Chair Jefferson on the economic outlook and the labor market - Federal Reserve Board
Thank you for the warm welcome. It is an honor to speak here at the University of Detroit Mercy. I spent most of my career as an economics professor.
Huawei Cloud Accelerates AI Adoption Across Thailand's Public Sector, Showcasing Real-World Impact in Government Services - The Korea Herald
BANGKOK, April 7, 2026 /PRNewswire/ -- Huawei Cloud is driving Thailand's digital transformation by enabling the large-scale deployment of artificial intelligence across government services.
What is Your Government's Responsible AI Activity? - ICTworks
The US government has a 91% AI non-compliance rate among the very systems most capable of harming people. At least it published a public list.
The Democratic Ontology Deficit: How AI Systems Fail to Represent What Democracy Requires
arXiv:2604.03865v1 Announce Type: new Abstract: Democratic public life depends on institutions that make roles, responsibilities, relationships, and purposes intelligible as lived orientation. Contemporary AI systems are trained on web-scale corpora and aligned for helpfulness, harmlessness, and honesty, but the representational structure of democratic institutional life has not been treated as an alignment target. This paper identifies and tests the democratic ontology deficit: the structural mismatch between the representational conditions democratic agency requires and the ontology contemporary AI systems are built to learn and reproduce. We apply representation engineering to three instruction-tuned models (Llama-2-13b-chat, Mistral-7B-Instruct-v0.2, and Meta-Llama-3-8B-Instruct), extracting reading vectors for civic reasoning and its four component primitives using contrastive stimuli. The model's default ontology is organized under independence rather than civic structure. The deepest deficit is in role: the model's representation of what a person is defaults almost entirely to individual rather than communal identity. Honesty, measured on the same model at the same layer using the same method, scores 0.707; civic role scores -0.047. The pattern replicates across architectures and training generations. These findings open a concrete research program for civic alignment using the tools the field already possesses.
Trump Administration’s National AI Policy Framework: Key Takeaways
April 7, 2026. The Trump administration has unveiled a comprehensive National Policy Framework for Artificial Intelligence, signaling a decisive push to move AI governance from a fragmented collection of state rules to a unified federal standard. The proposal, which builds upon an executive order issued last year, aims to streamline how the U.S. manages the rapid deployment of generative AI while attempting to secure American dominance in the global technology race. At the heart of the White House framework is a call for Congress to establish a national standard that would preempt the current "patchwork" of state laws.
OpenAI calls for taxing AI use to shore up fraying safety nets - InvestmentNews
Apr 07, 2026. If robots are taking jobs, the argument goes, they should pay into the system those jobs used to support, and CEO Sam Altman is now making that case to Washington. OpenAI is floating a plan for the US government to tax automated labor and shift the country's tax base away from wages, a policy pivot that could have direct consequences for the social programs millions of Americans depend on, as well as the investors and advisors who help them plan around those programs. The proposals are laid out in a 13-page policy document titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First," released as Congress prepares to debate AI legislation.
Pegasystems Launches Government Tool To Modernize Legacy IT Systems
Pegasystems announced the launch of Pega Blueprint for Government, an AI-powered design tool built to help U.S. federal agencies modernize legacy IT systems and improve operational efficiency. The new platform is authorized at the FedRAMP High level, enabling agencies to securely redesign mission-critical workflows while meeting strict federal compliance and security standards.
Innovation Nation: Inside Japan's Ambitious AI and Automation Drive
Paid Program. Japan's AI push goes beyond policy, linking governance, infrastructure and semiconductors in a coordinated national project. Japan's digital ambitions have long been framed in sweeping terms: using technology to lift productivity, support an aging population and modernize public services. Today, those ambitions are set to become reality, with a new industrial strategy designed to accelerate innovation without eroding public trust. Under Prime Minister Sanae Takaichi, AI and semiconductors have been put at the heart of this renewed industrial approach. As AI drives up global demand for computing power, countries without secure access to chips, energy and infrastructure risk being sidelined.
Classifying Problem and Solution Framing in Congressional Social Media
arXiv:2604.03247v1 Announce Type: new Abstract: Policy setting in the USA according to the "Garbage Can" model differentiates between "problem"- and "solution"-focused processes. In this paper, we study a large dataset of US Senator postings on Twitter (1.68m tweets in total). Our objective is to develop an automated method to label Senatorial posts as either in the problem or solution streams. Two academic policy experts labeled a subset of 3967 tweets as either problem, solution, or other (anything not problem or solution). We split off a subset of 500 tweets into a test set, with the remaining 3467 used for training. During development, this training set was further split in 60/20/20 proportions into fitting, validation, and development test sets. We investigated supervised learning methods for building problem/solution classifiers directly on the training set, evaluating their performance in terms of F1 score on the validation set, allowing us to rapidly iterate through models and hyperparameters and achieving an average weighted F1 score of above 0.8 on cross-validation across the three categories using a BERTweet Base model.
White House seeks deep NASA cuts as Artemis II breaks spaceflight record
'Proposal resurrects an existential threat to US leadership in space science and exploration' First, the good news: the Artemis II crew has successfully swung around the far side of the Moon and surpassed Apollo 13's record for the farthest distance traveled by humans in space. Now the bad news: the White House is sharpening the budget blade once again.…
Gartner: The Sovereign AI Shockwave: Why Organisations Must Rethink their AI Strategy in 2026 | The AI Journal
Sovereign AI is often framed as a policy issue, yet it is increasingly shaping organisational reality. What began as government ambition now influences how organisations build and govern AI. In 2026, sovereign AI is no longer peripheral to strategy; it is redefining how AI is built and leveraged. Across the US, China, the EU, the UK, UAE, Saudi Arabia, Canada, and India, public investment, including support from private sector partners, now exceeds $1 trillion. These commitments target AI innovation, infrastructure, data ecosystems, and talent, determining where capability resides and who controls access to it. The global AI landscape is being reshaped by design. Sovereign AI has moved rapidly from intent to execution.
5 Ways State and Local Governments Will Operationalize AI in 2026 - NYC Today
State and local governments are moving beyond AI pilots and are instead focusing on embedding the technology directly into real-world processes where it can operate reliably and at scale. Five key shifts are already taking shape, including modernizing grant management and rural health initiatives, ...
IBM Expands FedRAMP-Authorized Portfolio
IBM secures FedRAMP approval for 11 AI and automation offerings, expanding secure capabilities for federal agencies.
The future of federal AI: Building sovereign infrastructure from the ground up | Federal News Network
In AI, governance sets the rules; infrastructure determines whether the game can be played at all. April 6, 2026. Washington is having the wrong argument about artificial intelligence, at least partly. The national conversation around federal AI has become dominated by policy frameworks, ethics guidance and regulatory compliance. These are necessary discussions, but they are incomplete. You cannot govern what you cannot run. And today, too many federal agencies are being asked to implement ambitious AI strategies on infrastructure that was never designed to support them. AI readiness is not just a matter of paperwork. It's also a matter of architecture.
Sam Altman says AI superintelligence is so big that we need a 'New Deal.'
Sam Altman advocates for a new policy framework for AI, though critics argue his proposals are a form of regulatory nihilism.
As the Federal Government Rushes Toward AI, Here Are Three Cautionary Tales — ProPublica
We’ve been reporting on cybersecurity for years. As President Donald Trump and his Cabinet say artificial intelligence will transform the nation, the messaging isn’t new. It follows a familiar pattern.
OpenAI releases policy document focused on financial impacts and risks of AI
OpenAI Group PBC today released a policy framework with suggestions on how to address the risks posed by artificial intelligence. The 13-page document is titled "Industrial Policy for the Intelligence Age: Ideas to Keep People First." It's at least the second such paper OpenAI has published in the past two years.