AI Daily Brief

Education

Latest Intelligence

The latest AI stories, analysis and developments relevant to Education — curated daily by Best Practice AI.

Use Case Library

Use Cases for Education

200 articles

Education
arXiv · Yesterday

The University AI Didn't Replace -- Rethinking Universities in the AI Era

arXiv:2605.07056v1 Announce Type: new Abstract: Generative artificial intelligence (AI) is reshaping higher education, yet many universities remain in early stages of adoption where AI innovation occurs informally and without institutional recognition. This paper presents a framework describing four levels of AI adoption in universities and illustrates these dynamics through a case study of AI-enabled curriculum initiatives in several units. We contend that the key institutional challenge is moving from isolated innovation to strategic integration, where universities redesign learning around AI-supported reasoning and align policies, workload models, and recognition systems to support educational transformation.

Education · Labor & Society
arXiv · Yesterday

LLM hallucinations in the wild: Large-scale evidence from non-existent citations

arXiv:2605.07723v1 Announce Type: cross Abstract: Large language models (LLMs) are known to generate plausible but false information across a wide range of contexts, yet the real-world magnitude and consequences of this hallucination problem remain poorly understood. Here we leverage a uniquely verifiable object - scientific citations - to audit 111 million references across 2.5 million papers in arXiv, bioRxiv, SSRN, and PubMed Central. We find a sharp rise in non-existent references following widespread LLM adoption, with a conservative estimate of 146,932 hallucinated citations in 2025 alone. These errors are diffusely embedded across many papers but especially pronounced in fields with rapid AI uptake, in manuscripts with linguistic signatures of AI-assisted writing, and among small and early-career author teams. At the same time, hallucinated references disproportionately assign credit to already prominent and male scholars, suggesting that LLM-generated errors may reinforce existing inequities in scientific recognition. Preprint moderation and journal publication processes capture only a fraction of these errors, suggesting that the spread of hallucinated content has outpaced existing safeguards. Together, these findings demonstrate that LLM hallucinations are infiltrating knowledge production at scale, threatening both the reliability and equity of future scientific discovery as human and AI systems draw on the existing literature.

Paywall · Education · Labor & Society
Bloomberg · Yesterday

What If AI’s Biggest Impact Isn’t Jobs, But Minds?

In this episode of Merryn Talks Money, Merryn Somerset Webb speaks to Tom Slater, manager of Scottish Mortgage Investment Trust and partner at Baillie Gifford, about his provocative paper, “AI Isn’t Coming for Your Job. It’s Coming for Your Mind,” and why the real risk may be a world that looks more productive while quietly losing the judgement, learning and expertise that make progress possible. They also discuss what that means for the future workforce and smart investing. (Source: Bloomberg)

Education
arXiv · Yesterday

Vibe coding before the trend

arXiv:2605.07751v1 Announce Type: new Abstract: In early 2025 we ran a series of vibe coding challenges across four different student cohorts. The cohorts included 54 ICT students, 24 digital marketing students, and 7 journalism students at Fontys University of Applied Sciences (Netherlands), and 22 BA Communication students at North-West University (South Africa). From the student reflections, five major patterns emerged. Students reported that AI tools shifted their focus from syntax to higher-order thinking; they also described a skill shift from memorizing to evaluating; they viewed AI proficiency as career-essential; they framed their relationship with AI as partnership rather than replacement; and finally non-technical students showed the strongest appreciation for the accessibility these tools provide. This practitioner report documents what we observed during the classroom experiments, reflects on how the landscape has shifted in the year since, and shares practical lessons for educators considering similar experiments. We present the observations as what they are: patterns from practice, not proven conclusions, in the belief that sharing early-stage experiences contributes to the overall field of AI and education.

Education · Labor & Society
Daily Brew · Yesterday

Rakan Tutor Wins €10 Million EU Grant to Expand AI Education Across Southeast Asia

Rakan Tutor has received a €10 million grant from the EU to expand AI education across Southeast Asia, aiming to empower students with practical AI skills.

Education · Adoption & Impact
arXiv · Yesterday

Cognitive Agent Compilation for Explicit Problem Solver Modeling

arXiv:2605.07040v1 Announce Type: cross Abstract: Large language models (LLMs) are widely used for tutoring, feedback generation, and content creation, but their broad pretraining makes them hard to constrain and poor substitutes for controllable learners. Educational systems often require inspectable and editable knowledge states: educators want to know what a system assumes the learner knows, and learners benefit when the system can justify actions in terms of explicit skills, misconceptions, and strategies. Inspired by cognitive architectures, we propose Cognitive Agent Compilation (CAC), a framework that uses a strong teacher LLM to compile problem-solving knowledge into an explicit target agent. CAC separates (i) knowledge representation, (ii) problem-solving policy, and (iii) verification and update rules, with the goal of making bounded problem solving more inspectable and editable in educational settings. We present an early proof of concept implemented with Small Language Models that surfaces key design trade-offs, particularly between explicit control and scalable generalization, and positions CAC as an initial step toward bounded-knowledge AI for educational applications.

Education · Labor & Society
Guardian · 2d ago

I knew my writing students were using AI. Their confessions led to a powerful teaching moment | Micah Nathan

The problem wasn’t just the perfectly polished, yet mediocre prose. It’s what’s lost when we surrender the struggle to translate thought into words. I have been teaching fiction writing at MIT since 2017. Many of my students last wrote fiction in middle school, and very few have experienced a proper workshop, so at the start of every semester I offer these directions for writer and reader alike: Read the story at least twice. Mark what works and what doesn’t – underline great sentences, flag clunky syntax, gaps in logic and unrealistic dialogue. Ask yourself: does the story work? Why or why not? What could improve it? Answer in a signed letter to the author, attached to their story. Give your honest opinions. Remember that an effective peer review demands close reading of the text accompanied by a boldness of spirit.

Education · Labor & Society
Daily Brew · 3d ago

Google Launches AI-Assisted Interviews for Junior Developers

Google is piloting a program that integrates AI assistants into software engineering interviews to focus on creative problem-solving.

Education
Substack · 3d ago

AI Has Reached Research Mathematics. Not as a Genius. As a Very Strange Junior Collaborator.

The headline is blunt: “The job description is changing.” Tao is not saying mathematicians are about to vanish into a GPU cluster. The Nature piece frames the current moment more carefully: many researchers still think the hype is overdone, but AI has moved in the past year from solving school-level problems to becoming useful in research mathematicians’ daily work.

Education · Adoption & Impact
arXiv · 4d ago

LaTA: A Drop-in, FERPA-Compliant Local-LLM Autograder for Upper-Division STEM Coursework

arXiv:2605.05410v1 Announce Type: new Abstract: Large-language-model (LLM) graders promise to relieve the grading burden of upper-division STEM courses, but most deployments to date send student work to third-party APIs, violating FERPA and exposing institutions to data risk while requiring substantial assignment modification. We present LaTA (LaTeX Teaching Assistant), a drop-in, open-source autograder that runs entirely on commodity on-premises hardware and assumes a LaTeX-native workflow already adopted by many engineering and physics courses. LaTA implements a four-stage pipeline (ingest, segment, grade, report) using a locally hosted open-weight chain-of-thought LLM grader (gpt-oss:120b) that compares student work to an instructor-authored reference solution and applies a YAML rubric with binary per-item scoring. We deployed LaTA in Winter 2026 in ME 373 (Mechanical Engineering Methods) at Oregon State University, grading every weekly assignment for approximately 200 students on a single Mac Studio at $0 marginal cost per assignment and 1–3 minutes of wall-clock time per submission, enabling regrading of corrected assignments and greatly expanded TA office hour offerings. The instructor-confirmed grading-error rate held at roughly 0.02–0.04% per rubric line item across the term. Relative to the same instructor's previous traditionally-graded cohort, the LaTA-graded cohort outperformed by approximately 11% on the midterm exam and 8% on the final exam, and reported large gains in self-assessed confidence on every stated learning objective (N = 159 survey responses, Δ ≥ +1.49 Likert points, p < 10^-27 on every comparison). We release the code under AGPLv3.

Education · Labor & Society
arXiv · 4d ago

The Pedagogy of AI Mistakes: Fostering Higher-Order Thinking

arXiv:2605.05472v1 Announce Type: new Abstract: As generative AI becomes increasingly integrated into higher education, its frequent errors and hallucinations, often seen as limitations, offer a unique pedagogical opportunity. By framing AI as a "learning companion" whose imperfect outputs prompt analysis, evaluation, and reflection, we argue that instructors can engage students in the fundamental processes of higher-order thinking. This paper presents a design-oriented study in which an AI-integrated syllabus in a database design course deliberately leverages AI's limitations to foster critical thinking and higher-order cognitive skills aligned with Bloom's taxonomy of learning. Using a mixed-methods approach, we examine how structured interaction with AI-generated errors supports metacognitive engagement, reinforces disciplinary rigor, and relates to students' perceived AI literacy and subject-matter competency.

Education · Labor & Society
arXiv · 4d ago

Breaking In and Reaching Out: Networking for Women in Computer Science

arXiv:2605.06195v1 Announce Type: new Abstract: Networking is central to careers in computer science, where a globally distributed and diverse community increasingly collaborates across institutional and geographic boundaries, often in hybrid and remote settings. However, access to effective networking is shaped by structural and personal factors, including geography, funding, language, identity, personality, and caregiving responsibilities. Building on prior work, this workshop focuses on women in computing to examine lived experiences of networking and the barriers they encounter. Through a community-driven discussion grounded in a factor-based framework, the workshop aims to surface overlooked challenges and foster shared understanding. Ultimately, it seeks to inform more inclusive, equitable, and accessible networking practices within the computer science community.

Education
Investopedia · 4d ago

Graduating in the Age of AI? Here are the Fastest-Growing Positions According to LinkedIn

AI is reshaping the labor market for recent college graduates, with the fastest-growing position this year being an AI engineer.

Education · Adoption & Impact
arXiv · 4d ago

The Missing Evaluation Axis: What 10,000 Student Submissions Reveal About AI Tutor Effectiveness

arXiv:2605.05648v1 Announce Type: new Abstract: Current Artificial Intelligence (AI)-based tutoring systems (AI tutors) are primarily evaluated based on the pedagogical quality of their feedback messages. While important, pedagogy alone is insufficient because it ignores a critical question: what do students actually do with the feedback they receive? We argue that AI tutor evaluation should be extended with a behavioral dimension grounded in student interaction data, which complements pedagogical assessment. We propose an evaluation framework and apply it to 10,235 code submissions with corresponding AI tutor feedback from an introductory undergraduate programming course to measure whether students act on tutor feedback and whether those actions are applied correctly. Using this framework to compare two deployed AI tutors across different semesters in a large-scale introductory computer science course reveals substantial differences in student engagement patterns that are not captured by pedagogy-only evaluation. Moreover, these engagement-based behavioral signals are more strongly associated with student perception of helpful feedback than pedagogical quality alone, providing a more complete and actionable picture of AI tutor performance.

Education · Labor & Society
EdTech Innovation Hub · 4d ago

Trinity College Dublin and Microsoft reveal Ireland AI skills gap

Trinity College Dublin and Microsoft Ireland find a widening AI maturity gap between SMEs and large organizations in the AI Economy Ireland 2026 report. ETIH edtech news on AI skills, workforce development, gender confidence gaps, and why formal AI policy drives ten times higher productivity gains.

Education · Adoption & Impact
arXiv · 5d ago

LLMs learn scientific taste from institutional traces across the social sciences

arXiv:2603.16659v2 Announce Type: replace-cross Abstract: Reinforcement-learned reasoning has powered recent AI leaps on verifiable tasks, including mathematics, code, and structure prediction. The harder bottleneck is evaluative judgment in low-verifiability domains, where no oracle anchors reward and the core question is which untested ideas deserve attention. We test whether institutional traces, the record of what fields published, where, and at which tier, can serve as a training signal for AI evaluators. Across eight social science disciplines (psychology, economics, communication, sociology, political science, management, business and finance, public administration), we built held-out four-tier research-pitch benchmarks and supervised-fine-tuned (SFT) LLMs on field-specific publication outcomes. The fine-tuned models cleared the 25 percent chance baseline and exceeded frontier-model performance by wide margins, with best single-model accuracy ranging from 55.0 percent in public administration to 85.5 percent in psychology. In management, evaluated against 48 expert gatekeepers, 174 junior researchers, and 11 frontier reasoning models, the best single fine-tuned model (Qwen3-4B) reached 59.2 percent, 17.6 percentage points above expert majority vote (41.6 percent, non-tied) and 28.1 percentage points above the frontier mean (31.1 percent). The fine-tuned models also showed calibrated confidence: confidence rose when predictions were correct and fell when wrong, mirroring how a skilled reviewer can say "I'm sure" versus "I'm guessing." Selective triage on this signal reached very high accuracy on the highest-confidence subsets in every field. Institutional traces, we conclude, encode a scalable training signal for the low-verifiability judgment on which science depends.

Education · Labor & Society
arXiv · 5d ago

Stop Automating Peer Review Without Rigorous Evaluation

arXiv:2605.03202v1 Announce Type: new Abstract: Large language models offer a tempting solution to address the peer review crisis. This position paper argues that today's AI systems should not be used to produce paper reviews. We ground this position in an empirical comparison of human- versus AI-generated ICLR 2026 reviews and an evaluation of the effect of automated paper rewriting on different AI reviewers. We identify two critical issues: 1) AI reviewers exhibit a hivemind effect of excessive agreement within and across papers that reduces perspective diversity. 2) AI review scores are trivially gameable through paper laundering: prompting an LLM to rewrite a paper could significantly increase the scores from AI reviewers, demonstrating that LLM reviewers are easy to game through stylistic changes rather than scientific results. However, non-gameability and review diversity are necessary but not sufficient conditions for automation. We argue that addressing the peer review crisis requires a science of peer review automation -- not general-purpose LLMs deployed without rigorous evaluation.

Education · Adoption & Impact
arXiv · 5d ago

A Dialogue-Based Framework for Correcting Multimodal Errors in AI-Assisted STEM Education

arXiv:2605.04131v1 Announce Type: cross Abstract: Large Language Models (LLMs) are democratizing access to personalized tutoring; however, their effectiveness is hindered by challenges in processing multimodal content, which limits AI's potential to provide equitable, high-quality STEM support. This study evaluates LLM performance on multimodal physics problems, identifies specific failure modes through an empirical error taxonomy, and tests practical interventions designed to overcome multimodal processing limitations. We assessed three publicly available LLMs (Claude, Gemini, and ChatGPT) on multimodal physics problems from the OpenStax database and compared the results with text-only performance. An empirically derived error taxonomy was developed through pilot testing, followed by evaluation of a structured multimodal dialogue intervention. All three models achieved near-ceiling accuracy (96%) on text-only physics problems. Performance declined substantially on multimodal problems, consistent with what we term the Multimodal Interference Effect. Error analysis identified four failure modes: visual processing errors, context misinterpretation, mathematical computational errors, and hybrid errors, with visual processing errors being the most prevalent. The structured dialogue intervention corrected 82% of errors overall; visual processing errors were corrected at 100% across all models. Educators and students can implement these interventions immediately, requiring no model retraining, to improve AI tutoring reliability on image-rich STEM content, advancing equitable access to high-quality learning support.

Education · Adoption & Impact
arXiv · 5d ago

Guidelines for Designing AI Technologies to Support Adult Learning

arXiv:2605.04616v1 Announce Type: new Abstract: AI-powered educational technologies have demonstrated measurable benefits for learners, but their design and evaluation have largely centered on K-12 contexts. As a result, many AI-supported learning systems remain poorly aligned with the needs, constraints, and goals of adult learners. To better understand how AI systems function in adult education, this paper examines the deployment of several AI learning technologies developed within a multidisciplinary, national research institute in the United States focused on adult learning and online education. Drawing on longitudinal deployment data, we conducted a reflexive thematic analysis to identify recurring challenges and design considerations across systems. These insights were synthesized into a set of 19 design guidelines intended to inform future AI-supported adult learning technologies. We demonstrate the utility of these guidelines through a heuristic evaluation of the deployed systems. Lastly, we present a guideline exploration tool that aids in the ideation of technologies by connecting the guidelines to stakeholder statements surfaced in the analysis process.

Education
LEX 18 · 5d ago

Class of 2026 faces tough job market and AI concerns as graduation season approaches

Recent college graduates face a 4-year-high unemployment rate and AI concerns as the Class of 2026 enters a challenging job market.

Education · Labor & Society
LBC · 5d ago

When it comes to AI and the UK’s NEET crisis, digital skills are now a social mobility issue

The UK’s employment challenge is no longer just about creating jobs; it is about ensuring people have the skills and confidence needed to access them.

Education · Labor & Society
Guardian · 5d ago

UK schools should remove pupils’ online photos as AI blackmail threat grows, say experts

Criminals are manipulating pictures found on school websites and social media to create sexually explicit images. UK schools should remove pictures of pupils’ faces from their websites and social media accounts because blackmailers are using them to create sexually explicit images, experts have said. Child safety experts and the UK’s National Crime Agency (NCA) warn that criminals are using AI to manipulate photos of children and then demand cash not to publish them.

Education · Technology & Infrastructure
The Conversation · 5d ago

The AI scientist: now academic papers can be fully automated, what does this mean for the future of research?

The AI Scientist scans existing ... a full research paper – largely without human involvement. It reasons, fails and revises, just as a junior scientist would. The proof? An AI Scientist academic paper describing “a pipeline for automating the entire scientific process end to end” was accepted by the International Conference on Learning Representations and published in the scientific journal Nature in March 2026, following ...

Education · Adoption & Impact
arXiv · 6d ago

PERSA: Reinforcement Learning for Professor-Style Personalized Feedback with LLMs

arXiv:2605.01123v1 Announce Type: new Abstract: Large language models (LLMs) can provide automated feedback in educational settings, but aligning an LLM's style with a specific instructor's tone while maintaining diagnostic correctness remains challenging. We ask: how can we update an LLM for automated feedback generation to align with a target instructor's style without sacrificing core knowledge? We study how Reinforcement Learning from Human Feedback (RLHF) can adapt a transformer-based LLM to generate programming feedback that matches a professor's grading voice. We introduce PERSA, an RLHF pipeline that combines supervised fine-tuning on professor demonstrations, reward modeling from pairwise preferences, and Proximal Policy Optimization (PPO), while deliberately constraining learning to style-bearing components. Motivated by analyses of transformer internals, PERSA applies parameter-efficient fine-tuning. It updates only the top transformer blocks and their feed-forward projections, minimizing global parameter drift while increasing stylistic controllability. We evaluate our proposed approach on three code-feedback benchmarks (APPS, PyFiXV, and CodeReviewQA) using complementary metrics for style alignment and fidelity. Across both Llama-3 and Gemma-2 backbones, PERSA delivers the strongest professor-style transfer while retaining correctness; for example, on APPS it boosts Style Alignment Score (SAC) to 96.2% (from 34.8% for Base) with Correctness Accuracy (CA) up to 100% on both Llama-3 and Gemma-2. Overall, PERSA offers a practical route to personalized educational feedback by aligning both what it says (content correctness) and, crucially, how it says it (instructor-like tone and structure).

Education · Labor & Society
arXiv · 6d ago

Did US Worker Retraining Reduce Participant Automation Exposure?

arXiv:2605.03767v1 Announce Type: new Abstract: This paper evaluates whether the U.S. Workforce Innovation and Opportunity Act (WIOA) supported American worker resilience to technological automation. Analyzing over 23 million WIOA participation records (2017-2023), we introduce the "Retrainability Index," which measures program outcomes through post-intervention wage recovery and shifts in Routine Task Intensity (RTI). We show WIOA rarely shifts workers into less automation-exposed work, with a significant portion of participants simply returning to their prior field. Successful outcomes are driven mostly by wage gains, possibly due to "catch-up" mean reversion, rather than changes in occupation. Outcomes are moderated by a person's prior occupational skill set and area of work, as well as their local economy. We find evidence that employer-led programs--notably apprenticeships--are associated with the highest incidence of success. This suggests the United States' existing public active labor market programming can support baseline wage recovery for vulnerable populations, but is not well-equipped to support large-scale, cross-industry labor transitions.

Education · Technology & Infrastructure
Bebeez · 6d ago

University of Southern Denmark brings AI supercomputer online in Sønderborg

The University of Southern Denmark (SDU) has announced that its new national AI supercomputer in Sønderborg has been brought online. The supercomputer, dubbed Bitten, was built in partnership with Danfoss and HPE. The facility was designed to serve researchers and students across the Danish university system through UCloud, a sovereign research cloud […]

Education
Daily Brew · 5 May 2026

ChinAI #357

ChinAI #357 discusses the latest developments in AI, including the use of AI in Chinese universities.

Education · Adoption & Impact
arXiv · 5 May 2026

A Large-Scale Observational Study on Obtaining Lightweight, Randomized Weekly Student Feedback

arXiv:2605.02281v1 Announce Type: new Abstract: Conventional methods of obtaining student feedback on course experience face a fundamental tradeoff between feedback frequency and quality: as feedback requests become more frequent, participation often declines, and responses become less thoughtful over time. To obtain both timely and thoughtful feedback from students, Kim and Piech (Learning at Scale, 2023) recently proposed a simple, lightweight course feedback mechanism: surveying each student a small number of times per term during randomly selected weeks. Named High-Resolution Course Feedback (HRCF), this method has been shown to elicit feedback that instructors find helpful without imposing excessive burden on students. An important question, however, remains unanswered: is the use of this simple method associated with measurable improvements in students' actual course experiences? We study HRCF use across 103 course offerings, totaling 24,216 student enrollments, over four years from Fall 2021 through Fall 2025, spanning 42 unique computer science courses at an R1 institution. Through a regression analysis of four end-of-term student evaluation items for these courses, we find that first-time use of HRCF is not associated with a measurable change in average student ratings. However, among small- and medium-enrollment (<250 students) course offerings, continued HRCF use is associated with average rating increases of 0.045 to 0.048 points per additional term of use for learning-related items. We observe no statistically significant associations for large-enrollment (250 or more students) course offerings, nor for items measuring instructional quality and course organization. Together, these findings suggest that sustained HRCF use may support improvements in students' learning experiences, but that further design enhancements may be needed to produce measurable improvements in instructional quality and course organization.

Education · Adoption & Impact
arXiv · 5 May 2026

The "Astonishing Regularity" Revisited: Sensitivity of Learning-Rate Estimates to Practice-Sequence Length

arXiv:2605.01690v1 Announce Type: new Abstract: A 2023 PNAS study by Koedinger et al. fit the individual Additive Factors Model (iAFM) to 27 educational datasets and reported an "astonishing regularity" in student learning rates: students vary substantially in initial knowledge but learn at remarkably similar rates with practice. We probe a largely unexamined assumption underlying this finding -- that observation length in student log data is ignorable for mixed-effects estimation -- by refitting the iAFM on 26 of the original datasets while systematically truncating practice sequences at various depths, holding the set of students and knowledge components constant. Capping at the first ten opportunities per student-skill pair inflates the median estimated IQR of student learning rates by 75%; capping at five inflates it by 205%, with individual datasets ranging from negligible to 17-fold. The magnitude of this sensitivity diverges from what standard estimation theory predicts under ignorable truncation, and the dataset-specific heterogeneity is substantial. Three candidate mechanisms from adjacent literatures could account for the pattern -- informative observation length, functional-form misspecification, and identification weakness from sparse per-pair data -- but observational analysis on these data alone cannot adjudicate among them. We argue that practice-sequence length distributions are an unexamined property of mixed-effects estimation on observational learning data, deserving explicit reporting before conclusions about learning-rate heterogeneity are drawn.

Paywall · Education · Labor & Society
Bloomberg · 5 May 2026

Nvidia Billionaire Mark Stevens Gives $200 Million to Alma Mater USC for AI Research

Billionaire Nvidia Corp. director Mark Stevens and his wife Mary are gifting $200 million to the University of Southern California to advance AI research and education across the school.

Education
arXiv · 4 May 2026

ProPACT: A Proactive AI-Driven Adaptive Collaborative Tutor for Pair Programming

Effective pair programming depends on coordination of attention, cognitive effort, and joint regulation over time, yet most adaptive learning systems remain individual-centric and reactive. This paper introduces ProPACT, a proactive AI-driven adaptive collaborative tutor that treats collaboration itself as the object of instruction. ProPACT constructs a multimodal dyadic learner model based on Joint Visual Attention (JVA), Joint Mental Effort (JME), and individual mental effort, and employs an X...

Education
arXiv · 4 May 2026

AcademiClaw: When Students Set Challenges for AI Agents

Benchmarks within the OpenClaw ecosystem have thus far evaluated exclusively assistant-level tasks, leaving the academic-level capabilities of OpenClaw largely unexamined. We introduce AcademiClaw, a bilingual benchmark of 80 complex, long-horizon tasks sourced directly from university students' real academic workflows -- homework, research projects, competitions, and personal projects -- that they found current AI agents unable to solve effectively. Curated from 230 student-submitted candidates...

Education
Stanford Social Innovation Review · 4 May 2026

Education for Career Readiness in the Age of AI

What the research says about education, jobs, AI, and what students will need to succeed as future workers and citizens.

Education
Daily Brew · 4 May 2026

Bangladesh Launches AI Course

Bangladesh is launching the Think AI course to integrate AI literacy into its education system, aiming to bridge the digital skills gap through a public-private partnership model.

Education
Reuters · 4 May 2026

Duolingo's growth outlook moderates

Duolingo posted strong first-quarter results but signaled a more measured growth trajectory ahead, as the language-learning app prioritizes user engagement and product improvements.

Education · Adoption & Impact
Inc · 4 May 2026

Struggling to Keep Up? These 15 Edtech Companies Are Using AI to Give Teachers Their Lives Back

Generative AI is transforming the classroom. From reducing teacher burnout to "AI-proofing" careers, meet the 15 innovative companies reshaping the future of education in 2026.

EducationLabor & Society
ThePrint· 4 May 2026

More courses and certifications won't fix India's skills gap

Expectations need recalibration. Students equate degrees with employability, employers expect instant productivity, policymakers push curriculum mandates — all reinforce a flawed assumption.

EducationLabor & Society
Arxiv· 4 May 2026

AI Adoption Among Teachers: Insights on Concerns, Support, Confidence, and Attitudes

arXiv:2605.00343v1 Announce Type: new Abstract: The study examines the adoption of artificial intelligence (AI) tools in education by analyzing the roles of institutional support, teacher confidence, and teacher concerns. It aims to determine whether teacher concerns moderate the relationship between institutional support and two outcomes: teacher confidence and attitudes toward AI adoption. The sample included 260 teachers from the Philippines. Composite scores were calculated for institutional support, confidence, concerns, and attitudes. Moderated multiple regression analysis showed that institutional support significantly predicted both teacher confidence and attitudes toward AI. However, teacher concerns did not significantly moderate these relationships. A follow-up mediation analysis tested whether confidence explains the effect of institutional support on attitudes. Results showed full mediation. The indirect effect was significant based on the Sobel test, and the direct effect became non-significant when confidence was included in the model. This shows that institutional support improves teacher attitudes by increasing their confidence. The study recommends that institutions provide structured and ongoing support to strengthen teacher confidence. Professional development, mentoring, and AI integration in teacher education programs can increase readiness and support effective AI adoption.
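The Sobel test mentioned in the abstract has a compact closed form: the indirect effect a·b (support to confidence, confidence to attitudes) divided by its approximate standard error. A minimal sketch, with illustrative coefficients rather than the study's actual estimates:

```python
import math

# Sobel z-statistic for the indirect effect a*b in a simple mediation model:
#   a, se_a = path (and standard error) from predictor to mediator
#             (institutional support -> teacher confidence)
#   b, se_b = path (and standard error) from mediator to outcome
#             (teacher confidence -> attitudes toward AI)
def sobel_z(a, se_a, b, se_b):
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Illustrative values only, not the paper's estimates; |z| > 1.96 would
# indicate a significant indirect effect at the 5% level.
z = sobel_z(0.5, 0.1, 0.4, 0.1)
```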

EducationLabor & Society
Arxiv· 4 May 2026

Pedagogical Promise and Peril of AI: A Text Mining Analysis of ChatGPT Research Discussions in Programming Education

arXiv:2605.00361v1 Announce Type: new Abstract: GenAI systems such as ChatGPT are increasingly discussed in programming education, but the ways in which the research literature conceptualizes and frames their role remain unclear. This chapter applies text mining to publications indexed in a leading academic database to map scholarly discourse on ChatGPT in programming education. Term frequency analysis, phrase pattern extraction, and topic modeling reveal four dominant themes: pedagogical implementation, student-centered learning and engagement, AI infrastructure and human-AI collaboration, and assessment, prompting, and model evaluation. The literature prioritizes classroom practice and learner interaction, with comparatively limited attention to assessment design and institutional governance. Across studies, ChatGPT is positioned both as a learning aid that supports explanation, feedback, and efficiency and as a pedagogical risk linked to overreliance, unreliable outputs, and academic integrity concerns. These findings support responsible integration and highlight the need for stronger assessment and governance mechanisms.

EducationLabor & Society
Yale Insights· 4 May 2026

The Real Job Destruction from AI Is Hitting Before Careers Can Start | Yale Insights

Yale’s Jeffrey Sonnenfeld and his co-authors say that the impact of AI can be seen among recent college graduates, who are finding it harder and harder to get that first job. With no entry to the workforce, how will younger people develop the skills and wisdom to lead in the future?

EducationLabor & Society
Daily Brew· 4 May 2026

Bangladesh Launches 'Think AI' Course to Boost Digital Skills and Bridge Global Development Gap

Supported by Meta, this new educational initiative aims to integrate AI literacy into the Bangladeshi school system through a public-private partnership.

Education
The American Bazaar· 3 May 2026

Can AI-literate students gain edge in the changing US job market?

The data shows that 81% of AI users ... their skills for future progression. For first-generation students, that kind of visibility can be especially valuable in navigating unfamiliar career paths. The responsibility for higher education is to ensure AI tools are accessible, practical, and embedded into learning in a way that connects directly to workforce outcomes, so they support mobility for all rather than widen gaps...

PaywallEducationLabor & Society
FT· 2 May 2026

Paranoid parenting in the age of AI

It’s an illusion to think we can robot-proof our kids’ education choices

EducationLabor & Society
Outsourceaccelerator· 1 May 2026

83% of U.S. job seekers want formal AI training, survey finds - Outsource Accelerator

A new Express Employment Professionals–Harris Poll survey found that 83% of United States job seekers want companies to formally train employees on AI.

Education
Arxiv· 1 May 2026

Policy-Governed LLM Routing with Intent Matching for Instrument Laboratories

arXiv:2604.26955v1 Announce Type: new Abstract: AI tutoring systems in engineering labs face a tension between providing sufficient assistance and preserving learning opportunities. Existing systems typically offer instructors limited control over assistance timing, content, or cost. This paper describes a routing and governance system for LLM-based lab assistance comprising two components: Routiium, an OpenAI-compatible gateway that manages multiple LLM backends with configurable prompt modifications and usage logging, and EduRouter, a policy-aware routing service that enforces per-lab budgets, approval workflows, and embedding-based question matching. We evaluated the system using trace-driven simulation calibrated from two engineering labs (LED characterization, RC circuit analysis) and a 100-query replay through live models. In simulations, governed policies (P1/P2) increased challenge-alignment index from 0.90 to 0.98 and overlay-adherence score from 0.69 to 0.87 compared to ungoverned operation (P0). The productive-struggle window metric increased from 1.4 to 3.6 simulated turns before high-scaffold hints appeared. In the 100-query replay, EduRouter routed 75% of queries to a local model, reducing token costs by 66% ($0.087 vs. $0.26 for all-premium routing) while maintaining canonical hit rate of 1.0 for the curated 89-intent question bank. We release Routiium, EduRouter, canonical-task tooling, and simulator configurations to support replication and future classroom studies.
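The embedding-based matching step can be pictured with a small sketch: compare the query embedding against a curated intent bank and fall back to a premium model below a similarity threshold. Names, vectors, and the threshold here are illustrative assumptions, not EduRouter's actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def route(query_vec, intent_bank, threshold=0.85):
    """Send the query to the cheap local model when it closely matches a
    curated intent; otherwise escalate to the premium backend."""
    best_intent, best_sim = None, 0.0
    for intent, vec in intent_bank.items():
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_intent, best_sim = intent, sim
    return ("local", best_intent) if best_sim >= threshold else ("premium", None)
```

In a real deployment the vectors would come from an embedding model and the threshold would be tuned against the curated question bank; here both are stand-ins.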

Education
Arxiv· 1 May 2026

Simulating Validity: Modal Decoupling in MLLM Generated Feedback on Science Drawings

arXiv:2604.26957v1 Announce Type: new Abstract: In science education, students frequently construct hand-drawn visual models of scientific phenomena. These drawings rely on a visual structure where information is encoded through visual objects, their attributes, and relationships. Multimodal large language models (MLLMs) are increasingly used to generate feedback on students' hand-drawn scientific models. However, the validity of such feedback depends on whether model claims are grounded in the specific visual evidence of the student drawing. This study uncovers grounding failures, consistent with modal decoupling, in off-the-shelf MLLM feedback, where outputs remain pedagogically plausible in form while contradicting the drawing or treating depicted elements as missing. Using N = 150 middle school drawings from a kinetic molecular theory unit spanning five modeling tasks and three competence levels, we generated N = 300 feedback instances with GPT-5.1. All outputs were coded for four grounding error types: object mismatch, attribute mismatch, relation mismatch, and false absence. Grounding failures were common: 41.3% of feedback instances contained at least one error. An inventory-list-first workflow reduced several error categories and lowered the overall error rate, but it did not resolve the underlying limitation: approximately one in three outputs remained flawed, with false absence as the dominant failure mode. Moreover, feedback that appears visually grounded offered little diagnostic value for identifying invalid instances. The findings indicate that modal decoupling is a substantial limitation and that valid feedback will require grounding mechanisms beyond common prompting strategies.

Education
Arxiv· 1 May 2026

DeepTutor: Towards Agentic Personalized Tutoring

arXiv:2604.26962v1 Announce Type: new Abstract: Education represents one of the most promising real-world applications for Large Language Models (LLMs). However, conventional tutoring systems rely on static pre-training knowledge that lacks adaptation to individual learners, while existing RAG-augmented systems fall short in delivering personalized, guided feedback. To bridge this gap, we present DeepTutor, an agent-native open-source framework for personalized tutoring where every feature shares a common personalization substrate. We propose a hybrid personalization engine that couples static knowledge grounding with dynamic multi-resolution memory, distilling interaction history into a continuously evolving learner profile. Moreover, we construct a closed tutoring loop that bidirectionally couples citation-grounded problem solving with difficulty-calibrated question generation. The personalization substrate further supports collaborative writing, multi-agent deep research, and interactive guided learning, enabling cross-modality coherence. To move beyond reactive interfaces, we introduce TutorBot, a proactive multi-agent layer that deploys tutoring capabilities through extensible skills and unified multi-channel access, providing consistent experience across platforms. To better evaluate such tutoring systems, we construct TutorBench, a student-centric benchmark with source-grounded learner profiles and a first-person interactive protocol that measures adaptive tutoring from the learner's perspective. We further evaluate foundational agentic reasoning abilities across five authoritative benchmarks. Experiments show that DeepTutor improves personalized tutoring quality while maintaining general agentic reasoning abilities. We hope DeepTutor provides unique insights into next-generation AI-powered and personalized tutoring systems for the community.

Education
Arxiv· 1 May 2026

Addressing the Reality Gap: A Three-Tension Framework for Agentic AI Adoption

arXiv:2604.27245v1 Announce Type: new Abstract: Generative AI has rapidly entered education through free consumer tools, outpacing the ability of schools and universities to respond. Now a new wave of more autonomous agentic AI systems--with the capacity to plan and act towards goals--promises both greater educational personalization and greater disruption. This chapter argues that successfully navigating these innovations requires balancing three core tensions: (1) Implementation Feasibility, or the practical capacity to integrate AI sustainably into real classrooms; (2) Adaptation Speed, or the mismatch between fast-evolving AI capabilities and the slower pace of educational change; and (3) Mission Alignment, or the need to ensure AI applications uphold educational values such as equity, privacy, and pedagogical integrity. First, we review early evidence of generative and agentic AI in various sectors and in frontline education to illustrate these tensions in context. Then, we present a three-tension framework to guide decision-makers in evaluating and designing AI initiatives across K-12 and higher education. We provide examples of how the framework can be applied to plan responsible AI deployments, and we identify emerging trends--such as curriculum-linked AI agents and educator-informed AI design--along with open research directions. We conclude the chapter with recommendations for educational leaders to proactively engage with the opportunities and challenges of AI, so that this technology can be harnessed to enhance teaching and learning in the decade ahead.

Education
Arxiv· 1 May 2026

A Discipline-Agnostic AI Literacy Course for Academic Research: Architecture, Pedagogy, and Implementation

arXiv:2604.27225v1 Announce Type: new Abstract: The rapid integration of generative AI into academic workflows demands curricula that equip students not only with tool proficiency but with the critical judgment to use those tools responsibly in scholarly work. Existing offerings cluster around two inadequate poles: technical AI development courses serving narrow specialist audiences, and brief general-literacy interventions that cannot develop the sustained, practice-based competencies rigorous research requires. This paper reports the design, theoretical rationale, and implementation of BSTA 495/395: Getting Started with AI-Assisted Research, developed and delivered at Lehigh University (Spring 2026). The course addresses an underserved gap: the competencies required for rigorous AI-assisted literature review. Its architecture organizes instruction into four sequential modules aligned with the cognitive demands of that task: comprehension of individual papers, construction and validation of knowledge taxonomies, identification of research gaps, and synthesis and production of complete literature reviews. Each module embeds an explicit verification discipline and standardized AI attribution practice. Prerequisite-free and discipline-agnostic, the course enrolls upper-level undergraduates and graduate students across all fields with differentiated assessment expectations. Pre- and post-course survey data from the inaugural offering indicate substantial self-reported confidence gains, with the largest in hallucination detection (d = +1.45), responsible AI use (d = +1.33), and AI attribution practice (d = +2.40), consistent with the course's design emphasis. The course constitutes a replicable model for the emerging genre of AI research literacy curricula.

Education
Arxiv· 1 May 2026

Designing Ethical Learning for Agentic AI: Toegye Yi Hwang's Ethical Emotion Regulation Framework

arXiv:2604.26958v1 Announce Type: new Abstract: Agentic AI systems capable of autonomous goal setting and proactive intervention introduce new challenges for regulating moral-emotional processes in learning environments. Existing frameworks typically treat emotion as reactive feedback or engagement optimization, overlooking the need for normative regulation across autonomous decision cycles. This paper proposes an ethical emotion regulation framework for agentic AI learning design inspired by Toegye Yi Hwang's moral-emotional philosophy. The Ethical Emotion Feedback System (EEFS) is reconstructed as a five-stage architecture aligned with agentic cycles, articulating stage-specific design principles and scenario classifications. An EEFS Evaluation Instrument is introduced to enable systematic assessment of moral-emotional alignment in agentic AI systems.

Education
Arxiv· 1 May 2026

Learning-to-Explain through 20Q Gaming: An Explainable Recommender for Cybersecurity Education

arXiv:2604.26964v1 Announce Type: new Abstract: The growing sophistication of contemporary cyber threats necessitates a more effective and adaptive approach to cybersecurity training; traditional learning methods rarely provide the intuitive, adaptive experiences that such training requires. This article presents a new educational framework, "Learning to Explain Cybersecurity with the Q20 Game," based on explainable AI (XAI): an educational game designed to make learning more interactive. We propose a novel, game-inspired framework, the Explainable Q20 Cybersecurity Recommender (EQ-20CR), that learns to elicit the minimal set of evidential facts needed to justify a cybersecurity defensive action. By casting "Why should I execute this mitigation?" as a 20-questions (Q20) game, a policy-based reinforcement-learning (RL) agent actively queries an environment until it can both (i) recommend the optimal security education and (ii) explain that decision with a concise dialogue trace. The article draws from "Playing 20 Question Game with Policy-Based Reinforcement Learning" [1] and "Learning-to-Explain: Recommendation Reason Determination through Q20 Gaming" [2]. The RL agent leads the user through a sequence of questions to recognize and articulate a targeted cybersecurity concept, attack vector, or defense strategy, gradually exposing users to informative questions that reveal complicated material in a structured way at an adaptive difficulty level. We describe the architecture, illustrate its application to various cybersecurity concepts through case studies, and discuss its transformative potential for cybersecurity training and awareness.
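The Q20 idea underlying EQ-20CR can be sketched with a toy greedy strategy: at each turn, ask the question whose yes/no answer best halves the remaining candidate set, and keep the asked sequence as a concise explanation trace. The concepts, questions, and greedy policy here are illustrative assumptions; the paper uses a learned RL policy instead:

```python
def pick_question(candidates, questions):
    """questions maps each question to the set of candidates whose answer is
    yes. Choose the question that splits `candidates` most evenly."""
    best_q, best_gap = None, None
    for q, yes_set in questions.items():
        yes = len(candidates & yes_set)
        gap = abs(yes - (len(candidates) - yes))
        if best_gap is None or gap < best_gap:
            best_q, best_gap = q, gap
    return best_q

def play(target, candidates, questions, max_turns=20):
    """Narrow the candidate set with up to 20 questions; the returned trace
    of (question, answer) pairs doubles as the explanation."""
    trace = []
    candidates = set(candidates)
    for _ in range(max_turns):
        if len(candidates) <= 1:
            break
        q = pick_question(candidates, questions)
        answer = target in questions[q]  # simulated oracle answer
        trace.append((q, answer))
        candidates &= questions[q] if answer else (candidates - questions[q])
    return candidates, trace
```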

EducationEconomics & Markets
Arxiv· 1 May 2026

The Impact of LLM Self-Consistency and Reasoning Effort on Automated Scoring Accuracy and Cost

arXiv:2604.26954v1 Announce Type: new Abstract: Strategic model selection and reasoning settings are more effective than ensembling for optimizing automated scoring with large language models (LLMs). We examined self-consistency (intra-model majority voting) and reasoning effort for scoring conversation-based assessment items in high school mathematics, evaluating 900 student conversations agains
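Self-consistency in this sense is simple to sketch: sample the scorer several times and take the modal score. `sample_score` below is a stand-in for a real (stochastic) LLM scoring call, not the paper's pipeline:

```python
from collections import Counter

def self_consistent_score(sample_score, k=5):
    """Intra-model majority voting: call the scorer k times and return the
    most common score among the samples."""
    votes = [sample_score() for _ in range(k)]
    return Counter(votes).most_common(1)[0][0]
```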

EducationLabor & Society
Arxiv· 1 May 2026

Unpacking Vibe Coding: Help-Seeking Processes in Student-AI Interactions While Programming

arXiv:2604.27134v1 Announce Type: new Abstract: Generative AI is reshaping higher education programming through vibe coding, where students collaborate with AI via natural language rather than writing code line-by-line. We conceptualize this practice as help-seeking, analyzing 19,418 interaction turns from 110 undergraduate students. Using inductive coding and Heterogeneous Transition Network Ana

EducationEconomics & Markets
Arxiv· 30 Apr 2026

Marshall meets Bartik: Revisiting the mysteries of the trade

arXiv:2604.26457v1 Announce Type: new Abstract: We identify a causal effect of top inventor inflows on the patent productivity of local inventors by combining the idea-generating process described by Marshall (1890) with the Bartik (1991) instruments involving the state taxes and commuting zone characteristics of the United States. We find that local productivity gains go beyond organizational boundaries and co-inventor relationships, which implies the partially nonexcludable good nature of knowledge in a spatial economy and pertains to the mysteries of the trade in the air. Our counterfactual experiment suggests that the spatial distribution of inventive activity is substantially distorted by the presence of state tax differences.

EducationLabor & Society
Arxiv· 30 Apr 2026

Sociodemographic Biases in Educational Counselling by Large Language Models

arXiv:2604.25932v1 Announce Type: new Abstract: As Large Language Models (LLMs) are increasingly integrated into educational settings, understanding their potential biases is critical. This study examines sociodemographic biases in LLM-based educational counselling. We evaluate responses from six LLMs answering questions about 900 vignettes describing students in diverse circumstances. Each vignette is systematically tested across 14 sociodemographic identifiers - spanning race and gender, socioeconomic status, and immigrant background - along with a control condition, yielding 243,000 model responses. Our findings indicate that (1) all models exhibit measurable biases, (2) bias patterns partially align with documented human biases but diverge in notable ways, (3) the magnitude of these biases is strongly influenced by the precision of the student descriptions, where vague or minimal information amplifies disparities nearly threefold, while concrete, individualised metrics substantially reduce them, and (4) bias profiles vary substantially across models. These results demonstrate the importance of context-rich and personalised educational representations, suggesting that AI-driven educational decisions benefit from detailed student-specific information to promote fairness and equity.

EducationLabor & Society
Arxiv· 30 Apr 2026

Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context

arXiv:2604.26494v1 Announce Type: cross Abstract: Generative AI tools are widely used by youth and have introduced new privacy and safety challenges. While prior research has explored youth safety in GenAI within Western contexts, it often overlooks the cultural, religious, and social dimensions of technology use that strongly shape youths' digital experiences in countries like Saudi Arabia. To address this gap, this study explores how children (aged 7 to 17), parents, and teachers interact with GenAI tools and perceive their risks through a non-Western lens. Using a mixed-methods approach, we analyzed 736 Reddit and 1,262 X (Twitter) posts and conducted interviews with 31 Saudi Arabian participants (8 youth, 13 parents, 10 teachers). Our findings highlight context-dependent and relational privacy and safety concerns around GenAI in a non-Western context, often shaped by communal structures and prescribed norms. We found significant risks tied to youths' disclosure of personal and family information, which conflicts with culturally rooted expectations of modesty, privacy, and honor, particularly when youth seek emotional support from GenAI. These risks are further compounded by socioeconomic factors such as cost-saving practices that lead to the use of shared GenAI accounts (e.g., ChatGPT) within families or even among strangers. We provide design implications reflecting parents' and teachers' expectations of how youth should use GenAI. This work lays the groundwork for inclusive, context-sensitive parental controls that adhere to cultural norms and values.

EducationAdoption & Impact
Arxiv· 30 Apr 2026

Human-in-the-Loop Benchmarking of Heterogeneous LLMs for Automated Competency Assessment in Secondary Level Mathematics

arXiv:2604.26607v1 Announce Type: new Abstract: As Competency-Based Education (CBE) is gaining traction around the world, the shift from marks-based assessment to qualitative competency mapping is a manual challenge for educators. This paper tackles the bottleneck issue by suggesting a "Human-in-the-Loop" benchmarking framework to assess the effectiveness of multiple LLMs in automating secondary-level mathematics assessment. Based on the Grade 10 Optional Mathematics curriculum in Nepal, we created a multi-dimensional rubric for four topics and four cross-cutting competencies: Comprehension, Knowledge, Operational Fluency, and Behavior and Correlation. The multi-provider ensemble, consisting of open-weight models -- Eagle (Llama 3.1-8B) and Orion (Llama 3.3-70B) -- and proprietary frontier models Nova (Gemini 2.5 Flash) and Lyra (Gemini 3 Pro), was benchmarked against a ground truth defined by two senior mathematics faculty members (kappa_w = 0.8652). The findings show a marked "Architecture-compatibility gap". Although the Gemini-based Mixture-of-Experts (Sparse MoE) models achieved "Fair Agreement" (kappa_w ~ 0.38), the larger Orion (70B) model exhibited "No Agreement" (kappa_w = -0.0261), suggesting that architectural compliance with instruction constraints outweighs the scale of raw parameters in rubric-constrained tasks. We conclude that while LLMs are not yet suitable for autonomous certification, they provide high-value assistive support for preliminary evidence extraction within a "Human-in-the-Loop" framework.
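The kappa_w values quoted in the abstract are quadratic-weighted Cohen's kappa. The textbook formula is short enough to sketch directly; this is the standard statistic, not the authors' code:

```python
def quadratic_weighted_kappa(a, b, n_cat):
    """Quadratic-weighted Cohen's kappa between two raters' labels
    (integers 0..n_cat-1). 1.0 = perfect agreement, 0 = chance,
    negative = systematic disagreement."""
    # Observed confusion matrix
    O = [[0] * n_cat for _ in range(n_cat)]
    for x, y in zip(a, b):
        O[x][y] += 1
    total = len(a)
    ra = [sum(O[i]) for i in range(n_cat)]                 # rater-A marginals
    rb = [sum(O[i][j] for i in range(n_cat)) for j in range(n_cat)]  # rater-B
    num = den = 0.0
    for i in range(n_cat):
        for j in range(n_cat):
            w = ((i - j) ** 2) / ((n_cat - 1) ** 2)        # quadratic weight
            E = ra[i] * rb[j] / total                      # expected by chance
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den if den else 1.0
```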

Education
insidehighered.com· 30 Apr 2026

As AI Skills Surge, Entry-Level Jobs Lag

AI-related skills are appearing more often in internship and full-time job postings, a new Handshake report finds, and colleges have a role to play in helping students navigate the rise of AI in a tight job market. As artificial intelligence becomes embedded in both college classrooms and the job market, the report finds a rapidly closing gap between student adoption of AI tools and employer demand for those skills. The job platform's Class of 2026 report drew on data collected last month from 1,248 bachelor's degree students graduating this year from nearly 500 institutions nationwide. It shows AI adoption among seniors has shifted from equally split to nearly universal: 85 percent now report using

EducationLabor & Society
insidehighered.com· 30 Apr 2026

4 in 10 Students Say AI Will Influence Their Career Choice

Students report feeling "uncertain," "concerned," "nervous" and "depressed" about how AI will impact their future careers, a new survey finds; the high cost of living also deters some from pursuing a college degree. Asked how they feel about AI's impact on their future career, half of student respondents answered, "Uncertain." Nearly half—42 percent—of college-eligible students say that artificial intelligence will influence which career they pursue, and 10 percent report that they have already changed their planned major due to concerns about AI, according to a report released Tuesday. "AI is upending the value equation in hi

EducationLabor & Society
Tech For Good· 30 Apr 2026

Tech For Good - Beyond free initiatives: Rethinking the UK’s AI talent strategy

Faye Ellis, Principal Training Architect at Pluralsight, explores why the UK’s AI talent strategy must evolve beyond free courses to deliver measurable business impact and close the skills gap.

EducationLabor & Society
education.economictimes.indiatimes.com· 30 Apr 2026

AI in Education: Lessons from India, Europe, and the U.S., ETEducation

AI's new classroom: what India, Europe, and the U.S. can teach each other. By Shubhashree S. Iyer and Ritu Gupta. The article compares India's rapid AI adoption in education, Europe's return to traditional learning methods, and the U.S.'s cautious experimentation, arguing that AI in classrooms must balance innovation with human oversight, ethics, fairness, and strong educational safeguards. India's higher education system is embracing AI-based assessments, Scandinavia is retreating to pen-and-paper, and U.S. universities are cautiously experimenting. Together, these paths reveal how AI in education must be balanced with control. When Indian higher education

Education
Arxiv· 29 Apr 2026

Curiosity and Metacognition: Towards a Unified Framework for Learning and Education in the Age of AI

arXiv:2604.25648v1 Announce Type: new Abstract: This chapter examines the relationship between curiosity and metacognition as critical drivers of autonomous and self-regulated learning. We synthesize recent research to propose a unified framework integrating behavioral, computational, and psychoeducational dimensions, arguing that curiosity, i.e. the intrinsic drive to acquire new knowledge, relies fundamentally on metacognitive monitoring and control. From an educational perspective, we evaluate interventions designed to enhance curiosity in classroom settings. While promising, our review indicates that these interventions yield mixed results, often proving differentially effective for struggling learners, thereby underscoring the necessity for approaches tailored to individual profiles. Finally, we address the paradigm shift introduced by Generative AI. While Large Language Models (LLMs) offer unprecedented scalability for personalized inquiry, we argue that their default interaction modes pose significant risks to the dynamics of curiosity-driven learning. To mitigate these challenges, we review strategies to transform AI from a potential cognitive shortcut into a powerful partner for sustained epistemic development.

Education
Arxiv· 29 Apr 2026

Hands-on PDC in Undergraduate Computing Education

arXiv:2604.25812v1 Announce Type: new Abstract: Parallel and Distributed Computing (PDC) is a critical yet conceptually challenging area of the undergraduate computer science curriculum. While students often encounter these concepts in theory, few gain hands-on experience in real high-performance computing (HPC) environments. Research shows that students engaged in project-based learning retain knowledge more effectively and develop a deeper understanding of concepts taught in the classroom. This paper presents a practical assignment in which students engage directly with the University of Florida's HiPerGator supercomputer to implement and benchmark matrix multiplication using Python and C (via POSIX threads and OpenMP). Students navigate batch scheduling, core allocation, and performance tuning, experiences that are rarely accessible at the undergraduate level. We describe the assignment in detail and provide a three-year evaluation across multiple course offerings, highlighting how structured access to real HPC infrastructure can deepen student understanding of parallelism and multithreading.
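The assignment's core exercise, splitting matrix-multiplication rows across workers and comparing timings, can be sketched in a few lines of threaded Python. Note this is a structural sketch: CPython's GIL limits true CPU parallelism, whereas the course's C versions with pthreads/OpenMP get real speedups.

```python
import threading

def matmul_rows(A, B, C, rows):
    """Compute the assigned rows of C = A @ B."""
    n, k = len(B[0]), len(B)
    for i in rows:
        for j in range(n):
            C[i][j] = sum(A[i][p] * B[p][j] for p in range(k))

def parallel_matmul(A, B, num_threads=4):
    """Strided row partitioning: thread t handles rows t, t+T, t+2T, ..."""
    m, n = len(A), len(B[0])
    C = [[0] * n for _ in range(m)]
    chunks = [range(t, m, num_threads) for t in range(num_threads)]
    threads = [threading.Thread(target=matmul_rows, args=(A, B, C, ch))
               for ch in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return C
```

Wrapping the call in a timer and varying `num_threads` mirrors the benchmarking part of the assignment.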

EducationLabor & Society
Human Resources Online· 29 Apr 2026

AI is changing jobs. Is your workforce ready to keep up? | Human Resources Online

Johnathan Sim, Domain Chair, Technology, Innovation & Analytics at Temasek Polytechnic School of Business, outlines how AI is reshaping jobs, why organisations must move beyond basic adoption, and how redesigning work, building AI literacy, and strengthening human skills will drive productivity ...

PaywallEducationLabor & Society
NYT· 29 Apr 2026

In Backlash Against Tech in Schools, Parents Are Winning Rollbacks

From Salt Lake City to New York City, parents are demanding more sway over the digital tools that schools give children.

Education
Arxiv· 29 Apr 2026

From Prototype to Classroom: An Intelligent Tutoring System for Quantum Education

arXiv:2604.24807v1 Announce Type: new Abstract: Quantum computing instructors face a compounding problem: the concepts are counterintuitive, the mathematical formalism is dense, and qualified faculty are scarce outside a small number of well-resourced institutions. Our prior work introduced a knowledge-graph-augmented tutoring prototype with two specialized LLM agents: a Teaching Agent for dynamic interaction and a Lesson Planning Agent for lesson generation. Validated on simulated runs rather than in a real course, that prototype left open whether more aggressive agent specialization would be needed to handle the full range of quantum education tasks under real student load. This paper answers the three questions that the prototype could not answer. Can agent specialization solve the reliability problem in a domain as technically demanding as quantum information science? Can the system run in a real course, not a demonstration? Does the instructor gain actionable intelligence from the deployment? We present ITAS (Intelligent Teaching Assistant System), a multi-agent tutoring system built around four contributions: a five-module QIS curriculum grounded in Watrous's information-first framework, a Spoke-and-Wheel teaching architecture with quantum-specialized agents, a cloud infrastructure designed for production use and regulatory compliance, and a conversational analytics layer for instructors and content developers. Piloted in a quantum computing course at Old Dominion University, the system supports all three answers: deployment evidence is consistent with specialization addressing the task-boundary failures observed in the prototype, cloud infrastructure supports classroom-scale concurrency at sub-textbook cost, and the analytics agent surfaces curriculum gaps the instructor could not otherwise see.

Education
Arxiv· 29 Apr 2026

Barriers and Enablers of Online Instruction in Hospitality Education in the Philippines: An Exploratory Study

arXiv:2604.25047v1 Announce Type: new Abstract: This study examined the barriers and enablers of online instruction in hospitality education. A sequential exploratory design was implemented with hospitality teachers from both public and private higher educational institutions in the Philippines. Thematic analysis of interviews identified four key themes: technological barriers, pedagogical challenges, institutional and personal support, and integration of artificial intelligence (AI). These themes were transformed into survey constructs and tested for reliability. Pedagogical challenges, including difficulties in teaching hands-on subjects and sustaining student engagement, emerged as the most critical concerns. Technological barriers such as unstable internet and limited devices were moderately rated, while institutional and personal support received mixed evaluations. Teachers viewed AI integration as helpful but also expressed caution and emphasized the need for training. Reliability analysis showed acceptable to good internal consistency across constructs. The findings highlight the importance of strengthening pedagogical training, providing clear institutional support, and fostering responsible competence in AI use. Future studies should validate these results with larger and more diverse samples.

Education
Arxiv· 29 Apr 2026

Towards the Development of Detection of Learned Helplessness in Mathematics: Design and Data Collection Challenges from a Developing Country Perspective

arXiv:2604.25054v1 Announce Type: new Abstract: This study investigates challenges in the design, data collection, and implementation of a web-based Tutoring System (TS) for teaching linear equations within a developing country context. Originally designed as an Android app, the system was redeveloped as a web application to facilitate cross-platform access and data collection. This redesign enabled enhanced tracking through interaction logs and included features like problem skipping, hints, difficulty-based problem sequencing, and game modes with adaptable progression (e.g., easy-to-hard, hard-to-easy). The main objective was to document the design and data collection challenges encountered while developing a model capable of detecting learned helplessness in students' behavior as they used a web application to solve linear equations. Challenges included outdated devices, unreliable internet, and logistical constraints such as limited session durations and delays in obtaining approvals. Environmental disruptions like class cancellations and curriculum gaps further complicated the process, with only 118 out of 410 students eligible and actively participating. These obstacles highlight the complexities of collecting interaction data for detecting learned helplessness in real-world, resource-constrained educational settings.

Education
Arxiv· 29 Apr 2026

Adoption of TikTok as a Learning Tool in Physical Education: Evidence from the Philippines

arXiv:2604.25049v1 Announce Type: new Abstract: This study examines the factors that influence the adoption of TikTok as a learning tool for physical education (PE)-related content among tertiary students in the Philippines. The study applies the Technology Acceptance Model (TAM) and Uses and Gratification Theory (UGT) to assess Information Seeking, Personal Identity, Social Interaction, Entertainment, Perceived Usefulness (PU), Perceived Ease of Use (PEOU), and Intention to Use (IU). A cross-sectional design and Structural Equation Modeling (SEM) were employed. The sample included 1,075 regular TikTok users with an average age of 19 years, the majority of whom were female. The analysis revealed that PU and PEOU were the strongest predictors of the intention to use TikTok for PE-related content. The results indicate that TikTok provides an engaging and accessible medium that supports active learning and participation in PE. The study offers empirical evidence from the Philippines and contributes to the academic discussion on the role of short-form video platforms in PE.

EducationLabor & Society
Arxiv· 29 Apr 2026

Coasting Through Class: Learning Opportunity Loss from Practice Avoidance During Individual Seatwork

arXiv:2604.25014v1 Announce Type: new Abstract: Measures of disengagement provide insights into unproductive use of learning opportunities. Although measures of active disengagement, such as gaming the system and mind-wandering, are well studied, loss of practice time due to outright task avoidance remains relatively understudied. The current study addresses this gap by extending existing within-task measures (idle time) with two new session-level measures (delayed start and early stop) to capture loss of practice time due to task avoidance. We characterize the combined lost time as coasted time and the associated behavior as coasting behavior. Using ASSISTments logs (N = 1,425), we find that students dedicate only 40% of available classwork time to math practice and coast through the remaining 60%. Of the coasted time, 36% resulted from delayed starts, 2% from mid-practice idling, and 62% from stopping early. Delayed start and early stop showed moderate temporal stability (G = 0.73 and 0.71, respectively), suggesting that coasting is a consistent behavioral pattern. Even after excluding early stops attributable to assignment completion (i.e., early stop = 0), coasted time remained substantial at 32%. While we observe significant differences in coasting by gender and IEP status, we do not observe them by other demographic factors or school locale. Critically, students who continued working beyond the first assignment completion ("extra effort") performed significantly better on standardized tests. For research, coasting offers a new lens on opportunity loss by combining session-level disengagement with within-task disengagement. For practitioners, our results highlight the need for platform affordances that support sustained engagement and more productive use of available practice time.

EducationLabor & Society
Arxiv· 28 Apr 2026

Early Academic Capital as the Causal Origin of Dropout in Constrained Educational Systems -- Evidence from Longitudinal Data and Structural Causal Models

arXiv:2604.22772v1 Announce Type: new Abstract: Dropout in higher education is commonly analysed through observable academic events such as course failure or repetition. However, these event-based perspectives may obscure the underlying structural dynamics that shape student trajectories. In this study, we adopt a causal computational social science approach to identify the origins of dropout in a constrained engineering curriculum. Using longitudinal administrative data from 16,868 students who survived to their second active term, and a leakage-free panel design, we estimate the causal effect of early academic capital accumulation on three-year dropout. Treatment is defined as low early progress (passing at most 1 subject by the end of the second term). We employ G-estimation of structural nested mean models, complemented by marginal structural models with inverse probability weighting. We find a large and robust causal effect: low early academic capital increases dropout probability by 25.3 percentage points (G-estimation), closely matched by a 27.4 pp estimate from IPTW models. This effect is approximately twice as large as the estimated direct impact of later academic events such as first-time gateway-course repetition (12.7 pp). These findings suggest that dropout does not originate in isolated academic failures, but in early trajectory misalignment between academic progress and system-imposed temporal constraints. This perspective shifts the focus of intervention from downstream events to early-stage trajectory formation.

Education
TechRadar· 28 Apr 2026

‘You feel radicalized’: A Meta AI exec watched agents beat her top workers. Now she’s built a nonprofit to help Gen Z find jobs before they disappear | TechRadar

The entry-level job market has been shrunk by AI, and Gen Z needs new tools to find work.

EducationAdoption & Impact
Forbes· 28 Apr 2026

How Higher Ed Can Put The Student AI Bill Of Rights To Work

94% of higher ed staff use AI at work. 46% can't name a governing policy. The Student AI Bill of Rights closes the gap if institutions wire it into procurement.

EducationAdoption & Impact
Government Technology· 28 Apr 2026

Opinion: 5 AI Moves Leaders Must Make for Next School Year

If this past school year was about adults figuring out how to adapt systems and approaches to AI, the next school year should be about students actually experiencing something better because of the work the adults did.

EducationAdoption & Impact
Arxiv· 28 Apr 2026

Cross-Course Generalizability of SRL-Aligned Predictive Models Using Digital Learning Traces

arXiv:2604.22812v1 Announce Type: new Abstract: STEM dropout rates remain high at universities, particularly in computer science programs with theory-intensive courses. Digital learning environments now capture rich behavioral data that could help identify struggling students early, yet the generalizability of data-driven prediction models across courses and institutions remains uncertain. Guided by self-regulated learning (SRL) theory, this study analyzed multimodal digital-trace data from three undergraduate theoretical computer science courses (N1 = 137, N2 = 104, N3 = 148) at two universities. Weekly SRL-aligned digital-trace indicators were modeled using Elastic Net, Random Forest, and XGBoost to evaluate predictive performance over time and across settings, and model calibration both within and across courses. Early prediction of at-risk students was feasible, with SRL-related behaviors such as time management, effort regulation, and sustained engagement emerging as key predictors. While Random Forest achieved the highest in-sample accuracy, Elastic Net generalized more robustly across contexts. Out-of-sample accuracy and calibration declined between institutions with different base rates, underscoring the contextual nature of predictive analytics in higher education. These findings suggest that digital learning traces enable early identification of at-risk students within courses, but generalizing predictive models beyond their original context requires caution, particularly if the at-risk rates differ between contexts.

EducationLabor & Society
Tech Edition· 28 Apr 2026

Gen Z faces rising job fears as former Meta executive launches AI skills nonprofit - Tech Edition

Former Meta executive launches nonprofit to help Gen Z gain AI skills as fears grow over automation and job security.

EducationAdoption & Impact
Arxiv· 28 Apr 2026

Learning in Blocks: A Multi Agent Debate Assisted Personalized Adaptive Learning Framework for Language Learning

arXiv:2604.22770v1 Announce Type: new Abstract: Most digital language learning curricula rely on discrete-item quizzes that test recall rather than applied conversational proficiency. When progression is driven by quiz performance, learners can advance despite persistent gaps in using grammar and vocabulary during interaction. Recent work on LLM-based judging suggests a path toward scoring open-ended conversations, but using interaction evidence to drive progression and review requires scoring protocols that are reliable and validated. We introduce Learning in Blocks, a framework that grounds progression in demonstrated conversational competence evaluated using CEFR-aligned rubrics. The framework employs heterogeneous multi-agent debate (HeteroMAD) in two stages: a scoring stage, in which role-specialized agents independently evaluate Grammar, Vocabulary, and Interactive Communication and debate conflicting judgments, after which a judge synthesizes consensus scores; and a recommendation stage that identifies specific grammar skills and vocabulary topics for targeted review. Progression requires demonstrating 70% mastery, and spaced review targets identified weaknesses to counter skill decay. We benchmark four scoring and recommendation methods on CEFR A2 conversations annotated by ESL experts. HeteroMAD achieves superior score agreement, with a degree of variation of 0.23 and a recommendation acceptability of 90.91%. An 8-week study with 180 CEFR A2 learners demonstrates that combining rubric-aligned scoring and recommendation with spaced review and mastery-based progression produces better learning outcomes than feedback alone.

EducationLabor & Society
ThePrint· 28 Apr 2026

Lesson for India from the West: AI impact on white-collar jobs

Since 2022, as AI has spread in developed economies, it has been leading to another round of polarisation: middle-class jobs are being lost in offices rather than in factories.

EducationTechnology & Infrastructure
Arxiv· 28 Apr 2026

When VLMs 'Fix' Students: Identifying and Penalizing Over-Correction in the Evaluation of Multi-line Handwritten Math OCR

arXiv:2604.22774v1 Announce Type: new Abstract: Accurate transcription of handwritten mathematics is crucial for educational AI systems, yet current benchmarks fail to evaluate this capability properly. Most prior studies focus on single-line expressions and rely on lexical metrics such as BLEU, which fail to assess the semantic reasoning across multi-line student solutions. In this paper, we present the first systematic study of multi-line handwritten math Optical Character Recognition (OCR), revealing a critical failure mode of Vision-Language Models (VLMs): over-correction. Instead of faithfully transcribing a student's work, these models often "fix" errors, thereby hiding the very mistakes an educational assessment aims to detect. To address this, we propose PINK (Penalized INK-based score), a semantic evaluation metric that leverages a Large Language Model (LLM) for rubric-based grading and explicitly penalizes over-correction. Our comprehensive evaluation of 15 state-of-the-art VLMs on the FERMAT dataset reveals substantial ranking reversals compared to BLEU: models like GPT-4o are heavily penalized for aggressive over-correction, whereas Gemini 2.5 Flash emerges as the most faithful transcriber. Furthermore, human expert studies show that PINK aligns significantly better with human judgment (55.0% preference over BLEU's 39.5%), providing a more reliable evaluation framework for handwritten math OCR in educational settings.

EducationLabor & Society
Arxiv· 28 Apr 2026

Buying the Right to Monitor: Editorial Design in AI-Assisted Peer Review

arXiv:2604.23645v1 Announce Type: new Abstract: Generative AI acts as a disruptive technological shock to evaluative organizations. In academic peer review, it enters both sides of the market: authors use AI to polish submissions, and reviewers use it to generate plausible reports without exerting evaluative effort. We develop a three-sided equilibrium model to analyze this dual adoption and deri

Education
Fortune· 27 Apr 2026

Reed Hastings says AI will drive a return to humanities: ‘I’d be doubling down on emotional skills’

Reed Hastings donated $50 million to his alma mater Bowdoin College to fund a program integrating AI and the humanities.

Education
LinkedIn· 27 Apr 2026

Artificial Intelligence (AI) Online Training Courses | LinkedIn Learning, formerly Lynda.com

Artificial intelligence (AI) is a field of computer science that aims to create machines and systems that can perform tasks that normally require human intelligence. Whether you want to explore the fundamentals, applications, or ethics of AI, ...

EducationGeopolitics
Arxiv· 27 Apr 2026

Rethinking Publication: A Certification Framework for AI-Enabled Research

arXiv:2604.22026v1 Announce Type: new Abstract: AI research pipelines now produce a growing share of publishable academic output, including work that meets existing peer-review standards for quality and novelty. Yet the publication system was built on the assumption of universal human authorship and lacks a principled way to evaluate knowledge produced through automated pipelines. This paper proposes a two-layer certification framework that separates knowledge quality assessment from grading of human contribution, allowing publication systems to handle pipeline-generated work consistently and transparently without creating new institutions. The paper uses normative-conceptual analysis, framework design under four explicit constraints, and dry-run validation on two representative submission cases spanning key attribution scenarios. The framework grades contributions as Category A (pipeline-reachable), Category B (requiring human direction at identifiable stages), and Category C (beyond current pipeline reach at the formulation stage). It also introduces benchmark slots for fully disclosed automated research as both a transparent publication track and a calibration instrument for reviewer judgment. Contribution grading is contemporaneous, based on pipeline capability at the time of submission. Dry-run validation shows that the framework can certify knowledge appropriately while tolerating irreducible attribution uncertainty. The paper argues that publication has always certified both that knowledge is valid and that a human made it. AI pipelines separate these functions for the first time. The framework is implementable within existing editorial infrastructure and grounds recognition of frontier human contribution in epistemic achievement rather than unverifiable claims of human origin.

EducationLabor & Society
Forbes· 27 Apr 2026

AI Adoption Depends On People. What 15,000 Workers Reveal

Discover insights from broad surveys about what it will take to succeed with AI. Learn how to implement AI with people practices that ensure adoption and adaptation.

Education
The Ticker· 27 Apr 2026

AI implementation in the workplace must not threaten Gen Z jobs - The Ticker

It has become apparent that artificial intelligence will have a prevalent role in the modern world. Generation Z is considered the “tech-savvy” generation because devices are such a profound part of their education from a young age. More than half of Gen Z in the U.S. are regularly using ...

EducationAdoption & Impact
Times Higher Education· 27 Apr 2026

Research funders ‘flooded with AI-assisted applications’

A 142 per cent rise in bids for Marie Curie fellowships shows peer review must adapt to the ChatGPT era, says Nature study

EducationLabor & Society
Ethan Mollick· 27 Apr 2026

AI-Driven Output Surge Strains the Integrity of Scientific Publishing Systems

Analysis of management journal submissions reveals that AI is being used to increase the volume of research output rather than its quality. This trend threatens to overwhelm the human-centric systems of scientific review and validation.

EducationLabor & Society
AEI· 27 Apr 2026

Training for the Wrong Job | American Enterprise Institute - AEI

Workforce development is not being displaced by AI. It is being asked to solve one of the defining problems of the next decade: how to grow judgment when the experiences that produce it are increasingly pressured by automation.

EducationLabor & Society
Metaintro· 27 Apr 2026

9 Cash and Retraining Lifelines for... | Metaintro

AI-driven layoffs in 2026? Tap nine cash stipends, retraining grants, and employer-funded programs for displaced workers. Get the help you have earned.

EducationAdoption & Impact
Arxiv· 27 Apr 2026

A Systematic AI Adoption Framework for Higher Education: From Student GenAI Usage to Institutional Integration

arXiv:2604.22030v1 Announce Type: new Abstract: The rapid development of GenAI technologies is transforming learning, assessment, and academic production in higher education. Despite increasing student adoption, many institutions lack operational mechanisms to systematically align regulations and curricula with evolving generative artificial intelligence practices, creating regulatory ambiguity and academic integrity risks. This study investigates how students utilize generative artificial intelligence tools in computer science-oriented disciplines and develops a structured, lightweight framework supporting institutional adaptation to pervasive GenAI usage. We conducted a case study at the University of Applied Sciences and Arts Hannover (Germany), combining document analysis with an online survey (N = 151) targeting Business Information Systems and E-Government students. Quantitative responses were analyzed statistically, while open-ended responses underwent thematic synthesis. Generative artificial intelligence adoption was widespread, with ChatGPT as the dominant tool. Students primarily used generative artificial intelligence for research assistance, programming support, and text processing. However, substantial policy uncertainty was observed: many students were unaware of or unsure about institutional generative artificial intelligence regulations. Document analysis revealed regulatory gaps, ambiguous terminology, and inconsistencies between formal rules and teaching practices. To address these shortcomings, we propose the AI Adoption Framework for Higher Education, an iterative and operational model integrating document analysis, empirical observation, synthesis of findings, and targeted updates of regulations and curricula. The framework addresses governance, assessment validity, and academic integrity under generative artificial intelligence conditions and provides practical guidance for institutional adaptation.

EducationLabor & Society
nature.com· 25 Apr 2026

Mechanisms of employability development for applied talents in the artificial intelligence era

Mechanisms of employability development for applied talents in the artificial intelligence era | Scientific Reports. Published 25 Apr 2026. Subjects: Business and management; Engineering; Mathematics and computing; Science, technology and society. Abstract: Rapid AI integration is transforming global labor markets while wi

EducationAdoption & Impact
Ethan Mollick· 25 Apr 2026

The Impending Disruption of Academic Research Processes by Autonomous AI Agents

AI agents are demonstrating the capability to independently reconstruct complex research papers, challenging traditional academic workflows. This shift necessitates a re-evaluation of research validation processes and the integration of AI in peer review.

Education
herald.kaist.ac.kr· 25 Apr 2026

[2026 April Feature] AI’s Real Impact On The Labor Market - The KAIST Herald

By Hyunseung Lee, Staff Reporter. Approved 2026.04.25. Ever since Large Language Models (LLMs) were made accessible to the general public in 2022 with the launch of ChatGPT, anxiety about the impacts of AI on today’s labor market has only grown. With a rapid increase in LLMs’ capabilities and usage, this anxiety spreads across many career paths, whether low-skilled, repetitive jobs or high-skilled careers that require advanced degrees. With more data over the years, an increasing number of studies are reinforcing this fear. Most recently, Anthropic, the AI company behind popular LLM model Cl

Education
Substack· 24 Apr 2026

Ep 73. Why students cancelled an AI surveillance contract

They all sit at the intersection of risk, security, education, and technology/artificial intelligence (AI). School shootings, mass shootings, extremism, terrorism, and systemic gun violence are not separate domains.

EducationLabor & Society
Arxiv· 24 Apr 2026

Learning AI Without a STEM Background: Mixed-Methods Evidence from a Diverse, Mixed-Cohort AIED Program

arXiv:2604.20870v1 Announce Type: new Abstract: Despite growing interest in AI education, most AIED initiatives remain narrowly targeted toward STEM-prepared students, limiting participation by non-STEM learners and adults seeking to engage with AI in public-interest, policy, or workforce contexts. This paper presents and evaluates an NSF-funded, innovative mixed-cohort AI education model that intentionally integrates non-STEM undergraduates and adult learners into a shared learning environment centered on ethical reasoning, socio-technical judgment, and applied AI literacy rather than technical proficiency alone. Drawing on mixed-methods data from course surveys, open-ended reflections, and educator reports, we examine learners' academic agency, confidence navigating AI concepts, critical engagement with ethical tradeoffs, and perceived expansion of postsecondary and career trajectories. Quantitative results indicate significant gains in confidence and perceived relevance of AI across cohorts' participants, while qualitative analyses reveal a consistent emphasis on responsibility, judgment, and contextual reasoning over technical mastery. Instructors and near-peer mentors corroborated high levels of engagement and productive challenge, particularly in dialogic and scenario-based learning activities. Our findings suggest that human-centered instructional supports, such as ethical scaffolding, mentorship, and structured discussion, are essential components of equitable AI education, especially in heterogeneous and non-traditional learner populations. We argue that ethical judgment should be treated as a core learning outcome in AIED alongside AI literacy, and we offer design implications for expanding access to AI education in policy-relevant and workforce-adjacent contexts.

EducationLabor & Society
EdTech Innovation Hub· 24 Apr 2026

OpenAI finds 18 percent of US jobs face AI automation risk in new framework | ETIH EdTech News — EdTech Innovation Hub

OpenAI Chief Economist Ronnie Chatterji and economist Alex Martin Richmond publish AI Jobs Transition Framework finding 18 percent of US jobs face short-term AI automation risk, while teachers, nurses and lawyers stay insulated. ChatGPT usage tripled in highest-exposure roles. ETIH edtech news on AI

EducationLabor & Society
Arxiv· 24 Apr 2026

Sibling Rivalry in the Ivory Tower: Mass Science, Expanding Scholarly Families, and the Reshaping of Academic Stratification

arXiv:2604.20864v1 Announce Type: new Abstract: This paper investigates mechanisms underlying scientific stratification in the transition from elite to mass science. Existing scholarship has examined stratification through the Matthew effect framework, but this approach is increasingly limited as mass, team-based research becomes dominant. While scientists now share institutions and lineages, substantial career outcome differences remain unexplained. We propose integrating demographic concepts into science studies. Drawing parallels between biological families and scholarly lineages as fundamental reproductive units, we adapt the birth order concept to examine how doctoral student sequence within a lineage shapes career trajectories. Using data on over one million U.S. doctoral graduates, we find that later students of the same advisor systematically underperform earlier ones across multiple achievement dimensions, both short and long term. Examining underlying mechanisms reveals that although advisors invest comparable resources in all students, later students receive less cognitive stimulation from mature scholars than peers and specialize in narrower niches under peer differentiation pressure. Both of these factors constrain intellectual development and subsequent success. By introducing a demographic framework, this paper offers new perspectives on scientific stratification and demonstrates how demographic concepts can fruitfully analyze broader social and epistemic systems.

EducationLabor & Society
Arxiv· 24 Apr 2026

Beyond the Binary: Motivations, Challenges, and Strategies of Transgender and Non-binary Software Engineering Students

arXiv:2604.20866v1 Announce Type: new Abstract: When software is designed by people from diverse identities and experiences, it is more likely to be inclusive and address a broader range of user needs. However, for transgender and non-binary students in software engineering, the path to becoming such creators may be marked by unique challenges. While existing research explores gender minorities in professional software engineering, limited attention has been given to their educational journey, a key phase for ensuring equal opportunities and preventing exclusion in the tech workforce. This study aims to address this gap by investigating the experiences of transgender and non-binary students in software engineering, with a particular focus on their motivations for entering the field, the obstacles they encounter, and potential strategies for fostering greater inclusivity within their academic environments. Based on 13 semi-structured interviews with transgender and non-binary students across the globe, we found that gender identity plays an indirect role in their decision to pursue software engineering. Key factors include the appeal of remote work and a personal desire to create more inclusive technologies. Although the participants did not report direct discrimination within their universities, many described experiencing verbal insults, judgment, intolerance, and hostility, all of which negatively impacted their mental health. These challenges often stem from socio-cultural norms and a lack of representation. Despite these obstacles, the students remain committed to their choice of study but call for greater institutional support, structural changes, and increased representation. From these findings, we suggest concrete steps to support students, regardless of gender identity.

Education
Arxiv· 23 Apr 2026

AI to Learn 2.0: A Deliverable-Oriented Governance Framework and Maturity Rubric for Opaque AI in Learning-Intensive Domains

arXiv:2604.19751v1 Announce Type: new Abstract: Generative AI is entering research, education, and professional work faster than current governance frameworks can specify how AI-assisted outputs should be judged in learning-intensive settings. The central problem is proxy failure: a polished artifact can be useful while no longer serving as credible evidence of the human understanding, judgment, or transfer ability that the work is supposed to cultivate or certify. This paper proposes AI to Learn 2.0, a deliverable-oriented governance framework for AI-assisted work. Rather than claiming element-wise novelty, it reorganizes adjacent ideas around the final deliverable package, distinguishes artifact residual from capability residual, and operationalizes the result through a five-part package, a seven-dimension maturity rubric, gate thresholds on critical dimensions, and a companion capability-evidence ladder. AI to Learn 2.0 allows opaque AI during exploration, drafting, hypothesis generation, and workflow design, but requires that the released deliverable be usable, auditable, transferable, and justifiable without the original large language model or cloud API. In learning-intensive contexts, it additionally requires context-appropriate human-attributable evidence of explanation or transfer. Worked scoring across contrastive cases, including coursework substitution, a symbolic-regression governance contrast, teacher-audited national-exam practice forms, and a self-hosted lecture-to-quiz pipeline with deterministic quality control, shows how the framework separates polished substitution workflows from bounded, auditable, and handoff-ready AI-assisted workflows. AI to Learn 2.0 is proposed as a governance instrument for structured third-party review where capability preservation, accountability, and validity boundaries matter.

Education
Arxiv· 23 Apr 2026

Exploring Data Augmentation and Resampling Strategies for Transformer-Based Models to Address Class Imbalance in AI Scoring of Scientific Explanations in NGSS Classroom

arXiv:2604.19754v1 Announce Type: new Abstract: Automated scoring of students' scientific explanations offers the potential for immediate, accurate feedback, yet class imbalance in rubric categories, particularly those capturing advanced reasoning, remains a challenge. This study investigates augmentation strategies to improve transformer-based text classification of student responses to a physical science assessment based on an NGSS-aligned learning progression. The dataset consists of 1,466 high school responses scored on 11 binary-coded analytic categories. The rubric identifies six important components, including the scientific ideas needed for a complete explanation, along with five common incomplete or inaccurate ideas. Using SciBERT as a baseline, we applied fine-tuning and tested three augmentation strategies: (1) GPT-4-generated synthetic responses, (2) EASE, a word-level extraction and filtering approach, and (3) ALP (Augmentation using Lexicalized Probabilistic context-free grammars), a phrase-level extraction approach. While fine-tuning SciBERT improved recall over the baseline, augmentation substantially enhanced performance, with GPT data boosting both precision and recall, and ALP achieving perfect precision, recall, and F1 scores on most of the severely imbalanced categories (5, 6, 7, and 9). Across all rubric categories, EASE augmentation substantially increased alignment with human scoring for both scientific ideas (Categories 1-6) and inaccurate ideas (Categories 7-11). We compared the augmentation strategies to a traditional oversampling method (SMOTE) in an effort to avoid overfitting and retain the novice-level data critical for learning progression alignment. Findings demonstrate that targeted augmentation can address severe imbalance while preserving conceptual coverage, offering a scalable solution for automated learning-progression-aligned scoring in science education.
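The oversampling baseline the study compares against can be sketched in a few lines. This toy balances a binary-labeled set by randomly duplicating minority-class items; SMOTE itself interpolates numeric feature vectors, which this sketch does not attempt.

```python
import random

def oversample(examples, labels, seed=0):
    """Duplicate minority-class items until both binary classes are balanced.

    A toy stand-in for random oversampling; not the paper's pipeline.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    pos = [x for x, y in zip(examples, labels) if y == 1]
    neg = [x for x, y in zip(examples, labels) if y == 0]
    minority, majority, min_lab, maj_lab = (
        (pos, neg, 1, 0) if len(pos) < len(neg) else (neg, pos, 0, 1)
    )
    # Randomly repeat minority items to match the majority count.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    xs = majority + minority + extra
    ys = [maj_lab] * len(majority) + [min_lab] * (len(minority) + len(extra))
    return xs, ys
```

Text augmentation (GPT-generated or grammar-extracted responses) replaces the `extra` duplicates with newly written examples, which is what lets it add coverage rather than just reweighting.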

Education
Arxiv· 23 Apr 2026

Using Learning Theories to Evolve Human-Centered XAI: Future Perspectives and Challenges

arXiv:2604.19788v1 Announce Type: new Abstract: As Artificial Intelligence (AI) systems continue to grow in size and complexity, so does the difficulty of the quest for AI transparency. In a world of large models and complex AI systems, why do we explain AI, and what should we explain? While explanations serve multiple functions, in the face of complexity humans have used and continue to use explanations to foster learning. In this position paper, we discuss how learning theories can be infused into the XAI lifecycle, as well as the key opportunities and challenges of adopting a learner-centered approach to assess, design, and evaluate AI explanations. Building on past work, we argue that a learner-centered approach to Explainable AI (XAI) can enhance human agency and ease the mitigation of XAI risks, helping evolve the practice of human-centered XAI.

EducationEconomics & Markets
Arxiv· 23 Apr 2026

Do Small Language Models Know When They're Wrong? Confidence-Based Cascade Scoring for Educational Assessment

arXiv:2604.19781v1 Announce Type: new Abstract: Automated scoring of student work at scale requires balancing accuracy against cost and latency. In "cascade" systems, small language models (LMs) handle easier scoring tasks while escalating harder ones to larger LMs -- but the challenge is determining which cases to escalate. We explore verbalized confidence -- asking the LM to state a numerical confidence alongside its prediction -- as a routing signal. Using 2,100 expert-scored decisions from student-AI math conversations, we evaluate cascade systems built from GPT-5.4, Claude 4.5+, and Gemini 3.1 model pairs. We find that: (1) confidence discrimination varies widely across small LMs, with the best achieving AUROC 0.857 and the worst producing a near-degenerate confidence distribution; (2) confidence tracks human scoring difficulty, with lower LM confidence where annotators disagreed and took longer to score; (3) the best cascade approached large-LM accuracy (kappa 0.802 vs. 0.819) at 76% lower cost and 61% lower latency. Confidence discrimination is the bottleneck: the two small LMs with meaningful confidence variance yielded cascades with no statistically detectable kappa loss, while the third -- whose confidence was near-degenerate -- could not close the accuracy gap regardless of threshold. Small LMs with strong discrimination let practitioners trade cost for accuracy along the frontier; those without it do not.

EducationEconomics & Markets
Rest of World· 23 Apr 2026

Edtech’s pandemic boom is over as K-12 startup funding craters - Rest of World

Venture capital is moving away from K-12 edtech worldwide as investors prioritize AI tools and workforce training with clearer returns.

EducationLabor & Society
Newsclip· 23 Apr 2026

Rishi Sunak Warns: AI is Flattening Young People's Job Market · Newsclip

In a recent commentary, former Prime Minister Rishi Sunak highlighted the urgent challenges facing young job seekers due to AI advancements. He argues for immediate policy changes to revive recruitment opportunities and reimagine employment in an age increasingly dominated by technology.

Education
The Verge· 23 Apr 2026

AI threatens to widen the inequality gap. | The Verge

A new survey suggests that AI will only help the rich get richer. “The rhetoric out there is that the tools are going to be democratizing. But the reality is that . . . you require a certain degree of education, abstract and quantitative skills, familiarity with computers and coding in order ...

EducationLabor & Society
HR Dive· 23 Apr 2026

Organizations and employees race to prove AI expertise | HR Dive

Skillsoft reported a 994% year-over-year increase in artificial intelligence-related skills benchmark completions as companies look to justify their tech investments.

EducationLabor & Society
Arxiv· 22 Apr 2026

Critical Thinking in the Age of Artificial Intelligence: A Survey-Based Study with Machine Learning Insights

arXiv:2604.18590v1 Announce Type: cross Abstract: The growing use of artificial intelligence (AI) in education, professional work, and everyday problem-solving has raised important questions about its effect on human reasoning. While AI can improve efficiency, save time, and support learning, repeated dependence on it may also encourage cognitive offloading, reduce productive struggle, and weaken independent critical thinking. This paper investigates the relationship between AI-use behavior and critical-thinking performance through an interview-based survey combined with short logic and reasoning tasks. The findings reveal a mixed pattern: participants largely viewed AI as a tool for speed, convenience, and learning support, yet many also reported reduced patience for sustained effort. Objective reasoning performance varied considerably across individuals, and the analyses suggest that reduced patience and stronger dependence-related tendencies are more closely associated with lower reasoning performance than background characteristics alone. Exploratory clustering further indicates that AI users do not form a single homogeneous group, but instead reflect tentative behavioral profiles, including over-reliant users, mixed-strategy users, and balanced support-seekers. Although the findings are exploratory, they indicate that AI does not affect critical thinking in a uniformly negative or positive way. Instead, its influence appears to depend on the manner in which it is used. The paper therefore argues that effective human-AI collaboration should support reflection, verification, and sustained cognitive effort rather than substitute for them.

EducationAdoption & Impact
Arxiv· 22 Apr 2026

Fairness Audits of Institutional Risk Models in Deployed ML Pipelines

arXiv:2604.19468v1 Announce Type: new Abstract: Fairness audits of institutional risk models are critical for understanding how deployed machine learning pipelines allocate resources. Drawing on multi-year collaboration with Centennial College, where our prior ethnographic work introduced the ASP-HEI Cycle, we present a replica-based audit of a deployed Early Warning System (EWS), replicating its model using institutional training data and design specifications. We evaluate disparities by gender, age, and residency status across the full pipeline (training data, model predictions, and post-processing) using standard fairness metrics. Our audit reveals systematic misallocation: younger, male, and international students are disproportionately flagged for support, even when many ultimately succeed, while older and female students with comparable dropout risk are under-identified. Post-processing amplifies these disparities by collapsing heterogeneous probabilities into percentile-based risk tiers. This work provides a replicable methodology for auditing institutional ML systems and shows how disparities emerge and compound across stages, highlighting the importance of evaluating construct validity alongside statistical fairness. It contributes one empirical thread to a broader program investigating algorithms, student data, and power in higher education.

EducationLabor & Society
Arxiv· 22 Apr 2026

Students Know AI Should Not Replace Thinking, but How Do They Regulate It? The TACO Framework for Human-AI Cognitive Partnership

arXiv:2604.18737v1 Announce Type: new Abstract: As generative artificial intelligence becomes increasingly embedded in educational practice, a central concern is whether students use AI as cognitive support or as a substitute for thinking. Prior research shows that learners recognise this boundary conceptually and acknowledge that "AI should not replace thinking." However, whether such awareness translates into structured regulation during actual AI use remains unclear. Drawing on data from Hong Kong secondary students, this study examines how learners perceive their management of the boundary between assistance and outsourcing in practice. Findings show that awareness did not consistently translate into regulation; ethical belief did not necessarily lead to strategic execution; and conceptual endorsement did not guarantee operational behaviour. These findings suggest that the challenge is not teaching students that AI should not replace thinking, as they already know this, but providing them with structured mechanisms to regulate how AI is used within learning processes. In response, the study introduces the TACO framework (Think-Ask-Check-Own), a process-oriented model designed to operationalise the boundary between cognitive support and cognitive substitution. By shifting attention from ethical awareness to cognitive regulation, the study contributes a learner-grounded approach to sustaining AI as a dynamic cognitive partner in education.

EducationLabor & Society
Arxiv· 22 Apr 2026

Physical and Augmented Reality based Playful Activities for Refresher Training of ASHA Workers in India

arXiv:2604.18959v1 Announce Type: cross Abstract: Recent health surveys in India highlight alarming child malnutrition levels and low rates of complete child immunization in many parts of the country. Previous research reports that the conventional training pedagogy for CHWs (Community Healthcare Workers), or ASHAs (Accredited Social Health Activists), in India is ineffective in enhancing their capacity. Considering that CHWs are increasingly equipped with smartphones, this calls for a rethinking of their training pedagogy using an ICT approach. Two refresher training tools were developed to make learning the child immunization schedule more exciting and conceptually engaging for ASHAs. The physical and AR (Augmented Reality) versions of the designed card games were compared for effectiveness and knowledge retention, pre- and post-intervention, through questionnaire tests conducted immediately before and after multiple play sessions. The AR-based play was found to be better for learning and knowledge retention, with more engagement, mainly due to its interactive and intuitive nature.

EducationLabor & Society
Arxiv· 22 Apr 2026

Who Benefits from AI? Self-Selection, Skill Gap, and the Hidden Costs of AI Feedback

arXiv:2409.18660v2 Announce Type: replace Abstract: Feedback from artificial intelligence (AI) is increasingly easy to access and research has already established that people learn from it. But individuals choose when and how to seek such feedback, and more engaged and motivated individuals may seek it more, creating an illusion of effectiveness that masks self-selection. We investigate how the e

Education
Punch· 21 Apr 2026

Babcock varsity VC urges AI adoption, commercialisation of innovation

Learn why Babcock University's VC champions AI adoption and commercialising research to create economic and societal value in the 21st-century economy.

Education
Axios AI+· 21 Apr 2026

Microsoft Partners with Construction Unions

Microsoft is partnering with construction unions to train workers for the AI economy. The partnership will offer free AI literacy courses and industry-recognized credentials to millions of skilled craft professionals.

EducationLabor & Society
Effective Altruism Forum· 21 Apr 2026

Introducing the Labor Automation Forecasting Hub — EA Forum

Metaculus has launched the Labor Automation Forecasting Hub, a continuously updated view of how AI may reshape the US labor market through 2035. …

Education
Higher Ed Dive· 21 Apr 2026

Employers say they struggle to find graduates with the right AI skillset | Higher Ed Dive

AI is changing entry‑level roles amid a rapid decrease in the durability of skills, leaving workforce readiness at risk, a report found.

EducationLabor & Society
The Economic Times· 21 Apr 2026

Job augmentation: Humans and AI collaborating rather than competing — Vishaal Gupta, Pearson - The Economic Times

Pearson's Vishaal Gupta sees AI as a job enhancer, not a destroyer, especially for India's youth. He emphasizes applied learning and personalized education to leverage India's graduate pool for AI skills. Pearson is actively partnering with government and industry to bridge the skills gap and ...

EducationLabor & Society
Click2Houston· 21 Apr 2026

AI is changing how Texas universities teach computer science as job market slows

Admissions to Texas computer science programs are down roughly 20%, professors said, but they still see a future for their students.

Education
Daily Brew· 20 Apr 2026

India Trains 2,500 Artisans in AI

India's MSME Ministry has trained over 2,500 artisans in AI tools under the PM Vishwakarma Scheme, aiming to enhance digital skills and market competitiveness.

Education
Guardian· 20 Apr 2026

Teacher v chatbot: my journey into the classroom in the age of AI – podcast

I was a newcomer, negotiating all of the usual classroom difficulties for the first time. Throwing AI into the mix felt like downing a coffee in the middle of a panic attack. By Peter C Baker. Read by Adam Sims.

Education
Linkdood· 20 Apr 2026

Economists Are Changing Their Minds on Why AI's Threat to Jobs Is Now Serious - Linkdood Technologies

This challenges the traditional ... and job creation. Economists are now focusing on the risk of structural displacement—a scenario where: ... One of the most alarming trends is the erosion of entry-level jobs. ... AI is now automating many of these positions, creating a bottleneck for new workers entering the labor market...

EducationLabor & Society
World Bank· 20 Apr 2026

Where Training Meets Opportunity: Djibouti’s Path from Skills to Jobs

At the heart of the Djibouti Skills Development for Employment Project - known as the SKILLS Project - supported by the World Bank Group and implemented in partnership with the Ministries of Education and Labor; lies a strong emphasis on market relevance. This principle underpins how training ...

EducationTechnology & Infrastructure
Product Hunt· 20 Apr 2026

Vantage in Google Labs

Practice & assess future-ready skills with AI-simulated team.

EducationLabor & Society
Times of India· 19 Apr 2026

How fears of AI disruption are reshaping career choices and increasing interest in graduate education among youth - The Times of India

As artificial intelligence reshapes hiring patterns and disrupts traditional entry-level roles, a growing number of young graduates are rethinking the.

EducationLabor & Society
Daily Brew· 19 Apr 2026

Punjab Leads AI Education Revolution with Google and Intel Workshops, PSEB Embeds AI in Curriculum

Punjab's education system is set to integrate AI and robotics into its curriculum, starting this academic session, as announced at a recent conference.

Education
Washington Post· 17 Apr 2026

Opinion | AI detectors are hurting honest students. Schools should ban them. - The Washington Post

Increasingly, teachers and schools fretting over students using artificial intelligence to complete their assignments are turning to AI detectors to catch would-be cheaters. That may sound like a smar.

EducationAdoption & Impact
Arxiv· 17 Apr 2026

Enhancing Large Language Model-Based Systems for End-to-End Circuit Analysis Problem Solving

arXiv:2512.10159v2 Announce Type: replace Abstract: LLMs have demonstrated strong performance in data-rich domains such as programming, yet their reliability in engineering tasks remains limited. Circuit analysis--requiring multimodal understanding and precise mathematical reasoning--highlights these challenges. Although Gemini 2.5 Pro shows improved capabilities in diagram interpretation and analog-circuit reasoning, it still struggles to consistently produce correct solutions when given both textual problem descriptions and circuit diagrams. Meanwhile, engineering education demands scalable AI tools capable of generating accurate solutions for applications such as automated homework feedback. This paper presents an enhanced end-to-end circuit problem-solving framework built upon Gemini. We first conduct a systematic benchmark on undergraduate circuit problems and identify two key failure modes: 1) circuit-recognition hallucinations, particularly incorrect source polarity detection, and 2) reasoning-process hallucinations, such as incorrect current direction assumptions. To address recognition errors, we integrate a fine-tuned YOLO detector and OpenCV-based processing to isolate voltage and current sources, enabling Gemini to accurately re-identify source polarities from cropped images. To mitigate reasoning errors, we introduce an ngspice-driven verification loop, in which simulation discrepancies trigger iterative solution refinement with optional HITL feedback. Experimental results demonstrate that the proposed pipeline achieves 97.59% accuracy, substantially outperforming Gemini's baseline of 79.52%. Furthermore, on four variations of hand-drawn circuit diagrams, accuracy improves from 56.06%--71.21% to 93.94%--95.45% with statistically significant gains. These results highlight the robustness, scalability, and practical applicability of the proposed framework for engineering education and real-world circuit analysis tasks.
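The simulation-driven refinement loop described above can be sketched with stubs standing in for the LLM solver and the ngspice run (all function names and values here are hypothetical, not the paper's implementation):

```python
def solve(problem, feedback=None):
    """Stand-in LLM solver: returns a numeric answer, nudged by feedback."""
    return 4.8 if feedback is None else 5.0

def simulate(problem):
    """Stand-in circuit simulator providing the reference value."""
    return 5.0

def solve_with_verification(problem, tol=0.01, max_iters=3):
    """Accept the answer only when simulation agrees; otherwise feed the
    discrepancy back to the solver and retry."""
    answer = solve(problem)
    for _ in range(max_iters):
        truth = simulate(problem)
        if abs(answer - truth) <= tol:
            return answer, True  # simulation agrees: accept
        feedback = f"simulation got {truth}, you got {answer}"
        answer = solve(problem, feedback)  # refine with the discrepancy
    return answer, False  # still disagreeing: flag for human review
```

In the paper's framing, the `False` branch is where optional human-in-the-loop feedback enters, rather than accepting an unverified answer.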

EducationLabor & Society
Arxiv· 17 Apr 2026

Generative AI in Higher Education: Evidence from an Elite College

arXiv:2508.00717v2 Announce Type: replace Abstract: Generative AI is transforming higher education, yet systematic evidence on student adoption, usage patterns, and perceived learning impacts remains scarce. Using survey data from a selective U.S. college, we document rapid generative-AI adoption, reaching over 80 percent within two years of ChatGPT's release. Adoption varies sharply across disci

Education
Arxiv· 16 Apr 2026

Who Decides in AI-Mediated Learning? The Agency Allocation Framework

arXiv:2604.13534v1 Announce Type: new Abstract: As AI-mediated learning systems increasingly shape how learners plan, decide, and progress through education, learner agency is becoming both more consequential and harder to conceptualize at scale. Existing research often treats agency as a proxy for engagement and self-regulation, leaving unclear who actually holds decision-making authority in large-scale, automated learning environments. This paper reframes learner agency as the allocation of decision authority across learners, educators, institutions, and AI systems. We introduce the Agency Allocation Framework (AAF) for analyzing how decisions are distributed, how choices are architected, what evidence supports them, and over what time horizons their consequences unfold. Drawing on a focused review of Learning@Scale literature and an illustrative tutoring-system example, we identify four recurring challenges for studying learner agency at scale: (1) conceptual ambiguity, (2) reliance on behavioral proxies, (3) trade-offs between efficiency and learner control, and (4) the redistribution of agency through AI-mediated systems. Rather than advocating more or less automation, the AAF supports systematic analysis of when AI scaffolds learners' capacity to act and when it substitutes for it. By making decision authority explicit, the framework provides researchers and designers with analytic tools for studying, comparing, and evaluating agency-preserving learning systems in increasingly automated educational contexts.

Education
Arxiv· 16 Apr 2026

Lessons from Skill Development Programs -- Livelihood College of Dhamtari

arXiv:2604.13317v1 Announce Type: new Abstract: Skill training is crucial for enabling dignified livelihood opportunities. In India, various schemes and initiatives aim to provide skill training in different domains, with ICT and digital technologies playing a vital role. However, there is limited research on on-ground capacities & constraints and the use of digital tools in these programs. In this study, we look into the mobilization, counseling, and training stages of the 5-stage skill development process (which also includes placement and tracking) adopted in Dhamtari's Livelihood College in Chhattisgarh, India, and other programs nationwide. Through the immersion/crystallization approach and mixed-method analysis, including GIS mapping, video analysis of CCTV streams, quantitative analysis, and unstructured conversations with administrators, trainers, mobilizers, counselors, and nearby industry personnel over more than a year, we identified three major challenges: a lack of inclusive and gendered access to skilling; a tedious manual counseling process with insufficient support staff; and inconsistent trainee attendance alongside sub-standard utilization of digital assets. Finally, we discuss ways to improve access to skill training by leveraging Vocational Training Partners (VTPs), ways to improve the utilization of existing digital assets, and considerations for improving the counseling process. We conclude that skill development programs currently lack the institutional elements that enable effective information exchange between stakeholders, creating information bottlenecks that result in inefficiencies and hinder service delivery. In sum, our study informs the HCI and ICTD literature on the on-ground challenges and constraints faced by stakeholders and the role of technology in supporting such initiatives.

EducationLabor & Society
Siliconrepublic· 16 Apr 2026

Are we ready to place lab experiments in non-human hands?

Stephen D Turner of the University of Virginia explores the importance of governance and oversight around AI in the design and execution of lab experiments.

Education
Arxiv· 16 Apr 2026

Automatically Inferring Teachers' Geometric Content Knowledge: A Skills Based Approach

arXiv:2604.13666v1 Announce Type: new Abstract: Assessing teachers' geometric content knowledge is essential for geometry instructional quality and student learning, but difficult to scale. The Van Hiele model characterizes geometric reasoning through five hierarchical levels. Traditional Van Hiele assessment relies on manual expert analysis of open-ended responses. This process is time-consuming, costly, and prevents large-scale evaluation. This study develops an automated approach for diagnosing teachers' Van Hiele reasoning levels using large language models grounded in educational theory. Our central hypothesis is that integrating explicit skills information significantly improves Van Hiele classification. In collaboration with mathematics education researchers, we built a structured skills dictionary decomposing the Van Hiele levels into 33 fine-grained reasoning skills. Through a custom web platform, 31 pre-service teachers solved geometry problems, yielding 226 responses. Expert researchers then annotated each response with its Van Hiele level and demonstrated skills from the dictionary. Using this annotated dataset, we implemented two classification approaches: (1) retrieval-augmented generation (RAG) and (2) multi-task learning (MTL). Each approach compared a skills-aware variant incorporating the skills dictionary against a baseline without skills information. Results showed that for both methods, skills-aware variants significantly outperformed baselines across multiple evaluation metrics. This work provides the first automated approach for Van Hiele level classification from open-ended responses. It offers a scalable, theory-grounded method for assessing teachers' geometric reasoning that can enable large-scale evaluation and support adaptive, personalized teacher learning systems.
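The skills-aware retrieval step can be illustrated with a toy word-overlap ranker over a miniature skills dictionary. The entries and scoring below are invented for illustration; the paper's dictionary contains 33 expert-curated skills and its retrieval is embedding-based RAG, not keyword matching.

```python
# Hypothetical miniature skills dictionary (the paper's has 33 entries).
SKILLS = {
    "visual recognition": "identifies shapes by overall appearance",
    "property analysis": "reasons about sides, angles, and their relations",
    "formal deduction": "constructs proofs from definitions and axioms",
}

def retrieve_skills(response, k=2):
    """Rank skill entries by word overlap with the response text, so the
    top-k definitions can ground the Van Hiele classification prompt."""
    words = set(response.lower().split())
    scored = sorted(
        SKILLS.items(),
        key=lambda item: len(words & set(item[1].replace(",", "").split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]
```

The retrieved definitions would then be inserted into the classifier prompt, which is what the paper finds gives the skills-aware variants their edge over skill-free baselines.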

EducationLabor & Society
EdTech Innovation Hub· 16 Apr 2026

Stanford AI Index 2026 shows AI adoption pressure on education and jobs | ETIH EdTech News — EdTech Innovation Hub

AI in education, edtech AI tools, and AI skills demand surge in Stanford AI Index 2026. Up to 84 percent of students use AI, while jobs, infrastructure, and policy struggle to keep pace. ETIH edtech news covers digital learning, workforce skills, and AI adoption impact

EducationLabor & Society
Arxiv· 15 Apr 2026

Can Persona-Prompted LLMs Emulate Subgroup Values? An Empirical Analysis of Generalisability and Fairness in Cultural Alignment

arXiv:2604.12851v1 Announce Type: new Abstract: Despite their global prevalence, many Large Language Models (LLMs) are aligned to a monolithic, often Western-centric set of values. This paper investigates the more challenging task of fine-grained value alignment: examining whether LLMs can emulate the distinct cultural values of demographic subgroups. Using Singapore as a case study and the World Values Survey (WVS), we examine the value landscape and show that even state-of-the-art models like GPT-4.1 achieve only 57.4% accuracy in predicting subgroup modal preferences. We construct a dataset of over 20,000 samples to train and evaluate a range of models. We demonstrate that simple fine-tuning on structured numerical preferences yields substantial gains, improving accuracy on unseen, out-of-distribution subgroups by an average of 17.4%. These gains partially transfer to open-ended generation. However, we find significant pre-existing performance biases, where models better emulate young, male, Chinese, and Christian personas. Furthermore, while fine-tuning improves average performance, it widens the disparity between subgroups when measured by distance-aware metrics. Our work offers insights into the limits and fairness implications of subgroup-level cultural alignment.

EducationLabor & Society
Arxiv· 15 Apr 2026

Mathematics Teachers Interactions with a Multi-Agent System for Personalized Problem Generation

arXiv:2604.12066v1 Announce Type: new Abstract: Large language models can increasingly adapt educational tasks to learners' characteristics. In the present study, we examine a multi-agent, teacher-in-the-loop system for personalizing middle school math problems. The teacher enters a base problem and a desired topic, the LLM generates the problem, and four AI agents then evaluate it using criteria each specializes in (mathematical accuracy, authenticity, readability, and realism). Eight middle school mathematics teachers created 212 problems in ASSISTments using the system and assigned them to their students. We find that both teachers and students wanted to modify the fine-grained personalized elements of the problems' real-world contexts, signaling issues with authenticity and fit. Although the agents detected many realism issues as the problems were being written, few realism issues were noted by teachers and students in the final versions. Issues with readability and mathematical hallucinations were also somewhat rare. Implications for multi-agent personalization systems that support teacher control are given.

EducationLabor & Society
Arxiv· 15 Apr 2026

GoodPoint: Learning Constructive Scientific Paper Feedback from Author Responses

arXiv:2604.11924v1 Announce Type: new Abstract: While LLMs hold significant potential to transform scientific research, we advocate for their use to augment and empower researchers rather than to automate research without human oversight. To this end, we study constructive feedback generation, the task of producing targeted, actionable feedback that helps authors improve both their research and its presentation. In this work, we operationalize the effectiveness of feedback along two author-centric axes: validity and author action. We first curate GoodPoint-ICLR, a dataset of 19K ICLR papers with reviewer feedback annotated along both dimensions using author responses. Building on this, we introduce GoodPoint, a training recipe that leverages success signals from author responses through fine-tuning on valid and actionable feedback, together with preference optimization on both real and synthetic preference pairs. Our evaluation on a benchmark of 1.2K ICLR papers shows that a GoodPoint-trained Qwen3-8B improves the predicted success rate by 83.7% over the base model and sets a new state-of-the-art among LLMs of similar size in feedback matching on a golden human feedback set, even surpassing Gemini-3-flash in precision. We further validate these findings through an expert human study, demonstrating that GoodPoint consistently delivers higher practical value as perceived by authors.

EducationLabor & Society
Arxiv· 15 Apr 2026

Design and Deployment of a Course-Aware AI Tutor in an Introductory Programming Course

arXiv:2604.11836v1 Announce Type: new Abstract: Large Language Models (LLMs) have become part of how students solve programming tasks, offering immediate explanations and even full solutions. Previous work has highlighted that novice programmers often heavily rely on LLMs, thereby neglecting their own problem-solving skills. To address this challenge, we designed a course-specific online Python tutor that provides retrieval-augmented, course-aligned guidance without generating complete solutions. The tutor integrates a web-based programming environment with a conversational agent that offers hints, Socratic questions, and explanations grounded in course materials. Students used the system during self-study to work on homework assignments, and the tutor also supported questions about the broader course material. We collected structured student feedback and analyzed interaction logs to investigate how they engaged with the tutor's guidance. We observed that students used the tutor primarily for conceptual understanding, implementation guidance, and debugging, and perceived it as a course-aligned, context-aware learning support that encourages engagement rather than direct solution copying.

EducationLabor & Society
@emollick· 15 Apr 2026

As someone who teaches at a business school, I can tell you that the pre-professional student population (as opposed to liberal arts or those…

As someone who teaches at a business school, I can tell you that the pre-professional student population (as opposed to liberal arts or those pursuing academia or a topic of personal excitement) is extremely sensitive to signs of expected market demand in their fields of interest

Education
Arxiv· 14 Apr 2026

Good Question! The Effect of Positive Feedback on Contributions to Online Public Goods

arXiv:2604.10360v1 Announce Type: cross Abstract: Online platforms where volunteers answer each other's questions are important sources of knowledge, yet participation is declining. We ran a pre-registered experiment on Stack Overflow, one of the largest Q&A communities for software development (N = 22,856), randomly assigning newly posted questions to receive an anonymous upvote. Within four weeks, treated users were 6.3% more likely to ask another question and 12.9% more likely to answer someone else's question. A second upvote produced no additional effect. The effect on answering was larger, more persistent, and still significant at twelve weeks. Next, we examine how much of these effects are due to algorithmic amplification, since upvotes also raise a question's rank and visibility. Algorithmic amplification is not important for the effect on asking additional questions, but it matters a lot for the effect on answering other questions. The increase in visibility increases the probability that another user provides an answer, and that experience appears to shift the poster toward broader community participation.

EducationLabor & Society
Arxiv· 14 Apr 2026

Assessing the Pedagogical Readiness of Large Language Models as AI Tutors in Low-Resource Contexts: A Case Study of Nepal's K-10 Curriculum

arXiv:2604.09619v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into educational ecosystems promises to democratize access to personalized tutoring, yet the readiness of these systems for deployment in non-Western, low-resource contexts remains critically under-examined. This study presents a systematic evaluation of four state-of-the-art LLMs--GPT-4o, Claude Sonnet 4, Qwen3-235B, and Kimi K2--assessing their capacity to function as AI tutors within the specific curricular and cultural framework of Nepal's Grade 5-10 Science and Mathematics education. We introduce a novel, curriculum-aligned benchmark and a fine-grained evaluation framework inspired by the "natural language unit tests" paradigm, decomposing pedagogical efficacy into seven binary metrics: Prompt Alignment, Factual Correctness, Clarity, Contextual Relevance, Engagement, Harmful Content Avoidance, and Solution Accuracy. Our results reveal a stark "curriculum-alignment gap." While frontier models (GPT-4o, Claude Sonnet 4) achieve high aggregate reliability (approximately 97%), significant deficiencies persist in pedagogical clarity and cultural contextualization. We identify two pervasive failure modes: the "Expert's Curse," where models solve complex problems but fail to explain them clearly to novices, and the "Foundational Fallacy," where performance paradoxically degrades on simpler, lower-grade material due to an inability to adapt to younger learners' cognitive constraints. Furthermore, regional models like Kimi K2 exhibit a "Contextual Blindspot," failing to provide culturally relevant examples in over 20% of interactions. These findings suggest that off-the-shelf LLMs are not yet ready for autonomous deployment in Nepalese classrooms. We propose a "human-in-the-loop" deployment strategy and offer a methodological blueprint for curriculum-specific fine-tuning to align global AI capabilities with local educational needs.

EducationLabor & Society
Fortune· 14 Apr 2026

Like Elon Musk, he was coding at 12 and became one of Google’s youngest ever CMOs—but now says Gen Z are better off ice skating than learning to code

Like Elon Musk and Mark Zuckerberg, Alon Chen coded his way to Google CMO and a seven-figure exit. Now he says AI has made the skill 'obsolete' for Gen Z

EducationLabor & Society
Arxiv· 14 Apr 2026

From Understanding to Creation: A Prerequisite-Free AI Literacy Course with Technical Depth Across Majors

arXiv:2604.09634v1 Announce Type: new Abstract: Most AI literacy courses for non-technical undergraduates emphasize conceptual breadth over technical depth. This paper describes UNIV 182, a prerequisite-free course at George Mason University that teaches undergraduates across majors to understand, use, evaluate, and build AI systems. The course is organized around five mechanisms: (1) a unifying conceptual pipeline (problem definition, data, model selection, evaluation, reflection) traversed repeatedly at increasing sophistication; (2) concurrent integration of ethical reasoning with the technical progression; (3) AI Studios, structured in-class work sessions with documentation protocols and real-time critique; (4) a cumulative assessment portfolio in which each assignment builds competencies required by the next, culminating in a co-authored field experiment on chatbot reasoning and a final project in which teams build AI-enabled artifacts and defend them before external evaluators; and (5) a custom AI agent providing structured reinforcement outside class. The paper situates this design within a comparative taxonomy of cross-major AI literacy courses and pedagogical traditions. Instructor-coded analysis of student artifacts at four assessment stages documents a progression from descriptive, intuition-based reasoning to technically grounded design with integrated safeguards, reaching the Create level of Bloom's revised taxonomy. To support adoption, the paper identifies which mechanisms are separable, which require institutional infrastructure, and how the design adapts to settings ranging from general AI literacy to discipline-embedded offerings. The course is offered as a documented resource, demonstrating that technical depth and broad accessibility can coexist when scaffolding supports both.

EducationLabor & Society
Arxiv· 14 Apr 2026

DeepReviewer 2.0: A Traceable Agentic System for Auditable Scientific Peer Review

arXiv:2604.09590v1 Announce Type: new Abstract: Automated peer review is often framed as generating fluent critique, yet reviewers and area chairs need judgments they can audit: where a concern applies, what evidence supports it, and what concrete follow-up is required. DeepReviewer 2.0 is a process-controlled agentic review system built around an output contract: it produces a traceable review package with anchored annotations, localized evidence, and executable follow-up actions, and it exports only after meeting minimum traceability and coverage budgets. Concretely, it first builds a manuscript-only claim-evidence-risk ledger and verification agenda, then performs agenda-driven retrieval and writes anchored critiques under an export gate. On 134 ICLR 2025 submissions under three fixed protocols, an un-finetuned 196B model running DeepReviewer 2.0 outperforms Gemini-3.1-Pro-preview, improving strict major-issue coverage (37.26% vs. 23.57%) and winning 71.63% of micro-averaged blind comparisons against a human review committee, while ranking first among automatic systems in our pool. We position DeepReviewer 2.0 as an assistive tool rather than a decision proxy, and note remaining gaps such as ethics-sensitive checks.

EducationLabor & Society
Daily Brew· 14 Apr 2026

Q&A: MIT SHASS and the future of education in the age of AI

An exploration of how the School of Humanities, Arts, and Social Sciences at MIT is adapting educational frameworks to address the challenges and opportunities presented by AI.

EducationLabor & Society
hiringlab.org· 14 Apr 2026

Workers Want Training but Employers Don’t Always Deliver. Can Policy Help? - Indeed Hiring Lab

Workers across countries see skill development as essential, but many feel employers treat it as less of a priority. Recent evidence from Spain shows that incentivizing companies to offer permanent contracts can lead them to offer more training. By Carlos Victoria Lanzón, Jonas Jessen & Pawel Adrjan, April 14, 2026.

EducationLabor & Society
Arxiv· 14 Apr 2026

The Division of Understanding: Specialization and Democratic Accountability

arXiv:2604.09871v1 Announce Type: new Abstract: This paper studies how the organization of production shapes democratic accountability. I propose a model in which learning economies make specialization productively efficient: most workers perform one-domain tasks, while a small set of integrators with cross-domain knowledge keep the system coherent. When policy consequences run across domains, integrators understand them better than specialists. Electoral competition then tilts targeted services toward integrators' interests, while low aggregate system knowledge weakens governance and reduces the fraction of public resources converted into citizen-valued services. Labor markets leave these civic margins unpriced, failing to internalize the political returns to system knowledge. Broadening routine specialists can therefore raise welfare relative to the market allocation. The model speaks to debates on liberal arts education and the effects of AI.

EducationLabor & Society
Arxiv· 14 Apr 2026

Explainability and Certification of AI-Generated Educational Assessments

arXiv:2604.09622v1 Announce Type: new Abstract: The rapid adoption of generative artificial intelligence (AI) in educational assessment has created new opportunities for scalable item creation, personalized feedback, and efficient formative evaluation. However, despite advances in taxonomy alignment and automated question generation, the absence of transparent, explainable, and certifiable mechanisms limits institutional and accreditation-level acceptance. This chapter proposes a comprehensive framework for explainability and certification of AI-generated assessment items, combining self-rationalization, attribution-based analysis, and post-hoc verification to produce interpretable cognitive-alignment evidence grounded in Bloom's and SOLO taxonomies. A structured certification metadata schema is introduced to capture provenance, alignment predictions, reviewer actions, and ethical indicators, enabling audit-ready documentation consistent with emerging governance requirements. A traffic-light certification workflow operationalizes these signals by distinguishing auto-certifiable items from those requiring human review or rejection. A proof-of-concept study on 500 AI-generated computer science questions demonstrates the framework's feasibility, showing improved transparency, reduced instructor workload, and enhanced auditability. The chapter concludes by outlining ethical implications, policy considerations, and directions for future research, positioning explainability and certification as essential components of trustworthy, accreditation-ready AI assessment systems.

EducationLabor & Society
Arxiv· 14 Apr 2026

Leveraging Machine Learning Techniques to Investigate Media and Information Literacy Competence in Tackling Disinformation

arXiv:2604.09635v1 Announce Type: new Abstract: This study develops machine learning models to assess Media and Information Literacy (MIL) skills specifically in the context of disinformation among students, particularly future educators and communicators. While the digital revolution has expanded access to information, it has also amplified the spread of false and misleading content, making MIL essential for fostering critical thinking and responsible media engagement. Despite its relevance, predictive modeling of MIL in relation to disinformation remains underexplored. To address this gap, a quantitative study was conducted with 723 students in education and communication programs using a validated survey. Classification and regression algorithms were applied to predict MIL competencies and identify key influencing factors. Results show that complex models outperform simpler approaches, with variables such as academic year and prior training significantly improving prediction accuracy. These findings can inform the design of targeted educational interventions and personalized strategies to enhance students' ability to critically navigate and respond to disinformation in digital environments.

Education
Substack· 13 Apr 2026

AI Explained: From Tokens to Intelligence — Class 101

They can call external tools (search engines, calculators, databases, APIs) and incorporate the results into their output. This is how a model can tell you today’s weather, execute a code snippet, or query a live database. The model itself has not changed.
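The tool-use loop the excerpt describes can be sketched in a few lines of Python. Everything here is a hypothetical illustration (the `run_with_tools` runtime, the JSON call format, and the calculator registry are assumptions of this sketch, not any particular vendor's API): the model emits a structured call, the runtime executes the tool, and the result is fed back into the next generation while the model's weights stay untouched.

```python
import json

# Hypothetical tool registry. The model never "contains" the calculator;
# it only learns to emit a call that the runtime executes on its behalf.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_with_tools(model_step, prompt: str) -> str:
    """model_step stands in for one LLM generation: it returns either a
    final answer or a JSON tool call like {"tool": ..., "input": ...}."""
    output = model_step(prompt)
    while output.strip().startswith("{"):
        call = json.loads(output)
        result = TOOLS[call["tool"]](call["input"])
        # Feed the tool result back so the model can incorporate it.
        output = model_step(f"{prompt}\nTool result: {result}")
    return output
```

With a real LLM behind `model_step`, the same loop would serve a weather API or a live database query equally well; only the tools registered around the model change, not the model itself.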

Education
Arxiv· 13 Apr 2026

Enhancing LLM Problem Solving via Tutor-Student Multi-Agent Interaction

arXiv:2604.08931v1 Announce Type: new Abstract: Human cognitive development is shaped not only by individual effort but by structured social interaction, where role-based exchanges, such as those between a tutor and a learner, enable solutions that neither could achieve alone. Inspired by these developmental principles, we ask whether a tutor-student multi-agent system can create a synergistic effect by pushing a Large Language Model (LLM) beyond what it can do within existing frameworks. To test the idea, we adopt the autonomous coding problem domain, where two agents instantiated from the same LLM are assigned asymmetric roles: a student agent generates and iteratively refines solutions, while a tutor agent provides structured evaluative feedback without access to ground-truth answers. In our proposed framework (PETITE), we aim to extract better problem-solving performance from one model by structuring its interaction through complementary roles, rather than relying on stronger supervisory models or heterogeneous ensembles. Our model is evaluated on the APPS coding benchmark against state-of-the-art approaches of Self-Consistency, Self-Refine, Multi-Agent Debate, and Multi-Agent Review. The results show that our model achieves similar or higher accuracy while consuming significantly fewer tokens. These results suggest that developmentally grounded, role-differentiated interaction structures provide a principled and resource-efficient paradigm for enhancing LLM problem-solving through structured peer-like interactions. Index Terms: Peer Tutoring, Scaffolding, Large Language Models, Multi-Agent Systems, Code Generation
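The role-asymmetric loop the abstract describes can be sketched as below, with assumptions flagged: `llm` stands in for one call to the shared base model, the role prompts are invented here, and the "LGTM" stop signal is this sketch's convention rather than the paper's actual protocol.

```python
from typing import Callable

def tutor_student_loop(
    llm: Callable[[str], str],  # one shared base model; role set only by prompt
    problem: str,
    max_rounds: int = 3,
) -> str:
    """Student drafts and revises; tutor critiques without ground truth."""
    solution = llm(f"Role: student. Solve this coding problem:\n{problem}\nCode:")
    for _ in range(max_rounds):
        feedback = llm(
            "Role: tutor. Without a reference answer, critique this solution's "
            f"correctness and edge cases.\nProblem: {problem}\n"
            f"Solution:\n{solution}\nFeedback:"
        )
        if "LGTM" in feedback:  # assumed convention for "no issues found"
            break
        solution = llm(
            "Role: student. Revise your solution using the tutor's feedback.\n"
            f"Problem: {problem}\nPrevious solution:\n{solution}\n"
            f"Feedback:\n{feedback}\nRevised code:"
        )
    return solution
```

Both roles consume calls to the same model, so any accuracy gain over single-agent prompting would come from the interaction structure, not from extra capacity.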

EducationLabor & Society
Theregister· 13 Apr 2026

China wants AI to prepare school lessons and mark homework

PLUS: Toyota wheels out basketball bot; Arm scores AI server win with SK Telecom; India ponders payment pauses to foil fraudsters; And more! Asia In Brief China’s National Data Administration last Friday published its action plan for AI in education which calls for upskilling of the nation’s citizens to ensure they can put the technology to work.…

Education
technode.global· 13 Apr 2026

The rise of AI and why governments globally are stepping in to support Gen Z in the job market - TNGlobal

There is a widespread belief that Gen Z is somehow ill-equipped for the modern workplace: unambitious, disengaged, unwilling to put in the hours. It’s an easy headline, but it is the wrong conclusion to draw. The reality is far more nuanced. Gen Z is hard-working, entrepreneurial, and represents the future of the global workforce and economy. What they are facing is not a lack of motivation, but a fundamentally different…

EducationLabor & Society
Arxiv· 13 Apr 2026

Towards Generalizable Representations of Mathematical Strategies

arXiv:2604.08693v1 Announce Type: new Abstract: Pretrained encoders for mathematical texts have achieved significant improvements on various tasks such as formula classification and information retrieval. Yet they remain limited in representing and capturing student strategies for entire solution pathways. Previously, this has been accomplished either through labor-intensive manual labeling, which does not scale, or by learning representations tied to platform-specific actions, which limits generalizability. In this work, we present a novel approach for learning problem-invariant representations of entire algebraic solution pathways. We first construct transition embeddings by computing vector differences between consecutive algebraic states encoded by high-capacity pretrained models, emphasizing transformations rather than problem-specific features. Sequence-level embeddings are then learned via SimCSE, using contrastive objectives to position semantically similar solution pathways close in embedding space while separating dissimilar strategies. We evaluate these embeddings through multiple tasks, including multi-label action classification, solution efficiency prediction, and sequence reconstruction, and demonstrate their capacity to encode meaningful strategy information. Furthermore, we derive embedding-based measures of strategy uniqueness, diversity, and conformity that correlate with both short-term and distal learning outcomes, providing scalable proxies for mathematical creativity and divergent thinking. This approach facilitates platform-agnostic and cross-problem analyses of student problem-solving behaviors, demonstrating the effectiveness of transition-based sequence embeddings for educational data mining and automated assessment.
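The transition-embedding construction in this abstract is concrete enough to sketch. In the toy version below, a hashed bag of character trigrams stands in for the high-capacity pretrained encoder, and mean pooling stands in for the SimCSE-trained sequence model, so the similarity scores are illustrative only.

```python
import numpy as np

def encode_state(state: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a pretrained math-text encoder:
    an L2-normalized hashed bag of character trigrams."""
    vec = np.zeros(dim)
    for i in range(len(state) - 2):
        vec[hash(state[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def transition_embeddings(pathway: list[str]) -> np.ndarray:
    """Difference vectors between consecutive algebraic states,
    emphasizing the transformation rather than the problem."""
    states = np.stack([encode_state(s) for s in pathway])
    return states[1:] - states[:-1]

def pathway_embedding(pathway: list[str]) -> np.ndarray:
    """Mean-pool transitions into one sequence-level vector (the paper
    instead learns this pooling contrastively with SimCSE)."""
    return transition_embeddings(pathway).mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two pathways applying the same strategy to different problems:
p1 = ["2x + 4 = 10", "2x = 6", "x = 3"]
p2 = ["3x + 6 = 15", "3x = 9", "x = 3"]
print(cosine(pathway_embedding(p1), pathway_embedding(p2)))
```

Because each transition is a difference of state embeddings, surface features shared by consecutive states largely cancel, which is what makes the representation problem-invariant in spirit.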

EducationLabor & Society
washingtonpost.com· 13 Apr 2026

The hottest college major hit a wall. What happened?

Computer science majors are disappearing. Is AI to blame? Column by Shira Ovide, The Washington Post. Shira is eager to hear from college students and their families about how you’re feeling about the job market. Drop her a line at shira.ovide@washpost.com. A lot of students took the advice to learn to code. Since the Great Recession left technology as a rare spot of optimism in American industry, computer science has been among the fastest-growing college majors in the country, according to indispensable degree data from the National Center for Education Statistics.

EducationLabor & Society
PR Newswire· 13 Apr 2026

New Pearson and AWS Global Research: 53% of Employers Struggle to Find AI-Ready Graduates

/PRNewswire/ -- Pearson (FTSE: PSON.L), the world's lifelong learning company, and Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN),...

EducationLabor & Society
hr-brew.com· 13 Apr 2026

AI readiness gap is slowing productivity gains - HR Brew

Imagine if NASA built a rocket ship, laid out a plan to go to the moon, but didn’t train the astronauts to crew the new vessel. NASA would never. In fact, the Artemis II crew trained for three years in a replica Orion capsule, practicing real-world (pun intended) scenarios in order to master the tech they’d be using away from Earth. The crew sometimes spent more than 30 hours at a time training ahead of its mission, which safely splashed down in the Pacific Ocean Friday. Compare that to your department’s last AI training session. A gap in AI readiness is limiting AI’s potential gains across industries, according to a new report from study.com on the…

PaywallEducationLabor & Society
FT· 13 Apr 2026

Robot-Proof — can the next generation keep a step ahead of the machines?

Vivienne Ming argues for a change in how we prepare the young for a near-future dominated and ‘deprofessionalised’ by AI

EducationLabor & Society
futurism.com· 13 Apr 2026

Recent Grads Say AI Is Making It Impossible to Find a Job - Futurism

As debate rages over whether AI’s effects on the job market are real or illusory, one thing is clear: recent college grads are entering a labor force that has no room for them regardless. In a poll conducted by Gallup over the final three months of 2025, a whopping 72 percent of respondents said it was a “bad time” to find a quality job. From December of last year to March, the labor force participation rate fell…

EducationLabor & Society
buildcognitiveresonance.substack.com· 13 Apr 2026

An illustrated guide to resisting "AI is inevitable" in education

Enough, enough, enough. 1. Ask the AI-in-education enthusiast to clarify their premise. (Slide created by Jane Rosenzweig, Director of Harvard College Writing Center) 2. Ask the AI-in-education enthusiast if they are familiar with recent research indicating that generative AI leads to widespread “cognitive surrender.” Shot: “Conceptually distinct from cognitive offloading, which involves strategically outsourcing a discrete task to an external tool (e.g., using a calculator), cognitive surrender represents a deeper abdication of critical evaluation, where the user relinquishes cognitive control and adopts the AI’s judgment as their own…. Across our studies, we observe that when System 3 [

Education
Substack· 12 Apr 2026

The System Works. That’s the Problem. Les Misérables AI Warning: What Hugo Saw Coming

A designer uses AI to generate work faster than her rate allows. A student uses it to complete assignments he doesn’t fully understand.

Education
WFMD-AM· 12 Apr 2026

The AI revolution threatens office jobs, but revives demand for skilled trades | 930 WFMD Free Talk

AI threatens white-collar jobs more than blue-collar trades, highlighting an urgent need to realign education with workforce demands in America.

EducationLabor & Society
@emollick· 12 Apr 2026

I had read your paper as the opposite - more senior operators were unable to find jobs after automation, and younger workers found other work. Is that incorrect?

I had read your paper as the opposite - more senior operators were unable to find jobs after automation, and younger workers found other work. Is that incorrect?

EducationLabor & Society
Exponentialview· 12 Apr 2026

🔮 Exponential View #569: When the future is uncertain, what do you teach?

On education & AI, Mythos, GLP-1s++

Education
Reddit· 11 Apr 2026

Google Engineer Uses AI to Sue Universities

A Google engineer is using AI to sue universities for racial discrimination.

Education
asianintelligence.ai· 11 Apr 2026

AGAP.AI and the Philippines’ Education-Led AI Strategy | Asian Intelligence

A source-first analysis of AGAP.AI and the Philippines’ education-led AI capacity strategy, focused on literacy, workforce formation, and the public sector. Prepared by the Asian Intelligence editorial team from cited public sources and reviewed against the site’s editorial standards. Published 11 Apr 2026.

Education
elperiodicomediterraneo.com· 11 Apr 2026

UNIVERSITY CAREERS | A Deutsche Bank study says a university degree is “no longer a guarantee of career success” following the rise of AI

Madrid, 11 Apr 2026. The labor market will never be the same after the definitive arrival of generative artificial intelligence. A recent Deutsche Bank report warns that AI has ceased to be a complementary skill and has become a make-or-break requirement in the most dynamic sectors of the economy. This transformation is producing a worrying paradox: despite their higher education, young graduates…

Education
Arxiv· 10 Apr 2026

From Core to Periphery? Assessing Remote Work’s Potential to Rebalance EU Regional Development

arXiv:2604.08252v1 Announce Type: new Abstract: The rapid expansion of remote work following the last pandemic has renewed interest in whether spatial decoupling of residence from workplace can contribute to rebalancing regional development across the European Union. This paper examines four interrelated dimensions of remote work-induced residential mobility using the R-MAP survey dataset, a large-scale cross-sectional survey of over 7,400 remote workers across Europe collected in 2024. First, the spatial direction of post-2020 relocations is analysed, revealing that mobility occurs overwhelmingly within the same urbanisation tier, with urban-to-urban moves accounting for 67% of all relocations. Counter-urban flows toward rural areas remain marginal at just 2% of moves, though their relative demographic impact on small rural populations is non-trivial. Second, the motivational structure of relocation decisions is examined, showing that quality-of-life considerations dominate (cited by 78% of movers), followed by economic and housing factors (70%), while digital infrastructure ranks among the least cited reasons. Third, amenity preferences are compared across residential contexts, documenting striking convergence between urban and rural remote workers, with statistically significant differences emerging only for public transport and restaurant access. Fourth, logistic regression models reveal that remote work intensity is a consistent positive predictor of relocation probability, with a transition from 50% to fully remote work associated with a 6.5 percentage point increase in relocation likelihood. Age, education, and industry sector also shape mobility patterns. Overall, the findings suggest that remote work primarily stretches metropolitan systems and reinforces peri-urban zones rather than triggering large-scale redistribution toward structurally weaker peripheral regions.

Education
Arxiv· 10 Apr 2026

Google, AI Literacy, and the Learning Sciences: Multiple Modes of Research, Industry, and Practice Partnerships

arXiv:2604.07601v1 Announce Type: new Abstract: Enabling AI literacy in the general population at scale is a complex challenge requiring multiple stakeholders and institutions collaborating together. Industry and technology companies are important actors with respect to AI, and as a field, we have the opportunity to consider how researchers and companies might be partners toward shared goals. In this symposium, we focus on a collection of partnership projects that all involve Google and all address AI literacy as a comparative set of examples. Through a combination of presentations, commentary, and moderated group discussion, the session will identify (1) at what points in the life cycle do research, practice, and industry partnerships clearly intersect; (2) what factors and histories shape the directional focus of the partnerships; and (3) where there may be future opportunities for new configurations of partnership that are jointly beneficial to all parties.

Education
Arxiv· 10 Apr 2026

PRISM: Evaluating a Rule-Based, Scenario-Driven Social Media Privacy Education Program for Young Autistic Adults

arXiv:2604.07531v1 Announce Type: new Abstract: Young autistic adults may garner benefits through social media but also disproportionately experience privacy harms. Prior research found that these harms often stem from perceiving the affordances of social media differently than the general population, leading to unintentional risky behaviors and interactions with others. While educational interventions have been shown to increase social media privacy literacy for the general population, research has yet to focus on effective educational interventions for autistic young adults. We address this gap by developing and deploying Privacy Rules for Inclusive Social Media (PRISM), a classroom-based educational intervention tailored to the unique risks and neurodevelopmental differences of this population. Twenty-nine autistic students with substantial (level 2) support needs participated in a 14-week social media privacy literacy class. During these classes, participants often communicated their existing rule-based "all or nothing" approaches to privacy management (such as completely disengaging from social media to avoid privacy issues). Our course focused on empowering them by providing more nuanced guidance on safe privacy practices through the use of scenario-based formats and contextual, rule-based scenarios. Using pre- and post-knowledge assessments for each of our 6 course topics, our intervention led to a statistically significant increase in their making safer social media privacy decisions. We conclude with recommendations for how privacy educators and technology designers can leverage neuro-affirming educational interventions to increase privacy literacy for autistic social media users.

Education
techaimint.substack.com· 10 Apr 2026

The role of humans in an AI world - by Leslie D'Monte

Two new reports outline how humans can meaningfully stay relevant in an AI world. AI systems today can finish in minutes what would take humans months. That’s not just acceleration but a shift toward capabilities that could surpass even the smartest humans, with or without AI assistance. Some call this “super-intelligence”. This month, two reports, one from OpenAI and another from the Center for Humane Technology (CHT), outline how to keep humans from being sidelined in this speedy race by meaningfully keeping them in the loop. In today’s edition of Mint Tech Talk: OpenAI’s blueprint for sharing AI dividends; some hard truths about AI’s race; Google Gemma 4’s open challenge; AI Tool of the Week: Google Vids; Anthropic’s Mythos, Gla…

Education
m.cnyes.com· 10 Apr 2026

Fearing replacement by AI, 47% of US college students have “seriously considered” changing majors, and one in six have done so | Anue 鉅亨網 - US Stocks Radar

Compiled for Anue (鉅亨網) by 許家華, 10 Apr 2026. A new survey shows that artificial intelligence (AI) is profoundly shaping how college students plan their careers. The State of Higher Education 2026 report, jointly published by the Lumina Foundation and Gallup, finds that about 47% of US college students have “seriously considered” switching majors because of AI, showing that AI has become a major factor in educational choices. The survey, conducted online in October 2025, covered 3,801 US students aged 18 to 59 enrolled in bachelor’s or associate degree programs. About one in six students have actually changed majors because of AI, with 13% of bachelor’s students and 19% of associate students saying they had already switched. Looking at those who considered a switch, 47% of students overall said they had given at least “quite a lot” of thought to changing majors: 42% of bachelor’s students versus 56% of associate students, suggesting the latter are more sensitive to labor-market shifts. According to Dr. Courtney Brown, the Lumina Foundation’s vice president for impact and planning, AI is changing how students “think about their futures”: influenced by media coverage, many worry that AI will displace large numbers of jobs and question whether the time and money invested in a degree are worthwhile. The survey found that uncertainty about future job prospects is the leading reason for switching majors. Students are broadly concerned with which field can secure them a job after graduation, and the higher switching rate among associate students may reflect programs more directly tied to current labor-market demand. Both groups, however, face the same dilemma: it is hard to judge which fields retain “long-term value” in the AI era. In technology and vocational fields, 27% and 17% of students respectively said they had strongly considered switching out, yet these same fields are also popular destinations to switch into, reflecting uncertainty and ambivalence about the future. Beyond major choice, AI is also shaping the decision to pursue higher education at all: about one in seven students said responding to technological advances, including AI, was a main reason for attending college, and another 12% enrolled out of concern about AI’s impact on the job market.

EducationLabor & Society
Inside Higher Ed· 10 Apr 2026

What Do Employers Mean by “AI Skills,” Anyway? (opinion)

Across the board, students wondered, ... with labor and resources.” · But in practical terms, coupled with a fear of missing out or not having a competitive edge in the job market, students expressed confusion over what skills they really need. As one student reported, “I am having a hard time finding concrete information regarding what employers mean by ‘AI skills.’ ...

Education
arXiv· 9 Apr 2026

Human-AI Collaboration Reconfigures Group Regulation from Socially Shared to Hybrid Co-Regulation

Generative AI (GenAI) is increasingly used in collaborative learning, yet its effects on how groups regulate collaboration remain unclear. Effective collaboration depends not only on what groups discuss, but on how they jointly manage goals, participation, strategy use, monitoring, and repair through co-regulation and socially shared regulation. We compared collaborative regulation between Human-AI and Human-Human groups in a parallel-group randomised experiment with 71 university students compl...

EducationLabor & Society
Siliconrepublic· 9 Apr 2026

How are software engineering graduates adjusting to AI?

BearingPoint’s Karl Byrne, Holly Daly and Fiona Eguare discuss the effects of AI on software engineering and how it has affected graduates in particular. Read more: How are software engineering graduates adjusting to AI?

PaywallEducation
NYT· 9 Apr 2026

Half of Gen Z Uses AI, but Their Feelings Are Souring, Study Shows

A new study from Gallup found that young adults have grown less hopeful and more angry about artificial intelligence.

Education
Arxiv· 9 Apr 2026

Knowledge Markers: An AI-Agnostic Concept for the Design of Programming Courses

arXiv:2604.06331v1 Announce Type: new Abstract: Generative AI enables students to produce plausible code quickly. Producing working code is therefore no longer a reliable indicator of understanding. This is particularly problematic in non-computer-science programmes, where time constraints make it hard to balance conceptual foundations with sufficient application practice. Empirical studies of AI tutors, educational chatbots, and code-assistance systems report useful but often case-specific findings, while learning theory remains too abstract to directly guide course design. As a result, instructors lack a simple, reusable way to make learning intent explicit and translate it into concrete teaching structures and student learning behaviour. This paper contributes knowledge markers as a lightweight, AI-agnostic, course-level operationalisation for course design. The markers label learning units by their primary emphasis: (A) Application knowledge (implementation), (S) Structure knowledge (concepts and mental models), or (P) Procedure knowledge (systematic methods, decision making, and verification). We show how the labels can be embedded at fine granularity in open teaching artifacts (interactive website, PDF script, and notebooks), paired with communication elements and optional AI-usage guidance. We demonstrate the approach by analysing, redesigning, and descriptively evaluating an introductory programming course using marker distributions derived from the table of contents. The paper is design- and artifact-oriented and does not claim measured learning gains; empirical evaluation is future work.

Education
Arxiv· 9 Apr 2026

ProofSketcher: Hybrid LLM + Lightweight Proof Checker for Reliable Math/Logic Reasoning

arXiv:2604.06401v1 Announce Type: new Abstract: Large language models (LLMs) can produce persuasive arguments in mathematical and logical domains, but those arguments often contain subtle missteps: omitted side conditions, invalid inference patterns, or appeals to lemmas that cannot be derived from the context under discussion. Such errors are notoriously hard to spot from the text alone, since even a flawed construction can look mostly correct. Interactive theorem provers such as Lean and Coq, by contrast, offer rigorous reliability: they accept only statements that pass every syntactic and semantic check performed by a small trusted kernel. That guarantee comes at a heavy price, however: the proof must be fully formalized, and the user or an auxiliary search procedure must supply an avalanche of low-level detail. This paper presents a hybrid pipeline in which an LLM generates a typed proof sketch in a compact DSL and a lightweight trusted kernel expands the sketch into explicit proof obligations.

Education
Daily Brew· 9 Apr 2026

A philosophy of work

A new perspective on the evolving nature of work and its philosophical implications in the modern era.

Education
Business A.M· 9 Apr 2026

Entrepreneurship as solution to Nigeria’s job market crisis

To bridge the skills gap effectively, we should look to something like the Industrial Training Fund (ITF) model, which provides work experience with ongoing education for students attached to industries and also encourages employers to regularly train their employees to acquire skills. This ITF system compensates industries that train their staff through refunds and tax benefits. The economy ...

Education
theslowai.substack.com· 9 Apr 2026

Slow AI Curriculum Q1 2026: What Scholars Discovered About AI Bias, Empathy, and Security

The Slow AI Curriculum · Quarterly Reflective Check-in: January to March 2026. Three months of the Slow AI Curriculum have passed. Here is what the cohort found, what surprised me, and what is coming next. Apr 09, 2026 (paid). The cohort taught me more than the curriculum did. Three sessions. Bias, empathy, security. Three prompts, three sets of results that none of us fully anticipated. What follows is not a summary. It is a reflection on what happened when a global cohort of scholars tested these ideas against their own tools, their own assumptions, and each other. In this post I will: Walk through the three sessions and what the cohort

EducationLabor & Society
Sify· 9 Apr 2026

With AI Set to Destroy Millions of Indian Jobs, Traditional Career Advice is a Death Trap - Sify

AI to Destroy Millions of Indian Jobs, Traditional Career Advice is a Death Trap

EducationLabor & Society
AI Insider· 9 Apr 2026

Tech Skills Platform Startup Flashpass Raises $4.1M in Seed Funding to Address 'AI Doomerism'

Flashpass, a workforce training platform focused on building "an antidote to AI doomerism and labor displacement," announced it has raised $4.1 million in an oversubscribed seed round as it looks to expand its offerings and scale adoption across government and education programs.

EducationLabor & Society
hrexecutive.com· 9 Apr 2026

Closing the growing skills gap caused by AI

When employees become tools of the tool: AI’s risk to employee development. Emerging HR Tech · Guest viewpoints · HR Technology · April 9, 2026. Chris Tatarka, Ph.D., is a leadership trainer, organizational development consultant and adjunct faculty member. With more than 35 years of military and federal service, he speciali

Education
Arxiv· 9 Apr 2026

Are LLMs Ready for Computer Science Education? A Cross-Domain, Cross-Lingual and Cognitive-Level Evaluation Using Professional Certification Exams

arXiv:2604.06898v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly applied in computer science education for tasks such as tutoring, content generation, and code assessment. However, systematic evaluations aligned with formal curricula and certification standards remain limited. This study benchmarked four recent models, including GPT-5, DeepSeek-R1, Qwen-Plus, and Llama-3.3-70B-Instruct, using a dataset of 1,068 questions derived from six certification exams covering networking, office applications, and Java programming. We evaluated performance across language (Chinese vs. English), cognitive levels based on Bloom's Taxonomy, domain knowledge, confidence-accuracy alignment, and robustness to input masking. Results showed that GPT-5 performed best on English-language certifications, while Qwen-Plus performed better in Chinese contexts. DeepSeek-R1 achieved the most balanced cross-lingual performance, whereas Llama-3.3 showed clear limitations in higher-order reasoning and robustness. All models performed worse on more complex tasks. These findings provide empirical support for the integration of LLMs into computer science education and offer practical implications for curriculum design and assessment.

Education
Arxiv· 9 Apr 2026

Better Balance in Informatics 2.0: The First-Year Students

arXiv:2604.06731v1 Announce Type: new Abstract: Diversity among computer scientists and technologists is necessary for the sustainable development of society through technological innovation. At UiT The Arctic University of Norway, only 13% of computer science students are women. Many find the learning curve in introductory computer science courses to be very steep, and thus, they drop out. Female students tend to be overrepresented in this group. The goal of this project was to improve the gender balance among computer science students at UiT by focusing on female first-year students and ensuring that they do not drop out of the study programs in the first year of study. The project established a seminar series for strengthening the basic programming-technical skills that many first-year students lack, and exposing them to different aspects and career paths within the computer science subject beyond the focus area of the study program. Results show positive developments, particularly related to the students' perceived introduction to basic technical topics. A comparison between 2024 and 2025 shows improvements in several of the areas addressed in the technical workshops, including use of file systems, terminals, debugging and the code development process. However, effects on dropout and study experience require more long-term measures.

Education
Arxiv· 9 Apr 2026

Thinking in Graphs with CoMAP: A Shared Visual Workspace for Designing Project-Based Learning

arXiv:2604.06200v1 Announce Type: new Abstract: Designing project-based learning (PBL) demands managing highly interdependent components, a task that both traditional linear tools and purely conversational AI struggle with. Traditional tools fail to capture the non-linear nature of creative design, while conversational systems lack the persistent, shared context necessary for reflective collaboration. Grounded in theories of distributed cognition, we introduce CoMAP, a system that embodies a graph-based collaboration paradigm. By providing a shared visual workspace with dual-modality AI support, CoMAP transforms the human-AI relationship from a prompt-and-response loop into a transparent and equitable partnership. Our study with 30 educators shows CoMAP significantly improves teachers' design expression, divergent thinking, and iterative practice compared to a dialogue-only baseline. These findings demonstrate how a nonlinear, artifact-centric approach can foster trust, reduce cognitive load, and support educators to take control of their creative process. Our contributions are available at: https://comap2025.github.io/.

Education
LinkedIn· 8 Apr 2026

Lilah Kole - The AI Collective | LinkedIn

We’re now in the 3D content creation phase of our spatial computing (AI + creativity) program with Boys & Girls Club Chicago (the course runs for 6 weeks). One of the things I love most about this work is introducing students to powerful new tools—like the one I discovered this morning.

Education
arXiv· 8 Apr 2026

Why teaching resists automation in an AI-inundated era: Human judgment, non-modular work, and the limits of delegation

Debates about artificial intelligence (AI) in education often portray teaching as a modular and procedural job that can increasingly be automated or delegated to technology. This brief communication paper argues that such claims depend on treating teaching as more separable than it is in practice. Drawing on recent literature and empirical studies of large language models and retrieval-augmented generation systems, I argue that although AI can support some bounded functions, instructional work r...

Education
Epicenter NYC· 8 Apr 2026

Parents say NYC schools’ AI policy leaves out what matters most: students - Epicenter NYC

Late last month, the city released its initial guidance on the use of artificial intelligence (AI) in schools and asked for public feedback by May 8. The response from students, […]

Education
arXiv· 8 Apr 2026

XR-CareerAssist: An Immersive Platform for Personalised Career Guidance Leveraging Extended Reality and Multimodal AI

Conventional career guidance platforms rely on static, text-driven interfaces that struggle to engage users or deliver personalised, evidence-based insights. Although Computer-Assisted Career Guidance Systems have evolved since the 1960s, they remain limited in interactivity and pay little attention to the narrative dimensions of career development. We introduce XR-CareerAssist, a platform that unifies Extended Reality (XR) with several Artificial Intelligence (AI) modules to deliver immersive, ...

Education
Daily Brew· 8 Apr 2026

Adobe Unveils AI Study Tool

Adobe has unveiled Acrobat Spaces, a free AI-powered tool within Acrobat, aimed at revolutionizing study methods by transforming PDFs, links, and notes into interactive learning materials.

Education
@erikbryn· 8 Apr 2026

What an awesome way to start my AI Awakening class at Stanford! The legendary @FutureJurvetson has a knack for seeing around corners thanks to his deep technical knowledge. I can't wait to see the projects the students deliver in 10 weeks.

Education
usaii.org· 8 Apr 2026

The 2026 AI Skills Gap: Turning Training into Workforce ...

Apr 07, 2026. The inclusion of artificial intelligence in the daily activities of businesses is becoming a necessity, but most companies are finding that adopting AI does not automatically translate into a workforce able to use it. Businesses are still putting funds into training programs and workshops, yet the anticipated business contribution is usually minimal. The EY Survey, 2026, states that 88% of organizations use AI in the workplace, but only 28% have managed to empower employees to utilize AI to revoluti

Education
MaharlikaNews· 8 Apr 2026

AI Research Is Getting Harder to Separate From Geopolitics - MaharlikaNews

A policy change announced by NeurIPS, the world’s leading AI research conference, drew widespread backlash from Chinese researchers this week and then was quickly reversed. ... The world’s top AI research conference, the Conference on Neural Information Processing Systems—better known as NeurIPS—became the latest organization this week to become embroiled in a growing clash between geopolitics ...

Education
IT Brief Asia· 8 Apr 2026

Forrester finds AI training gap stalls workplace gains

Forrester says workplace AI rollouts are outpacing staff training, with only 16% of employees reaching high readiness in 2025.

Education
St. Louis Post-Dispatch· 8 Apr 2026

AI reshapes which workers gain and lose in job market

WashU survey research finds workers who adopt AI as a learning and skill-building tool increase effort and may see long-term career and earnings advantages.

Education
mitsloan.mit.edu· 8 Apr 2026

How to accelerate AI transformation

How to accelerate AI transformation | MIT Sloan. As organizations deploy artificial intelligence more widely, routine tasks are being automated, and some work is shifting from people to AI systems. Yet, many firms are finding that the returns on their investments in such systems are elusive. The problem is not adoption of the technology alone but how organizations adapt the way they work to use it, according to MIT Sloan School of Management visiting senior lecturer Paul McDonagh-Smith. “There is an interdependency between three elements: the work itself, the workforce, and the workplace,” he said. “If organizations fail to align these three elements, it becomes difficult to generate measurab

Education
arXiv· 8 Apr 2026

A Multi-Agent Approach to Validate and Refine LLM-Generated Personalized Math Problems

arXiv:2604.05160v1 Announce Type: new Abstract: Students benefit from math problems contextualized to their interests. Large language models (LLMs) offer promise for efficient personalization at scale. However, LLM-generated personalized problems often exhibit issues such as unrealistic quantities and contexts, poor readability, limited authenticity with respect to students' experiences, and occasional mathematical inconsistencies. To alleviate these issues, we propose a multi-agent framework that formalizes personalization as an iterative generate-validate-revise process; we use four specialized validator agents targeting the criteria of solvability, realism, readability, and authenticity, respectively. We evaluate our framework on 600 problems drawn from a popular online mathematics homework platform, ASSISTments, personalizing each problem to a fixed set of 20 student interest topics. We compare three refinement strategies that differ in how validation feedback is coordinated into revisions. Results show that authenticity and realism are the most frequent failure modes in initial LLM-personalized problems, but that a single refinement iteration substantially reduces these failures. We further find that different refinement strategies have different strengths on different criteria. We also assess validator reliability via human evaluation. Results show that reliability is highest on realism and lowest on authenticity, highlighting the need for better evaluation protocols that consider teachers' and students' personal characteristics.
