The need to develop an AI-ready workforce is creating a new challenge for secondary and higher education leaders: how to equip students with essential AI skills while ensuring fair and authentic learning.
Striking the right balance between AI literacy and academic integrity demands a shift from reactive policies to proactive, evidence-based strategies.
To help inform institutions making this transition, we’ve drawn on (often surprising) data insights from:
- Crossroads: Navigating the intersection of AI in education (Turnitin and Vanson Bourne, 2025), global research with over 2,500 university students, academic administrators, and educators.
- The UK's Higher Education Policy Institute's Student Generative AI Survey 2025.
In this article, you’ll find evidence and practical steps to confidently reshape institutional policy, support learning, and accelerate progress toward an AI-ready workforce.
Why is building an AI-ready workforce urgent for higher education?
The AI-ready workforce is needed here and now. AI literacy features on LinkedIn's top employability skills list this year, based on their analysis of thousands of job postings on the site.
The responsibility for building an AI-ready workforce rests with education and industry alike: institutions must build AI proficiency in future employees, while employers must develop it in their existing workforce.
However, the fast rate of AI innovation has left many educational institutions struggling to adapt pedagogy, curricula, and tools at an equivalent pace.
This, combined with a reactive and often punitive approach to student use of AI, is putting graduate AI literacy, student outcomes, and graduate employability at risk.
In this article, we’ll make the case for more proactive, AI-positive institutional strategies to ensure universities meet their duty, to both learners and the economy, of creating an AI-ready workforce. We’ll also share practical strategies for doing so.
Key findings
Students
- Learners expect their institutions to provide opportunities to develop AI literacy.
- Despite fears, most aim to use AI to support, not bypass, learning.
- But many lack the knowledge or confidence to use it effectively.
Educators
- The majority of educators know students need AI proficiency for employability.
- They feel a duty to develop student AI skills but aren’t confident in their own.
- Nearly half want to use AI in their work, but don’t know how.
Institutions
- Must adapt AI policies to create an AI-ready workforce.
- Need to boost AI literacy in both staff and students.
- Should provide AI tools for equality and fair access.
How can institutions align AI strategies with workforce needs?
Institutional AI policies have emerged largely in response to the perceived threat posed by AI to academic integrity. Rapid, widespread access to generative AI tools and rising misconduct cases have understandably led many institutions into a reactive—often punitive—approach to student AI use.
However, experts now recommend a course correction, because the current approach does not empower students to harness AI confidently and responsibly.
- Most institutions’ approach to student AI use is reactive and doesn’t represent the emerging reality of student attitudes to AI.
- Blanket bans on AI use deny students the opportunity to develop essential AI skills, risking both graduate employability and institutional reputation.
- Punitive policies make students overly cautious about using AI, while a lack of support risks reinforcing socio-economic divides in AI competency.
How are students using AI to support their learning?
The number of students using generative AI to write assignments has risen from 53% last year to 88% in 2025, leading to fears of a Wild West of rampant plagiarism. While it is true that AI cheating investigations are up, the overall picture is more nuanced.
Our 2025 survey of university students found a mature and measured approach to AI use.
- Students have a more cautious approach to AI than faculty: 64% of students are worried about the use of AI in education, compared to 50% of educators and 41% of administrators.
- They’re more likely than faculty to view AI as cheating: 63% of students consider submitting a full AI-generated assignment as cheating, compared to 55% of educators and 45% of administrators.
The Higher Education Policy Institute (HEPI) research reinforces our findings.
- Students need more institutional support: While students ‘overwhelmingly believe’ that AI skills are essential, only 36% have received support from their institution to develop them.
- AI policies are deterring appropriate AI use: Of students who don’t use AI, 31% say it is because their ‘institution discourages or bans the use of AI’.
HEPI also finds that students are most likely to use AI to develop and improve their work, rather than outsource it. When asked how they use AI, students responded:
- To save time – 51%
- To improve the quality of their work – 50%
- To get instant support – 40%
- To receive personalized support – 32%
In terms of what students deem acceptable use of AI, ‘Explaining concepts’ came out on top with 63%, while ‘Use in an assessment without editing’ came last at 6%.
This shows students lean towards ethical, self-service use of AI tools to develop their work outside the classroom, dispelling the popular narrative that AI access automatically leads to AI misconduct.
Clear institutional policies on AI use, reported by 80% of students in HEPI’s survey, likely shape these norms. However, some restraint may also stem from fear of detection.
- 76% of students believe their institution can detect AI use.
- 53% of students who avoid AI cite ‘being accused of cheating’ as the reason (though this does not imply the intention to cheat, simply the desire to be above suspicion).
These findings show the complex interplay between institutional policy and student behavior—and could explain why our research finds 50% of students say they don’t know how to get the most out of AI.
What risks do restrictive AI policies pose for institutions?
It’s clear that institutional AI strategies need to evolve to reflect the changing dynamic between students and artificial intelligence. Failure to do so risks:
- Student outcomes – Decline due to lost AI learning opportunities.
- Graduate employability – Impacted by low AI literacy and readiness.
- Institutional reputation – Damaged by unprepared graduates.
- Rising costs of investigating misconduct – Estimated at £95,000 per year in the UK.
However, HEPI finds that institutions still lean towards risk-averse strategies around AI.
- "Institutions appear to find concerns about cheating more pressing than the need to support students to develop AI skills. As a result, efforts to safeguard assessments are more advanced than efforts to boost students’ AI literacy."
While sympathetic to these concerns, they recommend that institutions regularly review their student AI use policy to reflect the growing nuance in the debate over AI’s place in learning.
- "We recommend that institutions keep their assessment practices under constant review, particularly as AI becomes more powerful and students become more proficient with AI tools [...] However, institutions should not adopt a mainly punitive approach; instead, their AI policies should reflect that AI use by students is inevitable and often beneficial. Institutions should share best practices and work together to design effective teaching and learning strategies."
The HEPI guidance is clear: reactive restrictions are ineffective and undermine broader goals to create an AI-ready workforce. To equip students for the future of work, institutional AI policies must move from prohibitive and punitive to proactive and pro-AI proficiency.
How can education leaders lead AI adoption effectively?
In light of this, the opportunity emerging for educational leaders is to strategically pivot pedagogy, policy, and tools to:
- Better equip educators to support students navigating new AI realities.
- Cultivate student AI literacy and create AI-ready graduates.
- Boost authentic student learning and graduate quality.
However, to achieve this, institutions don’t just need to address the learner AI literacy gap. Our research shows that academic administrators and educators are just as keen as students to integrate AI into their workflows—but they’re equally unsure how.
Building an AI-ready workforce is therefore a two-fold imperative for education leaders.
- Institutions need to equip students and graduates with the AI proficiency that employers are coming to demand.
- They also need to empower their academic staff to deliver AI literacy in a way that enhances learning, while safeguarding academic integrity.
As AI proficiency becomes a key differentiator for graduates entering an increasingly competitive workforce, institutions have an opportunity to stand out by embedding ethical AI use and literacy throughout the curriculum and student experience.
Equipping students with practical, responsible AI skills not only prepares them for future careers but also builds trust with prospective students and employers. And by demonstrating they’re ready to meet the evolving demands of learners and the workplace, this commitment can strengthen an institution’s reputation, boost retention, and even drive new enrollments.
Why are faculty struggling to teach AI skills effectively?
Our research finds 89% of academic administrators and 80% of educators believe AI-readiness is essential for graduates—and 86% of our respondents believe institutions need to teach it.
As demand for an AI-ready workforce grows, teaching, learning, and assessment must evolve to support this priority. But—while educators are charged with helping students navigate AI use—many report they’re struggling to use it optimally themselves.
- 80% of educators say they feel overwhelmed by the plethora of AI tools available.
- 47% of educators and administrators would like to use AI for better decision-making but don’t know how.
- 39% of educators don’t know how to get the best out of AI in their role.
Meanwhile, HEPI finds that students’ perception of tutor AI competency is rising (from 18% in 2024 to 42% in 2025). But this still leaves more than half of students feeling under-supported, prompting HEPI to recommend that institutions address the following three issues.
- How AI tools are used effectively to support instruction, learning, and engagement.
- How staff are trained to have a deeper working knowledge of AI tools.
- How to prevent divides in use of AI by different socioeconomic groups.
Why should education leaders update AI policies now?
The message is clear: the future of work demands AI fluency, and higher education must lead the way. For education leaders, the mandate is not simply to respond to AI, but to shape its role in learning.
This means revising institutional AI policies to move from restriction to readiness, empowering educators with the training and tools they need to confidently guide students, and embedding AI literacy across disciplines.
By taking a proactive stance, institutions can achieve broader strategic goals: improving student engagement and outcomes, enhancing graduate employability, and safeguarding academic integrity through robust, forward-looking assessment.
The goal is not to eliminate risk, but to manage it—supporting the many students who use AI responsibly, while deterring misuse by the minority through transparent policy.
There’s an important order to achieving a truly AI-ready workforce: building the foundational skills that have always mattered, long before AI entered the scene. Using AI effectively requires individuals to:
- Think critically
- Apply knowledge to real-world contexts
- Continue learning with curiosity and independence
In this respect, an AI-ready workforce is not about AI alone, but about ensuring learners have these enduring skills first. AI then becomes a tool that amplifies these abilities, rather than a shortcut that replaces them.
Central to building those skills is the act of writing.
Writing deepens comprehension, reinforces reasoning, and cultivates critical thinking.
While these are essential skills for effective AI use, they’re also the very skills that can be undermined if students rely on AI to bypass the writing process. This is why it is essential to support students to not just develop AI literacy, but to combine authentic composition practices with supportive AI tools to hone their writing skills.
Actions institutions can take to build an AI-ready workforce
Support for student AI literacy
Involve students in developing and defining AI policies—building on their collective experience in navigating the use of AI in their own education. Communicate openly and consistently across the academic community, including students, educators, and academic administrators.
Define acceptable use of AI to support learning in specific areas. For example, coursework vs. exams vs. revision. Leverage AI policies to set expectations and reorient student mindsets. AI holds many opportunities students may not be aware of yet.
Teach students how AI can be used to support and develop their critical thinking skills, rather than replace them. Show that AI is most effective as a learning tool when paired with human judgment and curiosity, allowing students to explore topics in depth rather than at speed.
Provide appropriate AI tools for students. Unequal access to AI technology across different socioeconomic groups, as well as differences in attitude to AI use between genders, is creating an AI literacy gap. This threatens to reinforce existing inequalities in the AI-ready workforce.
Inform students about the importance of AI after their education—including where AI can be an extension of their knowledge and how opportunities for efficiency through AI use may provide them with more time to focus on creativity and critical thinking.
Support for educator AI literacy
Provide relevant AI tools and training to academic and administrative staff. Staff need to understand the AI tools students may use, to be alert to potential overreliance or misuse. They also need AI-powered tools to boost their teaching, assessment, and grading practices.
Encourage collaboration between educators and students on AI policy, with a focus on shared understanding of AI guidelines, as well as transparency and open conversations around appropriate use.
Make student writing and AI use more visible, not to police students, but to bring awareness to the dynamics at play in the writing (and learning) process. This invaluable student data helps educators provide timely, personalized guidance and build trust in how AI is being used. See below to learn how Turnitin Clarity makes this possible.
Further guidance
Develop AI policy that reflects that student use of AI ‘is inevitable and often beneficial’ (HEPI). If assessments are robust, institutions can allow use of AI to boost learning without fear of it undermining academic integrity.
Stress-test assessments using the most powerful publicly available AI tools. Assignments on which a student with good AI skills can score highly without engaging deeply with the course should be rewritten immediately.
How Turnitin Clarity supports development of an AI-ready workforce
Transparency is the common thread to success with AI in education and beyond—and it’s at the heart of why Turnitin Clarity exists.
Bringing greater transparency to the student writing process, Turnitin Feedback Studio now offers the Turnitin Clarity add-on, forming a single, comprehensive platform to evaluate student work and nurture skills more effectively and efficiently in the AI-dominated landscape.
Students benefit from a secure composition space with optional built-in AI tools designed to provide instant, personalized support as they draft and refine their work. This gives students the opportunity to use AI the way they say they want to: for tailored, self-service support outside the classroom.
With behind-the-scenes visibility into students’ writing process and how AI has been leveraged, educators are empowered to better support students with targeted interventions for optimal, responsible AI use, and to inform their own teaching practice.
At the same time, it equips institutions to meet growing regulatory and accreditation demands, providing assurance that quality standards and academic integrity can be maintained as AI adoption expands.