AI in education, adaptive learning, digital literacy, and the evidence behind EdTech. Practical guidance for teachers navigating technology in the classroom. Updated for 2026.
Educational technology is one of the most heavily marketed domains in education and one of the most poorly evidenced. The arrival of generative AI in 2022 compressed a decade of EdTech adoption into two years, leaving many schools uncertain about what to embrace and what to resist. The honest answer from research is that technology is a delivery mechanism: it amplifies good pedagogy and it amplifies bad pedagogy. Hattie (2009) found that educational technology has a moderate average effect size, but the variance is enormous; the determining factor is almost always how the technology is used, not which technology is used.
Adaptive learning platforms adjust content difficulty in response to learner performance, promising personalised instruction at scale. The evidence for adaptive learning is promising but variable: Ma et al. (2014) found effect sizes ranging from 0.08 to 0.55 across 35 studies. The best adaptive systems are those built on sound cognitive science principles: spaced retrieval, immediate corrective feedback, and mastery gating before progression. Blended learning, which combines face-to-face and digital instruction, has stronger evidence still, particularly when the digital component handles retrieval practice and the teacher uses the resulting data to inform instruction. AI tutoring systems, the newest category, show early promise, but the evidence base remains thin at the time of writing. This hub covers the landscape honestly and gives you the evidence to make informed decisions.
Start with AI in Modern Education for the current landscape, then follow the pathway below.
| Category | What It Does | Evidence Strength | Best Use Case |
|---|---|---|---|
| Adaptive Learning | Adjusts content difficulty and sequence based on real-time learner performance data. | Moderate (Ma et al., 2014). Effect sizes 0.08-0.55 across 35 studies. | Maths and reading fluency practice; retrieval and spaced practice at home; differentiation at scale. |
| Classroom Response Systems | Allow all learners to respond simultaneously (polling, quizzes) so teachers can see the whole class's understanding at once. | Moderate-high. Works because it enables formative assessment and retrieval practice. | Hinge questions, exit tickets, retrieval starters, live misconception diagnosis. |
| AI Writing and Tutoring Tools | Generate content, provide feedback on writing, or answer questions in natural language conversation. | Emerging. Early studies are promising but the evidence base is thin and rapidly evolving. | Providing immediate feedback on drafts; generating differentiated question sets; supporting EAL learners. |
| Learning Management Systems | Platforms (Google Classroom, Teams) for distributing resources, collecting work, communicating, and tracking progress. | Weak as standalone. Value depends almost entirely on how teachers use them pedagogically. | Organising retrieval tasks, flipping instruction, building homework routines, parental communication. |
| Blended Learning | Combines face-to-face instruction with digital learning in a deliberate, integrated way. | Moderate-high. Consistently outperforms purely online or purely face-to-face when well designed. | Freeing teacher time for high-value interactions while technology handles practice and retrieval. |
A balanced overview of generative AI in schools: what it can do, what it cannot, and how to make evidence-informed decisions about adoption.
The two EdTech approaches with the strongest evidence. Both require deliberate design to work: understand the conditions that make them effective.
Build digital literacy across your curriculum. Use response systems to make formative assessment instant and whole-class.
Whether educational technology improves learning depends entirely on how it is used. Hattie (2009) found an average effect size of 0.33 for technology-enhanced instruction, which is below his 0.40 hinge point for meaningful impact. However, this average conceals enormous variation: the same type of technology used by different teachers in different ways can produce effects ranging from negative to strongly positive. The clearest finding from the research is that technology does not improve learning simply by being present; it improves learning when it supports retrieval practice, provides immediate feedback, enables adaptive difficulty, or frees teacher time for high-value interactions. Technology that replaces teachers with screens typically shows the weakest effects; technology that extends and amplifies what teachers do shows the strongest.
Adaptive learning systems adjust the difficulty, sequence, and type of content presented to a learner based on their real-time performance. At its best, an adaptive system functions like a tutor that knows exactly which concept a learner has mastered and which they need to revisit, presenting the right material at the right time. Ma et al. (2014) reviewed 35 studies and found effect sizes ranging from 0.08 to 0.55, with an average around 0.37. The systems that work best are those built on sound cognitive science: spaced retrieval, immediate corrective feedback, and mastery gating before progression. Systems that simply adjust difficulty without these underlying mechanisms show weaker effects. Teacher involvement matters too: adaptive platforms that feed data back to teachers, who then use it to inform instruction, consistently outperform those used as standalone homework tools.
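The mastery-gating idea can be made concrete with a short sketch. Everything in it is illustrative rather than drawn from any cited platform: the class names, the five-answer window, and the four-correct threshold are invented defaults. The structural point is that a topic counts as mastered only when recent performance clears a threshold, and the sequence never advances past an unmastered topic.

```python
class AdaptiveTopic:
    """Illustrative mastery gating: a learner must get, say, 4 of their
    last 5 answers correct before the next topic unlocks. The window and
    threshold are example values, not from any cited system."""

    def __init__(self, name, window=5, threshold=4):
        self.name = name
        self.window = window        # how many recent answers to consider
        self.threshold = threshold  # correct answers required in that window
        self.recent = []            # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.recent.append(1 if correct else 0)
        self.recent = self.recent[-self.window:]  # keep only the window

    def mastered(self):
        return (len(self.recent) == self.window
                and sum(self.recent) >= self.threshold)


def next_topic(sequence):
    """Return the first unmastered topic, so progression is gated
    on demonstrated mastery rather than on time spent."""
    for topic in sequence:
        if not topic.mastered():
            return topic
    return None  # all topics mastered
```

Real adaptive platforms layer spacing schedules and statistical models of item difficulty on top of this, but gating of this kind is the core mechanism the stronger systems share.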
Generative AI tools present a genuine pedagogical challenge: they can complete many tasks that schools use as proxies for learning (writing essays, answering questions, solving problems) without producing any learning in the learner who uses them. The evidence-based response is threefold. First, redesign assessment tasks to require evidence of the cognitive process, not just the product: oral questioning, in-class writing, iterative drafts with teacher feedback, and tasks that require local or personal knowledge. Second, teach AI literacy explicitly: learners who understand how large language models work, what they hallucinate, and how to evaluate their output critically are better placed to use them productively. Third, use AI as a feedback tool rather than a writing tool: asking an AI to critique a draft the learner has written builds skill; asking it to write the draft does not.
Blended learning combines face-to-face instruction with digital learning components in a deliberate, integrated design. It is not simply adding technology to existing lessons; it requires decisions about which activities are best done digitally, which require the teacher, and how both components inform each other. A US Department of Education meta-analysis (2010) found that blended learning produced better outcomes than either purely online or purely face-to-face instruction in the studies reviewed. The most effective blended designs use the digital component for retrieval practice and spaced repetition (things technology does efficiently) and reserve face-to-face time for explanation, discussion, feedback, and the kinds of social learning that technology cannot replicate. The rotation model, where learners rotate between digital practice and teacher-led work, has the strongest implementation evidence.
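Since the digital half of a blended design often amounts to a spaced-repetition scheduler, a minimal Leitner-style sketch shows what that component is doing. The specifics here are illustrative only: the five boxes and the 1/3/7/14/30-day intervals are conventional Leitner defaults, not values from the cited meta-analysis. Correct answers promote an item to a longer review interval; errors send it back to daily practice.

```python
from datetime import date, timedelta

# Review interval (days) for each Leitner box; illustrative values only.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}


class Card:
    """One practice item: a prompt, its current box, and its next due date."""

    def __init__(self, prompt, today=None):
        self.prompt = prompt
        self.box = 1
        self.due = today or date.today()

    def review(self, correct, today=None):
        """Promote the card on a correct answer (capped at box 5);
        demote it to box 1 on an error so it returns to daily practice."""
        today = today or date.today()
        self.box = min(self.box + 1, 5) if correct else 1
        self.due = today + timedelta(days=INTERVALS[self.box])


def due_cards(cards, today=None):
    """The day's practice queue: every card whose review date has arrived."""
    today = today or date.today()
    return [c for c in cards if c.due <= today]
```

In a blended rotation, a queue like `due_cards` drives the digital station while the teacher reads the resulting right/wrong data to plan the face-to-face component.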
Digital literacy is the ability to find, evaluate, create, and communicate information using digital technologies. It encompasses online safety, source evaluation, understanding algorithms, data privacy, and increasingly AI literacy. The research on how to teach it effectively points to two key principles. First, it must be embedded in subject teaching, not treated as a standalone computing lesson: learners who evaluate the reliability of a historical source in history lessons are practising digital literacy in context, which produces more transfer than decontextualised skills. Second, it must be explicitly taught rather than assumed: research consistently shows that learners who are fluent technology users are not necessarily able to evaluate online information critically. The SIFT method (Stop, Investigate the source, Find better coverage, Trace claims) is one of the best-evidenced frameworks for source evaluation and can be embedded across subjects.
The evidence on 1:1 device programmes is mixed, and consistently disappointing when devices are deployed without accompanying teacher training, curriculum redesign, or a clear pedagogical purpose. The OECD (2015) found that countries which had invested most heavily in school technology showed no improvement in PISA reading, maths, or science scores. The reason is that providing devices does not change pedagogy: teachers handed tablets without retraining typically use them for activities that could be done equally well on paper. Where 1:1 programmes show positive effects, they are accompanied by sustained professional development focused on specific pedagogical uses, integration with formative assessment systems, and deliberate design of activities that are genuinely better on a device than on paper. The device is never the intervention; the instructional design is the intervention.
The evidence on mobile phone bans is one of the clearer findings in recent EdTech research. Beland and Murphy (2016) studied the introduction of mobile phone bans in English secondary schools and found that banning phones improved test scores by 6.4% of a standard deviation overall, with the gains concentrated among disadvantaged and lower-achieving learners. The mechanism is attention: mobile phones are a source of distraction that disproportionately affects learners with weaker self-regulation, meaning the learners who already face the most educational barriers suffer most from unrestricted phone access. In line with this evidence, the UK government issued non-statutory guidance in 2024 recommending phone-free school environments. The strongest argument for phone bans is not anti-technology; it is about cognitive load, attention, and equity.
The Structural Learning platform has CPD courses, interactive lesson planning tools, and a growing library of resources built on the research above. Open a free account to browse.
No credit card required.
About this hub. Articles are written by practising educators and reviewed against peer-reviewed research. Citations follow author-date format. New content added regularly. Get in touch if you cannot find what you need.