April 2026, Leon Furze
In October 2023, the Victorian ICT Network for Education (VINE) published its first set of Generative AI (GenAI) guidelines for schools. ChatGPT was less than a year old. Schools were caught between panic and possibility, trying to decide whether to ban the technology or adopt it. The guidelines, written for a fictional “VINE School” and published under a Creative Commons license, gave member schools a shared starting point: a template they could adapt, a set of principles they could argue from, and a collection of practical strategies they could use on Monday morning.
Three years is a long time in AI. By early 2026, it was clear the 2023 document had done its job and that its job had changed.
This article reflects on the process of updating those guidelines: what we found when we went back to VINE member schools, what had shifted in both education and AI, and how we have tried to produce new guidelines that hold up in a world that moves much faster than any policy cycle.
Why update?
In 2023, GenAI was a standalone product. Students went to ChatGPT in a browser tab. Schools wondered whether to allow it, or how they might block it. The original guidelines were written for that moment: an ecosystem of text generators, unreliable detection tools, and a profession still working out what to feel about the whole thing.
By 2026, GenAI is no longer a text-based chatbot in a browser. The technology is now woven through the fabric of the systems schools use every day: Microsoft Copilot, Google Gemini, Adobe Firefly, Canva’s Magic suite. While ChatGPT remains the most widely used application by students, GenAI is now far more than that single application. In some ways, GenAI has already become part of the background of our digital technologies.
As one director of learning technologies put it:
“It seems comical now to think about a room full of people doing tasks alongside ChatGPT and the gasping and the wonder that existed at that point.”
The gasps have faded. What has replaced them is something more complicated: a growing gap between what schools said about AI and what was happening in classrooms, staffrooms, and boardrooms.
Updating the guidelines
The update was commissioned in January 2026, with a firm deadline: March 31, 2026, when VINE would host a face-to-face launch day in Melbourne for more than 50 school leaders from across the independent school sector.
The update unfolded in three overlapping phases:
Phase 1: Listen
We began by surveying VINE member schools in February, mainly receiving input from ICT managers. Leon also carried out interviews with school leaders focused on digital learning, ICT, and AI pedagogy, recording and analyzing them for common themes.
Phase 2: Draft and Iterate
The first draft, released in early February, replaced the previous document’s sections with three pillars: Teaching and Learning, Ethics and Wellbeing, and Privacy and Security. This draft was revised five times by March using feedback from interviews, surveys, and VINE’s subcommittee. A test website was also launched in parallel.
Phase 3: Soft Launch
On March 31, we held a participatory soft launch. Attendees gave feedback on printed guidelines through workshops, Padlet collaboration, and informal discussions.

Five themes from the consultation
1. The reality gap
The single most consistent finding across every interview and survey response was that formal policy and ground-level practice had diverged dramatically since 2023.
One ICT manager described the three layers plainly: the system-level position (what the sector says), the school leadership position (what the school wants), and the reality of what’s happening in teacher and student workflows. They illustrated this point with an example from the classroom:
“Every assignment we give students is now a group assignment. One of the group members is an AI.” — ICT manager
A digital learning leader at another school described the shift in starker terms:
“In 2023, everybody was talking about it, no one was really an expert. Now I’ve got people who are embedding it into their day-to-day workflows and other people have gone the other direction.” — Digital learning leader
The updated guidelines had to acknowledge this gap honestly: not pretend it away, and not treat it as a compliance failure.
2. Shadow AI is a symptom, not the disease
Every school we consulted reported shadow AI as a moderate-to-significant concern. But the interviews surfaced an important distinction: shadow AI (someone tries a new tool without checking) versus covert AI (someone uses a tool they know they shouldn’t be using and deliberately hides it).
The root cause was the same everywhere: when the system doesn’t respond fast enough, people go around it.
“If the system of the school doesn’t respond quickly enough, the teacher will just go somewhere else.” — Digital learning leader
One school discovered that a staff member had been using an AI transcription tool in meetings for five weeks without anyone’s knowledge. Another found that parents’ AI assistants were auto-joining parent-teacher interview calls. Neither school had anything in writing to respond to either situation.
“Is the productivity that this staff member gains worth the impact on the psychological safety in the room?” — Director of learning technologies
The 2026 guidelines responded to this by reframing the problem. The goal isn’t zero AI tool use outside the approved list. The goal is zero covert AI tool use. The guidelines introduced a risk-zoning model (low/medium/high based on data sensitivity), a “paved road” principle (make the official path easier than the shadow path), and a transition template for bringing unapproved tools into the light rather than simply banning them.
3. Cognitive offloading became the dominant concern
In 2023, academic integrity was the headline worry in education. By 2026, the conversation had matured. The word that kept coming up in every interview wasn’t “cheating.” It was “offloading.”
“Is it pampering learners by doing the work for them – the cognitive offloading – or is it actually personalising learning for them?” — Digital learning leader
“A new thing that emerged in 2025, as opposed to 2023, was there was much more of a dialogue around cognition, as well as students self-admitting a growing reliance on AI.” — AI learning specialist
Survey respondents identified cognitive offloading as a top ethical concern, on par with the risk of deepfakes. Schools are dealing not just with whether students are using AI, but with what happens to their thinking when they do. The question has shifted from “are they cheating?” to “are they learning?”
The updated guidelines make cognitive offloading a guiding principle and a dedicated policy section, with a practical toolkit titled “Good Use of Your Brain, or a Good Use of the Technology”: a discussion framework designed to help teachers and students navigate the distinction without moralising.
4. Staff polarisation
One of the most striking findings was the emergence of three distinct groups among teaching staff.
The groups, as described by multiple interviewees, are power users, embedding AI into daily workflows and building custom tools; blockers, reverting to examination-style assessments and AI detection software; and confident amateurs, using AI actively and enthusiastically but without understanding how it works. The “confident amateurs” were identified as the highest risk from an ICT governance perspective:
“They’re a dangerous group of people, because they can do a lot of damage without knowing they’ve done the damage.” — Digital learning leader
This polarisation had direct implications for professional learning. A one-size-fits-all PD session on “the basics of GenAI” was seen as a waste of time for the blockers and condescending for the power users. The updated guidelines acknowledge this explicitly and call for differentiated approaches: governance guardrails and the surfacing of shadow AI for the power users; evidence-based, pragmatic examples for the blockers; and foundational technical understanding for the confident amateurs.
5. From amusement to anxiety
Perhaps the most poignant shift since 2023 was in the nature of student questions.
“Now we’ve got Year 10 students saying: what’s the point of a career in graphic design? What’s the purpose of learning how to code in JavaScript?” — ICT manager
In 2022, a common student response to AI was curiosity and amusement: here was a technology they could “get away with” using in and out of class. By 2026, that wonder has been replaced, in many classrooms, by something closer to existential concern. Students aren’t just asking about AI capabilities. They are asking whether education still matters.
This shift is situated within a broader cultural moment: Jonathan Haidt’s The Anxious Generation, growing anti-edtech sentiment, new Australian social media laws, and a media landscape that oscillates between AI utopia and AI apocalypse.
“There very much feels like there’s a groundswell at the moment… AI has poured oil onto that fire.” — ICT manager
The updated guidelines don’t try to answer these questions, but they name the anxiety honestly and provide frameworks for schools to have age-appropriate conversations about what’s changing, what isn’t, and why learning is still important.
The 2026 Guidelines
While the core commitment – practical, principles-based guidance written for a fictional “VINE School” that real schools can adapt – remains the same, the 2026 guidelines are a substantially different document from their 2023 predecessor. They are also deliberately aspirational. We don’t expect every school to be able to adopt every suggestion or guiding statement, but we offer these statements as an indication of what member schools told us they would like to achieve with AI. This idea of aspirational guidelines resonated with many members:
“An incoming tide raises all the boats in the port. If a school does take it on, it is going to lift the general baseline.” — Digital learning leader
“Having [the guidelines] as a benchmark to always reach forwards to… even areas where we haven’t quite hit that target… having that as a benchmark I think is super useful.” — AI learning specialist
“It’s about levelling the field, and as a sector, as Victorian schools, to be growing together in this space.” — Director of learning technologies
We have updated these aspirational guidelines as follows:
From strategies to tools. The 2023 document included “practical strategies for schools”. The 2026 version replaced these with concrete, ready-to-deploy tools: an assessment design checklist, a cognitive offloading discussion toolkit, a deepfake response protocol, an AI tool vetting checklist, a shadow AI audit template, educational chatbot design principles, and an AI meeting recording policy template, among others.
From permissions to principles. The 2023 document was, unavoidably, a document about whether schools should allow GenAI. The 2026 version assumes that ship has sailed. The question is no longer whether to permit it but how to govern it across the whole school, not just the classroom.
A stakeholder map. The 2026 guidelines open with an acknowledgement that the document lands differently depending on who’s reading it. Ten distinct roles, from board directors to students, are mapped with their primary concerns and typical tensions.
Seven guiding principles. The principles anchor the entire document and give schools something to argue from when making specific decisions. They include “AI is a technology, not a teacher”, “Privacy is non-negotiable”, and “Critical thinking over compliance”.
A living document. At the explicit request of interviewees, the guidelines include a prominent call-out committing to regular review and suggesting a specific cadence: annual review, with a lighter-touch check each semester.
“We can’t just write it and forget it like we do with a lot of policies. It’s got to be realistic and realistically updated on a regular basis.” — ICT manager
The VINE School Guidelines for Generative AI (2026) are published under CC BY-NC-SA 4.0 and are available here.
Leon Furze is author of Practical AI Strategies, and author of the original 2023 VINE GenAI Guidelines. He holds a PhD in generative AI in education.