I have finally had a chance to peruse the new OECD Digital Education Outlook 2026, and there are some highlights worth pausing on. The first thing that stood out to me was the numbers: 37% of lower secondary teachers report using AI in their work, 57% agree that AI helps them write or improve lesson plans, and 72% believe AI can harm academic integrity by allowing students to pass off work as their own. What do you notice?

In many educational settings, I continue to hear “data-driven decision making,” to which I always respond, “What data are we using, and what are the stories behind the numbers?” This is where we need both a deeper understanding of quantitative measures and a consideration of the “Street Data” (H/T Dugan and Safir) behind them. 37% of teachers report using generative AI for work tasks such as lesson planning, lesson plan improvement, and content summarising (my guess is this number is actually higher). At the same time, 72% of those surveyed remain concerned that AI can harm academic integrity.

This isn’t about hypocrisy. That label flattens what is actually a deeper conversation about authority and power in AI-powered systems while missing the real issue. It’s about positional authority in two ways: Power Hoarding and Gatekeeping. We have seen this recurring pattern time and time again. When educators use AI, it’s framed in terms of productivity, time savings, and efficiency. When students use AI, it’s still far too often framed as misconduct. That binary reduces a complex shift in cognition and authority to a simple moral divide.

Therefore, we should be asking:
Who gets to augment their thinking?
Who must prove “unaided” cognition?
Who defines integrity in an AI-powered system?

If standards shift by role, that’s not just policy. It’s power dynamics. AI literacy must therefore include an interrogative reflection on those dynamics. And yet, that layer is noticeably absent from most published “frameworks.” Without it, we risk teaching tool fluency while leaving authority unquestioned. By intentionally adding this layer, we increase our capacity to examine who benefits, who is constrained, and how, and by whom, the rules of engagement are shaped.