Edufiction and the Quiet Drift of Values in the Age of AI
Artificial intelligence does not storm the gates of our values. It rearranges the furniture.
I have spent significant time reading and thinking about how AI shapes young minds, their writing habits, their study skills, their attention spans. It’s a huge issue. But recently, I considered something more subtle and perhaps more profound: how AI is shaping our values, our school cultures, and by extension, the very world we perceive each day.
Not through scandal.
Not through obvious ethical failure.
But through convenience.
In schools, generative AI is increasingly being used by students to draft emails, write essays and complete other homework tasks, and by teaching staff to create lesson plans, summarise meetings, write reports, justify policies, and respond to parents. All in language that sounds confident, thoughtful, reasonable and complete.
And that is precisely where the subtle shift begins.
When AI can instantly generate polished explanations, it doesn’t just save time. It gradually reshapes what counts as a “good reason.”
Smoothness starts to stand in for sincerity.
Efficiency begins to substitute for care.
Plausibility becomes confused with accountability.
This is value drift: not a scandal, not a dramatic ethical collapse, but a thousand tiny adjustments in how we justify, decide and explain. The most powerful way to make young learners aware of this phenomenon is not through lecturing, but through “experiential” story.
That is where edufiction becomes truly useful. After all, we are wired for stories, right?
Why Edufiction Works Here
Edufiction blends narrative storytelling with embedded knowledge. Instead of telling young readers/listeners, “AI may reshape ethical norms,” it shows them characters navigating that shift: feeling it, questioning it, noticing it. When readers emotionally experience drift through a character, they learn to recognise it in real life.
Consider this short illustrative story:
The Perfect Reply
At Maple Grove High, we all use ReplyRight now for just about anything. At first, I told myself it was just to smooth out my awkward sentences. The red underlines disappear. My paragraphs sound clearer, smarter. Teachers smile when I hand in my assignments. “Such improvement,” they say.
It feels great.
After a while, I stopped drafting altogether. Why would I wrestle with a blank page when ReplyRight can generate a thoughtful reflection in seconds? It always sounds balanced. Mature. Finished.
When I forgot my best friend’s birthday, panic twisted in my stomach. I quickly opened ReplyRight and typed a quick prompt: Write a heartfelt apology. The message it gave me was warm and tender, full of exactly the right phrases.
She will forgive me after reading this... Relief washed over me.
But something small and uncomfortable lingered.
Later that week, our teacher asked us to write about a serious bullying incident at school. I let ReplyRight handle most of it. When our teacher read out a few responses, I noticed how similar they sounded. All calm, measured, carefully balanced.
“These are very mature responses,” the teacher said. “I’m impressed with your thoroughness.”
My chest tightened. What I handed in wasn’t wrong. Every sentence made sense. It just didn’t feel like mine. It felt like I borrowed someone else’s voice and signed my name at the bottom.
That afternoon, I opened ReplyRight again to thank my grandmother for the sweater she knitted for me. The AI produced a flawless, affectionate message. Articulate and perfect. It even imitated a friendly voice and avoided the overly complex words I would never use.
I stared at the screen.
When did writing something myself become next to impossible? Well, maybe not impossible, but so slow it hardly seemed worth it, right?
My fingers hovered over the keyboard. Then I deleted what ReplyRight had written.
The blank page stared back at me. My heart beat faster. The silence felt heavier without the instant answer waiting for me. But slowly, words began forming. Clumsy, uneven, but real. I admit that what ReplyRight had written inspired me, but by doing it myself I could pick the words that actually matched what I felt.
Why This Story Reveals Value Drift
The shift in Maple Grove High is not dramatic or immoral. No one cheats. No one lies. The AI produces thoughtful language.
Yet several quiet transformations occur:
Effort becomes optional.
Authenticity becomes secondary to polish.
Emotional responsibility becomes outsourced.
“Good communication” begins to mean “AI-generated fluency.”
Maya and students the world over have not rejected honesty. But their relationship to expression, accountability and care has subtly shifted.
That is value drift.
The brilliance of narrative is that readers feel Maya’s discomfort before they can articulate the ethical issue. The blank page feels heavier, and the words that finally come are clumsy, uneven, but real. That feeling is a signal. Through edufiction, young readers learn to recognise moments when convenience begins to redefine meaning.
Making Drift Visible to Emerging Leaders
Past research on value drift emphasises that values are not static rules written once and enforced forever. They are lived through habits, how we justify decisions, how we speak, how we take responsibility.
Edufiction can dramatise those habits changing.
Imagine stories where:
A class adopts an AI-written “Respect Policy” everyone agrees to, but when a conflict erupts, they realise they never decided what respect actually looks like. They had just approved what sounded right.
A student uses AI to write increasingly thoughtful apologies, until saying sorry becomes about sending perfect words instead of taking real responsibility.
Two friends rely on AI to help with every homework reflection, and slowly begin believing that sounding insightful matters more than actually thinking deeply.
The debate club wins every round using AI-crafted rebuttals, and gradually shifts from caring about truth to caring about what feels most persuasive.
In each case, the drift is subtle, plausible and incremental, just as it is in real life.
The educational layer can be woven through dialogue, teacher reflections, or discussion prompts at the end of chapters:
Who owned the reasoning here?
What changed in how the characters defined fairness or care?
Did technology solve a problem or reshape the standard?
Teaching Vigilance, Not Fear
The goal is not to portray AI as villainous. Value drift happens precisely because the technology is helpful.
Edufiction should avoid dystopian extremes and instead cultivate ethical awareness.
Young readers should come away asking:
What does this tool help me do faster or better?
Could it be changing what I think is “good,” “fair,” or “kind” without me realising it?
Am I making my own choices or just going along with what’s easiest?
The Role of Story in an AI-Shaped Future
If generative AI changes the language we use to explain ourselves, then stories must become the training ground for ethical literacy.
Through edufiction, we can help young people recognise that:
Values are not static slogans.
Technology and ethics shape each other.
Ethical literacy is not about freezing principles in place, but about noticing when they begin to shift.
A thousand small decisions can change a culture. But a single well-told story can teach a reader to see the change happening.
And once they can see it, they can choose deliberately.
That may be the most important lesson edufiction can offer in the age of AI.