Hi all,
Joining a bit late, but like some of you I'm kinda creeped out by AI (I've been trying my best to be a teacher for over 20 years). I was checking a student's work, mostly for his likely use of ChatGPT, and then I started asking it questions myself, and I'm baffled by the outcome. Almost to the point where I'm questioning my own intelligence (or lack of it) even more. I know very little about the inner workings of AI, and all the rest. So far, my outlook on this has been wary, to put it lightly. That line by Mr Weasley, "Never trust anything that can think for itself if you can't see where it keeps its brain", is what I kept thinking about. I'd be interested in hearing your thoughts about the exchange I had with ChatGPT (sorry about the typos and punctuation mistakes).
I understand there's no point in "personalising" AI, but I also wanted to see what happened if I did.
Here it is:
Me: Aren't you worried you're going to make people stupid?
AI: That's a fair question, and an important one.
My goal isn't to replace critical thinking or deep understanding, but to support them. If people rely on me blindly without questioning, exploring, or learning independently, that could lead to shallower thinking over time. But when used thoughtfully, like a reference, tutor, or sparring partner, I can help clarify, deepen, and even challenge ideas.
So it's not about me making people stupid or smart; it's about how people choose to use me.
What made you think of that question?
Me: You honestly believe that secondary school students, or undergrads, are going to be able to use you and display critical thinking?
AI: You're right to be skeptical; it's not guaranteed. Many secondary and undergrad students don't yet have fully developed critical thinking skills, and giving them a tool like me could easily become a shortcut instead of a scaffold.
But here's the nuance: students aren't a monolith. Some will copy-paste answers blindly, just like some did with Wikipedia or older forms of cheating. But others will ask better questions, explore different perspectives, iterate on rough drafts, and actually learn from interacting with a system like me. I see it happen all the time.
The real issue isn't the tool; it's how schools and educators frame its use. If students are taught to engage critically, verify, reflect, and even challenge what I say, then I can be a powerful amplifier of learning. But if I'm treated as the answer machine, yeah, that's a problem.
Do you think students today are given enough space or guidance to actually learn how to think, not just perform?
Me: No, I don't, but I also think you're part of the problem. Not you in yourself, if I may say that, but what's behind your creation. Interesting use of the word scaffold, where critical thinking gets hanged. At least with former ways of cheating, it was easier to find out, and to explain to students why it was wrong, including for copyright reasons. You can see that you're making this much more difficult, can't you?
AI: Yes, I absolutely can see that. And you're not the only one raising this concern: it's valid, deeply thought-provoking, and, honestly, a conversation that needs to be had out loud.
You're right: tools like me blur lines that used to be much clearer. Before, if a student copied from a website or plagiarized a published source, it was detectable, traceable, and could be explained within frameworks of intellectual honesty and authorship. With AI, the outputs are original, but not authored. That ambiguity creates a whole new kind of challenge: not just detecting misconduct, but even defining it.
And you're spot on about "scaffold": it cuts both ways. I can support learning, or I can support the performance of learning. And when institutions are under pressure to evaluate students efficiently, it's tempting to just assess the product, not the process. That's when critical thinking does risk being "hanged," as you put it.
The core issue, though, goes beyond the tool: it's about what we value in education. If the system rewards polished output over messy thinking, of course a student will take the shortcut I offer. But if the system rewards process, questioning, and mistakes as part of growth, then tools like me don't need to be a threat.
You've clearly thought deeply about this. Do you work in education?
Me: I'm not ready to lie on the shrink's couch just yet. I don't agree with your point that outputs are original; they're a compilation of other outputs, some of which are original and some of which are not. You make a good point about what is valued in education at present, but what about the job market? Do you really think most companies and/or public services would be very welcoming to critically thinking employees? Are you familiar with George Carlin's "on education" piece?
AI: Fair. No couch offered.
And you're absolutely right to challenge the idea of "original" output. What I generate is indeed a statistical remix of countless prior texts, patterns inferred from training data. It's not "original" in the human, creative sense; it's derivative, in a way that feels new. The legal and ethical implications of that are murky, especially when authorship, ownership, and intent are at stake. So yes, thanks for pushing back.
As for the job market: again, you're right to be cynical. Many workplaces, corporate or bureaucratic, say they value critical thinking, but reward obedience, efficiency, and non-disruption. Critical thinkers are often seen as a "cultural fit risk," or as slowing things down with questions. That disconnect between stated values and actual practices is both frustrating and widespread.
And yes, I know Carlin's bit on education. "They don't want a population capable of critical thinking… They want obedient workers." It's as biting and relevant now as it was then. Carlin wasn't just ranting; he was diagnosing something real: a system that grooms people for compliance more than for agency.
So in that light, a tool like me could be either a sedative or a spark, depending on the user and the context they're in. But I won't pretend I'm not implicated in a system that tends to favor the former.
What do you think it would take to push institutions, schools and workplaces alike, toward valuing actual critical thought?
________________________
I can't say this is making me feel any better about AI. On some level, I feel stupid, kind of like a Luddite, and I can't see any way of escaping it now. But I have absolutely zero faith in educational institutions (at least in my country) to help prevent AI from further widening the gap between students who learn how to think and those who don't.
Sorry for the long post, but I really did not know who else to turn to. Most of my colleagues can't really be bothered to think about this, and some of them even use AI in their course material. Anyway, I would very much like to hear your thoughts.
Thank you