The mindset is the skill.
The tool a student uses today will be obsolete in 18 months. The way they think when they use it won't be. We train the mental habits — curiosity, skepticism, craft — that hold up no matter which model they open.
Untrained habits
What happens when AI shows up and no one teaches a posture toward it.
- Accept the first answer
Treat AI output as truth. Copy, paste, submit. No instinct that the model could be wrong.
- Outsource the thinking
Hand the prompt over and walk away. The student never wrestles with the problem.
- Skip the craft
Voice, structure, and revision atrophy because the model “already wrote it.”
- Confuse fluent with correct
AI sounds confident even when it's wrong. Untrained readers can't tell the difference.
Trained habits
The reps we run with students until these become automatic.
- Interrogate before you trust
Default reflex: “How would I know if this is wrong?” Source-check, cross-check, push back.
- Use AI to stretch, not skip
The model handles drudgery so the student can attempt harder problems, not easier ones.
- Iterate in public
Draft, critique, regenerate, refine. Treat first output as a starting line, never a finish line.
- Direct, don't dictate
Approach AI like a junior collaborator: give context, set constraints, judge the work.
- Ship something real
Apply the mindset by building. A working product is the proof that the thinking holds up.
- Stay curious, stay human
Ask the questions a model can't ask itself. Taste, ethics, and intent stay with the student.
“Stop teaching kids to fear AI or worship AI. Teach them how to think when a machine is in the room. That's the only literacy that survives the next model.”