Afterwords

After the panel

The parts we didn't get to say.

Panels run sixty minutes. Thinking does not. Each panelist was asked to write the part they would have said if the clock had cooperated — unedited, in their voice.

Three afterwords. One published, two on the way. Reading time · 12–15 minutes for what's here.

Akesha Horton, PhD

Director of Academic Engagement and Learning, Indiana University Bloomington

What I'd add now, with the time I didn't have on stage.

Notes after the panel.

The vendor capture problem nobody is naming

Most institutional AI rollouts I've seen copy the model we use for everything else. An expert disseminates. Faculty sit. Certificates get issued. That model assumes there is a stable best practice somewhere upstream that we just need to push downstream. There isn't one. Not yet. Probably not for a while.

What that mismatch produces, in practice, is vendor capture. Institutions adopt tools because peer institutions did, with thin or nonexistent research on outcomes, equity, or labor implications. Procurement happens before pedagogy. Then the pedagogy bends to fit the tool. We owe our students more than that, and the cost of getting it wrong falls hardest on the people with the least leverage in the room (which is almost always the same set of students, for the same set of reasons).

The places I've watched real fluency build don't look like that. They look like communities of practice. Educators experimenting in groups, in public, with explicit permission to try things that don't work. The faculty I work with who've moved the furthest aren't the ones who attended the best webinar. They're the ones who found a peer group.

If I had a budget line item to write, it would say: unstructured peer learning. Time. Space. Permission to fail in your own classroom before we expect you to scale anything. That's not a workshop. That's not a vendor contract. That's the thing.

The friction question, sharper

We talked on the panel about friction. AI is designed to remove it. Learning depends on it. You've heard some version of this argument by now. Here's the part I didn't get to.

When I work with faculty, the question I'm modeling for them isn't "Where can I save time?" It's "Where am I designing out friction that students actually need?" That distinction matters because faculty are watching. What I do in my own practice ripples into theirs. If I optimize my own workflow without naming what I gave up to do it, I'm teaching them to do the same.

Every failure of AI shows where human expertise is still needed. Which means, uncomfortably, that you can't recognize what AI got wrong unless you've already done the foundational work. That's an argument for slowing down on automating the parts of learning that build the expertise in the first place. We risk optimizing away the conditions under which the next generation of educators learns to evaluate AI. That's a long-game problem, and we are mostly not making long-game decisions.

The "why" I should have led with

Asked on the panel why I'm doing this, I gave a Punya Mishra answer. There's no such thing as ed tech; the market doesn't pay to build for educators; AI puts us in the design seat. All true. But the cleaner version, the one I've been thinking about since, is one word.

Agency.

For the first time in my career I can originate a tool instead of adapting someone else's. Build something for a specific learning problem I see, on my own time, without a vendor or a procurement cycle in the way. The reason that matters isn't novelty. It's that the people closest to a learning problem finally have the means to design for it. If you've spent any time in instructional design, you know how rare that is.

What's actually working in my own practice

Three things I didn't get to say.

  1. I learn by writing about it. I publish a Substack called Spilled and Studied, and the writing is the forcing function. If I can't explain what I just learned, I haven't actually learned it. Putting words on the page is where the half-formed ideas either harden or fall apart, and either outcome is useful.
  2. I run a calibration test every couple of weeks. I take a real piece of work I'm doing and run it through a tool I haven't used yet. Not to find a winner. To stay honest about what's changed in the field. The tools that were best six months ago are not the tools that are best now, and the only way I know that is because I keep checking.
  3. The peer network I rely on is heterogeneous on purpose. The people who've taught me the most aren't the ones giving keynotes. They're using AI in contexts I don't sit in. A K-12 ELD coach. A community college instructional designer. A first-gen doctoral student translating academic norms for the first time in her family. I get more from a 20-minute conversation with someone whose problems aren't mine than from another tutorial on prompt engineering.

The thinking shift I didn't get to articulate

When the moderator asked how my thinking on opportunities and risks has evolved, I gave the Safiya Noble and Ruha Benjamin answer. I stand by it. What I didn't get to say is that my position hasn't actually reversed. It's gotten more disciplined.

Three years ago, what I had was an instinct: pay attention to costs alongside benefits. What I have now is a question I run on every adoption decision. Who is paying for this? Who is benefiting? Where is it being used as a social good versus where is it just extracting value from people who have no leverage in the design? That isn't a softer question than I was asking before. It's a more specific one.

The big debate in education isn't whether to use AI. That ship has sailed. The debate is what we're embracing and what we're giving up, and we have to be willing to answer the second half. Someone at the conference said efficiency erases authenticity, and I scribbled it down, because that's the trade I see being made constantly. Faster drafts. Less voice. A response that sounds professional and means nothing.

For the colleague who feels behind

A version of this came up in audience Q&A, and I want to redo it.

Stop trying to learn AI. Pick one thing you actually need to do this week and ask AI to help you do it. That's the whole curriculum.

You don't have to read the white paper. You don't have to know what a transformer is. You have to try it on something small, see what happens, and pay attention. The fluency builds from there.

And if you're feeling behind, you're not. There is no behind. The field is being built right now, by people who showed up and started practicing. You can be one of them.

Lines I keep coming back to, for the next room I'm in

On equity
The cost of getting AI integration wrong falls hardest on the students who already have the least leverage. That has to be the first question, not the last.
On faculty resistance
Resistance isn't always wrong. Sometimes it's the only signal we have that something we care about is at stake. The job isn't to overcome it. It's to listen to what it's telling us.
On where to start
Start with a problem you actually have. Not the problem AI is good at. The one keeping you up at night. Then ask whether AI can help.
On efficiency vs. authenticity
Efficiency is not the same as quality. A faster draft is not a better draft. We have to be honest about what we're trading.
On hype vs. reality
Most of what gets sold as AI in education is the same old PD model with a new logo. The real work is unstructured, peer-led, project-based, and slow.

Yousuf Marvi

Math Teacher and ELD Coach, Sierra Vista Middle School (Irvine USD)

Yousuf is finishing his afterword — the parts of the panel he didn't get to share. In his voice, unedited, with the framings on effort, equity, and what AI flattens that didn't make the sixty-minute cut. It'll appear here when he's ready.

Anne Fensie, PhD

Director, Center for Teaching and Learning, University of Maine at Presque Isle

Anne is finishing her afterword — the parts of the panel she didn't get to share. In her voice, unedited, with the framings on calibration, faculty learning, and what holds up under change that didn't make the sixty-minute cut. It'll appear here when she's ready.