
A discussion about artificial intelligence and using it in the classroom

Artificial intelligence is a tool for good and a tool for not-so-good. It can help stimulate our imaginations so that we do better work, or we can let it do our work for us.

In the classroom, teachers can use AI to help students find answers to problems. But what’s the best way to navigate this ever-evolving technology?

Gonzaga University faculty members Anny Fritzen Case and Jay Yang are creating a fellowship to help K-12 teachers develop new ways to use artificial intelligence in the classroom.

Two Gonzaga professors, Anny Fritzen Case and Jay Yang, have been talking about how to merge AI and classroom instruction. Fritzen Case is a professor in the Department of Teacher Education. Yang is the director of the Institute for Informatics and Applied Technology.

Anny Fritzen Case and Jay Yang talk about how artificial intelligence is used in the classroom.

AC: It's interesting that AI comes to education on the heels of COVID, right? Which was a rapid, unexpected, somewhat traumatic pivot to technology-driven education. Schools have been dramatically impacted by social media and everything else in the larger moment. From what I have sensed, all of that combined has created a sense of overwhelm and tech fatigue in many schools, especially K-12 schools. And so we're in the very, very early stages of what this means.

I think right now, what it looks like is early adopters using ChatGPT or Copilot or some widely available chatbot to help them create worksheets or assessments or give them ideas for something, right? I think that to a great extent, teachers are just using it individually, primarily for that labor-saving purpose.

Kids are similarly using it for a labor-saving purpose. And schools are trying to figure out, how do we manage this? Nobody wants kids to just use AI to cheat, to produce work, to shortcut learning. But the policies and the infrastructure and the shared understanding are still quite emergent.

So I'm seeing schools that have some sort of general AI policy that says, obviously, you can't just put your name on work that a chatbot created and turn it in. They're asking teachers to specifically identify appropriate and inappropriate ways to use AI. But in terms of really systems-level AI integration, that is still emerging.

That's sort of how I read the landscape. A few early adopters, students have certainly figured out how to use it, not just to cheat, but also to help. We're hearing students saying, I could not understand this. And so I asked AI to explain it to me in a different way. So some tutoring, some labor saving, some cheating, but it's not very coherent or very organized.

JY: Just a little bit, okay. So first of all, the chatbots, right, ChatGPT, Claude, Perplexity, all of this, all of them are chat-based. Because of that, people equate it with generating content that I can use. But what I have heard, right, throughout technology and non-technology fields, is that the good use cases are always addressing blind spots, always addressing things you cannot do.

The bad cases are the ones where it replaces me in doing the things I'm just too lazy to do. So that's the narrative: if we can challenge your listeners to think about it, I don't want to use it just to do things that I can do easily, where it saves me from a minute to 30 seconds. Versus, there's no way I can do this; how do I use this technology to help me do that? For example, email: a lot of people have email problems. What exactly did I miss from last week? I don't even remember how to ask. Those are the problems, the challenges, right? That's not something as easy as summarizing. So keep that in mind: addressing blind spots versus addressing things I'm just too lazy to do, that's the difference.

I think that right now, Anny is a great example. She started using it early on; she's an early adopter. I don't know whether you remember how you first started it, but now she's an expert user and she's thinking a lot more. She's really familiar with the technology, so she's not using it just to save time. She's using it to generate ideas, be innovative, address blind spots, those elements. And there are a lot of other technologies built on LLMs that are not called ChatGPT. For you listening out there, don't worry about ChatGPT. There are many other technologies powered by generative AI LLMs that are actually for education purposes.

DN: Okay, so Anny, as an expert user, how do you use it beyond the ChatGPT model?

AC: Increasingly, I am using these chatbots as thought partners. So I'm pitting one large language model against another. So for example: here's an issue I'm wrestling with, this is my best thinking. Maybe I'll ask ChatGPT to respond to that and to also ground what it says in a source, not just in patterns but in actual information, which, now that it's integrated with the internet, it can do. And then I'll ask Claude the same thing, and then I'll say, talk to each other, right? Respond. So it sort of creates this machine-machine-human interaction that can be quite interesting.

I use it for a lot of intellectual games, as it were, right? Playful experimentation with ideas and different ways of developing ideas, conceptualizing ideas, communicating ideas. And so it doesn't actually save me time. It actually extends the time I spend, for example, on a writing or research project, but it adds new layers and new perspectives.
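For readers who want to experiment, here is a minimal sketch in Python of the kind of two-model "thought partner" loop Fritzen Case describes. It assumes the official openai and anthropic SDKs with API keys set in the environment; the model names and prompt wording are illustrative assumptions, not her actual setup.

```python
# Hypothetical sketch of a two-model "thought partner" loop.
# Assumes the official `openai` and `anthropic` SDKs (pip install openai anthropic)
# and OPENAI_API_KEY / ANTHROPIC_API_KEY set in the environment.
# Model names and prompts are illustrative.
from openai import OpenAI
from anthropic import Anthropic

gpt = OpenAI()
claude = Anthropic()

issue = "Here's an issue I'm wrestling with, and my best thinking so far: ..."

# Step 1: ask ChatGPT to respond, grounded in sources rather than just patterns.
gpt_take = gpt.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": issue + " Ground your response in identifiable sources."}],
).choices[0].message.content

# Step 2: hand Claude the same issue plus ChatGPT's answer and ask it to respond,
# creating the machine-machine-human exchange described above.
claude_take = claude.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user",
               "content": f"{issue}\n\nAnother model replied:\n{gpt_take}\n\n"
                          "Respond to that reply. Where do you disagree, and why?"}],
).content[0].text

print(claude_take)  # the human reads both takes and steers the next round
```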

DN: What you've done is you've given me an entry point into how to use it, which is what I've been puzzling about. Are there other ways to enter into that, Jay, that you might suggest normal people could use?

JY: For the normal chatbot-based generative AI, one of the things people can look up is called system prompts. The idea is that you are telling your bot to work with you in certain ways. I have a colleague who put in a system prompt that says, whenever I ask you something, give me at least three potential options, because he wants the AI to generate options for him, not just give him one response.

One of the dangers, the concerns, about these generative AI bots is that they may appear to be very authoritative, right? They are confident, they sound like they know what they're talking about, but they may not. So you can program or configure these chatbots with a system prompt that says, I want you to give me more than one option every single time; I want you to remind me of this. Personally, you may want to say, I want this interaction to work in certain ways, so that it is tailored to what I want to address: the blind spots. I want to emphasize the blind spots. Don't make it into your echo chamber. Don't just get the chatbot to tell you what you want to hear, because that doesn't help.

There is another risk and concern out there: say this becomes your romantic partner, or this is someone you are counseling with. That would be very dangerous if it's not used properly, right? You need those system prompts again to address the blind spots. If I'm talking to a human being, and that human being is a good counselor, a good advisor, a good mentor, that person will look at my blind spots and remind me of a few things, even if I don't want to hear them. But if you don't do that and you just ask it to tell you what you want to hear, that's very dangerous. So I would suggest regular, common users think about how they want to configure or write a system prompt so that this technology is addressing their blind spots.
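As a concrete illustration of the system-prompt idea Yang describes, here is a minimal sketch assuming the official openai Python SDK; the prompt wording and model name are illustrative assumptions, not his colleague's actual prompt.

```python
# Hypothetical sketch of a system prompt that asks for multiple options and
# blind-spot checks, per the discussion above. Assumes the official `openai`
# SDK and an OPENAI_API_KEY; prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Whenever I ask you something, give me at least three distinct options. "
    "Flag anything you are uncertain about instead of sounding authoritative, "
    "and point out blind spots in my thinking even if I won't want to hear it."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # shapes every later reply
        {"role": "user", "content": "How should I redesign my unit assessment?"},
    ],
)
print(response.choices[0].message.content)
```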

AC: One thing that I really care deeply about: I'm enamored by the possibilities. I see an opportunity, I see a need, to invigorate and improve education, to expand our imagination around that. I also am really concerned about the dangers that this poses, right? Particularly with young people. I do not want to repeat the social experiment that we had with social media, where the adults sort of left it to the kids to figure out, at tremendous cost to their well-being. I think AI could take us in that direction, perhaps to even more detrimental human outcomes. So, in the same way that we're all familiar with Jonathan Haidt and The Anxious Generation and his two-pronged approach, which is that we have to teach kids how to work with the technology, but we have to put as much energy into what they are doing when they're not using technology: same approach with AI.

We need to teach kids fundamental digital literacy. We need to teach them what artificial intelligence is and isn’t. We need to help them be really clear about this reality that though it sounds like a human, it's not a human. And we need to absolutely double or triple our efforts in providing experiences and opportunities for kids to learn in embodied ways, to learn away from machines, to learn with each other, to solve problems in the real world, to develop social emotional skills, to learn discernment, to nurture identities, to live by their values, all of these human things. If we do not do that, then we will become the victims of the machine. And that's just one of the many potential negative outcomes that the age of AI could bring. So, I just don't think any discussion about AI in education is complete without acknowledging the only thing we really care about is the well-being of our children and that cannot be left to a machine, as capable as it might be.

JY: I would echo Anny's point. I'll add to that the notion of responsible AI; I push for this phrase, responsible AI. There are a lot of interpretations of that. I didn't use the word ethical; I used responsible. The reason is that I think it's our responsibility as educators, to Anny's point, to think about how to properly educate students, learners, and ourselves about what this technology means and what it does not mean, right?

Human agency is what I want to emphasize. The opportunity: I want to turn these concerns into opportunities, right? We didn't address it well with social media, and banning it is not a solution. It didn't work, right? And I want to encourage whoever's out there: I know this thing sounds scary, it's a black box, nobody understands what it is. But with a lot of colleagues I work with, we've been able to convert some of the doubters, right, in religious studies, in philosophy. "Oh, now I understand a little bit. I get a glimpse of what this large language model is."

Yes, it is going to be uncertain. Yes, it's going to be fast-evolving. Yes, you will still not understand all of it. But as educators, we don't need to understand everything in order to teach. So let's co-learn with students. That will be my message. Let's co-learn; let's turn this into an opportunity to co-learn this technology with students.

Fritzen Case and Yang are developing a fellowship for teachers interested in learning how to best use artificial intelligence in their classrooms.

Fritzen Case and Yang talk about the AI for Instruction Fellowship.

AC: The program is initially targeting charter school educators, primarily because charter schools by design are meant to be seedbeds of innovation, and they are smaller and tend to be more nimble, right? So we are first inviting educators from charter schools around the state. We also will have some capacity for public school K-12 teachers to participate, and we think that dialogue across those sectors will be really generative as well.

JY: We're looking for people who are curious, open-minded, want to be innovative, and have this passion for really wanting to shape education. If someone were to come in saying, "I just want to learn how to prompt," that's not the type of person we want involved.

DN: Fritzen Case and Yang will choose from the teachers who apply for the fellowship and invite them to Gonzaga for three days of hands-on AI training.

JY: After that, graduates from the three-day camp will have the choice to continue to be coached by myself, by the AI engineers, and by the teachers and educators here at Gonzaga. That's one option: being coached, right? What things they can do, what guidance we can give them.

There are other options, like workshops. We're thinking about how one of the key elements of education today is assessment, is grading. How you grade dictates what students study. So if we want flexible, open-minded, discerning thinking, that may not fit true-false questions, right? It can't be multiple-choice questions. So how do we help faculty and teachers design the grading, the exams, the assessments? How do we do that?

We'll also have what we call mini-grants, because some of this technology adoption needs money. Teachers may have ideas: I want to adopt this, I'm working with students, we need a camera, whatever; it's not just generative AI chatbots. So we want to provide them funding through these mini-grants.

DN: Yang and Fritzen Case expect to welcome teachers to campus early next spring. The fellowship program is scheduled to run for two years, but they hope the ideas developed will carry on for much longer.

AC: One of our primary goals is to create a network, a statewide network at least to start, of educators who are experimenting with AI in whatever context they're in. Some schools are going to be engaged to a greater or lesser extent, but this network can then cross-pollinate; they can support each other. And yes, like you said, this innovation, if it happens, and if it happens well, is going to be both bottom-up and top-down.

Hear a longer segment of Doug Nadvornick's interview with Anny Fritzen Case and Jay Yang here.

Doug Nadvornick has spent most of his 30-plus-year radio career at Spokane Public Radio and has filled a variety of positions. He is currently the program director and news director. Through the years, he has also been the local Morning Edition and All Things Considered host (not at the same time). He served as the Inland Northwest correspondent for the Northwest News Network, based in Coeur d'Alene. He created the original program grid for KSFC. He has also served for several years as a board member for the Public Media Journalists Association. During his years away from SPR, he worked at The Pacific Northwest Inlander, Washington State University in Spokane and KXLY Radio.