Demystifying Instructional Design

S3E4: ChatGPT - the potential is as vast as the challenges and concerns - a conversation with Autumm Caines and Lance Eaton

February 05, 2023 Rebecca J. Hogue Season 3 Episode 4

In this episode I interview Autumm Caines and Lance Eaton about ChatGPT and how they see it affecting higher ed and instructional design from a variety of perspectives. This episode is insightful and also humorous at times.

Autumm Caines:
Autumm Caines is a liminal space. Part technologist, part artist, part manager, part synthesizer, she aspires to be mostly educator. You will find Autumm at the place where different disciplines and fields intersect, always on the threshold, and trying to learn something new. Autumm currently works full-time as an Instructional Designer at the University of Michigan – Dearborn and part-time as Instructional Faculty at College Unbound, where she teaches courses in Digital Citizenship as well as Web and Digital Portfolio.

Lance Eaton:
Lance Eaton is the Director of Digital Pedagogy at College Unbound, a part-time instructor at North Shore Community College and Southern New Hampshire University, and a PhD student at the University of Massachusetts, Boston with a dissertation that focuses on how scholars engage in academic piracy. He has given talks, written about, and presented at conferences on artificial intelligence generative tools in education, academic piracy, open access, OER, open pedagogy, hybrid flexible learning, and digital service-learning. His musings, reflections, and ramblings can be found on his blog: http://www.ByAnyOtherNerd.com as well as on Twitter: @leaton01

Support the show

Please consider making a donation to my Patreon account to help support this podcast financially: patreon.com/rjhogue

Rebecca Hogue:

Welcome to Demystifying Instructional Design, a podcast where I interview various instructional designers to figure out what instructional designers do. I'm Rebecca Hogue, your podcast host. If you enjoy this podcast, please consider subscribing or leaving a comment on the Show Notes blog post, and consider helping to support the podcast with a donation to my Patreon account. Welcome, Autumm and Lance, to Demystifying Instructional Design. This is a bit of a different episode of the podcast. We're going to talk today a little bit about ChatGPT, the AI phenomenon that is lighting up higher ed, if not other spaces as well. And so the first thing I'm going to ask is if you could do a quick introduction and give us a little bit of context. I'll start with my context: I teach instructional designers. I'm teaching at the master's level, and my students are instructional designers. So for me, I'm looking at it largely as: is this tool useful for my students? That is the context that I have for it. And I'll pass it over to Autumm and then Lance.

Autumm Caines:

Hi. I'm Autumm Caines. I am an instructional designer in a faculty development office at the University of Michigan, Dearborn. I always think it's important to add what office you're in when you're talking about instructional design, because as Rebecca's audience will know, the context in which you work as an instructional designer can have a huge impact on the type of work that you do. So coming from the perspective of a faculty development office, it's more than just instructional technology and it's more than just instructional design. It's also faculty development. We can get into that a little bit more as we go forward. But just to set up the context, I'm also instructional faculty at College Unbound, where I know Lance from, and I actually know Lance from this podcast as well. That's one of the reasons that I reached out to Lance: I heard him on this podcast, and then we ended up becoming colleagues at College Unbound. I teach two classes at College Unbound. One is Digital Citizenship and the other one is Web and Digital Portfolio. So that's just a little bit about my context in terms of where I'm coming at things from. In terms of ChatGPT, I have been looking into large language models since probably 2021, with the upset around the firing of Timnit Gebru at Google. She was the head of Ethics and was working on the LaMDA large language model, and I started paying attention to some of the advances that were happening in that technology around that time. So I do have a tendency to come at it from a little bit more of a critical and ethical perspective. But I don't want to go for too long. I want to turn things over to Lance and let him introduce himself.

Lance Eaton:

Sure. Hi, I'm Lance Eaton. I am Director of Digital Pedagogy at College Unbound. I would say what department I'm from, but we don't really have departments, because we're still a new enough college, with that new college smell, where there are lots of different hats and we try to work through action teams as opposed to traditional departments. But if I did, it would probably be Academic Affairs, which is I think where I mostly sit. And my role is that mixture of working with faculty, doing faculty development, and kind of helping to support them in the development of teaching and learning courses and using different tools. And sometimes that tool is a pen and pad, and sometimes it's an LMS, and sometimes it's artificial intelligence. For me, I've had a lot of interesting thinking around, or just looking at, AI for a couple of years now. I started to develop a more critical view of technology over the 2010s, and it started to pop up on my radar. And then really, from the start of the pandemic until about a year and a half ago, or I guess halfway through the pandemic or whatever phase we're in now, I was working at the Berkman Klein Center for Internet and Society and was helping to run programs and programming around Internet and society, a lot of which focused on AI. And so I got to see a lot of different people in the industry coming with those critical lenses. And so that stuck in my head a lot, especially as ChatGPT became like the pet rock of 2022-

Speaker 2:

2023.

Lance Eaton:

And people really started to pay attention to AI generative tools in a way that they certainly hadn't previously.

Rebecca Hogue:

Thank you both very much for your introductions and a little bit of context there. I'd love to ask you a little bit about what guidance you're giving students regarding the use of ChatGPT. And if you could tell me what that stands for, that would be really helpful, because I think the audience would find that useful.

Autumm Caines:

It's a generative pre-trained transformer, that's the GPT in ChatGPT. And I do think that Lance is the perfect person to answer this. I'm going to answer really briefly and just say that I was a little taken aback when this was first opened up, and I really wasn't sure what to do as a teacher. As an instructional designer, I had some ideas, but as a teacher I felt a little lost. And working at College Unbound, Lance is the person that I go to when I have questions, right? And I knew people were going to be coming to me. So I went to him, and the school just responded in an amazing way that Lance is going to tell you more about now.

Lance Eaton:

Thank you. Yeah. So again, being a fairly new school, there are things we don't always have to fall back on. And also a lot of our practice and thinking is student centered. And so I was playing with it, thinking about it as well in terms of instructional design and students and whatnot. And I get an email from Autumm saying, I think I have a student that has used it. And so that generated a discussion between me and her. It was at the end of the semester, and with so much else going on, it gave us an opportunity to really think about, in this moment, what is most useful to do. And this is where I think both our collaborative nature and the way that CU structures itself meant that going after the student wasn't the right approach. There was a part of our reaction that was just like, hot dang! Like, go student, for being that quick and figuring it out. We had that moment and we celebrated it for a moment. I want to give this person some kudos for creativity. We had the moment of just having that frustration, and a mixture of just not... sometimes it's a little ego driven, but like not being happy that the student did it. And then we just also were like, what's behind this? And I think that's where we really got our momentum, and what has gotten our momentum for our school as a whole, is really understanding the students and their uses of this. I go back to, I want to say it's folks like James Lang: when people are doing this type of thing, which is framed in all sorts of deficit language of they're cheating, they're stealing, they're what have you, they're often indicating things aren't working for them. They're often indicating this is more of a sign for help, a sign for a lack of trust, lots of different things. And so very quickly, me and Autumm realized, like, why don't we try to find out? And so our first goal was to craft an email. And I sent it out to students saying, Hey, this tool exists.
And I think some students may have used it, and we are interested in learning more about it in this non-punitive way. We just want to understand what led you to it. What might we be able to learn about why you found your way to this tool? Nobody responded to that. We hoped somebody would; nobody did. That was okay. My partner recommended, well, what if you did an anonymous survey? And so we put that out to our students at the very end of the semester. We probably got, I think, four or five responses. This was at the end of the semester and during break, before everything really exploded, before you were seeing references to it in podcasts and in mainstream stuff, and we had three or four students who were saying, like, yeah, we started to use this and we're using it in these ways. And so we thought that was interesting. And that kind of jolted me to think about: the semester for us started January 9th. This is before most schools start their spring semester. And so we needed something in place. And so we developed a policy, recognizing that we don't really understand the fullest implications of these tools. We didn't want to just do a blanket ban and be like, nope, you can't use it under any condition. And we wanted to create safe conditions so that if students use it, they can identify that they used it. That can also invite questions and understanding and things like that. I think the potential for it is as vast as some of the challenges and the concerns around it. Right. So there's a lot of knee jerk reactions. There's a lot of valid reactions about the ways this interferes with how students demonstrate learning. But I think there's lots of possibilities for us to leverage it, if we can find versions of it that aren't steeped in all sorts of exploitative practices. But I'll pass to Autumm for her take.

Autumm Caines:

Yeah. So when I saw some responses that I thought potentially could be synthetic text, my first thought wasn't cheating. My first thought, like Lance said, was curiosity. It was like, wow, they figured this out. At the same time, I did worry. It's so new. It's such a new technology. This was December of 2022; it dropped on November 30th of 2022. The technology has been around for a while now. We can go all the way back to the sixties if we're just talking about chatbots. Weizenbaum's ELIZA was in the sixties. But in terms of the transformer technology, the idea of using neural networks around large language models to be able to create text so smooth and so clean that it sounds so convincing, right? That's been around for probably about two years now, but you always had to pay for it. It's always been behind some kind of paywall or part of some type of product. I mean, just to open it up to the entire world right at the beginning of finals for higher education. That's not insignificant. Whenever I talk about this, I always think it's really important to put the context around it. Yes, it's a huge jump in terms of technology, right? In terms of the tech that is going on, the interface, the way that you use it, the smoothness, all of that. But a big part of the hype, a big part of everything that's going on around this, has to do with the fact that it's free, and the timing in which it was released. Those are just two really big parts of it. And especially in that moment, I was hearing all of these news articles and things coming out about people being really punitive with their students. I read an article about a student who failed an entire class when it was discovered that they had used this tool. And it's my understanding also that there's really not a way to prove without any doubt whatsoever that a student used this tool. You can run the text through some of the detectors, but those are flawed. They're flawed in my testing of this.
And I don't think anybody even tries to pretend that they can return a 100% positive or 100% negative. There are tons of false negatives and false positives. So I really... I guess I say all of this to say I'm not surprised that when we sent the students that email and just said, hey, we're just curious, did you use this, have you heard of this tool, nobody responded. Because like I said, there were tons of articles and news releases out there with people saying that they were punishing students for using this tech. And I also think it's a little bit crazy to say that students wouldn't use it. If I were a student, I'd be curious. I'd want to try it out at least, and I don't know if I would actually submit that work, but I'd be tempted to, especially if it was the end of the semester and I was really busy and I had a lot of pressure. I don't know of any academic integrity statement from any university that mentions AI-generated anything. I don't know of any classroom policy that mentions any of this kind of stuff. So I think it really does challenge us to think about what we mean by cheating, and to critically evaluate and retake stock of what we mean, what's valuable in an education. I felt really lucky that I was working for College Unbound during this time, so that I could think about these kinds of things with an amazing partner like Lance and with a school that is student centered and student focused. That's where I'm at with it. With students right now, I guess I'll throw in at the end, my class policy is actually a little bit broader than the college's policy. I actually say it's fine if you want to use it. Just tell me that you used it and describe how you used it. I just think it's way too early right now to be punishing students for it.

Rebecca Hogue:

You mentioned something about a one credit course, and I would love to hear more about that.

Lance Eaton:

Yeah, absolutely. To just build off of Autumm's point about her policy: we created a policy that we put out as, this is our temporary policy, and individual folks are welcome to adjust it as makes sense for their classrooms. And I think that was, again, because we want to be both student centered and to empower faculty to make the right decisions on behalf of their students. That was another piece of this. As I went into the winter break, in conversations me and Autumm had, I just had this brain blast. That's a Jimmy Neutron reference, for folks that are interested. I just had this brain blast of: the quintessential way I could help figure out this challenge at College Unbound would be to do it in a way that was student centered. And so, literally, I had this idea, I got out my phone, I texted the provost, and I was just like, what about this for an idea? What if we do a one-credit class that is filled with students who are going to play with, learn about, and really think about ChatGPT and other AI generative tools? And through that class we can create a recommended set of policies for institutional usage. Instantly got back: thumbs up, let's do this. Which also meant that, oh, I have to figure out this course. So that was my winter break. And then I realized there was another iteration, or rather, again, my partner in conversation came up with this really great insight: what if you could also connect it to writing courses? We have 17-week semesters, and we do eight-week courses in session one, eight-week courses in session two, and then sometimes sixteen-week courses. So in session one, I'm doing a one-credit course where we're going to develop a rough draft of policies around usage for faculty and students.
In session two, I'm going to try to connect with students that are taking our writing course and have them sign up for this class, and it'll really be an opportunity to kick the tires on the policy. So they'll be taking a writing class, and they'll be using this policy to inform how they're going to use the AI generative tools. And that will be a bit of really trying to figure out, what are the holes in it, what are the ways that it really works, in conversation with the faculty teaching those courses as well, so that by the end of the semester students will have had a central role in developing it and testing it and putting forward the recommendations for it. That's where we are with it. We're about three and a half weeks into the one-credit course. There are about eight students in it, and it's been this rich conversation around them getting to use it, them getting to really start to see the answers it comes back with, and then also them delving into other content, other things that are helping inform their opinions. Week by week, things change, because in the second week we got the Time magazine report about how, in order to do content moderation of all of the horrible stuff on the Internet that they scraped to make this, OpenAI was paying Kenyan workers $2 an hour, which is just another way of saying they paid Kenyan workers $2 an hour to be traumatized by the worst of the Internet. So every week there are these new things that help to flesh out our thinking about it and the conversations that we have. They hear things like this and they're like, this needs to be on the UN's agenda; this is not right. It's like they get really invested and start to challenge their considerations.
One of the earliest points that was just great was I had students read some of Autumm's work, raising some of those questions around what it means to sign up and get an account with OpenAI, where it asks for your name, your email, and your cell number. And we got into a discussion around digital redlining. Our students are predominantly students of color. And so, when I created the course and started to create the assignments, where the goal was for them to use or engage with these tools, I recognized they would have to create accounts, or they would have to have access. And so I've offered my credentials for them to log in and to use. And as a result of that conversation, at this point I've probably had half the students ask to use those credentials, so that they don't have to give up their own personal information to an entity that has 10 billion dollars invested by Microsoft and is gathering up all sorts of data on its users. So yeah, that's where we are now: we're moving into the point in the course where, besides playing with it, we're really thinking about what we would recommend for usage. That's the next discussion we're starting to have.

Autumm Caines:

Getting back to your question, Rebecca, in terms of how we're using it with students: I have been pretty vocal, and I've written a couple of blog posts that you can link in your show notes, really being critical of the idea of using it with students. I'm really hoping that those faculty who do teach in a discipline where it makes sense to use it take a pause, take a beat, and think critically about how they're going to ask students to use it. And I suggested some techniques that they could employ so that they weren't forcing their students to use it, weren't forcing their students to sign up for accounts, at the bare minimum. But I guess I just want to say that I do recognize that this is discipline specific. And of course, I just love the idea of a course that is specifically designed to gather student voice and get student input about university policy, about college policy. I can totally see it. That's a situation where, yes, the students should be informed to the point where they actually have experience with the tool, so that they can give informed input. But I love the fact, Lance, that you created a shared account. That way nobody's putting their personal information in at the account level, but it also muddles things up and creates noise in terms of the questions that they're asking, right? Because it's not just the creation of that account, but also the inputs that you're putting into it. And so by having everybody share one account, I think that does a greater good in terms of protecting students.

Lance Eaton:

That's your influence at hand.

Autumm Caines:

Makes my day. I thank you for that.

Rebecca Hogue:

My next question is a little bit about what do we do for instructors? Right. What advice are we giving instructors? How can they? How should they? How should they not? What do we tell instructors about this new tool?

Autumm Caines:

I personally don't think there's anything wrong with waiting before you use it. So, what do we tell instructors? It depends on what the instructor is coming to us with, what their needs are, right? If they're coming to me and they're saying, I'm worried about cheating, that's a different conversation than, I'm intrigued and I want to use it. Right? So if it's, I'm intrigued and I want to use it, my first response might be: do you really need to use it? Do you really need to use it right now? What are you doing with it? What are you teaching? Is it directly related to what you're teaching? Is there a way that you could use it and demo it for the students rather than making the students have accounts? If you do want the students to have accounts, could you have a shared account, or could you talk to the students about their understanding of privacy and digital literacy? Like I would say, if you're teaching a digital literacy or digital citizenship course, where are your students at? Is this a level two, three, four kind of course? If this is intro, they might not have a good understanding. Most people don't have a good understanding of digital privacy. I just think that before you dive into using these tools, you should have a good foundation in data sharing and data collection, and you should have examined some cases of where things have gone wrong, data breaches and things like that. You should have an idea of what kinds of things could go wrong. If they're coming to me and they're asking me about cheating, because they're worried about cheating, I usually try to do some damage control around helping them to move away from punitive approaches, because I don't think those really do any good. At the end of the day, I think they just degrade our students' trust in us and our students' trust in higher education.
And I guess I try to talk to the instructor and remind them how much is really built on that trust, how much of education comes from that place, and help them to realize that they're sacrificing so much more if they take a punitive approach than they would be if they took a more open approach, trying to understand where the students are coming from and trying to figure out how this really aligns with their outcomes and the things that they're trying to do in their course. So I usually bring it back to helping them articulate some type of policy for their syllabus, because it all comes back to expectations, right? It all comes back to them really thinking about, in their heart, what's going on with them, their expectations for the course, the affordances that tools like ChatGPT and DALL-E and all of these other generative tools can afford students, and helping them to articulate to students why it's important and what they learn in the process. In terms of ChatGPT, it's not a matter of, we want to create essays. That's not the point. If that's the point, we're doing it wrong. Right? The point is thinking through and being nuanced in your thinking. So nuanced in your thinking that you're putting it down on paper or on a screen, and you're scrutinizing and evaluating every word, every sentence, every paragraph to make sure that things fit together and flow together. It's the process of writing, not the product of the essay. And helping faculty to have that conversation with their students, and helping faculty to articulate that meaning to their students in a way that helps the students understand, I think is so much more powerful than wagging your finger and saying, if you do this, we're going to punish you. I'm going to turn it over to Lance and let him talk about it a little bit.

Lance Eaton:

100%. Everything Autumm just said. I think the only area I would add is that I think there's some great value in instructors using it to enhance some of their own work. And I think that's great, with the caveats of the potential concerns around privacy that we've already mentioned. If they are going to do that, I think it is important for them to also be citing and identifying where their course, where their thinking, is influenced. And I come from this having worked with faculty who have done the things that they don't like their students to do. So just really demonstrating transparency, again, in how they're using it with their students. The other thing that comes to mind in using it, and this is something, again, from conversations with Autumm, is really emphasizing, no matter what they think of it right now, and I'm going to take the quote from you, Autumm: this is, quote unquote, the dumbest that AI is going to be. So I see a lot of dismissal, and I see a lot of, it's fine, I'll catch them anyways. And there's a whole other discourse around that approach or around those concerns. But I think undermining it means not recognizing that now, with this research preview, it is getting better, because we're all training it to be better. And within that is really highlighting what I'm starting to see. I want to say it's Anna Mills and a couple of other folks, Maha Bali I think has done this as well, where they're sharing their dialogs, and you're seeing, through the further development and iteration of their questions, the actual dialog. You get some really interesting, cool things, and I think that's something that is powerful and interesting and valuable for faculty and for their students to be thinking about.
I'm influenced by Warren Berger, who's written a couple of different books about questions, and I think this is one of those opportunities for us to really think about the power of questioning and what you need to ask good questions. And so there's something within this that is a possibility, no matter the discipline, to really think about how we ask good questions, how we ask meaningful questions, and how we refine questions as a means of seeking knowledge. But in order to do that, we also have to demonstrate some understanding in order to ask those deeper questions. And I think there's a really rich opportunity there to explore within all of this as well, from both the faculty and student side.

Rebecca Hogue:

Yeah, that was part of what I was thinking with instructional design and with my students: even in order to use this effectively as a tool, you need to know what questions to ask. And it's the same thing when you're doing analysis in instructional design. When you start, you need to know what the important questions are, right? Is this a training problem? Who are the students? What are their characteristics? All of these different analysis questions are things you need to ask, and if you don't know what those questions are, the tools are not useful. And so I think there is an inherent skill in learning how to ask the tool the right questions.

Autumm Caines:

I think there is. And it's also different, even though it's very, very smooth, right, and it sounds very human. There is a big difference between asking a question of a human and asking a question of a large language model like ChatGPT. If anybody is more interested in this, I was blown away with the prompt engineering stuff that's out there, which is all about how to ask it questions and understanding the different ways that you can ask it questions. It was really interesting to think about how you interface with it and how it's different than maybe interfacing in a human conversation.

Lance Eaton:

This is probably of no value, but this whole conversation just had me flash back to The Hitchhiker's Guide to the Galaxy, when they're waiting for the answer to life, the universe and everything. I feel like there's something there for this conversation as well. It's the same dynamic: looking for all the right answers without necessarily asking the right questions, or realizing what the machine is really built for.

Rebecca Hogue:

Lance, you mentioned something interesting that I think has caused an interesting conversation on Twitter as well, and that's around citing. I actually brought it up a little bit from a blog post that Maha Bali had put together, challenging us not to anthropomorphize a computer system. But if it's worth citing, are we not just doing that? How does that make sense?

Lance Eaton:

I guess I would say I don't know that the citing itself is what anthropomorphizes the tool. I think there are lots of other ways we're doing that, and so citing might feel like one part of that conglomeration. I would say citing it in the traditional sense of citing feels like the fix for now, as we're still trying to figure things out. I think all of it hearkens to trying to make explicit where information comes from. I think citing is a good start. If we were to lessen that, or try to deanthropomorphize it, it makes me think about, what is it... the Phipps article, again, something Autumm has shared with me in our conversations. The Phipps and Lanclos citation approach, which is: you identify that you were using this, and that you understand the repercussions of using a tool that is in part built off of the illegal copying or use of copyrighted works, and also off of various exploited labor. I feel like that's a way of threading it. You're citing it, you're citing where this information came from, and because you cannot tie it to individuals, you also recognize that the tool is an exploitative tool of sorts. In my head, that's one way I've been thinking about it, or would think about it, in this context. They offer that citation approach, I think, slightly to be provocative, but I think that's what I will be using and will be encouraging others to use. We can't just hide behind a citation, because maybe that's what it is: it doesn't anthropomorphize, necessarily, but it hides. It hides what really goes into that answer, both the technical and the human cost.

Autumm Caines:

Yeah. So I do have the blog post pulled up and I can read the citation example that Phipps and Lanclos propose. This is what they're proposing as a potential citation, and they say, we offer the following text not because we think that the relevant people will actually use it, but because we think that they should. And so it's: this presentation/paper/work was prepared using ChatGPT, an AI chatbot. We acknowledge that ChatGPT does not respect the individual rights of authors and artists, and ignores concerns over copyright and intellectual property in the training of the system. Additionally, we acknowledge that the system was trained in part through the exploitation of precarious workers in the global South. In this work I specifically used ChatGPT to... and then they have some ellipses where you would fill in the way that you actually used the tool. It's powerful. It's really powerful. It really makes you stop and think, should I actually use it if I'm going to acknowledge all of these horrible things about it? I'm not sure that people would. Lance is embracing it, and I think that's amazing, right? But I think.

Lance Eaton:

It also means I'm probably an irrelevant person rather than a relevant person.

Autumm Caines:

Oh! I don't know if you are, though. I don't know if you are, though. I think what they mean by that is people who would use it, and if you're willing to use something like that, I think that definitely says something. I should probably put this disclaimer out there: if you go to the blog post, you'll see at the top they acknowledge that I had some input on it. I didn't write a word of the blog post, none of it, but we had some conversations about citation and about different approaches to citation. And one of the things I put forward is that you could use this as activism. If you felt very strongly about the ethics of these systems and you wanted to make a point, you could add a citation like this to your presentation or your paper or whatever, and then have a use of ChatGPT that is so minimal, right? Like, I used it like a thesaurus and I changed out this one word. So you used it just so that you could use the citation to point out the abuses that can happen through it. It's really powerful. It could be used in lots of different kinds of ways. It's also so powerful because it's using the very thing that ChatGPT obscures. ChatGPT obscures citation; it fabricates citations. We don't know what is inside of these language models. More than likely there is copyrighted work in there, but we don't know that for sure. We really don't know what's inside of the language that it's trained on. And so this, in a way, weaponizes the idea of citation to speak back to and bring some light and some acknowledgment to some of the darker things around this particular tool.

Rebecca Hogue:

I think that's fascinating. I like the power in that citation, that opening citation. One of the ways that I've been using it, which is totally not using it as a data lookup: I have text that's in past tense that I need to move to present tense, and it's making it a whole lot faster for me to write, because I can take the stuff that I wrote in past tense, plug it in, and say, give me this in present tense, and then I can work with the present tense version that it gave me. And because I wrote the past tense, I own the present tense. It's just saving me time. So again, I'm using it strictly as a tool. Now I'm questioning how it is taking that data, and I found it did do that to me, because I had asked it a question about a town that we visited called Harrington Harbor. Originally it said that this place didn't exist and there was no knowledge of it. And then later, when I asked it about Harrington Harbor, it came back and quoted what I had written about it. Right? So it's like, oh, now your database thinks Harrington Harbor is the information that I just fed you about Harrington Harbor, which actually shows you where it's getting the information from, and it isn't even fact checking it. It's not like Wikipedia, which has systems in place where things can be at least somewhat quality controlled in some way, or at least say whether they've been quality controlled or not. And because ChatGPT is a black box, there's no looking inside to see where the information came from. I think that's fascinating.

Autumm Caines:

Just going back to the point that it's changing all of the time, right? It seems like every single day there's a new thing coming out. Speaking to what you're talking about, Rebecca: Amazon is now warning their employees to stop using it, because they're seeing some of their trade secrets in terms of coding showing up in answers from ChatGPT, because employees are using ChatGPT to help them code. So ChatGPT is not keeping those secrets; it's absorbing them and then sharing them with other users who are out there. I just put a link in the chat that you can maybe include in the show notes. But yeah, because it's a learning machine. And this comes back to ID stuff, instructional design and technologies. For years we've been talking about learning technologies. We haven't been talking so much about technologies that learn, which is a little bit different.

Lance Eaton:

This makes me so happy in some ways. I'm not going to lie, there is some Schadenfreude going on right now. To the point about the responses, and if this gets edited out, I completely understand, but I dropped this line this week when we were having a discussion about it. It just came to my head: ChatGPT responds with all of the self-confidence of a mediocre white dude. It just does. And ever since that came to mind, I can only hear ChatGPT responses in the voice of somebody named Brad. Yeah.

Rebecca Hogue:

Sorry to all of the Bradleys out there, right?

Lance Eaton:

That's right. That's right. Disclaimer...

Autumm Caines:

I think it's Chad, actually.

Lance Eaton:

I'm sorry.

Autumm Caines:

ChadGPT.

Lance Eaton:

Because we were talking about it in the context of its errors and things like that, and that came to me. It just sticks with me now. That's all I can think about.

Autumm Caines:

I love it.

Rebecca Hogue:

I'll ask one more sort of question in this area and then I think we'll close off. And that is we've talked about guidance for students and guidance for instructors. What about instructional designers? What do you think instructional designers should be doing with this technology?

Autumm Caines:

Well, the first blog post that I wrote on this topic was actually me wondering if it could be an instructional designer. There are going to be labor implications of this technology; I think everybody is clear on that, right? OpenAI is doing research right now to try to better understand the labor implications, because their mission, they say, is to make sure that AI is basically a net positive in the world rather than a negative. So they recognize this is going to have labor implications, and trying to foresee and understand that is really important if we're going to be responsible with something like this. So I was just curious if it could, like, do my job. I went in and pretended that I was a physics professor, and I know nothing about physics, so I couldn't ask any discipline-specific kinds of questions, but I asked really general questions: my department chair wants me to take my learning online and I'm skeptical about it; I don't know if this is really for me. And I just tried to see what kind of answers it would give me. Going back to the idea of prompt engineering, one of the fun things you can do is role play with it. You can ask it to role play with you. So I started off by saying, you are an instructional designer and I'm an instructor. If anything, I think that's an interesting approach, because I do think that there are a lot of very standard answers in instructional design. There's a lot of best practice out there that's fine, I guess, but I've always pushed back against it because I'm not convinced that it is best practice. I don't know who really makes that decision. There's just a lot of it out there. So I think it's kind of interesting just to see what kind of vanilla answers it gives you to instructional design problems, and then say, okay, well, can we be more creative than that? Can we go beyond this, knowing that these are maybe some of the most generic answers that are out there?

Lance Eaton:

So for ID folks, I think it's one of those things where you really have to spend some time playing with it, whether it's ChatGPT or really any of these tools, both to understand their limitations, possibilities, and evolution, and to be prepared for the questions that they will get from faculty. Where I see it potentially impacting ID the most is probably in a lot of the

Autumm Caines:

OPMs. You're thinking about OPMs.

Lance Eaton:

OPMs. I can see the large-scale online institutions, your SNHUs, your ASUs, your Western Governors, looking to leverage this in a way that reduces the number of instructional design folks that they rely upon or that they contract out, and using this as a more systematic means of updating their courses. I can see the pathways to that, because in some ways it feels very much like textbook publishers and their methods: we've got to sell more books, so we've got to come up with a new edition every two years, and we'll just switch the chapters around. I can see some of the larger-scale ones being like, we've got to refresh our courses, so hey, ChatGPT, update it with this kind of flavor. I can see a series of APIs and plug-ins being used to do a lot of that. So I would say anybody that is in that space should understand it and watch for where that starts to pop up, because those larger entities work on an assembly line where efficiency is always the better answer than anything else, and I have trouble believing that they aren't going to start thinking about it and using it.

Autumm Caines:

I think another big role for instructional designers right now, and I say this on January 30th of 2023 because it's changing all the time, like tomorrow it's going to be different, is that your administrators need guidance on top of your instructors. Your instructors are going to be coming to you and asking you for help with this. Think of all the different roles of the instructional designer: there's the role of you as a content creator and a designer who's creating stuff for people; there's the role of you as somebody who gives advice to faculty; but then most of us either have interactions with some type of administration or we have a director who has direct access to administrators. All of us are very busy. We have a lot going on, right? And we don't necessarily have all the time in the world just to concentrate on generative AI. So I think they're going to be coming and asking for advice and guidance in terms of what to do with all of the social change that's going to be happening around the impact of these tools. So just educate yourself and keep on top of the stuff that's changing. Even if you can't stay on top of it every single day, because it is changing every single day, if you can stay on top of it once a week, once every two weeks, whatever, to keep your eye on what is happening and how it's changing, I think that's going to be really important for those who will be coming to you and asking you for advice, which will be faculty, but more than likely it will be administrators, too.

Rebecca Hogue:

And I think there's an interesting point about whether or not it could replace instructional design. I don't think it can, but I do think it does some things very interestingly. I've been asking it all sorts of questions. I've generated some chapters for my textbooks, but I'm using it by saying, okay, here are the questions I want this chapter to answer; what do you say? And then it gives me an answer, and I'm like, actually, that's pretty good. I'll take that.

Autumm Caines:

Hey, I will say, and should I say this? Let's just say it. I wrote that first blog post trying to figure out if it could replace me with a little bit of tongue in cheek, a little bit of a haha thing. But then I realized the larger discourse of people who really are looking at the potential impacts are not thinking about that. They are thinking about whether it could replace teachers. And instructional designers without teachers, that's a really scary prospect if you ask me. That's the bigger thing to be concerned with.

Lance Eaton:

Yeah, 100%. Do I think that's where it's going? The thing that, whether it's AI or robots, seems to sometimes get missed or underconsidered is this: I hear folks saying there are some jobs that robots or AI won't ever replace. It's not that they'll replace them en masse. What it will mean is that you, as an individual instructional designer or plumber or whatever your profession, will be able to do a lot more, more quickly. So if previously you were only able to work with ten faculty, you can now work with 20, or you can work with 100. That's the thing that I'm seeing. It's not that you replace all instructional designers, but it means the expectations of one instructional designer will be much more expansive than they were previously. And we see that with our technologies over the last 50 years: almost all workers have become way more productive, but we keep asking more and more of them and, of course, paying them less and less. So I think that's what this shows, or what this creates the opportunity for. I was playing around with this in December when I was doing some advising, and I thought, what if I use the machine to help me get started? So rather than spending five hours hemming and hawing, I used it to see what I could get and start running more quickly. And I think that is where the challenge is. It's not going to replace everyone; it's just going to make the need fewer and fewer. And when we talk about replacing teachers, to me, in many ways the big-scale instructional places that have 130,000 students have pretty much replaced teachers already. Right? If you teach at those schools, you're not doing curriculum. You are grading and you are doing discussions. And I think they're doing that because they need to at least justify that you actually have an interaction with a human. So yeah, they will replace those, but meanwhile much of their staff is not faculty but actual staff, like instructional designers, and now it's like, oh, maybe we can do with even less. I don't want to be like, oh, we're all doomed because of robots and AI, but I think that is the threat they pose, and in some industries we can start to see that happening.

Autumm Caines:

That's right. And yeah, I also don't want to jump on the robots-are-coming-for-our-jobs bandwagon; we know that doesn't end well, and it's never 100% true. But what you just talked about, Lance, isn't just job loss, right? It's job shifting. If we have 50 instructional designers right now and each one of them is working with ten instructors, and then each one of them can work with 50 instructors, yeah, that's a different situation. It will be interesting to see how it plays out. I guess the big takeaway is that nobody really knows, but one thing that we do know is that there will be some kind of labor implications. There could also be job creation around this as well. Knowing prompt engineering or knowing how to train a language model, those kinds of things could end up being marketable skills that we might need going forward. So I don't want to paint it as all gloom and doom, but there will probably be upset, there will probably be some reskilling needed, and there will probably be big changes.

Rebecca Hogue:

I want to say thank you to both of you for coming in and joining this little thought experiment around ChatGPT and what it might be doing this semester. I am really curious to see how all of this is going to play out and where we're going to be at come September, right when we go into another semester after having had the summer as well and a little bit more time for things to play out. But thank you very much for your willingness to come on and chat with me about Chat.

Autumm Caines:

Thanks so much for having us, Rebecca.

Lance Eaton:

Thank you.

Rebecca Hogue:

You've been listening to Demystifying Instructional Design, a podcast where I interview instructional designers about what they do. I'm Rebecca Hogue, your podcast host. Show notes are posted as a blog post on DemystifyingInstructionalDesign.com. If you enjoyed this podcast, please subscribe or leave a comment on the show notes blog post.