Demystifying Instructional Design

Episode 33: Exploring AI and Instructional Design with Dr. Luke Hobson

Rebecca J. Hogue Season 4 Episode 33

In this episode of Demystifying Instructional Design, Rebecca interviews Dr. Luke Hobson, an instructional design leader at MIT and educator at the University of Miami, to explore the intersection of instructional design and generative AI. Dr. Hobson shares his experiences integrating AI tools like ChatGPT into the ADDIE model to streamline processes such as crafting learning objectives, creating simulations, and personalizing learning tracks. He emphasizes the importance of experimentation in mastering AI and highlights the challenges, including limitations in accuracy and adaptability. Together, they discuss the future of instructional design, from immersive learning environments to AI-powered personalized education, offering insights into how AI can enhance, rather than replace, the work of instructional designers.

Support the show

Please consider making a donation to my Patreon account to help support this podcast financially: patreon.com/rjhogue


Marker 1
---

Rebecca: Welcome to Demystifying Instructional Design, a podcast where I interview instructional designers about what they do. Today, my guest is Dr. Luke Hobson. Luke, can you start by introducing yourself? 

Luke: Absolutely. Well, thank you so much again, Rebecca, for having me on the show. So, as you just mentioned, my name is Luke.

I lead the instructional design team over at MIT. My division is called xPRO. It's the professional development area of MIT. I also teach at the University of Miami, in the School of Education and Human Development. I teach in the EdD program for Applied Learning Sciences. And because of the fact that I love to obliterate all of my free time, I have a blog, a podcast, a YouTube channel,

books, courses, and everything of a sort, essentially teaching folks more about how to design meaningful, relevant, and interesting types of learning experiences. So that's what I do. 

Rebecca: Oh, excellent. I love talking to other podcasters. It's lots of fun. So we're here today, mostly to talk about instructional design and artificial intelligence and generative AI in [00:01:00] particular.


Marker 2
---

Rebecca: And I'm wondering, are there particular phases of the ADDIE model that you use more often?

Luke: You can incorporate gen AI into really any part of ADDIE, if I'm thinking about it out loud, even from trying to help you. Let's say that you actually didn't know anything about ADDIE,

like, first things first, you're just like, I'm new to this world. But at the same time, too, you're like, I'm new to gen AI. How do these things actually go together? And one of the things that I recently experimented with was that I created a custom GPT to interview aspiring instructional designers, and the very first question it asked was about ADDIE,

which is really funny. So trying to put these things together is absolutely possible. But hypothetically, if you go to ChatGPT, or Gemini, or any other form of large language model, you can even simply start by asking, how can I actually use you to think about this from an analysis phase, or from an evaluation phase? I'm currently at this [00:02:00] step of the way, and we're trying to cobble this thing all together.

How does it work for the implement phase? You can do any of those different types of elements. So it is entirely possible to do, but so much of it comes from the user, and knowing how to ask those questions, how to put in those prompts, and how to think about that. Because naturally, no LLM will just do that for you and say, oh, here you go.

You've got to be the one to kind of massage that out and to arrive at that conclusion. 


Marker 3
---

Rebecca: So how did you learn how to ask it the right questions?

Luke: A lot of experimentation. Seriously, when I first heard about ChatGPT, which actually was in 2019, because we were filming a course at MIT with a professor whose course was all about designing AI products and services, as you can probably imagine with this topic.

And he called out ChatGPT in a video. And I remember being like, that's a weird name. Like, what the heck is that kind of a thing? And I Googled it, like, ah, all right, but really never thought about it. I always thought about AI [00:03:00] in terms of trying to revolutionize healthcare, or something like finally trying to solve cancer.

Like, I always thought about it like that. It never was, what about my little instructional design world? That never, ever occurred to me until finally having access to ChatGPT on my own several years back now, and just trying to figure out, well, what do I do with this?

So if I take, hypothetically, my syllabus, and I copy and paste the learning objectives, and I put it into ChatGPT and say, I want you to look at these for me and tell me, could I do a better job with crafting learning objectives? Can you rate my learning objectives? And now it's kind of evolved to being able to say, using a UDL framework, I want you to now think about my learning objectives and see, did I write them in the best type of UDL fashion?

And then it can give you updates and modifications around there. So there's a lot of experimenting, and realizing, too, what gen AI is great [00:04:00] for, but also some things, too, where clearly it's still lacking. I mean, even for learning objectives, if we're going to stick with that example, it still would use the terms understand, know, and learn.

Like, no, that's step one of what we teach instructional design students: no, don't write it like that. We have to make it more measurable. We have to try to refine that. And it still said those things. And it's like, ah. So you try to go back and forth, whether it's learning objectives, or repurposing transcripts, or thinking about how can I do different types of workshops that I've done a million times before the same exact way in the past, and now I want to do them differently, or helping me with scenario-based types of problems, or even now creating simulations.

I mean, it was just so much of seeing what can it do, and also being like, oh no, we're not going down that area. That really doesn't work. 


Marker 4
---

Rebecca: So, how would you differentiate simulation from case study? 

Luke: So, it's interesting, because gen AI will treat scenarios and case studies as the same thing.

And it's like, no, they're [00:05:00] not the same. Like, one's a hypothetical; one is more about actually taking information from the real world, and now we're documenting a real-world case study. So it thinks that those two things are the same, which, I'm like, all right, I kind of get where it's going. But from a simulation perspective, what I mean is, if I actually want to create a simulation where I can download it onto my desktop and have students go through it: being able to click around, to respond inside of the simulation, to get feedback or a score, put in a progress bar, essentially designing a game or a walkthrough with that simulation.

That's what I mean, as opposed to something that's going to be a block of text describing a scenario or a case study that I'd be creating. 


Marker 5
---

Rebecca: And so with your simulation, what tools do you use to create that? 

Luke: It was actually GPT-4. That was how I created the last one that I did. 

Rebecca: Even with the programming and everything? 

Luke: Mhm. 

Rebecca: Wow. 

Luke: I know. So, [00:06:00] when we're speaking about experimenting, that was an experiment. Because it kept on telling me, no, I can't do that. But I was following along with Ethan Mollick, who is a professor out of Wharton, and he has many, many great insights about AI, and he was highlighting how he was creating simulations with ChatGPT.

I was like, I want to do that. Like, how do I do that? So what I ended up doing was that I took a screenshot of Ethan's post on LinkedIn. And when ChatGPT was like, no, I can't do that, I shared the screenshot and was just like, look, you did it for him. Why can't you do it for me? It was like, oh, my mistake.

I could do this. Okay. And I was like, okay, so now let's figure this out. And then eventually it was walking me through the steps, and it was just so much back and forth and back and forth. And it's funny, because that was before having access to creating custom GPTs, where I wanted it downloaded onto my desktop.

I'd [00:07:00] be uploading it into an LMS. And there were a lot of things I wanted to be able to control and to do. And then, to see how that took me hours, literally hours, to get this thing to work, and now to say, oh, I can create a custom GPT in minutes? It's like, all right, one's clearly the smart way of going about this, where one is a much, much harder way if you really want to go down that road.

Rebecca: I've noticed, because I use ChatGPT quite a bit now in my teaching, and I'm redesigning courses for the spring semester, and so I'm starting to use it. But I've now created Projects for each of my courses so that I don't have to keep uploading things to tell it about them. But also, I've been playing a little bit with those custom GPTs, so I can upload all the readings and stuff like that.

But I haven't quite... I was hoping to get one that would do my syllabi for me. It doesn't quite cut it yet. 

Luke: So have you tried uploading samples of your [00:08:00] syllabi that you approve of and think are the best, and saying to use those as a reference? 

Rebecca: I have uploaded it and asked for feedback, and got feedback on it, but I have not said to use it as a reference.

That's very different, because it wants me to change things where I'm like, I can't change that. That's in the boilerplate, right? 

Luke: Like, no, no, no, no. So if you're direct, and you actually say to it, this is an example of a perfect, by my standards, syllabus, and I want to create another one

based off of this original, help me with that. You're now reshaping the thinking of, oh, okay, this is the gold standard, it's what she uploaded. Now we need to make sure that we're using this version instead of whatever it's finding online. 


Marker 6
---

Rebecca: Okay. That's really helpful. Thank you. That's very useful.

What other tools do you play with?

Luke: So, I would say, from an LLM perspective, I've done quite a bit, too, [00:09:00] with Gemini, which used to be Google Bard and now is Gemini.

I've also tried a little bit with Llama, which is Meta's LLM. I had varying success with that one. On my to-do list is to really play around more with Claude 3.5. That's what a lot of folks are using right now from a data perspective, having these different types of dashboards and having it be much more of an analytical tool. I'm like, ooh, I want to try to use that one as well.

But the other thing that I've been experimenting with has been more on the creative side. So think about DALL-E, Midjourney, Leonardo, Suno for music; Runway is the other one. But I'm really interested right now in video generation. From the LLMs, everything has been really text generation. What we were promised almost a year ago, and it actually just came out last week, is Sora from OpenAI, which [00:10:00] showed this preview of essentially a woman walking through the streets of Tokyo, and a drone shot

going through an old-school Western town. So it's literally creating videos based around the prompts that you're giving it, which is fascinating. And in turn, you're watching it and you're like, it looks great, but that's weird, that person's legs are moving in the wrong direction.

It's not a hundred percent. But Sora is now finally available to the public in its sort of experimental phase. So it's coming, and Runway has been steadily going up and up as far as what it's been able to deliver. The last update, which came out about a month ago or so, is that if you're recording in front of essentially a blank white screen, you can create almost like a Pixar movie within moments, and it is wild to see. And I was like, what can I do with that?

Because now, if you're telling me that I can use myself as a voice [00:11:00] actor, and I can create videos using a green screen and upload that, not using a Synthesia or a HeyGen with a virtual avatar. It's still me, but now it's a more sophisticated, animated version of me. That's something I'm playing around with. Like, what can I do with that?


Marker 7
---

Rebecca: Yeah, that sounds fun. That sounds crazy. You jumped to video. How did you do with image editing? 

Luke: So, DALL-E was an interesting experiment as well, too. One of the things that you'll find is that when you're using something... and DALL-E seems to be like the standard. There are many, many different image tools.

Rebecca: That's the one that's integrated with ChatGPT, yeah. For me, I am having a heck of a time getting it to do what I want it to do. 

Luke: Are you? Interesting. So, it's like what we were describing with the syllabus example. Whenever I am using it... because if you actually look at my podcast, [00:12:00] you'll see that all of my artwork lately is now themed in a certain pattern, based around what I realized ChatGPT could do after putting in certain types of prompts and feedback and such.

Because before that, and this is one example, I was trying to use images for anything. And of course, whatever it is, I have a subscription for everything, from Envato Elements to Adobe. You know, I had a thousand subscriptions before the gen AI subscription wave, where now all of those subscriptions are also adding up, too, which is fantastic.

But it was like, well, I want to have something, but I want everything to be styled in a way that's purple, it's white, it's a vector image, it's modern, and it's minimalist, kind of a thing. So I ended up using this again and again and again, with the same type of prompt that I have. And while it's not always perfect, sometimes it's way too busy.

It's way too... right now, some things are totally crazy. You have to give it [00:13:00] really specific directions of what you're trying to get out of it, or else it creates that very generic look. I feel like everyone always has the same types of AI posting that is super colorful, with a thousand

things in the background, somewhat conflicting. And you're like, that's clearly AI. So that's what I've found using it. But once again, experimenting. And the real kicker of all of that, too, is that you can't change one detail. 

Rebecca: Yeah. I find that very frustrating.

It's like going through this, and you send it out, and the picture is perfect except for one thing, and I say, okay, change that one thing, and it regenerates a totally different picture. 

Luke: Exactly. 

Rebecca: And I'm like oh that is so frustrating. 

Luke: Because spelling is a huge problem. Spelling is just... they don't know.

Yeah, so if you say, I want you to create for me a poster that says Instructional Design... 

Rebecca: Oh yeah, yeah. 

Luke: You will 

Rebecca: get not even the right letters. [00:14:00] And even when you quote it, it's the wrong, yeah. 

Luke: Nope, doesn't matter. So I've done that before, where it's taken me, like, five times for it to finally get the spelling right.

And I'm like, all right, great, I'm giving up. The spelling is right, sure, but I'm not happy with the rest of it. 

Rebecca: And it doesn't seem to take no. Because it's like, do this, but with no letters. I just don't want any letters. I will put the letters on later, because they'll be wrong. And that is... yeah, it does not get letters.

And that seems to be the case no matter which one I've used. I don't have every subscription, but I do have Adobe. So I will use Illustrator, because Illustrator is pretty cool. What's nice about that is it gives you images you can edit. Although there are thousands of items in the image, and it stacks them,

so editing it is non-trivial because of the way it builds the image. But it's still editable. And then the other one is Adobe Firefly, because I find Firefly [00:15:00] sometimes will give me more of what I want compared to DALL-E, and I think that's the busyness problem. You know, I don't know what their background engine is, it could even be the same engine for that matter, but they've filtered out some of the busyness of it.

Luke: Have you used Photoshop's new Gen AI tool? 

Rebecca: No, I have not. 

Luke: That one is wild. It expands. 

Rebecca: Oh, I've seen that, because I've seen complaints about that, particularly with women and headshots, and then HR conference organizers taking them and changing the size, and the V-neck that stopped here suddenly is showing a little bra.

Luke: Oh, oh, oh, oh. 

Rebecca: It turns into this very unprofessional picture because of gen AI, and it just generated a little bit more than it should have, which is just a little challenging. 

Luke: [00:16:00] Yeah, that's not great. Let's not go down that road. I ended up using it. I took a picture of myself in front of a dome on campus, and I was curious to see, if I just kept on taking the outside of the picture, and then I highlighted that and said, expand, what would happen?

And it ended up creating a whole new campus behind me. So it kept the original dome, but then it added more trees. It added more walkways. It added more things. I was like, this is fascinating. But I did the same thing, too, because I had on a backpack, and you could see my arm, because it was like a selfie.

So you could see my arm as I'm taking the picture. And when I did that, where I highlighted me, so it left my face alone, it would change my backpack. So it changed my red backpack to a black backpack. And then it gave me tattoos on my arms. Like, it gave me a new arm, and it gave me tattoos. And I'm like, ah, nope,

I don't have a tattoo there. I'm like, but okay. [00:17:00] But the thing is that it still looked real. And that was what was insane, where if you didn't know me and you saw that picture, you'd be like, oh yeah, Luke clearly has a bunch of forearm tattoos. Which I don't, but it looked that real, which was crazy.


Marker 8
---

Rebecca: It is crazy. It's kind of fun in an experimental way, but is there a practical use for it in an educational way?

Luke: So, in some of the courses that I have, I very much love scenario-based learning. As we talked about previously, that's my jam. I have scenarios all throughout my courses.

And one of the things that I like to strive for is to make the scenario as realistic as possible, so that it has that storytelling element inside of there. And it's just like, well, what else can I do, if I already have my wall of beautifully written text that describes my scenario? Well, what else could I do?

So let's say that I wanted to describe a company that's going through a cultural shift, and the student is going to be the one who comes in from a leadership perspective and tries to save the company from the cultural [00:18:00] issues. I could take that company and create a fictitious logo

for the company, and then put that onto the actual scenario. And then on top of that, if there were something like a stylized template, or if the course has certain color patterns and choices as far as how the course was styled, I can incorporate all of that as well, too, to make it look as clean and as professional as possible.

And then, if we really wanted to get crazy, now we have a logo, we have the colors, we have a scenario. Then I can use something like Runway, or a video-generator type of tool, to create a video of somebody going through, let's say, the hallway of a company, seeing the actual board, or to have some images that describe it. Then I can have an entire story based around all of these elements, where I started with just a [00:19:00] block of text.

And now I have a logo, I have colors, I have a video, and you're really painting a picture of things. So we can take it one step further to really make it more lifelike for the students to actually experience. 

Rebecca: Yeah. That's fascinating. It just amazes me how much we can do without needing that special sort of skillset that we used to need.

Luke: Oh yeah. To assume that someone is going to go into Illustrator and create a logo to match it, that would have taken forever. As someone with a graphic design degree, I can tell you that it takes a while to be able to do these things.


Marker 9
---

Rebecca: And even just the amount of creative energy you need to expend in order to do that. And so one of the questions that I have on my list here is sort of how do you decide which AI tools to use? [00:20:00] 

Luke: So, what I like to do is to be observant around what other people in our industry are currently experimenting and playing around with.

Because the thing is that your time is valuable. You don't have enough time to go through every single tool. There are way too many. And you're going to see that some people hype up a tool, but then you look at it, and barely anyone's using it so far. And you're like, no, I'm going to assume it has no support.

It could crash. Probably not going to be doing that. So what I like to do is just monitor what my fellow peers are experimenting with and sharing, and see if there's a redundancy where I can clearly see that people are using something. So Claude is one example, where everyone I know right now is experimenting with Claude.

And it's like, okay, enough people in my network are using it, I've got to give this a shot. And that's exactly how I ended up experimenting with ChatGPT: enough of my friends and colleagues and peers were saying, Luke, you really have got to try this. Like, here's what we're seeing. Here's the [00:21:00] results.

It's like, okay, let me give it a try. But there are some things where I just know there aren't enough reasons for me to go and use every single tool, like we were talking about from the image-generation perspective. DALL-E works for me. It's already a part of a subscription I pay for. It's fine.

There are 200 other DALL-E competitors right now that we could talk about. I'm like, no. There has to be a purpose to experiment with something, and I don't want to get lost in the weeds and try out everything. 

Rebecca: Yeah, I'm wondering when the bubble's gonna burst, right?

Because we see all of these companies popping up doing AI stuff. And it's like, but the big guys are in the game too. 

Luke: Oh yeah, I was gonna say, because Google, Apple, Microsoft, Amazon, they're all there. The other thing, too, is that every company that you currently know of is now slapping AI onto their product.

And they're like, we have AI! And you're like, [00:22:00] well, do you? Or was this just some last-minute, haphazard thing, where the marketing team said, put AI somewhere in here, and now here it is. And you're using it, and you're like, well, it's there, but it's really bad and it's not user-friendly.

Yeah. 

Rebecca: Or it's always been there and people didn't realize it. I think about things like Grammarly. And when they put out these blanket statements that say, no, you can't use AI, I'm like, AI is baked into every tool that we have now. 

Luke: Yep. It just wasn't a selling feature before. It wasn't a highlight, and now it is.

So, it's been very interesting to see, though, how the learning management systems out there are now thinking about AI. Because some are going in very different directions. For instance, I know that Brightspace has a new tool called Lumi, where they're really trying to make an [00:23:00] instructional designer's sidekick, as far as, here's

how you create different activities, different assessments, here's how the learning objectives align, and everything. And it's just like, oh yeah, that makes sense. But then there are others where it's just like, here is your multiple-choice quiz generator, and it just blasts out a thousand questions.

And you're like, no, I don't really need that. This is going to produce a lot of really annoying questions that no one's going to want to have. So I was trying to figure out that fine line of, well, which team actually cared to put it in their product? And which team was kind of like, yeah, we've got AI?

And you're like, hmm, do you? 

Rebecca: Yeah, have you thought about how it might actually be used by a human? 

Luke: Right, right. 


Marker 10
---

Rebecca: And so what can AI not do?

Luke: There are concerns with AI in a couple of different areas. One of the things that keeps on driving me nuts, which led to me [00:24:00] creating more custom GPTs to try to solve the problem, is that right now, if you have folks who are currently using gen AI, and you're giving feedback to the tool to say that this is incorrect, this is wrong, this is not the way to do X, Y, and Z,

it's not like it can save that information. So one of the things that you'll see many instructional designers talking about is that learning styles are still mentioned in ChatGPT. And you're like, no, it's been debunked. It's not true. And it's still like, let me tell you about VARK. And you're like, no. It doesn't matter

what you do. So that's when you have to create your own version, to say: in your version, learning styles, you know for a fact, are debunked. And it's like, oh, okay, got it. It needs that direction. So trying to save progress from that perspective is something that is very frustrating, so unless you're copying and [00:25:00] pasting like the wind from so many past prompts, it can get very tedious. Gen AI is certainly not this magical silver bullet, like we're always promised in education, that something is going to fix and change education for the better at the drop of a dime. Before, it was microlearning is going to solve everything, gamification will solve everything, competency-based education will solve everything. And you're like, well, is it? Or is it another tool? And that's what I think about with gen AI: it's another tool. It is not going to solve everything.

Could it take more gigantic steps in the future? Maybe. I mean, seeing what Khan Academy is doing now, having essentially a personal AI tutor for every student in every classroom? Fantastic goal. If it works. It's always: awesome, if it does it appropriately, and we're still thinking about other factors. So it's always [00:26:00] a bit of a balancing act of, okay, it's great in this area, but, oh, it doesn't understand how to do this. So you're always trying to figure that out.

Rebecca: Mm hmm. I find sometimes that it can take more time, just because it's not giving you something correct. 

Luke: Yeah. 

Rebecca: Or, yeah, you throw it out the door. It's like yeah. 

Luke: And then that also becomes an area of opportunity for new tools. So, back when ChatGPT was trained on a dataset that went up to 2021, which was essentially the older free version,

that's how we ended up with the hallucinations of fake citations and fake references, because it's trying to please the user. And if you're requesting something and it can't find the information, instead of saying, I'm sorry, I don't know, it's like, oh yeah, so let me make up this fake reference. And you're like, but what, what is that?

And you go and Google it, and it doesn't [00:27:00] exist. And you're like, oh, that's not right. Now, because it actually has access to the internet, that has reduced a lot of those mistakes and such, but you'll still find something every now and again where you're like, that's... no, no. I've experimented before. Before, you used to Google yourself to see what popped up on Google; now it's like, go and ChatGPT yourself and see what it says.

And it mentioned that I wrote books that I didn't write. And I was like, I promise you, I did not write that book. Like, I'm very flattered that you think that I wrote Julie Dirksen's book, but I didn't. That's hers. That's not mine. So it's getting better for things, but that's definitely one area

where you're like, ah, yeah, you always need to double-check that. Because now, which is interesting, and going back to what I was saying about an opportunity for other tools, we have tools like Google's Illuminate that essentially do that: oh, you want to go and search the web for scholarly sources around the history of [00:28:00] online learning? Here they are.

And if you want me to then create an AI-generated podcast episode on whichever whitepaper you choose, I can do that and combine it and put it together. And now I can listen to that as I'm driving. It's mind-boggling to think that previously, if you were a doctoral student doing your literature review, you were just reading whitepaper after whitepaper, going through all of them.

And now, to be able to say, here's a repository of every whitepaper, and I can pick and choose which ones, and then you read them to me instead of me having to read them myself. That, of course, leads into another consideration around, well, how much do we want AI to do for us? Because I'm thankful that I've spent that many hours in the library, essentially grinding through books, trying to read things. But also, to have a tool that could help you with that, to make it more efficient for [00:29:00] finding some information, instead of just going with a Rolodex. It's like, oh yeah, that could be pretty amazing.

Rebecca: Yeah, it comes back to... I remember the number of business propositions around helping students take notes better, and I'm like, okay, that's great, but if they had that better note-taking, they would learn less. Like, you're taking away the learning. You've got to be careful about what it is you're automating.

Otherwise, you're getting in the way of the learning, because learning's hard, so you make it too easy. And I think about that when you think about, oh, reading articles for me and doing that kind of stuff. Great if it's just for interest, but if I'm actually trying to absorb and learn that information, is that actually effective?

Luke: Sure. So the hope, and this is of course hopeful thinking, is that I'm spending my time more wisely. As I was saying, I'm going to be literally driving to Boston for [00:30:00] work. I have an hour. I can create a podcast episode that's based around the sources that I want to go back and read later on, because now it's more precise and telling me what's in the article, so that way it's not just me mindlessly clicking through things. Now it's saying, oh, I have a direction. I know that the third article that I picked talked about the topic that I was really interested in. So now let me go back and read that article, then go through and highlight my notes and blah, blah, blah.

So if you're using it like that, then I can see what makes sense. But as you just said, I can completely see the future, which is going to be very bleak, of just, it will do it for me. And then you're like, alright, well, tell me something where you don't have access to the internet. What did you learn? And then, you know, blank, because they have nothing, and you're like, oh, oh, wait, we went too far.

So, yeah, yep, yep, that is a real consideration of not a great world in the future. 

Rebecca: Yeah. [00:31:00] Yeah. So, I was going to ask you about skills for instructional designers. What do you see as the skills of the future?

Luke: Right now, the biggest thing is explaining to people how to use Gen AI. Because they don't know.

So, you have the hiring managers and employers who are saying, you need to know how to use AI. And I wouldn't be shocked, going forward, to see job postings call out AI the way they call out eLearning and authoring tools and such. I bet we'll have callouts for AI, if we don't already. I'm sure we do.

But that will now be the thing of being able to say, yeah, you also need to know how to use ChatGPT or whatever it's going to be. And it's not so much that you need to become a wizard in how to use this; it's more that it is mystifying to people, and, as conveniently as the podcast is named, demystifying is what this is: trying to make sure folks know, here's how we actually use it in the ID space, [00:32:00] and here's how not to use it. Which is a big, big talking point right now, because you have folks who say, I know we need AI. It's a part of the strategy of the organization.

The team is thinking about AI, and when you're like, great, how do you use it? They're like, I don't know. Okay. So it's clearly on everyone's mind that they're trying to incorporate it, but they need a helping hand on how to use it. And it just takes a couple of examples around how that comes into play. And I might actually start recommending this to my students as we're talking about it.

Where in the portfolio, if you use AI somewhere within the project, you can even call out and highlight where in the project you used it. That might be interesting to showcase, to say, hey, I understand this and am AI literate, or whatever buzz term we keep saying, AI literacy now. But, you know, just to say that, yes, I know how to use these tools [00:33:00] is going to be very handy.

That's, that's the truth. 

Rebecca: Yeah, on the flip side, what do you say to people who say that AI is going to replace instructional designers?

Luke: Oh, so I gave a presentation at Harvard the other day about how to use Gen AI for teaching, learning, and design. We did a word cloud using Poll Everywhere. And I said, describe for me what one word comes to mind when I mention Gen AI, and the Harvard faculty put up evil, corrupt, retirement, and all of these things about essentially replacing me, is really what they were saying. Which I thought was, you know, kind of funny.

We got to chuckle about it and everything. But I don't think AI will replace us. I think that it is just like any other tool that we have learned how to use. Because I still remember when Ask Jeeves came out, and all my teachers freaked [00:34:00] out thinking it was the end of writing, because they're like, you now have this. And I was like, yeah, here's how we use it. So we went from worrying about Ask Jeeves to Google, to smartphones, to whatever, and now we're in the AI phase. So now it's much more about: here is this thing, what can we use it for? And unfortunately, you're going to have some decision makers who say, oh yeah, we're going to use AI going forward to solve the problem, but they don't know what they're doing.

You're like, well, you're just causing more problems, more unique problems, if you think you're going to replace a person with this AI form of a chatbot or whatever it is, and it can't do the same job. So my fear around Gen AI as a whole isn't about replacing people. It's about producing garbage-quality eLearning faster.

Rebecca: Yeah, that was the issue. And I thought about the same thing when I saw one of the video companies that does the, you can now have an animated person walk through and talk, [00:35:00] and it's like, okay, so now we're going to have a lot more rapidly generated bad video. Because, you know, that was the promise of rapid eLearning.

When rapid eLearning came out, right, when Storyline and Captivate and all of that was sort of just bubbling up to the surface and replaced Flash programmers, it was this thing where people were all like, well, now we can just give this tool to the subject matter expert and they can create eLearning.

And I'm like, No they can't. You don't understand what we do. 

Luke: No, and it's just like, sure, they can create something, but it's not going to be good. It won't be effective. And then the feedback from your employees or your students or whoever it is, is that they're going to hate it. You can create whatever you want, but the product is going to fall short.

I mean, I don't know a single person who loves learning about a sensitive topic or a difficult subject matter through the lens of some silly, weird, cartoony [00:36:00] video. And I always think it's so strange that, like, you go through an HR training, and it's something that's very serious, like workplace harassment, and you're trying to get into the mode. And you're like, okay, this is a serious topic,

Rebecca: and they use Vyond. 

Luke: Yes. And then here's some weird cartoony waving thing. And you're like, okay, instantly now I'm turned off. I don't like, what am I doing? kind of a thing. So, yeah. So we need to be smart about that. But that is definitely the fear, that folks are just going to be producing things at such a rapid rate, and it's just not going to be good. I mean, maybe it's going to have a reverse effect for us and our industry, where people say, hey, we need someone to come in who actually knows what they're doing and fix this, start over from scratch, consult, coach, do whatever. And it's like, yeah, that's entirely possible, because they don't know.

Rebecca: I think back to one of those terms in database management, right? Garbage in, garbage out. If you don't know what you want to put into it, [00:37:00] what you're going to get out of it is most likely to be crap. It might be shiny and glittery crap, but it's still crap. Yeah, definitely.


Marker 11
---

Rebecca: So I have one last question for you. And that question is what's your prediction for the future of instructional design and AI? 

Luke: As I take out my crystal ball and put on my tin foil hat and think about things. My prediction for Gen AI and instructional design involves, buzzword alert, gamification, which to be fair is still around but hasn't fully been embraced yet. I think, from a perspective that if you're trying to learn about something and it is too difficult for you, whether you're not ready, you don't have the time, you don't have the prior levels of knowledge, whatever it is, there should be a toggle to change the difficulty setting. [00:38:00]

So, I've always thought about this, of having different forms of learning tracks, essentially the version that's going to work for you, like an easy, medium, and hard mode when it comes to learning. Because you will always have the folks who are the super go-getters, who are going to get straight A's no matter what; if they want to be mentally challenged and get the most out of it, here's a hard mode track.

But for someone like myself, going back and thinking about all the math courses that I had to retake several times because I never understood them, I was like, why couldn't I have some form of a beginner level to help me ramp up? And then, when I'm comfortable, I can flip that switch, click on the toggle, and it's then going to generate a different type of track, customized for me, that makes sense.

And the way to do that in the past was with a lot of people power. It was a lot of time being spent to have multiple tracks and to customize it with personalized learning. And it's like, wow, this is hard. It's not easy to [00:39:00] do. But now, with Gen AI, that actually becomes one step in the right direction of trying to do this and to do it well.

So I've done this a couple of times with whatever LLM you want. I did it with Gemini a few times, where I took a topic from one of my doctoral courses and I asked it to take my writing and put it into a doctoral version, a high school version, and a sixth grader version.

And I wanted to see what it would do. And sure enough, it did give me three different versions. So if you're trying to explain or teach a topic and it's really not clicking with the student, you don't have to sit there thinking, well, how else do I explain it? Now here is another way to explain it. Okay, I tried this way. It didn't work. Now I want another version. So if I started with more of, like, the high school version or the freshman version, whatever you want to call it, then I can try to explain it with this lens and see if that [00:40:00] works.

And if that does, great, I can use that going forward. And if we could use that for whatever level, that would be interesting. There are a few factors I know we'll have to figure out: an accreditation perspective, a timing perspective, the instructor's perspective.

Like, there's a lot obviously still to flesh out, but this is why it's a prediction. Hypothetically speaking, if someone wants to learn a skill, they're going to take an online course no matter what. Let's try to meet them where they are and not just say, good luck! Like, we are throwing you in the deep end; it's for these people. So that's what I'd love to have: more personalized learning with Gen AI. That would be amazing. So that's one. That's one area. I have another one, but I'll wait to see if you want to comment on that first.
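The multi-level rewrite Luke describes boils down to one prompt template reused per audience. Here's a minimal sketch of how that prompt might be assembled; the audience descriptions and prompt wording are hypothetical illustrations, not taken from any real course, and the resulting strings would be sent to whichever LLM you use (Gemini, ChatGPT, etc.):

```python
# Sketch of the "three reading levels" idea: one source text, one prompt per
# audience. Only the prompt-building is shown; the model call is up to you.

LEVELS = {
    "doctoral": "a doctoral student familiar with research methods",
    "high_school": "a high school student with no prior background",
    "sixth_grade": "a sixth grader, using short sentences and everyday words",
}

def build_rewrite_prompt(source_text: str, level: str) -> str:
    """Return a prompt asking an LLM to rewrite source_text for one audience."""
    audience = LEVELS[level]
    return (
        f"Rewrite the following explanation for {audience}. "
        "Keep every key concept, but adjust vocabulary and depth.\n\n"
        f"---\n{source_text}"
    )

# One prompt per track; each would be sent to the model as a separate request.
source = "Constructivism holds that learners actively build knowledge..."
prompts = {level: build_rewrite_prompt(source, level) for level in LEVELS}
```

The point of keeping the levels in a dictionary is that adding a "freshman" or "beginner ramp-up" track, as Luke imagines, is one new entry rather than a rewrite of the workflow.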

Rebecca: Yeah. No, the personalized learning one is one that I'm very fascinated with, because it [00:41:00] has been one of those things that people say. Personalized learning is one of the benefits of Gen AI, and I've always come back with, what do you mean? How would you actually implement that? What does that actually look like? And you're the first person I've had who can actually explain a little bit about what it might look like: it might take the same information and put it in three different languages, in essence, you know, three different tracks based upon prior knowledge.

That's awesome. 

Luke: And then,

Rebecca: It's useful.

Luke: Right, and then let's take that a step further. We both teach instructional design, so we'll use that: students are coming in to take our instructional design course. We have an entrance survey at the beginning where we ask them where they're coming from, their years of experience, their goals, and everything of a sort.

So let's say that we have a person coming in who has, like, two years of experience with instructional design. They want to [00:42:00] become a corporate instructional designer, and that's why they're taking one of our courses. So we'll have those different forms of learning tracks. And then, as we're going through that learning track, when it gets to a point, whether it's a learning activity or something of a sort, that allows for more customization, we would pull the information from their survey that says you want to be a corporate instructional designer. So the learning activity would then have a corporate instructional design lens. If it was a scenario-based problem, it wouldn't be about a university; it would be a company. So we could keep flipping that to become more personalized, the more we go down that pathway.

And then same thing, too: if they're in a leadership position, not an individual contributor, then we can have the scenario be about you're leading a team of instructional designers in corporate America, and blah, blah, blah. So we can keep doing that. Like, that's within our reach. It's still not fully adopted yet, [00:43:00] and it's not like I can just flip the switch and have everything work, but those pieces are there.
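As a rough illustration of the survey-driven personalization Luke outlines, the scenario prompt could be assembled directly from the entrance-survey answers before being sent to an LLM. The survey field names and prompt wording below are hypothetical, a sketch of the idea rather than any real course's implementation:

```python
# Sketch of survey-driven scenario personalization: fold a learner's entrance
# survey into the prompt for a scenario-based activity.

def build_scenario_prompt(survey: dict, base_task: str) -> str:
    """Fold entrance-survey answers into a scenario-based activity prompt."""
    # Corporate-bound learners get a company setting instead of a university.
    setting = "a company" if survey.get("goal") == "corporate" else "a university"
    # Leaders get a team-lead framing; everyone else is an individual contributor.
    role = ("a leader of an instructional design team"
            if survey.get("is_leader") else "an individual contributor")
    return (
        f"Write a scenario-based learning activity set in {setting}. "
        f"The learner is {role} with {survey.get('years_experience', 0)} "
        f"years of experience. Task: {base_task}"
    )

# Luke's example learner: two years of experience, aiming for corporate ID.
survey = {"goal": "corporate", "is_leader": False, "years_experience": 2}
prompt = build_scenario_prompt(survey, "redesign an onboarding module")
```

Each branch in the function is one of the "flips" Luke describes; adding another survey field (industry, team size) is another conditional, which is why the pieces feel within reach even before anything is fully adopted.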

Rebecca: Yeah, I've done the baby steps to that, in that, you know, I was teaching learning theories, and so I would say, build a course based upon, say, competency-based learning.

And for students who don't have a workplace they can draw from, I say, choose one of these three scenarios. And the scenarios are based upon, well, are you coming in with an intent to go corporate? Are you looking at higher ed? What is the model you're looking at? Because then I can give you an example or a scenario that you can work with that is based upon the industry you want to go into. Especially because I don't have the creativity in me to come up with that many scenarios. So when students are asking for more scenarios, I'm like, okay, ChatGPT, [00:44:00] help me. I need more scenarios.

Luke: Right, exactly. 

Rebecca: [00:44:00] And so you had another prediction. 

Luke: I said two. So the other one, which is really interesting because, once again, it's being used right now but not fully adopted for various reasons, is immersive learning. I caught the end of a fantasy football podcast the other day, and they said that if this VR training works, every NFL team has to use this training.

And I was like, what are they talking about? Why are they talking about VR training for football players? So I go and listen to the show more and dig into it. It just so happened that on the Washington Commanders, their quarterback, Jayden Daniels, has always used VR training to take mental reps before he goes out onto the field.

So he always uses, like, an Oculus, and he'll wear this, and inside of the Oculus it goes through all the different types of plays he's going to be calling for the game. Meanwhile, he's having these different forms of animations and such come at him as if he's actually sitting in the pocket and he's trying [00:45:00] to avoid the other players. He got hurt, unfortunately, and he was trying to get ready for the next game. He was a game-time decision. Is he going to start? Is he going to sit? What's going to happen? He didn't practice all week. Instead, he just wore his Oculus and kept doing his mental repetitions. And sure enough, he ended up playing in the game, and they won. And it was like, okay. So we can actually take immersive learning now, using Gen AI, and add that to whatever type of skill or workforce development you want. He's an NFL player, but there's a version for whatever people are doing right now.

Rebecca: You know, they do that. They do very similar things with teaching laparoscopic surgery. 

Luke: Exactly. Yes. Healthcare. 

Rebecca: Like, they just put an iPad on top of a box, in essence, and then use the tools, and it simulates the procedure. Yeah, it's a cheap solution to a complex problem.

Luke: So if that can kind of [00:46:00] keep on, and once again, I know that you mentioned the costs of that too, because not everyone's going to be able to afford any of these things. And then, for someone like me, I can't wear them. I will get dizzy, I'm going to get nauseous, I'll get a migraine. I tried; I will never be able to use a VR headset. And that's like, oh, well, clearly we're going to be excluding people because they can't wear it. So clearly not for everybody, obviously. But I do think about that too: oh wow, if we actually did need to do something that required that real-world extra care, as a surgeon or as something else, yeah, why wouldn't I want to give them more mental reps before going out and doing the thing? It just makes sense. So I'm very curious.

I know that the military has been using VR for forever and obviously healthcare is really starting to pick that up as well too, so. 

Rebecca: It's actually low-res VR, which has been, like, so not really VR; it's more augmented reality in a lot of cases, because the actual [00:47:00] true VR, yeah, there's so many people who can't. And I'm not sure which problem is harder to solve, augmented reality versus virtual reality.

Luke: Yeah, that's just tricky. Yep. I know that for me, my little character in this thing I was in ended up going down a spiral staircase, and that was it. I was done. I was on the floor. I was like, ooh, the world is spinning. I can't do this. So, yeah.

Rebecca: Yeah. I use it for boxing, and it's actually been a really brilliant thing from a physical activity perspective, because I can't do impact anymore. So impact boxing doesn't work out so well, but hitting virtual things with my VR glasses,

Luke: Yeah, but shadow boxing works. Yeah. 

Rebecca: It works really well. It's like a great little thing. So yeah, it's been an [00:48:00] interesting sort of shift. But putting it in that educational sense, the real challenge becomes, is it VR or just augmented reality, or is augmented reality actually harder in some cases? It'll be interesting to see where that goes in the future too. We've got lots of promising tech coming ahead. But actually seeing where those two merge, right? We still have to figure out images, but once we actually figure out images well, and then video well, then the next step is the VR/AR and getting that figured out.

With Gen AI, that, yeah, that's a cool world. 

Luke: Which we will. I mean, right now we're all looking at all of these different things through a very critical lens, because we're like, oh, that finger's weird, and that arm's going wherever. It's like, why don't you just look up what it was like two years ago, where [00:49:00] you could barely tell it's a person? Now, two years later, we're nitpicking to say that one little thing is out of place. Give it another year; we won't have this problem. The advancement speed is just insane right now. So it's tricky, but I bet we will get there pretty soon. A lot sooner than people think. I really believe that.

Rebecca: Excellent. Well, thank you very much for being a guest on Demystifying Instructional Design. I've enjoyed this conversation. 

Luke: Thank you for having me. Really appreciate it.

