That Change Show

AI and the Future of Change Management: Promise, Peril, and the Shift in Professional Roles

January 28, 2024 Lean Change Management Season 2 Episode 3

Ever wondered how AI might redefine the role of a change manager, or whether it can comprehend the human nuances in Agile methodologies? This episode peels back the layers of AI's involvement in change management, addressing the palpable concerns of trust, oversimplification, and the steep learning curve that accompany its rise. Jeff Sutherland, co-creator of Scrum, makes an appearance through his custom Scrum bot as we put various AI bots to the test with real-world Scrum scenarios. Witness firsthand how they fare against the intricate tasks that change agents handle daily.

As we unravel the complexities AI faces with cultural adaptations in project management, such as integrating Buddhist principles into Scrum, you're in for a fascinating comparison of AI-generated advice against the diverse counsel available on professional networks. The episode doesn't shy away from the tough questions: Can AI enhance user experiences without reducing the human touch to a mere algorithm? Tune in to see how AI might offer a lens to interpret help articles and documentation, potentially speeding up the process without fully supplanting human judgment.

Finally, I share a piece of my journey in embracing AI as a technical ally in the realm of problem-solving. It's about the mindset shift needed for change managers to work alongside AI, treating it not just as a tool but as a colleague. We ponder the evolution of professional roles with advancing technology, akin to the transition from web designers to full-stack developers. And if you're curious about the ethical conundrums AI brings to the table, stay tuned for a sneak peek at an entire episode dedicated to that very topic. Join us for this episode that promises to leave you pondering the future intersection of AI and change management long after the conversation ends.

Jason Little is an internationally acclaimed speaker and author of Lean Change Management, Change Agility, and Agile Transformation: 4 Steps to Organizational Transformation. That Change Show is a live show where the topics are inspired by Lean Change workshops and lean coffee sessions from around the world. Video versions are on YouTube.

Speaker 1:

All right, Sunday, January 28th, 2024. Welcome back to That Change Show. I'm your host, Jason Little. This is a live weekly show, and it's live because that's just my way of saying I'm not gonna put a lot of time and effort into editing. So let's just get into topics that were top of mind over the previous week in the world of change and try to point you to some interesting info or, ideally, some answers and things you might wanna try out. This week I'm talking about the three main things that change agents are concerned about with respect to AI. We launched a State of AI survey a couple of weeks ago; you can check it out at leanchange.org/ai.

Speaker 1:

And this is about the three main concerns. When you go to our custom AI bot and ask it what change agents are most concerned about, these are the three things it spits out: trust and reliability; fear of oversimplification and loss of the human element; and technical complexity and the learning curve. So let's go through all of those and see if we can squash some of these myths, or some of these concerns. First, trust and reliability. Yes, that's obvious. I'm gonna use the term "AI system" a lot in this podcast, or video cast, depending on whether you're watching me or just listening, and what I mean by that is whatever AI tool or system you're using. That could be Microsoft's Copilot or Google Bard, though I'd guess most people are using OpenAI's ChatGPT, because when people say AI, they think ChatGPT, even though there's a whole pile of other ones out there. If you ask each of those three the same questions, you're gonna get slightly different answers, because they've been trained differently, on different data. Now, depending on what you're asking, it should be reasonably consistent. What I'm gonna do here is use some custom GPTs and ask them some simple questions about Scrum, and the reason I'm doing that is because there's a whole pile of quote-unquote Scrum coach bots published out there, one of them by one of the actual creators of Scrum, Jeff Sutherland. So I'm gonna see how reliable the answers are, and how different they are, between how Jeff has trained his, versus ChatGPT in general, versus some of the other more popular ones out there.
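If you want to run that same consistency check yourself, outside of a chat window, here's a minimal sketch. It assumes the OpenAI Python SDK (v1 style) with an OPENAI_API_KEY in your environment, and the model names are just examples; swap in whatever systems you actually have access to.

```python
# Minimal sketch: send the same question to a few different models and
# compare the answers side by side. Assumes the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY in your environment; the model names are examples.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Our sprint keeps getting interrupted with unplanned work, "
    "so we miss our sprint commitment. What can we do about that?"
)

for model in ["gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"=== {model} ===")
    print(response.choices[0].message.content)
    print()
```

Run it a few times and you'll also see variation within a single model, which is the other half of the reliability question.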

Speaker 1:

So, a common thing I've seen in Scrum: "We keep getting interrupted in our sprint, so we can't finish the sprint work. What can we do about that?" Now, a lot of these bots market themselves as your AI-powered Scrum Master assistant: ask any Scrum-related question. So let's see how they answer. First, I'm gonna go to Scrum Master Assistant and ask it: our sprint keeps getting interrupted with unplanned work, so we miss our sprint commitment.

Speaker 1:

What can we do about that? Let me make sure I spell "interrupted" right, copy this to the clipboard, and let's see what Scrum Master Assistant says. It's thinking, it's thinking... All right, so some of these are pretty good. Sprint planning and realistic commitments: yes. Consider the team's historical data on velocity and factor potential disruptions into it: yes, that's probably something I would generally recommend. Buffering for unplanned work: "buffering" is a bad word in the agile space. The whole thing with Scrum is that the commitment should be honor-based; we're gonna give it a good old college try, not pad the plan with buffers. But what it can mean by that, and what I've had other teams do, is have one person be the interruption person for the sprint. So if a team is doing the development work and they also have to support the product, just don't factor that one person into the sprint commitment.

Speaker 1:

Strengthen the definition of ready: yep, that can be a good one. Improve communication: yeah, that one, number four, never works, because they don't care; if you've made a commitment, you're doing the commitment. Prioritization and swapping: yep, that's something that can happen. If a piece of work interrupts the sprint, you can say, yes, we can take this piece of work, but you have to take something out of the bucket. Review and adapt: yep, so retrospectives, find out why this is happening, et cetera. Set boundaries for interruptions: establish a team agreement for how to deal with them. Empower the Scrum Master: the Scrum Master should protect the team from outside interruptions.

Speaker 1:

So all of these are pretty good, pretty interesting strategies. You could probably use any of them, depending on your context. But that's really not the point of this. Let's go over to Scrum Mentor and see how it answers the exact same question. Oh: strengthen sprint planning. So, a very similar answer on the first one. Limit WIP, improve backlog grooming, set boundaries for unplanned work (the same as one of the ones here), frequent communication, buffer for unplanned work, review and adapt. So, reasonably similar.

Speaker 1:

Here, a couple of things are a little bit different. Now, the interesting thing with custom GPTs is that you can train them on your data, but you don't have to; you can select a little checkbox that says "search the internet." So two of these bots might be trained with their own instructions for how to answer these questions, or maybe the person uploaded a bunch of questions with their own answers, or it's just grabbing answers from the internet. That's one reason why some of these would be similar.

Speaker 1:

So now let's go into the Zen Edition. This is the most popular Scrum bot out there, and it's not really positioned as a Scrum coach exactly, but let's just see what it comes up with. So, same stock answer: it's a common challenge, a disturbance in harmony. It's relating Scrum concepts to Buddhism, obviously, because it's called the Zen Edition: "just as the third noble truth suggests the cessation of suffering, Scrum provides a way to overcome these challenges."

Speaker 1:

So here's a Zen approach to dealing with this problem. Right understanding and intention: vision and objectives. Right speech and action: communicate effectively. Right livelihood: foster a professional environment where team members feel responsible for the success. Right effort: continuously strive to commit to realistic sprint goals. So, similar, yep. Right mindfulness and concentration: maintain a high level of awareness of the current state of the sprint and the team's capacity. Buffer for agile story completion.

Speaker 1:

I don't know what an "agile story" is; maybe it's a user story that's more agile than a regular user story. What else? Adaptability: same as the other one. Continuous improvement: same as the other two. So, some similar answers here, just phrased differently to sound a little more Zen-like, because obviously that's what this bot is supposed to do. Now, you could decide which of these would be most useful for you, but we've got three so far. I'm gonna save Jeff's for last because he's a creator of Scrum. Let's see what Yoda thinks about this exact same problem. I'll send this one off. Of course, it's responding like Yoda. If you're listening on the podcast: "a common challenge in the Scrum journey, this is, mmm." That was the worst Yoda ever, because it's early in the morning. But basically it's responding very similarly to the previous ones, just with a little more personality, obviously.

Speaker 1:

So this one talks about understanding where the unplanned work is coming from, which is a really good point the other ones didn't raise. Identifying capacity: is your team's capacity being overestimated? Yes, okay. How is your backlog refinement happening? Here are some things to consider.

Speaker 1:

So the main difference between this one and all the previous ones is that it breaks the answer into two chunks. First: you need to understand the source of this unplanned work, and here are some questions you can ask. Then: here are the steps to consider. This one is vastly better than the others, and this might be what's causing some of the reliability concerns change agents have, because there are really three things that matter with any AI system: the data it's been trained on, the instructions it's been given for how it operates, and how you actually prompt it. Those are the main three, especially the training and the data. The steps it suggests are similar to the others: improve communication, prioritize better, buffer for unpredictability. So it's giving you some of the same answers, but it's adding "understand where that work is coming from" before you decide, which is pretty good. So now let's ask the master of Scrum Masters, Jeff's bot. It would be funny if this were the worst answer. I don't think it will be; I think it'll be pretty cool.

Speaker 1:

So: "addressing interruptions requires a multifaceted approach." This one jumps right to solutions. It doesn't say anything about understanding or trying to figure things out first, other than retrospectives; they all talk a little bit about retrospectives, focused on discussing the unplanned work that's disrupting the sprint. But this one does break it down into a few buckets: refine your sprint planning, strengthen the definition of done, improve your backlog management, discuss impacts and patterns, do some problem solving, empower team members, coach the product owner, and follow up by monitoring the impact of implemented solutions.

Speaker 1:

So all of these had kind of similar answers, phrased a little bit differently, but you'd get different things depending on which one you actually used. The first two, Scrum Master Assistant and Scrum Mentor, were basically identical, so I would assume there hasn't been a lot of training with either of them. They might have just uploaded the Scrum Guide, or they've got the little "search the internet" checkbox checked, so they're just grabbing the body of knowledge from GPT. The best way to test that is to ask GPT itself. So I'm gonna ask plain ChatGPT, no custom bot, the same question and see what it comes up with. "This is a common challenge in agile project management." Okay, well, first of all, Scrum is not agile project management. Same list: improve sprint planning, prioritization and triage, dedicated buffer, improved communications. So, basically, those first two bots have not been trained on anything really specific for answering at least this particular question.

Speaker 1:

Now, getting into the Zen Edition: yes, there was probably a lot more training in this bot compared to some of the previous ones. Jeff's bot: yes, probably more training in this one as well; you can tell just by how the answers are bucketed together. And Scrum Yoda, same. So again, the reliability comes down to how the bot has actually been trained. Now, you can ask them what data they've been trained on. So let's ask Yoda and see how it's actually formulating its answers. "Curious about my origins, you are. Trained on..." blah, blah, blah. This is a stock answer from ChatGPT, because if you ask GPT what it's been trained on, it tells you the same thing: data up until April 2023. Let's ask Jeff's bot the same question and see. "My training data includes text..." So Jeff's is the same: mostly stock, out-of-the-box GPT.

Speaker 1:

For the specific context, "my training includes established principles," so it's possible he uploaded the Agile Manifesto, the Scrum Guide, and stuff like that as well, because the answer is a teeny bit different. Let's try the Zen Edition. Hopefully you're still tuned in, because I know watching this can get kind of boring; what I'm trying to get at is that the reliability concerns really come down to the data that's been put into the bot. So this is the last one, the Zen Edition. Here it's being a lot more specific about how it's been trained, and it's relating those concepts to Buddhism.
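You can see that instruction effect for yourself in a few lines of code. This is a minimal sketch, assuming the OpenAI Python SDK (v1 style); the two sets of instructions are made-up examples, not the real bots' prompts, but custom instructions like these are essentially what a custom GPT layers on top of the base model.

```python
# Minimal sketch: the same question answered under two different sets of
# "custom instructions" (system prompts). Assumes the OpenAI Python SDK
# (v1+); the instructions are made-up examples, not the real bots' prompts.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Our sprint keeps getting interrupted by unplanned work. "
    "What can we do about that?"
)

PERSONAS = {
    "plain": "You are a pragmatic Scrum coach. Answer concisely.",
    "zen": (
        "You are a Scrum coach who frames every answer through Buddhist "
        "principles such as the Eightfold Path. Answer concisely."
    ),
}

for name, instructions in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```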

Speaker 1:

So, the reliability and trust concern: if you're new to Scrum, maybe this would be a problem, because you don't know what you don't know. If you're getting a radically different answer from what, say, a Scrum Master in your organization would say, yes, that can be kind of unnerving. But that's basically like going to any LinkedIn group and asking a question, because the answers from the humans are going to be biased toward their own experiences anyway. And if you go to LinkedIn's AI-driven collaborative articles and look at some of those answers, some of them are absolutely brutal; they're not even close to the actual body of knowledge on the topic, because that's how humans are. So you're possibly going to run into the same problem here.

Speaker 1:

Now, in the change world, I could see how this would maybe lead you toward wanting to close off and control your bot. Say you're making an "ask my project" bot, or you're taking your onboarding or business readiness documentation and turning it into something the people who'll be using the new software can ask questions of, instead of a typical help file. Yeah, it's possible they could ask it the same question and get different answers. So it is a concern, but I think what it's going to provide is a better change experience for end users, because they can ask the AI a question, the AI interprets whatever static documentation you've put in, and then they relate the answer to their own context; they're going to think about it anyway. You're almost getting someone who can explain your help articles or business readiness material a little more easily, and yeah, you'll still get edge cases where people come back to you with questions.
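For the curious, here's roughly what that kind of docs-grounded bot looks like under the hood. A minimal sketch, assuming the OpenAI Python SDK (v1 style); the file name and rollout details are hypothetical.

```python
# Minimal sketch of a docs-grounded Q&A bot: put your own help content in
# the system prompt so answers stay anchored to your documentation.
# Assumes the OpenAI Python SDK (v1+); "onboarding_guide.md" is a
# hypothetical file name.
from openai import OpenAI

client = OpenAI()

with open("onboarding_guide.md", encoding="utf-8") as f:
    docs = f.read()

SYSTEM = (
    "You are a help assistant for our new software rollout. Answer only "
    "from the documentation below. If the answer isn't in the docs, say "
    "so and suggest contacting the change team.\n\n--- DOCS ---\n" + docs
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I request access to the new system?"))
```

For anything bigger than a handful of articles you'd chunk the documents and retrieve only the relevant pieces per question, but the principle holds: the bot is only as reliable as the documentation you feed it.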

Speaker 1:

But, like I said, I'll move on to the next topic in a moment. You can definitely see that, yeah, there are some subtle differences between them. And just before I move off: I'm not a giant fan of Copilot, I don't think it's anywhere near as good, but let's ask Copilot what it thinks about this sprint interruption stuff. "Oh, I'm sorry to hear that your sprint is getting interrupted with unplanned work." So far it's giving the same answers.

Speaker 1:

Oh, it did have an interesting one, and it's actually citing its sources; it's grabbing the information mostly from Scrum.org. Now, again, when you talk about the reliability of the answers, it's obvious Copilot is using its Bing search engine. So if you asked this question in Bing, I'd guess Scrum.org would be high on the results, because it's one of the more popular global Scrum bodies (even though Scrum certification originally came from the Scrum Alliance). So yeah, here's Scrum.org's "five dos and don'ts during sprint planning," and ProductPlan's "four steps to managing unplanned work" was another source cited here.

Speaker 1:

So Copilot is acting more like it's grabbing information from web pages and displaying it to you, maybe not doing as much interpreting as GPT does. But yes, it's a valid concern. Now, organizational context is a little bit different. That's not so much about the reliability of the AI bot itself: AI has limited ability to understand your context unless you give it the context. That's the thing most people don't know. Your prompt doesn't have to be one or two sentences; you can string a bunch of prompts together and ask very complex questions. You can give it information about the culture of your organization, the context of the change, and who the stakeholders are; feed it your status reports; feed it your survey data; and yes, it can analyze very complex organizational topics and give you potential suggestions, questions to ask, or things to go look for. You, as the human operator, still have to go and do that and figure it out, but the point is you can do it instantly. Instead of having to pore through surveys and all those other things, you can get to some pretty good options a lot more quickly. Here's a rough sketch of what a context-rich prompt like that can look like.
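This is just an illustration, assuming the OpenAI Python SDK (v1 style); the organization details and file names are hypothetical.

```python
# Minimal sketch of a context-rich prompt: instead of a one-line question,
# assemble organizational context, status, and survey data into one request.
# Assumes the OpenAI Python SDK (v1+); the file names and context details
# are hypothetical examples.
from openai import OpenAI

client = OpenAI()

context = """
Organization: 800-person insurance company, hierarchical culture.
Change: replacing a 15-year-old claims system; go-live in 8 weeks.
Stakeholders: claims VP (supportive), team leads (skeptical).
"""

with open("latest_status_report.txt", encoding="utf-8") as f:
    status_report = f.read()
with open("pulse_survey_export.csv", encoding="utf-8") as f:
    survey_data = f.read()

prompt = (
    "You are helping a change manager. Given the context, status report, "
    "and raw survey data below, suggest the top three risks to watch, "
    "three questions I should ask stakeholders this week, and anything "
    "the survey data contradicts in the status report.\n\n"
    f"CONTEXT:\n{context}\nSTATUS REPORT:\n{status_report}\n"
    f"SURVEY DATA:\n{survey_data}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```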

Speaker 1:

So, moving on to number two: the fear of oversimplification and loss of the human element. This one is about people being worried that AI will reduce change management to a series of automated tasks, which I think is funny, because most of the crap you read on LinkedIn is about how change management is a set of repeatable steps, because it should be based on science, with a start, a middle, and an end, and we want the framework and the steps that ensure successful change. So, a little bit of incongruence there. And there's that word again: automated. I did a post on this (I'll link to it in the description) where people are saying things like "I'm glad AI is here because now I can automate blah." AI is a smart robot; automation is a dumb robot.

Speaker 1:

So you can automate a lot of tasks. For example, our State of Agile, sorry, our State of Change Management and State of AI in Change Management surveys are all 100% automated, meaning data goes in, it gets analyzed, and it's right there on the page for people to look at. So this whole thing where you see annual "state of whatever" surveys, where somebody uses Typeform or SurveyMonkey and goes, "oh, in three months we'll release our infographic": not entirely useful. You can automate all of that. The analysis part comes in when you hook your surveys into an AI system. Then you get the best of both worlds, which is what we're doing: you can display some of the results, just like SurveyMonkey reports or anything else, breaking down respondents by rating-scale questions and which community they come from, instantly, and then the AI part actually generates the insights. That's where the three concerns in this episode came from. A rough sketch of that pipeline is below.
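A minimal sketch, assuming the OpenAI Python SDK (v1 style); the CSV layout, with "role" and "top_ai_concern" columns, is a hypothetical example.

```python
# Minimal sketch of the survey pipeline just described: plain automation
# handles collection and tabulation; the AI step generates the insights.
# Assumes the OpenAI Python SDK (v1+); the CSV columns are hypothetical.
import csv
from collections import Counter

from openai import OpenAI

client = OpenAI()

# "Dumb robot" part: plain automation, tally responses as they arrive.
with open("survey_responses.csv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))
counts = Counter(row["role"] for row in rows)
print("Respondents by role:", dict(counts))

# "Smart robot" part: hand the free-text answers to an AI for insights.
free_text = "\n".join(row["top_ai_concern"] for row in rows)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{
        "role": "user",
        "content": "Summarize the three most common themes in these "
                   "survey answers about AI concerns:\n" + free_text,
    }],
)
print(response.choices[0].message.content)
```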

Speaker 1:

So automation and AI are different things. But there's still this idea that it's going to be either AI running change management or humans, and that's the vibe that's come through in all these responses, and it's obviously not the case. There's always going to be an operator in place. You can think of AI as the thing that helps you make sense of your context faster, freeing up your brain to focus on the decision instead of the analysis, instead of just trying to take this glut of information and make sense of it. I think that's the biggest thing you're going to get. So it's not going to lose the human element. Although, some of these AI bots are pretty good: there's a custom GPT called Humanize that does a pretty good job of taking ChatGPT-generated text and making it sound more human. So those things exist as well, which can help with comms if you're sending that stuff out.

Speaker 1:

But the real power lies in not thinking of it as replacing us and reducing change to a set of automated tasks, but as giving us a better way to keep the change on the rails over time through real-time analysis of all the data. So, will it oversimplify the complexities of change management, or underestimate the importance of human interaction? Yeah, this question's tough because, like I said, it's framed as an either-or. We're not setting our changes on autopilot with AI and automation; we're not replacing the human. Now, if you have a change team of four people, you could probably do the same amount of work with one person, maybe two, because a lot of the administrative stuff I know I've been bogged down with, providing reports and status, creating blog posts, answering questions about the change, creating FAQs, all of that can go away, for sure. So yeah, it's a valid concern, but the thing to take out of your brain is that it's not binary, it's not either-or. There always needs to be somebody at the wheel, and AI is just like your Roomba: it helps you do stuff, but it can't do everything by itself.

Speaker 1:

The last one: technical competency and the learning curve. This one is about the skepticism that technical competency will be required to effectively implement and use AI in change management; some change managers are wary of the learning curve and the need to develop new skills. Yeah, that's true. I'd say that about the vast majority of change agents; I'm using that word to cover change managers and agile coaches, because I've been working in both communities for 15 to 20 years and there's definitely a difference in vibe between the two. Most agile coaches I know have been doing lean change, or agile change, or whatever, forever, because that's what they know and how they work, and now, in 2024, people are still discovering, "oh, I can use this lean thing, or this agile thing." So the vibe's different, and agile folks tend to have been exposed to more technical things in their careers, because, at least among the people I know, they're either ex-developers, or still developers, or they've worked more exclusively on software projects with teams, teaching them extreme programming and things like that. So it might not be much of a concern for that camp, but there are lots of non-technical Scrum Masters and agile coaches as well. The thing with technical competency and the learning curve is that if you're not technical, it's hard to understand the absolutely mind-blowing possibilities of AI; you don't know what you don't know. I've done a whole bunch of calls with non-technical folks where I show them a few cool little things they can do with automation to make their lives easier, and it just blows their minds; they had no idea it was even possible. So I think that learning curve concern absolutely will be valid.

Speaker 1:

The best thing, and I recommend this in my book (not to plug the book), is to think of whatever AI you're interacting with as a technical person. Don't think of it as a search engine. Don't think of it as a call-and-response thing where you ask a question and it gives you the answer. Think of it like a human with deep technical expertise, because you can ask it how to use it. Most people, when they talk about prompts, think: I insert a question, it gives me an answer. But you can ask it things like this.

Speaker 1:

"On Friday I've been asked to give an update on the change to our C-level executives. The change isn't going well. What questions do you need me to answer so you can suggest what I could try for this session?" It's going to spit out, say, five questions, and you can answer each one back. It'll probably say, "well, I need some context about the project, I need some information about the personality types of the leaders involved, I need some information about this, that, and the other," and you can answer those.

Speaker 1:

Now, whatever system you're using stays in context, meaning you don't have to tell it all of that again. You can follow up with a bunch of other questions, and you can say, "remember when I asked you about this? You suggested I should try that. I like that one; can you expand on it a little?" So, for the technical competency and learning curve concern: definitely talk to it like a person, but talk to it like it's a technical person. More importantly, ask it how it can help you and what it needs from you to help you the most. I'm going to wrap the show up now, because I could talk about this stuff all day, but here's a rough sketch of what that multi-turn, stay-in-context pattern looks like.
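A minimal sketch, assuming the OpenAI Python SDK (v1 style); the scenario text is the example from above, and the model name is illustrative.

```python
# Minimal sketch of multi-turn prompting: keep the whole conversation in a
# message list so the model stays in context across follow-up questions.
# Assumes the OpenAI Python SDK (v1+); the model name is an example.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user", "content": (
        "On Friday I have to update our C-level executives on a change "
        "that isn't going well. What questions do you need me to answer "
        "before you suggest an approach for the session?"
    )},
]

def send() -> str:
    # The full history goes with every request; that's what "staying in
    # context" means, because the API itself is stateless.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(send())  # the model asks its clarifying questions

# Answer them, then follow up. No need to repeat the original problem.
messages.append({"role": "user", "content": (
    "Context: 6-month ERP rollout, two sprints behind. The CFO is "
    "detail-oriented; the CEO wants headlines only. Now suggest an agenda."
)})
print(send())
```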

Speaker 1:

But for those three things: yeah, I think they're valid concerns, for sure. I think the theme across all three is the same as with any new piece of technology: how is this going to affect me and my job? And it's funny, because there's a tone of resistance in all of this stuff, which I think is hilarious. There are responses that flat-out say it'll never replace a human, it can't replace what I do, it's this magic thing. There's language like that, change managers quote-unquote resisting AI and new technology, which I think is funny, but it's warranted. I mean, in the first chapter of my book I talked about the same thing.

Speaker 1:

I was working as a web designer, and I don't think web designer jobs even get posted anymore, because that role was so specific it's irrelevant today. Now you've got full-stack developers, people who can do front end, back end, automation, managed servers, and the cloud, and technology has basically taken away the need for a web designer. So when that happened to me, I just had to become multi-skilled and learn more: more programming languages, other things, and, more importantly, how to be a product owner and talk to customers instead of just building stuff.

Speaker 1:

So I think the good thing about this is that it's gonna force change managers to be more problem solvers and less "change managers," because this has always been my own personal bias: I'm not being hired to do a change. I'm being hired to help a team or organization understand the problems they're facing and then help them solve those problems, whether that's through a software package or a transformation-type program. I'm being hired as a problem solver, and I think the sooner change agents can move into that mantra, the more it's going to open up their minds to the possibilities of what AI can do. So let's leave it at that.

Speaker 1:

In the next episode, or at least a future episode, I'm going to talk about the ethical dilemmas, because that can definitely be a show on its own. Thank you very much for listening to me ramble for a bit. Head over to Lean Change TV if you want the video versions, and subscribe in your favorite podcast player so you'll get notified when new episodes pop up. Hope you have a wonderful day, and I'll see you next time.

Concerns of Change Agents Regarding AI
AI and Change Management Oversimplification Concerns
AI as Technical Problem Solver
Exploring Ethical Dilemmas in a Future Episode