That Change Show
Change management doesn't have to be boring. That Change Show is a weekly-ish, live and unfiltered podcast that dives deep into the messy world of change and transformation. Join Lean Change founder and two-time Amazon best-selling author Jason Little as he destroys the status quo while exploring topics like AI's role in change, lean and agile change management techniques, and commentary on the world of change management. If you're an agile coach or change manager craving radical insights, this is your new addiction.
Jason is the author of The Six Big Ideas of Adaptive Organizations, From Skeptic to Strategist: Embracing AI in Change Management, Lean Change Management, and Change Agility. Video episodes available at https://leanchange.tv
The Human in the Lead: Why Change Agents Must Become AI Champions
Have ideas for the show? Liked a topic? Let us know!
Connect with Rachel Jannaway: https://www.linkedin.com/in/racheljannaway/
That Change Show on LinkedIn: https://www.linkedin.com/showcase/79698579/
Forget the technical jargon. This conversation is about strategy, culture, and embracing the philosophy of "human and, not human or".
This episode is essential for Agile coaches and change managers seeking to lead their organizations through AI transformation, focusing on conditions, iteration, and human-centric design.
Key Discussion Points for Change Practitioners:
Shifting the Narrative: From "Loop" to "Lead" Rachel argues that the common phrase "human in the loop" is "almost derogatory". Instead, AI strategy requires the "human in the lead," meaning humans are taking control of the AI rather than allowing the AI to control us. Jason agrees, noting that while autonomous agents are hyped, a human must always direct and orchestrate the technology.
The Role of Context and Conditions AI success is "not about the tech, it's about the conditions surrounding it". Change managers must apply critical process thinking: if you automate a "crappy process," you just get a "quicker, more expensive, but still rubbish process". Success requires treating the AI just like an intern or junior—providing clear instruction, direction, and leadership.
Agile and Iteration in the AI Landscape AI systems are never "done and finished". Consistent with Agile principles, AI requires constant monitoring, checking, and iteration because change is constant. The organizational drive for "faster, better, cheaper," often focused purely on reducing headcount, risks leading to "bad quality" outcomes.
The Emergence of the AI Champion Jason proposes that change agents are perfectly positioned to morph into AI champions. Change practitioners build relationships and know the most about the organization, having context from "the office junior and the CEO". Rachel offers a framework of five strategic lenses, Purpose, People, Power, Process, and Practice (continual learning), to ensure AI adoption is effective.
Addressing Cultural Challenges and Shadow AI The discussion tackles why employees often view AI either as a threat coming to take their job or as a tool allowing them to "put their feet up". Furthermore, organizations must contend with "shadow AI"—the widespread, uncontrolled use of AI tools by staff on their phones, even if banned by policy. Change agents must foster the psychological safety needed for employees to experiment, fail fast, and focus on using AI for "donkey work" so they can spend more time on creative and human-led tasks.
--------------------------------------------------------------------------------
The episode concludes with a powerful metaphor reminding listeners: "Only dead fish go with the flow." Change is inevitable, so choose to be a "live fish" by making choices and having input into your organization's AI future.
Jason Little is an internationally acclaimed speaker and author of Lean Change Management, Change Agility, and Agile Transformation: 4 Steps to Organizational Transformation. That Change Show is a live show where the topics are inspired by Lean Change workshops and lean coffee sessions from around the world. Video versions are on YouTube.
Uh, all right, welcome back to episode three of That Change Show. I know I thought the previous episode was episode one of season three, because I forgot that I started a new season in February. So you can probably tell we're not on a regular upload schedule. If I find somebody interesting that I want to talk to, I do an episode. So this time I'm talking to Rachel Jannaway, who's a freelance change consultant based in the UK. We met through, I guess, our passion for AI, but with a particular perspective, something you posted on LinkedIn maybe a month ago or so: it's not about the tech, it's about the conditions surrounding it. So we're gonna chat about why the human has to stay in the loop with whatever your AI strategy is. So tell the audience a little bit more about yourself.
SPEAKER_01:Thanks, Jason. So I'm Rachel Jannaway. As Jason said, I'm a freelance change and project management consultant, and I'm really passionate about enabling people to find change less scary. I'm a big fan of Lean Change and all the principles of co-creation. For me, AI is one of those things where it's so important we remember that we are human, and that we don't lose any of that human-centredness. I don't call myself an AI expert; I don't believe anybody is. I'm an AI enthusiast and explorer, and I'm really keen to talk to other people who are in a similar boat. That's kind of why I follow Jason, and why I'm really keen on a lot of the stuff that you do.
SPEAKER_00:Cool, right on. So welcome, thanks for joining the show today. The interesting thing I know we've both commented on, and I'm sure, listeners, you've seen this all over LinkedIn, is these failure stories: "we replaced all of our customer service people with AI, and then, shocker, the quality was bad." There still seems to be this air that the future of work is either a human or a robot. I've always maintained that the human has to stay in the loop somehow. It might mean you've got a change team of ten people today and in the future you only need two, which sucks. But you might have a whole army of agents helping you with stuff, and a human has to stay in the loop. So let's just start off with: what are some of the things you've seen? I know you've posted about this too, especially that it's about the conditions, not the tech.
SPEAKER_01:Yeah, so I work with a lot of not-for-profits and charities in the UK, particularly the smaller charities, and people generally see AI in one of a few ways: it's coming to take my job, or it's taking away responsibility ("oh, AI did it"), or "great, I'm going to be able to put my feet up, it's gonna do all my job and I'm not gonna have to do anything." And really, I don't think any of those are quite true, or actually where we want to be. I've said to people that I do think "human in the loop" is almost derogatory for what we need to do, for where we're going with AI. I think it needs to be human in the lead rather than in the loop, because when you're in the loop, it's almost like a checkbox. When I say to somebody, "keep me in the loop," I'm really not taking an active role; it's just, yeah, I'm aware that something's happening. If I say I'm leading a piece, then I'm having some thought about it, I'm giving it some direction. And I really think that's where we need to change the conversations we're having about AI, and about where the direction of work is going. Because yes, AI is amazing and I love it, and I'm an absolute advocate. The more we can get it to do the stuff it's good at, the more the humans can do the stuff we enjoy. But we have to lead it. People say, "oh, AI didn't do what I wanted it to do," and you know how many times you've written about people putting a bad prompt in and then going, "oh, this sucks, it didn't give me what I wanted." I was listening to one of your previous episodes, and you mentioned your wife and the garden example: the picture wasn't what she expected, but actually it was exactly what she asked for.
But if you gave that to your intern, to your junior, and said, "I want you to do this," with no instruction, no role, no example, none of that stuff, you'd get a crappy result. However, if you lead them rather than saying, "oh, keep me in the loop," you're gonna get a much better result. So I'm really passionate that this is about humans taking control of the AI rather than the AI controlling us.
SPEAKER_00:Yeah, that makes sense. And I think part of it is just the hype that's been built around agentic AI and autonomous agents and things like that, that the tool vendors have been promising forever: pretty soon you'll have an army of agent employees that can think on their own and do tasks on their own. I use some of them in some contexts, but I'm still very much in control of what they do. My project manager agent, for example: there's a shortcut key on my keyboard, I hit it twice and say, "Can you remind me to blah?" And then it knows, okay, I have to post this onto the Kanban board, I have to find the feature this is related to, and it does a bunch of stuff that I could absolutely do myself. But I still have the say. I'm the one directing and orchestrating it, and I would never turn something loose completely, because I've seen lots of cases of "we turned on automatic coding and it ruined one of our features." Well, of course, because you gave it access to think and do everything on its own. One decision flow I have: when people fill out evaluations for the course, an agent takes that and decides whether or not it should be posted on the public website as a testimonial. It now does that on its own. But first, I watched it for about a month, everything coming in, how it was deciding, making sure it was doing the right thing. I think there's too much of a, you've probably seen this a gazillion times, leaders, no matter what, want faster, better, cheaper. And it used to be agile, and now it's AI.
SPEAKER_01:Yeah. How much is this gonna save? How many people can I get rid of? It is a terrible view, but from a budget perspective, that's often what people think. And unless you continually train your employees, you're going to see a reduction in how effective they are. The same goes for AI, unless you constantly monitor it, unless you constantly check it. I don't know whether it was on LinkedIn or one of your previous podcasts, but you said that you ask it, "How could I have got this better?" And I think that's the bit lots of people miss. They think, "oh, I've done this prompt," but this is the key thing about AI: we're never going to be done. Change is never done and finished anyway; it's always iterating, because change is constant and the new normal. AI is just amplifying that. If you think your tech stack's stable today, somebody will bring a new feature out tomorrow, and all of a sudden everything's changed and you've got to check. Or they bring out a different model, and suddenly you're getting different results, and you're like, what's happened? If people don't keep iterating, exactly the principles we're used to with agile, the constant reflections, the what's gone well and what hasn't, all of that really good stuff needs to apply to AI, almost as a continual loop, doesn't it? Because otherwise everything's gonna drop off a cliff, or people go, "well, it worked for a while and I can't understand why I'm now getting bad results." If you don't intervene, if you don't lead it, back to my point, if you're just checking it, it's a tick-box exercise.
The same as we see with lots of things in change, isn't it?
SPEAKER_00:Yeah, and compounding it, to get away from that, you build your own system internally, which I've seen some companies do, but it's impossible for internal teams to keep up with the pace of change in, say, enterprise GPT or Gemini or any of these other models. It's like a catch-22. We want to go faster, but we don't want to let people experiment with whatever they want, so we bring it in-house and control it. That helps you get more predictable, repeatable results, because you control which language model is being used, but then you sacrifice speed. And then there's the testing around automating anything with AI. I think some people forget that automation has been available since the 50s, computer automation to a certain degree. Nowadays it's within the reach of non-technical people; with tools like Zapier and Make.com, you can build some pretty sophisticated intelligent automation. But the testing: when the models change, did it forget to put a piece of data in, or did it not analyze something the right way, or were we over our credit limit and token limit for whatever this was, and now all of our customer service is down? It changes the whole landscape, because developers know how to do this stuff, but more and more, non-technical people can build some pretty sophisticated things with no-code and automation tools. If companies don't clamp down, you're gonna end up with a massive amount of technical debt.
SPEAKER_01:That's exactly what I'm seeing with clients, and actually I fall very much into that. I'm not a coder, I'm not technical. However, I have a good enough understanding of the basics that it enables me, with things like Zapier, to do some really cool stuff. And that's great. However, if you don't then show the people who are going to be running this forever how you've built it, what could go wrong (which it will, because it's technology), and how to fix it, you're just going to keep getting bad results. I actually saw this with one of my clients recently. I've been helping them move from one CRM system to a different CRM system. The reason the old system was so bad is that it had been set up ten years before and nobody had done anything with it. So you'd got terrible data, nobody knew how it worked, they couldn't understand why they couldn't do what they wanted, it didn't have any reporting functionality, and everybody hated it with a passion. The new system's not perfect, but compared to the old one, they think it's wonderful. And I keep reiterating: if you don't learn, if you don't change, if you don't keep on top of this system, you're just gonna have the same problem. And this is exactly what we're seeing with AI, isn't it? People are thinking, "oh, this is great, I've set up a bot and it can do this, or I've set up a workflow and it can do that." But if you don't give it continual oversight, just like your employees, if you don't review it and check it, it's going to go wrong. And there's also that whole thing about choice. Not everybody wants everything automated, and I think that's something people often forget, isn't it?
You know, this is about the human in the loop, but it's also about how the human feels about it, and the things AI can't do are around that: the heart-led stuff, the feelings. I think we sometimes forget that, because it spits out some very good stuff, but it's all based on probability, it's all based on maths. People forget that, because when it chats to you and says, "Hi, how are you?", that's just a learnt response, and that's not going to replace real human people, you know what I mean. Yeah, yeah, yeah.
SPEAKER_00:It's kind of like, just because we could do this, should we do this? And too much of the decision-making, like we already mentioned, is just bottom-line thinking: "oh, I can save X on my bottom line by doing this thing." I think it's difficult to appoint a centralized AI CoP, for example, to control and do everything and push it out, because it assumes they understand all the context for whatever tools they're creating. And we've talked about this in the AI course a bunch of times: maybe change agents, instead of executing change projects, are gonna morph into the AI champions. Because I always make the argument, and obviously I'm biased, that change people probably know the most about the organization, because they work everywhere. They work on so many things. Yeah.
SPEAKER_01:And we're curious, aren't we? Generally, we're the people who build the relationships. And I'm sure you see this in so many different organizations, but as the change agent, you're the person speaking to the office junior and the CEO, and not many people do that. You understand the perspective of what their challenges are, what's really grinding the gears of the person on reception because of X, Y, and Z, but you're also in touch with the CEO and the board, and they're going, "well, this doesn't happen." Somewhere between the two, the change agent is listening to all of that and bringing it together, and can actually see where, hey, if we do this, or where this is useful, and the human side of it, we're all going to have a better outcome. I tend to look at things through five lenses, and it's nothing particularly new. What's the purpose of whatever we're doing, where are we going with this? Who are the people, who do we need to involve, who do we need to talk to? Where does the power sit, who makes the decisions? Then what's the process for doing whatever it is? We all know that if you've got a great process and you automate it, it will be an even greater process. If you have a crappy process and you automate it, you just have a quicker, more expensive but still rubbish process. And then, how are we going to keep this going? What's the practice, what's going to be our continual learning? That's how I look at things.
And I find that really useful: when we're talking about keeping the human in the loop, and when I'm thinking about what I'm doing with AI, I try to look at it through those five lenses. What am I missing? What am I not taking into account to make sure this is going to be effective and actually get us to where we want to be? Because, as we've heard so many times, technology should solve a problem, not go in search of a problem. And we're seeing that so much with AI, aren't we?
SPEAKER_00:Yeah, I think maybe we're coming out the other side of the tunnel now. When it was first released, I think everybody remembers how limited it was. The training data was limited, Copilot used to give you five prompts and then say come back tomorrow, that type of stuff. And over the last maybe 12 to 18 months, there have been a lot of exploratory projects from pretty much all companies. I mean, MIT released that stat, I don't know if you saw it, that 90% or 95% of AI pilots fail. And I don't think that's a bad number. One, who knows if it's true, because how are they measuring it? I always like to make the joke that because it's MIT, it's probably in the enterprise space, and most enterprises have a very bad implementation of Copilot, so of course it's not gonna work. But put all that aside: even if it were true, it's a good thing, because the technology is different. Sometimes you have to keep hitting stuff with a hammer before you realize what a hammer is useful for. And there's so much you can do with AI that, and it sounds crazy to say out loud, when I talk to companies I ask: how much money are you willing to light on fire this year? Just turn people loose and see what they come up with. There are pros and cons to that. And then the other side of the equation: I had a debate with somebody who said they failed because there was no ROI and they didn't plan it right. And I'm like, well, yes and no, but it's tolerance, it's cultural tolerance, right? If you really want creative solutions, take the constraints off, let people run with it, and accept that you're gonna throw money out the window.
SPEAKER_01:But you're gonna then enable people, and I'm a massive believer in that. I think people need the psychological safety to be able to fail, to fail fast, and to learn; we're all learning, and we need to iterate. The creativity people show when they feel it's okay to get it wrong and to try various things produces much better solutions than if everybody sits in a room with a piece of paper saying we must make all the numbers work. Yes, we all have to have certain guardrails, but if you can have that mindset of abundance, then people have the ability to be creative, and sometimes to solve problems the C-suite never even knew they had but that make a huge difference to the people on the ground. And that's where it changes from AI being top-down, "we're going to do this because we're going to save some money and get rid of those people," to "I'm going to use AI because it's going to make my job better. I'm going to give my customers a better outcome. I'm going to enjoy what I do more. I'm going to spend more time doing the good stuff, the creative stuff, and less time on the stuff I find really tedious." Because AI is really good at that; that's the benefit of it. It's quite happy to do loads of donkey work. Fantastic. Let it do that, and let us do the stuff it can't do. Let us do the things that are more creative and allow us to be better humans and come up with better solutions, rather than trying to put it into a box and say, "well, AI can only do this." Let's be creative. Who knows where it could go? And I think with the change piece, that's the bit for us as change practitioners: giving people the confidence that it's okay to do all this, and that change is good.
Change is scary, absolutely, but change is good. One of the people I learned a lot from back in the day, a lean and Six Sigma practitioner, had a favourite saying: only dead fish go with the flow. And I think it's brilliant, because it's just so true, isn't it? Dead fish are going somewhere, but they're having no input, and change is gonna happen. So don't be a dead fish. We all want to be live fish. And yes, you might choose to go down the river like the dead fish, but at least you're making the choice, and that for me is really important about where we're at with a lot of this stuff.
SPEAKER_00:Yeah. Oh, that's a great metaphor. What are some of your go-to AI tools?
SPEAKER_01:I love using them in tandem, so I've usually got four of them open. My faves: ChatGPT as a general thinking partner. I do have some custom GPTs and stuff, but I do a lot of work for clients, so I have to be very careful about what I put in. A lot of what I do with them is thinking and strategizing and working problems through, like I would with a colleague. So, ChatGPT. I love using Claude, particularly for that different perspective. And if I've got to do anything with code, as I said, I'm not somebody who naturally does formulas and code. But I had a particular issue with a client where I needed Zapier to do something quite complicated, and Claude came up with a solution of writing some Python code. I couldn't have done that without Claude, believe me. It wrote the Python code for me, I managed to put it back into Zapier, and I've got a really slick solution that I'd never have come up with without that kind of co-creation. I like Perplexity for the research side. I've been experimenting a bit with Comet as a browser, but I've got a few concerns because of my client work, when I'm logging on with client passwords and stuff, so I'm a little bit reticent about that. But I do use Perplexity for research and for checking, and Gemini, which for me is the best for images so far. They've all got something different, and I do love using them to critique each other's work. So quite often I'll ask ChatGPT about something I'm thinking about, and then I'll put its whole answer into Claude and say, "Hey Claude, this is what ChatGPT says. What are your views on this?"
My favourite prompt is to ask the AI to red-team test something, and I find that really useful because it gives you that different perspective: what am I missing? It also helps you look at the biases, because that's obviously a concern, along with hallucinations. And that again is the human leading: tell me what I'm missing, and now fix it. I'm also a massive fan of NotebookLM, as I know you are. I think it's just super cool, particularly for events. As I said, I live in the UK, and I was in London at the House of Commons last week. We recorded some of the speakers at the event, and I put that into NotebookLM. It synthesised it and produced a little video summarising the key points of a roughly two-hour session into six minutes. Absolutely brilliant. That is just working smarter, not harder, isn't it?
SPEAKER_00:Yeah, yeah. So as we get into the wrap-up: when you're doing this stuff with clients, what's their level of acceptance of having someone come in and use AI, or maybe help show them how they can use it?
SPEAKER_01:I actually have it in my contracts that I use AI, that they give me permission to use it, and that I use it responsibly. One of my things is that I'm constantly upskilling myself so that I can give them the best of my knowledge. Part of what I feel as a change agent and project manager is that when I leave a client, I want to leave them empowered to know how to do things better, not just to have come in, done some consultancy, gone away, and stuck them with a big bill. So the clients I work with very much love the fact that I'm an AI advocate, because that enables them to access this without necessarily taking the risks themselves; a lot of it runs through my own ChatGPT and tools. But generally, within the charity sector in the UK at the moment, there is quite a reticence around privacy, permissions, and usage, and there's a lot of shadow AI going on. A lot of organizations still don't have proper policies and don't understand the risks, and that's where I feel I can add value, by saying AI is here and it's no good banning it. I did have a client say to me, "oh, we're just gonna ban it." I'm like, yeah, that ain't gonna work, because everything on your computer has got some AI in it. Back in the day, do you remember Microsoft and Clippy, the little paper clip? Essentially, that was an early form of AI, wasn't it? So yeah, I think it is a bit of a differentiator for me; without it, I couldn't provide the level of service I do as an independent practitioner. I'm basically supercharging what I can do for them, and I'm charging them for my time, where otherwise it would take me a lot longer, or I'd need several other people to help me do some of the analysis and the junior work.
So I think they're now seeing what I'm seeing: in the UK generally, and in the sectors I'm working in, people are expecting you to know how to use AI and how to make those time savings. It's almost a given, even if they as an organization are saying, "we're not sure, we're taking baby steps." And I think, and I know you've talked about this, every organization has a shadow AI culture, because people are using it on their phones whether you've got a policy or not. So it's much better to be open with people, as we said, and say, "look, we know you're going to use it. Here's how to use it responsibly."
SPEAKER_00:Yeah, it's not like anybody is going to grab all their data out of Salesforce and put it in a public model, right? I would highly doubt that would happen unless you have a really disgruntled employee. But it's exactly as you said: if it's banned, everybody will just point their phone at the screen. I like that, using the term shadow. Yeah, makes a lot of sense.
SPEAKER_01:And that's what it is, isn't it? And it's always been the same; it's the same as having shadow cultures, and that's the bit we don't want. We want it in the open. We want to have those difficult conversations. You know the good old exercise of bringing out your stinky fish? We want everyone to tell us what they're doing with it, because then we can all have an adult conversation about the best way to protect it and to use it well.
SPEAKER_00:Yeah, for sure. Cool. So tell people where they can find more about you, or reach out if they want to chat more about change and AI stuff. I'll put this in the show notes down below for people to click through.
SPEAKER_01:Yeah, LinkedIn's a great place. I'm quite active on LinkedIn, and I'm starting to produce a monthly newsletter around what I'm seeing in the change world about AI, people, and leadership. But yeah, reach out: Rachel Jannaway. I think I'm the only one. I'm really happy to connect and always happy to chat with people about what's new. Every time I speak to people, I learn, and that's what I think is so brilliant about AI: we're all learning about this together, and it's a great way to upskill the whole world and hopefully make it a better place.
SPEAKER_00:Right on. Perfect closing. Thanks so much for taking the time to join today.
SPEAKER_01:Thank you for having me, really appreciate it.