That Change Show

Navigating the Intersection of AI, Coaching, and Organizational Change with Lauri Palaheimo

February 29, 2024 Lean Change Management Season 2 Episode 7

Embark on an enlightening journey with Lauri Palaheimo of Pandatron as we scrutinize the fascinating convergence of AI, coaching, and change management. Lauri guides us through the transformative landscape of coaching evolution, from primitive chat-based interactions to cutting-edge AI capable of emulating empathetic responses within the corporate labyrinth. With a profound background in philosophy and futures thinking, Lauri illuminates the critical role of forward-thinking in crafting AI tools designed to tackle the unpredictable hurdles of our era.

As we navigate the promising future of organizational AI, we uncover its potential in reshaping traditional structures, especially during upheavals. The discussion unravels the intricate notion of organizational fractals and how AI's adeptness in dissecting complex qualitative data can bring systemic issues to light. We probe into the shift towards decentralized models and contemplate the ethical implications for organizations venturing into AI adoption, underscoring a vision of a cautiously optimistic future wherein technology serves as a beacon for organizational progress.

Finally, we explore the role of AI in mediating cultural nuances, acting as a crucible where leadership presumptions are tested, and diverse global teams find common ground. Lauri's expertise in blending Japanese and Western coaching methodologies provides a captivating lens through which AI's impact on cultural identities is examined. We conclude with a reflection on AI's double-edged sword – the potential to homogenize yet also enrich cultural practices – inviting listeners to join the vanguard of change managers pioneering at this groundbreaking crossroads of AI, culture, and change.

Jason Little is an internationally acclaimed speaker and author of Lean Change Management, Change Agility, and Agile Transformation: 4 Steps to Organizational Transformation. That Change Show is a live show where the topics are inspired by Lean Change workshops and lean coffee sessions from around the world. Video versions are on YouTube.

Speaker 1:

All right, it's Thursday, February 29th. Welcome back to That Change Show. This is a special episode we're doing today. I normally run these every Sunday morning, but we were having a hard time coordinating our schedules for this one. I'd like to welcome Lauri Palaheimo, and hopefully I put the Finnish spin on the last name there. I know you're exaggerating.

Speaker 2:

It was pretty good, actually. Palaheimo, yeah. Way better than most people, and Finnish is notoriously difficult to pronounce, so congratulations on that.

Speaker 1:

Let's see if this still makes sense, because I used to use this. I would say "Valitettavasti en puhu suomea" or something. I would try to say "I don't speak Finnish very well" whenever I would go over there, just to see how people would react.

Speaker 2:

But how did they react?

Speaker 1:

They were like, "I appreciate the effort." Let's put it that way. Yeah, I can relate to that.

Speaker 2:

Let's move on.

Speaker 1:

Yeah, so tell the audience a little bit about yourself and what you're up to these days with Pandatron.

Speaker 2:

Sure. So Pandatron, you could say, is a business we started in 2021. We had, in a sense, an MVP period in 2019 and 2020, where we were experimenting with chat-based coaching because we wanted to see whether the interface allows for any kind of meaningful coaching-like dialogue. And we got good enough results to be encouraged to start creating chatbots and, eventually, artificial intelligence on top of it.

Speaker 2:

Before that, I've been an entrepreneur since 2016, and I worked on a couple of other cases related to corporate training, analytics and training marketplace platforms. Then, for whatever reason, I've acquired three coaching certifications. The first was, in a sense, an accident that became my destiny: one life coaching accreditation company here in Finland has a tradition of offering free training to someone they feel would eventually make a societal impact with such an education. The CEO picked me, I think I was 21 at that point, and I kind of got the spark, so to say. That led to getting trained in business coaching, then systemic coaching, and eventually agile enterprise change management, the course that you, or sorry, your colleague, was running. Beyond that, I'm actually still studying philosophy and futures thinking, more specifically critical futures thinking, which takes a critical stance towards the future and tries to uncover how our ways of thinking about reality, the present, and even our history project into what kinds of futures we can create.

Speaker 2:

So yeah, that would be, in a sense, my background. As for what I'm doing with Pandatron, well, officially the title is Head of Change, which means I lead a team of experts responsible for designing, refining and iterating on the content the AI uses with our customers, and also for developing the change process beyond just the AI. Our AI is significant, a cornerstone, but still part of a larger whole that we use when driving change. And you could say that what all of this quite diverse background eventually seems to be aiming at is that I would like to create new types of systems for leading complexity in the weirder and weirder 2020s we're getting into. That's actually how I originally got acquainted with your work: I was looking into change models that take the essence of complexity thinking and apply it in an organizational context, and I thought that what you guys are doing at Lean Change Management was pretty spot on. So I got curious.

Speaker 1:

Cool. It sounds like a perfect storm, especially talking about your background in coaching, plus the critical futures thinking and complexity, and merging that with how AI is basically eating the world. It seems like every hour there's something new that pops out. There are a million new chatbots. I've tried, I don't know, 20 different "ask the Scrum Master" expert bots and things like that.

Speaker 1:

I think the interesting thing, talking about critical futures thinking, is that a lot of what I see about AI in the change world, especially with our state of AI in change survey, is people saying it's impossible for AI to have empathy, that it's impossible for AI to mimic a natural human conversation when it comes to things like coaching, because it requires a lot of System 2 thinking. And coaches, you know this, don't always act as coaches. Sometimes they switch their stance to a mentor, sometimes to a trainer, and there's confusion and, I wouldn't say pushback from the change community, but skepticism around how AI can mimic those things. So tell me a little bit about how, well, don't give away the secret sauce, obviously, but how are you addressing those types of things with your chatbots?

Speaker 2:

Sure. I would say I try to avoid black-and-white thinking, because I think we too easily project our expectations of machines, especially the mechanical machines that were prevalent in the previous century, onto the future. So I do think that even though AI, as far as we understand, doesn't genuinely feel affective empathy, it can simulate empathetic capabilities. Of course, one could argue that it is smoke and mirrors if you, in a sense, just prompt it to simulate active listening: asking these, how would I say, inquiring questions with a very compassionate tone, trying to uncover what kinds of needs, values and character traits the user is implying behind their answers. It is, of course, limited empathy if we compare it to, let's say, a professional psychotherapist who has been working for 30 years with thousands of clients. But at the same time, I can make a strong argument that most people, especially in the corporate world, are not treated as worthy of empathy at all. Coaching is, in a sense, a luxury product in most companies, mostly offered to executives. But then what is there for the everyday worker? And that's what we have seen: even though the empathy the AI provides is not real in the sense we usually think of realness, it scaffolds the user's capability of being empathetic towards oneself.
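
A minimal sketch of the kind of prompting described above, simulating active listening with a compassionate tone. It assumes the OpenAI Python client; the prompt wording and model name are illustrative assumptions, not Pandatron's actual system:

```python
# Sketch: an empathy-simulating coaching prompt, per the discussion above.
# Assumes the OpenAI Python client; prompt text and model name are
# illustrative, not Pandatron's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COACH_SYSTEM_PROMPT = """You are a workplace coach. Practice active listening:
- Reflect back what the user said in your own words before responding.
- Ask one open, non-leading question at a time.
- Use a warm, compassionate tone; never diagnose or judge.
- Try to surface the needs, values and character traits implied by the answers."""

def coaching_turn(history: list[dict], user_message: str) -> str:
    """Send one user turn and return the coach's reply."""
    messages = [{"role": "system", "content": COACH_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```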

Speaker 2:

And we have also seen cases where that type of approach actually leaks into the organization: managers, in a sense, figure out during these coaching conversations with the AI that, hey, actually I could approach my teammates and my subordinates with this same type of approach. That has been something I've been very satisfied to see.

Speaker 2:

So, to double down on this question: in complexity theory there's a concept called bifurcation, which means a phenomenon separates into two different types of phenomena. I see that human coaches will have to become, in a sense, more relationship-oriented, more presence-oriented, to max out on the affective and relational capabilities of humans, because artificial intelligence will provide this empathy, to an extent, for everyone from now on. So the interesting question to me is not whether AI will eat the coaching industry or, more broadly, the change management industry. The more interesting questions are: how will AI change the category of coaching? How will AI change the category of change management? What problems become obsolete, and what new problems will we have that weren't prevalent before? Those are the questions I'm interested in. They're tricky ones, but they're definitely something we think about.

Speaker 1:

Yeah, and I think the interesting part about that is we're so focused on the titles right now: the coach does this, the change manager does this, the project manager does this. With AI coaching, I think there are a few benefits, because each person is different. I've worked in organizations where you're working with a manager or a leader and they want you to be a mentor. They don't want you to ask them coaching questions; they don't know what they don't know, and they want advice. One of the benefits of AI is there's a little bit of training on behalf of the user, where you can just ask it to say, "I would like you to act as a such-and-such." And I think there's probably less fear on behalf of the coachee, because that's hard to do in a one-on-one conversation: I don't want to look like a novice, or I don't want to look stupid by saying, "I don't understand what you're saying. Can you just tell me what you would do in this situation?" That can be a very vulnerable thing for people to do.
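
The "act as a such-and-such" pattern Jason mentions can be made explicit by letting the coachee pick the stance up front. A hypothetical sketch; the stance names and descriptions are illustrative only:

```python
# Hypothetical stance selector mirroring the coach/mentor/trainer
# distinction discussed above; descriptions are illustrative.
STANCES = {
    "coach":   "Ask open questions; do not give advice unless asked.",
    "mentor":  "Share relevant experience and offer concrete advice.",
    "trainer": "Explain concepts step by step and check understanding.",
}

def build_system_prompt(stance: str) -> str:
    """Turn the coachee's chosen stance into a system prompt."""
    if stance not in STANCES:
        raise ValueError(f"Unknown stance: {stance!r}")
    return f"You are acting as a {stance}. {STANCES[stance]}"

print(build_system_prompt("mentor"))
# -> You are acting as a mentor. Share relevant experience and offer concrete advice.
```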

Speaker 1:

I think AI coaching might actually create more safety for people to ask. And with respect to the fear that it's going to take jobs and things like that, I don't think it will. I think it's going to allow people to rely less on large teams. I know some really great coaches where the limit is their time, and that almost goes away with an AI co-pilot, so you can have coaches who have their own companions and can focus on higher-level, deeper conversations. But I think it's more of a closed, safe environment for people to really pour things into a bot, compared to talking to a human. When you think about it, I don't know if you've seen any of Niels Pflaeging's work. He wrote a book called Organize for Complexity.

Speaker 2:

I haven't read it.

Speaker 1:

Actually, it's a very small, thin, easy-to-read book, but one of the premises in there is what he calls the peach model of the organization. The pit of the peach is the people who support the periphery, and the periphery is all the people who have contact with the market. The pit doesn't have a hierarchical structure in it, where this chunk is HR and this is change and this is whoever. When I look at that model, I think AI is going to make it possible when it comes to coaching and change, because, in my opinion, you need more creative people who can creatively solve problems and are good thinkers, and they can use AI as their co-pilot. And now you've got a pit of people who are just creative problem solvers, and they're supporting the periphery.

Speaker 2:

Exactly, exactly. We are in alignment here. And here we get into a slightly esoteric concept that I've been playing around with. In a sense, AI allows for what could be called fractal change management: you are able to scale certain ways of thinking, scale-independently, because you can provide access to a certain way of thinking and behaving that we previously tried to spread with policies and instructions. But now, with AI and the dynamic, active element within it, people can figure out the principle themselves and then think about how to apply it, as you were saying, in the periphery.

Speaker 1:

Yeah, and what I like about that is, when you talk about the fractal approach to change management, I see that a lot with our community members. I was doing some subcontracting with a consulting organization a long time ago, and we always talked about the coach vending machine: people don't need to be paying $3,000 a day for a coach to sit there hoping somebody's going to walk in and ask them a question. You want to go up to the vending machine, tap your credit card and go, "I've got this problem, have you got 10 minutes?" And then they go away. They might go away for a few weeks, experiment, try some things, and then they want to come back. So I think it's going to radically change the model and the interaction, and probably change things like retainers. I do a lot of retainer-type contracts with companies.

Speaker 1:

For me, it's really difficult to figure out exactly how much of my time you're going to need. So we'll start with a block of time: you're paying me X dollars for Y hours, and you can use them however you want throughout these six months. You could burn through them all in a week, whatever.

Speaker 1:

And those are difficult conversations to have, because somebody's got to budget and account for that. With this fractal approach, this chatbot approach, a lot of that gets easier: organizations can see the value of coaching because they can tinker with it, try it out, get instant results, see what's going on, and then decide to make a bigger investment. Bringing people in is a pretty big risk, especially in today's market. I don't know if you see the same thing, but not a lot of companies are hiring outside coaches and consultants the way they used to. They're not bringing in a change team of five high-priced consultants for six months or a year for a project anymore. Budgets are getting cut, people are getting let go, and I think they're going to get more value from these chatbots.

Speaker 2:

Yeah, have you read Mariana Mazzucato's The Big Con?

Speaker 1:

No.

Speaker 2:

Okay, you absolutely have to read it. It's an economist's critique of the consulting industry. She makes a big case, but I think her main thesis is that working with externals, especially long term, is a bad deal for the company because it disables internal capability building. So with these AIs actually providing these capabilities throughout organizations, making organizations less dependent and, if you will, leaner becomes more possible than before. It might be that in this current turbulence, something very interesting as an organizational paradigm waits on the other side, or at least I'm theoretically optimistic about it, because whenever there is turbulence and crisis, we usually need to renegotiate our priorities. And of course, there's first the reactive phase of laying off lots of people and squeezing lots of processes.

Speaker 2:

But I think there is a deeper, more transformative level to that, which I hope AIs could help us uncover. At least, our hypothesis is that you can use AI to scaffold different types of capabilities throughout the organization, and, because you have an AI that interacts with everyone in the organization, it provides massive amounts of really deep qualitative data that the AI, given appropriate models, can sensemake into systemic patterns, or, if we can talk about this, organizational fractals.

Speaker 2:

What is the way of thinking that repeats on all levels of the organization? Are these issues in the organization actually isolated, or are they just representations of something that is ingrained into the structures but hidden? That makes me hopeful that a lot of the cool stuff related to systems and complexity approaches, which have been becoming more and more trendy for the last couple of decades, actually gets leveraged with the power of AI, just because there is too much complexity in organizations right now to make sense of it all with the more reductive models we are used to working with.
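
As a deliberately simple stand-in for the sensemaking Lauri alludes to, here is a sketch that surfaces recurring themes in qualitative feedback with TF-IDF and k-means clustering. The comments are invented and the method is illustrative, not Pandatron's actual models:

```python
# Sketch: surfacing repeating themes ("organizational fractals") from
# qualitative feedback. TF-IDF plus k-means is a deliberately simple
# stand-in for whatever models a real system would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "Leadership announces changes but never asks for input",
    "My team hears about decisions after they are already made",
    "No one asked us before the reorg was decided",
    "Too many tools, onboarding to each one takes weeks",
    "Every new system comes with weeks of setup pain",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Group comments by cluster to reveal themes that repeat across levels.
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(comments, labels):
        if label == cluster:
            print("  -", text)
```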

Speaker 1:

Yeah, what you just mentioned, that was like a bunch of Lego bricks clicking together with thoughts that have been rattling around like a tin can in my head. When you talk about sensemaking, that changes everything for how we approach change. A lot of what I've talked about is that we almost dumb down some of our activities as change managers to make things easier to make sense of. We'll ask questions like, "How much do you agree with this statement? Completely disagree, mildly disagree," whatever it is. We want numerical rating scales and things we can put in nice charts so we can go:

Speaker 1:

"Oh, 80% of the people support the change." And that's not terribly useful, because you're going to get whatever mood the person was in that day. If you didn't like the change person who sent you the survey, you're probably going to give a worse answer than you normally would. But when you can, number one, make things safer for people, and make more sense of the complexity that is transformation, you don't need status reports, change canvases, all these types of things. You can simply have a transformation diary. Imagine a world where people have an app on their phone where anybody can feed in how the change is going, AI can make sense of that, and it can all remain private and safe, so people will probably feel more free to speak up.
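
A minimal sketch of what a privacy-preserving transformation diary entry could look like: free text is stored under a salted pseudonym rather than a user ID, so patterns can be aggregated without exposing who said what. The names and the salting scheme are hypothetical:

```python
# Sketch of an anonymized "transformation diary" entry. A salted hash
# replaces the user ID so feedback can be aggregated without exposing
# identities. All names here are hypothetical.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

SALT = b"rotate-me-per-deployment"  # placeholder; manage as a real secret

@dataclass
class DiaryEntry:
    pseudonym: str   # salted hash, not reversible to the user ID
    created_at: str
    text: str

def record_entry(user_id: str, text: str) -> DiaryEntry:
    pseudonym = hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]
    return DiaryEntry(pseudonym, datetime.now(timezone.utc).isoformat(), text)

entry = record_entry("alice@example.com", "The new process feels rushed.")
print(entry)
```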

Speaker 1:

You know, I've seen these town halls, you've probably seen the same thing, where they're scripted. The comms person decides: okay, you're going to ask this question, you're going to ask that question. It's change theater. I think that whole approach to change is going to get completely revolutionized, because you can feed in any data you want and ask it anything to make sense of it. It will be hard for change managers to put the tools and templates down and use AI as a co-pilot, because there might be a trust problem: how has this language model been trained? What's happening to my data when I put it in there? Is any personal information going to leak out? I know that was a lot of random thoughts, but that just glued everything together about how we, as change agents, can really focus on the interactions in the system instead of all the mechanical day-to-day stuff.

Speaker 2:

Yeah, exactly, exactly. And it is an interesting question, not perhaps in the very near future but, let's say, in five to ten years, whether that allows for more decentralized organizational models. For example, if you have AIs that are able to coordinate, calibrate and communicate between all these different moving parts, it might, in a sense, give leverage to completely new types of organizations. Or, a more modest take: it would help materialize the organizational models that people have been theorizing about for a couple of decades but that haven't yet found their way out. And I do see a historical pattern where a paradigm shift in information technologies precedes a shift in what is materially possible.

Speaker 2:

Take, for example, the invention of the printing press: the category of literature was very limited compared to what it became after the printing press. We wouldn't have had the political, theological and societal reforms without the printing press, because it allowed for completely new scale and lower barriers to production. And even though the internet has also brought a massive shift, I hope that with the coming of AIs, if they are prompted wisely, they could help us actually make sense of all the chaos and coordinate it more effectively. And that's where I think there is a big point of reflection for all organizations: what is our responsibility here? Because taking a purely opportunistic angle can very easily have unforeseen, even catastrophic, consequences. I'm not an AI doomer, but I do see tremendous danger if we just, in a sense, project this business-as-usual, predatory way of thinking into the future with these powerful new technologies we now have access to.

Speaker 1:

Yeah, I think the tech community at large is pretty, well, I'm going to get in trouble for saying this: I think their intention is good, and they do a reasonably okay job at policing themselves. The open-source community is notorious for saying, "oh, people are abusing this technology, we're going to create a group that raises awareness of this unethical use of whatever this thing is, and we're going to try to counterbalance it with something else," because I generally believe all humans are good.

Speaker 1:

You're always going to have people, you've probably seen these folks when you're doom-scrolling on TikTok or wherever: "here's how I made $85,000 in 10 minutes writing a book with AI," which is a bunch of nonsense. So you're always going to have people who want to abuse the technology to get more followers, to get more monetization on their channels, and, in organizations, probably to influence as well. I don't remember who I was chatting with, but we talked about using AI to create more tailored communications based on what people need. If you're a fan of any of the trait, temperament or personality models out there, you can craft communications that speak in language that makes sense to a variety of different people. There's a fine line between using that technology with good intentions, to create messaging that makes sense for people, and using it to nudge or manipulate them into action. And I think the tech community as a whole will rally around some of these unethical things and at least put something in place to make people aware and counterbalance them.
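
To make that fine line concrete, here is a hypothetical sketch of tailoring the framing of one announcement while keeping the factual core identical across audiences. The profile labels are illustrative and not tied to any specific personality framework:

```python
# Hypothetical sketch: one change announcement rendered for different
# communication preferences. Profile labels are illustrative only.
CORE_MESSAGE = "We are moving sprint reviews from Friday to Wednesday."

FRAMINGS = {
    "detail-oriented": "Here is the full reasoning and timeline: {msg}",
    "big-picture":     "The short version: {msg} This frees up Fridays.",
    "people-focused":  "To reduce end-of-week crunch for the team: {msg}",
}

def tailor(profile: str) -> str:
    framing = FRAMINGS.get(profile, "{msg}")
    # Keep the factual core identical across audiences; only the framing
    # changes. Diverging facts is where tailoring slides into nudging.
    return framing.format(msg=CORE_MESSAGE)

for profile in FRAMINGS:
    print(profile, "->", tailor(profile))
```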

Speaker 2:

I hope so too. And, to make it more relevant for the people listening to this, it's actually a very interesting topic: what are the responsibilities of change managers who are willing to use these new types of technologies? I actually see that this, call it agile, or lean, or, to make it even more abstract and inclusive, complexity-informed way of leading change is the only ethical avenue, at least by these more democratic Western standards of ethics. Because if we use this to just forcefully push corporate propaganda on people, there is going to be pushback. People are not stupid. Therefore, the push-and-pull model you propose in your Lean Change Management philosophy is, I think, very descriptive of how we need to design these AIs. And this push-and-pull logic could also be called coaching: engaging in a dialogue that first and foremost respects the agency of these people, and then enables, engages and empowers them to become active participants and co-creators of the change, not just recipients of it.
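
One hedged way to encode a push-and-pull principle in a dialogue system is a simple policy that only permits a suggestion (push) after the user has been invited to share their own view (pull) a couple of times. Purely illustrative, not the Lean Change or Pandatron implementation:

```python
# Sketch: a crude "pull before push" dialogue policy. The AI may only
# offer a suggestion (push) after the user has responded to open
# questions (pull) a set number of times. Purely illustrative.
class PushPullPolicy:
    def __init__(self) -> None:
        self.pull_turns = 0

    def next_move(self, user_said_something: bool) -> str:
        if user_said_something:
            self.pull_turns += 1
        if self.pull_turns < 2:  # require two pull exchanges first
            return "ask_open_question"   # pull: invite the user's view
        return "offer_suggestion"        # push: now advice is earned

policy = PushPullPolicy()
print(policy.next_move(True))   # ask_open_question
print(policy.next_move(True))   # offer_suggestion
```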

Speaker 2:

And that's kind of what I feel about these technological breakthroughs, and this comes back to critical futures studies: they uncover our ways of thinking, because they take those ways of thinking to their logical conclusions. For example, if we are used to just pushing people, and you leverage that pushing by technological means, you cannot do that indefinitely; you will need to iterate at some point. So I'm actually one of the few people who remain optimistic in all of this, just because we cannot afford to carry these standard exploitative ways of thinking and treating one another into future work life. We just cannot, because it's going to look dystopian at that point. So we do have the ethical responsibility to design AIs that people actually want to be in trusting, enabling relationships with.

Speaker 1:

Yeah, I like that you mentioned that people aren't stupid. They can see right through typical corporate nonsense when it comes to change and transformation. And I think one of the things AI will do is make that transparent, because you can't hide from it; the data is in the system. Obviously, that's going to create a whole different set of problems. What happens in the case of a stakeholder or a sponsor who has their view and their vision of what this change can be, and then everything they've collected, the sensemaking, the insights they've gathered, tells them the opposite, or that nobody wants this change, they want this other change instead?

Speaker 1:

And now you run into different types of conversations, where leaders might feel embarrassed that they completely missed the boat. So when I talk to other change folks about their lack of trust in what AI is saying, or their lack of ability to make sense of things like that, status differences come up. One of the coaches who coached me used to say that one of the worst things you can do as a coach or consultant is to solve an executive's problem, because you've got a status thing to worry about: you're at a lower status compared to the executive, and when you come in with ideas that solve it, it makes them look bad to a certain degree.

Speaker 1:

So there are things around cultural challenges and status, which I wanted to pivot to, because when we chatted before, we were talking about some of your clients in Japan. When you start talking about making sense of things and using AI in different cultures, I think it's pretty well known that the power distance index, for listeners who don't know what that is, it's basically the difference in social status between people, varies a lot. In the Nordics in particular, there's a very low power distance index, meaning there isn't a lot of threat between somebody at the bottom of the org chart and somebody at the top, compared to cultures like China, India or Japan, where following the social structure is very important. So what have you seen with clients scattered around the world, with different types of cultures? What's the difference in their perception of AI?

Speaker 2:

Sure. I actually do feel, at least based on initial results, that Japan is particularly willing to adopt these new technologies, and I think there's a pragmatic answer and a deeper answer to that. The more pragmatic answer is that they are aware they live in a very nasty contradiction: an ancient honorific culture that is ingrained not only in their ways of thinking but in their very language. You talk to elderly people in a different register than you do to your peers or to people younger than you. At the same time, being connected to the global marketplace, they need to become more agile; they need to be able to constantly communicate what is not working in order to have functional organizations where people can work well and work for long. And they know the situation is, in a sense, a stalemate. So there is this perception that artificial intelligence could become a middleman, or, well, middle-thing, that could help mediate across these power distances and allow for more functional workplaces and further well-being. We are still at the very beginning of that journey with them, but we have gathered quite good results and I remain hopeful.

Speaker 2:

The second, deeper answer comes down to the philosophy and even the religion of the Japanese people: Shintoism, historically the state religion. In Shintoism there isn't a dualistic distinction between organic and inorganic, between an actor and a tool, because everything sits on a spectrum of being, with relationships to one another.

Speaker 2:

Every object, in a sense, has a spirit. So my hypothesis, you could call it techno-animist, is that because they have always considered objects to have a spirit, adopting artificial intelligence is quite a natural continuation; now the object just talks back. That is stereotyping and generalizing, but I think it might hold a seed of truth, because when I look at the Western discourse, we are deeply troubled by artificial intelligence talking back to us, because we want this distinction between objects and actors, where objects are things defined by their functions for actors. So I feel we have to go through, I call it the philosophical hangover, which is the fault of the Greeks that we are still recovering from, to get to the same side as the Japanese and see everything on a spectrum of being.

Speaker 1:

I like that you brought up the philosophy, because I think that's a really good topic to wrap up with, some of the things you mentioned around AI being the broker between cultures, almost. I know I've modified that slightly. A couple of things popped into my head around the philosophy of it. Number one: for global multinational companies going through transformation, I think a huge benefit of AI is being able to act as a broker, or a companion, to help different cultures understand each other. One of the most powerful things I worked on: we were running a day-long session in person, and we had brought change teams together from, I don't know, ten different countries, and the magic was in the conversation we were having.

Speaker 1:

The folks from Spain had their own view and their own approach to how they did things, compared to the people in the UK, compared to the people in the US, and the conversation was: how do we keep our identity and our culture but still benefit from the perspective of different cultures?

Speaker 1:

So AI, used as a broker, can help me not lose my identity and what's special about my culture. I think that's really one of the benefits of everyone working remote and virtual now: you've got such a diverse perspective of people, and by diversity I mean cultural background. If you get people from all corners of the globe working together as a change team, you're likely to have a much better change team, because you've got such a wide variety of perspectives. Do you want to say more about how AI can act as a broker? Or do you see anything where it could almost modify our cultures to be more robotic? Over time, as humans interact with AI more, we interact with it in a machine way, and we start to leave our culture and ourselves behind a little bit in favor of trying to make sure we're training it properly and prompting it properly. How do we preserve the good things about our culture with AI?

Speaker 2:

Really difficult question. But you could also argue that, at the same time, the AI is becoming less and less mechanical over time. So even though, yes, I do feel that since industrialization we have become more, quote-unquote, robotic, at the same time I feel AI could hopefully allow us to become culturally more creative again. Or at least that's what I've been experimenting with: if I work with Japanese people, then I read Japanese philosophy and their preferred coaching models, which, I've concluded, include high relationship orientation, an approach of iteration and iterative perfection, respectful communication, and so on. But at the same time, because they are so other-oriented and relationship-oriented, it also causes a lot of difficulties with setting boundaries in the workplace. So I've been experimenting with integrating a bit of Western models, for example cognitive-behavioral or even rational-behavioral types of models, where, in a sense, you don't take the cultural status quo as a given but offer alternative ways of thinking that people can experiment with. And then I would love to do the vice versa: with Western people, for example, I'd give them systemic thinking, avenues of understanding that they are parts of a larger whole, more focus on emotions, and so on.
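
A hypothetical configuration sketch of the blending Lauri describes, pairing a culture's preferred coaching emphases with deliberate counterbalances. The labels and groupings are illustrative only, not Pandatron's actual model:

```python
# Hypothetical sketch: blending coaching emphases per cultural context,
# as described above. Labels and groupings are illustrative only.
COACHING_BLENDS = {
    "japan": {
        "base": ["relationship orientation", "iterative perfection",
                 "respectful communication"],
        "counterbalance": ["cognitive-behavioral reframing", "boundary setting"],
    },
    "western": {
        "base": ["goal orientation", "direct feedback"],
        "counterbalance": ["systemic thinking", "emotional awareness"],
    },
}

def blend_for(context: str) -> list[str]:
    """Return the base emphases plus deliberate counterbalances."""
    blend = COACHING_BLENDS.get(context)
    if blend is None:
        raise KeyError(f"No blend configured for {context!r}")
    return blend["base"] + blend["counterbalance"]

print(blend_for("japan"))
```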

Speaker 2:

So, to me, I do believe that identities are important, our histories are important, yet at the same time, cultures are processes. It is again an optimistic viewpoint, but I would like to think that we can learn from each other. At least I have personally needed Japanese philosophy to recover from the metaphysical hangover that our own cultural tradition gave me. So it opens up quite a lot of opportunities, but at the same time, the level of thinking we need to do here, in order to even understand what is at stake, is quite something. And that's why I really want to see the emergence of a global community of change managers who see this opportunity, and not just the opportunity but also the responsibility. So if you guys are out there: please, please reach out, because we have a lot of work to do, I promise you.

Speaker 1:

Oh, that is a fantastic bookend. There are two things I wanted you to wrap up with. First, what would your advice be for people who are skeptical of the things we talked about with respect to using AI in coaching? What would help them get past that skepticism? And second, how can people reach out to you if they want to learn more?

Speaker 2:

Sure. First of all, skepticism: good, please keep it, because we are talking about something that hasn't happened yet, and I'm extrapolating from very limited evidence. We are making outrageous claims here. It is good that you are skeptical. Please keep thinking.

Speaker 2:

However, I would want you to reflect on why you are skeptical. Is it because you don't want this to happen at all and would just want the world to remain the same? If so, why? What does that imply about what you value and what you want? Coming from that standpoint, the discussion about, well, changing change, as we seem to be circling around, is important, because I feel we need to take an open and creative stance.

Speaker 2:

We shouldn't be naive, because these are powerful things that are changing realities at the very moment we are speaking. But if they are doing that, then the question is: what do we want from reality? What are the elements of the present, for example the identities and cultural traditions, that we want to keep living on into the future, and what are the new opportunities and capabilities that we also want to see in the future? That would be my answer. And when it comes to reaching out: LinkedIn, Lauri Palaheimo, is probably a good one. I also recommend trying out our browser-based demo, which can be found on our webpage, pandatron.ai. You can do a session and tell me how it worked for you, what you did, what you liked, what you didn't like, and we can discuss afterwards. I always love getting feedback in order to make it even better.

Speaker 1:

Yeah.

Speaker 2:

I think that's more or less it. And especially if you got curious about this network of change managers around artificial intelligence, please reach out, because we definitely should make it happen.

Speaker 1:

Marvelous. All that stuff will be in the show notes for people listening to the audio, and if you're watching on our YouTube channel, all the links will be down below in the description. So, kiitos, Lauri. I could talk about this all day; we could make this a seven-day podcast. I love the philosophy aspect behind AI, because the change management world is more conservative and more mechanical, if you will. There's a lot of perception that AI is a search engine: I ask it something, it gives me an answer. And I don't see a lot of chatter around the ethical use of it, the philosophy behind it, and the longer-term societal consequences and possibilities. So this was probably my favorite episode I've done in the couple of years I've been doing this.

Speaker 2:

You can't say such things to Finns; we are humble creatures. But yes, thank you. Thank you for the recognition, that means a lot to me.

Speaker 1:

I should have brought my book. I have the two volumes of Finnish Nightmares, one and two. You've probably seen those, just little books, and they're great. I've been there many times, and I hope to be back sometime in June as well. Well, I'm going to be in Sweden, but I'm hoping to piggyback on that trip and at least get over to Helsinki.

Speaker 2:

Hit me up whenever you're here; I'd love to grab a coffee. Even though we don't produce any coffee, we are famous for our consumption of it. So, yes. Awesome.

Speaker 1:

All right, well, thanks very much for the time. And if you're listening or watching, remember to hit subscribe to get notified when new episodes come out. Thanks for chatting today.

Speaker 2:

Yeah, thank you very much.

AI Coaching and Change Management
The Future of Organizational AI
AI as a Cultural Mediator
Cultural Adaptation and AI Integration