That Change Show

Why The Backlash Against AI May Be More Harmful Than The Hype

Lean Change Management Season 3 Episode 2

Have ideas for the show? Liked a topic? Let us know!

Well, it's been so long I actually forgot I already did episode 1 of season 3!

In this episode we talk about why anti-AI sentiment is more damaging than AI hype.

Jason Little is an internationally acclaimed speaker and author of Lean Change Management, Change Agility and Agile Transformation: 4 Steps to Organizational Transformation. That Change Show is a live show where the topics are inspired by Lean Change workshops and lean coffee sessions from around the world. Video versions are on YouTube.

SPEAKER_00:

Hey everybody, welcome back to That Change Show. This is the debut of season three. The tough part with this show is that I do episodes when there's a topic I think is relevant, because there's so much noise out there in the change world. The episodes used to be weekly, but now it's more, hey, this is an interesting topic, so let's put a show out there. So this is the kickoff of season three, and there's probably going to be lots of AI stuff on the show this season. This week I'm joined by Marcin Wachkowski, the founder of Opt4AI, and today we're going to chat about anti-AI sentiment and why that hype might actually be worse than the AI hype. So Marcin, tell the audience a little bit about yourself.

SPEAKER_01:

Hello, I'm Marcin Wachkowski. It's a Polish surname, so it can be tricky. In short, I've been a software developer for maybe 16 years now, so I've witnessed many transformations in AI and in software development in general. This new one, the AI transformation, is one of the biggest for me. It's a very hot topic, there are many opinions, and there are many people with pretty radical opinions on both sides. You have the AI hype: AI will revolutionize everything, will replace everyone, and it will be a utopia of sorts. And you have other people who say that AI is fully unethical, it cannot do anything right, you should not use it, and it's toxic for your mind and for everything it touches. So that's why we're here.

SPEAKER_00:

Yeah, it was interesting, because there was a post that we both read. I typed about 40 comments on it and canceled every one of them, because I didn't want to feed it. It was one of those typical posts, and what I didn't like about it mostly was that the author was promoting their course by hyping anti-AI sentiment. It's basically the same thing you described: people who don't like AI prompt it with something, it gives them a wrong answer or hallucinates, and immediately it's that binary switch: oh, it's terrible. And I thought it was funny, the post, and then the first comment: please join my AI workshop and I'll teach you about it. It just rubbed me the wrong way, but I've seen tons of that, primarily all over LinkedIn, because there seem to be a lot of people posting edge cases. We've all seen things here in Canada at least; there was a story about how one of the Air Canada AI bots gave away a free flight. And of course, the people who don't like AI go, oh, AI is terrible, you should never use it. It's leaping to a conclusion. I don't want to get too technical on the show today, but there are lots of stories like that, and people seem to latch onto those edge cases as opposed to the positive cases. So what are some of the anti-AI hype things you're seeing that are maybe more damaging than the actual hype?

SPEAKER_01:

It's really curious, because you have grifters on both sides. You have AI hype grifters: I will teach you AI coding without coding anything, and so on. So it's not only the anti-AI side, it's also the AI hype side. But it's pretty surprising that you can make money on this topic just by telling people that AI doesn't work. It's a surprising thing to make money on, but I guess it exists. And about your question, why is it so dangerous? I think opinions like this do nothing for people, because companies will not stop AI adoption, and big tech will not change course. I use AI every day professionally, and those opinions are completely inconsequential to that. If you remember, this post was about AI startups. That's why it rubbed me the wrong way, because I have an AI startup and I have operating solutions, and the post was just hogwash, to be completely honest, because you can build good solutions that do good things. In my case, I do call analysis, because call listening is a terrible job; I think it should be replaced as much as possible and as fast as possible. And when I encounter a person like this, I don't really know how to respond. It's so wrong it's hard to respond to. When a person says, it's just wrong, it's bad, do not use it, don't even try to use it, that's a horrible opinion. The only thing it does is demoralize people. And here's what's strange about it: let's say those people are right, AI is bad, AI is unethical, we only create terminators who do bad things to people, and the whole thing is bad. You should still know it. Just like with guns: guns are bad, guns can kill, maybe no one should have guns, but if there are guns, you must know them, you must know how to use them. That's exactly how I think this is dangerous: it demoralizes ordinary people from using AI, from learning about AI, from knowing AI, and it does nothing about the really bad side of AI. The AI bubble, the surveillance from big tech: it does nothing about any of that. So that's why I think it's harmful.

SPEAKER_00:

Yeah, one of the things we chat about with some of the change folks in our AI club, lean coffees, and open meetups is what happens when they don't get predictability, certainty, and a repeatable response from it: immediately it's no good, because of something they did a year ago. ChatGPT seems to be like the Kleenex of AI; for many people, that's all they know. They don't see AI as a whole ecosystem of things; they just think ChatGPT, and they tend to use it like Google. It's like those old days when it was just Google and people would say, oh, I have to do a report for my stakeholders this Friday for a project kickoff, so I'm going to Google something and copy and paste the first template. I see people following that same pattern with AI: just give it something and get an answer with a zero-shot prompt, or even just a single-shot prompt, so it doesn't understand the context. It's like we chat about: imagine you walked up to a stranger on the street and asked them a question about your area of expertise. If they don't know it, do you say they're bad or stupid or they don't understand? For me, it's the same thing. Without context, without treating AI like a human you're having a rich conversation with, you're going to get out what you put into it.
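
To make that zero-shot versus context-rich point concrete, here's a minimal sketch using the OpenAI Python SDK. The model name, audience, and project details are all illustrative assumptions, not anything from the episode; swap in whatever provider and details you actually use.

```python
# Minimal sketch: "Google-style" zero-shot prompt vs. a context-rich one.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Zero-shot, "stranger on the street" style: no context, generic output.
zero_shot = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[{"role": "user", "content": "Write a project kickoff report."}],
)

# Context-rich: audience, purpose, constraints, and source material.
context_rich = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a change manager preparing stakeholder communications."},
        {"role": "user", "content": (
            "Write a one-page kickoff report for Friday's stakeholder meeting.\n"
            "Audience: executive sponsors, low technical depth.\n"
            "Project: CRM migration, 6-month timeline, 3 teams involved.\n"
            "Tone: confident but honest about the two open risks below.\n"
            "Risks: data-quality unknowns; integration vendor not yet signed."
        )},
    ],
)
print(context_rich.choices[0].message.content)
```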

SPEAKER_01:

Yes, exactly. Look at people's prompts and imagine giving the same prompt to a human. For example, you have a co-worker and you give that co-worker the same prompt. Usually the person will say, you're crazy: you gave me one simple sentence and you think I should give a good response to it? I need context. Why? How should I do it? What materials should I use? I guess AI has this allure of an all-knowing machine, and when it doesn't live up to those expectations, people think it's bad. But it's not an all-knowing machine.

SPEAKER_00:

Yeah, it's almost like the name artificial intelligence puts people's brains in a frame of mind, much like autopilot with Tesla, right? You hear that phrase, autopilot, and you think, oh, it can do everything: I can read a book, I can take a nap, and oops, my car just crashed into a telephone pole. So maybe people are taking that word intelligence too literally, when really it's only going to be as good as the training. And not to get into too much technical detail, but we talk a lot about this: when you're training a language model or creating an instruction set or prompts, talk to it like it's a five-year-old, right? Like, don't ever touch the stove. If you're ambiguous in what you're saying, it won't be exactly sure; it might try to guess, it might try to make some inference. But there are things non-technical people can do if they're using, say, Claude Projects or OpenAI projects, or any of these startups offering chatbots and no-code solutions: if they learn how to write good instructions, they're going to get better responses out of it. That's my view.
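
As a hedged illustration of that "talk to it like it's a five-year-old" advice, here is what an ambiguous instruction versus an explicit one might look like for a chatbot or a Claude/OpenAI project. The policy wording, rules, and email address are made up for the example.

```python
# Ambiguous instructions invite guessing; explicit ones constrain it.
AMBIGUOUS = "Help users with travel questions."

EXPLICIT = """You answer questions about company travel policy only.
Rules:
1. Quote the policy section you used, e.g. 'Per section 4.2 ...'.
2. If the policy does not cover the question, say exactly:
   'I don't know - please contact travel@example.com.' Never guess.
3. Never discuss refunds or fares; route those to a human agent.
Output: at most 120 words, plain text, no links."""
```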

SPEAKER_01:

Of course, that's true. There are many very common pitfalls, and I think the biggest one is what I call the one-tab conversation: a disjointed conversation spanning many topics in a single chat. I guess that's the most important one, because LLMs do not work like humans. LLMs cannot dynamically clean up their current memory the way humans can. When we discuss things, you know what is important and what's not, and you retain in your memory only what matters for the current context. LLMs cannot do this. They carry this huge backlog of disparate topics all over the place, and when you ask another question, all of those topics influence inference in real time. So you need to clean up the conversation, or use other mechanisms, like memory mechanisms or compaction mechanisms, to fix it. But by default the LLM gets all the context, and that's why, in very long conversations, the LLM seems to lose the plot randomly. People say, oh, it's not working, it loses the plot. Yes, because it has thousands of messages behind it; that's why.
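
A rough sketch of the compaction idea Marcin mentions: summarize the older turns and carry forward only a summary plus the recent messages. The `summarize` stub stands in for what would normally be one extra LLM call; the message format follows the common role/content convention.

```python
def summarize(old_messages):
    # Stand-in for a real summarization call, e.g. asking the model to
    # "summarize the decisions and open questions in these messages".
    text = " ".join(m["content"] for m in old_messages)
    return text[:500]  # crude truncation in place of a real summary

def compact(messages, keep_recent=6):
    """Collapse all but the last keep_recent turns into one summary message."""
    if len(messages) <= keep_recent + 1:
        return messages  # nothing worth compacting yet
    system, old, recent = messages[0], messages[1:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)
    return [system,
            {"role": "user", "content": f"Summary of earlier discussion: {summary}"},
            *recent]
```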

SPEAKER_00:

Yeah, to go back to the human thing: if you're having a conversation with someone and you talk for three hours, filling their head with three hours of content, you're not going to get a good, rich response. I think people assume that because they're typing into a machine, it should keep all of that context and understand everything. I think Claude just released context windows of around a million tokens; that is a huge thread. So for listeners, if you're chatting with whatever language model and your browser starts freezing up and scrolling gets slow: forget the technical stuff behind it, the brain's getting full. It's trying to remember so much context, even context from the other threads you've had, because many of these tools have types of memory now, and you might just have to start over. I chatted with one group about using the Men in Black neuralyzer, for people who've seen the movie, where they walk up to people and erase their memory. You have to learn how to interact with it to get the most out of it.
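
One way to notice "the brain's getting full" before your browser does: count tokens with OpenAI's tiktoken tokenizer and start a fresh thread past some budget. The 150,000-token budget here is an arbitrary illustration, not any vendor's documented limit.

```python
# Rough check of how full a thread is before deciding to start over.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def thread_tokens(messages):
    """Estimate total tokens across a list of role/content messages."""
    return sum(len(enc.encode(m["content"])) for m in messages)

def should_start_over(messages, budget=150_000):
    return thread_tokens(messages) > budget
```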

SPEAKER_01:

That's a great analogy. I just wanted to interrupt you, because it's exactly like these LLMs. When you wipe the context, they lose all the information, exactly like erasing a human's memory in Men in Black.

SPEAKER_00:

Yeah, yeah. So when you're working with companies, because I know you do AI training and AI strategy assistance with companies, what are their greatest concerns about implementing AI? Is it the capabilities of the models? MIT released a study claiming around 95% of AI pilots fail, and I've always said, one, I don't think that number is true, and two, I think the people who try sooner, even if they don't know what they want to use it for, are going to learn how the technology works, so they'll be further ahead even if the project kind of failed. What sort of patterns are you seeing with the companies you're working with?

SPEAKER_01:

I guess the main problem is safety and privacy, not capabilities, because beginner users are usually struck by the possibilities and the capabilities. They can't induce a wrong response from the LLM because they don't yet know enough to do it. They paste in a document and it gives a great answer, so they're very happy with the capabilities; only later do they discover its limitations. Companies are usually nervous about things like, will the vendor train on my data? Or will private conversations or private company code leak out for everyone to see? That's the main question, I guess: is it safe, and is it private?

SPEAKER_00:

Right, right. The funny thing about that is, for me at least, it's not entirely clear. Say you're using any of the public models: with all of them, Gemini, Claude, whichever, you can share your threads with people. But if you keep them private, it's not clear what does happen to the data you put in. I'm sure it's learning from the patterns of conversations, but just for argument's sake, say you upload your tax return or something crazy, which, obviously, don't do that, listeners. What do they do with that data? Is it really transparent? Have you seen or read anything that describes what they do with it?

SPEAKER_01:

That's a great question, and it's a more general question than AI, because we all use cloud computing now. We all use cloud email, for example, and people usually don't care; they put full personal data in emails, and Google or Microsoft still has access to it. If Google or Microsoft wanted to use your data and just break their agreement with you, they could do it, and they do it all the time. That's the reality; it's a horrible reality, but it is. And AI is usually seen as a unique threat, but for me it's not a unique threat. All those services, like drive and email, can be leaked from, and you can do nothing about it. The only thing you can do is not use them and host your own email, and of course you can do that. You can also host your own AI; that's not a problem either, especially now that you have, for example, NVIDIA DGX hardware. You can have your own private, pretty powerful model. So I don't see it as a unique threat, because people already use the same providers and the same kinds of services with the same data; they're just more afraid of AI. That's why it's so distrusted, I guess.
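
For anyone who wants to act on the self-hosting point, here is a minimal sketch using the ollama Python client against a model running locally, so nothing leaves your machine. The model name is illustrative, and it assumes Ollama is installed with the model already pulled.

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# One chat turn against a locally hosted model; no data leaves the machine.
response = ollama.chat(
    model="llama3.1",  # illustrative; any locally pulled model works
    messages=[{"role": "user",
               "content": "Summarize this customer call transcript: ..."}],
)
print(response["message"]["content"])
```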

SPEAKER_00:

Yeah, it's funny. A friend of mine works for a company where OpenAI is banned internally; they're not allowed to use it for that same reason, data privacy concerns. And his argument is: you realize we have much more sensitive information in our hosted Salesforce, right? Deal sizes, financials, all that stuff is in there. I have this conversation a lot with, I guess, more non-technical people too. It's exactly as you said: think of any service, think of Facebook, x.com, Instagram. Whatever you're putting on the internet is fair game. And I think Sam Altman has said this a few times: if any of your IP is out there publicly, some language model is going to grab it. AI can't keep your IP private for you if it's public on the internet somewhere. And in conversations about whatever other hosted solutions you're using, your data's in the cloud somewhere too, so it's really no different. Maybe the difference is that if you cancel Salesforce, you can delete all your data, and you can't necessarily do that with AI. Although even then it's going to be in a backup somewhere. Yeah.

SPEAKER_01:

Yeah, and companies must retain customer data anyway. For example, when there's a criminal investigation, they need to hand over those backups. So the backups are there.

SPEAKER_00:

Yeah, yeah. I guess if you boil away all the noise and get to the signal, it really becomes a trust issue more than anything else. You have this perception that you're dealing with a cyborg or a robot, not a human relationship, and there has to be a certain level of trust. Whereas if you sign up for Salesforce, you trust their platform because they've been doing this forever, right? They've got good data security, all that stuff, so you feel your data is safe, even though every day there seems to be a password leak or a data leak or whatever. And I don't think AI has that yet; maybe it never will for people of a certain generation. And to disclose, I talk about this a lot: both my kids absolutely hate AI. They're both in post-secondary. My daughter is in art school, so she really hates it, obviously, with all the stuff you can do with art. My older one just thinks it's cheating; he only wants to use it as a feedback mechanism or to generate practice tests. There was a case where he was in class and the professor put something up on the screen and said, I know about 80% of you used AI to do that last assignment. And then, for a future class, some of those students used AI to write the apology letter, so he showed that too. So I don't know if there's a barrier, a line where a certain generation is just never going to trust it, because it's so different from the norm.

SPEAKER_01:

I don't know, to be honest. I don't know what will happen. Usually, when I look at young people, they're using AI more than older people in general. And the situation is really complicated, because there are very good reasons to criticize AI. Sensible people criticize what should be criticized: they say, okay, that's cheating, that's stealing, it should not be there. That's good. But the problem begins when people go overboard with it. They take good criticism, for example, big tech stealing data: yes, that's true, it should not be like this, I know it. But then they go overboard and say AI is useless. They extrapolate to such an extreme and absurd degree that it's hard to read. If you want to say that AI is useless, I think by now that's a delusional stance. AI also gave us things like character recognition, and I guess no one is crazy enough to say that character recognition isn't useful. And with LLMs specifically, given current adoption and the multitude of uses, if someone says they're completely useless, it's delusional, or completely dishonest. I don't know which one.

SPEAKER_00:

Yeah, because if that's the attitude going in, you're probably not going to take the extra care to figure out how to work with it to get better results, and you'll always latch onto the negative things. I was using it to try to design the front garden at our house with my wife, who's not technical at all. You'd ask it to do something and it would generate some really weird image, and right there, there's a negative sentiment: it doesn't work. Well, part of it is the image we put in, part of it is what we prompted, part of it is that the thread was too big. So let's just delete it and start over. And maybe some people just don't need it, so don't use it, right? If you're worried about privacy, stay off the internet, is kind of my view on the whole thing. But I do agree with at least one sentiment, and you mentioned it earlier with that first poster: there are always going to be some people who try to take advantage, on both sides of the equation. There will be people chasing the 15-minutes-of-fame cash grab, right? If I build a simple AI app using Lovable or one of these automatic builders and put a one-time $5 charge on it, even if I only get 10,000 people to use it, I've made some money pretty easily. We see that with the creator mentality. If you watch any TikTok or YouTube, you get creators whose goal is to post something that's either anti-hype or positive hype just to get eyeballs on content, because they're making ad revenue from it. That's always going to exist to a certain degree. But we're in the professional industry, right? I don't think any developer would just give AI access to their whole code repo and fully automate it; you always want a human watching, to a certain degree. So it's not AI or human, it's AI and human, but probably with fewer humans. I always hate saying that out loud, but you mentioned it earlier: companies aren't going to take their foot off the gas. They see this technology as faster, better, cheaper, right or wrong. And I think the best we can do, especially folks in my audience, change agents and agile coaches and change managers, is learn how to work with it.

SPEAKER_01:

It depends how you look at it. I'm very advanced in this; I was doing it when the first AI models were released through the OpenAI API, I guess DaVinci 2 and 3. Those were very early models, nothing compared to what we have now; GPT-5 is a complete revolution compared to GPT-3. And it depends how you look at it, because when you have something that can write code according to your plan, you're an architect. You must understand what should be there and how it should be realized, and then say: okay, now do it. I can even argue it's more demanding than normal coding. When you code yourself, you're mostly just doing syntax: you try to turn what should have business value into code. When you delegate the coding, you're left with only the architecture and the higher-level thinking. So I can argue: yes, there's a lot less coding, but we need much more architecture. And I want to do architecture more than coding. Maybe there are coders who just want to code and never touch architecture; no problem, you do you. But when you go from coder to architect, that's a promotion, not a demotion. That's my sense of it.

SPEAKER_00:

Yeah, yeah. It's almost like having a really good pairing partner. Whether you want to make that leap from coder to architect or just stay a coder, consider whatever AI you're using to be your pair programming partner, right? You're the driver, it's the navigator. It's not always going to give you the best output. I use it a lot just for feedback: here's what I'm thinking of doing to change the permission scheme on whatever, here's what I have set up now. Sometimes it gives me a really good technical answer that isn't relevant in my context, and I just ignore it, or I'll re-explain: this doesn't matter, for these reasons. Thinking of it as a thinking partner instead of an answer generator can be a pretty healthy mindset.

SPEAKER_01:

Yes. AI coding critics are my specialty, so I observe this a lot. Their process is usually: dream up some functionality, hand it to the AI, say do this, and then unilaterally judge whether the AI did a good job. And they judge it did a bad job: I would do it better, I would do it faster, it's useless. When you're advanced, you give it good context and you ask: okay, we need to do this, how can we do it, give me options one to three. Oh, I like three, but I don't like this one part of it. All right, so I clean the context: into a fresh conversation goes only option three with my corrections, only the relevant code and documentation, without all the brainstorming that came before. Only the concrete option with my feedback, and I prompt. And usually the model gets it right away: okay, this option, done this way, here, here, and here. I look at it, it's great, I copy it to the coding agent, and I go get my drink or something. That's much better work than sacrificing the next three or four hours to pure coding. Pure coding is also very hard to sustain: you cannot do eight hours of it, you'll be dead after two or three hours, maybe four if you're inhuman. So it's much better to delegate a coding task to an agent, if you know how to do it, than to grind through something difficult yourself for hours. Then you get an extreme productivity boost: less coding, more architecture. I think that's a great solution, and I never see a sensible, practical process like this from AI critics. It's always: it produces bad code, I'm done with it.
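
Here is a sketch of that options-then-clean-context workflow, assuming the OpenAI Python SDK; the prompts, the rate-limiting example, and the middleware.py file are all invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

def ask(messages, model="gpt-4o-mini"):
    """One call to a chat model; returns the text of the reply."""
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

# Step 1: explore options in a scratch conversation.
options = ask([{"role": "user", "content":
    "We need rate limiting on the upload endpoint. Give me three options "
    "with trade-offs. No code yet."}])

# Step 2: a fresh, clean context. Carry over only the chosen option, your
# corrections, and the code that matters; none of the brainstorming residue.
implementation = ask([{"role": "user", "content":
    "Implement option 3 (token bucket in middleware), with one change: "
    "read the limits from config, not constants.\n\n"
    f"The options, for reference:\n{options}\n\n"
    "Relevant code:\n" + open("middleware.py").read()}])
print(implementation)
```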

SPEAKER_00:

Yeah, and it's only going to give you bad code when it doesn't understand the context. I know we've talked about that a million times, but that's really the thing that matters: the context. It's like a new employee. You can't expect a new employee to be perfect after a couple of hours of onboarding training; a new human employee is going to make mistakes. And what do you do? You give them feedback, you retrain, you mentor, you pair. Do the same thing with AI. I don't want to get into algorithms and fine-tuning and all that, but just as a quick example... actually, go ahead.

SPEAKER_01:

Usually people think that when you get a hallucination, the fault is entirely on the AI's side, that it's the AI's problem. And I always ask why the hallucination exists. Usually you can get to a reason; usually you can look at your conversation and go, oh, I know why it went in that direction. For example, I paste some code, ask about it, and it hallucinates about the code. What happened? I look at the code and the code is duplicated: there's a big block of dead code that isn't functional, it's only trash in the context. Remove that dead code and it's all great. That's why I don't like it when people just say, oh, it hallucinated, it's bad, throw it out. I always ask why.
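
Marcin's dead-code example suggests a simple pre-flight check: scan a snippet for repeated multi-line runs before pasting it into the context. This is a crude illustration, not his actual tooling; the window size and file name are arbitrary.

```python
# Flag repeated multi-line runs (possible duplicated dead code) in a snippet
# before it goes into an LLM's context.
from collections import Counter

def duplicated_blocks(code: str, window: int = 5):
    """Return multi-line runs that appear more than once in the snippet."""
    lines = [l for l in code.splitlines() if l.strip()]
    runs = ["\n".join(lines[i:i + window]) for i in range(len(lines) - window + 1)]
    return [run for run, n in Counter(runs).items() if n > 1]

for block in duplicated_blocks(open("snippet.py").read()):
    print("possible dead/duplicated code:\n", block)
```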

SPEAKER_01:

Yes, and that's exactly what's usually missing: critics miss the curiosity completely, the why did the LLM do that. They just say, oh, the LLM hallucinates, it's bad, without even trying to understand it. When you understand it, the next prompt will be better, and that's the most important thing. That's why it hurts me to see it. If you look at my LinkedIn, I always try to be practical: do this, do this, don't do that because it causes problems. And usually when I see AI critics, they just say it's bad, don't use it. That's the most horrible thing you can do to a human: tell them, do not learn it.

SPEAKER_00:

Yeah, it's the danger on both sides. It's all the, hey, we've got a new multi-agent orchestration system that can replace your employees, which is not entirely true, and then, AI is always bad. It's almost the same conversation as every technology wave: there are always going to be outliers on the extreme ends, so find middle ground somewhere. I have a project manager agent that does a bunch of project-management-y stuff, and I've trained it so that every time it does a task, it tells me what it would have needed to do it better, in the context of its goals. Then every week or so I take that and update my instruction set. Maybe people think that's too much work, that the AI should just be able to do it magically, but it's hard; the learning curve for writing proper instructions is actually pretty steep, and I would never fully automate any new business process. Do you have clients who come to you wanting 100% intelligent automation with AI, or are they willing to have a human babysitter watch it for a bit, retrain it, and only once they're happy let it loose and make it autonomous?
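
A minimal sketch of the self-critique loop Jason describes for his project manager agent: one standing instruction that asks for a "what would have helped" section after each task, plus a helper that files those away for the weekly instruction-set review. The wording, section title, and file name are invented for illustration.

```python
# Standing instruction appended to the agent's prompt.
CRITIQUE_INSTRUCTION = (
    "After completing each task, add a section titled NEXT TIME listing "
    "what context, access, or instructions would have let you do the task "
    "better, given your goals."
)

def log_critique(task_output: str, logfile: str = "critiques.log") -> None:
    """File the NEXT TIME section away for the weekly instruction review."""
    if "NEXT TIME" in task_output:
        critique = task_output.split("NEXT TIME", 1)[1].strip()
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(critique + "\n---\n")
```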

SPEAKER_01:

I don't know exactly, maybe I'm in a bubble, but when it comes to clients, they're usually very much beginners in terms of usage. They ask me what can be done, right? They don't tell me, automate everything; they almost always say, let's test whether it's even useful here. That's the main sentiment, I guess.

SPEAKER_00:

Yeah, okay, cool. So as we get into the wrap-up, tell people where they can find out more about you and how to get in touch if they want to chat.

SPEAKER_01:

I guess the best place is LinkedIn, because I'm active there, and if someone wants to talk to me, I'm always open for discussion. I've even done a lot of meetings with critics; I just tell them, okay, let's meet up and I'll show you exactly what I'm doing and why it works. I'm open to doing things like that, which is maybe an original approach. But if someone wants to contact me, LinkedIn is the best place.

SPEAKER_00:

Good, good, awesome. That's very healthy. I think the more debates we have between the pro-hype and the anti-hype, the better: try to meet somewhere in the middle and forget the extremes. Your mileage may vary depending on how you use it. I very much appreciate the time, and thanks a lot for the chat.

SPEAKER_01:

Thanks a lot.