Thank you all for being here. I'm Andrew Webster with ExperiencePoint, joined by some of my ExperiencePoint colleagues and some very special guest stars. We're so grateful you're all here for ExperiencePoint's Make Your People AI-Ready webinar. You're in the right place if you want to advance your AI strategy through human adoption and optimization of the tools available to us. Jason, that question is actually from Gregory Perez, whom one of our guest speakers knows, so I'll give credit where credit is due.

So, ExperiencePoint. Our mission here, as many of you know, is to make experience a better teacher by making it faster, safer, and more focused. We've been in the change and innovation game for a long time now: change for over thirty years, innovation for about twenty. When you're playing at the cutting edge of innovation and change in today's environment, by necessity you must also be playing at the cutting edge of AI, and our role there is adoption. With experience being the best way to learn, and with the help and reach of many of you here, we've been able to craft experiences for over half a million people in the thirty years we've been operating, building innovation, change, and AI capability around the world. So, a smattering here of some of the folks we're thrilled to partner and work with. Thank you all for making our shared impact possible.

Today's webinar is organized around three big adoption challenges, and we're proud to have a guest come in and illustrate each with a story. As we move through, Q&A is reserved for the end. If you have questions, drop them in there; if we can't answer them immediately, we'll do our best to get to them at Q&A or follow up with you afterwards. We begin with Building AI Confidence.
We're thrilled to have Chris from The Economist join for that piece. Then we'll look at Identifying AI Use Cases with Azadeh from Pipedrive, and Tom from ExperiencePoint is going to share a story about Aligning on AI Strategy. We'll also have some polls throughout, so we'd love to hear your thoughts that way. And as we move through, we'll be sharing some pro tips. Those pro tips will also be shared afterwards as a PDF, a more robust kind of tool for you as well.

Why don't we get started with our first poll? The results are anonymous, but we'll be happy to share them with you afterwards through social media. Which of these barriers to AI confidence, since we're talking about AI confidence now, is most pronounced in your organization? Great, responses are starting to roll in. Please keep them coming. As an aside, one of the things that has shifted for us recently: eighty percent of our top clients identify skill gaps and unclear use cases, rather than trust concerns, as the primary barriers preventing adoption, which was not necessarily true a year or two ago. I'm going to close the poll. Thanks to the many of you who got in there, and let's see these results.

So, we've got a pretty even distribution, even to the point of being uninteresting, but probably also meaningful in telling us that we're all at different parts of the curve. Lack of practical, role-specific capabilities: interesting, and that definitely tees up what we have next. Unclear guardrails or governance: we saw a bit of that in the chat earlier, where to start, where to explore, what tools we do or don't have access to. Thank you all. And now it's my great pleasure to introduce the Building AI Confidence portion by introducing Chris Clarke of Economist Impact.
Chris is currently Head of Content and Conference Production for EuroFinance at Economist Impact, which is the events division of The Economist. Chris has over twenty years of experience and leads the development and delivery of large-scale conferences for the corporate treasury and cash management community. I've had the privilege of seeing Chris in action, and being part of leading an event like this in front of a pro like Chris is, I've got to say, a bit of a nerve-wracking experience. Thank you for joining us, Chris. I wonder if you'd be so kind as to tell us a bit about Economist Impact.

Absolutely. Thank you, Andrew, and hello, everyone. Really pleased to be with you today. As Andrew said, Economist Impact is part of The Economist Group; it's the B2B division, so we deal with policy research and insights as well as our events. In my role with EuroFinance, or the brand EuroFinance CPI, I look after the content and conference production across our finance-focused events. Our audience is made up of corporate treasurers within global organizations, both in Europe and the US, their banking partners, and also their technology providers. As an events organizer, it's a bit strange for me to be sitting on this side of the fence presenting today, but it's a nice change, and as I said, I'm glad to be joining you.

Well, thank you. It's so exciting to see you at work. You and your team have such a deep understanding of the folks you serve, those treasurers of world-leading organizations. So I wonder, what does AI adoption look like across that user base of leading treasurers?

Looking at some of the tensions that came in on the chat as well, the short answer for us is that it's very varied, but I think that's the case across many industries. It is growing quite rapidly, though.
I always say that treasurers by nature are quite risk-averse. They're dealing with sensitive financial data, obviously looking at the cash flows of organizations, so any adoption of new technology or new platforms has to be quite gradual, and it is. When we did the event, which was nine months ago now, not that long ago, but in the world of AI probably quite a long time, we polled the audience, and only 13% of them were using AI daily, professionally and personally. But I know that's been changing quite a lot over the last few months. Even at the end of that workshop, we did another poll, didn't we? And that number was 70%, so obviously we did a good job of building their confidence throughout. We've run events since then, and we've started to see a lot more good case studies; we've been lucky enough to have some of them presented on stage to our audiences. So adoption is definitely speeding up quite rapidly, and we're already seeing agentic AI from those really tech-savvy treasurers who are making good use of it.

Fascinating. We'd love to hear, if we have a moment, whether there's a way to genericize any of those use cases and what they look like; there were some inspiring examples shared at the conference. Considering treasurers, what sort of challenges and opportunities does GenAI present to a leader like that?

The initial challenges are probably easier to begin with. The initial fear, or challenge, as I probably said, was definitely around risk and security. They're dealing with very sensitive information, and they have to be really careful about access to it and where it's going.
I don't know if people on the call have this in their organizations, but having certain versions of AI with the guardrails on and a bit more security has definitely helped and given them more of a sandbox to play around in. Now we're hearing more about the accuracy of the data going in. We're hearing the phrase "garbage in, garbage out" quite a bit, and that's not necessarily referring to the quality of the data itself. A quick, niche example: we've seen treasuries using AI for cash forecasting, to see what their cash flows will look like in the future. The data they're putting in to make those forecasts sits alongside what are almost, well, definitely black swan events: COVID, tariffs, and obviously the wars in Ukraine and most recently Iran. So they're starting to question how useful the results will be going forward if they're using data from these quite significant past events. That's definitely top of mind, and something they're monitoring.

In terms of opportunity, we're seeing some really great examples across many facets of what they do. I've mentioned cash forecasting; there are some really great platforms out there, and I know some treasurers are building their own as well. There are good uses in fraud and anomaly detection. Foreign exchange: as our treasurers move money across different entities and different parts of the world, foreign exchange costs can be quite high, so being able to hedge against those and predict where the exposure will occur, AI has been really good for that as well. And we've seen recent examples of agentic AI monitoring accounts receivable and those smaller, short-term cash forecasts.
So, yes, a lot of niche, specific examples, but there are a lot of good opportunities out there as well.

A curiosity: some of these use cases you've shared are niche, and some are very inspiring, advanced things. I wonder about the degree to which you're building AI adoption capability not just across an organization but across a community, and the degree to which you inspire versus intimidate with some of those very future-facing things. It must be a unique challenge in a seat like yours. Could you elaborate a bit on your approach to raising the level of this community doing essential work globally?

Absolutely. The initial idea we had, because there weren't that many real examples within treasury, was to run workshops with experts from outside of treasury, and obviously we worked together on that as well. This is something we have done across a number of events. This was a three-day event, and on the third day we look to do something a little bit different. The treasurers have spent two days deep in the weeds of treasury topics like cash and FX, so day three is a good opportunity for us to do different things. We've had leadership classes; we've had other inspirational things as well. AI, at the stage of adoption it was at, felt like a good opportunity for that. This is about building confidence, so we did start quite small, even in that specific workshop. We worked in groups, learning from each other as well. But with that low familiarity, we did start with some quite basic commands.
We had a very low entry point based on what people use on a day-to-day basis, just to try to get them thinking, using personal examples, about how this might be implemented in a professional setting. The good news is, and I shared those results, that was relatively quick. They're a small audience, and once you give them some guidance, they tend to pick it up quite quickly. But what was really exciting was what we were able to do in stage two, and this was just a morning workshop. Once we'd provided them with that framework for how they could and should be using generative AI, we started looking at different groups working with different niche examples. We put together twelve potential cases where it could be useful. Again, quite basic, and it might sound complicated when we're talking about things like FX and cash forecasting, but the same principles apply. We gave them time and guidance, alongside yourself, to work out exactly what they would want from something like this and what they think they could use it for. And the feedback was great, wasn't it? We had groups sharing what they'd managed to put together in just that short period of time: some quick results with some good efficiencies. But the feedback we got that was most pleasing, I think, was that it opened their eyes to what could be possible. This thing moves so quickly. Many treasurers have told us, in their feedback and since, that it's given them the confidence to start using these tools and see what's possible.
They can see that they can work it out for themselves a little bit once we've given them those building blocks, and really shepherd their teams in certain directions to start putting some really cool systems in place.

Thanks for sharing. The way you got them started with some simple examples, and then the cases they worked on were the things they're all concerned about, and you gave them the choice: here are some things we can work on, and you choose what you're going to work on. It sounds like that's an important ingredient.

Absolutely. We'd done the research with them beforehand on some of the challenges they're currently facing, taking AI out of the picture, and it was about helping them see how AI could help them solve the challenges they already have. One of the questions we get at events, from those who own those experiences, is: what problem is this trying to solve? So it's really great to be able to give them real examples of how it could be solved and show them a starting point from which they can do that.

Super cool. And serendipitously, I heard this week a leader we work with, our leader James Chisholm, describing how a lot of people are out there investing in asynchronous courseware, learning about the tools of AI and maybe even having some use cases presented that way, for tools like Copilot or Google's, which are available to anyone. Those are pretty good at checking the box of "I know": I've been through that, so now I sort of know what these tools can do. But they rarely, if ever, help people with the "we can." And it strikes me that The Economist, in the events you run, may be world leaders at bringing people together for the "we can."
So it wasn't just a person engaging; it was people engaging. I wonder if you might elaborate for a moment on what kind of magic that creates.

I think one of the biggest things for us, and a plug for events like ours as well: it's sometimes difficult to test within an organization, and I think this came up again in some of the tensions, there can be a fear of failure, of what happens if a mistake is made. Speaking specifically for this example, we were able to provide a really safe place where they could ask the programs anything they wanted. It didn't really matter if what they put in, or what came out, ended up being a bit weird or funky, because that's okay; they were all learning together. That kind of environment really helps with these things, whether it's hackathons or tutorials, learning hands-on. I think that's the best way to do this, and definitely without fear of making mistakes while learning on the job. That's a real benefit.

Thank you. We might have time for one or two questions for Chris if anyone has one they'd love to ask. Chris, you said to make a plug for events like yours. I also know there's a publication being released tomorrow; anything we should be on the lookout for?

Well, there's a lot in the news at the moment, which I think is probably taking up many of the headlines we've got, and that is definitely outside my area of expertise. So I'll leave it by saying it would be good to just pick up a copy.

Pick up a copy tomorrow for your clients. Chris, on behalf of ExperiencePoint and the community that's joined here, we'd just love to thank you so much for giving your time and expertise, and sharing so generously. Thank you very much.

No problem.
Thank you so much for having me.

Great. Now I'll just wrap things up on building AI confidence. One of the pitfalls, and again, we'll share this in a more elaborate tool towards the end: oftentimes people hesitate to use AI, apply it cautiously, or in some cases avoid it altogether, especially in high-stakes work. We just heard a great example of treasurers doing incredibly high-stakes work. One pro tip here, one thing to keep in mind just to get started, a bit of a teaser: treat AI as a capability to build, not a tool to deploy. We saw that mentioned a bit in the tensions earlier as well. If we were to leave you with one thing to think about when building AI confidence, it's that AI is a capability to build, not a tool to deploy.

We'll get one more poll up here. If you'd be so kind as to share: when you do have AI initiatives in your organizations, where do they most often stall? The responses are coming in quickly on this one. I feel like people were ready for this question, or have experienced it viscerally enough that it's top of mind. Thank you for sharing. I'll give it one more moment. Where do AI initiatives stall in your organization? We actually saw some of this forecast when we shared tensions in the chat a bit earlier. I'll close this poll in three, two, one, and let's see the results. Thank you for the coaching there. For most of us, it looks like it's when moving from pilot to scale; indeed, we did see someone mention that in the chat earlier. We've done some super interesting things at the pilot stage; now how do we scale that capability or solution? "After initial excitement" got several responses as well, which doesn't surprise me.
And "when trying to define business value": that can be tough, because what value looks like, how we achieve it, and the kind of value we provide can all differ. Thank you so much. I'll stop sharing now.

I'm going to move us into our next section, which is finding AI use cases, and joining us will be Azadeh. Quickly, and I'm sorry to embarrass you a bit here, Azadeh: I've been fortunate, as all ExperiencePointers are, to work with incredible organizations, and I've been fortunate to work this year with an incredible unicorn organization called Pipedrive. More on Pipedrive in a moment. One huge inspiration for me in general leadership, in innovation, and especially in pioneering AI use cases has been Azadeh. Azadeh Pak, VP of Product at Pipedrive, defines product vision and leads the development of their next-gen AI-native CRM. She has experience in both B2C and B2B, so experience across the user base, and she's passionate about driving innovation-led growth, building teams, and delivering customer value. Welcome, Azadeh. I wonder if you'd be so kind as to tell us a bit about how you got to where you are today.

Thank you. That was quite an intro; I really appreciate it. As Andrew said, I'm Azadeh. I've been a product manager for fifteen years, and I really loved what you said about inflection points of innovation and change; I feel like that's the story of my career. I started in media publishing with big newspapers like The Times and the Financial Times, who were going behind the paywall, and we were looking at how you grow subscriptions, how you move from ad revenue to subscription growth. From there, I went to Expedia Group, where I led teams across growth, acquisition, and retention. There, the first big inflection point was small screens: mobile apps came out.
How do you fit a web experience for booking holidays into an app, and how do you reimagine that? Then, more recently, it was about machine learning and eventually AI: how we bring personalization and those capabilities into the experience for the user and really build that direct relationship with the customer. Anyone who knows those big travel brands knows the biggest cost is acquisition, and if you can build that direct relationship, it's a big game changer. Most recently I've moved to B2B, and this is Pipedrive, and it's a really exciting place to be. This is where AI is really making massive changes; it's another inflection point for us, a technology that lends itself really well to the SaaS space, and I'm working on zero-to-one product here at Pipedrive. So very happy to be here.

Thank you. And please, can you share with us a bit about Pipedrive?

Yes. Pipedrive is a very interesting company, an Estonian unicorn, as you said. It was founded in 2010 by five Estonians who, the best way to say it is, were really frustrated with existing CRM tools designed by people who didn't really sell. So the tagline, the ethos from our founders, is that Pipedrive is a sales CRM built by salespeople for salespeople, and that's been a really successful approach to how Pipedrive has been built. Today it's a global company with a hundred and ten thousand customers across a hundred and seventy-nine countries. It really is a product focused on the SMB market, the small business market, and, again, very focused on helping people sell; it is a sales pipeline. And our ambition continues to be the world's best-loved and fastest-growing CRM, keeping the ethos that we support salespeople.

I like it: an incredible growth story, with the North Star being most loved. Putting users first; great stuff.
So you were selected within Pipedrive to lead a special team with a very ambitious mandate, and I wonder what you can tell us about why this project was established.

Yes. I joined Pipedrive with that mandate last year, and it was a very exciting space to be in, almost a blank page to rethink their approach to AI experiences. There had been a year-long investment in AI, and we had reflected on what was achieved. There was lots of good progress, lots to be proud of: we'd validated some early use cases, the team had been upskilled, there was a lot of investment in tools and capabilities, and a lot of foundational capability built. But the one thing we were missing, and I heard some of this come through in the polls and conversations, was the customer traction we were hoping for. The challenge was that the approach had been about layering AI onto an existing product, patching, putting that little star icon here and there, but not really thinking about the customer experience and what the workflow is. It was fragmented, and it felt brittle. So we really wanted not to patch but to leapfrog, and think about what the net value to the customer is. What is going to make them use more of our AI capabilities? I really loved how leadership gave us the space to pause, reflect, and approach the problem from first principles. And when you say first principles, that's where ExperiencePoint came in; we worked very closely with you. It's about how we take the team back to understanding the customer pain points. I think Chris touched on it: AI is an enabler. It allows us to tackle hard problems in a faster way, and sometimes impossible problems we wouldn't even have tried to solve before, we can now do technically with AI.
But the fundamental is that pain point: what are we trying to do? That's how I came to this project, really thinking about whether we can build a new product and bring that moment to our users.

Thank you. One of the things I'm sure people are noting down right now: a lot of the folks in the ExperiencePoint community work with leaders, coaching them and building leadership capability, and you shared how leaders created that opportunity and that space, so I know everyone's curious about that. And when given that space, we want to create real value for the people we serve. Why did you choose to start where you did? I'm sure there are limits to what you can share, but what can you tell us about why you started there?

For us, there were two things. One is that we wanted to focus on the sales workflow. We'd had a lot of feedback and research suggesting that customers had lots of point solutions, and a lot is already commoditized in the AI space; what they were asking us to solve is a workflow problem. So we wanted to make sure that where we were starting is where our sales teams are working, and what that workflow looks like. The second thing is that while the ambition is to really rethink our CRM as AI-native, we didn't want to go full throttle toward that end before we understood what the value was. So we wanted to start narrow and deep, with a problem space that then created those flywheels for building more slices of value on top. And when we thought about sales teams, and particularly sales reps, we started with one persona and really tried to be focused. It was about a lot of different context, right?
Think about a sales rep in their day, when they're preparing to meet a client: you've got emails and previous calls, you've got your data in your CRM, you've got stuff in Slack, maybe stuff in Notion if you use other tools. A lot of the effort and the task, and we had good data on this, fifty-one percent of their time, went on admin: essentially getting the context out of all these different sources. But also, once they'd done that meeting or that call, they had to input the data back into the CRM. It was almost like our salespeople were working for the product rather than the product working for them. So we wanted to start with a narrow problem that had those flywheels, those unlocks for the user, but also unlocks for us in terms of building more verticals on top. And so we started in what we call the omnichannel communication space.

Can you define flywheels for the rest of us, please?

Yes. A good example: if our system is able to capture the data for sales reps automatically, the CRM becomes more data-complete. Once the CRM is more data-complete, then off the back of that data we can build other products, for example coaching, sentiment analysis, or scores. Every cycle enriches the experience and allows us to build better functionality on top. That is the flywheel effect of data in and data out.

Right, thank you. In what you just shared, very clearly, you're even quoting user data here: understanding your user was foundational to what you were doing. And the way you described it: not just "hey, we're AI-native now," but "let's follow one use case." That feels really useful for the rest of us. In design thinking, we often call that the beacon project.
One group working on something meaningful, so you can surface lessons. On that omnichannel project, could you also share a bit about something that stuck with me: the "wow" and the "how," and that mandate?

Yes. That really got to the point that we wanted to create an AI-native product with wow factor for our customers, but we also wanted to be AI-native in how we approached the problem. So the how and the wow really came together. The mandate for the pilot was not just what we define and build to get to PMF, but the playbook for how we build, to drive organizational and cultural change within Pipedrive, within our products and technology organization, so that we could set the blueprint for how we would build in the future. That was a big part of how we approached the problem.

Great. I remember being in the room when the team, building in this totally new way, was sharing how some of their traditional rituals, how we acquire metrics, how we measure, all of that was going to be turned upside down. They weren't quite lamenting, but there were big questions in the room. And I remember you stopped the conversation and said, hey, we're not just creating solutions for our clients to help them automate aspects of their lives; we must also be automating our own ways of working. I wonder, how has the team been responding to that kind of challenge? How has that been going?

It's a really important point, because the previous year there had been a lot of investment in training. We got very robust training for the team to understand AI, learn how to set AI strategy, and build with AI tools. We did hackathons and some of the things Chris was describing, to really make people comfortable with AI tooling.
But the reality is that it's only the practical application of these tools to day-to-day workflows, the way the team works, that really brings it to life. That's what we were trying to instill in this team through this workstream. The one thing I can say is that we were a really small team: three engineers, one product manager, one designer. And actually the limited resource was a really great constraint, because the team had to use those tools to scale themselves, to do things faster, to build and drive those things. It gave them a forcing function to lean on these tools. But more crucially, they had very similar challenges. We're talking about the sales rep's challenge of so many channels and so many different contexts that you're trying to quickly distill into something. Well, the team had a very similar problem: there's Confluence and Slack and all these channels, and all these different updates they have to create. It was an opportunity for them to apply AI to a very adjacent problem of their own, to build empathy for what they were going to build with the customer. So in the setup we had forcing functions that took AI from a tool you read about or play with in a course to something you're practically applying to your day-to-day workflow.

Right. And as you said, a resource constraint: putting people in a situation where they can't do things the way they traditionally did, so they have to be creative and apply this technology. I know there are a good many creativity experts in the group here who appreciate that resource constraints lead to abundance in creativity. One final question for you: how is the organization going about adopting some of the practices you're pioneering?
One of the things we did from the beginning was to have a very open-door, very transparent policy across the team. My team was creating these AI toolkits, for example building synthetic personas to do assumption validation. After our session, Andrew, we even built an ideal design thinking persona so that they could actually validate a hypothesis based on your methodology. They created these things very quickly. But then we made them into a toolkit, and we made that toolkit publicly available to everyone across product and tech, whether they were on this team or not. We had open demos, and demos are always a hard thing because people are trying to put their best foot forward. For us, actually, we were just like, this is what sucks about what we're using and what doesn't work. That transparency of sharing what's working and what's not working really helped us operationalize this. And then I think when the other teams started seeing how it was unlocking value for this team, who were resource constrained and trying to move fast, we saw some of that culture propagating. Now, I wanna be real here. We have a wonderful discovery phase with zero-to-one teams where we're able to do some of this stuff; then you get to delivery and people are heads down. And actually, when your head's down, you can fall back into, we just need to build, and maybe you go back to those old practices. It takes a lot of reinforcing, a lot of pausing, retros, reflecting on data, and reporting on these things to encourage that behavior and keep that momentum, because there are pressures, and you need to move fast, and often people can fall back to old practices.
But I think for us, we're definitely seeing, at least in areas like operations or research or even in coding, adoption rates way over sixty percent now in everything we're doing on AI. So it's really embedded. The question is really how we now get it across the rest of the organization. Yeah. That's really interesting. So in sharing demos: hey, not just the cool stuff that we're building, but what's not working and how we're building it. That feels important. And also, as you mentioned, everything's not perfect, and you can't just trust momentum. It sounds like there's continuous investment, and you as a leader are putting in those kinds of management structures to make sure that we're checking and keeping the accelerator down so that we don't lapse into old behaviors. Absolutely. And I think the other thing is you have to make time and space for learning, because the technology is moving so fast. You can adopt one thing and it's already out of date. So making sure that we are consciously leaving time for the team to research as well as adopt and use is really important. Yeah. That sounds like an essential structure as well. As with Chris, we could listen to you all day. There's so much to learn here. Thank you for sharing so much and so generously. Such a treat. It is great to see you. Thank you. And I am going to share my screen again here. So for finding AI use cases, just a quick wrapper on this. Something I'd love to share with you all, as I mentioned, is first principles, and I'll accelerate us through this very quickly; we have one more story to share. One of the things we do with teams, and that all of you can do, is a portable framework: think about first principles, about what value we exist to create.
Just ask: what's the work of my team? If we do it poorly, or we don't do it at all, what are the consequences for the people that we serve, our users, or the business? And if we do our work just brilliantly, what does that mean for our users and the business? If you have that sort of exercise with your team, you can start to assess and create first-principle value statements. Then you can look at the value you create and consider what AI has made easy or abundant. Think of translation. I'm fortunate I get to do some work in China, and for years I would always have a translator with me delivering workshops. Now I have an earpiece, and translation is instantaneous through AI. That's become abundant. But there are other things where AI has not yet made it abundant; it remains scarce. In fact, there are some things that AI itself has made more difficult, and more valuable. So if you look at your team: what are the activities we engage in, and where are we spending our time? It's a team assessment. What has AI made easy or abundant? What has become, or will become, scarce? It's even a way to gauge users' expectations. Then look at the left side and the right side: where have our users' expectations or quality of life increased on the left side, and where should we be engaging AI for more automation there, with more humans at the helm on the right-hand side? It's even a way to look at the valuation of an organization. So it's a constant question, an activity you can actually engage in with your teams: what has AI made easy or abundant, and how can we automate there? What remains scarce, or has become harder? That's one starting place to consider where a use case might be for your teams and your organization. So, finding AI use cases: a common pitfall. AI efforts can often feel scattered. Teams are experimenting, but value is unclear or momentum stalls.
So just one pro tip: if we wanna redesign workflows intentionally, we might consider what value is abundant and what is scarce, where we need to automate, and where we need to lean into creating a new form of value around that scarcity. Great. Now, we will have time, I hope, to get to those of you asking questions at the end. But one more poll for us here. Thank you. So, how aligned is your organization around AI priorities? Wherever you are in your journey, how aligned are folks, from highly aligned to maybe we don't even have a strategy articulated yet? A couple more seconds here. And three, two, one. We may not find this surprising. This is definitely consistent with my experience: executive clarity, folks doing a lot of thinking and learning there, but it hasn't necessarily translated to the frontline. One of the great change leaders we work with mentioned how a lot of executives can talk about what their change strategy is, an AI change strategy, for example, and expect that if they give that messaging to managers, it just cascades down as perfect messaging. But there is some translation that needs to happen. Managers need to contextualize for the parts of the organization they work with. So, for our last guest voice today: let's say the organization is on its way to building AI confidence, and work is happening. You've got some use cases. You can't necessarily expect your AI strategy, including adoption and further exploration, to thrive without some alignment and some change principles. So now it's my pleasure to bring in my colleague, Tom Merrill. Tom is a seasoned innovation and change catalyst with EP. He's done his tour of duty in academia and in leading an innovation lab outside of ExperiencePoint. He's worked with many Fortune one hundred organizations.
You're gonna hear a story about one in a moment, and he plays on that cutting edge of innovation and change, bringing both to the adoption and acceleration of AI strategy. He's kind of our rock star, and I'm compelled to call him a rock star because of his energy and accomplishments. But his doctorate in music was not in rock music, I don't think. Tom, correct me if I'm wrong. So please, Tom. Yeah. Thanks, Andrew. I could absolutely put you all to sleep with a discussion about early twentieth-century Italian serial composition techniques, but I think everybody here is a bit more interested in hearing about AI alignment strategies. So let's dive into that. As you mentioned, I recently had an opportunity to deliver a workshop to a Fortune one hundred company in the tech space, and we started by imagining one AI solution that each person in the room might wish to implement tomorrow. We then offered a few thought starters for consideration. One of the participants focused on this idea: what if we used AI to analyze partner feedback in real time and deliver insights directly to our teams? His colleagues in the room, after everybody had submitted their answers, immediately saw the potential, because it provided a better understanding of how partners were feeling, their sentiment. It enabled faster response to issues, and it had the potential to forge stronger relationships with their partners. Everybody agreed it was a great idea, so they moved forward into the next step of the workshop. That next step was to imagine rolling this change out across the network. The crucial moment was when we gave participants an opportunity to imagine the change from the point of view of the people who had to create or implement or manage it, to have empathy for the people who were involved. And then something pretty interesting happened.
When the participants asked these questions, who needs to change behavior to make this work, why would they participate, and what barriers will they face, the conversation changed based on the empathetic point of view of each stakeholder. Partners might ask, how do we know our feedback actually matters? The engineers might ask, who owns building and maintaining this? And leadership, of course, always wants to know how we're gonna measure impact. Suddenly, it became clear to everybody that the challenge wasn't about AI anymore; it was about alignment. And that's the moment many of the organizations we work with discover something surprising: they don't actually have an AI technology problem. They've got an AI alignment problem. This progression from idea to behavior to barriers illustrates a real challenge. AI transformation is not primarily about technology or algorithms or even tools. It's about aligning people around purpose and incentives and, especially, around behaviors. When Azadeh was speaking, I was thinking: Andrew, we talk so much about skills and conditions. You provide people the right skills, the right training. But then you send them back to their space, heads down, and the question becomes whether the conditions those people were going back to were in support of those new skills. In our experience, the organizations that succeed with AI won't be the ones that roll out the most tools. They'll be the ones that are able to achieve strategic alignment with the most people. Our pro tip here is to stop rolling out AI and instead start handing it over. And that's a very short story, because I know we want time for Q&A at the end. Andrew, back to you, my friend. Yeah. Thanks. Thanks so much for sharing, Tom.
And I'm sure many of us have had experiences that make that resonate, where we have seen leaders falter on the alignment piece. So, a summary of the three challenges here: building AI confidence, finding AI use cases, and aligning on AI strategy. Probably these things are happening concurrently, but especially if our early AI use cases are treated as beacon projects where we're learning some things, some strategy will probably be emergent from that. So there is a sort of sequential nature here. In the tool that we'll share, there will also be a bonus piece: the scaling of AI adoption. For AI adoption, a common pitfall is that the organization is not yet anywhere near what you could call AI native, working in an AI-native way, doing things habitually with AI. So the pro tip teaser for us now: we want to treat AI as culture adoption, not technology adoption, because it affects the way that we work just so much. So for our Q&A here, I wanna honor some of the questions that came in a little earlier, beginning with Becky's. And I might draw on you, Azadeh, if you're comfortable, to help on this one, because you have this real example: how do you create capacity to experiment? Yeah. It's a really good question. I wanna go back to the culture question. Even before we kicked off our AI work stream, we had this artist-scientist framework at Pipedrive, which was anchored on experimentation as the way we operate. For zero-to-one teams, when you're a small innovation team, that's kind of the culture you set. Initially, when you have no customers, it's assumption validation; it's a different type of experimentation. But as we scale the team, it's our ability to take high-risk bets very cheaply. The other advantage of AI is that you can build these experiments much faster, things that took weeks or months.
Now you can build in a few days with Claude Code and other things and get those experiments live today. So it's very much how we set up the team rather than trying to find a specific way to do experimentation on the side. It's how the team really operates fundamentally. Great. Thank you. I also wanted to add to this. For experimentation capacity, some people believe we need to extend the project. If we do rapid experimentation, it's not necessarily true that we're adding time. You are building in smaller blocks, experimentation blocks that serve as your validation or learning to open up the next steps. So if we're baking experimentation into our implementation process, we don't necessarily need to add time to that implementation process. Jason, thank you, asks: would love to hear more about how to create space for folks to share learnings on their experiments. So, kind of a build there. And, Azadeh, I'm gonna tag you again, if you'd be so kind. Yeah. I mean, there are a lot of traditional forums. Demos are always a really good place for sharing experimentation. One of the things we started getting the team in the habit of was prototyping and sharing experimentation with AI and then disseminating that through different forums, whether it's a peer-to-peer sharing session, whether it's in demos, whether it's at All Hands. I don't think there's a best practice here; we're also learning, but we try and leverage as many forums as we can to share how the team is using and experimenting with AI, and also what they're liking and what they're not liking. I think that's equally important, to let people voice that. Thank you. Oh, I saw the jazz hands there. I'm going to share my screen in just a moment, and we will provide a practical takeaway for you here. We're also going to share out a survey.
I'm gonna get back to some of these questions, and if people have time to stick around, we can elaborate on some more responses here. But just wanted to say, before we get to the hour, I know folks will need to drop soon, and we really appreciate you being here today. If you've got a minute to spare, we'd be grateful if you could fill out this short survey. Your input is gonna help us shape future sessions like this around what leaders like you actually care about. Once you submit the survey, you'll receive that practical one-pager as well. It's quick; treat it like a diagnostic tool to help you spot some common AI adoption challenges and what to do about each. Each of you who attended today will also receive the webinar recording and some additional resources in the coming days. So if you check the chat, you'll be able to see that. Thank you. I'm just gonna stop sharing here. If folks need to drop, of course, we understand, but I wanna honor some more of these questions. So on that question, just another build, Jason, on creating space for folks to share learnings on their AI experiments. A few years ago, you'd hear a lot of leaders say, failure is celebrated around here now; failure is cool now. And then there was another movement that followed: failure is not that cool. It is the lessons that are cool. Right? And so we talk a lot about, and Tom even mentioned, conditions that support behaviors. A very simple, practical way to think about encouraging learning through experiments: if you're a leader and you have one-on-ones or team meetings with people, and you're not asking people what lessons they've learned through trying new things or through failure recently, then you shouldn't expect people to be trying enough or extracting enough lessons. So that's a very simple thing everyone can do: ask questions.
Not just what did you work on or what has succeeded, but what lessons have we learned. So, thanks, Steve, for the question: how do leaders get convinced that AI is not a complete replacement for employees and SMEs? They all seem to think they can just cut people and save millions because AI will do it all. AI will do tasks, and more and more, AI is becoming capable of doing workflows. That's one of the reasons a lot of people right now are thinking of AI as replacement technology: we've got the things that we do right now, and AI will do those same things for us. That's replacement thinking. One way to lean into what I called the right-side stuff earlier, to help people think differently and apply themselves differently, is reimagination thinking. Yes, AI can support people and SMEs within organizations, but how are we reimagining new ways of working? While it is true that AI will replace some ways of working, it becomes our responsibility to reimagine workflows, ways of working, ways of achieving new things, and the things we can achieve. And in terms of how to get leaders convinced, one way to get started is to go back to your first principles: what value do we exist to create, and what of that value becomes a little scarcer with AI, so that we need to direct the energy of our SMEs into it? Next, suggestions around building AI with third-party startups and mitigating the risk if they don't scale. I'm gonna pause on that one and see if we can get some thoughts from the group. I don't wanna overstate my expertise there, and maybe we can share some follow-up thoughts. And, Max, any best practices for how teams might use AI to roll out AI change management? So, Max, we are in a time when, if as leaders we're engaging in leadership activities without drawing on an AI wing person, that's probably irresponsible. So what are the things that we do as change leaders?
We are connecting with people individually. If we consult with an AI on some messaging we're planning to share about why the change needs to happen, we should probably get feedback from an AI before we get feedback directly from people; as Azadeh shared, they do some experimentation with sample or simulated users. One very easy way that AI shows up in the change workshops we run now is to help us elaborate on our force field analyses and to help us think through our messaging differently. We are just about at the hour, so thank you, everyone. Oh, one more question: how to measure success, if the goal is to build a more innovative culture within a large organization? I love this question, Eric. For those who have to drop, thank you. Thank you. Thank you. Copilot even recently released a dashboard so people can see things like daily usage, or whether they're coloring outside the lines in terms of governance. But that is really about AI usage and not necessarily about value created. So if we're thinking about a more innovative culture, Eric, we probably want to think about how we measure innovation, with AI as a means to an end there. For innovation, at one level, you have your lagging indicators: dollars made through new offers, new customers acquired, and perception impacts like net promoter scores, that sort of thing. Then you can look at some of the leading indicators that ladder up to those lagging indicators, and those would be things like new behaviors: people trying AI, not just using AI, but using AI for certain workflows. Alright. Thank you. Thank you. Thank you, everyone, on behalf of ExperiencePoint. Thank you, Chris. Thank you, Azadeh. Thank you, Tom. Take care.
Explore this 60-minute webinar to hear how leading organizations are overcoming the most common barriers to AI adoption by treating it as a people challenge, not a tech rollout. You’ll hear directly from leaders at The Economist, a hyper-fast-growing CRM company, and a Fortune 100 tech giant as they share their firsthand experiences and what they did differently to make adoption stick.
By the end of this webinar, you’ll learn how top organizations overcame three main AI adoption challenges:
Building AI confidence
Finding AI use cases
Aligning around AI strategy
This session is valuable for leaders driving AI adoption in the real world, strengthening talent capability, and aligning stakeholders across L&D, HR, IT, Transformation, and business-unit teams.
Hosted by ExperiencePoint’s VP of Organizational Innovation, Andrew Webster, this session features insights from our clients:
Chris Clarke, Head of Content & Conference Production for EuroFinance at Economist Impact, the events division of The Economist Group
Azadeh Pak, VP of Product at Pipedrive
Why Your AI Adoption Strategy Is Failing: Learn 3 proven strategies for successful AI adoption by giving employees agency, equipping managers, and creating two-way communication.