What boards want from finance AI strategy: Joyce Li

Joyce Li, CEO and chief AI strategist at Averanda Partners, brings a rare combination: CFA charterholder, computer science graduate, MBA from Wharton, and board advisor on AI. She advises on multi-billion dollar investment strategies and works with boards and C-suites on AI strategy, governance, and adoption.

  • The power of Excel in an AI age
  • The ROI of AI and what boards want to see 
  • 15% as the magic AI productivity number 
  • Agents and the future of finance

Full transcript

Glenn Hopper:

Welcome to FP&A Today, I’m your host, Glenn Hopper. Today on FP&A Today we’re joined by Joyce Li, CEO and chief AI strategist at Averanda Partners. Joyce brings a rare combination of expertise to our conversation. She’s both a CFA charterholder and a computer science graduate with an MBA from Wharton. Over her career, she’s co-led multi-billion dollar investment strategies, advised global financial institutions, and now works with boards and C-suites on AI strategy, governance, and responsible adoption. She also serves on the advisory board of OpenBB and co-authored the Athena Alliance AI Governance Playbook, helping finance leaders navigate the fast-changing world of AI. Joyce, it’s a real pleasure to have you with us today. Welcome to the show.

Joyce Li:

It’s great to speak with you, Glenn.

Glenn Hopper:

It’s been great getting to know you over the months. I know it’s been a few months since we first spoke, and every time we talk, I have to say I absolutely love your background and focus, being both a CFA charterholder and a computer science graduate with that MBA from Wharton. I feel like you have kind of the perfect package. And I’m wondering, how would you say that unique combination has shaped your career path? Can you walk us through both your educational and professional background?

Joyce Li:

Yeah, for sure. I’ve been thinking about that a lot: what’s the through line of my career so far? Because clearly, if you look at it from the outside in, it has three tracks. The first one is computer science engineering, doing analytics work for financial institutions. That’s my early career. And then I switched lanes and became an analyst and then a portfolio manager managing investments, across different verticals and different types of firms: hedge funds, long-only mutual funds, ETFs, you name it. And now I’m almost back to a little bit more technology, doing the intersection work between AI, finance, and governance. And the through line I came up with <laugh> is I just have this curiosity and an engineer’s can-do attitude. I feel there’s always a solution to a challenge that’s interesting enough, waiting for us to solve it.

In the early part of my career, it was how to bring data into some of the finance suites to help people make decisions. In the middle, it was how do I create value for our shareholders and investors by discovering great companies, so that when they grow in market value, the investors benefit as well. And now it’s more like bridging between how people should look at technology and how they can translate that into the problem they’re solving, whether that’s their business model, how to unlock the potential of their labor force, or simply investment decisions and how to restructure their investment team as well. So that’s a through line that gives me a lot of fun. And that can-do attitude, you could also call it a very naive attitude, lets me just say yes to a lot of these opportunities and have a lot of fun with it.

Glenn Hopper:

Yeah, and it is such an interesting time with technology, thanks to generative AI being integrated into everything we do at a level it never has been before. Because truthfully, in the past, the barrier to being able to do real things with technology was the ability to code. And of course, with your computer science background, you already had that. But now people who couldn’t code before can get into vibe coding and access the power of Python and whatever other languages they’re using in their everyday job. I’m wondering though, and we’re gonna talk a lot about AI obviously, but when you were managing investment portfolios and doing your other work outside of computer science, did you ever think, hmm, I could maybe automate this, or do some modeling or some portfolio rebalancing or whatever it is? Did you ever think about writing programs, or did you apply the computer science when you were doing portfolio management?

Joyce Li:

Oh, definitely, all the time. I may not always have been doing it well, but this is always one of the questions I would ask of my colleagues who were much more skilled in programming, especially in the past, when dealing with large amounts of data or highly complicated modeling techniques. I’ve always assumed there’s a better way to leverage technology for whatever we’re doing. So I’ll give you an example. One of the things that we looked at as an investment team in the past was how do you get at unstructured data. Of course, at that time there was no gen AI, so a lot of times we went to trade shows or into government filing databases and got all these non-standard data sources. And interestingly, that’s where a little bit of technology can get a lot of mileage in uncovering some interesting insights.

And when I was running a long-short strategy, we actually discovered a lot of inflated financial claims, or just some questionable business practices, by doing that. So I would say that benefited my career a lot: the habit of always asking, can we do something differently with the technology available to us? And I really look forward to convincing, or encouraging, everyone else to think about that. In fact, that’s maybe one of the things that you also do a lot: encourage people to think about what can I do with technology these days that can either multiply my ability to do things or discover something that I didn’t know before.

Glenn Hopper:

Yeah, absolutely. And I have a hard time drawing a line here. Look, I understand we’re talking to senior leadership and finance and accounting professionals, and I’m not saying that anyone who’s carved out a career with domain expertise in another area needs to go become a machine learning engineer or get a new degree in computer science. But I think it’s important that we understand at some level what’s going on under the hood if we’re gonna use AI. And don’t get me wrong, there are so many finance leaders right now, and really across all professions and industries, who are leaning into generative AI and getting very good at using it. They’ve figured out a good prompting strategy, they’ve figured out good things they can offload to AI, but a lot of times they don’t take the time to look under the hood and understand what’s happening there. With that engineer’s mindset and computer science background, you have a somewhat better read, or a significantly better read, I would say, because you understand the engineering that’s happening. When you’re talking to leadership or boards or anyone that’s interested in rolling out AI, where do you draw that line, and what are your thoughts on how much we need to know on the technical side versus just being good users?

Joyce Li:

Yeah, it’s really interesting. We’ve been talking about, you know, gen AI: why is it different from past technology advancements? And I know you have a strong opinion on this as well. I believe gen AI is really easy for a business leader, or anyone, to get onto. They can start using it, they can start prompting, they can even leverage the prompt libraries other people create and really do amazing things already. However, the curve stops there. If you want to create truly value-unlocking, business-strategy-type thinking, you have to go a step deeper. You have to keep using it and keep thinking, okay, what are the other things that other people are using it for? And sometimes it doesn’t have to be directly related to your business function.

Sometimes just the way you use it in life can spark an idea that you can use at work. It’s more like a mindset: if you think about yourself sitting in the middle of a sphere, the more you use it, the more your sphere’s surface is gonna expand, you’re gonna have more interesting ideas, and you also create this taste. I know it’s a little bit fluffy, but bear with me. I do think for business leaders, a lot of times we develop that second-level thinking based on our first-level thinking. We’re so used to that taste of what is a good idea and what’s not in traditional business domains. But if you can think about gen AI similarly, you’re gonna be much more confident in determining what is the right AI initiative that your company should consider, and what is noise that isn’t really related to the true competitive edge of the business, and therefore you should pass. So I do think that taste has to be developed by grinding through all this daily usage, even though you don’t know which use will give you that one hundred percent genius idea.

Glenn Hopper:

Yeah. And we talk a lot about whether, if you can do all these calculations and forecasts in generative AI and not in Excel, is Excel going anywhere? And I don’t think it is. We talk about it all the time.

Joyce Li:

Mm-hmm <affirmative>.

Glenn Hopper:

I don’t care if you’re in finance or data science or BI or whatever. I mean, Excel is just a perfect format to do data analysis in.

Joyce Li:

I agree.

Glenn Hopper:

And I think about my early career. I kind of came up being an Excel warrior, really proud of all the formulas I could make and all that. I’ve been a CFO for a couple of decades now, and I don’t do as much in Excel anymore. But my point with all this is, if you understand ways to manipulate data, whether the format is Excel or R or whatever platform you’re in, then you have more of an engineering mindset around it, and you think about what’s possible. Whether you’re writing the formula or not, you know the outcomes you can get. And I sort of think about all this vibe coding right now: it’s really cool if you can just talk to generative AI and have it build an app for you or whatever.

But think about a production-ready app and what you need to understand about it to be able to prompt more intelligently. Even if you’re not a great coder, if you know the constructs, you know this is a conditional loop and this is how it works, this is the way I’m gonna do this, and this is the overall architecture I want, then you can guide the prompts better. So I think for us in our careers, yes, we still need to have that domain expertise; that’s thing one. And if it gets easier through AI to do our job, that’s great, but we still need to know the questions to ask and how to structure and guide the prompts, whether it’s chain of thought and all that. So I guess, all that to say: are you doing any vibe coding right now? Are you building anything where you’re <laugh> having AI write code for you?

Joyce Li:

Yes, yes. Again, if I don’t do it, how do I know what questions to ask? I wanna extend your comments on the Excel modeling a little bit, and then I’ll go into the vibe coding. So I used to look at a lot of pre-IPO companies, and when they come to IPO, there’s a step you’re probably familiar with, but just for your audience: the sell-side investment banking analysts will have their model built up. And for us, before the IPO, based on our communication with management plus the financial filings, we also build up our own model. So there’s always this meeting where we’ll compare our assumptions, and I’ll question, or our team members will question, their assumptions. They may try to defend their assumptions, and that back and forth will let each side make their own decisions.

But that ability to ask the key questions that will influence that model: you wouldn’t ask about the tiny little detail that affects one cell that leads to another formula. You ask about the most important levers. But how do you discover those levers? How do they work together? Of course, in the form of Excel formulas, but understanding why one leads to another and why a difference in assumptions makes a huge difference here is developed over time. And I do think that engineering mindset, or whatever you call it, the analyst mindset, requires a little more literacy than actually building the model. I haven’t actually built a model myself for a long time either, but I know what to ask. Within 10 minutes of looking at a model, I know what holes I would poke, right?

So, going into the vibe coding: getting started, again, is super easy. Within minutes you’ll find something really amazing, and you can especially show off to your kid that you are <laugh> absolutely on the cutting edge. That’s very easy. However, once you get there, how do you test? How do you evaluate? That’s why I think nowadays people tend to say evals are the key. I think for board members or for C-level executives, that’s the key criterion. But unless you understand, or at least have the curiosity to learn to a certain level of literacy around AI, what AI can do and what AI might destroy or what problems it might create, you wouldn’t be able to ask the very targeted questions that will influence your decision making. So that linkage between the capabilities and the potential risk is very critical for board members and executives to keep learning about.
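Joyce’s point that evals are the key for vibe-coded software can be made concrete. The Python sketch below shows the minimal shape of an eval harness: a fixed set of expected cases scored against the implementation. The `categorize_variance` function and its cases are hypothetical stand-ins, not something discussed in the episode.

```python
# Minimal eval harness for a vibe-coded function: define expected
# cases up front, then score the implementation against them.
# `categorize_variance` is a hypothetical stand-in for whatever the
# AI-generated code actually does.

def categorize_variance(actual, budget):
    """Flag a budget line: 'favorable', 'unfavorable', or 'on target'."""
    diff = actual - budget
    if abs(diff) <= 0.02 * abs(budget):   # within a 2% tolerance band
        return "on target"
    return "favorable" if diff < 0 else "unfavorable"

# Eval cases: (actual, budget, expected label)
CASES = [
    (98, 100, "on target"),      # within 2%
    (90, 100, "favorable"),      # spent less than budget
    (120, 100, "unfavorable"),   # overspend
    (102, 100, "on target"),
]

def run_evals(fn, cases):
    """Return (passed, total, failures) for a function against its cases."""
    failures = [(a, b, want, fn(a, b))
                for a, b, want in cases if fn(a, b) != want]
    return len(cases) - len(failures), len(cases), failures

passed, total, failures = run_evals(categorize_variance, CASES)
print(f"{passed}/{total} evals passed; failures: {failures}")
```

The point of the pattern is that when the AI regenerates or "improves" the function, you rerun the same cases rather than eyeballing the output.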

Glenn Hopper:

Yeah, I’ll be very curious to hear what you are seeing in the market right now. FP&A Today is brought to you by Datarails, the world’s number one FP&A solution. Datarails is the artificial-intelligence-powered financial planning and analysis platform built for Excel users. That’s right, you can stay in Excel, but instead of facing hell for every budget, month-end close, or forecast, you can enjoy a paradise of data consolidation, advanced visualization, reporting, and AI capabilities, plus game-changing insights, giving you instant answers and your story created in seconds. Find out why more than a thousand finance teams use Datarails to uncover their company’s real story. Don’t replace Excel, embrace Excel. Learn more at datarails.com.

Last year, everybody was talking about AI, but nobody had budget. This year, we’re flying off the edge of the Gartner hype cycle on generative AI, and I think we’re starting to see a little bit of that slipping into the trough of disillusionment. And the super interesting thing is, I don’t think it’s because of any letdown in the technology. The technology is moving lightning fast, it’s better every day, there are new features, and the arms race between the frontier providers is insane, how quickly everything’s happening. But there have been studies, and this is a time when I get jealous of the really big podcasts that have a producer in the room who can pull up the stats. Since I don’t have anyone and I didn’t prepare ahead, well, we’ve talked about it and I’ve written about it: there’s a lot of noise in the media right now about AI projects failing.

And you and I know the reasons for those. A lot of the time, I’d say the bulk of the time, it’s not because the technology is bad. It’s because the desired outcome they were going for was unrealistic given where the technology is today. But I’m wondering what you’re seeing in the market right now, because from what I’m seeing, there’s still that push from the top down of “we have to do AI,” with it nebulous and unclear what “do AI” means. Investors, boards, and senior management are pushing that down to their teams, telling ’em to do AI without clear goals and outcomes. And then those of us stuck in the middle are going, well, what do you want me to do with this? I can try to do it here, whatever.

So these big capital investment projects are stalling, slowing down, not coming to fruition because of what I was just talking about: people don’t understand the technology and don’t have clear goals around it. But where we are seeing more success and more efficiencies is at the <laugh> front lines, where employees are using the tools, sometimes with explicit permission from their companies, and sometimes just doing shadow AI on their own with a personal account. I’ll break this into a couple of questions now that I’ve laid out all that exposition. So the first question is: when boards come to you right now, are you seeing a cooling off, or a fear of, okay, maybe we’re gonna pump the brakes on AI projects? What kind of AI questions are they asking right now? What’s your sense of people’s mood and appetite for trying new AI projects?

Joyce Li:

Yeah, I think the ROI question is still number one in boards’ minds. I wouldn’t say boards are saying, let’s pause and review. In fact, if anything, boards still have that urgency of let’s figure this out. Maybe what we have done gives us valuable lessons; successes or failures of pilot projects are meant to give you insights that can guide your decisions. So I would say ROI is still very much the big ask. But the big change I’m seeing is the realization that these AI initiatives need closer attribution to business goals. What I mean by that is, maybe in the past it would be, oh, let’s think about how to improve our productivity by 15%, because that seems to be the magic number people throw around.

But now maybe it is: okay, if you are a Chief Revenue Officer, what are your growth goals, and how can AI help there? It may not even be AI doing the selling; it may be AI helping to track the marketing campaigns and being a little more agile about which campaigns are doing the right things. But regardless of what it is, it’s linked to that business head. So the business division head or function head would be asked to sign off on a certain AI goal, and the KPIs of the AI team will be linked to that goal as well. I see that being talked about a lot more than, say, six or twelve months ago, when it was a little bit more top-down: let’s figure out how to improve productivity.

And going back to your point, understanding what AI can and cannot do, and where AI risk might happen, is a very important input to drive that change as well. Because now it’s not that easy to just, you know, bluff, right? <laugh> It’s more like, how you get measured is what gets done. The other thing is, I do feel there’s this middle-management dilemma. I have a lot of sympathy for that group of talent, because a lot of times, depending on whether the head of the division has a clear sense of what AI can and cannot do, it’s a little difficult to communicate how you go from that goal to the implementation reality. And one of the areas I would love to hear your feedback on is the views on data.

Are we ready? That’s the first question, and your head of division may have a very different view than <laugh> middle management. The second thing is, even if we are ready on data, is that expectation that our data is a treasure, our core competence, really true? Maybe our competitive edge or core competence five years down the road isn’t really coming from this data. And the third thing, I also feel there’s a big difference, especially for the finance folks in FP&A functions who have to calculate the ROI and payback, in understanding whether we are doing automation or truly leveraging AI, right? <laugh> If you do the automation, it’s easier to calculate the ROI, and sometimes it’s much more attractive, I have to admit. But if you assume that AI really will make a huge difference in the business model, then there definitely needs to be a lot more communication, alignment, and discussion, because on a simple calculation comparison you may easily go with the process automation, and you’d be missing out on a lot of other potentially interesting things.
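The automation-versus-AI trade-off Joyce describes for FP&A can be sketched as a simple payback comparison. Every figure below is invented for illustration; the point is that the automation case is easy to pencil out, while the AI case adds ongoing run costs and far more uncertainty around the benefit line.

```python
# Back-of-envelope comparison: straight process automation is easy to
# cost out, while an AI initiative's upside is wider but less certain.
# All numbers are hypothetical.

def payback_months(upfront_cost, monthly_benefit, monthly_run_cost=0.0):
    """Months to recoup the upfront investment; None if it never pays back."""
    net = monthly_benefit - monthly_run_cost
    if net <= 0:
        return None
    return upfront_cost / net

# Process automation: modest, predictable savings.
automation = payback_months(upfront_cost=60_000, monthly_benefit=8_000)

# AI initiative: larger spend, larger but less certain benefit,
# plus ongoing inference/licensing costs.
ai_project = payback_months(upfront_cost=250_000,
                            monthly_benefit=35_000,
                            monthly_run_cost=5_000)

print(f"Automation payback: {automation:.1f} months")    # 7.5
print(f"AI initiative payback: {ai_project:.1f} months")  # ~8.3
```

On raw payback the two look close here, which is exactly Joyce’s warning: the simple calculation alone tends to favor automation unless you also model what the AI initiative could do to the business model.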

Glenn Hopper:

Yeah, and it’s so interesting, because there is all this pressure, and sometimes it can feel like, if somebody doesn’t dig in and get down to the brass tacks of what we are trying to do here, is it a rule-based workflow, where’s the AI in there, it can feel like the blind leading the blind. Everybody’s just telling each other, we’re gonna do AI, we’re gonna do AI, without defining what it means, or even necessarily defining what their end goal is other than a productivity increase or whatever the case is. And I’ve had this semantic battle that I’m just not ready to let go of. You know, in 2025 everybody calls everything an agent. And that’s great for marketers: they can call a chat bot an agent, they can call a workflow an agent, and all that.

But an agent is a very specific tool in artificial intelligence. It is something that has agency, like you and I have agency. An agent is something you tell to go do a task, and it goes off and does it; unless it has a problem, it doesn’t come back, it performs all the calculations and doesn’t come back to you till it’s done. So if you call your chat bot an agent, or whatever tool you have that’s not truly an agent, then you’re watering down how significant that is. And you’re also messing with people’s expectations, where they hear from <laugh> various media, oh, we could just have an agent do that. Well, it’s not that simple. And I think that’s why a lot of these projects are failing: unrealistic expectations and a lack of understanding of what AI can and cannot do.

The only reason I would think of conceding and allowing things we know are not agents to be called agents is this: if it looks like an agent to the end user, does it matter if it’s actually an agent or just an orchestrated workflow? I don’t know. I mean, if I tell my computer to go off and do some task, and I don’t know about the Rube Goldberg machine in the background doing all the calculations and decision trees and the typical automation flow, and then it comes back when it’s done, to me it seems like an agent, so I’m gonna call it an agent. So maybe that’s where I could let it go. But at the board level, when you’re talking to them, I feel like they probably don’t want to hear that distinction, right? I can’t get past the need for us to have technical understanding, but also for someone to be that interpreter. And I think if you don’t have that interpreter, someone with that deep level of understanding, that’s why these projects are failing. So I don’t know if boards are hearing that, or if they’re aware, or if they care, or what their take on it is.
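The distinction Glenn is drawing, agent versus orchestrated workflow, can be sketched as a loop. In this toy Python version, the "agent" is handed a goal, repeatedly picks a tool, and only returns to the caller when the goal is met or it gives up. A real agent would ask a model to choose the next action; the ledger and tools here are entirely hypothetical.

```python
# A toy agent loop: given a goal, keep choosing and executing tools,
# and only come back to the user when done (or stuck). The "policy"
# here is hard-coded where a real agent would call a model.

def toy_agent(goal_amount, ledger):
    """Keep applying tools until the ledger reconciles to goal_amount."""
    steps = []
    for _ in range(10):                      # safety cap on iterations
        total = sum(ledger)
        if total == goal_amount:             # goal met: return to the user
            return {"status": "done", "steps": steps}
        # "Decide" which tool to use next (a real agent would ask a model).
        if total < goal_amount:
            ledger.append(goal_amount - total)   # tool: post adjusting entry
            steps.append("post_adjustment")
        else:
            ledger.pop()                         # tool: remove last entry
            steps.append("remove_duplicate")
    return {"status": "stuck", "steps": steps}   # problem: escalate to a human

result = toy_agent(100, [40, 40])
print(result)   # {'status': 'done', 'steps': ['post_adjustment']}
```

A chat bot, by contrast, would stop at every turn and wait for the human; the loop, the tool choice, and the decision of when to return are what make this shape agentic.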

Joyce Li:

The short answer is, at least I haven’t met someone who really cares about that distinction. Of course, when the CTO or the CIO in front of them gets asked these types of questions, it’ll come up. But in terms of proactively asking, are you doing an agent, you know, with true autonomy, true tool use, and a true ability to learn from past decisions, I haven’t encountered it. That doesn’t mean there’s none; I’m just saying it’s not common yet. But I do think one of the reasons is that a lot of board members, especially in the more traditional, I would say regulated, industries, assume this is far away from the core business functions of the business. So they tend to assume that agents are used more in, let’s say, go-to-market, lead generation, marketing, maybe some email, and, in everyday life, booking a flight ticket or whatever.

But in reality, we are seeing more and more agents becoming a core component, or at least a core workflow, in some of these businesses. And it will be interesting to see what gets the board members’ attention. My bet would be they will be very worried about the risk associated with it. Because when you say agency, it’s great for people to have agency, but once you assume a machine has agency, the risk alarm bells will just start ringing: who will be able to stop an agent before it does something unexpected or outside the guardrails? We already mentioned vibe coding. Actually, one thing I really like to see, and would like to attend when I have time, is these types of hackathons where the AI agent startups present. And they’re not even startups anymore.

They are already at least dozens, if not hundreds, of millions in revenue, presenting themselves as tools for developers and showing where they’re focusing. I also see the focus for agents moving from what agents can do, the no-code/low-code workflow where you put together and build an agent, to very much the guardrails: how do we monitor, how do we get an almost AI-speed response to some of these unexpected behaviors? How can we create all the trails so that even though agents may create all these desirable outcomes, in case something happens, we can offer audit trail logs to anyone who’s checking? And lastly, I would just say: if you assume not just you have agents, but your counterparts have agents as well, how is your business positioned to talk to, or sell to, or even deal with these agent counterparts? Are you really treating that as an opportunity, or are you shutting them all out because of the risk metrics? I would love for us to have more discussion about that, but I would just say: not yet.
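One guardrail pattern Joyce alludes to, audit trails around agent tool calls, is often implemented as a wrapper that records every invocation. The Python sketch below is illustrative only; the tool, the spend limit, and the log format are all invented for this example.

```python
# Guardrail sketch: wrap every tool an agent can call so that each
# invocation leaves an audit-trail record someone can check later.
# The tool and its approval limit are hypothetical.

import functools
import time

AUDIT_LOG = []

def audited(tool_name):
    """Decorator that records every call to an agent tool."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {"tool": tool_name, "args": args,
                     "kwargs": kwargs, "ts": time.time()}
            try:
                entry["result"] = fn(*args, **kwargs)
                entry["ok"] = True
            except Exception as exc:
                entry["ok"] = False
                entry["error"] = str(exc)
                raise                      # still surface the failure
            finally:
                AUDIT_LOG.append(entry)    # log success and failure alike
            return entry["result"]
        return inner
    return wrap

@audited("approve_invoice")
def approve_invoice(invoice_id, amount):
    if amount > 10_000:                    # guardrail: cap autonomous spend
        raise ValueError("above approval limit, escalate to a human")
    return {"invoice": invoice_id, "approved": True}

approve_invoice("INV-42", 900)
print(AUDIT_LOG[-1]["tool"], AUDIT_LOG[-1]["ok"])
```

The key design choice is logging in the `finally` block: the trail captures blocked and failed calls, not just the happy path, which is exactly what an auditor checking agent behavior needs.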

Glenn Hopper:

Yeah, what agents can do when they start interacting with each other. I think Google just came out with an agent payment protocol, and there are a lot of protocols being built around this now. And it’s funny to think: if you’re designing robots, you design them in human form because you want them to operate in a world that was designed for humans. If you have a two-foot-tall something on wheels with one mechanical arm, it can do a lot, but it can’t maneuver the way humans can. So that’s the physical world. But if you’re designing digital agents, it’s kind of funny to think that the whole idea of a UI around a website, and the way we navigate apps and the internet and everything, is very human-centric and very inefficient for agents.

So it’s gonna be interesting to see what happens on the web. It’s funny you mentioned that the demo for personal use is always booking a flight or booking dinner reservations. But if you think about how complicated it is going to the Delta website and navigating where to and from, then looking at the different flight class options, then the seat options and all that, it’s a very cumbersome way to navigate, when really an API that just went and looked at the database of available flights and seats would be much easier. So if you’re in business right now, how much are you planning for this sort of agent-run internet, and how much should your focus be on it when you can’t even get an agent to, you know, reconcile your credit card accounts or whatever? <laugh>

Joyce Li:

I do think for FP&A colleagues, there are two things people more and more have to consider. One is from the product side: if you sell AI-type products, agents mean the pricing strategy will be very, very different, right? Outcome-based, or even action-based or usage-based; one thing for sure is it’s not gonna be seat-based <laugh>. So how do you model that out, especially with a lot of uncertainty and not a lot of existing playbook yet? If you sell these types of products, I think that as an FP&A professional, if you can think through that and create a framework around it, that’s a great way to stand out and differentiate yourself; just my opinion. But if you are buying these types of products, the cost side could become more and more important in your cost structure as well. So that’s something to consider. I’m pretty sure, Glenn, that in future episodes you will talk about that a lot, so I look forward to listening to those.
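Joyce’s pricing point can be sketched for an FP&A model: seat-based revenue is one multiplication, while usage- and outcome-based agent pricing introduces volume and success-rate assumptions that have to be forecast explicitly. Every number below is hypothetical.

```python
# Three revenue models for the same customer. Seat-based is one
# multiplication; usage- and outcome-based pricing force explicit
# assumptions about volume and success rates. Figures are invented.

def seat_revenue(seats, price_per_seat):
    return seats * price_per_seat

def usage_revenue(actions_per_month, price_per_action):
    return actions_per_month * price_per_action

def outcome_revenue(outcomes, fee_per_outcome, success_rate):
    """Only successful outcomes bill, so the success rate is a key lever."""
    return outcomes * success_rate * fee_per_outcome

# Same hypothetical customer under the three models:
print(seat_revenue(50, 30))               # 1500
print(usage_revenue(20_000, 0.25))        # 5000.0
print(outcome_revenue(800, 5.00, 0.60))
```

In a seat model, the forecast lever is headcount; in the other two, it is agent activity and agent success rate, which is why the modeling (and the uncertainty) looks so different.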

Glenn Hopper:

I wanna talk a little bit more about governance, but before I do, I wanna put a pin in my question about the direction from boards and the latest studies that have shown all these AI projects failing. A lot of the numbers I see over and over are: our CFOs say, yep, AI is strategic, we have budget, we’re ready to go forward. Then you ask how many are doing pilots, and it’s some percentage. But if you ask how many have gone beyond pilots and are actually using AI at scale in production, that number drops precipitously. And it’s also a very nascent technology, and in finance and accounting, we’re meant to be risk-averse <laugh>, so we’re not gonna be out on the bleeding edge. But is it more than timing right now? Why is there such a gap between, call it, aspiration and execution on AI implementation?

Joyce Li:

I know you wouldn't say it yourself, but I do feel your article on this topic on your Substack was excellent, so I would highly recommend people track down your Substack. But in my opinion, beyond the normal reasons, the timing, people management, change management, I do want to bring in two points that I feel are relevant but less talked about. One is that first phase of adopting AI as a copilot or chatbot, sometimes even built internally and customized for the internal use case. That was good, until it wasn't. The chatbot as an interface is very rigid; it creates another workflow. Initially people may benefit from that chatbot's help, but as AI's capabilities continue to develop, people would love to have AI embedded into their existing workflow instead of having another window, or having to copy-paste out and copy-paste back.

That really affects the utilization rate a lot. And it's also a natural progression for the technology to go from a traditional UI, we call it traditional but it's a two-year-old UI, the chatbot, to agents or completely native, embedded tools. I do think that in that transition, depending on your company's stage, sometimes people get stuck in between: how do I move from a very canned, very easy-to-understand chatbot to something much more powerful, but with a bit more friction, real integration-work type of projects? The other thing I feel people are not talking about enough is the way of implementation. In the past there were two ways. One is to buy this generic-use but very powerful chatbot.

The other way is: we have this secret formula of great data, so we have to build our own, hire an army of talent and build it ourselves. But now both sides leave something to be desired. The general-use tool sometimes cannot be fully integrated and cannot fully harvest the potential of that business, and it doesn't differentiate you, right? Everyone is using the same thing, so why would you be so different? And on the other side, building it yourself, that MIT article also mentioned it, has an even higher failure rate, because your talent either cannot keep up, or there's a risk of overspending, where your budget allocation may not be moving in the most cost-effective direction.

And we all know the techniques of AI model training have been changing so much over the last two years. So that conversation, whether at the board level, the C level, or even a layer below the C level, gets framed as build versus buy, but there's another way, which is partner. Or maybe the small new way, which is acquire, though I would say acquire is rare. Partnership has become a lot more interesting now versus 12 months ago. If you look at some of these AI labs, and also some of the AI startups and AI companies, they have this huge advantage when they adopt the domain knowledge of your specific industry, and then they can work with you much more effectively. When we build together, sometimes that's the best solution, and for some firms it can actually bridge between those two phases: the chatbot phase and a tool that's really tailor-made for their business potential.

Glenn Hopper:

Yeah, and as you're talking through all that, I think about it. I spent a lot of my career focused on the SMB space, and I think about where they are with generative AI usage versus where midcap and enterprise, certainly any public companies and anyone with compliance issues, have to be with AI. Reading LinkedIn posts and articles, SMB adoption is pretty interesting right now, because they don't have the same compliance issues, they don't have the audit requirements. They're founder-led, and they've found ways to automate things. If it's 74% right, maybe that's close enough <laugh>, even if they can't reproduce it, as long as it saves them two days a month. Obviously I'm talking about very small businesses there. But enterprise companies are the ones that have the budget to do this, and they typically have the data to make it valuable and a higher level of data maturity.

But I think the fears, and some large companies are more agile than others, but whatever fears they have about AI (is it going to be wrong, is it going to hallucinate, is it going to steal my data, is it going to leak information?) are part of what's hindering adoption at the enterprise level. And one of the questions I get all the time is, to me, a pretty straightforward thing to solve for. Use AI in areas where it is appropriate: if you are doing something that needs a deterministic outcome, don't use probabilistic generative AI to solve for it; use a rule-based system or whatever fits. And then it's a matter of logging, what are the prompts, what are the responses, so that you can go back and replicate the parts that you can. But on the compliance side, and maybe this is a path into your AI governance playbook, which I want to talk about: what are you seeing, or how are you advising public companies on making sure the ways they use AI are repeatable, scalable, auditable, and understandable, whether for internal or external audit?
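The logging idea Glenn describes can be sketched very simply: record every prompt, response, model, and timestamp so AI-assisted steps are auditable and, where possible, reproducible. The function and field names below are illustrative assumptions, not from any specific library or framework.

```python
# Minimal sketch of an AI audit log: one record per model call, with hashed
# inputs for tamper-evidence. Names and fields are illustrative assumptions.
import datetime
import hashlib
import json

def log_ai_call(log: list, model: str, prompt: str, response: str,
                temperature: float) -> dict:
    """Append an audit record for one model call."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "temperature": temperature,  # a fixed model + temperature 0.0 helps repeatability
        "prompt": prompt,
        "response": response,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    log.append(record)
    return record

audit_log: list = []
log_ai_call(audit_log, "example-model-v1",
            "Summarize Q3 variance drivers.", "Revenue -2% vs plan ...", 0.0)
print(json.dumps(audit_log[0], indent=2))
```

In practice this sits behind whatever API gateway the company already routes model traffic through, so the log is captured centrally rather than relying on each user to save their chats.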

Joyce Li:

Yeah, there's definitely not one playbook for all, but from a board angle it's almost: you don't implement anything before you have the risk framework and guardrails in place, for big businesses, because the reputational, business, legal, and compliance risks are highly expensive and very hard to recover from. I would say that's the guideline for every business. But I do want to go back to the fear of adopting AI because of all these risks. There are two things. One is a literacy thing, right? Understanding where the techniques are these days, so you can separate out the tasks that need a very definite outcome. And, using the term AI agent a little broadly now, agents can run a certain part in Python code, which always produces the same result based on the same formula.

So there's a very easy-to-bust misconception about gen AI: that if we do anything with gen AI, the whole machine is a black box. No, it doesn't have to be. There are sweet spots where the black box, or even the hallucination, is useful, but for the parts that have to be very, very reliable, you can still link in code that you have very carefully written. That is something a lot of people who are new to gen AI don't know, so I would say people should know it's not unsolvable. The other thing is, I do think for startups and also for some existing consulting or technology companies, providing that sort of data security and governance layer is part of their core business offering.
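Joyce's point about linking carefully written, deterministic code into a gen AI workflow can be illustrated with a tiny routing sketch. The task names and formula are invented for illustration; real agent frameworks handle the routing differently, but the separation of concerns is the same.

```python
# Illustrative sketch: keep deterministic calculations in plain, tested code,
# and let the generative model handle only the narrative around them.

def depreciation_straight_line(cost: float, salvage: float, years: int) -> float:
    """Deterministic: same inputs always yield the same annual depreciation."""
    return (cost - salvage) / years

def handle_request(task: str, **kwargs):
    # Deterministic tasks go to fixed formulas, never to the model.
    deterministic_tools = {"depreciation": depreciation_straight_line}
    if task in deterministic_tools:
        return deterministic_tools[task](**kwargs)
    # Anything else would be passed to a generative model (not shown here).
    raise NotImplementedError("narrative tasks would be routed to an LLM")

print(handle_request("depreciation", cost=120_000, salvage=20_000, years=10))  # 10000.0
```

The agent can still draft the commentary around the number, but the number itself always comes from the same auditable formula.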

If we think about the infrastructure companies, some of the AI data-layer companies, that's the whole selling point of their businesses. So definitely there are solutions. Of course, going forward I'm pretty sure we'll see new things come up that need the guardrails to be amended, but I do think we're not looking at something we cannot solve. And lastly, oh, I forgot to mention one thing: now people realize one of the biggest risks for AI implementation or AI strategy is locking in with one vendor. Fortunately, people are now very used to vendors having to be exchangeable. If we have to exchange models or exchange some of the other data sources, that should be very modular. Of course, that creates challenges for the vendors <laugh>: how do you hold onto these customers? You have to keep up with the newest and the safest practices.
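The modularity Joyce describes usually means hiding each vendor behind a common interface so a model can be swapped without rewriting the workflow. A minimal sketch, with invented class and vendor names:

```python
# Minimal sketch of a vendor-swappable model layer. Class, method, and vendor
# names are illustrative assumptions, not any real provider's API.
from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorA(ModelAdapter):
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"   # a real call to vendor A's API would go here

class VendorB(ModelAdapter):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"   # a real call to vendor B's API would go here

def run_workflow(model: ModelAdapter, prompt: str) -> str:
    # The workflow depends only on the interface, never on a specific vendor.
    return model.complete(prompt)

print(run_workflow(VendorA(), "draft the variance commentary"))
print(run_workflow(VendorB(), "draft the variance commentary"))  # swap with one line
```

Swapping vendors then touches one constructor call, not the workflow, which is exactly the exchangeability Joyce says buyers now expect.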

Glenn Hopper:

Yeah, a challenge for vendors, and a challenge for the end users when OpenAI changes their model and doesn't tell anyone <laugh> ahead of time, so you're scrambling to swap it out and plug the new one in. Yeah, I can completely relate there.

Joyce Li:

That's right.

Glenn Hopper:

God, there's so much more I want to talk about, but time has flown by. A couple of questions I want to hit before you go, though. Because so many people seem a little bit stuck around rolling out AI right now: what is one piece of advice you'd give a CFO or finance leader who has been directed to lean in, who wants to use AI, but just doesn't know where to go? What advice would you give them at this point?

Joyce Li:

Again, this is very case by case, but I do think one overwhelmingly good piece of advice, I hope, is to really focus on where you think your business's competitive edge will be five years down the road, and walk that back. A lot of business leaders, especially finance leaders, are so used to thinking about NPVs: take the goal and discount it back to what needs to happen now. That is actually easier for getting people aligned than debating what's going to happen in the next six or twelve months. And I do think the more you can bring people onto the same page about what really makes your business click, the better.

Glenn Hopper:

Yeah. And I think that longer time horizon makes people think more strategically than tactically, because in an AI arms race being tactical is very difficult. You're like, oh, we were using Gemini, but now Anthropic does this, so we need to pivot and do that. It makes it very difficult to react to the latest thing. But if you're thinking strategically and focused on that long term, hopefully you've got a goal that moves

Joyce Li:

That's right. That also allows you to be very firm on where you are going, but very loose on how to get there.

Glenn Hopper:

Yeah. We're at the time of the show where I get to the boilerplate questions we ask everyone, so we'll close out with these. I'm sure you've heard the show before, but we ask everyone: what is something most people don't know about you, something they wouldn't learn from just checking you out on LinkedIn or other social media?

Joyce Li:

I'm actually an introvert. I get very nervous going to conferences; I cannot strike up a conversation <laugh> with a stranger. So I hope people reach out to me, because I enjoy one-on-one conversations a lot. I would definitely encourage you to reach out if you feel what I'm saying here adds some value for you; I'd love to get connected.

Glenn Hopper:

That's great. I too am an introvert, which is crazy to say for a podcast host, but I'm doing three podcast recordings today and it's going to be exhausting; I'll just go sit in a sensory deprivation chamber after this is done. To your point, though: if I were talking to three people at once, it would be a lot, but these one-on-one conversations are fantastic, so I'm right there with you on that. All right, so, everybody's favorite question. I know you probably write code a lot more than you use Excel, but we have talked a lot about Excel. What is your favorite Excel function, and why?

Joyce Li:

Yeah, I already hinted at that. It's XNPV, because I just keep thinking through that lens a lot.
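XNPV, unlike plain NPV, discounts each cash flow on its actual date rather than assuming equal periods. As a reference for listeners, here is a small Python re-implementation of Excel's formula (Excel uses a 365-day year in the exponent); the cash flows below are invented for illustration.

```python
# Re-implementation of Excel's XNPV: each cash flow is discounted by
# (1 + rate) ** (days_since_first_flow / 365). Sample flows are illustrative.
from datetime import date

def xnpv(rate: float, cashflows: list) -> float:
    """cashflows: list of (date, amount) pairs; the earliest date anchors t=0."""
    d0 = min(d for d, _ in cashflows)
    return sum(cf / (1 + rate) ** ((d - d0).days / 365) for d, cf in cashflows)

flows = [
    (date(2025, 1, 1), -100_000),   # initial investment
    (date(2025, 9, 15), 40_000),    # irregular interim cash flow
    (date(2026, 1, 1), 80_000),     # later payoff
]
print(round(xnpv(0.10, flows), 2))
```

This is the "take the goal and discount it back" lens Joyce describes: irregular milestone dates are fine, because each one carries its own discount factor.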

Glenn Hopper:

Yeah, yeah. Excellent. Well, I guess before I let you go, and we'll put links in the show notes: if our users want to reach out and get in touch with you, what's the best way for them to follow and connect with you?

Joyce Li:

Yeah, I'm quite active on LinkedIn, so definitely reach out to me there. I also have a Substack you can check out; the link is on my LinkedIn profile.

Glenn Hopper:

Excellent, excellent. Notice I just said users <laugh>, not listeners; my business hat came on. To our listeners, apologies for calling you all users <laugh>. Well, Joyce, thank you so much for coming on the show. This has been a blast.

Joyce Li:

Thank you, Glenn.