
This special episode formed part of FP&A Con 2025, which saw a record-breaking 1,500 registrants.
The guests:
Nathan Bell, Managing Partner, VAi Consulting, is among the most in-demand experts. Bell has unique experience, having started in computer science before leading finance teams at Native American Bank, Digital Trends Media Group, and Gartner, where he advised hundreds of FP&A teams facing chaotic data situations.
Anna Tiomina, former CFO of Sandoz and Softeq. Most recently she has pivoted to become a fractional CFO at Blend2Balance.
Anna Yamashita, Solutions Consulting at Datarails, who has worked across SaaS finance throughout her career and now helps FP&A teams implement modern performance management solutions.
In this session:
- How AI is being applied in real FP&A workflows today
- Predictive analytics as the gold standard
- Getting to a single version of truth with AI
- ROI in AI
- Mastering Data Management
- Misconceptions about AI in finance and security
- Copilot, prompts
- Bonus: Security and AI
Full blog post and transcript
Glenn Hopper:
This is FP&A Today. Welcome everyone to this special live edition of FP&A Today, brought to you by Datarails as part of FP&A Con. I’m Glenn Hopper, host of the FP&A Today podcast and author of AI Mastery for Finance Professionals. Today’s session is titled Data Analytics and AI for FP&A Teams. This will be a fast-paced, practical conversation focused on real use cases, hands-on tools, and how finance teams can thrive in the AI era. This session is being recorded and will be released as a special episode of the podcast, and I’m joined today by three outstanding guests.
First, Nathan Bell, co-founder and managing partner at VAi Consulting. Nathan works with CFOs to build integrated finance, technology, and data strategies, and he’s led enterprise transformations at Gartner, Embark, and Native American Bank. He holds degrees from Harvard, DePaul, and the Graduate School of Banking at Colorado. Next, Anna Tiomina, fractional CFO and founder of Blend2Balance. She has held executive finance roles, including global CFO at Softeq and cluster CFO at Sandoz. Today she helps small and mid-size businesses apply AI to everyday finance operations and writes the Balanced AI Insights newsletter. And finally, Anna Yamashita, manager of Solutions Consulting at Datarails. Anna has worked across SaaS and finance, previously at Vena and now at Datarails, where she helps FP&A teams implement modern performance management solutions. She has a background in psychology and a passion for data storytelling, enablement, and financial modeling. I love the psychology background. I feel like in this era of AI, psychology probably helps, you know, <laugh>, you know how to talk to the, to the bots.
Anna Yamashita:
Yeah, exactly. It doesn’t hurt for sure.
Glenn Hopper:
Yeah. Let’s go ahead and dive in here. You know, a lot to cover today. And, um, I wanna start off with how AI is being used in FP&A today. So maybe Nathan, you wanna start us off and tell us: what are some of the ways you’re seeing AI applied in real FP&A workflows today?
Nathan Bell:
Yeah, thanks Glenn. A couple examples here. One is just my own personal example, from when I was CFO of Digital Trends Media Group. I joked at the time that it was really an FP&R team; we weren’t doing much analysis, right? We were lucky to ship the financials out on time and maybe do a mini variance analysis. But once we were able to really get the data we needed and do some automation (and you hear automation intermingled with AI a lot these days; I think there’s a lot of confusion around automation and RPA, which has been around a long time, but it can be super useful for FP&A teams), what we were doing with it went from just trying to get the reports out, to variance analysis, to ultimately doing predictive analytics, which to me is kind of the gold standard of where most people want to be these days.
And then once you’re able to make that move, then it’s, you know, true prescriptive analytics, which is where I get the most excited. But for us, it was just getting a common lingo together, you know, with multiple versions of truth. We had that situation when I was at Digital Trends Media Group: my team would put the reports together, we’d go to the quarterly offsite, and I would do the board pack and presentation. I’d go first and I’d say, here’s the numbers, here’s how this looks. And then you’d have the head of sales go after me, or the head of marketing or HR or operations, and they would all say, I see Nathan’s numbers, but mine’s slightly different. Here’s what my forecast looks like. Nathan and his team don’t have this information about a certain client, or, you know, it’s not in the CRM, it’s just in our head, or this is how we look at it, or how we calculate this metric in our group.
And those multiple versions of truth are something that, for FP&A, is real. I’m sure most people are probably nodding their heads right now: yes, I’ve been there. We think our numbers should be what goes in the board pack, and everybody else has a variation of that. And I think AI can really help solve that. It’s going to be near impossible to get to one single version of truth. When I worked at Gartner, we used to joke that was like chasing Bigfoot: you hear about it, but nobody’s ever really seen one, right? Like a company that has a single version of truth with metrics. But if you can get to what we call sufficient truth, where there is a common data dictionary and glossary that everybody is operating from on a metrics and KPI standpoint, then you can start automating things, start shipping out normal variance-type analysis, and free up that time to be the true business finance partner that I think FP&A really wants to be.
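Nathan’s “sufficient truth” point, one shared data dictionary that every department computes from, can be sketched in a few lines. This is a hypothetical illustration (the metric names, formulas, and numbers are invented for the example), not any specific vendor’s implementation:

```python
# A minimal "data dictionary": one agreed-upon definition per metric,
# so finance, sales, and marketing all compute the same number.
METRIC_DEFINITIONS = {
    "gross_margin_pct": lambda d: 100 * (d["revenue"] - d["cogs"]) / d["revenue"],
    "revenue_per_head": lambda d: d["revenue"] / d["headcount"],
}

def compute_metrics(period_data: dict) -> dict:
    """Every department calls this instead of keeping its own formula."""
    return {name: round(fn(period_data), 2) for name, fn in METRIC_DEFINITIONS.items()}

march = {"revenue": 500_000, "cogs": 320_000, "headcount": 40}
print(compute_metrics(march))  # {'gross_margin_pct': 36.0, 'revenue_per_head': 12500.0}
```

The point isn’t the code; it’s that once a definition lives in exactly one agreed place, automation (and AI on top of it) has a single target to map to.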
Glenn Hopper:
Yeah, that’s great. Well said. So we have two Annas on the panel, so we’re gonna go Anna T and Anna Y. Anna T, we’ll go to you next. What are some common misconceptions about AI that come up when you start working with finance teams?
Anna Tiomina:
Yeah, first of all, thanks for having me here. And second, the biggest misconception that I hear is that AI is going to take our jobs. This is the thing that comes up every time we talk about AI, or when I show AI capabilities. And this is not what is actually happening. AI, at least from what I see today, is not going to take our jobs. It is taking the part of our jobs that is manual, that involves a lot of data processing, that is repetitive. Yes, it makes our jobs maybe more meaningful. Yes, research actually shows that people are starting to work more efficiently, but they are not working less. This is important, right? So we are spending our time on something more valuable, becoming better partners to our teams. So I don’t think AI will change our jobs. I think the standards will change, and we will need AI to meet these new standards.
Glenn Hopper:
Isn’t that amazing? Every time efficiency goes up and productivity goes up, it’s never like we get more work-life balance. We just get more and more work for the <laugh> for the company. Like you, I do a lot of speaking around AI, and I do a lot of live demos, which is always scary if you’re trying to do financial analysis live with generative AI. It used to really stress me out, but these days I’m almost happy when the analysis does something wrong, because everybody kind of breathes a sigh of relief. Maybe I don’t want ’em to be too comfortable, because there are <laugh>, you know, a lot of automations that are coming. But at the same time, we use this stuff every day, and nothing right now suggests we’re ready to pull the human outta the loop. So, um,
Anna Tiomina:
Which is also good, which means that you will not lose your job.
Glenn Hopper:
Yep. Yep. <laugh>. Well, Anna Y, in your work with Datarails clients... And by the way, plug: I guess it’s self-promotion or self-interest, but I love what Datarails is doing with AI in the platform right now <laugh>. If anybody hasn’t seen that, get a demo of the AI that’s in the Datarails platform. Super cool stuff. Back to my question before I interrupted myself. In your work with Datarails clients, how are you seeing smaller and midsize teams starting to use AI, and what does that early AI adoption look like to you?
Anna Yamashita:
Yeah, to Anna T’s point, a lot of the prospects that come to us are not saying they need AI because they want to give themselves free time. It’s that they already have all of these different things that are now competing interests for what they can get done on a daily basis. And I think AI is really great to help with that. One way a lot of people use it, which is honestly very low-hanging fruit, is to answer questions from different stakeholders, right? If someone’s coming in saying, what did I spend last month? Rather than having to go in, pull a report, and send that email out, which maybe takes 15 minutes of time, but it’s a lot of switching between different tasks, and it’s interrupting something, interrupting a thought process.
Now these end users can just ask Datarails itself, right? What did I spend last month? What was I supposed to spend last month? Those are really easy, surefire ways that people get use out of different AI tools really quickly. I think that’s great for the end user, but in terms of the finance department itself, everyone is very, very excited by our storyboarding: using AI to actually create some of our board decks immediately, so you have a really good starting point when it comes to the commentary that you wanna show on these board decks. And my favorite part of that is that everything can be overwritten. So just like you were saying, Glenn, sometimes it’s almost a sigh of relief to see: okay, it did something wrong. But being able to go in and actually override or add in your own commentary as well, you’re really using the best of both worlds. You’re getting a lot of time back because that starting point is being made for you, but you’re also not losing any accuracy, because you still need that human intervention to take it from, like, 95 to a hundred.
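The “AI drafts, human overrides” loop Anna describes can be sketched generically. In the actual product the draft comes from an LLM; here a simple template stands in for the model so the override flow is runnable, and all names and figures are hypothetical:

```python
from typing import Optional

def draft_commentary(line_item: str, actual: float, budget: float) -> str:
    """Stand-in for the AI draft: turn a variance into a first-pass sentence."""
    pct = 100 * (actual - budget) / budget
    direction = "over" if pct > 0 else "under"
    return (f"{line_item} came in {abs(pct):.1f}% {direction} budget "
            f"(${actual:,.0f} vs ${budget:,.0f}).")

def finalize(draft: str, override: Optional[str] = None) -> str:
    """Human in the loop: keep the machine's starting point, or overwrite it."""
    return override if override is not None else draft

draft = draft_commentary("Travel", actual=58_000, budget=50_000)
print(finalize(draft))  # the AI's 95%
print(finalize(draft, override="Travel overage driven by the Q2 sales kickoff."))
```

The human edit costs seconds; the machine-made starting point is where the time savings come from.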
Glenn Hopper:
Yeah, I work on AI solutions for clients all the time, and a lot of times SMBs can be left out of this because they don’t have the budgets, and a lot of times they don’t have the data to build these bespoke projects. So something like Datarails having it built in... And I’m seeing more and more now, even like in Snowflake, there’s Snowflake Cortex, where you can interact with your data. There are just more and more tools, and I think for a lot of businesses, their first experience with AI that’s truly integrated in the system, not just doing stuff in ChatGPT at the employee level, is gonna be when the software companies start doing it. It’s gonna be built into your ERPs, your CRMs, your billing systems and all that. It’s just a matter of time. But to our previous points on the human in the loop, nobody’s ready to turn it over and say, oh, we’re gonna let the bot put together our board package. But if it can get you 80% there, and you become more of an editor than a doer, that’s super exciting.
Nathan Bell:
Yeah, and I’d love to chime in here; this is a really good topic. I was at Gartner in the finance practice and the data analytics team, and I got to see and advise CFOs from companies of all sizes. One of the most common things they had me do was review their board pack, review their PowerPoint presentations: trying to figure out, how much do I put in there? What’s the narrative? What should I talk about? They’re taking snapshots and embedding Excel spreadsheets into PowerPoints and trying to drop the whole P&L in there, all that type of stuff. And then what would happen is they would try to get out in front of it: I think I know all the questions that are gonna come my way. I’m gonna send out this board pack and hope somebody has questions ahead of the meeting, because I don’t want to get ambushed.
But inevitably there are gonna be a ton of questions in real time in those board meetings, exec offsites, exec meetings that they’re not gonna be able to answer as one-offs. Now what I’m seeing is folks able to, in real time in that meeting, say give me five minutes, and then just ask, right? Whether it’s the Datarails solution you’re talking about or embedded LLM-type solutions, all of a sudden you do have that answer, ’cause you want to keep engaging in those meetings. You don’t want to disappear because you’re stuck behind a spreadsheet trying to answer a question from 20 minutes ago. So now the CFO can still be present, right, and still get that answer, so the conversation keeps moving.
Glenn Hopper:
Yeah, that’s great. So we have a couple more minutes before I wanna get to the next segment, and we have a couple of questions here. Dinesh had a question: does AI learn from what we override, and change the comments in the future? And I don’t wanna keep everything just limited to Datarails here. Obviously they’re the sponsor, but we’re not trying to just do a Datarails commercial. So I would say that it’s gonna depend on the interface. In ChatGPT, there’s a memory: if you were coming out of the system and using just ChatGPT, it’s got memory, and you can sort of build things into the prompt by directing that memory over time. For AI in general, it just depends on the application.
So if you’re talking about changing the model’s response because of the way it was trained, that’s not gonna happen. When you give feedback, there is a feedback loop in the training of the models, but that all happens before we’re using it here with generative AI. So that feedback could happen more in a memory that changes the prompts going forward. It’s a vague answer on my part, and if anybody wants to add any color to it, please do, but that’s a sort of blanket statement. It’s tough to respond to that in general: no, the model itself doesn’t learn, but the prompt can be changed a bit in the memory of the system, if that makes sense. Anna Y, I’ll let you address the question on AI prompts in Datarails: are they secure?
Anna Yamashita:
Yeah, so within Datarails, every individual tenant is kind of its own little ecosystem, right? It’s completely closed. And we do that for security reasons. We wanna make sure that you have your data completely secured; we have all of the security certifications that we would want and need. So the prompts that happen in Datarails are all secured within your own tenant, which is great from a security perspective. But I would say that that closed idea means that sometimes, when we want things like market data, we have to think about how we’re gonna get it into that system, which is always just another conversation around how we wanna get that done.
Glenn Hopper:
Yeah. Great. Thanks. Okay, well, let’s get to some overview of tools-in-action stuff that you guys are using today. I know each of you has helped companies implement AI or advanced analytics in finance and will have some experiences here. Maybe Anna T, we’ll start with you. If you’ve got, like, a three-to-five-minute story from your fractional CFO work: identify the challenge, what tool you used, and, if there’s any way to show ROI, because ROI is a question we get all the time, right? If you’ve got that, that would be great too. So <laugh>
Anna Tiomina:
Yeah, that’s really a great question. ROI is always first: it always gets on the table when you’re discussing any kind of AI-based automation. So I have some examples where ROI was above, like, a thousand percent, and these were mostly custom solutions, and I unfortunately cannot share the details. I can only share that the executives who were presented with this project said, we were stupid not to be doing that. So sometimes the use cases are very, very impressive. It doesn’t happen all the time. And the tool that I’m using most in my fractional CFO work is ChatGPT, just because it’s so versatile. I used to use a lot of various LLMs; I used to have GPT, Gemini. But now, because ChatGPT has this memory feature, it has learned so much about me that all the answers I get are already tailored to what I need, and I keep a lot of information about my projects in it.
So this is something I cannot work without in my clients’ situations. Sometimes just getting corporate access to ChatGPT, putting a policy around it, and teaching the team how to use it brings a great improvement in productivity. Now, ROI in this case is a little bit harder to measure, right? Because it’s a lot of small things you do every day, which you do faster. And it’s not that you are working less, like we discussed a couple of minutes ago. You are bringing more value; you are able to be a part of the discussions where previously you were crunching numbers to keep the discussion running. You are bringing more insights to your leadership team, when previously it took you, I don’t know, several days to do a data analysis or to put some report together. So there is definitely ROI. There are companies who are measuring this ROI, and I’ve seen numbers like 30%, 50%, but it’s really hard to say, and it depends on the role a lot.
If the role included a lot of manual data processing, the time saving will be higher. If the role is executive and you spend a lot of time in meetings, then maybe not so much, but there are still a lot of productivity gains. I have some experience implementing custom AI tools, and I would say my experience is mixed here. Even with tools like Datarails, which is an industry standard in some ways, a lot depends on what data the company has and what quality of data they have. To be able to use these data-processing AI tools, you really need to look at the data first, and maybe at the quality of the processes. Unfortunately I cannot share the details of this project, but I’m also curious to hear what you, Glenn, have. I’m sure you have these examples of, like, over a thousand percent <inaudible>, right? And this is a really, really exciting thing.
Glenn Hopper:
Yeah, and I think Nathan and I can probably speak to very similar use cases; I’m probably gonna let him take that one. There are a lot of questions here around security and data privacy, and I think several of the panelists, if not all of us, are champing at the bit to talk about ’em. We’ve got some time at the end for FAQs, and maybe we can set aside some time there. And I know, Anna T, that’s something you wanted to talk about as well. So I’m gonna actually push that down the road. But Anna Y, I’m gonna ask you to answer one of these questions as you answer yours. I’m looking at the chat now; there was one about the model. Do you see it, Anna? The, um,
Anna Yamashita:
Yeah: what LLM does Datarails use?
Glenn Hopper:
Yeah, if you can answer that and then I actually have a panel question for you as well, so <laugh>
Anna Yamashita:
Yeah, for sure. So in terms of the LLM we use, we use the OpenAI LLM, so the same backing as ChatGPT. That being said, the way we use the LLM is to help us understand how you are speaking, right? Understand the context of words, what sentences mean. In terms of the actual data we use to answer your questions, it is the data that is within your specific environment. So I think that’s a very necessary distinction that can get mixed up sometimes, especially when we start thinking about the security questions that have been popping up a little bit too.
Glenn Hopper:
Yeah, exactly. And as a sort of follow-up to that, the question I really wanted to ask you from this section: I’d love to hear a client story from Datarails that showcases how AI has helped with planning, automation, or reporting, any of the new functionality that’s built into the tool.
Anna Yamashita:
Yeah, so in terms of reporting, like I was talking about with that storyboard feature, that is something people are using all of the time. One of our newer things that has come up more recently is Datarails Cash. What Datarails Cash does is allow us to pull in data from banks, and from there we can make the categorization a lot easier, so we can make our reporting a lot easier. Where does AI fit into this? That’s gonna be in the categorization piece, right? We wanna make it easier by having Datarails actually do the work of some of these bank categorizations, so we can say, okay, this transaction goes with this vendor; this memo means that this is going to go into this kind of expense. Things like that. So just like with the other pieces of the AI I’ve talked about before, it makes these categorization things more of a review process than a tedious task that you are dreading to do.
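The bank-categorization step can be pictured as “categorize everything you can, route the rest to review.” Datarails does the mapping with AI on the memo text; this sketch uses hand-written keyword rules (all vendor names are hypothetical) just to show the shape of that workflow:

```python
# Hypothetical memo-to-category rules; in the real feature this mapping is
# learned by AI from the memo text rather than hand-written like this.
RULES = {
    "delta air": "Travel",
    "aws": "Cloud & Hosting",
    "linkedin": "Recruiting",
}

def categorize(memo: str) -> str:
    memo_lower = memo.lower()
    for keyword, category in RULES.items():
        if keyword in memo_lower:
            return category
    return "NEEDS REVIEW"  # only unmatched transactions reach a human

for memo in ["DELTA AIR 0062341", "AWS EMEA monthly", "ACME WIDGETS 774"]:
    print(f"{memo:22} -> {categorize(memo)}")
```

The “NEEDS REVIEW” bucket is the point: the human’s job shrinks from categorizing everything to reviewing the exceptions.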
Glenn Hopper:
Yeah. Um, that’s great. And, you know, Nathan, I think you and I are spending a lot of time in this space. I really want to hear from you at the enterprise level, and bonus points <laugh> if it involves master data automation or, like, a significant digital transformation.
Nathan Bell:
Yeah, absolutely. And to what Anna was just talking about, I’ve got a lot of examples, and to a certain extent I can’t mention the names, ’cause a lot of the clients were at Gartner, as you can imagine. But this idea of mapping your data to your GL, your chart of accounts: this is just a universal problem, and so much time is burnt there. A lot of the engagements that I would go on or advise on started on kind of the boring side of things: to your point, the data governance, the master data management, having a data governance committee where you’re agreeing on the metrics and the KPIs. Because you can’t go tell AI and train AI to map it all, right? Not unless you know and agree internally what you’re going to call something.
And I call it, you know, the algorithm or calculation, but the data ingredients: where are they coming from? If you can’t sit down with your CTO, head of marketing, and head of sales and say, this is how we put things in our system, this is how we’re doing ours, and you can’t get any kind of agreement, then it doesn’t matter what tool or technology you roll out; it’s going to fail. The number of clients, hundreds, who would talk to me two years into huge technology investments that were still in the red on their ROI because they had no type of structure, no data governance, was incredible, and the master data management was a mess. On the enterprise side, what I would typically see with some of my larger clients with bigger problems was: hey, we’ve acquired these multiple companies over the years, and we’ve got three ERPs now running at the same time, or a holdco with subs.
We’re trying to do a roll-up, and these ERPs are all capturing data in different ways. We don’t know; this is all still manual. They might even have a Datarails, they might have other systems, but it’s not gonna be helpful when you’ve got multiple ERPs running. For me, you start with that data governance and master data management, and then we’ll go and set up, you know, the data warehouse or data lake. I like a lakehouse, ’cause if you’re doing any kind of advanced analytics, you have that flexibility. And then you need a data hub, and that’s the semantic layer where your glossary, your dictionary, all of those things you agree on are gonna live. I also like to do data governance watermarking, so when you’re looking at reports coming from a BI tool, Power BI, Looker, or Tableau, you’ll see a mark that says: this has passed internal governance standards for going out, external reporting, that kind of thing.
Glenn Hopper:
<laugh> Can I interject? Just because you and I have talked about this before: that watermarking is great, especially if you’re using defined KPIs. But with self-serve data marts, where people build their own reports, do you watermark if somebody makes their own <laugh>?
Nathan Bell:
Yeah, so there is a problem there. We were trying to solve the problem of having so many incoming questions about the reports on the P&L and balance sheet, you know, from different departments. We were like, I wonder if we rolled out self-service analytics, would that just solve it, so the finance team won’t have to be constantly hit up with all these questions? It actually just created more problems for us. Like you said, we went from having 50 standard reports and BI dashboards to 200, 250, 300, and they were showing up in all these meetings and other high-level reports, and nobody knew what was the truth in those. As far as tools, you know, I’ve worked with all the major BI players. I’ve worked with Snowflake, which I’m a big fan of, as you know. I’ve worked with AWS. We’ve worked with data clean rooms; I think AWS has a really good solution there, and I’ve worked with Snowflake’s data clean room as well. I’m kind of agnostic: at VAi Consulting, we believe in a composable tech stack. I know a lot of CFOs like the bundled package: oh, we’re gonna be an all-Microsoft shop or AWS shop or Google Cloud shop. But I think you need that flexibility to choose the right tool, not just be stuck with everything from one vendor, you know, Microsoft.
Glenn Hopper:
You know, as you were talking about the data definitions, it kind of made me think about schema and how we interact with all this data. Different companies have different levels of data, and I’m not gonna go too far down this rabbit hole, but for any of the crew tuned in today who’s more on the technical side, or wants to go back and have your team research something: right now I’m kind of losing my mind over Model Context Protocol, the ability to interact with your own data. It replaces, you know, all this stuff we used to have to do through these weird RAG solutions to get to data. Model Context Protocol is making it possible, whether you have data in your SQL database or Snowflake or whatever it is, to interact with that data, or to get stuff from the web, and the tool use.
And then the other one right now is n8n. So n8n with Model Context Protocol, the ability to integrate and interact with data: it’s really opening up all kinds of new pathways for automation and for just interacting with the data. If we have time, we can go further; I don’t wanna go too far down it, but I did wanna mention those as tools in action. And I think we’re okay on time, so let’s just do a lightning round: maybe one answer from each of you, and we’ll go Anna T, Anna Y, and then Nathan. Most valuable tool you used in the past year for FP&A, and why?
Anna Tiomina:
ChatGPT for me, just because it’s so versatile, and I work with a lot of clients, so it helps me with various things.
Glenn Hopper:
So Anna Y, I feel like I know what your answer’s gonna be. <laugh>
Anna Yamashita:
I feel like my answer is pretty biased, so I’ll give two. Datarails is great; we’re a great tool. And I would say ChatGPT is really useful, specifically around, like, definitions and stuff, right? Because then we don’t need to remember all of these different formulas or different things anymore. Now, instead of an encyclopedia, I have a very easy reference to go to as well.
Nathan Bell:
Um,
Glenn Hopper:
Nathan, what you got?
Nathan Bell:
I mean, I use ChatGPT all day, every day, of course, as you know. And we create a lot of custom GPTs. I’m a huge fan of NotebookLM, and one of the most recent things I’ve done with it is create a prompt vault, where I can now keep all of my prompts in one place and easily access them, so I’m not having to redo my work again and again and again.
Glenn Hopper:
Great. Yeah, great tool. When NotebookLM came out... I mean, the arms race in AI is just so fast, it’s hard to keep up with everything, but every couple months something comes out that just kind of blows your mind. And NotebookLM, creating an audio podcast from your notes, was mind-blowing to me: getting, you know, a 17-minute podcast that you didn’t prompt in any way. For our audience who doesn’t know what NotebookLM is, it’s from Google; I think it’s notebooklm.google.com. You can take data from multiple different sources, PDFs, research papers, URLs, videos, aggregate it all, and it puts it into a single folder where you can interact with the data, create FAQs, create quizzes. It’s super cool, and it does those audio overviews. A super cool product. And everybody who’s on this, I’m sure, knows what we mean; we don’t need to define ChatGPT anymore. But...
Nathan Bell:
I wanna just add: when you make a really good prompt that you’ve worked at for, you know, 30 minutes to an hour, and you’re really proud of it, right? You obviously don’t wanna lose it, and everybody probably knows what I’m talking about: when you finally get it right and it works, when 99% of the time you use it, it gets you what you need out of it. Having a way to store and archive that and access it really quickly is why I think NotebookLM is really fantastic there. Yeah. Yeah.
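Nathan keeps his vault in NotebookLM, but the underlying idea, store a hard-won prompt once and retrieve it by name, needs nothing fancy. A hypothetical minimal version as a JSON file (file name and prompt text are invented for the example):

```python
import json
from pathlib import Path

VAULT = Path("prompt_vault.json")

def save_prompt(name: str, prompt: str) -> None:
    """Add or update a named prompt in the vault file."""
    vault = json.loads(VAULT.read_text()) if VAULT.exists() else {}
    vault[name] = prompt
    VAULT.write_text(json.dumps(vault, indent=2))

def load_prompt(name: str) -> str:
    """Pull a saved prompt back out by name."""
    return json.loads(VAULT.read_text())[name]

save_prompt(
    "variance_commentary",
    "You are an FP&A analyst. Given actuals vs budget by line item, "
    "write one sentence per material variance (over 5%).",
)
print(load_prompt("variance_commentary"))
```

Whether the store is a JSON file, NotebookLM, or a shared doc, the payoff is the same: the hour of prompt-crafting happens once, not every time.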
Glenn Hopper:
FP&A Today is brought to you by Datarails, the world’s number one FP&A solution. Datarails is the artificial-intelligence-powered financial planning and analysis platform built for Excel users. That’s right: you can stay in Excel, but instead of facing hell for every budget, month-end close, or forecast, you can enjoy a paradise of data consolidation, advanced visualization, reporting, and AI capabilities, plus game-changing insights, giving you instant answers and your story created in seconds. Find out why more than a thousand finance teams use Datarails to uncover their company’s real story. Don’t replace Excel, embrace Excel. Learn more at datarails.com.
So I think you guys, and myself included, I’m about as big an AI evangelist as there could be out there, but those of us who are power users of the tools know that it also has limitations. There’s no shortage of hype around AI, and I want to talk about what’s real. Anna Y, Datarails came very quickly to market with their tool, while the big ERP players out there keep talking about it but are slow to roll it out; I think they just haven’t figured out how to solve for hallucinations and all the issues that come up. So Anna Y, since you guys are using it now, today, and I’m sure you get a million questions from customers and potential customers: what do people often misunderstand about where we are right now with generative AI and the tools available for FP&A, and is there somewhere we need to recalibrate expectations?
Anna Yamashita:
I would say, to Anna's point before, that a lot of people think, okay, this is going to do a job completely end to end for me; it's going to replace a person that we already have, right? A big thing to understand about AI, in my perspective, is that it has to learn. When you start from scratch, you can almost think of it as a baby, in that you have to create its world. You have to tell it what it needs to know. In the FP&A context, you have to teach it about functions, teach it what a variance is, what the budget is, what we need to look for. And so it is crucial that we have good people, people who are good at communicating, in order to make AI functionality really useful for your team.
And I think that once people understand that better, then one, there's a bit of a sigh of relief, like, okay, cool, I'm not in competition here, which is always great. But there's also an understanding of, oh, okay, this is an incremental way to make my life better. Those increments might happen very quickly, but it is still incremental. It's not just a snap where you no longer understand what you're looking at, because you are a massive partner in utilizing AI functionality to get your board deck, to get your plans, to get all of those things done in a more efficient way.
Glenn Hopper:
Great. Nathan, any trends or technology? You mentioned Gartner <laugh>, and we are rocketing over the Gartner hype cycle here, but are there any trends or technologies right now that you think are actually overhyped? Or, on the flip side, is there something that we're perhaps under-hyping in the midst of all this? Are there some out there where you're already seeing: this is gonna be a fundamental change?
Nathan Bell:
There's a lot to unpack there, but a couple of things. One is all these AI solutions that claim to be fully autonomous out of the box, and that's just not true. Unfortunately, there is also a lot of AI-washing going on with vendors, where they're like, I've got AI now, right? And it's never gonna work out of the box. It's what Anna Y was just talking about: there's no context, it has no domain knowledge, no business knowledge, it's gonna have to train. So when you see software solutions, and there are a lot I like that are out there, claiming it's just gonna roll weeks after deployment, it's a deception. Finance is messy, data isn't clean. You still need that human oversight and governance, if you will.
But there are some cool tools happening that, to me, are addressing some huge needs: dealing with those multiple versions of truth, dealing with data governance, and obviously privacy. And I'm a huge fan, you heard me mention it earlier, of data clean rooms. I think they're extremely underutilized. Snowflake was kind of one of the first ones; I worked with Habu, which is now part of LiveRamp, and AWS has a really good solution there. If you're in an industry that's highly regulated, healthcare, medical, those types of things, you need to be looking at these solutions as well. And then there are some things just for the FP&A folks that are really gonna make your life easier. One I think you're familiar with, Glenn, is Amalgam, another example, making your GL entries that much quicker and easier, because it can be trained pretty quickly, right?
You tell it, this is what this is, this maps to here, and it makes suggestions, and then you're reviewing and approving most of the time instead of actually making those entries. Another one that I'm pretty excited about is anomaly detection, and there are so many applications within finance and FP&A for that. Whether it's finding revenue leakage situations, or, you know, everybody does a leaky-bucket exercise when times are tough: finding those areas, or advantages we're not taking when it comes to invoicing, like paying early to get discounts; our clients with our invoices, are they not paying the penalties they're supposed to be paying for being late; fraud, are we seeing multiple invoices for different companies coming from the same address, and what's going on there? You get pretty quick ROIs with those solutions as well.
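The duplicate-invoice fraud check Nathan describes can be sketched in a few lines of pandas; the data and column names here are invented for illustration, not a real AP schema:

```python
import pandas as pd

# Toy invoice data; vendor names, addresses, and amounts are made up.
invoices = pd.DataFrame({
    "invoice_id": ["A-100", "B-205", "C-310", "D-411"],
    "vendor":     ["Acme Co", "Acme LLC", "Beta Inc", "Acme Co"],
    "address":    ["12 Oak St", "12 Oak St", "9 Elm Ave", "12 Oak St"],
    "amount":     [5000.0, 5000.0, 1200.0, 5000.0],
})

# Flag cases where different vendor names share one remit-to address
# and the same amount: a cheap first pass at the fraud pattern above.
suspect = (
    invoices.groupby(["address", "amount"])["vendor"]
    .nunique()
    .reset_index(name="vendor_names")
)
flags = suspect[suspect["vendor_names"] > 1]
print(flags)
```

In practice an anomaly-detection tool would go well beyond exact matches (fuzzy name matching, statistical outliers), but the core idea of grouping and counting is the same.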
Glenn Hopper:
And not that we're gonna go deep into detail here, but for a lot of people tuning in: prior to generative AI, if we'd been having this panel on AI in finance, we would have been talking about a lot of machine learning. We'll call that classical AI, pre-generative AI.
Nathan Bell:
Mm-hmm <affirmative>.
Glenn Hopper:
It's, you know, nerd land, where you have to write Python and you have to be a coder to use all of it. And now generative AI is doing all this. So with all the examples you were just talking about, I think some people in the audience might be thinking, can I do that in ChatGPT? And there's a thing we need to clarify here. When we talk about AI, we're talking about the broad field of AI, which includes the traditional machine learning you were talking about, and the new generative AI also falls in that. It is an important distinction. For things like fraud detection and anomaly detection, generative AI is maybe a top layer that lets you interact with those kinds of things in natural language, but the real machine learning and deep stuff is happening in the background.
So just a point of clarification around the differences and the types of AI there. Anna T, I know when you come into a client, they may have this FOMO: everybody's doing AI, I've gotta do AI, I don't even know how to AI, where do I start? So when you're helping clients pick which tools to use, how do you guide them? Do you give them red flags to watch out for? Where do you go with that?
Anna Tiomina:
I mean, it's not only about picking the tools, right? Before you even get to picking the tools, I try to build some kind of roadmap and identify the cases where they don't even need the tools, where having corporate access to an LLM with a secure environment, a policy, and training around it is enough. Sometimes the process they're planning to improve is so critical for them that we would go with something really custom, because the security, or maybe the regulatory landscape they operate in, demands very high attention to this aspect. And then in the middle are the out-of-the-box tools. So it's always a mixture of things that you are implementing. As for red flags in the tools, if we get to the tools: if the AI tool is saying that it is everything to everyone, this is a huge red flag for me.
Whatever it covers, some kind of analysis, whatever finance process it is, it cannot be everything for everyone. It should be focused on some kind of company. Enterprise, startup, they all operate in very different environments, right? So this is one of the red flags. And another one is if the company is not able to answer how the data is handled, who owns the data, and how security and data access are managed. If they're not able to answer those questions, this is a no-go.
Glenn Hopper:
I'm watching the clock go by and it's flying, and I really wanna leave time for Q&A. So I'm gonna go lightning round on this, from each of you, and we'll go Anna Y, Nathan, Anna T: one question every finance leader should ask before signing a contract with an AI vendor.
Anna Yamashita:
I would just ask: can you explain to me in very simple words what your AI functionality is? Keyword is simple words. I don't want buzzwords.
Glenn Hopper:
Yep. Nice, nice. Nathan?
Nathan Bell:
My first question would be: how long is this contract? But the second is, obviously, what is your engine? Is it ChatGPT? Are we talking Claude? Is it custom? Is it Llama? Obviously uncover that quickly. And then the third, I know you said one, would be: how clean does my data have to be out of the box for this to work?
Glenn Hopper:
Yep. Great one, great one. Anna T?
Anna Tiomina:
Who owns the data, and how is it managed?
Glenn Hopper:
Yeah. So I do wanna talk about what everybody's struggling with around this: building data fluency in FP&A teams. Because we've been talking about digital transformation for three decades <laugh>, and we've been talking about data democratization for 15-plus years at this point, and everybody getting their data fluency higher. But it wasn't really until generative AI came along that we got the clarion call for this. It's like, oh, now is the time. So AI is only part of the equation; the other part is our people. We're all scrambling right now trying to figure out how to use this, but I wanna know, from your perspective, what it takes to build real data fluency inside FP&A. Because we know that to use AI, it has to start with the data. So Anna T, we'll start with you. How do you help teams upskill, and what does that look like in practice?
Anna Tiomina:
Yeah, so my practice is maybe unusual, at least for the panel we have here. I work mostly with smaller companies, who sometimes don't even have a dedicated FP&A team. And I usually don't start with the data assessment, because everybody thinks their data is perfect, or at least ready for AI. So we start with: okay, how can you apply AI, and in which processes can you apply it? Then, if everything goes well, the finance team gets access to various tools and they start experimenting. This is where they understand that the data is not perfect at all, and that the answers they are getting from AI are not great, not because they are asking the wrong questions, but because their data is kind of dirty. So I don't start with data usually, although you are very right to say that the people aspect is a very important aspect of that.
I start with the tools, and I bring the teams to the understanding that the data needs to be clean and ready. Sometimes they need to take a step back and take care of the data, implement some kind of data warehouse, before they can even move forward with any kind of AI implementation at scale. Everybody understands that they need data, but not a lot of people understand what it actually means to have clean data: what the definition of clean data is, where you need to pay attention to the sources of the data, et cetera, et cetera. So it's a long process. And the organizations I work with do have a lot of bad data.
Glenn Hopper:
Yeah. Nathan, I know you've worked across finance and tech, and finance and tech really seem to be joining more and more together. I'm wondering: how do you bridge that gap between the business teams, the domain experts on that side, and the technical teams and the domain expertise that they have? We have to work together to the point where there's almost an expectation that everybody has a bit of both skill sets in them, which is hard to come by, but I don't know what you're seeing out there with that.
Nathan Bell:
That's right. And listening to Anna T talk as well: on many calls I went on at Gartner with CFOs, inevitably we would end up bringing a CTO or CIO onto the call, because, to your point, the finance team is data-illiterate and the CTO's engineering team is finance-illiterate. There are so many issues where it's, hey, IT just stood up all of our BI dashboards, and we gave them the specifications, but the moment they deployed them and we all got in and started trying to use the dashboards, none of it made sense. Even the labels of the data made no sense, and we couldn't build anything, we couldn't use it, and all this time and money was wasted. It really does start by building a shared language between the two. And that goes back to what I was saying on the data governance side of things: getting together and not just mapping the data.
IT knows how to do a data map. It's mapping it to decisions, and knowing who's creating the data, who's using and consuming the data, who can update the data. Who's actually in control? If I need a field changed or added in my CRM or my ERP, who's gonna do that, and do they know what that field's being used for? They might just see a Jira ticket come by, right? IT sees this ticket, like, hey, we need to make this field structured and required, moving from unstructured. Well, why? What's the finance team trying to do with this? And that bridge between them, it's that common language, as I mentioned before: agreeing on the metrics and KPIs and saying, hey, here are the ingredients that are gonna go into this, here's how finance is using it, here's how it's showing up, business-decision-wise, in the company. And on the other side, finance doesn't understand how complicated it can be for IT, from a data engineering standpoint, to actually go get that data and have it wrangled in a way that's useful.
Glenn Hopper:
Yeah. Anna Y, what advice would you give to a finance pro who's not technical but wants to become more data-savvy, more AI-literate, and get up to speed with what we need to know for the AI-driven world we're in now?
Anna Yamashita:
Yeah, I would say, if possible with the AI technology you're using, I would try to find the answer to a question I already know the answer to, one that took several steps to get. So for example: say I'm looking at my variance, budget versus actuals, for revenue in total, and I wanna understand where that variance came from. I wanna see it split up by customer; I wanna see which customer contributed to that variance the most; then I wanna see all of that customer's information split out over the months, right? You can always get to that kind of answer by pulling reports or asking different people.
And so then you get to that final answer. If you're working with a tool that is supposed to be able to give you a similar kind of answer, try to get it to give you that answer, whether it's a chatbot you're prompting or something else. Try to get down to that level of granularity. What you'll do is, one, probably get a little frustrated, because it will probably take some tinkering in terms of what your prompts are. You'll also see whether there are limitations: what do those limitations look like, and how can I actually use this tool moving forward? And because you already know the answer you're looking for, once you get to that final answer, you will know: okay, did this actually help me or not? You've gained so much information, because you already knew where you were trying to get to, but now you've filled out all of that context around how this tool is actually going to help you moving forward.
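The drill-down Anna Y walks through, total revenue variance, then by customer, then by month, can be sketched in pandas; the numbers and column names are hypothetical:

```python
import pandas as pd

# Hypothetical budget-vs-actuals data at customer/month grain.
df = pd.DataFrame({
    "customer": ["Acme", "Acme", "Beta", "Beta"],
    "month":    ["Jan", "Feb", "Jan", "Feb"],
    "budget":   [100.0, 100.0, 80.0, 80.0],
    "actual":   [90.0, 85.0, 82.0, 79.0],
})
df["variance"] = df["actual"] - df["budget"]

total_variance = df["variance"].sum()                    # step 1: total
by_customer = df.groupby("customer")["variance"].sum()   # step 2: by customer
worst = by_customer.idxmin()                             # step 3: biggest driver
detail = df[df["customer"] == worst][["month", "variance"]]  # step 4: that customer by month
print(total_variance, worst)
print(detail)
```

This is the multi-step answer she suggests using as a benchmark: if you already know it comes out this way, you can judge whether a chatbot-style tool gets there too.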
Nathan Bell:
Anna, you make a good point. I just wanna add: at VAi Consulting we design AI projects around real business questions, like can we predict churn by product line, or where is cash flow outpacing our plan? That anchors data fluency in impact, which is key.
Glenn Hopper:
To your point. Well, we're actually right on time, and I know Anna T definitely wanted to talk security. I have a whole soapbox I want to get on <laugh>, but I'm not gonna take up the whole panel's time. However, I have the controls here. We're gonna end in 11 minutes, but maybe we'll have overtime and talk security for anybody who wants to stay after. I know in Tel Aviv it's almost 10:00 PM right now, so Anna is probably ready to choke me for saying I'll stay on longer <laugh>, but we may throw another five minutes on the end for interested people. Alright. Security and compliance: with generative AI, you're taking your data and uploading it into the cloud when you use these tools, and there are a lot of questions around that. So before we go to the broader Q&A, and there's no way we're gonna get to all of these in the time: Anna T, you mentioned this in the prep. What should finance leaders be thinking about as they integrate AI into sensitive financial processes?
Anna Tiomina:
Yeah. When you raised this security question, I wanted to say that I see companies that, on the one hand, think that AI and security cannot go together, and so they block all the initiatives that are even remotely AI-related, specifically for finance teams. In reality, there is a way to handle data security; it's not a blocker. On the other hand, I see teams who just go forward with AI without even thinking about all the security implications. So I just wanted to bring this to the attention of finance leaders: on one hand, this is not a blocker, but on the other hand, this is a very important aspect of AI implementation. That's it. I don't wanna spend more time on that, but I see a lot of bad stuff happening there, especially in these rushed implementation cases.
Glenn Hopper:
When I get on my soapbox in our after-hours special that's coming up, I'm gonna start a fight with CISOs, is what I'm gonna do <laugh>. But we'll save that for later. So, Nathan or Anna Y, anything you'd add, especially from a system or implementation standpoint?
Nathan Bell:
Yeah, I just wanna add: I always see the finance lens of audits. We always begin with access controls and audit trails, especially when AI is touching sensitive financial data. We encourage finance leaders to ask vendors one key question: can your outputs be traced, tested, and explained in plain English to my auditors? If not, it's not enterprise-ready. The last place you wanna be is sitting there with your auditors; everybody knows that. Yeah.
Glenn Hopper:
Anna Y, anything from your end?
Anna Yamashita:
I would say, from an implementation standpoint, just like Anna was saying, there are trade-offs when you want security on the finance team. Some of those limitations are gonna be things like: can we do an automatic feed of market data? Can we do an automatic feed of currency exchange rates? Can we do an automatic feed of anything else? Just knowing that you are balancing those two things is important to realize when you're going into your implementations.
Glenn Hopper:
Yeah. Great, great. Okay, eight minutes for Q&A. I have thoughts on all of these, but I'm just the host, so I'm gonna shut my mouth a little bit <laugh>. Fine <laugh>. Let's see, a couple of questions on Microsoft Copilot. I've been pretty vocal about my thoughts on it. Anna T, you're shaking your head. Gimme your thoughts.
Anna Tiomina:
I do. Yeah, I've heard some of your feedback, and I have not met a person who would prefer Copilot to ChatGPT, but there are a lot of organizations who have Copilot by default. So my answer to that: at least get good training. There is a way to improve your interaction with Copilot. It is different, it's not the same as having ChatGPT, but if this is your situation, at least get the training and try to get the most out of the tool.
Glenn Hopper:
Yeah. Yeah. Nathan or <inaudible>?
Nathan Bell:
Some of our listeners probably don't have a choice, 'cause corporate has locked out ChatGPT and maybe they're on Copilot. I often try to use Copilot to do the things I use ChatGPT for, and it just always disappoints me, and I keep waiting for it to somehow catch up or be better. It should be better integrated with Excel, and embedded AI in Excel is a dream, but in my personal experience, it's just not there.
Glenn Hopper:
Yeah. So I will say I'm bullish long-term on Microsoft. They put, what, over 15 billion into OpenAI, they've done their acqui-hire thing, they're gonna get it. We're gonna see Clippy's revenge, Clippy coming back <laugh>, and he's gonna be helpful <laugh>. Actually, there were a lot of questions around prompts, and Nathan, you mentioned a prompt library. We've got six minutes left and I don't wanna try to squeeze too much into it, so maybe we go round robin here: Nathan, Anna Y, Anna T. Prompt tips?
Nathan Bell:
For me, start with context. The hardest part, and why I need a prompt library, is writing that context. Once it gets to know you, as Anna mentioned earlier in the call, it's much easier, but always give it the context, and you don't wanna rewrite that every time, because you don't know how inconsistently you might rewrite it. So write it once: embed as much of your company and industry jargon, knowledge, and domain expertise as you can in there, and then keep it consistent every time. Keep your outputs consistent with your organization, because even within an industry, organizations talk about things differently, and keeping that context is so critical in your prompting.
Anna Yamashita:
Yeah, I tend to like to use keywords, since keywords are gonna be used more broadly, so I think that's really useful. And if you're trying to get consistent answers, ask a consistent kind of question. That's where having a prompt library is so useful, because then you can treat it like Mad Libs: take out the parts you don't need, but keep that same structure. Then you're gonna get an answer in a structure that you're probably gonna like.
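The "Mad Libs" prompt-library idea can be sketched as a stored template with swappable slots; the wording, role, and field names here are illustrative, not from the panel:

```python
from string import Template

# A reusable prompt skeleton; only the $-slots change between uses.
VARIANCE_PROMPT = Template(
    "You are an FP&A analyst at $company.\n"
    "Explain the $metric variance between budget and actuals for $period.\n"
    "Output exactly $n_bullets bullet points, plain language, no buzzwords."
)

prompt = VARIANCE_PROMPT.substitute(
    company="Acme Co", metric="revenue", period="Q2", n_bullets=5
)
print(prompt)
```

Storing skeletons like this, rather than free-typed prompts, is what keeps both the question and the answer structure consistent from month to month.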
Glenn Hopper:
Anna T?
Anna Tiomina:
Yeah. Well, my favorite tip, or trick, is to use AI to write a prompt for AI. Explain what you're trying to do, let ChatGPT handle that, and then tweak what you get. This is especially great for long prompts, so you don't have to type a lot of things into the window, and it works most of the time. In terms of consistency, ChatGPT has this amazing memory feature, so it's important to check what it has in memory about you and manage that. Tell it: forget that I'm traveling next week, focus on my work environment. If, for example, you use the same account for your work and for your leisure requests, check it. The best way to do it is to ask it: based on everything you know about me, list the five most important things. Then tell it to forget whatever you think is not relevant and add whatever is relevant for your work. The answers you'll be getting will be much higher quality if you do that and if you consistently manage it.
Nathan Bell:
Yeah, I love that tip. I just wanna add that I agree with that, Anna, and I personally like to use Claude to write my more complex prompts. I find it's more useful there.
Glenn Hopper:
Yeah. Jessica Jari had some good advice in the chat. She said: I start all my prompts with context, e.g., "You are a CFO and expert in finance specific to the FinTech industry. Explain ASC 606. Your target audience is a high school accounting class." And I might add to that: you might say, give me two paragraphs on this, or give me five bullet points. Let it know how much output you're looking for, because sometimes, if ChatGPT's feeling frisky, you'll think you're getting a paragraph and suddenly you've got a 2,000-word essay that it just spit out. So I like to give it those guardrails. We do have a couple of other questions and about two more minutes. This is a great one from Vernon: integrating machine learning requests to an outside system like R or TensorFlow.
The best example of that I can think of would be Cortex in Snowflake. Really, what that's about is integrating all these different data sources and having generative AI go in so that, instead of having to write SQL queries or whatever, you have that layer of generative AI where you can query directly, whether it's a data lake or directly into a system through an API or through MCP. That's how you could work generative AI into your ML pipeline. Nathan, any other thoughts on how you could actually do that? It's almost like low-code, no-code machine learning, but instead you're using a generative, natural-language prompt to do it.
Nathan Bell:
Yeah, I love your example as well, and the integrations, API and MCP, as we were mentioning earlier; you and I had a good conversation on that. As that gets more advanced, I think things are just gonna be so much easier. I don't know what everybody else is on, but we're on Google, and now that I can give ChatGPT access to my Google Drive and have data and shared folders just move in and out of there, that's really nice. But yeah, with the integrations, there's then the whole separate conversation of the exploding world of integrations and how to use what, and all the add-ons. There's a lot; it can be overwhelming.
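The natural-language query layer Glenn describes ultimately hands a model your schema plus a question and asks it for SQL. A minimal, vendor-neutral sketch of assembling that request (the function name, schema, and wording are all illustrative; the actual call to an LLM API is omitted):

```python
def build_nl_to_sql_prompt(schema_ddl: str, question: str) -> str:
    """Assemble the text an LLM would receive to translate a
    natural-language finance question into SQL over a known schema."""
    return (
        "You translate finance questions into SQL.\n"
        f"Schema:\n{schema_ddl}\n"
        f"Question: {question}\n"
        "Return one SQL query only, no commentary."
    )

# Hypothetical GL table; in practice this DDL would come from your warehouse.
schema = "CREATE TABLE gl (account TEXT, period TEXT, amount REAL);"
prompt = build_nl_to_sql_prompt(schema, "Total spend by account in Q1?")
print(prompt)
```

Products like Snowflake Cortex package this loop (schema in, SQL out, results back in natural language) so the analyst never sees the SQL step at all.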
Glenn Hopper:
All right, well, that brings us to the official end of our session. So thank you to our incredible panel: Nathan Bell, Anna Tiomina, Anna Yamashita. We covered a lot today: real use cases, tools, strategies. If you want to continue the conversation, anyone on this panel, myself included, would love that; connect with us on LinkedIn or reach out through our various websites. We all love talking about this stuff. This session will also be available on demand and as a podcast episode of FP&A Today. Thanks again to Data Rails and to all of you for being part of the conversation. And for anyone who wants to stick around, we'll do five minutes more on security, because that is a big issue <laugh>. So, alright guys, this is after hours <laugh>, this is sitting at the bar with a drink <laugh>. Okay: anytime, if your data is not in a Faraday cage, if you are connected
Nathan Bell:
Mm-hmm <affirmative>
Glenn Hopper:
To a network anywhere, your data is not secure. If you are uploading your data into Snowflake, if you're uploading your data into AWS, if you're uploading it into your cloud-based ERP, those are all threat vectors for your data. If you are uploading your data into Gemini, Claude, ChatGPT <laugh>, Manus, or DeepSeek, anywhere you're uploading your data is a new threat vector for you. Now, all those companies, well, I don't know about the Chinese-owned ones, but all the companies I mentioned are SOC 2 compliant. So if you trust Google not to leak your emails and your Google Sheets and all that, then why would you not trust Gemini with the data you put in there? Now <laugh>, there are also settings on all of these models, and someone mentioned it in the chat.
OpenAI's ChatGPT defaults, if you don't go into the settings and change it, to uploading your data to help train the models. Claude defaults to not using your data to train the models. I don't know where Gemini is; the different ones all have different settings. So first off, look at that setting. Now, here's where I'll fight the CISOs. I'm not a security specialist; however, I am something of a specialist on LLMs. LLMs do not learn facts, they learn probabilities. So if I go to my favorite LLM and type in "Michael Jordan is a blank," 999 times out of a thousand it's gonna say basketball player, because there are thousands and thousands of articles about Michael Jordan being a basketball player. However, he also played baseball for a couple of seasons, so maybe one out of a thousand times it'll say Michael Jordan is a baseball player.
Mm-hmm <affirmative>. But it doesn't have a memory of everything about Michael Jordan, other than how much it's been exposed to that. Conversely, as much as I like to think I'm out there and well known, if I went to ChatGPT right now and typed "Glenn Hopper is a blank," it doesn't know. There are like a hundred other Glenn Hoppers in the world that I know of, and it doesn't know anything about me. So if I had the setting turned on where my data goes up to train the model, and I put in my social security number, my blood pressure, what medications I'm taking, the chances of that leaking through all the training and coming out to a person are slim to none. Now, that's not saying be flippant with your company data or with your client data, but given the way these LLMs work, with one mention in one instance, the security risk is disproportionately overblown around all this.
And companies have to figure out a way around this. I know there are ways you can wall it off: you can run smaller models locally, there are the Azure solutions, or OpenAI enterprise; there are all kinds of ways to wall this data off. But okay, I just rambled a whole lot. I would love to hear additional thoughts from you guys on that and how you're handling it, because I say all that, and yet when I'm dealing with client data, I have to tell them: look, this is what we're doing, is it okay if we use your data in this way? So I know how I treat my own data, but it's different with company data and client information.
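Glenn's point that LLMs learn probabilities, not facts, can be made concrete with a toy next-phrase frequency model; the counts mirror his thousand-to-one example and are purely illustrative of how real models weight training data:

```python
from collections import Counter

# Invented corpus: what follows "Michael Jordan is a ..." in training text.
completions = ["basketball player"] * 999 + ["baseball player"] * 1

counts = Counter(completions)
total = sum(counts.values())
probs = {phrase: n / total for phrase, n in counts.items()}
# A single rare mention barely moves the distribution, which is why one
# stray detail in training data is unlikely to be reproduced verbatim.
print(probs)
```

A real LLM's next-token distribution is vastly more sophisticated, but the intuition is the same: one occurrence among millions carries almost no probability mass.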
Nathan Bell:
How good is your cyber insurance? <laugh> Go ahead, Anna.
Anna Tiomina:
I agree with you, but it's hard to sell to the security guys.
Glenn Hopper:
Oh yeah. Oh, a hundred percent. Yeah, they will <laugh>.
Anna Tiomina:
But the important thing is: even if you are concerned that your data will be used for training models, then by all means don't do it on a free account. It's not a fortune to pay for a paid account and switch that setting off, the "don't use my data to train the model" one. That is already good enough. We don't have any standards around this, but I would say that's a good standard. Then, if you are concerned about certain kinds of data, explicitly clarify that in the policy and train your employees on what is okay and what is not okay. It is much worse when people don't know what is good for the company and what the company's tolerance is, right? And going back to what you said: yes, probably my data will not show up, but if I'm handling sensitive plan data, if I'm handling medical records, I would be very, very cautious about how these things work.
And nobody knows exactly how, right? So you don’t want to be responsible for leaking your client data or your patients’ data. But for a lot of the data a company handles, it doesn’t really matter that much, and having a corporate account, some training, and some guardrails is good enough. So don’t be too restrictive. This is what I’m trying to get from CISOs: don’t be too restrictive. They tend to be over-restrictive, where any level of data confidentiality is a no-go for an LLM, and for finance teams that always means you can use it on nothing. Even the bank email is something that I <inaudible> on contract. No. So really have a conversation around that. Have a conversation about why you think this is dangerous. In many cases, CISOs are not able to answer that question, like, what is going to happen? They don’t know. Even if they have a very low tolerance for sharing data with an LLM, define what is still okay. Give us something.
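[Editor's note: one lightweight guardrail of the kind Anna describes is scrubbing obvious PII from text before it ever reaches an LLM. A minimal sketch follows; the two regex patterns (US-style SSNs and email addresses) are illustrative only, and a real deployment would rely on a dedicated PII-detection or DLP tool rather than hand-rolled regexes.]

```python
import re

# Pre-send guardrail sketch: redact obvious PII patterns before a prompt
# leaves the company. Patterns here cover only US SSNs and email addresses
# and are deliberately simple; they are not an exhaustive PII detector.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Replace each matched PII pattern with a placeholder token."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

A guardrail like this makes the policy concrete: employees can use the approved account freely, while the most obviously dangerous strings never reach the model at all.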
Glenn Hopper:
Yeah, Nathan, you were starting to say something.
Nathan Bell:
Yeah, I’ve just had a bunch of thoughts. You know, I work with a lot of CISOs, and of course they’re going to be conservative. That’s their job, their main job, right? And I like to laugh because I’ve worked with a lot of companies and I’ve seen everything from having no policy at all to having a very restrictive policy. But policy is not always common practice. They might have this document, and everybody’s like, oh, I know it’s there, but everybody’s doing their own thing and it’s not locked down. And how do you account for every individual who has access to ChatGPT or Claude or whatever it is they’re using? How do you know? It’s really hard; it’s kind of playing whack-a-mole as these different things pop up. It’s really important to do training and have a policy, but don’t overthink it unless you’re dealing with what I think of as the big three: you know, SharePoint, SOC 2 compliance, medical data, PII, those types of things.
Nobody really wants your data as badly as you think they do. I mean, I’d be terrified if I went to ChatGPT and said, what is Nathan Bell’s social security number, and had it accurately respond because somebody hacked it and put it in there. Or if I were trying to do insider trading and find out what the big market movers are doing, and somehow their data got in there and all of a sudden ChatGPT has it and is answering people who query it. That’s terrifying, right? That could happen.
Glenn Hopper:
Well, all right, we did go 10 minutes long, and we’ve gotta get a nuke to bed, so we’ve gotta <laugh> let her go. I know it’s been a long, super busy day there. Thank you to everyone who stuck around, and I’m happy to continue the conversation on any of these points offline; just hit me up on LinkedIn. Thank you again to our panelists and our guest, and to all of you who stuck around for our overtime session here. Thanks, everyone.
Anna Yamashita:
Thank you. Thanks. You take care.