Building a high-functioning partnership between BI & FP&A

Wasabi Technologies (a Boston-based cloud storage company with over $530M in funding, a decade of hypergrowth, 450+ team members, and multiple exabytes of storage deployed) sets high standards for its finance and analytics teams. Both its FP&A team and its Business Intelligence (BI) department are under pressure to deliver fast, accurate, and actionable insights. Marcos Bento, Director of Business Intelligence, and David Suter, Director of Financial Planning and Analysis, both at Wasabi Technologies, join us for a special two-parter on their approaches. They go deep into how their teams work together across functions to turn raw data into real-time financial intelligence.

In this episode: 

  • Building and gaining trust between BI and FP&A 
  • Getting from “10% out” to within a couple percentage points for forecasting 
  • KPIs such as dollars per terabyte 
  • BI and FP&A as investigative reporters 
  • Predictive analytics: forecasting storage 6 and 12 months out, working with sales 
  • The manifesto for AI at Wasabi 

Don’t miss the second part next time, where Marcos and David go deeper on the metrics, systems, and strategies that bring the partnership to life, alongside net retention, data governance, and scaling insights.

Glenn Hopper:

If you would like to earn CPE credit for listening to the show, visit earmarkcpe.com/fpe. Download the app, take a short quiz, and get your CPE certificate. Finally, if you enjoy listening to FP&A Today, please go to your podcast platform of choice, click the subscribe button, and leave a rating and review of the show. And now, onto the show.

Marcos Bento:

From Datarails, this is FP&A Today.

Glenn Hopper:

Welcome to FP&A Today, I’m your host, Glenn Hopper. In this two-part, blueprint-style episode, I’m talking to the directors of BI and FP&A at Wasabi Technologies. Wasabi is a hyperscale cloud storage company competing with the biggest players in the market, with a rapidly expanding global footprint and a complex go-to-market model. Wasabi’s finance and analytics teams are under constant pressure to deliver fast, accurate, and actionable insights. And that’s what we’re gonna talk about today. Marcos Bento and David Suter share how their teams work together, not just across functions, but as a single insight engine to turn raw data into real-time financial intelligence. Marcos is Director of Business Intelligence at Wasabi, and Dave is Director of FP&A. From trust and tooling to forecasting and first drafts, they break down what it really takes to build a high-functioning partnership between BI and FP&A. Gentlemen, welcome to the show.

Marcos Bento:

Thank you very much, Glenn.

Glenn Hopper:

Yeah, I think this is the first time that I’ve had two guests with a planned two-part episode that wasn’t part of a big webinar, so this is kind of an experiment. But talking to you guys before the show, I realized we’re gonna have so much great stuff to talk about that I really think there’s two episodes’ worth here. And I love this conversation; regular listeners know we talk about it all the time. So having you guys here to talk about that partnership, I think it’s gonna be a lot of fun. Let’s start with the relationship between FP&A and BI at Wasabi. And I guess, Dave, if you can kick us off here: from your perspective, what makes this collaboration work so well in your case?

David Suter:

Yeah, well, in FP&A we’re big consumers of data, so we rely on the BI team to clean a lot of big, messy data. I mean, that drives our forecast across multiple scenarios, so we really value that relationship. And what makes it work? I think trust and communication. Both our teams have a real pragmatic approach to problem solving, and a real dispassionate approach to that same problem solving. Both teams really want to get the right answer rather than having some pride of ownership. I’ve found Marcos and his team to be really receptive to feedback and to new ways of looking at things. There’s a lot of back and forth, a lot of dialogue, a lot of communication, so it really works well together.

Glenn Hopper:

Yeah. And that ownership of the reports, it’s funny. On the operations side, people wanna take ownership of, you know, our provisioning time is down, or whatever they’ve done there. And it’s interesting, both in FP&A and in BI, we’re reporting and analyzing and forecasting, and that ownership, I could see being a tension point of trying to own the data and the reporting. And I’m wondering, Marcos, from your side, as you work with FP&A, they’re a customer, but I’m sure you guys are also tracking your own metrics and you have other customers too. So how do you build that trust and alignment between the two teams? Is that something you have to sit down and be intentional about, or in your case did it evolve organically? What was the setup?

Marcos Bento:

Great question, Glenn. I think it’s part intentional and part organic. The intentional part is, hey, we need to communicate whether this is working or not working. To Dave’s point, we always love when people like Dave and his team and other folks come to us and say, hey, this data is not looking like what we were expecting, we would expect to see something different. Or, hey, what are the insights that you can give us out of that data? That’s very important feedback that we can learn from. The other part is organic, and as a remote-first company there is another layer: how do you manage and balance being on Zoom calls all day long, meeting people? How do you manage these relationships without having the coffee or water-cooler moments to build the trust that Dave referred to?

Glenn Hopper:

Yeah, and that can be tough on Zoom calls, because we all get Zoom fatigue and you feel like you’re on it all day, and you get to a point where you’re just tired of another meeting, but you still have to have that collaboration. Any insights or anything you’ve found <laugh> that works well to keep that up when you are really just back-to-back on Zoom and working together on complex issues?

Marcos Bento:

I think at the most foundational level, it goes back to our culture. A lot of people here are really good, nice people to work with and to be with. I genuinely like to work with Dave and everyone on his team. It’s always a pleasure to work with them and hear and learn and collaborate with them. You have your point about the ownership and the responsibilities and the deliverables, but how we are doing this, the journey, is always a pleasure, and that makes working on these kinds of problems and issues really special.

Glenn Hopper:

Yeah. Dave, you’re under pressure to get numbers, you have to have that collaboration, and you’re not BI’s only customer. From your perspective, when you’re managing your team and you’ve got your own reporting requirements and needs, how do you handle that interaction with the BI team?

David Suter:

Yeah, well, to follow up a little bit on what Marcos was saying about the collaboration between our teams, I think it is organic in a sense, but it’s also a function of the type of people we hire. To Marcos’s point, if you’ve hired people that are well intentioned and easy to work with and a pleasure to work with, it makes that collaboration so much better. And that rolls right into prioritization. I know they’re under a lot of pressure and I try to be respectful of that. But, you know, the data can never come fast enough <laugh> in our mind. It’s a balance. Sometimes there are needs where I just need a fast answer that is ballpark, close enough, and you communicate the priority and the urgency, and okay, let me see what we can do. Other times it’s a little more in depth, a project takes weeks, but we’ve gotta get it right. So I think it’s just that collaborative effort to try to understand where they’re coming from, and they understand that we’re under demands from executives or a board meeting. It goes back to that collaboration, and with a good team there’s understanding of both sets of problems.

Glenn Hopper:

So what does the day-to-day interaction look like between your teams? Is it project based, or recurring syncs, or more on demand? How does that work? Dave, I guess we’ll stick with you on this and then I’ll go to Marcos.

David Suter:

It’s a little on demand. There are times when we’re working on a specific project for a specific period of time, and then other times my demands kind of fade away, and I’m sure other teams are keeping them busy. It ebbs and flows a little bit. There are definitely times when we’re off on other projects and give these guys a break from us.

Glenn Hopper:

Do either of you do daily standups with your, with your internal team?

David Suter:

I don’t do a daily. I have weekly calls with our team and then weekly one-on-ones, but nothing daily unless there’s a project going on. When there is a project, yeah.

Glenn Hopper:

Gotcha.

Marcos Bento:

Yeah, the BI team does twice-a-week standup calls. We have two-week sprints for deliverables and larger projects, and then there are the add-on requests that come and go in between.

Glenn Hopper:

I wanna stay on this theme for just one more question. Dave, we talked before the show, and you’re obviously one of BI’s biggest internal customers. How do you think about the role of BI in supporting FP&A? Maybe you kind of already answered this when talking about the collaboration earlier, but if you’re rolling something up and reporting metrics and you’re waiting on them, you’re never gonna throw BI under the bus and be like, well, Marcos can’t gimme the numbers. So there’s that back and forth, and you have to understand it; you’re not gonna just take numbers that you don’t understand. So how does that collaborative part work when maybe it’s defining new metrics or understanding the data that you’re pulling? You’re not the one going and pulling it all the time, but you’ve gotta understand it and work with the BI team on that.

David Suter:

Yeah, you know, prior to having a BI team, I would always joke, I don’t trust anybody, so I want all the data, as granular as I can take it, and then we’ll make sense of it. To not have to do that, to have a partner that we can rely on, is fantastic. We totally rely on Marcos’s team now. There’s been a lot of trust built up. Anytime we’re looking at data, there’s always this issue of all the nuances and edge cases that these guys have spent so much time digging into and understanding and mitigating. It’s really fantastic to be able to trust that data. And like I said, we’re big consumers; we’re constantly pulling more data, better data, different slices and dices, because the more granular we can get it, the better we can forecast. The trust that we’ve built up over the past few years has been invaluable.

Glenn Hopper:

That’s great. I wanna ask both of you about this. In FP&A, if you’re forecasting, you can be directionally correct, and I think about the difference between finance and accounting. There’s no directionally correct in accounting <laugh>; the trial balance has to balance, and debits and credits have to match up and all that. But if you’re forecasting, you can be close, as long as you can explain what you’re doing. In BI, I guess it depends whether you’re doing descriptive or predictive analytics, but it seems like there can be more of a push to make all of your answers repeatable and explainable. And there’s also that tension, and you guys are dealing with so much data in the industry you’re in; we’ll obviously talk a lot more about that. But with that tension between speed and accuracy, Marcos, let’s start with you. How do you balance the need for, we have to get something fast, and this goes to what Dave was just saying, but we also need to be as precise as possible? Because you can get really bogged down trying to close that last <laugh> 2% or 10% gap to make it exact.

Marcos Bento:

I think at a philosophical level, Glenn, you cannot cheat experience. If we are a storage company and we are reporting on how much storage our customers have, the BI team needs to understand what that means, and also what it does not mean. And that goes to all the use cases and edge cases, whatever they can be: how do you think about storage from different customers in different data centers? There are all these kinds of nuances, but you can only learn about this if you have the experience, if you’re talking with the folks that know about the business, whether that’s your customers or wherever the input data comes from. That comes from engineering and from sales, and it gives us context to make the right assumptions when we are building the correct database.

I think that over the last two or three years, we built the foundation of what’s essential, what’s required for us to run as an organization, as a fast-growing software-as-a-service company. Now the next step is, what else can we do with this data? That’s where this gets interesting, and it goes to your point on speed and accuracy. Maybe the first forecast that we did three years ago, we were off by 10%. Then we made improvements and it went from 10 to 8 to 5. I think that Dave now has a good enough forecast that we are within a couple of percentage points of our plan or our forecast. That’s probably how you also build trust: the data that you’re showing is consistently reliable, and you can explain what’s underneath it.

Glenn Hopper:

And Dave, I think about that speed and accuracy and the potential for drift. If Marcos’s data that is delivered to you is not as precise as it needs to be, and you’re building a churn forecast or whatever off of it, it can exaggerate the error downstream. Your deliverable, if you don’t have the inputs as close as possible to what you need, could drift. So from your standpoint, and you alluded to it earlier, where you used to have to get all the data and figure it out yourself, but now you have someone upstream who’s providing it for you, how do you balance that? Does it start with understanding what you got from BI and where there could be error there? How do you approach it?

David Suter:

Yeah, I think it’s just the classic cost, quality, schedule trade-off: you can only have two. In our case costs are relatively fixed, so it’s speed versus accuracy. Like I said, there are times when I just need a ballpark answer quickly. Other times I need a solid answer and understand it’s gonna take a little more time. To your drift point, though, I’m always looking for a third data point to validate whether what I’m getting is correct. We sell cloud storage, and we sell it in dollars per terabyte. So I’ll get data from Marcos’s team around our storage, our terabyte usage, and our ARR. When you divide ARR by storage, you get a dollar-per-terabyte figure. And when I’m looking at this and it’s 10x what our list price is, I know something’s wrong.

So now, which one is wrong? Well, if I have ARR, I can forecast revenue and understand if that’s directionally correct. So I know whether the error is in revenue or ARR. It’s things like that where I’m always looking for outside data sources to validate whether what I’m seeing is right. And if I can go back over 6, 12, 18 months and get something that is directionally consistent with those outside sources, I know we’re on the right path. That helps mitigate the drift. And if it does start to drift, okay, now we have a new project, another layer of the onion we’ve gotta peel apart to figure out why we’re drifting. And again, I think where Marcos and his team add a ton of value is that they’ve peeled this thing apart and they know these edge cases, and it’s like, uh oh, you’re not factoring in X, Y, or Z, and those are outside the bounds of this regular view of what you’re looking at. It’s really helpful to be able to peel that onion apart.
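
For readers who want to see the shape of that sanity check, here is a minimal sketch of the dollars-per-terabyte cross-check Dave describes: divide ARR by deployed terabytes and compare the implied unit price against list price to guess which input has drifted. The list price, tolerance, and figures below are illustrative assumptions, not Wasabi’s actual numbers or tooling.

```python
# Minimal sketch of the $/TB cross-check: implied price = ARR / storage,
# compared against list price. All numbers here are made up for illustration.

LIST_PRICE_PER_TB_YEAR = 84.0   # hypothetical list price in $/TB/year
TOLERANCE = 0.5                  # flag if the implied price is >50% off list

def implied_dollars_per_tb(arr_usd: float, storage_tb: float) -> float:
    """Divide ARR by deployed terabytes to get the implied unit price."""
    return arr_usd / storage_tb

def sanity_check(arr_usd: float, storage_tb: float) -> str:
    implied = implied_dollars_per_tb(arr_usd, storage_tb)
    ratio = implied / LIST_PRICE_PER_TB_YEAR
    if ratio > 1 + TOLERANCE:
        return f"${implied:,.2f}/TB is {ratio:.1f}x list: storage looks understated"
    if ratio < 1 - TOLERANCE:
        return f"${implied:,.2f}/TB is {ratio:.1f}x list: ARR looks understated"
    return f"${implied:,.2f}/TB is within tolerance of list price"

# Example with made-up inputs pulled from a BI extract.
print(sanity_check(arr_usd=8_400_000, storage_tb=100_000))
```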

Glenn Hopper:

I’m trying to suss out if there’s a difference in mindset, because when we talk to FP&A guests, we hear them say all the time that to be really good at FP&A, you have to sort of be an investigative journalist. You’re constantly asking why, you’re trying to get to the root cause of something, and you’re using what you find there to create a narrative and to present the facts as you’ve seen them through your investigation. And BI is very similar to that. So it’s almost like two crime-solving agencies, both trying to solve the same problem with different approaches. But what got me thinking about this, Dave, is when you said you have to know your business.

And I think about that domain expertise. We keep hearing this promise, and Marcos, I think you’re gonna laugh at this <laugh>, the idea that self-serve data is gonna be so easy, that it’s gonna be easy to get all this because of generative AI, that we’re just gonna be able to access everything and pull all of it. Where humans will add value, even if Cortex or whatever in Snowflake really starts to work great and you can chat with your data, is still that domain expertise. And maybe because FP&A has been evolving as a field longer than data science, it used to be you could sort of sit in this ivory tower of finance and not have to know the rest of the business, because it didn’t matter what the COGS were, you were just presenting the numbers.

But now, the way that FP&A works, you have to have business partnering, you have to truly understand the business, because all the work that used to go into putting the data together comes a lot quicker, or you have a team like Marcos’s that is helping with that as well. I’m wondering how you refine your approach to the metrics. If I’m trying to picture a meeting where you guys are both together trying to solve for a new KPI or whatever, is there a difference in how you approach problems, or in the universe of data that you deal with? And I know I just said a whole lot there <laugh>, so David, I’m gonna start with you. If you’re trying to figure something out, you’re trying to squeeze some extra margin out or find some new area, are you initially going to do that just through the data that you have, or when do you bring BI into it? Did that distill it down a little bit more?

David Suter:

Boy, that’s a tough one. I mean, obviously we rely on data from BI to give us historicals. If that then looks off trend versus a forecast, that might give us a little hint to dig into. But you’re right, we are investigative reporters here.

Marcos Bento:

And Glenn, to be fair, I think that a lot of times we work with the same set of objectives, and that’s why the overlap happens quite frequently, maybe with some different lenses. We are looking at, hey, give me the raw data, and what’s the output of that? For David it’s, okay, give me the output of that, now what can I explain with this set of numbers? That makes our collaboration work, because whether we’re looking at ARR or storage or churn or net retention or whatever the metric is, we have the same set of objectives. If we are trying to create one of these reports, we need to understand the context, the audience this is going to, and we try our best to be on the same page.

Glenn Hopper:

And are you both doing predictive analytics? Marcos, I’m sure you’re using machine learning algorithms and everything. And I’m wondering, Dave, if you’re doing something predictive, is your team using machine learning, or are you using Excel or Python? What are you guys doing? And I guess, Marcos, we’ll start with you: tell me about what kind of predictive analytics you’re doing, your approach, and who the customer of that analytics package is.

Marcos Bento:

Great question. So one of our first predictive, or machine learning, models forecasts the amount of storage a customer will have in the next six and 12 months. But the primary user of that is not Dave, it’s sales. We use it to engage sales in better discussions about the future: hey, we are looking at similar customers with the same amount of storage who have been with Wasabi for the same number of years, and they tend to grow at X, Y, or Z rate. Now, if you’re growing at this rate, we can probably offer you better discounts or some other terms, and that creates actionable items for the sales teams to act on. Actually, as we speak, we are working to integrate this model into the FP&A model, so it does forecasting on the entirety of the P&L. At some point we want to use some of these machine learning models to feed whatever Dave is using.
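
As a rough illustration of the idea behind that model, the sketch below fits a compound monthly growth rate to a customer’s storage history and projects it 6 and 12 months out. This is only a toy stand-in under my own assumptions; the model Marcos describes is a real machine learning pipeline trained across similar customers, not a one-line growth fit, and the data here is made up.

```python
# Toy projection of a customer's storage 6 and 12 months out from a
# log-linear fit of their recent history. Illustrative only.
import numpy as np

def fit_monthly_growth(history_tb: list[float]) -> float:
    """Estimate a compound monthly growth rate from a storage history in TB."""
    months = np.arange(len(history_tb))
    slope, _ = np.polyfit(months, np.log(history_tb), 1)  # fit log(storage) vs time
    return float(np.exp(slope) - 1)

def project(history_tb: list[float], months_ahead: int) -> float:
    growth = fit_monthly_growth(history_tb)
    return history_tb[-1] * (1 + growth) ** months_ahead

customer_history = [120, 131, 140, 155, 168, 181]  # TB over the last six months
print(f"6-month projection:  {project(customer_history, 6):,.0f} TB")
print(f"12-month projection: {project(customer_history, 12):,.0f} TB")
```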

Glenn Hopper:

And how about on your side, Dave?

David Suter:

Yeah, we’re still a little bit in the Stone Age. We’re not doing any sort of predictive stuff other than just manual forecasting.

Glenn Hopper:

FP&A Today is brought to you by Datarails, the world’s number one FP&A solution. Datarails is the artificial intelligence-powered financial planning and analysis platform built for Excel users. That’s right, you can stay in Excel, but instead of facing hell for every budget, month-end close, or forecast, you can enjoy a paradise of data consolidation, advanced visualization, reporting, and AI capabilities, plus game-changing insights, giving you instant answers and your story created in seconds. Find out why more than a thousand finance teams use Datarails to uncover their company’s real story. Don’t replace Excel, embrace Excel. Learn more at datarails.com.

When you’re forecasting, what is your approach right now? I know everybody has their annual plan, and especially in hyperscale mode, it’s gotta be <laugh> crazy to try to look 12 months out. But whether you’re looking 12 months or 36 months out, or you’re re-forecasting on a quarterly basis, you have a lot of data that can feed your forecast. How complex are your forecasting models, and are you using variables that are both internal and external? Could you tell us a little bit about your approach to forecasting on the FP&A side?

David Suter:

Sure. Quickly, on the expense side, it’s fairly straightforward. If you take away all of the CapEx and data center stuff, we look an awful lot like a software company; 70 to 80% of those costs are people related, so that’s just workforce planning. On the revenue side, it’s a little more complex. We have a very detailed forecast down at the monthly cohort level, across different geos, different payment types, different channels, different products. So we can get very granular very quickly, or we can roll it up at a very high level and just do a classic ARR roll-forward, with churn and net retention and bookings functions in there. It’s a little bit of both; you try to triangulate on an answer that you all feel good about, but it’s complicated. Yeah.
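
To make the “classic ARR roll-forward” concrete, here is a minimal sketch: beginning ARR plus new bookings plus expansion minus churn, rolled month by month. The rates and dollar figures are made-up assumptions for illustration, not Wasabi’s model, which Dave notes also works at the cohort, geo, channel, and product level.

```python
# Minimal monthly ARR roll-forward: ARR_next = ARR + bookings + expansion - churn.
# All inputs below are illustrative.

def roll_forward(beginning_arr: float,
                 monthly_new_bookings: float,
                 monthly_expansion_rate: float,
                 monthly_churn_rate: float,
                 months: int) -> list[float]:
    """Return the ending ARR for each month of the forecast horizon."""
    arr = beginning_arr
    path = []
    for _ in range(months):
        expansion = arr * monthly_expansion_rate
        churn = arr * monthly_churn_rate
        arr = arr + monthly_new_bookings + expansion - churn
        path.append(arr)
    return path

# Example: $100M beginning ARR, $2M/month bookings, 1.5% expansion, 0.7% churn.
forecast = roll_forward(100_000_000, 2_000_000, 0.015, 0.007, months=12)
print(f"Ending ARR after 12 months: ${forecast[-1]:,.0f}")
```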

Marcos Bento:

And Glenn, to be fair, I think that Dave is underselling his capabilities and those of his team, because the amount of work that went into the revenue forecast at the cohort level is just absolutely incredible. The amount of detail that was put into it, and obviously he and his team are very good at Excel modeling, using all the advanced formulas. But the most important piece is the output of the results, and the accuracy is very good. Even at our growth rates, doing accurate forecasting is really hard. As I said before, when they just started, we were off by 10%, and it was very hard to understand. Now that he can go deeper and see per cohort, per month, per whatever slice he wants to dice by, this gets into a better cadence of forecasting.

David Suter:

Thank you. Yeah, thank you, Marcos. We’re getting better. <laugh>

Glenn Hopper:

Is machine learning for revenue forecasting something you guys have sort of spitballed or kicked around? Because I just think in your industry you have so much data that I could see that potentially working. But also, and I think I’ve mentioned it in the past couple episodes, I came from a similar place to where you guys are, with private-equity-backed companies, and you know how private equity folks are: they love their models and they want you to build really great models. So I have a tendency sometimes, because it’s so fun to build these things, to sort of confuse the map for the terrain <laugh>, where I’m so focused on the model that I forget there’s a real world out there. So I don’t know if you’d do this unless it was quicker or you could get more precise or whatever; obviously Datarails loves Excel, and for all of our listeners, whether in data science or finance, Excel a lot of times is the first place you start. But collaborating on using machine learning for predictive revenue forecasting, is that something you guys have talked about?

Marcos Bento:

Between the three of us, this is something that we are actually working on. We have a great senior data scientist working on this model. He’s fine-tuning the model on a couple of different variables, to understand not just the cohort level, but the tenure, the billing methods, and also the behaviors of each individual one of our more than a hundred thousand customers. Excel is pretty good at, okay, give me a million rows and let’s do a forecast. Now, when you have a billion rows, how do you manage those databases? That’s where some of these data lakes come in, and the idea is to speed something up for Dave so that he and everyone can use this to create better actions out of it. But it’s still a work in progress, and we’re going to refine it. Dave is probably gonna see some problems in version one and version two, up until a point where he’s saying, okay, I can get away with this, I can explain it, let’s now roll this forward and make this plan even better.

David Suter:

Yeah, and I think it’s early days for that kind of thing. Well, I’m not trying to be like a Luddite, but I’m, uh, <laugh>, you know, I’m a little skeptical of AI’s ability to just like forecast everything into the future. I think it’s gonna be great for anomaly detection, variance analysis, you know, looking at where to highlight anomalies that need to be dug into. Um, I’m a little less, uh, bullish on its ability to, you know, hey, just like put in all the data and like out pops a bulletproof, uh, revenue forecast.

Glenn Hopper:

Yeah. Especially if you have to have that explainability; you can’t just say you threw it into the black box and this is what came out. Yeah, exactly. But that said, to Marcos’s point, with a machine learning model and a boatload of data, Excel has its limitations, whether it’s crashing or slowing down or just the cumbersome nature of trying to use Excel for everything. If you’re doing an ARIMA forecast or something, it’s a lot easier to do that in Python than in Excel, or if you’re using other sorts of forecasting models. But the problem is how many people can build forecasts in Excel versus the few people, who are not on the BI side, who could build the forecast in Python and then make it explainable to the board, to management, and all that.

So there is that sort of chasm between the output, the explainability, and how much better it’s gonna be or how much time it’s gonna save. Maybe it ultimately comes down to that: once you’ve set it up, it becomes quicker to reforecast. You brought it up, and I didn’t have this as a planned question, but obviously this is an area I’m very interested in, and it seems like I have to bring it up on every episode. You mentioned flux analysis and what AI is good at. We talk a lot on this show about generative AI, and I know a lot of SaaS tools out there are building in their generative AI wizards and genies so you can talk to your data. And there’s some stuff you can do with off-the-shelf ChatGPT or Gemini or whatever. So I’ll ask both of you, and Dave, we’ll start with you: are you using generative AI successfully in anything in your department now, or is your team using it for anything?

David Suter:

Uh, not in any sort of official capacity right now. Like I said, we’re a little bit in the Stone Age, still using Excel, still love it. You know, you’ll pry it from my cold, dead hands <laugh>, but we’re definitely bumping up against the limitations. Like we said, you can only fit so many rows into a spreadsheet, and we’re definitely pushing the bounds of that today. So we’re in the process now of moving to a planning tool that has some of those whizbang AI features that we hope to take advantage of in the future. But like I said, we’re still early days, so we’ll see what pans out.

Glenn Hopper:

How about you Marcos? Is your team doing anything with it?

Marcos Bento:

We are. There are some interesting use cases here and there. One of our data analysts writes a summary email to the executive team every Friday: this is how much storage we added, this is where the pipeline was created and closed, month over month. Initially it was by hand, and drafting these letters took a couple of hours every week. Six months in, he realized, okay, we now have enough data and text that we can train a model to do this for us. So every Friday morning he clicks a button, it generates the text, and he just edits a couple of paragraphs here and there. The numbers are already there, already populated, because it pulls the data from the correct place in the data lake. Now it’s a matter of adjusting and improving how the letter is written, or maybe highlighting something different from the usual cadence that is sent every week. So that’s an interesting use case. Another one that’s still not in production: we are trying to figure out with sales operations how to improve and automate some of the processes on the order-to-cash side.
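
For a sense of how that Friday workflow fits together, here is a hedged sketch: pull the week’s metrics from the warehouse, then have a language model draft the narrative for the analyst to edit. The metric names, query, and `generate_text` call are placeholders invented for illustration; they are not Wasabi’s pipeline or the actual Snowflake Cortex API.

```python
# Sketch of a weekly executive-summary draft: fetch metrics, prompt an LLM,
# return a draft for human editing. All names and values are placeholders.
from textwrap import dedent

def fetch_weekly_metrics() -> dict:
    # In practice this would query the data lake; hard-coded here for illustration.
    return {
        "storage_added_pb": 1.8,
        "pipeline_created_musd": 4.2,
        "pipeline_closed_musd": 3.1,
        "mom_storage_growth_pct": 3.4,
    }

def generate_text(prompt: str) -> str:
    # Placeholder for whichever LLM endpoint is actually used.
    raise NotImplementedError("plug in your model call here")

def draft_executive_summary() -> str:
    m = fetch_weekly_metrics()
    prompt = dedent(f"""
        Draft a short weekly update for the executive team.
        Storage added: {m['storage_added_pb']} PB
        Pipeline created: ${m['pipeline_created_musd']}M
        Pipeline closed: ${m['pipeline_closed_musd']}M
        Month-over-month storage growth: {m['mom_storage_growth_pct']}%
        Keep it to three short paragraphs and flag anything unusual.
    """)
    return generate_text(prompt)
```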

Glenn Hopper:

And I know you guys use Snowflake. Are you doing this with Cortex, or what are you using?

Marcos Bento:

That’s right. That’s right.

Glenn Hopper:

Okay. How are you finding Cortex? I mean, I guess you have to pick your use cases.

Marcos Bento:

Oh, absolutely. In general, it’s been great for us. They have developed and created so many new functions in the last six to 12 months, and their roadmap seems very exciting and interesting for the new use cases that we have on our own roadmap. We were just using one of their functions to predict, from all the activities that our sales reps do with partners and with end users during pipeline creation, which activities are the most important ones for our sales reps to do, the ones that can predict the closure of a deal in the next 3, 6, or 12 months. We were using three different models from Cortex, we ranked them, and we now have a favorite model. We’re now going to double-click on each one of these activities.
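
To illustrate the kind of question being asked there (which activities most strongly predict a closed deal), here is a small stand-in using a plain scikit-learn logistic regression on synthetic data. It is not the Cortex setup Marcos describes; the activity names, data, and model choice are all assumptions made for the example.

```python
# Rank sales-activity types by how strongly they predict a closed deal,
# using a logistic regression on synthetic data. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

activity_types = ["demo", "technical_workshop", "partner_intro", "pricing_call"]

rng = np.random.default_rng(0)
# Rows = opportunities, columns = counts of each activity type (synthetic).
X = rng.poisson(lam=2, size=(500, len(activity_types)))
# Synthetic outcome: closing is more likely with workshops and pricing calls.
logits = 0.2 * X[:, 1] + 0.3 * X[:, 3] - 1.0
y = rng.random(500) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Sort activities by coefficient: larger weight = stronger pull toward closing.
for name, weight in sorted(zip(activity_types, model.coef_[0]), key=lambda t: -t[1]):
    print(f"{name:20s} {weight:+.2f}")
```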

Glenn Hopper:

That’s super cool. And I think the use cases you mentioned really are the area where generative AI is gonna be the most valuable. The problem is, if your data’s not in Snowflake, or you’re just trying to work in Excel, obviously you blow out the context window of the AI pretty quickly. But when you have AI integrated into your data, you can look directly at your GL or directly at your production data or whatever’s in Snowflake. And Dave, you mentioned variance analysis and explanation. Obviously gen AI could very quickly go look at your monthly financials and see where there’s a variance and even look for things; computers are much better at finding correlations and variances than people are. But if it can’t tie into your GL data, well, you’re still gonna have to go dig it out.

I do think, though, about that first-pass review. I think back to my days as a CFO, and whenever I got the financial statements, the first thing I did was manually scroll through. Obviously the FP&A team would’ve given me notes too, but I wanted to see what those variances were. And that whole first level of financial statement review right after the close, if you could have it pull and say, hey, revenue was up 2% and cost of goods was down 6%, that sounds great, but it’s gonna ask you the question of why your cost of goods wasn’t keeping up with revenue. If it’s not tied into the GL, though, you still have to go do it yourself. I’m an optimist around this, and I do see a future, especially with what Marcos and I were talking about, where it’s integrated into your systems or into your software.

A lot of the SaaS tools now are integrating it, and that’s where it’s gonna be valuable. I think right now, having a probabilistic engine <laugh> doing journal entries or reconciliations is not really the practical use case for it. But those summaries and analyses like that, I think, are. And Dave, I don’t think at all that you guys are in the Stone Age with this. What’s the sort of manifesto or approach around gen AI at Wasabi? Because you hear it everywhere: we have to AI this, we have to AI that. And I guess, Dave, starting with you, do you get any questions from leadership or from the company of, can you AI that, <laugh> or anything along those lines?

David Suter:

Um, not yet, but I feel like it’s coming. And I’m like you, I think the ability to do that first pass of variance analysis is gonna be huge. It’s like, why was this off? Oh, well, there was one anomalous transaction, and okay, great, we can go look at that and figure out what that was and why. It’ll just really speed up that whole process. I’m sure there’s still gonna be some manual interpretation of what it is, but to point big red flashing arrows at a couple of different things to go dig into, I think, is gonna be great. The pressure’s coming. Our CFO’s definitely a fan and sends us articles once a month of, hey, I read about this cool new thing it can do. So it’s coming for sure.

Marcos Bento:

And I think we are intentional. It’s not applying AI just for the sake of applying AI; it has to have an outcome. If we can prove that the outcomes are gonna be better, if we can use any gen AI platform out there that can speed up and automate any of our processes, then I think our executive team will be all in, once we find a killer use case. I don’t think they are all in just for the buzz, and they’re not against it just because; I think we are all open to explore and understand, and we have the green light to use some of these tools. Obviously not with Wasabi data unless we’ve vetted it with legal and all the processes, but we can definitely use them to understand where we need to go next.

Glenn Hopper:

Yeah. Alright, so I can step down off my AI soapbox now and we can get back to our regularly scheduled programming <laugh>. Thanks, guys, for the insight on that, though. So we haven’t talked about dashboarding and the standard routine reports, thinking about having your data dictionary and keeping it consistent. And right before I dive too deep: do you guys both create dashboards of certain types? Are there some that FP&A is responsible for and some that BI is responsible for?

David Suter:

Yep.

Glenn Hopper:

So in that dashboarding, you’re defining the metrics, you’re building it out, you’re trying to tweak it. People always want to drill down one level deeper, and a lot of times what they see in the dashboard drives more ad hoc reports if there’s a variance or something that they can’t explain. I guess for each of you, and Marcos, we’ll start with you on this: I’m saying dashboards, but I’m thinking of slide decks and the monthly presentation and everything. Sometimes people want so much information that the monthly board deck becomes 80-something pages of charts and graphs, and to me, at that many, they become like billboards that you drive by on the interstate and you’re just not seeing them. Or every manager or board member or whoever has the one graph that they go see; they know it’s on page 30 or whatever. So you guys are both delivering to others, and they want these reports. But Marcos, when people are asking about data and metrics they want to track, and you start hearing the scope creep and all the stuff they wanna see on the dashboard, do you have some sort of simple advice or guidance or way that you structure that dashboard from the beginning?

Marcos Bento:

It’s a great question, and we could probably use three episodes just on this, Glenn. Yeah, <laugh>. But I would say that in general, it goes back to context, which is where we initially started. We can only create a dashboard that is actionable and useful if we understand why people are asking those questions. Sometimes we will go back and push back and say, hey, from all our, I don’t know, 300 reports available in our BI tool, can’t you use something we already have today? We have built tons of views, every potential view and cut of every single one of the data sources that we have. In general, it’s a matter of educating the users and saying, hey, do you know that you have this report available? In most cases, people say, oh, this is exactly what I needed, I never knew we had it. And this is on the BI team, to constantly educate and inform people about what data is available today.

But to your point, Glenn, every Monday we have to develop a 200-slide deck. And when we meet with the executive team, we will just summarize it: okay, for historical purposes, we have all these 200 slides for you, but for this week the important ones are slides 2, 14, and 57, and let’s walk you through those. And then we spend a full hour just talking about the three most important ones, because the others are on plan, or whatever we were expecting. What the executive team needs is what’s new, what’s noise versus what’s signal in all this data.

Glenn Hopper:

That’s great. That’s where you add value.

Marcos Bento:

I hope so, obviously.

Glenn Hopper:

There’s all the,

Marcos Bento:

I hope that’s right, <laugh>.

Glenn Hopper:

Yeah, that’s where, I mean, there’s all the foundational work. Obviously you’ve gotta pull it all together, and it’s massive, but that’s turning information into knowledge. You’re taking it and refining all that so that they don’t have to read every billboard they drive by on the interstate, or every street sign. You’re telling them, these are the actionable items, and they’re trusting you to pull that out. So I totally get that. And I get that these are metrics that are important, maybe not every month, but if they’re indicative of something, if there’s causation, if they’re a leading indicator or even a lagging indicator, whatever it is, having that information historically is good. But cutting through all that noise and shining the light on, here’s what your focus is, that’s the value. And Dave, what about you? I know you’re maybe not dealing with as massive amounts of data, but there are probably as many financial KPIs as there are across the rest of the company. So what’s your approach with these?

David Suter:

Yeah, I’ve had a similar experience. I worked at a company where every week we’d get a spreadsheet emailed to the entire company with, I think, around 400 metrics. I would look at it, I’d read it, I’d try to digest it, and every time I would reference it to anybody who’d been there for more than six months, they would just dismiss it: ugh, I can’t even look at that thing anymore. So to your point, it just becomes these billboards that nobody looks at. All this work gets done and nobody looks at it. So my approach to dashboards has always been, you gotta resist that metric bloat, and really any department should be able to distill it down to three to five metrics tops, you know, a couple of historical-looking metrics, maybe a couple of forward-looking metrics, but three to five max. You don’t need anything more than that. Of course there’ll be ad hoc work, or we gotta dig in because something went haywire, but in terms of regular reporting and dashboarding, you gotta keep it to three to five. That’s it.

Glenn Hopper:

Yep, yep. Totally get that. With that, we’re gonna wrap up part one with Marcos Bento and David Suter from Wasabi Technologies. We’ve covered the foundations in this episode: how FP&A and BI collaborate, build trust, and navigate the tension between speed and accuracy. In part two, I wanna dig a little more into the metrics, systems, and strategies that bring the partnership to life. We talked before the show about net retention, data governance, Wasabi’s approach to scaling insights, and all that, so let’s save those for episode two. And thank you guys for being on.