Takeaways: How Wasabi Improved Forecast Accuracy
- FP&A and BI alignment is rare: industry research shows only 11% of organizations have fully aligned strategic, financial, and operational planning, with most still working in silos, and 46% of FP&A time is still consumed by data collection and validation rather than analysis
- Forecasting accuracy improvement: through three years of iterative cross-team analytics work, Bento’s BI team reduced forecast variance from 10% to within a couple of percentage points, building the kind of trust that makes AI outputs usable
- Financial data validation as the discipline that holds it together: Suter’s third data point methodology, dividing ARR by storage to derive dollars per terabyte, catches errors before they reach executives and demonstrates how validation protects both teams
- The cost of skipping validation is real: research shows more than a quarter of organizations lose over $5 million annually to poor data quality, and 70% of AI projects fail to meet their goals because of data quality and integration issues
- Metric discipline matters as much as model accuracy: net retention required more than 130 definition iterations to reach consensus; without agreed definitions, cross-team analytics produces numbers no one trusts
- Outcomes over buzzwords: both leaders reject AI for its own sake, testing multiple models, ranking performance, and deploying only when results justify it; the Friday executive summary automation reduced a two-hour weekly process to minutes and is the model for every AI initiative that follows
The stone age excuse doesn’t work anymore.
David Suter, Director of FP&A at Wasabi Technologies, admits it freely: “We’re a little bit still in the stone ages. We’re not doing any sort of predictive stuff other than just manual forecasting.”
Meanwhile, his counterpart Marcos Bento, Director of Business Intelligence, is training AI models to write executive summaries, predict customer storage growth six months out, and identify which sales activities drive deal closure. His team uses Snowflake Cortex to run multiple machine learning models simultaneously, ranks them, and deploys the winners into production.
The gap between these two teams at the same company reveals the messy truth about AI financial forecasting in 2025: it’s not a question of whether AI will transform finance. It’s a question of whether finance will transform fast enough to use it.
In a recent episode of Datarails’ FP&A Today podcast, Bento and Suter revealed how their teams work together despite vastly different approaches to AI forecasting, why Bento’s BI team went from forecasts that were 10% off to within a couple percentage points, and what happens when the pressure to “AI everything” meets the reality that most AI projects fail.
Their partnership demonstrates that successful AI forecasting isn’t about replacing Excel with algorithms. It’s about knowing when to automate, when to intervene, and how to build trust when one team is sprinting toward machine learning while the other is still perfecting pivot tables.
From Two Hours to Two Minutes: The First AI Win
The executive summary email went out every Friday without fail, following a standardized executive reporting process. Storage added, pipeline status, month-over-month changes: same format, same data sources, same analytical approach.
For six months, a data analyst on Bento’s team spent over two hours each Friday crafting these updates. The numbers came from the data lake. The narrative followed established patterns. The analyst understood what executives needed. But the execution still consumed time that could have been spent on strategic analysis instead of manual reporting.
After six months of consistent summaries, Bento’s team saw the pattern. They had enough historical data to train a model.
“One of our data analysts writes a summary email to the executive team every Friday. Initially it was by hand,” Bento explains. “That took a couple hours every Friday. He realized, okay, we can prove we now have enough data and text that we can train a model to do this for us.”
Now the process is automatic. Every Friday morning, the analyst clicks a button. The AI generates text, pulls numbers directly from the data lake, and formats everything according to established patterns.
“So every Friday morning he clicks a button to generate the text, and he now just edits a couple paragraphs here and there,” Bento says. “The numbers are already there. It’s plug and play because it pulls the data from the correct place in the data lake. Now it’s a matter of adjusting and improving how the letter is written, or maybe we wanna highlight something different than the usual cadence.”
The analyst’s role shifted from writer to editor, from data gatherer to strategic narrator. What took two hours now takes minutes.
This is AI forecasting at its most practical: not replacing human judgment, but eliminating the repetitive work that prevents analysts from applying that judgment to harder problems.
The Forecasting Accuracy Journey: 10% to 2%
When Bento’s BI team started building financial forecasts three years ago, forecast variance was close to 10%.
That’s the kind of variance that makes FP&A teams nervous and executives skeptical. It’s close enough to be directionally useful but far enough off to undermine confidence in the numbers.
Three years later, they’re within a couple of percentage points.
“Maybe the first forecast that we did three years ago, we were off by 10%, then we made improvements and it goes from 10 to eight to five,” Bento explains. “I think that Dave now has a good enough forecast that we are within a couple percentage points of our plan or our forecast.”
This improvement didn’t come from a single breakthrough. It came from iterative refinement: understanding edge cases, learning business context, building validation mechanisms, and earning trust through consistent results.
“That’s probably how you also build trust, is that the data that you’re showing is consistently reliable and you can explain what’s underneath that,” Bento notes.
For Suter’s FP&A team, this accuracy creates confidence. He can rely on BI’s forecasts as inputs to his own financial models, using them as validation points rather than starting from scratch.
“The trust that we’ve built up over the past few years has been invaluable,” Suter says.
But trust only works when both sides understand limitations. Bento’s team knows they’re providing inputs, not gospel. Suter’s team knows they need to validate those inputs against other data sources.
This is the foundation of successful AI forecasting: models that are accurate enough to be useful, transparent enough to be validated, and honest enough about uncertainty that finance teams can use them confidently.
When FP&A Stays in Excel While BI Builds ML Models
The contrast between the two teams is stark.
Bento’s BI team is building machine learning models to forecast customer storage growth six and twelve months out, building on established financial modeling frameworks. They’re using Snowflake Cortex to predict which sales activities drive deal closure. They’re training models on customer behavior across hundreds of thousands of accounts.
Suter’s FP&A team is still in Excel.
“We’re a little bit still in the stone ages,” Suter admits. “We’re not doing any sort of predictive stuff other than just manual forecasting.”
But Suter isn’t apologizing for it. He’s skeptical about AI’s ability to produce bulletproof revenue forecasts without human oversight.
“I’m not trying to be like a Luddite, but I’m a little skeptical of AI’s ability to just forecast everything into the future,” Suter says. “I think it’s gonna be great for anomaly detection, variance analysis, looking at where to highlight anomalies that need to be dug into. I’m a little less bullish on its ability to just put in all the data and like out pops a bulletproof revenue forecast.”
The skepticism is warranted. AI forecasting without explainability creates black boxes that finance leaders can’t defend to boards or executives. If you can’t explain why the model predicted what it predicted, the forecast is useless regardless of accuracy.
This is where the partnership between BI and FP&A becomes critical. Bento’s team builds the models. Suter’s team validates them, challenges assumptions, and provides business context that prevents the models from producing technically accurate but strategically meaningless outputs.
“Between the three of us, this is something that we are actually working on,” Bento reveals. They have a senior data scientist building financial models to forecast at the cohort level, looking at individual customer behavior across billing methods and tenure.
The goal isn’t to replace Suter’s team. It’s to give them better inputs.
“The idea is to speed something up for Dave so that he and everyone can use this to create better actions out of that,” Bento explains. “But it’s still a work in progress and we’re gonna refine, and Dave is probably gonna see some problems in version one, version two, up until a point where he’s saying, okay, I can get away with this. I can explain, let’s now roll this forward and make this plan even better.”
The Dollar-Per-Terabyte Validation: Finding the Third Data Point
Suter may be skeptical of AI-generated forecasts, but he’s relentless about validation. He learned early that even the best models need external verification.
His approach: always look for a third data point.
“I’m always looking for a third data point to validate is what I’m getting correct,” Suter explains.
At Wasabi, which sells cloud storage at dollars per terabyte, this validation is straightforward. Bento’s team provides storage data and ARR data. Dividing ARR by storage produces dollars per terabyte.
“When I’m looking at this and it’s 10x what our list price is, I know something’s wrong,” Suter says. “So now which one is wrong? Well, if I have ARR I can forecast revenue and validate downstream cash flow assumptions, and I can understand if that’s directionally correct. And so I know whether the error is in revenue or ARR.”
If the metric tracks correctly over 6, 12, 18 months, it becomes a reliable input for working capital management decisions. If it drifts, it triggers an investigation.
“And if it does start to drift, okay, now we have a new project. We got another layer of the onion we gotta peel apart to figure out why we are drifting,” Suter says.
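Suter’s third-data-point check is simple enough to express in code. A minimal sketch, assuming illustrative figures (the list price, tolerance, field values, and function names below are placeholders, not Wasabi’s actual numbers):

```python
# Hypothetical sanity check: derive dollars per terabyte from ARR and
# storage, then flag values that drift too far from list price.
LIST_PRICE_PER_TB = 6.99 * 12  # illustrative annual list price per TB

def dollars_per_tb(arr: float, storage_tb: float) -> float:
    """Third data point: ARR divided by storage gives $/TB."""
    if storage_tb <= 0:
        raise ValueError("storage must be positive")
    return arr / storage_tb

def check_drift(arr: float, storage_tb: float, tolerance: float = 0.25) -> bool:
    """Return True if the implied $/TB is within tolerance of list price."""
    implied = dollars_per_tb(arr, storage_tb)
    return abs(implied - LIST_PRICE_PER_TB) / LIST_PRICE_PER_TB <= tolerance

# If implied $/TB lands at ~10x list price, one of the inputs is wrong.
assert not check_drift(arr=8_388_000, storage_tb=10_000)  # ~10x list: flag it
assert check_drift(arr=850_000, storage_tb=10_000)        # near list: passes
```

In practice the check would run across every cohort each month, so a sustained drift shows up as a trend rather than a single failed assertion.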
This is where Bento’s team adds enormous value. They’ve spent years understanding edge cases, unusual transactions, and business nuances that explain apparent anomalies.
“I think where Marcos and his team add a ton of value is they’ve peeled this thing apart and they know these edge cases, and it’s like, uh oh, you’re not factoring in X, Y, or Z, and those are outside the bounds of this regular view of what you’re looking at,” Suter notes.
Successful AI forecasting requires this kind of validation. Models produce outputs. Humans verify those outputs against reality. The combination is more powerful than either alone.
Predictive Analytics That Actually Drives Sales Behavior
The most successful AI forecasting at Wasabi isn’t even consumed by finance.
Bento’s team built machine learning models to forecast customer storage growth over six and twelve months. The primary users aren’t in FP&A. They’re in sales.
“One of our first predictive or machine learning models is to forecast the amount of storage of a customer in the next six and 12 months,” Bento explains. “But the primary user of that, it’s not Dave, it’s sales.”
The model analyzes similar customers with similar storage levels who’ve been with Wasabi for similar periods. It identifies growth patterns and predicts trajectories.
Sales teams use these predictions to have better conversations. If a customer is growing at the expected rate, they can offer volume discounts. If a customer is growing faster than expected, they can propose architectural improvements or premium support. If growth is slowing, they can investigate whether there’s a problem.
“We use that to engage sales into better discussions about the future,” Bento says. “Hey, we are looking at similar customers with the same amount of storage, the same amount of years with Wasabi, they tend to grow at X, Y, or Z rate. Now if you’re growing at this rate, we can probably offer you better discounts.”
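The cohort logic Bento describes, comparing a customer against peers with similar storage levels and tenure, can be sketched in a few lines. Everything below (the records, the banding rule, the growth rates) is illustrative, not Wasabi’s actual model:

```python
from statistics import median

# Illustrative customer records: (storage_tb, tenure_years, annual_growth_rate)
customers = [
    (95, 2, 0.40), (110, 2, 0.35), (105, 3, 0.30),
    (500, 1, 0.60), (90, 2, 0.45), (480, 1, 0.55),
]

def cohort_key(storage_tb: float, tenure_years: int) -> tuple:
    """Bucket customers into coarse cohorts by storage band and tenure."""
    band = "small" if storage_tb < 250 else "large"
    return (band, tenure_years)

def expected_growth(storage_tb: float, tenure_years: int) -> float:
    """Median growth rate of peers in the same cohort."""
    key = cohort_key(storage_tb, tenure_years)
    peers = [g for s, t, g in customers if cohort_key(s, t) == key]
    return median(peers)

# A 2-year customer with ~100 TB: peers suggest ~40% annual growth.
print(expected_growth(storage_tb=100, tenure_years=2))  # → 0.4
```

A production model would replace the hand-drawn bands and median with a trained regressor, but the shape of the conversation with sales is the same: here is how customers like you tend to grow.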
These predictions create actionable insights that directly impact revenue and downstream cash management. That’s the standard AI forecasting should meet: not just accurate numbers, but numbers that change behavior and drive better decisions.
Bento’s team is now working to integrate these models into Suter’s FP&A forecasts. The goal is to feed machine learning outputs into the broader financial planning process, combining predictive analytics with human judgment about market conditions, competitive dynamics, and strategic initiatives.
The Snowflake Cortex Experiment
Bento’s team is pushing AI forecasting further with Snowflake Cortex, running multiple models to predict sales outcomes.
“We were just using one of their functions to predict from all the activities that our sales reps do with partners and with end users during the pipeline creation, what are the most important activities that our sales reps need to do in order that can predict a closure of a deal in the next 3, 6, 12 months?” Bento explains.
They tested three different Cortex models, ranked them, and selected the best performer, a mature example of real-world AI applications in finance. Now they’re drilling deeper into specific activities to understand what actually drives conversions.
“We were using three different models from Cortex. We ranked them, we now have one of our favorite models. We’re now going to double-click on each one of these activities,” Bento says.
This experimental approach (test multiple models, rank performance, deploy the winner) represents mature AI implementation. It’s not betting everything on a single approach. It’s treating AI forecasting like any other analytical tool: test, validate, iterate, improve.
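The test-rank-deploy loop is worth making concrete. A minimal sketch of the pattern, with placeholder threshold models standing in for the Cortex candidates (in practice each candidate would be a Cortex model call evaluated on held-out deals):

```python
def rank_models(candidates, X_val, y_val, metric):
    """Score each candidate on held-out data and return them ranked
    best-first; only the top performer goes to production."""
    scored = [(metric(model.predict(X_val), y_val), name, model)
              for name, model in candidates.items()]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored

# Placeholder models: each "predicts" deal closure from an activity count.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, X):
        return [1 if x >= self.threshold else 0 for x in X]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

X_val = [1, 4, 7, 2, 9]   # e.g. rep activities per deal (illustrative)
y_val = [0, 1, 1, 0, 1]   # did the deal close?
candidates = {"loose": ThresholdModel(2), "balanced": ThresholdModel(3)}

ranking = rank_models(candidates, X_val, y_val, accuracy)
best_score, best_name, best_model = ranking[0]
print(best_name, best_score)
```

The key design choice is that the metric and validation set are fixed before any model runs, so ranking is an apples-to-apples comparison rather than a post-hoc justification for a favorite.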
Bento’s assessment of Cortex is positive but pragmatic: “In general, it’s been great for us. They have developed and created so many new functions in the last six, 12 months. Their roadmap seems very exciting and interesting for the new use cases that we have in our roadmap.”
The rapid evolution of tools like Cortex creates both opportunity and risk. Features that work today might be deprecated tomorrow. Models that perform well in testing might behave unpredictably in production. This is why Bento emphasizes intentionality over enthusiasm.
The AI Manifesto: Outcomes Over Buzzwords
When asked about pressure to “AI everything,” Bento articulates a philosophy that more finance organizations should adopt.
“I think we are intentional. It’s not applying AI just for the sake of applying AI. It has to have an outcome,” Bento says. “If we can prove that the outcomes are gonna be better, if we can use any gen AI platform out there that can speed up and automate any of our processes, then I think our executive team will be all in.”
This is the opposite of the “AI-first” mandates that doom many implementations. Rather than deciding to use AI and then looking for problems to solve, Wasabi starts with problems and evaluates whether AI offers better solutions than existing approaches.
“Even once we understand a killer use case, I don’t think they are all just for the buzz, and they’re not against it just because,” Bento continues. “I think we are all open to exploring and understanding, and we have the green light to use some of these tools.”
Suter echoes this pragmatism. His team is moving to a planning tool with AI features, but they’re not rushing to deploy capabilities they don’t understand.
“We’re in the process now of moving to an FP&A tool that has some of those whizbang AI features that we hope to take advantage of in the future,” Suter says. “But as I said, we’re still early days, so we’ll see what pans out.”
The pressure from leadership exists but remains reasonable. Their CFO sends articles about interesting AI applications, creating awareness without mandating adoption.
“Pressure’s coming. Our CFO’s definitely a fan and sends us articles once a month like, hey, I read this cool new thing that it can do,” Suter notes. “So it’s coming for sure.”
This measured approach (explore, test, validate, implement when outcomes justify it) is how AI forecasting succeeds. The alternative, mandating AI adoption and measuring success by implementation speed rather than business impact, is how AI forecasting fails.
Where AI Actually Adds Value in Finance
Both leaders agree on where AI forecasting will prove most valuable: variance analysis and anomaly detection.
Suter sees AI handling the first pass of the monthly financial review, identifying unusual transactions and highlighting areas that need investigation.
“I think the ability to do that first pass of variance analysis is gonna be huge,” Suter says. “Why was this off? Oh, well, there was one anomalous transaction and then, okay, great, we can go look at that and figure out what that was. It’ll just really speed up that whole process.”
The work doesn’t disappear. Finance teams still need to interpret findings and determine whether variances matter. But AI eliminates the manual scanning that consumes time without adding insight.
“I’m sure there’s still gonna be some manual interpretation of what it is, but to point big red flashing arrows at a couple of different things to go dig into, I think is gonna be great,” Suter adds.
This represents realistic expectations for AI forecasting: not replacing financial planning professionals, but accelerating the mechanical work that prevents them from focusing on judgment, strategy, and business partnership.
The danger is overselling AI’s capabilities. Suter remains skeptical that AI can simply ingest all available data and produce perfect forecasts without human oversight.
“I’m a little less bullish on its ability just to put in all the data and like out pops a bulletproof revenue forecast,” he says.
The explainability problem remains unsolved. Finance leaders can’t present forecasts to boards or executives without explaining the logic behind the numbers. If the model is a black box, the forecast is unusable regardless of accuracy.
The Bottom Line for Finance Leaders
AI forecasting is creating a gap between teams that experiment and those that wait for perfect solutions, marking a shift in current FP&A trends.
At Wasabi, that gap exists within the same company. Bento’s BI team is building ML models, automating reports, and testing multiple AI platforms. Suter’s FP&A team is still in Excel but moving deliberately toward AI-enabled planning tools.
Both approaches work because both teams understand their role. BI pushes innovation and builds capabilities. FP&A validates outputs and maintains skepticism. The partnership creates better outcomes than either team could achieve alone.
The finance leaders who will thrive:
- Implement AI for outcomes, not for buzzwords or board presentations
- Start with problems that have clear ROI, like automating two-hour weekly reports
- Build validation mechanisms to verify AI outputs against third data points
- Maintain healthy skepticism about black box forecasting
- Partner with BI and data science teams rather than competing with them
- Test multiple models and rank performance rather than betting on single approaches
- Focus AI on anomaly detection and first-pass analysis, where it excels today
The ones who demand bulletproof AI forecasts without human oversight will make catastrophic errors. The ones who refuse to experiment with AI because it’s not perfect will fall behind teams that iterate toward better solutions.
The opportunity is clear: AI forecasting works when it eliminates mechanical work, surfaces anomalies faster, and provides validated inputs to human judgment. Getting there requires intentionality, partnership between finance and BI, and the humility to admit when Excel still works better than algorithms.
This article is based on David Suter and Marcos Bento’s appearance on the FP&A Today podcast.
Marcos Bento is Director of Business Intelligence at Wasabi Technologies, a hot cloud storage company based in Boston. He leads a five-person team building machine learning models, implementing AI automation, and managing data infrastructure for the platform. Marcos rose through the ranks at Wasabi from Financial Analyst to Senior Data Analyst to Manager, Business Analytics before taking his current role. He holds an MBA from Babson College’s F.W. Olin Graduate School of Business.
David Suter is Director of FP&A at Wasabi Technologies, leading a four-person team focused on financial planning, forecasting, and business partnership. Before Wasabi, he held Senior Director positions at PeopleFluent and Sovos Compliance, and earlier in his career spent a number of years at PTC in strategic planning and operations roles.