It started with a conversation at a conference. Between sessions, I was demoing Wiv.ai to a DevOps engineer who had a problem that resonated deeply with me. His manager had tasked him with figuring out how much Compute Savings Plan commitment their organization should purchase, and he was stuck. He wasn’t a FinOps practitioner with years of experience navigating AWS pricing models – he just had a task, a deadline, and a spreadsheet that wasn’t getting him anywhere.

“AWS recommends we buy $150/hour,” he told me, “but I have no idea if that’s right. I can’t explain it to finance, I can’t defend it in a meeting. It’s just a number from a black box.”

That conversation stuck with me because the problem is universal. Across hundreds of customers on our platform, we kept hearing the same frustration: organizations were being asked to commit tens of thousands of dollars monthly based on recommendations they couldn’t verify, couldn’t customize, and couldn’t trust. By the end of that day, we had built something different, and it took about an hour.

The Trust Problem with Savings Plan Recommendations

AWS reservation recommendations aren’t wrong – they’re based on your historical usage, and they generally point you in a reasonable direction. The problem is opacity, and opacity becomes a serious issue when you’re about to sign a one-year or three-year commitment worth six or seven figures. Finance wants to understand the logic, engineering wants to know the assumptions and verify that the right workloads are being considered, and leadership wants scenarios they can evaluate. The person responsible for the recommendation wants to sleep at night knowing they can defend their decision.

We’ve seen this play out in uncomfortable ways. One customer came to us after a third-party vendor recommended a specific coverage target, but the vendor’s model was proprietary and the customer had no way to verify whether the recommendation made sense for their specific workload patterns. They were about to commit significant budget based on faith alone, which is exactly the situation that makes experienced practitioners uncomfortable.

Rethinking the Approach

When we sat down to build our own Savings Plan analysis, we started with a fundamental question about what someone actually needs to know before purchasing a commitment. The obvious answer, “how much should I buy?”, turned out to be the wrong starting point. The real questions are more nuanced: understanding current coverage levels, measuring usage variability, identifying the point where additional commitment stops making financial sense, and quantifying the risk of over-commitment.

A good Savings Plan calculator shouldn’t produce a single number. It should give practitioners the information they need to make a decision they can defend, which meant building something transparent that shows its work at every step.

Understanding the Data Foundation

Every Savings Plan calculation ultimately comes down to one data source: the Cost and Usage Report. Specifically, we needed to understand hourly compute spend broken down by payment method – whether it was covered by Reserved Instances, by existing Savings Plans, paid on-demand, or running on Spot instances.

The key insight is that Savings Plans can only cover what we call “eligible compute,” which includes Reserved Instance usage, Savings Plan usage, and on-demand usage. Spot instances are already discounted through a different mechanism and aren’t eligible for Savings Plan coverage, so they need to be tracked separately but excluded from coverage calculations.

Step one was aggregating hourly data into these buckets. For each hour in the analysis period, we calculated exactly how much compute spend fell into each category, and this gave us the foundation for everything else.
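
Here’s a minimal sketch of what that hourly aggregation could look like with pandas. The column names (usage_start, cost, payment_category) and the bucket labels are illustrative assumptions, not the actual CUR schema, which is much wider and needs to be mapped onto something like this first.

```python
import pandas as pd

# Assumed, illustrative columns: "usage_start" (timestamp), "cost" (hourly
# line-item cost), and "payment_category" derived from the CUR pricing fields.
ELIGIBLE = {"reserved_instance", "savings_plan", "on_demand"}  # Spot excluded

def bucket_hourly_spend(cur: pd.DataFrame) -> pd.DataFrame:
    """Aggregate compute spend into hourly payment-method buckets."""
    hourly = (
        cur.groupby([pd.Grouper(key="usage_start", freq="h"), "payment_category"])["cost"]
        .sum()
        .unstack(fill_value=0.0)
    )
    # Savings Plans can only cover "eligible compute": RI + SP + on-demand.
    eligible_cols = [c for c in hourly.columns if c in ELIGIBLE]
    hourly["eligible_compute"] = hourly[eligible_cols].sum(axis=1)
    return hourly
```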

The Coverage Calculation

Current coverage sounds simple: take what’s already covered by RIs and Savings Plans, divide by total eligible compute, and express it as a percentage. Conceptually it is that simple, but the devil is in the details.

We quickly learned that looking at averages alone is dangerous. A customer might show 80% average coverage, but when you examine the hourly breakdown, you see wild swings: 95% on weekends when batch jobs are running, 65% on peak weekdays when interactive workloads dominate. That variability matters enormously when deciding how much additional commitment to purchase, because commitments are paid hour by hour whether you use them or not.

So we built in coverage variability tracking: minimum daily coverage, maximum daily coverage, and the spread between them. A high spread is a warning sign indicating that your workload is variable and that aggressive commitment targets carry more risk of waste.
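
Building on the hourly buckets from the sketch above (same illustrative column names), the coverage and variability numbers fall out in a few lines:

```python
import pandas as pd

def coverage_stats(hourly: pd.DataFrame) -> dict:
    """Current coverage plus daily variability: average, min, max, and spread."""
    covered = hourly.get("reserved_instance", 0.0) + hourly.get("savings_plan", 0.0)
    coverage = covered / hourly["eligible_compute"]  # hourly coverage ratio

    daily = coverage.resample("D").mean()  # index is the hourly timestamp
    return {
        "avg_coverage": coverage.mean(),
        "min_daily_coverage": daily.min(),
        "max_daily_coverage": daily.max(),
        # A high spread is the warning sign: variable workloads make
        # aggressive commitment targets riskier.
        "daily_spread": daily.max() - daily.min(),
    }
```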

Finding the Optimal Point

Here’s where the math gets interesting. The goal of purchasing a Savings Plan is straightforward – you’re committing to a certain hourly spend in exchange for a discount, typically around 20-40% depending on the plan type and term. The catch is that you pay for that commitment whether you use it or not.

If your on-demand usage drops below your commitment in any given hour, you’re paying for compute you didn’t consume. We call those “waste hours,” and the more variable your workload, the more waste hours you’ll accumulate at higher commitment levels.

This creates a tension that drives the entire optimization problem. More commitment means more discount on the hours you do use, but also more waste on the hours you don’t. Somewhere between zero commitment and maximum commitment is an optimal point: the level where your net savings (discount minus waste) is maximized.
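
To make that tension concrete, here’s a per-hour sketch of the simplified model this post describes: the discount is earned on whatever portion of spend the commitment absorbs, and the unused slice of the commitment is waste. The 30% discount rate is illustrative.

```python
DISCOUNT_RATE = 0.30  # illustrative; the real rate depends on plan type and term

def hourly_net_savings(on_demand_spend: float, commitment: float) -> float:
    """Net savings for a single hour at a given hourly commitment."""
    covered = min(on_demand_spend, commitment)      # spend the commitment absorbs
    waste = max(0.0, commitment - on_demand_spend)  # commitment left unused
    return DISCOUNT_RATE * covered - waste

# A busy hour captures the full discount; a quiet hour eats into it.
print(hourly_net_savings(150.0, 100.0))  # 0.30 * 100 - 0  = 30.0
print(hourly_net_savings(60.0, 100.0))   # 0.30 * 60 - 40  = -22.0
```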

A visual way to think about it:

[Figure: net savings plotted against commitment level, marking the optimal and break-even coverage points]

Optimal Coverage is the point where you get the maximum net savings: the sweet spot where the savings from the SP discount minus the waste from unused commitment hours is at its highest.

Break-Even Coverage is the maximum point before you start losing money, where your savings from the discount exactly equal your waste from unused commitment hours. Go beyond this, and your waste exceeds your savings, resulting in negative ROI.

Finding the optimal point requires iterating through possible commitment levels and calculating the net savings for each. For every potential commitment amount, we look at each hour in the historical data: if that hour’s on-demand usage was above the commitment, we capture the full discount; if it was below, we calculate the waste. Summing it all up produces the net savings for that commitment level, and the optimal point is simply the commitment level with the highest net savings.
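
A sketch of that sweep, reusing the simplified per-hour model above. The grid step and discount rate are illustrative, and on_demand_hourly is assumed to be an array of hourly eligible on-demand spend:

```python
import numpy as np

DISCOUNT_RATE = 0.30  # illustrative; same rate as the per-hour sketch above

def sweep_commitments(on_demand_hourly: np.ndarray, step: float = 1.0):
    """Evaluate net savings at each candidate hourly commitment level.

    Returns (optimal_commitment, net) and (break_even_commitment, net): the
    level with the highest net savings, and the largest level whose net
    savings is still non-negative.
    """
    candidates = np.arange(0.0, on_demand_hourly.max() + step, step)
    results = []
    for commitment in candidates:
        covered = np.minimum(on_demand_hourly, commitment)
        waste = np.maximum(0.0, commitment - on_demand_hourly)
        net = float((DISCOUNT_RATE * covered - waste).sum())
        results.append((commitment, net))

    optimal = max(results, key=lambda r: r[1])
    break_even = max((r for r in results if r[1] >= 0.0), key=lambda r: r[0])
    return optimal, break_even
```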

This arithmetic is verifiable by anyone, which was exactly the point.

The Percentile Control

One challenge we encountered was dealing with outliers in the hourly data. Workloads often have spikes (a monthly batch job, an unexpected traffic surge, a one-time data migration) that can skew the analysis if treated as normal operating conditions. We needed a way to let practitioners control how much of this edge-case data influenced the recommendations.

The solution was introducing a percentile parameter that controls the trade-off between aggressiveness and conservatism. When set to the 75th percentile, for example, the analysis focuses on covering the on-demand spend that occurs 75% of the time, acknowledging that the highest 25% of hours might represent anomalies rather than steady-state operations. Setting it higher (90th, 99th percentile) produces more aggressive recommendations that cover more edge cases, while setting it lower produces more conservative recommendations that accept some on-demand exposure during peak periods.
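
One way to wire that in, as an assumption about the mechanics rather than a description of our exact implementation, is to cap the commitment levels the sweep considers at the chosen percentile of hourly on-demand spend:

```python
import numpy as np

def candidate_commitments(on_demand_hourly: np.ndarray,
                          percentile: float = 75.0,
                          step: float = 1.0) -> np.ndarray:
    """Candidate commitment levels capped at a percentile of hourly on-demand spend.

    75 ignores the spikiest quarter of hours when sizing the commitment;
    90 or 99 chases more of the edge cases; lower values accept more
    on-demand exposure during peaks.
    """
    cap = np.percentile(on_demand_hourly, percentile)
    return np.arange(0.0, cap + step, step)
```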

This parameter gives practitioners explicit control over a trade-off that generic recommendations handle implicitly and opaquely. Someone who knows their workload includes predictable monthly spikes can adjust accordingly, while someone with steady-state compute can optimize more aggressively. The key is that the choice is visible and adjustable rather than buried in a proprietary algorithm.

Making It Target-Aware

A crucial design decision emerged as we tested with real customers: the calculator shouldn’t just report what’s optimal; it should evaluate whatever target the practitioner is considering and explain whether it makes sense.

We built target-aware analysis where you input your desired coverage percentage and the system calculates exactly what would happen if you pursued it. The output tells you whether the target is already achieved by current coverage, whether it’s achievable with positive ROI, whether it’s above optimal but still profitable, or whether it’s past break-even and actually harmful to pursue.

For each scenario, we calculate the specific commitment required, the expected waste hours percentage, and the net financial impact. This changes the conversation entirely. Instead of asking “should we buy the recommended amount?”, teams can explore what happens at 85% versus 90% versus 95% coverage, seeing the trade-offs and understanding the risks before making a decision.
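
A sketch of that classification, with the coverage inputs assumed to come from the coverage and sweep calculations above:

```python
def evaluate_target(target: float, current: float,
                    optimal: float, break_even: float) -> str:
    """Classify a desired coverage target against the computed curve."""
    if target <= current:
        return "already achieved by current coverage"
    if target <= optimal:
        return "achievable with positive ROI, at or below the optimal point"
    if target <= break_even:
        return "above optimal but still profitable; expect more waste hours"
    return "past break-even: pursuing this target loses money"

# Illustrative numbers, echoing the 97% target vs. ~90% break-even example below.
print(evaluate_target(0.97, current=0.80, optimal=0.86, break_even=0.90))
```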

We saw this matter in practice. A customer came to us with a target of 97% coverage, reasoning that higher coverage must be better. When we ran the analysis, their break-even point was around 90% due to workload variability. Pursuing that 97% target would have resulted in waste hours and a net negative return on their commitment. The “better” coverage would have cost them money. Having the target-aware analysis let them see this before committing rather than discovering it in next quarter’s cost report.

The Outcome

The DevOps engineer from the conference was exactly the person we built this for. He wasn’t looking for a magic number; he was looking for a way to understand the problem well enough to make a defensible recommendation.

With our approach, he could walk into his leadership meeting with a complete picture: current coverage, variability analysis, optimal point, break-even boundary, and a clear explanation of what happens at different commitment levels. He could answer follow-up questions and run scenarios on the spot because he owned the decision. He understood the logic behind it.

That’s the real outcome. When you can see inside the calculation, you can adapt it to your specific context. Maybe your organization is risk-averse and wants to stay well below optimal, or maybe you have planned growth that historical data doesn’t capture, or perhaps you’re consolidating workloads and expect usage patterns to change significantly. Generic recommendations can’t account for any of that, but calculations you understand and control can.

The Larger Point

We built this particular analysis in about an hour, start to finish. That speed wasn’t because the math is trivial; it was because we’d designed our platform around the principle that FinOps practitioners shouldn’t wait on vendors or engineers to solve their problems.

The questions you need to answer about your cloud spend are specific to your organization. Your workload patterns, your risk tolerance, your business context: these aren’t things a generic recommendation engine can fully capture, but they are things you understand. What practitioners often lack is the ability to translate that understanding into calculations quickly enough to be useful. By the time you’ve built a custom spreadsheet, exported the data, cleaned it up, written the formulas, and validated the results, the decision has already been made based on whatever recommendation was most convenient.

The people closest to the business context should be empowered to build the analyses they need, when they need them. The Savings Plan calculator we built isn’t special because of the math; the math is straightforward. It’s useful because it took an hour instead of a week, and because the team that needed it could verify every step.

Moving Forward

If there’s one thing I’d want you to take from this, it’s that you don’t have to accept black box recommendations for high-stakes financial decisions. The math behind Savings Plan optimization isn’t proprietary or mysterious – it’s knowable, and you have every right to know it.

Whether you build your own calculations, use a platform like ours, or work with a vendor who shows their work, demand transparency. Understand the assumptions, verify the logic, and own your decisions. When you’re committing hundreds of thousands of dollars, “the algorithm said so” isn’t a strategy. Understanding why is.