Picture yourself sitting in front of a cloud cost dashboard, endlessly filtering through services and cost items, trying to make sense of your spending trends. Sound familiar? If so, I have a story that might resonate with you.

Eight years ago, I began my journey in the FinOps space. Like many FinOps practitioners, I spent my first two years building a comprehensive cost visibility platform in my previous role. The goal seemed straightforward: I wanted to know where every dollar was going. Looking back now, I realize I was asking the wrong question entirely.

The Traditional Approach: A Well-Intentioned Misalignment

Traditional BI-focused FinOps platforms excel at providing detailed billing insights, managing chargebacks, tracking unit economics, and offering multi-cloud visibility. These tools are undeniably crucial for financial planning and reporting. If your goal is to understand your cloud spending landscape comprehensively, these platforms serve their purpose well.

But here’s the challenge: while you might have solved the cost visibility puzzle – knowing each team’s spending and establishing unit economic thresholds – you’re likely still struggling with what truly matters. The real questions are: How do you effectively address cost spikes? What drives concerning trends? How do you identify and investigate anomalies?

In a small-scale environment with a limited tech stack, manually investigating cost dashboards might be manageable. However, as companies scale and their cloud service usage expands, understanding trends through traditional cost reports becomes increasingly complex and time-consuming.

The Automation Revolution: Shifting the Focus

So, where should FinOps practitioners focus their attention? The answer lies in automation. As a Financial Engineer, I’ve experienced firsthand the enormous context-switching overhead required from FinOps practitioners. Our ability to participate in designing, implementing, and contributing to new cloud initiatives is often limited by the time spent on manual investigation processes.

Automation isn’t just about reducing waste – it’s about fundamentally transforming how we approach cloud cost management. Whether your investigation strategy is top-down (from account to service to usage type), tag-based, or bottom-up to catch the smallest infrastructure changes, these processes can be systematically automated.
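To make the "top-down" strategy concrete, here is a minimal sketch of what automating that drill-down could look like: compare two periods at each level of the hierarchy and only descend into branches whose spend actually moved. The record fields (`account`, `service`, `usage_type`, `period`, `cost`) are illustrative, not taken from any real billing schema.

```python
def top_down_drilldown(records, levels=("account", "service", "usage_type"),
                       threshold=0.2):
    """Walk account -> service -> usage type, keeping only branches whose
    spend changed by more than `threshold` between two periods.

    records: dicts with the level keys plus 'period' ("prev"/"curr") and
    'cost'. A hypothetical sketch, not a real billing-API integration.
    """
    def drill(rows, depth, path):
        if depth == len(levels):
            return
        level = levels[depth]
        for key in sorted({r[level] for r in rows}):
            subset = [r for r in rows if r[level] == key]
            prev = sum(r["cost"] for r in subset if r["period"] == "prev")
            curr = sum(r["cost"] for r in subset if r["period"] == "curr")
            if prev and abs(curr - prev) / prev > threshold:
                findings.append((path + (key,), prev, curr))
                # Only drill deeper into the branches that moved, so the
                # investigation stays proportional to the actual change.
                drill(subset, depth + 1, path + (key,))

    findings = []
    drill(records, 0, ())
    return findings
```

The point of the sketch is the pruning: instead of eyeballing every account and service in a dashboard, the automation surfaces only the paths worth a human's attention.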

A New Approach: Intelligent Anomaly Detection

Working for a FinOps automation company has allowed me to translate my experience and investigative logic into actionable solutions. We’ve developed templates that revolutionize how we detect and analyze cost changes.

These templates are designed to analyze time-series cost data to identify abnormal patterns, trends, and anomalies that might indicate issues requiring attention. The approach is sophisticated and multi-layered, examining cost data through several different lenses.

The system first establishes what “normal” looks like by calculating baselines using median values and percentile thresholds rather than simple averages, which makes it resistant to being skewed by extreme values. It then hunts for various types of anomalies: single-day spikes, consecutive anomalies, periodic patterns, end-of-period surges, and persistent baseline increases. What makes this approach particularly sophisticated is its awareness of periodicity – it can recognize when certain costs follow regular patterns (daily, weekly, monthly) and adjust its expectations accordingly. For example, if costs always spike on Mondays, the system won’t flag a typical Monday increase as an anomaly, but will detect if a Monday spike is significantly higher than usual.
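The median-baseline and periodicity ideas above can be sketched in a few lines. This is a deliberately simplified, hypothetical version (the `detect_daily_anomalies` name and the weekday-tuple input format are my own), showing how a per-weekday median baseline lets a typical Monday spike pass while an unusually large Monday still gets flagged:

```python
import statistics

def detect_daily_anomalies(costs, pct_threshold=2.0):
    """Flag days whose cost deviates from a weekday-aware median baseline.

    costs: list of (weekday, cost) tuples in date order; weekday is 0-6.
    A minimal sketch -- the real templates layer on consecutive-anomaly,
    end-of-period, and baseline-shift checks as well.
    """
    # Build a per-weekday baseline from the median, which resists being
    # skewed by extreme values better than a simple mean would.
    by_weekday = {}
    for weekday, cost in costs:
        by_weekday.setdefault(weekday, []).append(cost)
    baselines = {d: statistics.median(v) for d, v in by_weekday.items()}

    anomalies = []
    for i, (weekday, cost) in enumerate(costs):
        baseline = baselines[weekday]
        # A normal Monday spike matches the Monday baseline; only flag
        # days that exceed their own weekday's norm by the threshold.
        if baseline > 0 and cost / baseline > pct_threshold:
            anomalies.append((i, cost, baseline))
    return anomalies
```

The same comparison generalizes to monthly patterns or end-of-period surges by swapping the weekday key for the relevant period key.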

The system applies a holistic evaluation approach, considering both percentage changes and absolute dollar impacts. A 200% increase might be significant for a large service but meaningless for a tiny one, so the code requires anomalies to exceed both percentage and minimum dollar thresholds. It also cleans the data by removing identified anomalies before looking for more subtle trends, preventing extreme values from hiding other patterns. The analysis produces prioritized results with meaningful explanations, quantifying the financial impact and providing context about when and how anomalies occurred. This context includes comparisons to previous similar periods (like same day last week) and whether the pattern aligns with expected periodic behavior.
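The dual-threshold test and the "clean before looking for subtler trends" step can also be illustrated with a small, hypothetical sketch (the function name and defaults are mine):

```python
import statistics

def significant_changes(costs, pct_threshold=2.0, min_dollars=100.0):
    """Flag days that exceed BOTH a percentage and an absolute-dollar
    threshold over the median baseline, then return the series with
    those days replaced by the baseline so that subtler trend analysis
    isn't skewed by the spikes. A simplified sketch of the idea."""
    baseline = statistics.median(costs)
    anomalies = []
    cleaned = []
    for i, cost in enumerate(costs):
        delta = cost - baseline
        # A 200% jump on a $5 service is noise; a large service can hide
        # a $5,000 jump under 1%. Require both tests to pass.
        if baseline > 0 and cost / baseline > pct_threshold and delta > min_dollars:
            anomalies.append((i, cost, delta))
            cleaned.append(baseline)  # strip the spike before trend checks
        else:
            cleaned.append(cost)
    return anomalies, cleaned
```

Running further trend detection on the `cleaned` series is what keeps one extreme spike from masking a slower, persistent baseline increase underneath it.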

In essence, this system doesn’t just tell you that something unusual happened with your costs – it tells you exactly what happened, when it happened, how significant it is, and whether you should be concerned about it, transforming raw data into actionable intelligence that helps you take control of your financial landscape.

The Future of FinOps

If you’re asking where FinOps focus should be, I believe it’s about acting on digestible, actionable information. If you find yourself manually tracking trends, struggling to determine priorities, or inefficiently engaging with engineers – it’s time to rethink your approach.

I’ve long advocated that automation is the key to effective FinOps, and I’m excited to see this vision becoming reality. The future of FinOps isn’t about spending hours in dashboards – it’s about leveraging intelligent automation to tell you exactly where your attention is needed most.

By automating the investigative process, we free up FinOps practitioners to focus on strategic initiatives, collaborate more effectively with engineering teams, and drive real value for their organizations. After all, isn’t that what FinOps should be about?