I’ve been practicing FinOps for the last 8 years, often working at its cutting edge. Coming from an engineering background, I’ve always followed the RTFM (Read The Fine Manual) approach – understanding that documentation holds the key to discovering solutions that aren’t immediately apparent. Recently, I had an insightful conversation with my CEO and a customer about FinOps documentation, or rather, the common misconception about what constitutes essential FinOps documentation.
Most practitioners primarily focus on financial documentation: pricing pages, cost metrics explanations, and pricing calculation examples. Some venture further into basic engineering concepts related to compute, storage, and data transfer best practices. However, to truly master FinOps and unlock its full potential, we need to look beyond these conventional resources.
The Evolution of FinOps Knowledge
Today’s FinOps practitioners share a common foundation of best practices. Those with at least a year of experience understand fundamental concepts like rightsizing, identifying idle resources, and managing unused databases. Even professionals from financial backgrounds have bridged the technical gap, becoming familiar with concepts such as CPU utilization, dynamic memory, I/O, disk storage, data transfer, and NAT – just to name a few.
However, standard pricing and cost documentation often falls short. While these resources tell us how much services cost, they don’t always reveal the nuances of efficiency and optimization. The real insights often lie in the technical documentation of the services themselves. Let me share three illuminating examples:
Hidden Optimization Opportunities
The Truth About Snapshot Sizing
A common misconception involves AWS snapshot pricing. Consider this: if you have a 50GB volume in AWS and create a snapshot, what would be the storage cost? Many practitioners multiply the volume size (50GB) by the snapshot price – but this calculation is incorrect. According to AWS documentation: “The size of a full snapshot is determined by the size of the data being backed up, not the size of the source volume.” Since Amazon EBS doesn’t save empty blocks, your snapshot size – and therefore cost – is likely considerably smaller than the source volume size.
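To make the arithmetic concrete, here is a minimal Python sketch of the correct calculation. The per-GB price and the example data sizes are assumptions for illustration (snapshot pricing varies by region); the block size reflects the 512 KiB blocks the EBS direct APIs report, which is how you could measure the data actually written:

```python
PRICE_PER_GB_MONTH = 0.05  # assumed standard-tier snapshot price; varies by region

def snapshot_data_gb(block_count: int, block_size_bytes: int = 512 * 1024) -> float:
    """Approximate the actual snapshot size from the number of written blocks
    (e.g. as counted via the EBS direct API ListSnapshotBlocks)."""
    return block_count * block_size_bytes / 1024**3

def monthly_snapshot_cost(used_gb: float) -> float:
    """Bill the data actually backed up, not the provisioned volume size."""
    return used_gb * PRICE_PER_GB_MONTH

# A 50 GB volume holding only 12 GB of data is billed on ~12 GB:
naive_estimate = monthly_snapshot_cost(50)   # the common (wrong) assumption
actual_estimate = monthly_snapshot_cost(12)  # based on data actually written
```

The difference between the two estimates is exactly the empty space EBS never stores.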
CloudFront Compression: Beyond the Basic Toggle
Optimizing CloudFront distribution costs through compression has different requirements depending on which caching policy type you’re using:
For distributions using modern Cache Policies, AWS documentation outlines three essential requirements:
- The “Compress” setting must be set to true in the cache behavior
- The cache policy must have both Gzip and Brotli settings explicitly enabled (EnableAcceptEncodingGzip and EnableAcceptEncodingBrotli)
- Cache policy TTL values must be greater than zero – setting Minimum TTL to zero disables compression caching
For distributions using Legacy Cache Behaviors, the requirements are simpler:
- Only the “Compress” setting needs to be set to true
- No additional configuration is needed as Accept-Encoding handling for both gzip and Brotli compression is automatic
In both cases, practitioners should verify whether their file types are included in CloudFront’s supported compression formats to understand if savings are possible. CloudFront only compresses files between 1,000 and 10,000,000 bytes, and only specific content types like text/html, application/json, and text/css are eligible for compression.
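The modern cache-policy checklist above can be expressed as a small eligibility check. This is a hypothetical helper, not a CloudFront API – the flag names simply mirror the settings discussed, and the content-type set lists only a few of the documented eligible types:

```python
# A few of CloudFront's documented compressible content types (not exhaustive)
COMPRESSIBLE_TYPES = {"text/html", "application/json", "text/css"}
MIN_BYTES, MAX_BYTES = 1_000, 10_000_000  # documented compression size window

def compression_possible(compress: bool, gzip_enabled: bool, brotli_enabled: bool,
                         min_ttl: int, content_type: str, size_bytes: int) -> bool:
    """Check the preconditions for compression under a modern cache policy."""
    if not compress:                              # cache behavior "Compress" toggle
        return False
    if not (gzip_enabled and brotli_enabled):     # EnableAcceptEncodingGzip/Brotli
        return False
    if min_ttl <= 0:                              # zero Minimum TTL disables it
        return False
    return MIN_BYTES <= size_bytes <= MAX_BYTES and content_type in COMPRESSIBLE_TYPES
```

Running a distribution's settings through a check like this quickly shows which of the three requirements is silently blocking savings.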
Understanding RDS Backup Economics
When investigating RDS snapshot costs in test or staging environments, many overlook a crucial detail from AWS documentation: each region provides free backup storage equal to 100% of your total database storage. For instance, with a 500GB database, you receive 500GB of free backup storage. Charges only apply to storage beyond this threshold, at $0.095 per GB-month.
Excess costs typically arise from:
- Retaining snapshots longer than necessary
- Storing unnecessary manual snapshots
- Keeping snapshots from terminated databases (which always incur charges, since there is no longer an active database to provide the free storage allowance)
The key to optimization lies in regular snapshot audits, removing unnecessary backups, and adjusting retention periods to maximize the free storage allowance while maintaining required backup coverage.
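The billing model above reduces to a simple formula: only backup storage beyond the provisioned database size is charged, and a terminated database contributes no free allowance. Here is a minimal sketch, using the article's $0.095 per GB-month rate (prices vary by region):

```python
RDS_BACKUP_PRICE_PER_GB_MONTH = 0.095  # assumed rate; varies by region

def rds_backup_cost(backup_gb: float, provisioned_db_gb: float) -> float:
    """Monthly RDS backup storage cost: the first `provisioned_db_gb` of
    backups are free; only the excess is billed. Pass provisioned_db_gb=0
    for snapshots of a terminated database (no free allowance remains)."""
    billable_gb = max(backup_gb - provisioned_db_gb, 0)
    return billable_gb * RDS_BACKUP_PRICE_PER_GB_MONTH

# 700 GB of backups for a 500 GB database: only 200 GB is billed.
# 300 GB of backups for the same database: fully covered, $0.
```

Note how the same 300 GB of snapshots becomes fully billable the moment the source database is deleted – which is why orphaned snapshots deserve the first pass in any audit.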
These examples demonstrate why reading service-specific technical documentation is crucial for effective FinOps practice. In an era where AI assistants can provide optimization suggestions, your ability to craft precise queries based on deep service knowledge becomes invaluable. This comprehensive understanding ensures you don’t overlook critical optimization opportunities or service-specific nuances that could impact your cost optimization strategies.
Understanding both financial and technical documentation empowers FinOps practitioners to move beyond surface-level optimizations and unlock deeper cost-saving opportunities. The true edge in FinOps comes not just from knowing costs, but from thoroughly understanding how services work and interact.
Conclusion: Beyond Surface-Level Optimization
Understanding cloud services efficiency requires more than familiarity with pricing models – it demands deep knowledge of their technical intricacies. Success in FinOps comes from mastering both the obvious costs and the subtle technical details that influence efficiency. Each service has its own set of best practices and considerations that, when properly understood, can significantly impact cost optimization efforts.
When Roni and Gil founded Wiv.ai, they set out to leverage our team’s collective experience – representing over 25 years of combined FinOps expertise across our CEO, CTO, and myself. Our goal was to transform this deep knowledge into a sophisticated, automated FinOps platform. We’ve embedded our comprehensive cloud expertise into every aspect of our solution, from intelligent remediation workflows to nuanced optimization strategies.
For example, our EC2 instance termination process demonstrates this attention to detail by first verifying the “delete on termination” flag, preventing potential resource waste. This same thorough approach guides how we discover and evaluate new optimization opportunities for our clients, drawing from our fundamental understanding of cloud operations.
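As a sketch of the kind of check involved (not our actual implementation), the flag lives on each instance's block device mappings, in the shape boto3's `describe_instances` returns. Volumes with `DeleteOnTermination` set to false survive termination and keep billing:

```python
def volumes_surviving_termination(block_device_mappings: list) -> list:
    """Return IDs of EBS volumes that would be left behind (and keep
    accruing charges) after the instance terminates."""
    return [
        m["Ebs"]["VolumeId"]
        for m in block_device_mappings
        if "Ebs" in m and not m["Ebs"].get("DeleteOnTermination", False)
    ]

# Example shaped like boto3 ec2.describe_instances() BlockDeviceMappings:
mappings = [
    {"DeviceName": "/dev/xvda", "Ebs": {"VolumeId": "vol-root", "DeleteOnTermination": True}},
    {"DeviceName": "/dev/sdf",  "Ebs": {"VolumeId": "vol-data", "DeleteOnTermination": False}},
]
leftovers = volumes_surviving_termination(mappings)  # ["vol-data"]
```

Terminating this instance without first handling `vol-data` would leave an orphaned volume quietly billing every month.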
Our philosophy is straightforward: we conduct in-depth research and technical analysis, allowing our clients to benefit directly from our expertise and experience. This approach creates value for everyone involved – we apply our specialized knowledge, and our clients receive optimized and efficient cloud operations without having to master every technical detail themselves.