March 30, 2026 might seem far away. For organizations running Amazon DocumentDB version 3.6, that date marks the end of standard support and the beginning of an expense that doesn’t show up in any forecast, doesn’t appear in any budget planning session, and quietly accumulates on your AWS bill month after month.

AWS Extended Support is one of those charges that feels reasonable when you read about it in documentation. Your database keeps running, AWS continues providing critical security patches and bug fixes, and you get more time to plan your upgrade. The catch? You’re paying for that time, and the pricing isn’t trivial.

Where This Started

At Wiv, we had already built monitoring for OpenSearch, EKS, and RDS Extended Support. The pattern was familiar: older engine versions approaching end of standard support, organizations caught off guard by new line items on their invoices, and engineering teams scrambling to prioritize upgrades they’d been deferring for quarters. When AWS announced the Extended Support timeline for DocumentDB 3.6, we knew the same scenario would play out.

The assumption going in was straightforward: take the OpenSearch automation, swap out the service-specific details, and deploy. OpenSearch Extended Support uses a normalized instance hour pricing model: AWS publishes a regional rate, you multiply by a normalization factor based on instance size, and the math is clean. DocumentDB would surely follow the same pattern.
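For comparison, that OpenSearch-style calculation looks roughly like the sketch below. The regional rate and normalization factors are placeholders chosen to illustrate the shape of the formula, not published AWS prices.

```python
# Illustrative normalized-instance-hour pricing (the OpenSearch-style model).
# The rate and factors below are placeholders, not published AWS prices.
NORMALIZATION_FACTORS = {
    "medium": 2,
    "large": 4,
    "xlarge": 8,
    "2xlarge": 16,
}

def hourly_extended_support_cost(instance_size: str, regional_rate: float) -> float:
    """Regional rate per normalized instance hour, scaled by the size factor."""
    return regional_rate * NORMALIZATION_FACTORS[instance_size]

print(hourly_extended_support_cost("2xlarge", 0.03))  # 0.48 with a hypothetical $0.03 rate
```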

That assumption lasted about fifteen minutes.

The Pricing Puzzle

DocumentDB Extended Support pricing works nothing like OpenSearch. Instead of a formula-based approach with normalization factors, AWS publishes explicit per-hour rates for every combination of instance type, region, and support year. A db.r5.large in eu-west-1 during Year 1 has a specific price. The same instance during Year 3 has a different, higher price. Every instance family, every size, every region, all individually priced.

The variance across this matrix is dramatic. A db.t3.medium in us-east-2 costs $0.20 per hour in Extended Support. A db.r5.24xlarge in sa-east-1 runs $71.52 per hour. That’s roughly a 350x difference depending on what you’re running and where. Annualized, that single large instance translates to over $626,000 in Extended Support charges ($71.52 × 8,760 hours per year). Organizations running DocumentDB clusters with multiple large instances across regions could face seven-figure exposure without realizing it until the invoices arrive.

The first challenge was simply finding this data in a usable format. The AWS pricing page displays it in tables, which works fine for humans checking a single instance type but falls apart when you need to calculate costs across an entire fleet programmatically.

This is where things got interesting. AWS publishes its complete pricing catalog as JSON files, publicly accessible without any IAM credentials. A single curl command to https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonDocDB/current/index.json returns everything: every SKU, every region, every price point. The file is large, but it contains exactly what we needed: machine-readable pricing that updates automatically as AWS adjusts rates.
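Fetching it is a one-liner in practice; here is a minimal Python sketch against that public endpoint, no credentials required, just a large download.

```python
import requests

# Public Price List bulk file for DocumentDB; no IAM credentials required.
PRICING_URL = (
    "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/"
    "AmazonDocDB/current/index.json"
)

def fetch_docdb_pricing() -> dict:
    """Download the complete DocumentDB pricing catalog as a dictionary."""
    response = requests.get(PRICING_URL, timeout=60)
    response.raise_for_status()
    return response.json()

catalog = fetch_docdb_pricing()
print(len(catalog["products"]), "SKUs in the catalog")
```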

Parsing that JSON revealed the structure. Each Extended Support SKU encodes the instance type, region, engine version, and support year directly in the usage type field. APN1-ExtendedSupport-Y2-db.r5.8xlarge-3.6 tells you everything: Asia Pacific Tokyo region, Year 2 pricing, db.r5.8xlarge instance, MongoDB compatibility version 3.6. The actual hourly rate sits in the associated terms object.
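Pulling those fields apart is a simple split. A sketch, assuming the hyphen-delimited format holds across SKUs: the instance type itself uses dots, so splitting on hyphens works, and us-east-1 usage types appear to carry no region prefix.

```python
def parse_extended_support_usage_type(usage_type: str) -> dict:
    """Split a usage type like 'APN1-ExtendedSupport-Y2-db.r5.8xlarge-3.6' into its parts."""
    parts = usage_type.split("-")
    if parts[0] == "ExtendedSupport":
        # us-east-1 usage types carry no region prefix (assumption), so pad one in.
        parts = ["USE1"] + parts
    region_prefix, _, support_year, instance_type, engine_version = parts
    return {
        "region_prefix": region_prefix,    # e.g. APN1, the Asia Pacific (Tokyo) billing code
        "support_year": support_year,      # Y1, Y2, or Y3
        "instance_type": instance_type,    # e.g. db.r5.8xlarge
        "engine_version": engine_version,  # e.g. 3.6
    }

print(parse_extended_support_usage_type("APN1-ExtendedSupport-Y2-db.r5.8xlarge-3.6"))
```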

Building a pricing lookup from this data meant restructuring it into something queryable: engine version at the top level, then instance type, then region, then year. The result lets you answer “what does Extended Support cost for this specific cluster” in a single dictionary lookup rather than parsing AWS pricing logic on every calculation.
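A sketch of that restructuring, assuming the standard Price List layout (products keyed by SKU, Extended Support priced as OnDemand terms) and reusing the usage-type parser from above:

```python
from collections import defaultdict

def build_pricing_lookup(catalog: dict) -> dict:
    """Restructure the bulk file into engine -> instance type -> region -> year -> hourly rate."""
    lookup = defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))
    for sku, product in catalog["products"].items():
        usage_type = product.get("attributes", {}).get("usagetype", "")
        if "ExtendedSupport" not in usage_type:
            continue
        parsed = parse_extended_support_usage_type(usage_type)  # sketch from above
        engine = parsed["engine_version"]
        itype = parsed["instance_type"]
        region = parsed["region_prefix"]
        year = parsed["support_year"]
        for term in catalog["terms"]["OnDemand"].get(sku, {}).values():
            for dimension in term["priceDimensions"].values():
                lookup[engine][itype][region][year] = float(dimension["pricePerUnit"]["USD"])
    return lookup

# pricing = build_pricing_lookup(catalog)
# pricing["3.6"]["db.r5.large"]["EUW1"]["Y1"]  -> hourly Extended Support rate
```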

The Timeline Complexity

Pricing was only half the problem. The other half was understanding where each cluster sits in the support lifecycle and what that means financially.

DocumentDB 3.6 Extended Support spans three years, but not all years cost the same. Years 1 and 2 share one price point. Year 3 is higher, roughly 44% more per hour. A cluster that enters Extended Support on March 31, 2026 will pay one rate until March 2028, then see that rate jump for the final year through March 2029.

For clusters still in standard support, the calculation is entirely about potential future costs. You want to know: if we don’t upgrade, what’s the total exposure? For clusters already in Extended Support, the question splits in two: what have we already paid, and what’s still coming?

The automation needed to handle all of these states. A cluster with 63 days remaining in standard support needs different messaging than one that’s 200 days into Year 2 of Extended Support. The former is a planning conversation. The latter is a conversation about sunk costs and remaining exposure.

We ended up tracking the current support phase explicitly (standard, Y1, Y2, Y3, or ended) along with days remaining in that phase. When someone asks “where do we stand,” the answer is immediate and specific.
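A sketch of that phase calculation follows; the exact rollover dates are assumptions based on the published timeline, with standard support ending March 30, 2026 and each Extended Support year running through the following March.

```python
from datetime import date

# Phase boundaries for DocumentDB 3.6; exact dates are assumptions from the published timeline.
PHASE_ENDS = [
    ("standard", date(2026, 3, 30)),
    ("Y1", date(2027, 3, 30)),
    ("Y2", date(2028, 3, 30)),
    ("Y3", date(2029, 3, 30)),
]

def support_phase(today: date) -> tuple[str, int]:
    """Return the current support phase and the days remaining in it."""
    for phase, phase_end in PHASE_ENDS:
        if today <= phase_end:
            return phase, (phase_end - today).days
    return "ended", 0

print(support_phase(date(2026, 2, 1)))  # ('standard', 57)
```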

From Clusters to Instances

Another discovery during implementation: DocumentDB clusters don’t carry instance type information directly. The AWS API returns cluster-level metadata (identifiers, endpoints, engine versions, security groups), but the actual compute configuration lives at the instance level. A single cluster can have multiple instances, potentially of different sizes, each with its own cost profile.

The automation needed an additional API call per cluster to retrieve instance details, then aggregate costs across all instances in that cluster. A cluster with three db.r5.xlarge instances has three times the Extended Support exposure of a single-instance cluster. Obvious in retrospect, but easy to miss when you’re focused on cluster-level reporting.
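That per-cluster lookup is one extra call with boto3. A sketch using the db-cluster-id filter; the cluster identifier here is hypothetical, and pagination is left out for brevity.

```python
import boto3

def instance_classes_for_cluster(region: str, cluster_id: str) -> list[str]:
    """List the instance class of every instance that belongs to one DocumentDB cluster."""
    docdb = boto3.client("docdb", region_name=region)
    response = docdb.describe_db_instances(
        Filters=[{"Name": "db-cluster-id", "Values": [cluster_id]}]
    )
    return [instance["DBInstanceClass"] for instance in response["DBInstances"]]

# Three db.r5.xlarge instances mean three times the hourly exposure of one.
print(instance_classes_for_cluster("eu-west-1", "my-docdb-cluster"))
```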

What the Automation Actually Does

The workflow runs on a schedule, pulling DocumentDB clusters across accounts and regions. For each cluster, it extracts the engine version, checks eligibility for Extended Support against the published timeline, fetches the associated instances, looks up pricing for each instance type and region combination, and calculates costs across the full support lifecycle.
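The cost math at the end of that pipeline is straightforward once the pricing lookup and instance list exist. A sketch of the full-lifecycle exposure for one cluster, with hypothetical rates standing in for the real lookup:

```python
HOURS_PER_YEAR = 8_760

def lifetime_exposure(instance_classes: list[str], rates_by_year: dict) -> float:
    """Total Extended Support exposure across Y1-Y3 if the cluster is never upgraded."""
    total = 0.0
    for instance_class in instance_classes:
        for year in ("Y1", "Y2", "Y3"):
            total += rates_by_year[instance_class][year] * HOURS_PER_YEAR
    return total

# Hypothetical hourly rates for illustration; Y3 carries the roughly 44% premium.
example_rates = {"db.r5.xlarge": {"Y1": 1.50, "Y2": 1.50, "Y3": 2.16}}
print(lifetime_exposure(["db.r5.xlarge"] * 3, example_rates))  # 135604.8
```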

The output distinguishes between current and potential charges. Clusters approaching Extended Support show projected costs if no action is taken. Clusters already in Extended Support show both accumulated charges and remaining exposure. Everything rolls up to account and organizational views for prioritization.

Alerts trigger based on configurable thresholds, by default 60 days before Extended Support begins: enough time to plan an upgrade, coordinate with application teams, and execute without the pressure of charges already hitting the invoice.
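With the phase tracking in place, the alert condition itself is trivial. A sketch building on the support_phase helper above:

```python
ALERT_THRESHOLD_DAYS = 60  # configurable; the default lead time described above

def should_alert(phase: str, days_remaining: int) -> bool:
    """Alert while still in standard support once Extended Support is close enough to matter."""
    return phase == "standard" and days_remaining <= ALERT_THRESHOLD_DAYS

print(should_alert("standard", 45))  # True
```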

The Broader Point

Extended Support charges are completely avoidable. They exist because upgrades didn’t happen, and upgrades didn’t happen because competing priorities won. That’s a reasonable tradeoff when you’re making it consciously, with full visibility into the cost implications. It’s a problematic outcome when the charges appear months later as a surprise.

FinOps as a discipline talks extensively about optimization: rightsizing, reserved instances, spot usage, storage tiering. Less attention goes to charges that shouldn’t exist in the first place. Extended Support falls into that category. So do outdated snapshots, abandoned resources, and over-provisioned development environments running 24/7.

The pattern that connects these isn’t cost optimization. It’s operational hygiene. Knowing what you’re running, understanding the cost trajectory, and surfacing decisions before they become invoices.

Building the automation to surface DocumentDB Extended Support exposure took the better part of a day. The pricing investigation, the API quirks, the edge cases around clusters with no instances: all of that required digging. Without the tooling to express that logic as a workflow, the alternative would have been a script running somewhere, owned by someone, maintained by nobody after that person moves on.

Automation isn’t optional for FinOps at scale. The cloud generates too many resources, too many pricing models, too many edge cases for manual tracking. The question is whether you’re building that automation yourself, buying it from vendors, or accepting that some charges will slip through because nobody was watching.

The organizations that treat cloud cost management as an engineering problem, with monitoring, alerting, and proactive intervention, consistently outperform those treating it as a reporting exercise. Extended Support is a small example. The principle applies everywhere.