
The Snowflake Cost Control Guide

Snowflake is ridiculously productive until the monthly bill shows up. Here's a battle-tested playbook that real teams (including several >$10M ARR startups and Fortune-500 data teams I've worked with) use to keep costs predictable and usually 30-70% lower than the "default" configuration.

1. Start with the Right Foundations

Choose the correct edition wisely

  • Most companies are fine with Standard or Enterprise.
  • Business Critical is only worth it if you need HIPAA/PCI compliance support, customer-managed keys (Tri-Secret Secure), or failover across regions.
  • You can upgrade later without downtime. Do not pay for it on Day 1.

Use separate Snowflake accounts per environment

Prod / UAT / Dev. Yes, it is more admin work, but:

  • Dev warehouses can auto-suspend aggressively (auto-suspend = 1 min) and sit idle overnight, which typically saves ~90% on compute there.
  • No surprise prod bill because someone left a 6XL running for 8 hours of testing.

Size warehouses by workload, not by ego

Workload                          Recommended starting size    Auto-suspend   Auto-resume
ELT / ingestion                   Small or Medium              60-300 s       Yes
BI dashboards (Looker, Tableau)   Small-Medium                 60-120 s       Yes
Ad-hoc data science               Medium-Large                 300-600 s      Yes
ML feature engineering            Large-2XL (multi-cluster)    600 s          Yes

Never use X-Small except for tiny dev tasks. It burns half the credits of a Small per second, but a Small usually finishes the same query in roughly half the time, so cost per query ends up about the same and you wait far less.
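
A minimal warehouse definition that bakes these defaults in (the name and size are placeholders; pick per the table above):

CREATE WAREHOUSE IF NOT EXISTS ELT_WH
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND = 60            -- seconds of idle time before suspending
  AUTO_RESUME = TRUE           -- wake up automatically on the next query
  INITIALLY_SUSPENDED = TRUE;  -- don't burn credits before first use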

2. Compute Cost Killers (The Big Three)

Virtual warehouse strategies that actually work

  • Auto-suspend aggressively in non-prod (10-60 seconds).
  • Use multi-cluster warehouses only for BI workloads with unpredictable concurrency, and use them to scale out (more clusters for concurrency), never as a substitute for scaling up (a bigger size for slow queries).
  • Create "job" warehouses that are started/stopped via Airflow / dbt Cloud / Snowflake Tasks. Never leave them running.

Query Acceleration Service (QAS)

Turn it on selectively per warehouse. In 2024-2025 it is usually cheaper than oversizing warehouses for spiky queries.
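
Enabling it is one statement per warehouse (BI_WH is a placeholder; 8 is the default cap on how much extra serverless compute QAS may lease relative to the warehouse size):

ALTER WAREHOUSE BI_WH SET
  ENABLE_QUERY_ACCELERATION = TRUE
  QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;

You can check whether a past query would have benefited with SYSTEM$ESTIMATE_QUERY_ACCELERATION('<query_id>') before flipping the switch.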

Search Optimization Service

Add it to huge fact tables (>500 GB) with heavy point lookups or non-prefix filters on high-cardinality columns.
Payback is often under 2 months. Do not add it to everything; the maintenance cost adds up.
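
A sketch of enabling it for equality lookups only (table and column names are placeholders); the estimate function tells you the expected build and maintenance bill up front:

-- Estimate before you commit:
SELECT SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS('ANALYTICS.PUBLIC.EVENTS');

-- Then scope it to the access paths you actually use:
ALTER TABLE analytics.public.events
  ADD SEARCH OPTIMIZATION ON EQUALITY(user_id, session_id);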

Materialized Views (2024+ improvements)

Now incrementally maintained and much cheaper. Use them to stop re-computing expensive aggregations on every dashboard load; note that a materialized view can only reference a single table, so for expensive joins reach for Dynamic Tables (section 6) instead.
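
A minimal example of the pattern, assuming a wide orders table that dashboards aggregate repeatedly (names are illustrative):

CREATE MATERIALIZED VIEW daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM orders
GROUP BY order_date;

Dashboards hit daily_revenue directly; Snowflake keeps it in sync as orders changes, and you pay for incremental maintenance instead of the full re-computation.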

3. Storage Cost Optimization

Time Travel & Fail-safe = 7 days of pain

  • Set retention to 0 or 1 day for all transient and short-lived tables (staging, temp, intermediate).
  • Only prod dimensional and fact tables need 7-90 days.
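
Retention is settable at table, schema, database, or account level, so one statement can fix a whole staging schema (names are placeholders):

ALTER SCHEMA staging SET DATA_RETENTION_TIME_IN_DAYS = 0;
ALTER TABLE analytics.fact_orders SET DATA_RETENTION_TIME_IN_DAYS = 30;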

Use clustering wisely

  • Cluster only large tables (>100 GB) and only on columns used in join keys or WHERE filters.
  • Deep clustering = expensive micro-partition rewrites on every load.
  • Reclustering is handled by Snowflake's Automatic Clustering service once you define a clustering key; for pure point-lookup patterns, Search Optimization is often a better buy than clustering at all.
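
A sketch, with table and key names as placeholders; check how well-clustered the table already is before paying for Automatic Clustering:

SELECT SYSTEM$CLUSTERING_INFORMATION('fact_orders', '(order_date, region)');

ALTER TABLE fact_orders CLUSTER BY (order_date, region);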

Separate raw / transformed / served layers

  • Raw (bronze) → transient tables, no clustering, 0-day retention.
  • Transformed (silver/gold) → permanent, clustered, proper retention.
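
In DDL, the split looks roughly like this (schemas and columns are illustrative):

-- Bronze: transient, so no Fail-safe; no Time Travel, no clustering
CREATE TRANSIENT TABLE raw.events (
  payload   VARIANT,
  loaded_at TIMESTAMP_NTZ
) DATA_RETENTION_TIME_IN_DAYS = 0;

-- Gold: permanent, clustered, real retention
CREATE TABLE analytics.fact_events (
  event_date DATE,
  user_id    NUMBER,
  amount     NUMBER(12,2)
) CLUSTER BY (event_date)
  DATA_RETENTION_TIME_IN_DAYS = 30;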

4. Query Patterns That Bleed Money

Avoid SELECT * in production pipelines

Snowflake stores data by column, so every unnecessary column = more data scanned = credits burned.

Stop spilling to disk

Queries that spill >10 GB to local/remote disk are a red flag.
Fix by:

  • Increasing warehouse size temporarily, or
  • Adding better predicates/clustering, or
  • Breaking into multiple steps with temporary tables.
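
ACCOUNT_USAGE.QUERY_HISTORY records spill volumes, so finding the offenders is one query (the 10 GB threshold mirrors the red flag above):

SELECT query_id, user_name, warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND bytes_spilled_to_local_storage + bytes_spilled_to_remote_storage > 10 * POWER(1024, 3)
ORDER BY 5 DESC, 4 DESC
LIMIT 20;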

LIST @stage and RESULT_SCAN() are not free

Repeatedly listing large stages or scanning query history kills credits. Cache externally when possible.

Do not use Snowflake as a full application database

If you are running 1000 qps of tiny queries, you want something else (PostgreSQL, DynamoDB, etc.). Snowflake shines at analytical workloads, not OLTP.

5. Monitoring & Budget Controls (The Guardrails)

Must-have alerts

  • Daily credit usage > X (Resource Monitor at account level)
  • Any warehouse running >4 hours continuously
  • Query duration >30 min or scanned bytes >10 TB
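
The first two are resource-monitor territory (next heading); the query-level rule can be a native Snowflake alert. A minimal sketch, assuming an ADMIN_WH warehouse and a plain logging table (both placeholder names; alerts are created suspended, hence the RESUME):

CREATE TABLE IF NOT EXISTS ops.alert_log (fired_at TIMESTAMP_NTZ, detail STRING);

CREATE OR REPLACE ALERT ops.long_query_alert
  WAREHOUSE = ADMIN_WH
  SCHEDULE = '30 MINUTE'
  IF (EXISTS (
    SELECT 1
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 1000))
    WHERE execution_status = 'RUNNING'
      AND start_time < DATEADD('minute', -30, CURRENT_TIMESTAMP())
  ))
  THEN INSERT INTO ops.alert_log
       SELECT CURRENT_TIMESTAMP(), 'query running > 30 min';

ALTER ALERT ops.long_query_alert RESUME;  -- alerts start suspended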

Resource Monitors

Set hard limits per warehouse or per account. Snowflake will suspend warehouses when the limit is hit. Better a failed dashboard than a $200k bill.
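
A minimal account-level monitor with a soft warning and a hard stop (the quota number is a placeholder; use SUSPEND_IMMEDIATE instead of SUSPEND if you would rather kill running queries than let them finish):

CREATE RESOURCE MONITOR daily_cap
  WITH CREDIT_QUOTA = 200
  FREQUENCY = DAILY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 80 PERCENT DO NOTIFY
    ON 100 PERCENT DO SUSPEND;

ALTER ACCOUNT SET RESOURCE_MONITOR = daily_cap;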

Cost attribution

Tag everything (create each tag once, then attach it to objects):

CREATE TAG IF NOT EXISTS COST_CENTER;
CREATE TAG IF NOT EXISTS ENVIRONMENT;
CREATE TAG IF NOT EXISTS OWNER;

ALTER WAREHOUSE BI_WH SET TAG COST_CENTER = 'Marketing', ENVIRONMENT = 'Prod';
ALTER DATABASE ANALYTICS SET TAG OWNER = 'Data Platform';

Then join SNOWFLAKE.ACCOUNT_USAGE.TAG_REFERENCES to ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY to get credits per tag value.
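
A sketch of that attribution query, assuming the COST_CENTER tag from above and a 30-day window:

SELECT t.tag_value AS cost_center,
       SUM(m.credits_used) AS credits
FROM snowflake.account_usage.warehouse_metering_history m
JOIN snowflake.account_usage.tag_references t
  ON t.object_name = m.warehouse_name
 AND t.domain = 'WAREHOUSE'
 AND t.tag_name = 'COST_CENTER'
WHERE m.start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1
ORDER BY credits DESC;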

Use Snowflake's native Cost Explorer (2024+)

Or tools like Keebo, Inviso, or AccelData if you want fancy dashboards.

6. Advanced Money-Saving Tricks (2025 edition)

  • Snowpark-optimized warehouses: far more memory per node, so Python/Scala/Java UDF-heavy workloads stop spilling and often come out 2-3× cheaper than the oversized standard warehouse you would otherwise need.
  • Dynamic Tables: replace scheduled dbt runs for near-real-time pipelines (often cheaper + fresher data); see the sketch after this list.
  • Iceberg Tables (now GA): write once, read from Snowflake, Spark, and Trino without duplicating storage.
  • Buy reserved capacity credits if you can commit for 1-3 years; discounts reach up to 60% in some regions.
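
A sketch of the Dynamic Tables pattern, with the lag, warehouse, and table names as placeholders:

CREATE DYNAMIC TABLE orders_enriched
  TARGET_LAG = '15 minutes'
  WAREHOUSE = TRANSFORM_WH
  AS
    SELECT o.order_id, o.amount, c.segment
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id;

Snowflake refreshes it incrementally to stay within the target lag, which often beats a full hourly rebuild on both cost and freshness.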

TL;DR Checklist You Can Copy-Paste

[ ] Separate accounts: Prod / UAT / Dev
[ ] Auto-suspend ≤ 5 min everywhere except 24/7 warehouses
[ ] No SELECT * in ETL
[ ] Time Travel = 0 or 1 day on transient/staging tables
[ ] Clustering only on large tables with known query patterns
[ ] Resource monitors with hard limits
[ ] Tag warehouses and databases for cost allocation
[ ] Review top 10 most expensive queries weekly
[ ] Use Query Acceleration or Search Optimization instead of oversized warehouses
[ ] Move OLTP-like workloads off Snowflake

Most teams that follow this playbook cut their Snowflake bill by at least 40% in the first quarter without losing performance. The platform is expensive only when you treat it like an unlimited resource. Treat it like a Ferrari you pay for by the minute and you'll be fine.

Happy querying!