Who this is for: Developers, SREs, and product teams that run serverless workflows (Step Functions, Logic Apps, Cloud Workflows, etc.) and need predictable, per‑run cost estimates. We cover this in detail in the n8n Cost, Scaling & Infrastructure Economics Guide.
Quick Diagnosis
Problem – Serverless workflow runs often generate “mystery” bills because teams can’t forecast the cost of each execution.
Solution in a nutshell –
- List every billable piece of a workflow run.
- Pull real‑time usage metrics.
- Apply the provider’s current pricing tiers (via API).
- Compare the model to the monthly invoice.
In production this usually surfaces as a sudden spike in the invoice with no way to tell which workflow caused it.
1. Billable Components of a Workflow Execution
| Component | What It Covers | Typical Pricing Unit |
|---|---|---|
| State Transitions | Each step the engine moves through (Task, Choice, Parallel, …) | $ per 1,000 transitions |
| Compute Time | CPU‑seconds or GB‑seconds used by a task (Lambda, Cloud Run, …) | $ per GB‑second |
| Data Transfer | Bytes moved in/out between services (cross‑region or egress) | $ per GB (intra‑region often free) |
| Durable Storage | Execution history logs, payload snapshots | $ per GB‑month |
| Additional Services | API Gateway calls, managed secrets, etc. | Varies |
Provider‑specific examples (2024)
| Provider | State‑Transition Rate | Compute Rate | Data‑Transfer Rate |
|---|---|---|---|
| AWS Step Functions – Standard | $0.025 / 1 k transitions | $0.00001667 / GB‑s (Lambda) | $0.09 / GB (cross‑region) |
| Azure Logic Apps | $0.000025 / action | $0.000016 / GB‑s (Functions) | Free intra‑region |
| Google Cloud Workflows | $0.000004 / step | $0.0000025 / GB‑s (Cloud Run) | $0.12 / GB |
Note – Prices vary by region and tier (e.g., first 1 M transitions free). Pull the latest list from the provider’s pricing API rather than hard‑coding numbers.
How to Identify Which Components Apply
- Inventory task types – Lambda, Fargate, Cloud Run, etc.
- Map each task to its billing model (compute, memory, request).
- Add engine counters – transitions, payload size, storage usage.
Missing a storage charge is easy if you only look at compute metrics.
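The inventory step can be scripted by walking the state machine's Amazon States Language definition. A minimal sketch, assuming boto3 credentials are configured; `extract_tasks` is an illustrative helper, and only the common Parallel/Map nesting cases are handled:

```python
import json


def extract_tasks(definition: dict) -> dict:
    """Walk an ASL definition and return {state_name: resource} for Task states."""
    tasks = {}

    def walk(states: dict) -> None:
        for name, state in states.items():
            if state.get("Type") == "Task":
                tasks[name] = state.get("Resource", "")
            for branch in state.get("Branches", []):        # Parallel branches
                walk(branch.get("States", {}))
            if "Iterator" in state:                         # Map (legacy field)
                walk(state["Iterator"].get("States", {}))
            if "ItemProcessor" in state:                    # Map (current field)
                walk(state["ItemProcessor"].get("States", {}))

    walk(definition.get("States", {}))
    return tasks


def inventory_tasks(state_machine_arn: str) -> dict:
    """Fetch the live definition and inventory its Task states."""
    import boto3  # deferred so extract_tasks stays usable without AWS deps
    sf = boto3.client("stepfunctions")
    resp = sf.describe_state_machine(stateMachineArn=state_machine_arn)
    return extract_tasks(json.loads(resp["definition"]))
```

Mapping the returned resource ARNs to billing models (Lambda → GB‑seconds, Fargate → vCPU/memory‑seconds) is then a lookup table you maintain per provider.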
2. Capture Real‑Time Usage Metrics
| Metric | Source | Retrieval Method |
|---|---|---|
| Total Transitions | CloudWatch / Azure Monitor / Cloud Logging | Metric API (e.g., GetMetricStatistics) |
| Task Duration & Memory | Lambda / Function logs | Log parsing or CloudWatch Insights |
| Payload Size | Execution history JSON | describe‑execution (AWS) |
| Data Transfer | VPC Flow Logs / Cloud Logging | Export to Athena / BigQuery |
Sample AWS CLI call – count executions started in the last hour
```bash
aws cloudwatch get-metric-statistics \
  --namespace "AWS/States" \
  --metric-name "ExecutionsStarted" \
  --dimensions Name=StateMachineArn,Value=arn:aws:states:us-east-1:123456789012:stateMachine:MySM \
  --statistics Sum \
  --period 300 \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ)
```
Warning – Metrics may lag up to 5 minutes. For real‑time throttling prefer EventBridge rules over polling.
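For the "Task Duration & Memory" row, one lightweight option is parsing the `REPORT` lines that Lambda writes to its log group. A sketch assuming the standard REPORT line format; `parse_report` is a hypothetical helper:

```python
import re
from decimal import Decimal

# A Lambda REPORT log line looks like:
# REPORT RequestId: ... Duration: 250.00 ms Billed Duration: 250 ms Memory Size: 256 MB Max Memory Used: 80 MB
REPORT_RE = re.compile(r"Duration: (?P<ms>[\d.]+) ms.*?Memory Size: (?P<mb>\d+) MB")


def parse_report(line: str):
    """Return (duration_seconds, memory_gb) parsed from one REPORT line."""
    m = REPORT_RE.search(line)
    if m is None:
        raise ValueError("not a REPORT line")
    # Billing uses the configured Memory Size, not Max Memory Used
    return Decimal(m.group("ms")) / 1000, Decimal(m.group("mb")) / 1024
```

Feeding these pairs into the cost formula below the same way CloudWatch Insights would is then a matter of iterating over the exported log stream.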
3. Build a Reproducible Cost Model
Core Formula
```
Cost_per_execution =
      (Transitions / 1,000) × Transition_Rate
    + Σ over tasks i of (Duration_i × Memory_i × Compute_Rate_i)
    + (Payload_In + Payload_Out) × Data_Transfer_Rate
    + Storage_Overhead
    + Service_Extras
```
In practice, tweaking Lambda memory is usually a quicker cost lever than redesigning the whole workflow.
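The formula translates almost line for line into code. A sketch using `Decimal` to avoid float drift; `cost_per_execution` and its parameter names are illustrative:

```python
from decimal import Decimal


def cost_per_execution(transitions, transition_rate_per_1k, tasks,
                       payload_gb, transfer_rate,
                       storage_overhead=Decimal("0"),
                       service_extras=Decimal("0")) -> Decimal:
    """Direct translation of the core formula.

    tasks: iterable of (duration_s, memory_gb, compute_rate) per task.
    payload_gb: Payload_In + Payload_Out in GB.
    All rates are Decimals you supply from your metrics and the pricing API.
    """
    transition_cost = Decimal(transitions) / 1000 * transition_rate_per_1k
    compute_cost = sum((d * m * r for d, m, r in tasks), Decimal("0"))
    transfer_cost = payload_gb * transfer_rate
    return (transition_cost + compute_cost + transfer_cost
            + storage_overhead + service_extras)
```

Plugging in the worked-example values from the table below reproduces the ≈ $0.000346 total.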
Example Calculation (AWS Step Functions + Lambda)
| Variable | Value | Source |
|---|---|---|
| Transitions | 12 | Execution history |
| Transition_Rate | $0.025 / 1 k | AWS pricing API |
| Lambda Duration | 250 ms | CloudWatch logs |
| Lambda Memory | 256 MB | Lambda config |
| Compute_Rate (Lambda) | $0.000016667 per GB‑s | AWS pricing API |
| Payload In/Out | 45 KB | Execution JSON |
| Data‑Transfer Rate | $0.00 (same‑region free) | Provider docs |
| Storage Overhead | $0.000001 per KB | Placeholder (substitute your provider’s storage rate) |
| Service Extras | $0.00 | None |
Step‑by‑step
- Transitions – (12 / 1,000) × $0.025 = $0.00030
- Lambda compute – GB‑seconds = 0.250 s × 0.256 GB = 0.064 GB‑s; cost = 0.064 GB‑s × $0.000016667 ≈ $0.00000107
- Payload transfer – free (same region)
- Storage – 45 KB × $0.000001 = $0.000045
Total ≈ $0.000346 per execution.
Tip – Round up to the nearest $0.0001 when budgeting. Providers bill in whole‑cent increments per month, so many cheap executions can push you into the next cent bucket.
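That rounding convention can be captured with `Decimal.quantize`; a one-liner sketch:

```python
from decimal import Decimal, ROUND_UP


def budget_round(cost: Decimal) -> Decimal:
    """Round a per-execution estimate up to the nearest $0.0001 for budgeting."""
    return cost.quantize(Decimal("0.0001"), rounding=ROUND_UP)
```

Applied to the worked example, $0.000346 budgets as $0.0004 per execution.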
4. Automate the Estimation (Provider‑Agnostic Python Script)
Below is a compact utility (a starting‑point sketch rather than a drop‑in production tool) that:
- pulls the latest transition price from the AWS Pricing API,
- enumerates successful executions,
- extracts transition count and payload size, and
- writes a CSV with per‑run cost estimates.
4.1 Imports & Configuration (≈ 5 lines)
```python
#!/usr/bin/env python3
import boto3
import csv
import json
from decimal import Decimal

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:MySM"
REGION = "us-east-1"
OUTPUT_FILE = "cost_estimates.csv"
```
4.2 Fetch the Current Transition Rate (≈ 5 lines)
```python
def get_transition_rate():
    # The Pricing API is only served from us-east-1
    pricing = boto3.client('pricing', region_name='us-east-1')
    resp = pricing.get_products(
        ServiceCode='AmazonStates',
        Filters=[
            {'Type': 'TERM_MATCH', 'Field': 'regionCode', 'Value': REGION},
            {'Type': 'TERM_MATCH', 'Field': 'group', 'Value': 'StateTransition'}
        ],
        MaxResults=1
    )
    price_json = json.loads(resp['PriceList'][0])
    price = Decimal(price_json['terms']['OnDemand']
                    .popitem()[1]['priceDimensions']
                    .popitem()[1]['pricePerUnit']['USD'])
    # The API lists $ per single transition; convert to $ per 1,000
    # to match the units estimate_cost() expects
    return price * 1000
```
4.3 List All Successful Executions (≈ 4 lines)
```python
def list_executions():
    sf = boto3.client('stepfunctions', region_name=REGION)
    paginator = sf.get_paginator('list_executions')
    for page in paginator.paginate(stateMachineArn=STATE_MACHINE_ARN,
                                   statusFilter='SUCCEEDED'):
        for exe in page['executions']:
            yield exe['executionArn']
```
4.4 Pull Execution Details (≈ 5 lines)
```python
def get_execution_details(arn):
    sf = boto3.client('stepfunctions', region_name=REGION)
    details = sf.describe_execution(executionArn=arn)
    history = sf.get_execution_history(executionArn=arn, maxResults=1000)
    # Counts Task states only; Choice/Wait/Parallel entries also bill as transitions
    transitions = len([e for e in history['events'] if e['type'] == 'TaskStateEntered'])
    # Input payload only; add details.get('output', '') for the full in/out size
    payload_kb = len(details['input'].encode('utf-8')) / 1024
    return transitions, payload_kb
```
4.5 Compute the Cost for One Run (≈ 5 lines)
```python
def estimate_cost(transitions, payload_kb, trans_rate, lambda_rate):
    trans_cost = (Decimal(transitions) / 1000) * trans_rate
    # Demo assumes a single Lambda of 250 ms and 256 MB per execution
    duration = Decimal('0.250')   # seconds
    memory = Decimal('0.256')     # GB
    lambda_cost = duration * memory * lambda_rate
    storage_cost = Decimal(str(payload_kb)) * Decimal('0.000001')  # $0.000001 per KB placeholder
    return trans_cost + lambda_cost + storage_cost
```
4.6 Main Routine – Write CSV (≈ 5 lines)
```python
def main():
    trans_rate = get_transition_rate()
    lambda_rate = Decimal('0.000016667')  # $ per GB-s (Lambda on-demand)
    with open(OUTPUT_FILE, 'w', newline='') as f:
        w = csv.writer(f)
        w.writerow(['ExecutionArn', 'Transitions', 'PayloadKB', 'EstimatedCostUSD'])
        for arn in list_executions():
            t, p = get_execution_details(arn)
            cost = estimate_cost(t, p, trans_rate, lambda_rate)
            w.writerow([arn, t, f"{p:.2f}", f"{cost:.6f}"])
    print(f"✅ Estimates written to {OUTPUT_FILE}")


if __name__ == "__main__":
    main()
```
Run the script
```bash
pip install boto3
python estimate_cost.py
```
Reminder – Replace the hard‑coded Lambda duration and memory with the actual values you retrieve from CloudWatch for production‑grade accuracy.
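That reminder can be implemented against the CloudWatch metrics API. A sketch; the function name, the one-hour window, and the `average_seconds` helper are assumptions:

```python
import datetime
from decimal import Decimal


def average_seconds(datapoints) -> Decimal:
    """Average CloudWatch 'Duration' datapoints (reported in ms) into seconds."""
    if not datapoints:
        return Decimal("0")
    avg_ms = sum(p["Average"] for p in datapoints) / len(datapoints)
    return Decimal(str(avg_ms)) / 1000


def measured_lambda_duration(function_name: str, hours: int = 1) -> Decimal:
    """Fetch the average Duration for one Lambda over the last `hours` hours."""
    import boto3  # deferred so average_seconds stays importable without AWS deps
    cw = boto3.client("cloudwatch")
    end = datetime.datetime.now(datetime.timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Duration",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=end - datetime.timedelta(hours=hours),
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    return average_seconds(resp["Datapoints"])
```

The returned value can replace the hard-coded `Decimal('0.250')` in `estimate_cost`; memory size comes from the function's configuration (`get_function_configuration`).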
5. Validate Estimates Against Your Monthly Bill
| Validation Step | Action | Tool |
|---|---|---|
| Aggregate Estimated Cost | Sum the CSV column (`awk -F',' 'NR>1 {sum+=$4} END {print sum}' cost_estimates.csv`) | Bash / PowerShell |
| Pull Official Invoice | Export the “Bills” CSV from the provider console | AWS Billing, Azure Cost Management, GCP Billing Export |
| Reconcile | Compare the two totals; investigate any > 10 % variance | Excel pivot, pandas diff |
| Root‑Cause Analysis | Look for missing items (e.g., API Gateway, cross‑region traffic) | Execution logs, service‑usage reports |
Insight – Free‑tier quotas (e.g., first 1 M transitions) must be subtracted before applying rates; otherwise you’ll over‑estimate.
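The aggregate-and-reconcile steps can be scripted with the standard library. A sketch assuming the CSV layout written by the estimator script and a flat invoice total; the 10 % threshold matches the table:

```python
import csv


def total_estimated(path: str) -> float:
    """Sum the EstimatedCostUSD column produced by the estimator script."""
    with open(path, newline="") as f:
        return sum(float(row["EstimatedCostUSD"]) for row in csv.DictReader(f))


def variance_ok(estimated: float, invoiced: float, threshold: float = 0.10) -> bool:
    """True when estimate and invoice agree within `threshold` (10% by default)."""
    if invoiced == 0:
        return estimated == 0
    return abs(estimated - invoiced) / invoiced <= threshold
```

Anything failing `variance_ok` goes to the root-cause step: look for components the model misses, such as API Gateway calls or cross-region traffic.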
6. Advanced Cost‑Optimization Checklist
- ☐ Disable long‑term execution history storage unless debugging.
- ☐ Merge tiny Lambda tasks to cut down transition count.
- ☐ Switch high‑frequency flows to the Express tier if latency permits.
- ☐ Compress payloads (gzip) before cross‑region calls.
- ☐ Schedule batch jobs during off‑peak discount windows (if offered).
- ☐ Set CloudWatch alarms on sudden spikes in transition count.
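The last checklist item can be automated with `put_metric_alarm`. A sketch where the alarm name, threshold, and SNS topic are assumptions; the kwargs live in a pure helper so they can be inspected before anything is created:

```python
def spike_alarm_params(state_machine_arn: str, sns_topic_arn: str,
                       threshold: float = 1000.0) -> dict:
    """Build the put_metric_alarm kwargs; pure so they can be reviewed/tested."""
    return {
        "AlarmName": "stepfn-execution-spike",       # assumed naming convention
        "Namespace": "AWS/States",
        "MetricName": "ExecutionsStarted",
        "Dimensions": [{"Name": "StateMachineArn", "Value": state_machine_arn}],
        "Statistic": "Sum",
        "Period": 300,                               # 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,                      # executions per window
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }


def create_spike_alarm(state_machine_arn: str, sns_topic_arn: str,
                       threshold: float = 1000.0) -> None:
    import boto3  # deferred; only needed when actually creating the alarm
    boto3.client("cloudwatch").put_metric_alarm(
        **spike_alarm_params(state_machine_arn, sns_topic_arn, threshold))
```

Tune the threshold to a multiple of your normal per-window execution volume so routine traffic never alerts.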
Bottom Line
By cataloguing every billable element, harvesting precise runtime metrics, applying up‑to‑date pricing via the provider API, and continuously reconciling against the actual invoice, you can predict the cost of a single workflow execution with sub‑cent accuracy. This keeps your serverless spend under control.