Why customer_id alone isn't enough
A customer generating $180/month in model costs might be running two workflows: a cheap support bot ($5/month) and an expensive research chain ($175/month). The per-customer margin report collapses both into one number: $180 total. You can't fix what you can't see.
To know which workflow is burning margin, you need per-task cost visibility.
Name your tasks descriptively
from apeiros import ApeirosAgent
# Support workflow — fast, cheap
support_agent = ApeirosAgent(customer_id="acme-corp", model="claude-3-haiku")
support_agent.start_task("answer-support-question")
support_agent.update_tokens(1_200)
support_agent.end_task()
print(f"Support task: ${support_agent.cost_estimate:.4f}")
# Support task: $0.0005
# Research workflow — slow, expensive
research_agent = ApeirosAgent(customer_id="acme-corp", model="claude-3-5-sonnet")
research_agent.start_task("research-and-draft-report")
research_agent.update_tokens(42_000)
research_agent.end_task()
print(f"Research task: ${research_agent.cost_estimate:.4f}")
# Research task: $0.3360
The task name is stored in every registry record. Use it to build a cost breakdown.
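Concretely, that means each record carries the task name alongside its cost. The exact field set depends on your Apeiros version, but the aggregation below relies on only two keys, so a record can be pictured roughly like this (a hypothetical shape — inspect a real record from `ApeirosAgent._registry` to confirm):

```python
# Hypothetical shape of one registry record; only "task" and
# "cost_estimate" are relied on by the aggregation code below.
record = {
    "task": "answer-support-question",
    "cost_estimate": 0.0005,
}

# Defensive reads, mirroring how the aggregation treats records:
print(record.get("task", "unnamed"), record.get("cost_estimate", 0.0))
```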
Aggregate by task name
ApeirosAgent._registry is a dict mapping customer_id to a list of task records. Read it directly to group costs by workflow (note the leading underscore: it's an internal attribute, so pin your Apeiros version if you depend on it):
from apeiros import ApeirosAgent
from collections import defaultdict
# Aggregate cost and call count by task name
workflow_costs: dict[str, dict] = defaultdict(lambda: {"calls": 0, "cost": 0.0})
for customer_id, tasks in ApeirosAgent._registry.items():
    for task in tasks:
        name = task.get("task", "unnamed")
        workflow_costs[name]["calls"] += 1
        workflow_costs[name]["cost"] += task.get("cost_estimate", 0.0)
# Print breakdown
print(f"{'Workflow':<35} {'Calls':>6} {'Total Cost':>12} {'Avg/Call':>10}")
print("─" * 70)
for name, stats in sorted(workflow_costs.items(), key=lambda x: -x[1]["cost"]):
    avg = stats["cost"] / stats["calls"] if stats["calls"] else 0.0
    print(f"{name:<35} {stats['calls']:>6} ${stats['cost']:>10.4f} ${avg:>8.4f}")
Output:
Workflow                             Calls   Total Cost   Avg/Call
──────────────────────────────────────────────────────────────────────
research-and-draft-report               12 $    4.0320 $  0.3360
answer-support-question                847 $    0.4235 $  0.0005
The research workflow costs 672× more per call than support. That's where to focus.
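It can also help to express each workflow as a share of total model spend, which makes the imbalance obvious even without per-call math. A self-contained sketch using the example totals from the breakdown above:

```python
# Example workflow totals from the breakdown above.
workflow_costs = {
    "research-and-draft-report": 4.0320,
    "answer-support-question": 0.4235,
}

total = sum(workflow_costs.values())
for name, cost in sorted(workflow_costs.items(), key=lambda kv: -kv[1]):
    print(f"{name:<30} {cost / total:6.1%} of spend")
```

With these numbers, research accounts for roughly 90% of spend despite being a tiny fraction of call volume.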
Use a workflow/step naming convention
For multi-step pipelines, prefix task names with the workflow:
# Step-level granularity
agent.start_task("research/web-search")
agent.start_task("research/synthesis")
agent.start_task("research/draft")
agent.start_task("support/classify")
agent.start_task("support/respond")
This lets you group by prefix to compare workflows, or drill into individual steps:
# Costs by top-level workflow
research_total = sum(
    t["cost_estimate"]
    for tasks in ApeirosAgent._registry.values()
    for t in tasks
    if t.get("task", "").startswith("research/")
)
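The same idea generalizes to all workflows at once: split each task name on the first "/" and sum by prefix. A self-contained sketch over a hypothetical registry snapshot with the same shape as ApeirosAgent._registry (the records here are illustrative):

```python
from collections import defaultdict

# Hypothetical snapshot mirroring ApeirosAgent._registry's shape:
# customer_id -> list of task records.
registry = {
    "acme-corp": [
        {"task": "research/web-search", "cost_estimate": 0.0720},
        {"task": "research/synthesis", "cost_estimate": 0.2400},
        {"task": "support/classify", "cost_estimate": 0.0002},
        {"task": "support/respond", "cost_estimate": 0.0003},
    ],
}

totals: dict[str, float] = defaultdict(float)
for tasks in registry.values():
    for t in tasks:
        prefix = t.get("task", "").split("/", 1)[0] or "unnamed"
        totals[prefix] += t.get("cost_estimate", 0.0)

for workflow, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{workflow:<10} ${cost:.4f}")
```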
Find the expensive step
Once you know which workflow costs the most, find the specific step:
# Which step in research/ is most expensive?
step_costs = defaultdict(float)
for tasks in ApeirosAgent._registry.values():
    for t in tasks:
        name = t.get("task", "")
        if name.startswith("research/"):
            step_costs[name] += t.get("cost_estimate", 0.0)

for step, cost in sorted(step_costs.items(), key=lambda x: -x[1]):
    print(f"  {step:<30} ${cost:.4f}")
  research/synthesis             $2.8800
  research/web-search            $0.8640
  research/draft                 $0.2880
research/synthesis is the bottleneck. Common fixes: reduce context passed in, cache intermediate results, or switch that step to a cheaper model.
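One of those fixes, moving synthesis to a cheaper model, can be sized up with back-of-the-envelope math before you change any code. A sketch using the blended rate implied by the example costs above ($0.3360 for 42,000 tokens ≈ $8/M); the cheaper rate is a hypothetical placeholder, so check your provider's current pricing:

```python
# Blended $ per million tokens. SONNET_RATE is implied by the example
# costs above ($0.3360 / 42,000 tokens); CHEAP_RATE is hypothetical.
SONNET_RATE = 8.00
CHEAP_RATE = 0.42

synthesis_tokens = 30_000  # per run, implied by $0.24/run at $8/M
runs_per_month = 12        # call count from the breakdown above

current = synthesis_tokens * runs_per_month * SONNET_RATE / 1_000_000
cheaper = synthesis_tokens * runs_per_month * CHEAP_RATE / 1_000_000
print(f"synthesis today:   ${current:.4f}/month")
print(f"on cheaper model:  ${cheaper:.4f}/month")
print(f"estimated savings: ${current - cheaper:.4f}/month")
```

The estimate only tells you the ceiling; whether the cheaper model holds output quality on synthesis is something to verify with an eval before switching.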
Profile a single run with debug mode
Before deploying changes, profile one execution to see per-step cost in real time:
import apeiros
apeiros.instrument()
apeiros.start_session(budget=5.00, debug=True)
# Run your workflow...
apeiros.end_session()
With debug=True, Apeiros prints cost and warnings after every model call:
[apeiros] step=1 tokens=12400+3800 cost=$0.1296 budget_used=2.6%
[apeiros] step=2 tokens=28000+9200 cost=$0.2976 budget_used=8.5% ⚠ context_bloat
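If you capture a debug trace, the lines are regular enough to post-process. A sketch that parses the two example lines above into per-step totals (it assumes this exact line format; adjust the regex if your Apeiros version prints differently):

```python
import re

# The two example debug lines from above, captured as text.
trace = """\
[apeiros] step=1 tokens=12400+3800 cost=$0.1296 budget_used=2.6%
[apeiros] step=2 tokens=28000+9200 cost=$0.2976 budget_used=8.5% ⚠ context_bloat
"""

pattern = re.compile(r"step=(\d+) tokens=(\d+)\+(\d+) cost=\$([\d.]+)")
total_cost = 0.0
for m in pattern.finditer(trace):
    step, tok_in, tok_out, cost = m.groups()
    total_cost += float(cost)
    print(f"step {step}: {int(tok_in) + int(tok_out):>6} tokens, ${cost}")
print(f"trace total: ${total_cost:.4f}")
```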