Enterprise AI buying, evaluated honestly

The gap between AI operations and finance is a unit economics gap. Treat it like alignment and it never closes.

Operations writes the memo. Finance is handed a $300K bill that mixes seats, capacity, and compute. Nothing on that bill is a workflow, so no one can answer the only question finance ever asks: what did each automation cost, and what did it return. The fix is not a steering committee. It is a bill whose line items are workflows.

Direct answer (verified 2026-05-06)

Enterprise AI stalls between ops and finance because the bill is denominated in seats and the value is denominated in workflows. Finance has no per-workflow dollar line item to compare against measured throughput, so the ROI claim from operations cannot be audited. The structural fix is metered runtime billing: write execution_duration_seconds per execution, multiply by a flat rate per minute, group by (workflow_id, month), and serve the result as the bill.

Mediar implements that pipeline in one route handler at apps/web/src/app/api/billing/usage/route.ts. Public list rate is $0.75 per minute of executor runtime; full pricing at www.mediar.ai/pricing.

What "the gap" actually is, when you trace it down

Every conference panel on enterprise AI calls this an alignment problem. Ops and finance speak different languages, runs the script. The CIO needs to translate. The CFO needs AI literacy. Find me a CFO without AI literacy.

What is actually happening is that the bill operations is asked to defend has no entity in it that finance can audit. A typical UiPath enterprise quote bills six developer seats, eighteen attended robot licenses, four unattended robot slots, an Orchestrator instance, document AI on a transaction meter, and AI Center compute on an hourly meter. The memo from ops claims the deployment will save $1.4M a year on claims intake. Finance asks the obvious next question: which line on this $267K invoice corresponds to claims intake? The answer is none of them. Claims intake runs across some of the eighteen attended robots, on a slice of the Orchestrator, with a fraction of the document AI volume, charged to a partial allocation of AI Center compute. Operations rebuilds the per-workflow cost from queue exports in a spreadsheet, marks it up with the implementation labor cost, and hands it back. Finance has to either trust that spreadsheet or treat the whole contract as overhead and skip ROI tracking entirely.

Most do the second one. That is the gap. It is not attitudinal; it is structural. The bill literally cannot answer the question.

What the two bills actually look like

On the left, an RPA quote with a realistic shape. On the right, the JSON the Mediar billing route handler emits today. Same period, same scope of automation, completely different auditability. The difference between the two is the entire argument of this page.

The bill is the gap

INVOICE: UiPath Enterprise Cloud
Bill to: Acme Insurance, Center of Excellence
Period:  Q1 2026

Item                                        Qty   Unit       Rate         Amount
UiPath Studio Pro (developer seat)            6    annual     $4,200       $25,200
UiPath Attended Robot                        18    annual     $5,040       $90,720
UiPath Unattended Robot                       4    annual    $13,200       $52,800
UiPath Orchestrator (Standard)                1    annual    $48,000       $48,000
Document Understanding (transactions)    180,000   txn        $0.04        $7,200
AI Center (compute hours)                   600    hour      $12.50        $7,500
Premium Support                               1    annual    $36,000       $36,000

Subtotal                                                                  $267,420
Implementation services (separate SOW)                                    $185,000
Total Q1                                                                  $452,420

# Finance question: which of our 47 production workflows produced
# what dollar value, and which ones cost more than they returned?
# Answer in this bill: not derivable. Seats and hours are not workflows.

Why metered runtime is the only structure that works

Cloud infrastructure crossed this exact line in 2010 when AWS started publishing per-service, per-account usage at hourly granularity. Before that, finance treated server capacity as a fixed asset and engineering treated it as free; after, finance had a meter that mapped to applications and engineering had a constraint that mapped to features. Cost-per-feature became real. The accounting treatment for cloud as cost-of-goods-sold (COGS) versus operating expense settled within five years.

Enterprise AI is at the same fork. An automation platform that bills on seats teaches finance to treat AI as overhead. A platform that bills on metered runtime per workflow teaches finance to treat AI as COGS, the way they already treat the AWS line. The unit (a minute of executor runtime) is small enough to absorb workflow variance; the dimension (workflow_id) is what business cases are written against. The intersection is the per-workflow cost.

That is the only number that closes the gap, because it is the only number that lives in both languages. Ops can multiply it by frequency to get monthly cost. Finance can divide it into the per-execution time savings to get cost-of-goods. Both teams are now indexing into the same cell.
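To make "the same cell" concrete, here is a hedged sketch of the two multiplications. The workflow numbers and helper names are invented for illustration; only the $0.75-per-minute list rate comes from this page.

```typescript
// Hypothetical unit-economics helpers. Only the $0.75/min list rate is
// from this page; the workflow figures below are illustrative.
const RATE_PER_MINUTE = 0.75;

// Ops view: multiply per-run runtime by frequency to get monthly cost.
function monthlyCost(avgMinutesPerRun: number, runsPerMonth: number): number {
  return avgMinutesPerRun * runsPerMonth * RATE_PER_MINUTE;
}

// Finance view: divide per-run runtime cost by the units of business
// value each run produces (e.g. claims) to get cost of goods per unit.
function costPerUnit(avgMinutesPerRun: number, unitsPerRun: number): number {
  return (avgMinutesPerRun * RATE_PER_MINUTE) / unitsPerRun;
}

// A claims-intake workflow averaging 3 minutes per run, 2,000 runs a
// month, one claim per run:
const monthly = monthlyCost(3, 2000); // 3 * 2000 * 0.75 = $4,500/month
const perClaim = costPerUnit(3, 1);   // 3 * 0.75 = $2.25 per claim
```

Both teams are reading the same per-workflow runtime figure; only the multiplier differs.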

167 lines

The Mediar billing route is 167 lines. It joins workflow_executions to deployed_workflows, groups by (workflow_id, year-month), multiplies execution_duration_seconds by a per-minute constant, and returns the bill as JSON. The same JSON renders to the customer's invoice and to their internal cost-per-process dashboard. There is no allocation step.

apps/web/src/app/api/billing/usage/route.ts
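The route file itself is not reproduced here, but the aggregation it performs can be sketched in a few lines. Everything below is illustrative: the row shape and function name are assumptions; only the group-by key, the seconds-to-minutes conversion, and the $0.75/min list rate come from this page.

```typescript
// Sketch of the billing aggregation described above. The real route
// reads these columns from Postgres; here they are plain objects.
const RATE_PER_MINUTE = 0.75;

interface ExecutionRow {
  workflowId: string;
  startedAt: Date;
  executionDurationSeconds: number;
}

// Group executions by (workflow_id, year-month) and price each bucket.
// There is deliberately no allocation step: cost = minutes * rate.
function billLines(rows: ExecutionRow[]): Map<string, number> {
  const lines = new Map<string, number>();
  for (const r of rows) {
    const month = r.startedAt.toISOString().slice(0, 7); // "YYYY-MM"
    const key = `${r.workflowId}|${month}`;
    const cost = (r.executionDurationSeconds / 60) * RATE_PER_MINUTE;
    lines.set(key, (lines.get(key) ?? 0) + cost);
  }
  return lines;
}
```

Two March executions of a `claims_intake` workflow at 120 and 240 seconds roll up to one line: 6 minutes at $0.75, or $4.50.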

The CFO test, and what it means in practice

Hand any enterprise AI vendor a single test before signing. Ask them to produce, today, a bill scoped to one named workflow you already run, for the past month, with the executions count, the total duration, the unit cost, and the rolled-up total. If the answer requires a spreadsheet, an export, or a quarterly review with the customer success manager, the bill format does not close the gap. If the answer is a URL the finance team can bookmark and refresh, it does.

The Mediar answer to that test is the billing page at app.mediar.ai, which renders the same JSON shape, pulled from Postgres on every request. The UiPath answer is a queue report from Insights plus a spreadsheet that allocates seat-license cost across workflows, owned by the Center of Excellence and refreshed quarterly. Both are bills. Only one is the right shape.

None of this means UiPath is wrong for everyone. Estates with always-saturated unattended robots running 24x7 have a crossover point where flat-license amortization beats the meter. The honest call is to model both shapes against your actual workflow profile and let the number decide. The point of this page is that finance cannot make that call against an opaque bill, which is why the AI buying conversation stalls when it is run on the wrong meter.

Bring one production workflow on the call. We will show what its bill looks like.

A 30-minute walk through the billing route and the per-workflow line item, against your real workflow profile. No slides; the actual JSON, the actual rate, the actual numbers your finance team would see on day one.

Frequently asked questions

What is the actual gap between AI operations and finance in an enterprise AI buy?

It is a unit-economics gap, not a vocabulary gap. Operations writes a memo about an AI agent that will run a workflow 500 times a week and save 8 hours per run. Finance is asked to approve a $300K to $700K total contract value (TCV) that mixes Studio seats, Robot licenses, Orchestrator, document AI transactions, AI Center compute, and a separate implementation SOW. Nothing in that bill is the workflow finance is being asked to approve. There is no line that says 'claims_intake produced $187K in March at a runtime cost of $4,463.' Finance has nowhere to put the AI-ROI claim against the AI-cost reality, so the deal stalls or the deal happens and the post-mortem is unwinnable. The fix is a bill whose line items are workflows, with a unit (a minute of runtime) finance can cross-multiply against the unit of business value (a claim, a renewal, a reconciliation).

Why doesn't existing RPA pricing close the gap? Doesn't UiPath have execution logs?

Two reasons. First, UiPath bills on capacity not consumption. An Unattended Robot license is roughly $13K per year per concurrent slot regardless of whether the robot ran for 8 hours or 8 minutes that month. Finance gets a fixed annual fee that does not move when throughput moves, which means the unit cost of any one workflow is undefined; it changes every time another workflow is deployed onto the same robot. Second, the per-execution telemetry that does exist (Orchestrator's queue items table, Insights dashboards) is in a separate place from the bill. To answer 'what did claims_intake cost in March,' someone in the RPA Center of Excellence has to export a queue report, allocate a fraction of the Robot license to that workflow based on duration, allocate a fraction of Orchestrator, allocate a slice of AI Center, and rebuild a per-workflow cost in a spreadsheet. That spreadsheet is the workaround for the gap. It does not actually close it because finance cannot audit it; only the RPA team can.

What does the Mediar billing pipeline actually look like in code?

It is one Next.js route handler at apps/web/src/app/api/billing/usage/route.ts. The runtime (the open-source Terminator executor) writes execution_duration_seconds to the workflow_executions table on every run, alongside workflow_id, status, and started_at. The route handler queries that table for the last 365 days, groups by (workflow_id, year-month), multiplies durationMinutes by a single constant RATE_PER_MINUTE, and emits JSON with the shape { ratePerMinute, months[].workflows[].{name, executions, totalMinutes, cost} }. The UI at /billing pulls that JSON and renders it as the customer's bill. The same shape feeds the customer's internal cost-per-process dashboard. There is no allocation step, no per-license slicing, and no spreadsheet. The bill is the runtime telemetry, multiplied by a rate.
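That documented shape can be written down as TypeScript types, with a toy builder to show the grouping. The leaf fields (name, executions, totalMinutes, cost) and ratePerMinute match the shape quoted above; the month field name and the builder itself are assumptions for illustration, not the actual route code.

```typescript
// Types mirroring the documented bill shape. The "month" field name is
// an assumption; the leaf fields follow the shape quoted in the answer.
interface WorkflowLine { name: string; executions: number; totalMinutes: number; cost: number; }
interface MonthBucket { month: string; workflows: WorkflowLine[]; }
interface UsageBill { ratePerMinute: number; months: MonthBucket[]; }

// Toy builder: fold raw (name, month, minutes) records into the bill.
function buildBill(
  ratePerMinute: number,
  raw: { name: string; month: string; minutes: number }[]
): UsageBill {
  const byMonth = new Map<string, Map<string, { executions: number; totalMinutes: number }>>();
  for (const r of raw) {
    const wf = byMonth.get(r.month) ?? new Map<string, { executions: number; totalMinutes: number }>();
    const agg = wf.get(r.name) ?? { executions: 0, totalMinutes: 0 };
    agg.executions += 1;
    agg.totalMinutes += r.minutes;
    wf.set(r.name, agg);
    byMonth.set(r.month, wf);
  }
  return {
    ratePerMinute,
    months: Array.from(byMonth.entries()).map(([month, wf]) => ({
      month,
      workflows: Array.from(wf.entries()).map(([name, a]) => ({
        name,
        executions: a.executions,
        totalMinutes: a.totalMinutes,
        cost: a.totalMinutes * ratePerMinute, // no allocation step
      })),
    })),
  };
}
```

The point of the shape is that cost is derived from totalMinutes and the rate in the same object, so finance can recompute any line item by hand.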

What rate does Mediar bill at, and is it negotiable?

The public list rate is $0.75 per minute of executor runtime, plus a $10K turn-key program fee that converts to credits on the same meter. In the source code, the rate is a single constant at the top of the billing route; per-customer contracts substitute different constants without changing any of the data shape. A workflow that runs 10 minutes a day for a year costs roughly $1,575 in runtime at the list rate; one that runs 8 hours a day costs roughly $145K at the list rate, which is the territory where finance starts asking whether the workflow should be re-architected with an API instead of a desktop agent. That conversation is healthy; it is what unit-economics visibility unlocks.
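As a sanity check on those figures, the back-of-envelope arithmetic can be written out with the run-day count as an explicit assumption (the $1,575 figure above implies roughly 210 run days a year at the list rate; the exact total depends on how many days the workflow actually runs).

```typescript
// Back-of-envelope annual runtime cost. runDaysPerYear is an explicit
// assumption, not something the meter dictates; the function name is
// illustrative.
function annualRuntimeCost(
  minutesPerDay: number,
  runDaysPerYear: number,
  ratePerMinute = 0.75
): number {
  return minutesPerDay * runDaysPerYear * ratePerMinute;
}

annualRuntimeCost(10, 210); // 10 min/day over ~210 run days = $1,575
```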

How does this compare to what a CFO sees from a typical enterprise AI vendor?

Most enterprise AI contracts (RPA, agent platforms, copilots) bill on seats, capacity, or token volume. None of those line items maps cleanly to a workflow or a process. A CFO presented with such a bill has two options: trust the operations team's per-workflow ROI math (which is built bottom-up in a spreadsheet from non-billed telemetry), or treat the entire contract as overhead and exclude it from process P&L entirely. Both paths are why enterprise AI ROI conversations feel disingenuous. A meter that emits a per-workflow line item gives the CFO a third option: treat each automation as a unit of cost-of-goods-sold, exactly the way infrastructure cloud costs have been treated since AWS published per-service usage in 2010. The accounting treatment is not new; the data shape is.

Can the finance team see the unit cost without going through operations?

Yes, and it is the point. The /billing UI authenticates against Clerk, scopes to the org via the row-level security policy on workflow_executions and deployed_workflows, and serves the JSON straight from Postgres. A finance user with org access reads the same page the RPA Center of Excellence reads. The per-workflow cost line, the per-month total, and the running 365-day history are all there. Finance does not need a quarterly export from operations; they pull the page. That is the structural shift that closes the gap. It also means the operations team stops being the translation layer between the bill and the business case, which is most of the reason these conversations were stuck in the first place.

What about the parts of the bill that aren't runtime, like the $10K program fee?

The program fee is a one-time onboarding cost; it converts to runtime credits with a small bonus, so on the meter it appears as a prepaid balance that draws down workflow by workflow. From a finance perspective it is amortized across the workflows that run against the credit, not a separate capital line. Implementation services beyond the program (custom integrations the open-source Terminator SDK can't cover, or compliance-specific deployments) are billed as time-and-materials with a separate line and a separate SOW. The runtime meter never mixes with services billing; that separation is what keeps the per-workflow unit cost auditable.
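A minimal sketch of that drawdown, under stated assumptions: the $10K fee comes from this page, while the bonus percentage and all names below are illustrative.

```typescript
// Sketch of the prepaid-credit drawdown described above. The $10K fee
// is from the page; the 10% bonus figure and the function name are
// assumptions for illustration.
function creditBalance(
  programFee: number,
  bonusRate: number,
  monthlyRuntimeDollars: number[]
): number {
  const opening = programFee * (1 + bonusRate); // fee converts to credits
  // Runtime spend draws down the balance month by month, on the same
  // meter as every other workflow; services billing never touches it.
  return monthlyRuntimeDollars.reduce((bal, spend) => bal - spend, opening);
}

// $10K fee, assumed 10% bonus, three months of runtime spend:
creditBalance(10000, 0.10, [1200, 950, 1400]); // ≈ $7,450 of credit left
```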

Where does this fall apart, honestly?

Three places. First, workflows with extreme variance in duration (a 10-second happy path versus a 4-minute exception path) make per-workflow averages noisy; finance gets a stable monthly total but per-execution unit cost swings, and the operations team has to break the workflow into named variants to get clean attribution. Second, workflows that share state across runs (queues that drain in batches, end-of-day reconciliations) bill as the long single execution they actually are, which can spike a monthly total even though the per-claim cost is unchanged; the right fix is to refactor the workflow, not the meter. Third, very high concurrency at peak (1,000+ simultaneous executions) compresses unit cost less than seat-based RPA does at the same scale; the crossover where UiPath's flat-license amortization beats Mediar's meter is around the territory of always-on, always-saturated robots in 24x7 operation. For long-tail and burst workloads, which is most enterprise RPA, the meter wins.