A 2026 budget read
Skyvern pricing 2026: forecasting spend, and the boundary the credit unit cannot cross.
Skyvern shipped a credits-based pricing model on 30 January 2026, retiring the flat rate of five cents per step. The four published tiers (Free, Hobby at $29, Pro at $149, and Enterprise at custom pricing) are now on skyvern.com/pricing. Most coverage stops at copying the tier list. This piece is for someone with a 2026 budget number to defend, and it walks through the three things the published tiers do not tell you on their own: how to translate the action averages into a real forecast, where the Pro tier silently stops being the right number, and the architectural boundary that decides whether the credit unit is even measuring the work you actually need to automate.
The 2026 published tier ladder
Free, Hobby, Pro, Enterprise — with the action equivalents the launch post published as guidance.
The action numbers below are the working approximations the 30 January 2026 launch post published. They are not contractual limits; they are the cleanest way to read what each tier is actually for.
Free
$0/mo
~1,000 credits · ~170 actions
CAPTCHA solving, basic support, 1 concurrent session.
Hobby
$29/mo
~30,000 credits · ~1,200 actions
Priority support, faster execution, webhook integrations.
Pro
$149/mo
~150,000 credits · ~6,200 actions
Team workspaces, 2FA credential management, residential proxies, advanced CAPTCHA, 25 concurrent sessions.
Enterprise
Custom
Unlimited · No published cap
Self-hosted deployment, HIPAA, SOC 2 Type II, SSO, dedicated account manager, SLA. Unlimited concurrency.
Q1. Why don't the published action numbers translate into a 2026 forecast?
The numbers Skyvern publishes (170 / 1,200 / 6,200 actions) are tier averages. The launch post itself names the variables that bend the average per workflow: runtime, page complexity, retries, and anti-bot measures (CAPTCHA, proxies, geo-targeting). A click on a static page that hits no anti-bot fabric and needs no retries burns well below the average. A multi-step authenticated flow on a hardened portal that triggers a CAPTCHA, a proxy switch, and a vision retry burns well above. The bundling is the design intent of the credit unit; it is also why a single number can carry a 5x to 10x variance per portal.
That variance is fine for a steady, predictable workload that fills the bundled allowance month after month. It is fatal for a forecast where the buyer wants to defend a 2026 budget number. If half your target workflows live on hardened portals and half on well-behaved internal extranets, the average lies to you in both directions, and the only honest forecast is one built from your own portal mix.
Q2. What does an honest 2026 forecast actually look like?
Four steps, in order. First: classify each target workflow by surface. A workflow is browser-tab native if every screen the agent touches is rendered through Chromium and exposes a DOM. It is desktop-bound if any screen is a SAP GUI window, an Oracle Forms session, a mainframe terminal, an Epic Hyperspace chart, or any other native Windows control surface. The credit unit prices the first set; it does not price the second set at all.
Second: for the browser-tab subset, run a measurement on the Free tier. Pick three representative workflows: one well-behaved internal portal, one public web SaaS, one hardened external portal with CAPTCHA. Run each five times and divide credits consumed by clicks attempted. You will get three real per-action numbers from your actual portal mix, almost certainly bracketing the tier average (30,000 Hobby credits over 1,200 actions implies roughly 25 credits per action) from above and below.
Third: weight by frequency. If 60% of your monthly executions hit the well-behaved portal and 40% hit the hardened portal, the weighted credit-per-action average for your specific workload is the only number that should drive tier sizing. Add a 30 to 50 percent variance buffer for retries, anti-bot drift, and end-of-month batches. That is your 2026 monthly credit ceiling.
Fourth: divide that ceiling by the tier credit allowance to pick Hobby or Pro. If your weighted ceiling pushes past 150,000 credits per month, the public price page stops being load-bearing and the forecast belongs at the Enterprise contract, where compliance, concurrency, and credit pricing are all custom-quoted together. Skip this step at your own peril; the most common 2026 forecasting mistake is sizing Pro from the published 6,200-action average and then watching real usage land 60% higher because the workload sits on the hardened portal end of the variance distribution.
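The four steps reduce to a few lines of arithmetic. A minimal sketch, with hypothetical per-action measurements and a hypothetical workload mix standing in for the numbers your own Free-tier runs would produce; only the tier allowances come from the published page:

```rust
// Hypothetical inputs for illustration; the per-action costs must come
// from your own Free-tier measurement runs, not from this sketch.
fn main() {
    // Step 2: measured credits-per-action on two representative portals.
    let internal_portal = 14.0; // well-behaved internal portal (assumed)
    let hardened_portal = 55.0; // CAPTCHA + proxy + vision retries (assumed)

    // Step 3: weight by execution frequency (60% / 40% split).
    let weighted = 0.60 * internal_portal + 0.40 * hardened_portal;

    let monthly_actions = 5_000.0; // assumed monthly volume
    let buffer = 1.40; // 40% variance buffer, mid-range of the 30-50% band
    let ceiling = monthly_actions * weighted * buffer;

    // Step 4: compare the ceiling against the published tier allowances.
    let tier = if ceiling <= 30_000.0 {
        "Hobby"
    } else if ceiling <= 150_000.0 {
        "Pro"
    } else {
        "Enterprise (custom quote)"
    };
    println!("weighted credits/action: {weighted:.1}");
    println!("monthly credit ceiling:  {ceiling:.0}");
    println!("tier: {tier}");
}
```

In this hypothetical mix, 5,000 monthly actions sit comfortably under Pro's 6,200-action average, yet the weighted, buffered credit ceiling lands above 150,000 credits and pushes the forecast to Enterprise: exactly the sizing mistake the fourth step warns about.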
Q3. Where does Pro at $149 silently stop being the right number?
The launch post puts Pro at 25 concurrent sessions and Enterprise at unlimited. The published $149 number is the credit price; it is not a concurrency price. Two workloads commonly cross the 25-session ceiling: end-of-month batch jobs (claims pulls, reconciliation sweeps, lead enrichment cron drops) and seasonal spikes (open enrollment, tax season, retail peaks). On either workload, the binding constraint is parallelism, not credits, and the tier you actually need is Enterprise even if your monthly credit consumption fits inside Pro.
A 2026 forecast that uses the published Pro price for a spiky workload is forecasting the wrong tier. The right answer is one of three: queue the spike (extends wall-clock time, stays inside Pro), shard the workflow across multiple Pro accounts (creates operational fragmentation), or move that workload to Enterprise. The first option costs you elapsed time, the second costs you ops overhead, the third costs you a custom contract. None of them are visible from the public price page.
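The queue-the-spike option has a cost you can put a number on. A minimal sketch of the wall-clock trade, where the batch size and per-workflow runtime are illustrative assumptions and only the 25-session Pro ceiling comes from the launch post:

```rust
// Queueing a spike inside Pro's published 25-session ceiling.
// Batch size and per-workflow runtime are assumptions for illustration.
fn main() {
    let batch_jobs = 40u32;   // what the end-of-month batch wants in parallel
    let pro_sessions = 25u32; // published Pro concurrency ceiling
    let job_minutes = 6u32;   // assumed per-workflow runtime

    // Inside Pro, the batch runs in waves of at most 25 sessions.
    let waves = (batch_jobs + pro_sessions - 1) / pro_sessions; // ceiling division
    let queued_wall_clock = waves * job_minutes;

    // With unlimited concurrency the whole batch is a single wave.
    println!("queued inside Pro: {queued_wall_clock} min ({waves} waves)");
    println!("unlimited:         {job_minutes} min (1 wave)");
}
```

The credits consumed are identical either way; the only thing queueing buys back is the Pro price, and the only thing it spends is elapsed time.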
Q4. What does a credit actually measure, and where does it stop measuring?
Read the Skyvern README at github.com/Skyvern-AI/skyvern (AGPL-3.0). The runtime is a Playwright-compatible SDK that drives a managed Chromium instance. The selection layer is a vision LLM that reads screenshots of the rendered viewport, plus DOM context. The anti-bot fabric (CAPTCHA, proxies, geo-targeting) is proprietary and lives in the cloud product on top of the open-source core. A credit is the unit of compute spent inside that managed Chromium fleet on your behalf, including the vision call, the proxy hop, the CAPTCHA solve, and any retries. The unit is well chosen for the surface it measures.
The boundary is sharp. The moment a workflow leaves the browser tab, the credit unit stops describing the work. A SAP GUI window is not a tab. An Oracle Forms session is not a tab. A Jack Henry green-screen terminal is not a tab. An Epic Hyperspace patient chart inside a Citrix shell is not a tab. An Excel sheet that the user edits in place is not a tab. None of those surfaces render through Chromium, none of them expose a DOM the way a modern web app does, and none of them have a screenshot pipeline the managed-Chromium runtime can read with the same vision model. There is no surface for the credit to measure.
That boundary is the single most important fact for a 2026 forecast. If half your target workflows cross it, no Skyvern tier can price half of your 2026 spend, regardless of how generous the Enterprise contract is. The 2026 buying decision in that case isn't which tier; it is which unit you should be buying in at all.
Q5. What is the natural unit on the desktop side of that boundary?
Wall-clock time. A desktop agent runs on the user's own Windows session, drives the operating system through the UI Automation accessibility tree (the same interface screen readers use to describe a Windows application to a blind user), and burns minutes the user can see in their own Task Manager. There is no managed VM to amortize, no vision LLM round-trip per click, and no proxy fabric to pay for. The cost is the wall-clock time the agent is active on the desktop, plus the cost of any optional cloud calls the workflow chose to make. The natural unit is one minute of desktop runtime.
The reason wall-clock minutes can be priced flat (without a per-portal multiplier) is that variance is collapsed at the recorder layer, not at the billing layer. The Mediar desktop agent's recorder names exactly which events count as a meaningful step. Mouse moves and bare alphanumeric keystrokes are filtered. That filtering is what keeps the second-by-second meter honest.
// apps/desktop/src-tauri/src/workflow_recorder.rs

/// Check if an event is "meaningful" for step-by-step recording
/// Meaningful events are user actions that should be reviewed:
/// - Click (single/double)
/// - TextInputCompleted (aggregated keystrokes)
/// - Standalone Enter, Delete, Escape, Tab keys
/// - Keyboard shortcuts (Ctrl/Alt/Shift/Win + key)
/// - Application switch (Alt+Tab)
/// - Clipboard operations
fn is_meaningful_event(event: &TerminatorWorkflowEvent) -> bool {
    match event {
        TerminatorWorkflowEvent::Click(_) => true,
        TerminatorWorkflowEvent::BrowserClick(_) => true,
        TerminatorWorkflowEvent::TextInputCompleted(_) => true,
        TerminatorWorkflowEvent::Hotkey(_) => true,
        TerminatorWorkflowEvent::Clipboard(_) => true,
        TerminatorWorkflowEvent::ApplicationSwitch(_) => true,
        TerminatorWorkflowEvent::BrowserTabNavigation(_) => true,
        TerminatorWorkflowEvent::Keyboard(kb_event) => {
            if !kb_event.is_key_down {
                return false;
            }
            let special_keys: &[u32] = &[0x0D, 0x2E, 0x1B, 0x09]; // Enter, Del, Esc, Tab
            if special_keys.contains(&kb_event.key_code) {
                return true;
            }
            kb_event.ctrl_pressed || kb_event.alt_pressed || kb_event.win_pressed
        }
        TerminatorWorkflowEvent::Mouse(mouse_event) => matches!(
            mouse_event.event_type,
            MouseEventType::Click | MouseEventType::DoubleClick | MouseEventType::RightClick
        ),
        // Mouse moves and bare alphanumeric keystrokes fall through
        _ => false,
    }
}

From apps/desktop/src-tauri/src/workflow_recorder.rs, lines 252 to 316. Seven event variants are treated as meaningful, plus modifier-bearing or special keyboard events. Mouse moves fall through. The recorder does the work of collapsing 30 keystrokes of typing into one TextInputCompleted step before any pricing question is even asked.
“Credits represent a unit of browser execution. Different workflows consume different amounts of credits depending on runtime, page complexity, retries, and anti-bot measures (CAPTCHA, proxies, geo-targeting).”
Skyvern Day 5 launch post (30 January 2026); contrast Mediar's per-minute desktop runtime price, drawn against a $10,000 program prepay
The two units are not in conflict; they are sized for two different surfaces. A credit measures managed-Chromium execution plus the cloud fabric around it. A desktop minute measures wall-clock time the agent spends driving the OS. A mixed 2026 workload almost certainly needs both, priced separately, against two different vendors.
Q6. When is Skyvern's 2026 pricing the right buy?
When the workflow lives entirely inside a Chromium tab, the usage is steady enough to fit a tier, and the bundled CAPTCHA and proxy work the credit price subsidizes is doing real work. Vendor portal logins, payer claim status checks, lead enrichment from public web sources, document downloads from a hardened extranet, the long tail of B2B SaaS form fills. On those workloads the 2026 credit price is genuinely cheaper than per-step billing would have been at the same volume, the proxy fabric saves you the cost of running your own, and the tier ladder maps cleanly to ops team scale.
It stops being the right buy when the workflow crosses out of the tab, when usage is spiky enough that the concurrency ceiling binds before the credit ceiling does, or when the compliance frame the buyer has to swallow makes the public per-action price irrelevant compared to the Enterprise contract. None of those three cases are flaws in Skyvern's pricing; they are signals that the surface, the rhythm, or the contract you actually need is somewhere else. The 2026 forecast is honest only when the unit you are billed in matches the surface your workflow lives on.
Bring a 2026 forecast that has to cross the browser-tab boundary.
If your 2026 plan has workflows in SAP GUI, Oracle Forms, Jack Henry, Fiserv, FIS, or Epic Hyperspace, those minutes need a different unit than Skyvern's credit. Twenty minutes is enough to record one live and replay it against the Windows UI Automation tree, with the meter ticking in seconds the whole time.
Frequently asked questions
Did Skyvern's pricing change in 2026?
Yes. On 30 January 2026, Skyvern shipped a Day 5 launch post titled 'Simpler Pricing Model' that retired the previous flat rate of five cents per step and introduced four tiers denominated in monthly credits: Free at $0 with roughly 1,000 credits, Hobby at $29 with roughly 30,000 credits, Pro at $149 with roughly 150,000 credits, and Enterprise at custom pricing with unlimited credits. The launch post also published rough action approximations: 170 actions on Free, 1,200 on Hobby, 6,200 on Pro. The structure is mirrored on skyvern.com/pricing as of April 2026.
Are there annual or volume discounts on the 2026 published tiers?
The published Hobby and Pro tiers are monthly only on skyvern.com/pricing. There is no public annual price for either. Enterprise is custom and the buyer-side levers (multi-year, prepaid credit packs, regional pricing) sit inside the contract, not on the public page. If your 2026 budget needs a contractual annual commit on credits, that conversation lives at the Enterprise tier, which is also where the bundled compliance frame (HIPAA, SOC 2 Type II, SSO, dedicated account manager, SLA) starts carrying weight.
Can I run Skyvern self-hosted in 2026 to avoid the credit pricing?
Yes for the runtime, no for the bundled cloud features. The Skyvern repository at github.com/Skyvern-AI/skyvern is licensed AGPL-3.0 and the runtime is a Playwright-compatible SDK driving managed Chromium. You can deploy it on your own infrastructure, but the cloud product's anti-bot fabric (CAPTCHA solving, residential proxies, geo-targeting) stays proprietary and is not in the public repository. You also carry the cost of the vision LLM yourself and you size your own concurrency. Self-hosting is credible for steady, predictable workloads where the bundled cloud features are not load-bearing. It is not the answer for hardened portals where the anti-bot fabric is the load-bearing piece.
How does the 2026 credit price compare to the previous per-step price for one workflow?
The old model charged a flat $0.05 per step. The 2026 Hobby tier ratios imply roughly 25 credits per action and roughly $0.024 per action when the bundled allowance is fully consumed, which is about half the old per-step rate. The honest caveat is that any per-action number is a tier average. A click on a static page that triggers no anti-bot path and needs no retries burns well below 25 credits. A multi-step authenticated flow on a hardened portal that hits a CAPTCHA, a proxy switch, and a vision retry burns well above. The averages size a tier; they do not quote a single workflow.
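The ratio arithmetic can be reproduced directly from the published Hobby figures; nothing below is assumed beyond the numbers already quoted:

```rust
// Tier-average math from the published Hobby numbers.
fn main() {
    let hobby_credits = 30_000.0; // published monthly allowance
    let hobby_actions = 1_200.0;  // published action approximation
    let hobby_price = 29.0;       // published monthly price
    let old_per_step = 0.05;      // retired flat per-step rate

    let credits_per_action = hobby_credits / hobby_actions; // 25 credits
    let price_per_action = hobby_price / hobby_actions;     // ~$0.024

    println!("credits/action: {credits_per_action}");
    println!(
        "$/action: {price_per_action:.3} (vs old ${old_per_step}/step)"
    );
}
```

Both numbers hold only when the bundled allowance is fully consumed; an account that burns half its credits has effectively paid double per action.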
What happens if my 2026 usage spikes past the Pro tier's concurrency limit?
The Day 5 launch post puts Pro at 25 concurrent sessions and Enterprise at unlimited. If your end-of-month batch routinely needs 30 or 40 parallel sessions, the published $149 line stops being the right number to forecast against, because the binding constraint is concurrency, not credits. You will either queue the spike (which extends wall-clock time but stays inside Pro), or move to Enterprise pricing for that workload. Both are reasonable answers, but neither is the published Pro number, and a 2026 forecast that uses the Pro number for a spiky workload is forecasting the wrong tier.
Does Skyvern's 2026 pricing cover desktop apps like SAP GUI, Oracle Forms, Jack Henry, Fiserv, Epic Hyperspace?
No. Every Skyvern tier in 2026 is sized for browser-tab work. The runtime drives a managed Chromium instance through a Playwright-compatible SDK; the selection layer is a vision LLM reading the rendered viewport. There is no Windows desktop runtime, no Citrix runtime, no mainframe terminal connector. If your workflow has to drive a SAP GUI window, an Oracle Forms session, a Jack Henry green-screen, or an Epic Hyperspace patient chart inside a Citrix shell, the credit unit cannot price it because the surface the credit measures is not where your work happens. That is the architectural boundary the 2026 pricing inherits from the runtime, not a limitation that goes away with a higher tier.
How does Mediar's per-minute pricing compare on a worked example?
Mediar charges $0.75 per minute of runtime, drawn against a $10,000 turn-key program prepay that converts to credits with a small bonus. A 3-minute desktop workflow runs at $2.25 of meter time, and that number does not change whether the underlying app is a hardened portal or a plain Excel sheet, because the unit is wall-clock time on the OS rather than browser-execution credits. The trade is that Mediar does not price browser-only work where the bundled CAPTCHA and proxy fabric are the load-bearing cost; that is exactly the surface Skyvern's credit unit is sized for. Pick the unit that matches where your workflow lives, not the smaller headline number.
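The worked example above in code. The $0.75 rate and $10,000 prepay are the figures quoted in the answer; the prepay conversion below ignores the small bonus credits the prepay is said to include:

```rust
// Flat per-minute metering: the cost does not depend on which
// application surface the workflow touches.
fn main() {
    let rate_per_minute = 0.75; // Mediar's published per-minute rate
    let workflow_minutes = 3.0;
    let cost = rate_per_minute * workflow_minutes;

    let prepay = 10_000.0; // turn-key program prepay
    let minutes_covered = prepay / rate_per_minute; // before any bonus credits

    println!("3-minute desktop workflow: ${cost:.2}");
    println!("prepay covers roughly {minutes_covered:.0} minutes");
}
```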
More from the Mediar topic series
Keep reading
Skyvern pricing decoded: what a credit actually buys, and where it stops
The structural read of Skyvern's January 2026 credits-based pricing: per-action math from the published tiers, the architectural reason a credit only buys browser-tab runtime, and the desktop-runtime replay code that explains why a different unit takes over.
CloudCruise, traced through BADGER: a guide to the architecture and where it stops
Five execution strategies on top of a directed-graph DSL, and the input-surface boundary that decides whether a browser-RPA tool can touch your workflow at all.
RPA agent UI input layer: accessibility tree versus pixels
The choice of input surface is the most consequential architectural decision an RPA agent makes. Walks the tree-versus-pixel split and what each gives up.