Intuned’s cost is based on several usage metrics. Visit Usage and billing to learn how to monitor your consumption and set limits, and check each project’s breakdown on the Usage page to identify which metrics to optimize. Below are ideas for optimizing each metric.

Compute hours

Speed up individual runs

The faster your automation runs, the less compute time you use. Follow the techniques in Make automations faster to reduce execution time—including resource blocking, network interception, and batching evaluate calls.
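As a sketch of the resource-blocking technique with Playwright (the browser API Intuned automations use): the set of blocked resource types below is an assumption to adapt per site, since some targets need stylesheets or images to render correctly.

```typescript
// Resource types that rarely affect scraping logic; blocking them cuts
// load time and bandwidth. Adjust the set if your target site needs them.
const BLOCKED_RESOURCE_TYPES = new Set(["image", "font", "media", "stylesheet"]);

function shouldBlock(resourceType: string): boolean {
  return BLOCKED_RESOURCE_TYPES.has(resourceType);
}

// With a Playwright page, wire it up like:
//   await page.route("**/*", (route) =>
//     shouldBlock(route.request().resourceType()) ? route.abort() : route.continue()
//   );
```

Keeping the decision in one small function makes it easy to tune the blocked set per site without touching the routing code.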

Reduce retries

When a run fails, it is retried up to the configured limit, and each retry consumes compute time for an attempt that ultimately fails. Fix transient issues at the source instead of relying on retries to work around them.
Use the trace viewer to debug failed runs and identify the root cause of errors.

Batch related operations

Each API run has startup overhead—loading the browser, navigating to the site, and rendering assets. Reduce this by grouping related operations:
  • Pagination — Instead of running one API per page, scrape multiple pages in a single run since the browser is already loaded.
  • Related tasks — If you have several small APIs that hit the same site, consider merging them to avoid reloading the site for each one.
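As a sketch of the pagination pattern: the `?page=` URL scheme and the `scrapeListings` helper are hypothetical, but the shape of the loop is the point, one browser startup amortized over many pages.

```typescript
// Hypothetical listing site that paginates via a ?page= query parameter.
function pageUrls(baseUrl: string, pages: number): string[] {
  return Array.from({ length: pages }, (_, i) => `${baseUrl}?page=${i + 1}`);
}

// In a single run the browser is already loaded, so visit every page in one loop:
//   for (const url of pageUrls("https://example.com/listings", 5)) {
//     await page.goto(url);
//     results.push(...(await scrapeListings(page))); // scrapeListings is your own extractor
//   }
```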

Speed up AuthSession checks

If you have AuthSessions enabled, the check API runs before every API call to verify the session is still valid. A slow check API adds overhead to every run. Keep your check API fast by:
  • Only checking for essential authentication indicators
  • Using simple selectors that resolve quickly
  • Avoiding unnecessary navigation or waiting
The techniques in Make automations faster also apply to your check API.
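A minimal sketch of a fast check, assuming a Playwright-style page: one cheap selector, a short timeout, no navigation. The `#account-menu` selector is an assumed logged-in indicator; substitute your site's own. A tiny `PageLike` interface stands in for the real `Page` so the sketch runs without a browser.

```typescript
// Minimal shape of the Playwright call the check uses, so the sketch is
// testable without a browser. A real check API receives the actual Page.
interface PageLike {
  waitForSelector(selector: string, options: { timeout: number }): Promise<unknown>;
}

// Fast session check: a single essential indicator with a short timeout.
async function check(page: PageLike): Promise<boolean> {
  try {
    // "#account-menu" is an assumed logged-in marker; use your site's own.
    await page.waitForSelector("#account-menu", { timeout: 2000 });
    return true;
  } catch {
    return false;
  }
}
```

The short timeout bounds the overhead the check adds to every run: a valid session resolves almost immediately, and an invalid one fails within two seconds instead of waiting out a default 30-second timeout.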

AI spend

Use cheaper models

Different AI models have different costs per token. When using AI SDK methods, specify a cheaper model if your task doesn’t require the most capable one.
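The model names and per-million-token prices below are illustrative only, not Intuned's or any provider's actual rates; the sketch just shows how the gap between a frontier model and a lighter one compounds per run.

```typescript
// Illustrative prices (hypothetical): a capable "large" model vs. a cheap "small" one.
const PRICE_PER_M_TOKENS: Record<string, { input: number; output: number }> = {
  "large-model": { input: 3.0, output: 15.0 },
  "small-model": { input: 0.15, output: 0.6 },
};

// Cost of a single call given its token counts.
function runCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICE_PER_M_TOKENS[model];
  return (inputTokens / 1e6) * p.input + (outputTokens / 1e6) * p.output;
}
```

At these sample rates the small model is over 20x cheaper per call, which is why defaulting to the cheapest model that handles your task reliably matters at scale.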

Use Intuned SDK methods with built-in caching

Intuned SDK methods like extractStructuredData have caching built in to avoid redundant LLM calls for identical inputs. Enable caching when processing similar pages.
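The SDK's caching is built in; this sketch only illustrates the underlying idea of input-keyed caching, with a hypothetical extractor standing in for the LLM call:

```typescript
import { createHash } from "node:crypto";

type Extractor = (content: string) => Promise<unknown>;

// Wrap an extractor so identical inputs hash to the same key and the
// expensive call runs only once per distinct input.
function withCache(extract: Extractor): Extractor {
  const cache = new Map<string, Promise<unknown>>();
  return (content) => {
    const key = createHash("sha256").update(content).digest("hex");
    if (!cache.has(key)) cache.set(key, extract(content));
    return cache.get(key)!;
  };
}
```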

Use deterministic code when you can

If you’re using AI for tasks that could be done with selectors and JavaScript, replace the AI calls with deterministic code. AI extraction is useful for unstructured or unpredictable content, but its cost adds up for predictable patterns. See Replace AI code with deterministic code.
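For example, when a site renders a value in the same template on every page, a selector or a pattern match replaces an LLM call entirely. The product-card markup below is hypothetical:

```typescript
// Deterministic extraction from a predictable template: zero AI spend,
// instant, and repeatable. (Hypothetical markup; in a real automation
// you would typically use a Playwright locator instead of a regex.)
function extractPrice(html: string): number | null {
  const m = html.match(/<span class="price">\$([\d.]+)<\/span>/);
  return m ? parseFloat(m[1]) : null;
}
```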

JobRuns

Reduce Job frequency

Check how often your Jobs run. If you’re polling for data that rarely changes, reduce the schedule frequency.

Skip unchanged items

If your Job processes multiple items but only some change between runs, skip items you’ve already scraped. Use the KV cache to track what’s been processed and detect changes. This reduces the number of payloads processed per JobRun.
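A sketch of the change-detection idea, with an in-memory Map standing in for the KV cache (the helper names are hypothetical; with the real KV store the logic is the same, only the storage differs):

```typescript
import { createHash } from "node:crypto";

// Stand-in for the KV cache: item id -> hash of its last-seen content.
const seen = new Map<string, string>();

// True when the item is new or its content changed since the last run;
// skip the payload entirely when this returns false.
function needsProcessing(id: string, content: string): boolean {
  const hash = createHash("sha256").update(content).digest("hex");
  if (seen.get(id) === hash) return false;
  seen.set(id, hash);
  return true;
}
```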

Tune concurrency

When a Job runs, the concurrency setting in your replication settings controls how many machines run in parallel. More machines mean faster completion but higher peak cost, since you’re charged for all active machines simultaneously. If speed isn’t critical, reduce concurrency to spread the load over time.
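The trade-off is simple arithmetic: the total compute hours are roughly fixed by the work itself, so concurrency trades wall-clock time against how many machine-hours you pay for at once. Using the Developer compute overage rate of $0.12/hour as an example:

```typescript
// Cost accruing per hour while the Job is running.
function peakHourlyCost(machines: number, ratePerHour: number): number {
  return machines * ratePerHour;
}

// How long the Job takes when the work is spread across fewer machines.
function wallClockHours(totalComputeHours: number, machines: number): number {
  return totalComputeHours / machines;
}
```

For a 10-compute-hour Job at $0.12/hour, 10 machines finish in 1 hour at a $1.20/hour peak, while 2 machines take 5 hours at a $0.24/hour peak; total spend is the same, but the lower-concurrency run keeps peak cost (and machine pressure) down.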

Machine size surcharge

Large and x-large machines cost more per hour. You typically need them for:
  • JavaScript-heavy websites that consume lots of memory
  • Automations that need to be very fast
  • Automations with long-running browser sessions that accumulate memory usage
  • Automations that require headful mode or stealth mode
To reduce machine size requirements, apply the techniques above: blocking heavy resources, shortening runs, and breaking up long-running browser sessions all lower memory pressure and may let your automation fit on the default machine size.

Upgrade your plan for better rates

Upgrading from the Developer plan to the Startup plan gives you lower overage rates:
| Metric | Developer | Startup |
| --- | --- | --- |
| Compute overage | $0.12/hour | $0.10/hour |
| JobRun overage | $0.12/run | $0.10/run |
| Large machine surcharge | $0.18/hour | $0.15/hour |
| X-Large machine surcharge | $0.48/hour | $0.40/hour |
If your monthly overages are significant, the Startup plan may pay for itself through reduced rates. See Plans and billing for full pricing details.
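A sketch of the breakeven check using the $0.02 overage deltas from the table above. The plans' base-price difference is not listed in this section, so it is a parameter here rather than a hard-coded number:

```typescript
// Monthly savings from Startup's lower overage rates ($0.02 less per
// compute hour and per JobRun, per the pricing table).
function monthlySavings(overageComputeHours: number, overageJobRuns: number): number {
  return overageComputeHours * (0.12 - 0.1) + overageJobRuns * (0.12 - 0.1);
}

// Upgrade pays off when rate savings exceed the plans' base-price difference.
function upgradePaysOff(
  overageComputeHours: number,
  overageJobRuns: number,
  planPriceDelta: number
): boolean {
  return monthlySavings(overageComputeHours, overageJobRuns) > planPriceDelta;
}
```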

Need help optimizing?

If your costs are higher than expected, reach out to us. We can review your usage patterns and suggest optimizations, or discuss custom plans for high-volume workloads.