
Overview

Jobs are blueprints that define when, how, and what browser automations to execute. Instead of triggering individual API runs manually, Jobs let you orchestrate multiple automation executions together—running them in bulk, on a schedule, or both. They handle scheduling, retries, concurrency management, and result aggregation. The most common use case: set up a scraper to run on a schedule and configure a sink to deliver results directly to your system. You create the scraper, configure the Job with a schedule and a webhook or S3 sink, and the data flows into your service automatically—no polling or manual exports required.
Job (blueprint)
  └── JobRun (execution instance)
        ├── Run 1 (API execution for payload item 1)
        │     └── Attempt 1, Attempt 2, etc.
        ├── Run 2 (API execution for payload item 2)
        │     └── Attempt 1, Attempt 2, etc.
        └── Run N (API execution for payload item N)
              └── Attempt 1, Attempt 2, etc.

Usage

Create a Job

Navigate to Jobs

Open your Project in the Intuned dashboard and select the Jobs tab.

Create new Job

Select Create Job to open the configuration editor. A JSON editor appears where you can define your Job configuration.

Trigger a JobRun

A Job can have a schedule, but the schedule is optional. Even if a Job has a schedule configured, you can trigger it on-demand whenever you want. Each trigger creates a new JobRun.
1. Navigate to Jobs

Go to the Jobs tab in your Project.

2. Find your Job

Locate your Job in the list.

3. Trigger the Job

Select the menu next to the Job, then select Trigger.

Monitor a Job

Intuned provides full observability into every JobRun, so you can see what happened and why. Each execution generates detailed logs, browser traces, and session recordings that help you debug issues and understand your automation’s behavior.
1. Open a JobRun

Navigate to the Jobs tab and select a JobRun to view its details.

2. Track progress

See how many payload items are pending, running, completed, or failed.

3. View Runs and Attempts

View individual Runs and their Attempts, including browser traces and logs.

Pause and resume a Job

Pausing a Job stops new JobRuns from starting and prevents in-progress JobRuns from executing new payload items. Currently running Runs will be canceled and retried when you resume. Use pause when you need to temporarily stop execution—for example, to fix an issue or update credentials.
Pause: Navigate to the Jobs tab, select the menu next to the Job, then select Pause.

Resume: Select the menu next to the paused Job, then select Resume. This re-enables scheduling and continues any paused JobRuns from where they left off.

Terminate a JobRun

Terminating immediately stops a specific JobRun instance. All in-progress Runs are canceled, and no further payload items execute. Use terminate when you need to stop a JobRun completely—for example, if it was triggered by mistake or is no longer needed.
1. Find the JobRun

Navigate to the Jobs tab and select a Job with active JobRuns.

2. Terminate the JobRun

Select the menu next to the active JobRun, then select Terminate.

Delete a Job

Deleting a Job removes it permanently from your Project. Any in-progress JobRuns will be terminated. You cannot undo this action.
1. Navigate to Jobs

Go to the Jobs tab in your Project.

2. Delete the Job

Select the menu next to the Job, then select Delete.

3. Confirm deletion

Confirm when prompted.

Extend Run timeout

Jobs have a requestTimeout configuration that controls how long each Run Attempt can take before it fails. The default is 600 seconds (10 minutes). For long-running automations, call extendTimeout to reset the timer:
import { extendTimeout } from "@intuned/sdk/runtime";
import { Page } from "playwright";

export default async function run(params: any, page: Page) {
  extendTimeout(); // reset the timer to the configured requestTimeout value
  // Perform long-running scraping...
  return { success: true };
}
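For example, in a paginated scrape you can reset the timer at the top of each iteration so no single stretch of work exceeds the limit. This is a minimal sketch; the selectors are illustrative and the import path follows the extendPayload example later on this page:
import { extendTimeout } from "@intuned/sdk/runtime";
import { Page } from "playwright";

export default async function run(params: any, page: Page) {
  await page.goto("https://example.com/reports");

  const rows: string[] = [];
  let hasNextPage = true;

  while (hasNextPage) {
    extendTimeout(); // reset the requestTimeout clock before each page of work

    // illustrative selectors only
    const pageRows = await page.$$eval(".report-row", els =>
      els.map(el => el.textContent ?? "")
    );
    rows.push(...pageRows);

    const next = page.locator("a.next-page");
    hasNextPage = (await next.count()) > 0;
    if (hasNextPage) {
      await next.click();
      await page.waitForLoadState("domcontentloaded");
    }
  }

  return { totalRows: rows.length };
}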

Use Jobs with AuthSessions

Jobs fully support AuthSessions. When you configure a Job for an authenticated Project, specify the AuthSession in the Job configuration. All Runs in the JobRun use the same AuthSession, including extended Runs added via extendPayload.
{
  "id": "daily-dashboard-scraper",
  "payload": [
    {
      "api": "scrape-analytics",
      "parameters": { "reportType": "daily" }
    }
  ],
  "auth_session": {
    "id": "company-admin-session"
  }
}
Jobs validate the AuthSession when the JobRun starts and before each Run attempt. If validation fails and can’t recover, the JobRun pauses—you can fix the AuthSession manually and resume it from where it paused.
For detailed information about AuthSessions, authentication patterns, and how validation works, see the AuthSessions documentation and the Intuned in-depth guide.

Extend Jobs dynamically

Jobs support nested scheduling, where an API can dynamically extend the JobRun’s payload during execution using the extendPayload function. This is useful when the full scope of work isn’t known until you start executing—for example, scraping an e-commerce site where product URLs aren’t known upfront:
import { extendPayload } from "@intuned/sdk/runtime";
import { Page } from "playwright";

export default async function run(params: any, page: Page) {
  await page.goto('https://example.com/products');
  
  const productLinks = await page.$$eval('a.product', links =>
    links.map(a => (a as HTMLAnchorElement).href)
  );
  
  for (const link of productLinks) {
    extendPayload({
      api: "scrape-product-details",
      parameters: { productUrl: link }
    });
  }
  
  return {
    discoveredProducts: productLinks.length,
    message: `Extended job with ${productLinks.length} product scraping tasks`
  };
}
Important considerations:
  • extendPayload only works within JobRuns (not standalone Runs)
  • Extended items are added to the same JobRun and tracked together
  • Extended items respect the Job’s retry and concurrency configuration
  • Extended items automatically inherit the JobRun’s AuthSession
  • You can call extendPayload multiple times within a single API execution
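For completeness, the scrape-product-details API referenced in the example above might look something like the following sketch. The selectors and returned fields are assumptions for illustration, not part of the Intuned API:
import { Page } from "playwright";

// Handles one extended payload item: { api: "scrape-product-details", parameters: { productUrl } }
export default async function run(params: { productUrl: string }, page: Page) {
  await page.goto(params.productUrl);

  // illustrative selectors only
  const title = await page.locator("h1.product-title").textContent();
  const price = await page.locator(".price").first().textContent();

  return {
    url: params.productUrl,
    title: title?.trim(),
    price: price?.trim(),
  };
}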

Configuration reference

Basic Job structure

Every Job requires an ID and a payload. Here’s a basic example:
{
  "id": "daily-book-scraper",
  "payload": [
    {
      "api": "scrape-category",
      "parameters": {
        "category": "Poetry"
      }
    },
    {
      "api": "scrape-category",
      "parameters": {
        "category": "Travel"
      }
    }
  ],
  "configuration": {
    "retry": {
      "maximumAttempts": 3
    },
    "maximumConcurrentRequests": 1
  }
}

Payload configuration

The payload array defines what APIs to execute:
"payload": [
  {
    "apiName": "login-and-extract",
    "parameters": {
      "username": "[email protected]",
      "targetUrl": "https://example.com/dashboard"
    },
    "retry": {
        "maximumAttempts": 5
    }
  }
]
Field              Description
apiName            The name of the API to run (must exist in your Project)
parameters         Parameters to pass to the API
retry (optional)   Override the Job-level retry setting for this payload item

Execution configuration

"configuration": {
  "retry": {
    "maximumAttempts": 3
  },
  "maximumConcurrentRequests": 10,
  "requestTimeout": 600
}
Field                       Default           Description
maximumAttempts             3                 Default maximum attempts for each payload item
maximumConcurrentRequests   Project default   Maximum payload items executing simultaneously
requestTimeout              600               Seconds to wait for each Run attempt before failing

Schedule configuration

Jobs support two scheduling methods: intervals and calendars.

Intervals — Run every X period:
"schedule": {
  "intervals": [
    { "every": "1h" },
    { "every": "86400000" }
  ]
}
Intervals can be milliseconds (number) or ms-formatted strings like "1h", "30m", "7d".

Calendars — Run at specific times:
"schedule": {
  "calendars": [
    {
      "dayOfWeek": { "start": "MONDAY", "end": "FRIDAY" },
      "hour": { "start": 9, "end": 17 },
      "minute": "0",
      "comment": "Run weekdays 9am-5pm at the start of every hour"
    }
  ]
}
Calendar fields support single values ("hour": 9), ranges ("hour": { "start": 9, "end": 17 }), and wildcards ("month": "*").
Jobs trigger when any interval or calendar condition is met. If you configure both “every 7 days” and “first of every month”, the Job runs when either condition occurs.
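If you are unsure what an interval string such as "1h" resolves to, the strings appear to follow the ms npm package format, so you can check values locally (a small sketch, assuming ms is installed):
import ms from "ms";

// Each ms-formatted string resolves to its millisecond equivalent.
console.log(ms("1h"));  // 3600000
console.log(ms("30m")); // 1800000
console.log(ms("7d"));  // 604800000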

Sink configuration

Send Job results to external systems. See the Sinks API reference for detailed options.

Webhook:
"sink": {
  "type": "webhook",
  "url": "https://webhook.site/demo"
}
S3:
"sink": {
  "type": "s3",
  "bucket": "job-results",
  "region": "us-east-1",
  "skipOnFail": false,
  "accessKeyId": "XXXXXXXXXXXXXXXXXXXX",
  "secretAccessKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
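As a rough sketch of the receiving side, a webhook sink only needs an HTTP endpoint that accepts POSTed results. The body shape and identifier field below are assumptions; check the Sinks API reference for the actual schema. Because sink delivery is at-least-once, the handler dedupes before persisting:
import { createServer } from "http";

const seen = new Set<string>(); // naive in-memory dedupe; use durable storage in production

createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }

  let body = "";
  req.on("data", chunk => (body += chunk));
  req.on("end", () => {
    const result = JSON.parse(body);

    // Derive an idempotency key from whatever identifier the delivery carries
    // (the field name here is hypothetical; adapt it to the real sink payload).
    const key = result.runId ?? body;
    if (!seen.has(key)) {
      seen.add(key);
      // ...persist the result to your datastore here
    }

    res.writeHead(200).end(); // acknowledge so the delivery is not retried
  });
}).listen(3000);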

AuthSession configuration

"auth_session": {
  "id": "auth-session-123",
  "checkAttempts": 3,
  "createAttempts": 2
}
Field            Default   Description
id (required)    n/a       The ID of a credential-based AuthSession
checkAttempts    3         Times to validate before each Run attempt
createAttempts   3         Times to recreate if invalid

Best practices

  • Keep APIs focused: Design each API to handle a single concern. Use Jobs to orchestrate multiple APIs rather than building monolithic automations.
  • Use nested scheduling for discovery patterns: When scraping lists before details, use one API to discover items and extendPayload to process each.
  • Limit concurrency for rate-limited targets: Set maximumConcurrentRequests to 1-5 for rate-limited sites. Increase for robust targets.
  • Include metadata in parameters: Pass identifiers or context ({ "category": "electronics", "batchId": "2024-10-16" }) for easier debugging; see the sketch after this list.
  • Use sinks for production workflows: Configure webhooks or S3 to automatically capture results rather than manually exporting.
  • Test before scheduling: Create a QA instance of your Job (no schedule or sink) to test manually before setting up the production version.
  • Use service account AuthSessions: Use dedicated service accounts rather than personal credentials for clearer audit trails.
  • Monitor JobRun history regularly: Check for patterns in failures. Consistent failures in specific payload items may indicate API issues.
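To illustrate the metadata point above, an API can simply echo its identifying parameters back in the result so records arriving at a sink are easy to correlate. This is a sketch; the parameter names mirror the bullet above and the selectors are hypothetical:
import { Page } from "playwright";

export default async function run(params: any, page: Page) {
  const { category, batchId } = params; // metadata passed through the payload item

  await page.goto(`https://example.com/c/${category}`);
  const titles = await page.$$eval(".item-title", els =>
    els.map(el => el.textContent?.trim())
  );

  // Echo the metadata back so each result can be traced to its batch and category.
  return { category, batchId, items: titles };
}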

Limitations

  • Execution order is not guaranteed: Payload items may execute in any order depending on concurrency and worker availability.
  • Extended payload items execute asynchronously: Items added via extendPayload are queued and execute as workers become available.
  • No conditional execution within Jobs: Jobs execute all payload items. Use nested scheduling and API-level logic for conditional workflows.
  • Schedule precision: Scheduled JobRuns trigger within a reasonable window but not at the exact millisecond.
  • Sink delivery is at-least-once: Results may be delivered multiple times in rare failure scenarios. Handle duplicates idempotently.
  • AuthSession is shared across all Runs: You cannot use different AuthSessions for different payload items within the same JobRun.
  • Only credential-based AuthSessions support auto-recreation: Recorder-based AuthSessions must be manually recreated when they expire.

FAQs

How do Jobs differ from direct Run API calls?
Direct Run API calls execute a single API immediately. Jobs orchestrate multiple API calls together with scheduling, retries, and concurrency management. Use direct Runs for one-off executions; use Jobs for batch processing and recurring automations.

Can I edit a Job after it has been created?
Yes. Changes apply to future JobRuns—they don’t affect JobRuns currently in progress.

How do I run the same API with different parameters?
Include multiple payload items with the same api value but different parameters. Each creates a separate Run.

What happens if I call extendPayload multiple times?
Each call adds items to the current JobRun’s queue. All extended items are tracked together and execute according to the Job’s concurrency and retry settings.

Can different payload items use different AuthSessions?
No. All Runs within a JobRun use the same AuthSession. Create separate Jobs for different AuthSessions.

Do extended payload items need their own AuthSession?
Extended Runs automatically inherit the JobRun’s AuthSession. You don’t need to specify authentication for extended payload items.