In this quickstart, you’ll build a scraper that extracts structured product information from an e-commerce site and deploy it to Intuned. By the end, you’ll have a working scraper ready to run on demand or on a schedule.

Prerequisites

  • An active Intuned account (sign up here). No credit card is required; Intuned has a free plan
  • Basic familiarity with TypeScript or Python

Create and deploy your first scraper

You can develop Intuned Projects in two ways:
  • Online IDE — Zero setup. Write, test, and deploy directly from your browser.
  • CLI — Local development with full version control and CI/CD integration.
Choose your preferred approach below.

Log in and create project

  1. Go to app.intuned.io/projects and log in.
  2. Select Create Project.
  3. Select your language (TypeScript or Python).
  4. Choose the ecommerce template.
  5. Name it ecommerce-scraper-quickstart.
  6. Ensure IDE is selected as Type.
  7. Select Create and Open.
(Screenshot: Create Project)
Expected result: The Intuned IDE opens with your project loaded.
What you just got: An Intuned Project groups related browser automations together. Each file in the api/ folder becomes a callable function that controls a browser using Playwright, accepts parameters, and returns structured results. When you deploy this project, all its APIs go live together as a single deployable unit.
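To make that concrete, here is a minimal sketch of what an API file looks like. The handler signature, types, and selectors below are assumptions for illustration; the template you just created contains the authoritative versions, so compare against the generated files rather than copying this verbatim.

```typescript
// Minimal sketch of an API file (the handler shape is an assumption;
// check the template's generated code for the exact signature).
import type { Page } from "playwright";

interface Params {
  url: string; // page to scrape, passed in by the caller
}

export default async function handler(params: Params, { page }: { page: Page }) {
  await page.goto(params.url);
  // Placeholder selectors; adapt them to the site the template targets.
  const name = await page.locator("h1").first().innerText();
  const price = await page.locator(".price").first().innerText();
  return { name, price }; // the returned object is the run's structured result
}
```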

Explore the project code

In the file explorer, you’ll see two API files:
  • api/list - Navigates the e-commerce site, extracts product info from all pages, and triggers details for each product found.
  • api/details - Visits each product page and extracts detailed information (price, SKU, descriptions, variants).
These two APIs work together—list discovers products and triggers details for each one using extendPayload. This pattern works well for job runs where the scope of work is determined at runtime, allowing your automation to adapt to whatever data it discovers.
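As a rough illustration of the pattern, the core of api/list might look like the sketch below. The extendPayload import path and the payload field names are assumptions; confirm them against the template code before relying on them.

```typescript
// Sketch of api/list: discover product URLs, then queue one `details`
// run per product. Import path and payload field names are assumptions.
import { extendPayload } from "@intuned/sdk/runtime"; // assumed path
import type { Page } from "playwright";

export default async function handler(_params: unknown, { page }: { page: Page }) {
  await page.goto("https://example-store.com/products"); // placeholder URL
  const urls = await page
    .locator("a.product-card") // placeholder selector
    .evaluateAll((els) => els.map((el) => (el as HTMLAnchorElement).href));

  // Each payload becomes an additional `details` run in a Job context.
  await extendPayload(
    urls.map((url) => ({ apiName: "details", parameters: { url } }))
  );

  return { discovered: urls.length };
}
```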

Run your scraper in the IDE

Test the scraper’s list API to see it working in real-time.
  1. In the top toolbar, select list from the API dropdown.
  2. Select Params #1 next to it—you’ll see empty params {}.
  3. Select the Run button.
(Screenshot: Run API in the IDE)
Expected result: The browser panel on the right shows the list scraper executing live. You’ll see it navigate through all product pages, extract data, and paginate automatically. The terminal below shows what executed and the result of the run.
Extended payloads: The IDE also displays a link to view the extended payloads created from this run. For each product found, you’ll see a payload containing the API name (details) and the product parameters. These payloads represent additional runs that execute when the API runs in a Job context.
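Each extended payload is simply the target API plus its parameters. Illustratively (values are made up, and field names may differ slightly in your SDK version):

```typescript
// One extended payload produced by a list run (illustrative values):
const payload = {
  apiName: "details", // the API to invoke
  parameters: { url: "https://example-store.com/products/blue-widget" },
};
```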

Deploy your project

Deploy your scraper to Intuned’s infrastructure.
  1. Select the Deploy button in the top-right corner of the IDE.
  2. Leave the Create default job toggle enabled.
  3. In the deployment dialog, select Deploy to start.
  4. Watch the live deployment logs until you see “Ready”.
Expected result: A success message appears. Your scraper is now live and ready to run.

Trigger the default job and view results

Now trigger the default job to see your scraper run live.
  1. In the deployment success dialog, select Trigger Default Job (or navigate to Jobs).
  2. Confirm the dialog to trigger the job.
  3. A new Job Run will start; select it to view the details.
Jobs are the common way to run scrapers. They allow you to configure schedules, set data destinations (webhooks, S3, etc.), and control execution options like machine allocation and retries. The default job is created with your project without a schedule or destination—it’s a simple way to test your scraper and see the full execution flow.
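Conceptually, a job ties together an entry-point API, an optional schedule, an optional destination, and execution options. The sketch below is only a mental model with hypothetical field names; in practice you configure all of this through the Jobs UI.

```typescript
// Mental model of a job's settings (field names are hypothetical; jobs are
// configured in the Intuned UI, not by writing this object):
const jobSettings = {
  api: "list",                                  // entry point for each run
  schedule: { cron: "0 6 * * *" },              // e.g. run daily at 06:00
  sink: { type: "webhook", url: "https://example.com/ingest" }, // or S3, etc.
  retries: 2,                                   // retry failed API runs
};
```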
(Screenshot: Trigger Default Job)
Expected result: You’ll be taken to the Job Run results page. The scraper takes a few minutes to execute; you’ll see the list of API runs and their results as they start and complete.
Your scraper is now deployed and fully operational.

What’s next?

  • Jobs — The common way to run scrapers. Configure a schedule (daily, hourly, or custom) and define a sink to send your scraper results to a webhook, S3 bucket, or other destination.
  • Authentication — For scrapers that require login, Intuned provides built-in authentication support. You define how to log in and how to verify a session, and Intuned handles the rest—validating sessions before runs, reusing them when possible, and recreating them when expired.
  • Monitoring and traces — Every run generates detailed logs, browser traces, and session recordings. Use these tools to debug failures, verify your scraper is working correctly, and understand what happened during execution.
  • Flexible automations — Build scrapers your way. Write deterministic code, use AI-driven extraction, or combine both in a hybrid approach. Use any library or package—Intuned is unopinionated by design.
  • Intuned Agent quickstart — You can write your scraper logic manually like in this quickstart, or use Intuned Agent to generate scrapers from a prompt and schema. Intuned Agent can also help you update existing scrapers, fix failed runs, and iterate on your code faster.
  • Cookbook — Browse full working examples of scrapers and other automations. Each example includes complete code you can use as a starting point for your own projects.
  • Online IDE — Learn more about the Intuned IDE used in this quickstart.
  • Local development (CLI) — Learn more about developing Intuned Projects locally with the Intuned CLI, an alternative to the online IDE used in this quickstart.