Intuned is flexible—there are many ways to structure automations into projects. Here are common patterns to guide you.

Project patterns

Single-site scraper

Scrape entities from one website using a list-details pattern. Best for: Entity scrapers—product catalogs, job postings, event listings, public records. Structure:
my-scraper/
├── api/
│   ├── list           # Scrape paginated list
│   └── details        # Scrape individual item
├── helpers/
│   ├── transformers   # Date parsing, data cleanup
│   └── schema         # Shared data models
├── Intuned.json
└── pyproject.toml
API       Purpose                              Parameters
list      Scrape the list page                 Optional filters, offset, limit
details   Scrape extended data for one item    Item URL or ID from list
The list API returns lightweight records. The details API fetches everything else: images, full descriptions, related items, or data requiring interaction.
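For example, a list API can be a thin Playwright routine. This is a minimal sketch: the entry-point signature (an async function receiving a Playwright page and a params dict), the URL, and the selectors are all assumptions, so match the api/ files your project template generates.

    # api/list — minimal sketch. The entry-point signature is assumed;
    # match your template's generated api/ files.
    async def automation(page, params):
        offset = params.get("offset", 0)
        limit = params.get("limit", 50)
        await page.goto(f"https://example.com/listings?offset={offset}")

        records = []
        for card in (await page.locator(".listing-card").all())[:limit]:
            records.append({
                "title": await card.locator("h2").inner_text(),
                # details receives this URL or ID as its parameter
                "url": await card.locator("a").get_attribute("href"),
            })
        return records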

Automate a site without an API (RPA)

Build an integration with a site that lacks an API, or extend the API it does offer. Best for: Insurance portals, government websites, or any legacy site without official APIs. Structure:
insurance-portal/
├── api/
│   ├── get_claims        # List existing claims
│   ├── get_claim_status  # Check claim status
│   ├── submit_claim      # File a new claim
│   ├── upload_document   # Attach documents
│   └── download_eob      # Download explanation of benefits
├── helpers/
│   ├── navigation        # Common navigation steps
│   ├── patient           # Select patient flow
│   └── schema            # Data models
├── Intuned.json
└── pyproject.toml
API              Purpose                       Parameters
get_claims       List claims                   patient_id, optional filters
submit_claim     File new claim                patient_id, claim_data
upload_document  Attach file to claim          claim_id, document_url
download_eob     Get explanation of benefits   claim_id
This pattern typically uses AuthSessions: users log in once, and the session is reused across API calls.
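A get_claims sketch under that assumption: the page arrives already authenticated through the project's AuthSession, so the API contains no credential handling. The portal URL and selectors are illustrative.

    # api/get_claims — sketch. Assumes the AuthSession already logged the
    # browser in, so this API starts from an authenticated page.
    async def automation(page, params):
        await page.goto("https://portal.example.com/claims")
        await page.fill("#patient-search", params["patient_id"])
        await page.click("button#search")
        await page.wait_for_selector(".claim-row")

        return [
            {
                "claim_id": await row.get_attribute("data-claim-id"),
                "status": await row.locator(".status").inner_text(),
            }
            for row in await page.locator(".claim-row").all()
        ]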

Crawler

Crawl websites to extract content. Use this when a structured scraper isn’t a good fit—either the site has no clear structure, or you need something that works across multiple sites. Best for: AI/LLM data pipelines, search indexes, content aggregation, site archiving, knowledge bases. Structure:
web-crawler/
├── api/
│   ├── map        # Discover all URLs on a site
│   ├── scrape     # Extract content from single URL
│   ├── crawl      # Crawl and extract from subpages
│   └── search     # Search the web, get content
├── helpers/
│   ├── parser     # HTML to markdown, content extraction
│   └── schema     # Output schemas
├── Intuned.json
└── pyproject.toml
API     Purpose                                  Parameters
map     Discover all URLs on a site (fast)       start_url, limit
scrape  Extract content from single URL          url, formats (markdown, html, metadata)
crawl   Crawl site and extract from subpages     start_url, max_depth, limit
search  Search the web, return full content      query, limit
How it works:
  • map quickly returns all discoverable URLs without extracting content.
  • scrape handles a single page—returns markdown, HTML, metadata.
  • crawl combines discovery and extraction for an entire site.
  • search performs web search and scrapes results.
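Conceptually, crawl is map plus scrape in a loop: breadth-first URL discovery with per-page extraction. A simplified sketch follows; the depth cap and same-site rule are illustrative policy choices, not Intuned requirements.

    # Simplified crawl: BFS over same-site links, extracting each page.
    from collections import deque
    from urllib.parse import urlparse

    async def crawl(page, start_url, max_depth=2, limit=100):
        site = urlparse(start_url).netloc
        seen, results = {start_url}, []
        queue = deque([(start_url, 0)])

        while queue and len(results) < limit:
            url, depth = queue.popleft()
            await page.goto(url)
            results.append({
                "url": url,
                "title": await page.title(),
                "html": await page.content(),  # hand off to helpers/parser
            })
            if depth < max_depth:
                hrefs = await page.locator("a[href]").evaluate_all(
                    "els => els.map(e => e.href)"
                )
                for href in hrefs:
                    if urlparse(href).netloc == site and href not in seen:
                        seen.add(href)
                        queue.append((href, depth + 1))
        return results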

Multi-site template scraper

Scrape multiple sites that share the same structure—typically from a platform that generates sites for clients. Best for: Shopify stores (e-commerce), Lever/Greenhouse boards (job postings), franchise websites, sites built on the same CMS. Structure:
shopify-scraper/
├── api/
│   ├── list           # Scrape product list from any store
│   └── details        # Scrape product details
├── helpers/
│   ├── transformers   # Price parsing, variant handling
│   └── schema         # Unified product schema
├── Intuned.json
└── pyproject.toml
API      Purpose                  Parameters
list     Scrape product list      site_url, optional filters
details  Scrape product details   site_url, product_url
Output: Consistent schema across all sites. A product from store A has the same structure as one from store B.
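One way to guarantee that is a single model in helpers/schema that every API returns. A sketch using dataclasses; the field names are illustrative.

    # helpers/schema — one shared shape for every store on the platform.
    from dataclasses import dataclass, asdict

    @dataclass
    class Product:
        site_url: str      # which store the product came from
        product_url: str
        title: str
        price_cents: int   # normalized in helpers/transformers
        in_stock: bool

    # Both list and details return asdict(Product(...)), so a product
    # from store A is shaped exactly like a product from store B.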
If sites diverge significantly in structure, split them into separate projects. This pattern works best when the template is consistent.

Nested category scraper

Scrape sites with nested navigation: categories, subcategories, items. Best for: E-commerce catalogs, classified ads, business directories. Structure:
ecommerce-scraper/
├── api/
│   ├── get_categories    # Get top-level categories
│   ├── list              # Get items in a category
│   └── details           # Get product details
├── helpers/
│   ├── transformers      # Price, variant parsing
│   └── schema            # Product schema
├── Intuned.json
└── pyproject.toml
API             Purpose                            Parameters
get_categories  Get all categories/subcategories   Optional parent_category
list            Get items in a category            category_url, pagination
details         Get full product data              product_url
Start with get_categories to discover the category tree, then call list for each category, then details for each item.
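From the caller's side, the chaining looks like this. run_api is a placeholder for however you invoke the APIs (SDK, REST, or Jobs); it is not an Intuned built-in.

    # Caller-side sketch of the get_categories -> list -> details chain.
    # run_api() is a stand-in, not an Intuned built-in; in production you
    # would typically fan the inner calls out as Job payloads instead.
    async def scrape_catalog(run_api):
        for category in await run_api("get_categories", {}):
            items = await run_api("list", {"category_url": category["url"]})
            for item in items:
                yield await run_api("details", {"product_url": item["url"]})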

AI agent automation

Use AI to automate workflows across unknown or varied websites. The target site isn’t known in advance—the AI adapts at runtime. Best for: Multi-provider comparisons, cross-site data aggregation, generic web tasks where building per-site scrapers isn’t feasible. Structure:
insurance-quotes/
├── api/
│   ├── find_providers       # Search for top providers
│   ├── get_quote            # Get quote from any provider site
│   └── download_quote_pdf   # Download quote document
├── helpers/
│   ├── agent                # AI agent setup (Browser-Use, Stagehand)
│   └── schema               # Unified quote schema
├── Intuned.json
└── pyproject.toml
API                 Purpose                                 Parameters
find_providers      Search for providers                    query, count
get_quote           Navigate to site, find form, submit     site_url, car_details
download_quote_pdf  Download quote document from provider   site_url, quote_id
How it works: AI agents (like Browser-Use or Stagehand) handle navigation, form discovery, and data extraction. The same API works on sites it’s never seen before.
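A get_quote sketch built on Browser-Use. The import paths and Agent arguments follow that library's documented pattern, but it evolves quickly, so verify against the version you install; the task wording and return shape are assumptions.

    # api/get_quote — sketch driving an unknown provider site with an AI
    # agent. Check Browser-Use's current docs for exact imports/arguments.
    from browser_use import Agent
    from langchain_openai import ChatOpenAI

    async def automation(page, params):
        agent = Agent(
            task=(
                f"Go to {params['site_url']}, find the car insurance "
                f"quote form, fill it using {params['car_details']}, "
                "submit it, and report the quoted monthly premium."
            ),
            llm=ChatOpenAI(model="gpt-4o"),
        )
        history = await agent.run()
        # Map the agent's free-form answer into your unified quote schema.
        return {"site_url": params["site_url"],
                "quote": history.final_result()}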
Use this pattern when target sites are unknown or too numerous to build individual scrapers. For known, stable sites, deterministic scrapers are more reliable.

Utility project

Simple, focused APIs. Building blocks for larger systems. Best for: Markdown extraction, PDF collection, screenshot services, page archiving. Structure:
web-utilities/
├── api/
│   ├── get_markdown     # Convert page to markdown
│   ├── get_pdfs         # Extract PDF links/attachments
│   ├── screenshot       # Capture page screenshot
│   └── archive_page     # Save full page content
├── Intuned.json
└── pyproject.toml
API           Purpose                             Parameters
get_markdown  Convert page to markdown            url, include_images
get_pdfs      Extract all PDF links/attachments   url
screenshot    Capture page screenshot             url, viewport, full_page
archive_page  Save full page content              url, format
Keep utilities minimal and predictable. They compose well into larger workflows.
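For instance, get_markdown can be little more than a page load plus a conversion. A sketch: markdownify is one of several HTML-to-markdown libraries you could pick, and the entry-point signature is assumed as before.

    # api/get_markdown — sketch of a minimal utility API.
    from markdownify import markdownify  # pip install markdownify

    async def automation(page, params):
        await page.goto(params["url"], wait_until="networkidle")
        html = await page.content()
        return {
            "url": params["url"],
            "markdown": markdownify(html, strip=["script", "style"]),
        }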

Multi-purpose project

Consolidate many APIs into one project, using nested folders to organize them. Teams typically manage these projects with the Intuned CLI for local development and deploy via CI/CD pipelines. Best for: Automations that share significant logic and helpers (without helper packages), and organizations that want to manage many different-purpose automations in one project. Structure:
company-automations/
├── api/
│   ├── my-scraper/
│   │   ├── list
│   │   └── details
│   ├── insurance-portal/
│   │   ├── get_claims
│   │   ├── submit_claim
│   │   └── download_eob
│   ├── web-crawler/
│   │   ├── map
│   │   ├── scrape
│   │   └── crawl
│   └── markdown-extractor/
│       └── get_markdown
├── helpers/
│   ├── schema            # Shared schemas
│   ├── parsers           # Common parsers
│   └── transformers      # Data transformers
├── Intuned.json
└── pyproject.toml
API naming: Folders become part of the API name; the first API above is named my-scraper/list. Workflow:
  1. Develop locally using Intuned CLI.
  2. Source control with git.
  3. Deploy with CI/CD (GitHub Actions) using intuned deploy.
Use this pattern with care. Consolidating too much into one project means all APIs deploy together, share configuration, and can’t scale independently. Consider whether separate projects would be cleaner—the same tradeoffs apply as putting all code in one monorepo vs. splitting into focused repositories.

Design guidelines

Whatever pattern you choose, keep these principles in mind:

One project, one purpose

Everything in a project deploys together and shares configuration (AuthSessions, secrets, settings). Split when automations target different sites, have different auth requirements, or change independently. Combine when automations work on the same platform, share authentication, or share significant code.

Small, focused APIs

Each API should perform one logical unit of browser work.
Benefit          Explanation
Retryability     If step 3 fails, retry step 3—not all 10
Parallelization  Fan out thousands of runs with Jobs
Observability    Monitor and debug at the run level
A good API does one thing, succeeds or fails independently, and is retriable with the same parameters.
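Concretely, a retriable API reads everything it needs from its parameters and performs exactly one step. A sketch reusing the earlier download_eob example; the URL and selector are illustrative.

    # One logical unit of work: fetch one document for one claim.
    # Re-running with the same params repeats this step and nothing else,
    # which is what makes retries and Job fan-out safe.
    async def automation(page, params):
        claim_id = params["claim_id"]  # everything needed arrives in params
        await page.goto(f"https://portal.example.com/claims/{claim_id}")
        async with page.expect_download() as download_info:
            await page.click("a#download-eob")
        download = await download_info.value
        return {"claim_id": claim_id, "file": str(await download.path())}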