Intuned is flexible—there are many ways to structure automations into projects. Here are common patterns to guide you.
Project patterns
Single-site scraper
Scrape entities from one website using a list-details pattern.
Best for: Entity scrapers—product catalogs, job postings, event listings, public records.
Structure:
```
my-scraper/
├── api/
│   ├── list            # Scrape paginated list
│   └── details         # Scrape individual item
├── helpers/
│   ├── transformers    # Date parsing, data cleanup
│   └── schema          # Shared data models
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| list | Scrape the list page | Optional filters, offset, limit |
| details | Scrape extended data for one item | Item URL or ID from list |
The list API returns lightweight records. The details API fetches everything else: images, full descriptions, related items, or data requiring interaction.
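For orientation, here is a minimal sketch of the list half of the pair. The automation(page, params) entrypoint, the URL, and the selectors are illustrative assumptions, not Intuned's exact contract or a real site:

```python
# api/list -- hedged sketch; entrypoint shape, URL, and selectors are assumptions.
from playwright.async_api import Page


async def automation(page: Page, params: dict | None = None):
    params = params or {}
    offset = params.get("offset", 0)  # optional pagination inputs
    limit = params.get("limit", 50)

    await page.goto(f"https://example.com/items?offset={offset}")

    records = []
    for card in await page.locator(".item-card").all():
        records.append({
            "title": await card.locator(".title").inner_text(),
            # This URL becomes the parameter for the details API.
            "url": await card.locator("a").first.get_attribute("href"),
        })
        if len(records) >= limit:
            break
    return records  # lightweight records only; details fetches the rest
```

A matching details API would then take that url parameter, navigate to it, and extract the heavy fields: full description, images, related items.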
Automate a site without an API (RPA)
Build an integration with a site that lacks an API, or extend what already exists.
Best for: Insurance portals, government websites, or any legacy website that has no official APIs.
Structure:
```
insurance-portal/
├── api/
│   ├── get_claims          # List existing claims
│   ├── get_claim_status    # Check claim status
│   ├── submit_claim        # File a new claim
│   ├── upload_document     # Attach documents
│   └── download_eob        # Download explanation of benefits
├── helpers/
│   ├── navigation          # Common navigation steps
│   ├── patient             # Select patient flow
│   └── schema              # Data models
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| get_claims | List claims | patient_id, optional filters |
| submit_claim | File new claim | patient_id, claim_data |
| upload_document | Attach file to claim | claim_id, document_url |
| download_eob | Get explanation of benefits | claim_id |
This pattern typically uses AuthSessions: users log in once, and the session is reused across subsequent API calls.
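On Intuned you don't wire this up yourself, but the mechanics are easy to picture. The sketch below shows the same idea with plain Playwright and its storage_state; the portal URL, selectors, and file path are all invented:

```python
# Conceptual only: Intuned AuthSessions manage this for you. Shown here with
# Playwright's storage_state round-trip; every URL and selector is invented.
from playwright.async_api import async_playwright

STATE_FILE = "portal_session.json"  # hypothetical location


async def login_once():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        context = await browser.new_context()
        page = await context.new_page()
        await page.goto("https://portal.example.com/login")
        await page.fill("#username", "user")
        await page.fill("#password", "secret")
        await page.click("button[type=submit]")
        # Persist cookies/localStorage so later calls skip the login flow.
        await context.storage_state(path=STATE_FILE)
        await browser.close()


async def get_claims():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        # Reuse the saved session: no second login needed.
        context = await browser.new_context(storage_state=STATE_FILE)
        page = await context.new_page()
        await page.goto("https://portal.example.com/claims")
        claims = await page.locator(".claim-row").all_inner_texts()
        await browser.close()
        return claims
```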
Crawler
Crawl websites to extract content. Use this when a structured scraper isn’t a good fit—either the site has no clear structure, or you need something that works across multiple sites.
Best for: AI/LLM data pipelines, search indexes, content aggregation, site archiving, knowledge bases.
Structure:
```
web-crawler/
├── api/
│   ├── map        # Discover all URLs on a site
│   ├── scrape     # Extract content from single URL
│   ├── crawl      # Crawl and extract from subpages
│   └── search     # Search the web, get content
├── helpers/
│   ├── parser     # HTML to markdown, content extraction
│   └── schema     # Output schemas
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| map | Discover all URLs on a site (fast) | start_url, limit |
| scrape | Extract content from single URL | url, formats (markdown, html, metadata) |
| crawl | Crawl site and extract from subpages | start_url, max_depth, limit |
| search | Search the web, return full content | query, limit |
How it works:
- map quickly returns all discoverable URLs without extracting content.
- scrape handles a single page and returns markdown, HTML, and metadata.
- crawl combines discovery and extraction for an entire site (sketched below).
- search performs a web search and scrapes the results.
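At its core, crawl is breadth-first discovery bounded by max_depth and limit. A minimal sketch, assuming plain Playwright and a same-host rule (both assumptions, not Intuned specifics):

```python
# Hedged sketch of a bounded breadth-first crawl behind a crawl API.
from collections import deque
from urllib.parse import urljoin, urlparse

from playwright.async_api import Page


async def crawl(page: Page, start_url: str, max_depth: int = 2, limit: int = 100):
    host = urlparse(start_url).netloc
    seen = {start_url}
    frontier = deque([(start_url, 0)])
    results = []

    while frontier and len(results) < limit:
        url, depth = frontier.popleft()
        await page.goto(url)
        results.append({"url": url, "html": await page.content()})
        if depth >= max_depth:
            continue
        # Collect same-host links for the next level of the crawl.
        hrefs = await page.locator("a[href]").evaluate_all(
            "els => els.map(e => e.href)"
        )
        for href in hrefs:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                frontier.append((absolute, depth + 1))
    return results
```

map is essentially the same loop with the content extraction dropped, which is why it runs faster.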
Multi-site template scraper
Scrape multiple sites that share the same structure—typically from a platform that generates sites for clients.
Best for: Shopify stores (e-commerce), Lever/Greenhouse boards (job postings), franchise websites, sites built on the same CMS.
Structure:
```
shopify-scraper/
├── api/
│   ├── list            # Scrape product list from any store
│   └── details         # Scrape product details
├── helpers/
│   ├── transformers    # Price parsing, variant handling
│   └── schema          # Unified product schema
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| list | Scrape product list | site_url, optional filters |
| details | Scrape product details | site_url, product_url |
Output: Consistent schema across all sites. A product from store A has the same structure as one from store B.
If sites diverge significantly in structure, split them into separate projects. This pattern works best when the template is consistent.
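One way to enforce that consistency is a shared model in helpers/schema. This sketch uses pydantic, which is an assumption here; any validation layer works:

```python
# helpers/schema -- hedged sketch of the unified product shape.
from pydantic import BaseModel


class Product(BaseModel):
    site_url: str
    product_url: str
    title: str
    price: float | None = None   # normalized by helpers/transformers
    currency: str = "USD"
    variants: list[str] = []     # pydantic copies mutable defaults safely


# Both APIs return Product instances, so a record from store A is
# indistinguishable in shape from one from store B.
```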
Nested category scraper
Scrape sites with nested navigation: categories, subcategories, items.
Best for: E-commerce catalogs, classified ads, business directories.
Structure:
```
ecommerce-scraper/
├── api/
│   ├── get_categories    # Get top-level categories
│   ├── list              # Get items in a category
│   └── details           # Get product details
├── helpers/
│   ├── transformers      # Price, variant parsing
│   └── schema            # Product schema
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| get_categories | Get all categories/subcategories | Optional parent_category |
| list | Get items in a category | category_url, pagination |
| details | Get full product data | product_url |
Start with get_categories to discover the category tree, then call list for each category, then details for each item (a sketch of this chain follows).
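From the consumer's side the fan-out looks like this; run_api is a hypothetical helper that invokes one deployed API and returns its JSON result:

```python
# Consumer-side sketch of the get_categories -> list -> details chain.
# run_api is hypothetical, not an Intuned client function.
async def scrape_catalog(run_api):
    products = []
    for category in await run_api("get_categories", {}):
        for item in await run_api("list", {"category_url": category["url"]}):
            products.append(await run_api("details", {"product_url": item["url"]}))
    return products
```

In practice you would fan the details calls out as a Job rather than looping sequentially; that parallelism is what makes the three-API split pay off.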
AI agent automation
Use AI to automate workflows across unknown or varied websites. The target site isn’t known in advance—the AI adapts at runtime.
Best for: Multi-provider comparisons, cross-site data aggregation, generic web tasks where building per-site scrapers isn’t feasible.
Structure:
```
insurance-quotes/
├── api/
│   ├── find_providers        # Search for top providers
│   ├── get_quote             # Get quote from any provider site
│   └── download_quote_pdf    # Download quote document
├── helpers/
│   ├── agent                 # AI agent setup (Browser-Use, Stagehand)
│   └── schema                # Unified quote schema
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| find_providers | Search for providers | query, count |
| get_quote | Navigate to site, find form, submit | site_url, car_details |
| download_quote_pdf | Download quote document from provider | site_url, quote_id |
How it works: AI agents (like Browser-Use or Stagehand) handle navigation, form discovery, and data extraction. The same API works on sites it’s never seen before.
Use this pattern when target sites are unknown or too numerous to build individual scrapers. For known, stable sites, deterministic scrapers are more reliable.
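To make the idea concrete, here is a hedged sketch of get_quote built on Browser-Use. Its Agent API changes between versions, so check the library's current docs; the task prompt, model choice, and result handling are all assumptions:

```python
# Hedged sketch: the same get_quote works on provider sites it has never seen.
from browser_use import Agent
from langchain_openai import ChatOpenAI


async def get_quote(site_url: str, car_details: dict):
    agent = Agent(
        task=(
            f"Go to {site_url}, locate the car insurance quote form, "
            f"fill it in with {car_details}, submit it, and report the price."
        ),
        llm=ChatOpenAI(model="gpt-4o"),
    )
    history = await agent.run()
    # Extracting the structured answer from the run history is
    # version-dependent; normalize it into your unified quote schema here.
    return history
```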
Utility project
Simple, focused APIs that serve as building blocks for larger systems.
Best for: Markdown extraction, PDF collection, screenshot services, page archiving.
Structure:
```
web-utilities/
├── api/
│   ├── get_markdown    # Convert page to markdown
│   ├── get_pdfs        # Extract PDF links/attachments
│   ├── screenshot      # Capture page screenshot
│   └── archive_page    # Save full page content
├── Intuned.json
└── pyproject.toml
```
| API | Purpose | Parameters |
|---|---|---|
| get_markdown | Convert page to markdown | url, include_images |
| get_pdfs | Extract all PDF links/attachments | url |
| screenshot | Capture page screenshot | url, viewport, full_page |
| archive_page | Save full page content | url, format |
Keep utilities minimal and predictable. They compose well into larger workflows.
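A utility can be only a few lines. Here is a hedged sketch of get_markdown; the automation(page, params) entrypoint is illustrative, and markdownify is one common HTML-to-Markdown choice, not an Intuned requirement:

```python
# api/get_markdown -- hedged sketch of a minimal, predictable utility.
from markdownify import markdownify as md
from playwright.async_api import Page


async def automation(page: Page, params: dict):
    await page.goto(params["url"])
    html = await page.content()
    strip = [] if params.get("include_images") else ["img"]
    return {"url": params["url"], "markdown": md(html, strip=strip)}
```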
Multi-purpose project
Consolidate many APIs into one project. Use nested folders to organize. Teams typically manage these projects with the Intuned CLI for local development and deploy via CI/CD pipelines.
Best for: Automations that share significant logic and helpers (without resorting to helper packages), and organizations that want to manage many automations with different purposes in one project.
Structure:
```
company-automations/
├── api/
│   ├── my-scraper/
│   │   ├── list
│   │   └── details
│   ├── insurance-portal/
│   │   ├── get_claims
│   │   ├── submit_claim
│   │   └── download_eob
│   ├── web-crawler/
│   │   ├── map
│   │   ├── scrape
│   │   └── crawl
│   └── markdown-extractor/
│       └── get_markdown
├── helpers/
│   ├── schema          # Shared schemas
│   ├── parsers         # Common parsers
│   └── transformers    # Data transformers
├── Intuned.json
└── pyproject.toml
```
API naming: Folders become part of the API name. The first API above, for example, is deployed as my-scraper/list.
Workflow:
- Develop locally using the Intuned CLI.
- Source control with git.
- Deploy with CI/CD (GitHub Actions) using intuned deploy.
Use this pattern with care. Consolidating too much into one project means all APIs deploy together, share configuration, and can’t scale independently. Consider whether separate projects would be cleaner—the same tradeoffs apply as putting all code in one monorepo vs. splitting into focused repositories.
Design guidelines
Whatever pattern you choose, keep these principles in mind:
One project, one purpose
Everything in a project deploys together and shares configuration (AuthSessions, secrets, settings).
Split when automations target different sites, have different auth requirements, or change independently.
Combine when automations work on the same platform, share authentication, or share significant code.
Small, focused APIs
Each API should perform one logical unit of browser work.
| Benefit | Explanation |
|---|---|
| Retryability | If step 3 fails, retry step 3—not all 10 |
| Parallelization | Fan out thousands of runs with Jobs |
| Observability | Monitor and debug at the run level |
A good API does one thing, succeeds or fails independently, and is retriable with the same parameters.
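The contrast is easiest to see side by side; this is an illustrative sketch, not a prescription:

```python
# Retriable: one item per run, and the same parameters redo the same work.
async def details(page, params):
    await page.goto(params["item_url"])
    ...  # extract and return one item


# Hard to retry: if item 7,431 fails, the whole run restarts from scratch.
async def scrape_everything(page, params):
    for url in params["item_urls"]:  # thousands of items in one run
        await page.goto(url)
        ...
```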