Overview
Intuned Projects are code-based. You write browser automations in Python or TypeScript, deploy them to Intuned’s infrastructure, and run them on demand or on a schedule. Since you’re writing code, you decide the approach: deterministic scripts, AI-driven automation, crawlers, or a combination. This page covers the common patterns and when each one fits.

Deterministic automations
Deterministic automations use Playwright (or other libraries) to interact with the browser directly. You write selectors, handle navigation, and extract data with explicit logic.

Trade-offs:
- Faster execution and lower cost per run (no AI inference)
- Predictable—same input, same output
- Requires upfront work to write and maintain selectors
- Breaks when site structure changes

Good for:
- Extracting structured data from a known site repeatedly (product prices, job listings, inventory)
- Monitoring sources where speed matters
- Form submissions and data entry where accuracy is critical

Examples:
- Product scraper — Python | TypeScript
- Form submission automation — Python | TypeScript
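For illustration, a minimal sketch of the deterministic pattern using Playwright’s sync API. The URL and CSS selectors are placeholders for whatever the target site actually uses:

```python
# Minimal deterministic scraper sketch (Playwright sync API).
# The URL and CSS selectors are placeholders for the real target site.
from playwright.sync_api import sync_playwright


def scrape_products(url: str) -> list[dict]:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")

        products = []
        for card in page.locator(".product-card").all():
            products.append({
                "name": card.locator(".product-name").inner_text(),
                "price": card.locator(".product-price").inner_text(),
            })

        browser.close()
        return products


if __name__ == "__main__":
    print(scrape_products("https://example.com/products"))
```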
AI-driven automations
AI-driven automations use LLMs to interpret pages and decide actions. You describe what you want instead of writing selectors. Several libraries/APIs support this approach:
- Stagehand — AI-powered browser automation with natural language commands
- Browser-Use — Agent framework for browser tasks
- Computer Use APIs — Anthropic, OpenAI, Gemini, or any other computer use API
- @Intuned/Browser — Intuned’s Browser SDK has many data extraction utilities

Trade-offs:
- Works across different sites without site-specific code
- Adapts to layout changes and variations
- Slower and more expensive per run
- Less predictable—output can vary

Good for:
- One-off scraping where selectors aren’t worth maintaining
- Gathering data across many different sites (company research, market data)
- Automating workflows on sites that change frequently

Examples:
- Computer Use APIs — Python | TypeScript
- Browser-Use agent — Python
- Stagehand automation — Python | TypeScript
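For illustration, a rough sketch of the AI-driven pattern with Browser-Use. The task prompt and model are placeholders, and the Agent interface follows Browser-Use’s quickstart, so check the release you install for exact import paths and signatures:

```python
# AI-driven automation sketch using Browser-Use (Python).
# The task prompt and model are placeholders; import paths and the Agent
# interface follow Browser-Use's quickstart and may vary between releases.
import asyncio

from browser_use import Agent
from langchain_openai import ChatOpenAI


async def main():
    agent = Agent(
        # Describe the goal in natural language instead of writing selectors.
        task="Open https://example.com/careers and list every open engineering role.",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    result = await agent.run()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```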
Crawlers
A crawler discovers and visits pages beyond a single URL. It follows links, handles pagination, or iterates through sitemaps to process content across a site. Crawl4AI is an excellent library for building crawlers.

Trade-offs:
- Discovers pages automatically—you don’t need every URL upfront
- Can process hundreds or thousands of pages per Job
- More complex to debug when something fails mid-crawl

Good for:
- Indexing documentation sites
- Collecting assets (PDFs, images) across a site
- Finding public contact information across directories
- Tracking content changes across competitor sites

Examples:
- Crawl4AI — Python
- E-commerce category crawler — Python | TypeScript
- Native crawler — Python | TypeScript
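For illustration, a minimal breadth-first crawler written directly against Playwright (no crawling library). The start URL is a placeholder; the crawler stays on one domain, skips pages that fail to load, and caps the page count:

```python
# Minimal breadth-first crawler sketch (Playwright sync API).
# Stays on one domain and caps the number of pages; the start URL is a placeholder.
from collections import deque
from urllib.parse import urljoin, urlparse

from playwright.sync_api import sync_playwright


def crawl(start_url: str, max_pages: int = 50) -> dict[str, str]:
    domain = urlparse(start_url).netloc
    queue, seen, titles = deque([start_url]), {start_url}, {}

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        while queue and len(titles) < max_pages:
            url = queue.popleft()
            try:
                page.goto(url, wait_until="domcontentloaded")
            except Exception:
                continue  # skip pages that fail to load; log these in a real Job

            titles[url] = page.title()

            # Queue same-domain links we have not visited yet.
            hrefs = page.locator("a[href]").evaluate_all("els => els.map(e => e.href)")
            for href in hrefs:
                link = urljoin(url, href)
                if urlparse(link).netloc == domain and link not in seen:
                    seen.add(link)
                    queue.append(link)

        browser.close()
    return titles
```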
Hybrid automations
You can combine approaches in a single Project. A few patterns that work well:

Crawler with conditional parsing
A crawler visits pages across multiple sites. For known structures—job boards on Lever or Greenhouse—it uses deterministic parsers. For unknown sites, it falls back to AI. This handles common cases quickly and affordably while covering edge cases.

View example (hybrid-crawler/crawl API) — Python | TypeScript
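A rough sketch of the dispatch logic, assuming a Playwright page object. The Lever and Greenhouse selectors and the extract_with_ai helper are illustrative placeholders, not the example project’s actual code:

```python
# Conditional parsing sketch: deterministic parsers for known job boards,
# an AI fallback for everything else. Selectors are illustrative placeholders.
from urllib.parse import urlparse

from playwright.sync_api import Page


def parse_lever(page: Page) -> dict:
    # Known structure: plain selectors are fast and cheap.
    return {
        "title": page.locator(".posting-headline h2").first.inner_text(),
        "location": page.locator(".location").first.inner_text(),
    }


def parse_greenhouse(page: Page) -> dict:
    return {
        "title": page.locator(".app-title").first.inner_text(),
        "location": page.locator(".location").first.inner_text(),
    }


def extract_with_ai(page: Page) -> dict:
    # Placeholder: hand the page to an LLM or agent library
    # (Stagehand, Browser-Use, a Computer Use API) and return its answer.
    raise NotImplementedError


def parse_job_page(page: Page, url: str) -> dict:
    host = urlparse(url).netloc
    if host.endswith("lever.co"):
        return parse_lever(page)
    if host.endswith("greenhouse.io"):
        return parse_greenhouse(page)
    return extract_with_ai(page)  # unknown site: fall back to AI
```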
Deterministic automation with AI fallback
A form automation runs deterministic code for the standard flow. If an element isn’t found or the structure has changed, it catches the error and hands off to AI.

View example (hybrid-rpa/fill-form API) — Python | TypeScript
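A rough sketch of the fallback pattern, again assuming a Playwright page object. The selectors and the fill_with_ai helper are illustrative placeholders, not the example project’s actual code:

```python
# Deterministic form fill with an AI fallback when expected elements are missing.
from playwright.sync_api import Page, TimeoutError as PlaywrightTimeoutError


def fill_with_ai(page: Page, data: dict) -> None:
    # Placeholder: delegate to Stagehand, Browser-Use, or a Computer Use API.
    raise NotImplementedError


def submit_form(page: Page, data: dict) -> None:
    try:
        # Happy path: explicit selectors, fast and predictable.
        page.fill("#name", data["name"], timeout=5_000)
        page.fill("#email", data["email"], timeout=5_000)
        page.click("button[type=submit]", timeout=5_000)
    except PlaywrightTimeoutError:
        # The layout changed or an element is missing: hand off to AI.
        fill_with_ai(page, data)
```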
Deterministic scraper with AI extraction
A scraper uses selectors for most fields. For fields that need interpretation—like categorizing unstructured text—it calls AI for just those extractions.

View example (hybrid-scraper/list & hybrid-scraper/details APIs) — Python | TypeScript
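A rough sketch of selective AI extraction. The selectors, prompt, and model name are placeholders; the LLM call uses the standard OpenAI chat completions client, but any provider works:

```python
# Deterministic scraping for most fields, an LLM call only for the one field
# that needs interpretation. Selectors, prompt, and model are placeholders.
from openai import OpenAI
from playwright.sync_api import Page

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def categorize(description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Return one category (hardware, software, services) "
                       f"for this product description:\n\n{description}",
        }],
    )
    return response.choices[0].message.content.strip()


def scrape_product(page: Page) -> dict:
    description = page.locator(".product-description").inner_text()
    return {
        # Deterministic fields: cheap, fast, reliable selectors.
        "name": page.locator(".product-name").inner_text(),
        "price": page.locator(".product-price").inner_text(),
        # AI only where interpretation is needed.
        "category": categorize(description),
    }
```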