Overview
Intuned Projects are code-based. You write browser automations in Python or TypeScript, deploy them to Intuned’s infrastructure, and run them on demand or on a schedule. Since you’re writing code, you decide the approach: deterministic scripts, AI-driven automation, crawlers, or a combination. This page covers the common patterns and when each one fits.

Deterministic automations
Deterministic automations use Playwright (or other libraries) to interact with the browser directly. You write selectors, handle navigation, and extract data with explicit logic; a short example appears at the end of this section.

Trade-offs:
- Faster execution and lower cost per run (no AI inference)
- Predictable—same input, same output
- Requires upfront work to write and maintain selectors
- Breaks when site structure changes
Good fits:
- Extracting structured data from a known site repeatedly (product prices, job listings, inventory)
- Monitoring sources where speed matters
- Form submissions and data entry where accuracy is critical
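As a concrete illustration, here is a minimal sketch of a deterministic extraction script using Playwright in TypeScript. The URL and selectors are hypothetical placeholders for a known, stable site.

```typescript
import { chromium } from "playwright";

// Minimal deterministic extraction: explicit selectors, no AI inference.
// The URL and selectors below are placeholders for a known, stable site.
async function scrapeListings(): Promise<{ title: string; price: string }[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  try {
    await page.goto("https://example.com/listings", { waitUntil: "domcontentloaded" });

    // Extract every listing card with explicit selectors.
    const items = await page.$$eval(".listing-card", (cards) =>
      cards.map((card) => ({
        title: card.querySelector(".listing-title")?.textContent?.trim() ?? "",
        price: card.querySelector(".listing-price")?.textContent?.trim() ?? "",
      }))
    );
    return items;
  } finally {
    await browser.close();
  }
}

scrapeListings().then((items) => console.log(items));
```

The same input yields the same output on every run, but the script breaks if the site renames or restructures those selectors.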
AI-driven automations
AI-driven automations use LLMs to interpret pages and decide actions. You describe what you want instead of writing selectors; see the sketch at the end of this section. Several libraries/APIs support this approach:
- Stagehand — AI-powered browser automation with natural language commands
- Browser-Use — Agent framework for browser tasks
- Computer Use APIs — Anthropic, OpenAI, Gemini, or any other computer use API
- @Intuned/Browser — Intuned’s Browser SDK has many data extraction utilities
Trade-offs:
- Works across different sites without site-specific code
- Adapts to layout changes and variations
- Slower and more expensive per run
- Less predictable—output can vary
Good fits:
- One-off scraping where selectors aren’t worth maintaining
- Gathering data across many different sites (company research, market data)
- Automating workflows on sites that change frequently
Examples:
- AI-powered company research scraper
- Adaptive form automation with AI
- Browser-Use example
- Stagehand example
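As an illustration of the pattern, the sketch below uses Stagehand-style natural-language act/extract calls in TypeScript. The URL, instructions, and schema are hypothetical, and the exact Stagehand API may differ between versions, so treat this as a rough outline rather than a drop-in script.

```typescript
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

// AI-driven automation sketch: describe the goal in natural language
// instead of writing selectors. URL, prompts, and schema are hypothetical.
async function researchCompany(url: string) {
  const stagehand = new Stagehand({ env: "LOCAL" });
  await stagehand.init();
  const page = stagehand.page;

  await page.goto(url);

  // Act on the page with a natural-language instruction.
  await page.act("dismiss any cookie banner");

  // Extract structured data described by a schema, not selectors.
  const result = await page.extract({
    instruction: "extract the company name, headquarters location, and employee count",
    schema: z.object({
      name: z.string(),
      headquarters: z.string(),
      employeeCount: z.string(),
    }),
  });

  await stagehand.close();
  return result;
}

researchCompany("https://example.com/about").then(console.log);
```

Because the model interprets the page on each run, the same script can handle layout variations across sites, at the cost of extra latency and inference spend.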
Crawlers
A crawler discovers and visits pages beyond a single URL. It follows links, handles pagination, or iterates through sitemaps to process content across a site; a sketch of the pattern appears at the end of this section. Crawl4AI is an excellent library for building crawlers.

Trade-offs:
- Discovers pages automatically—you don’t need every URL upfront
- Can process hundreds or thousands of pages per Job
- More complex to debug when something fails mid-crawl
Good fits:
- Indexing documentation sites
- Collecting assets (PDFs, images) across a site
- Finding public contact information across directories
- Tracking content changes across competitor sites
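To make the crawling pattern concrete, here is a minimal same-origin, link-following crawler in TypeScript built directly on Playwright (not the Crawl4AI API). The start URL and page limit are placeholder values.

```typescript
import { chromium } from "playwright";

// Minimal crawler sketch: start from one URL, follow same-origin links,
// and process each page. Start URL and limits are placeholders.
async function crawl(startUrl: string, maxPages = 50) {
  const origin = new URL(startUrl).origin;
  const queue: string[] = [startUrl];
  const visited = new Set<string>();

  const browser = await chromium.launch();
  const page = await browser.newPage();

  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    await page.goto(url, { waitUntil: "domcontentloaded" });
    console.log("processed:", url, "-", await page.title());

    // Discover same-origin links and enqueue any we haven't seen yet.
    const links = await page.$$eval("a[href]", (anchors) =>
      anchors.map((a) => (a as HTMLAnchorElement).href)
    );
    for (const link of links) {
      const normalized = link.split("#")[0];
      if (normalized.startsWith(origin) && !visited.has(normalized)) {
        queue.push(normalized);
      }
    }
  }

  await browser.close();
}

crawl("https://example.com/docs").catch(console.error);
```

A production crawler would add retry logic, checkpointing of the visited set, and per-page error handling so a single failed page does not stop the run mid-crawl.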