
Overview

By the end of this guide, you’ll have an Intuned project with a scraping Job that sends data to Supabase for processing and storage. You’ll:
  1. Set up Supabase Storage or an Edge Function endpoint for Intuned.
  2. Configure a Job with a Supabase-compatible sink.
  3. Trigger a Job and verify data lands in your database.

Prerequisites

Before you begin, make sure you have a basic understanding of Intuned Projects and Jobs. If you’re new to Intuned, start with the getting started guide.

When to use Supabase integration

Scrapers built on Intuned typically run via Jobs on a schedule. When a JobRun completes, you want that data sent somewhere for processing or persistence. The Supabase integration delivers scraped data to your Supabase database.
While this guide focuses on scraping, Supabase integration works for any Intuned Job—delivering Run results from any automation.

Choose your approach

This guide covers two approaches for connecting Intuned with Supabase:
| Consideration | Supabase Storage | Webhook |
| --- | --- | --- |
| Delivery guarantee | Intuned guarantees file delivery | Your endpoint must handle failures |
| Reprocessing | Re-trigger database webhooks anytime | Not possible without re-running Jobs |
| Complexity | Requires storage + webhook setup | Simpler setup |
| Debugging | Inspect files in storage + logs | Check Edge Function logs |
Recommendation: Use Supabase Storage for production workloads. Intuned guarantees results are written to your bucket, and you can always reprocess by re-triggering the database webhook.

Guide

Create a products table

Both approaches require a table to store the scraped data. In your Supabase SQL editor, run:
create table products (
  id serial primary key,
  name text unique not null,
  price numeric not null,
  url text,
  sku text,
  intuned_run_id text not null,
  created_at timestamptz default now()
);
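Optionally, you can enable Row Level Security on the table. The Edge Function later in this guide is unaffected, because it connects with the service role key, which bypasses RLS. A minimal sketch:

-- Optional: block direct writes from anon/authenticated clients.
-- The processing Edge Function uses the service role key, which bypasses RLS.
alter table products enable row level security;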

Set up the integration

Best for: Reliability and reprocessing capability

Intuned writes results to Supabase Storage, then a database trigger invokes an Edge Function to process each file. This approach guarantees data delivery and lets you reprocess files if needed.
Step 1: Create a storage bucket

  1. Go to your Supabase dashboard
  2. Select Storage from the sidebar
  3. Select New bucket
  4. Name it ecommerce-ingest
  5. Keep it as a private bucket
See Supabase Storage quickstart for details.
Step 2: Enable S3 compatibility

  1. In Storage, select S3 under Configuration
  2. Turn on S3 protocol connection
Supabase S3 Configuration showing endpoint and region
Note your Endpoint and Region values.
Step 3: Create access keys

  1. In the S3 Configuration page, select New access key
  2. Copy both the Access key ID and Secret access key
Supabase S3 access keys modal
Save these credentials securely. The secret key won’t be shown again.
See Supabase S3 authentication for more details.
Step 4: Create a Job with Supabase Storage sink

  1. Go to app.intuned.io
  2. Open your ecommerce-scraper-quickstart project
  3. Select the Jobs tab
  4. Select Create Job
  5. Fill in the Job ID and payloads
  6. Enable Sink
  7. Select S3 Compatible
  8. Enter your Supabase Storage credentials:
    • Bucket: ecommerce-ingest
    • Region: Your region (e.g., us-east-2)
    • Access Key ID: Your S3 access key
    • Secret Access Key: Your S3 secret key
    • Prefix: ecommerce-quickstart/supabase-bucket/ (optional)
    • Custom endpoint: Your Supabase Storage endpoint
  9. Select Force path-style URLs
  10. Select Create Job
Intuned Job configuration with S3 compatible sink for Supabase Storage
For advanced S3 sink options, see the AWS S3 integration.
Step 5: Trigger the Job

  1. In the Jobs tab, find your Job
  2. Select Actions > Trigger
Triggering the Intuned Job
Results appear in your Supabase Storage bucket as JSON files.
Supabase Storage bucket with Intuned result files
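If you’d rather verify from code than the dashboard, you can list the bucket contents with supabase-js. A minimal sketch, assuming it runs under Deno with SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY set in your environment:

import { createClient } from "jsr:@supabase/supabase-js@2";

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);

// List the result files Intuned wrote under the configured prefix
const { data, error } = await supabase.storage
  .from("ecommerce-ingest")
  .list("ecommerce-quickstart/supabase-bucket");

if (error) throw error;
for (const file of data) console.log(file.name, file.created_at);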
Step 6: Create the processing Edge Function

Create an Edge Function that processes files when they’re added to storage.
  1. Go to Edge Functions in your Supabase dashboard
  2. Select Create a new function
  3. Name it intuned-ingest-bucket
Creating the bucket processing Edge Function
Replace the default code with:
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { createClient } from "jsr:@supabase/supabase-js@2";

const BUCKET_NAME = "ecommerce-ingest";
const PATH_PREFIX = "ecommerce-quickstart/supabase-bucket/";

interface WebhookPayload {
  type: "INSERT" | "UPDATE" | "DELETE";
  table: string;
  schema: string;
  record: {
    id: string;
    bucket_id: string;
    name: string;
    metadata: { size: number; mimetype: string } | null;
  };
  old_record: null;
}

Deno.serve(async (req: Request) => {
  const payload: WebhookPayload = await req.json();
  const { record } = payload;

  // Skip RLS test inserts (no metadata)
  if (!record.metadata) {
    return new Response(
      JSON.stringify({ processed: false, reason: "no metadata" }),
      { status: 200 }
    );
  }

  // Only process JSON files in the correct bucket and path
  if (
    record.bucket_id !== BUCKET_NAME ||
    !record.name.startsWith(PATH_PREFIX) ||
    !record.name.endsWith(".json")
  ) {
    return new Response(
      JSON.stringify({ processed: false, reason: "path mismatch" }),
      { status: 200 }
    );
  }

  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
  );

  // Download the JSON file
  const { data: fileData, error: downloadError } = await supabase.storage
    .from(record.bucket_id)
    .download(record.name);

  if (downloadError) {
    console.error("Download error:", downloadError);
    return new Response(
      JSON.stringify({ error: downloadError.message }),
      { status: 500 }
    );
  }

  const jsonContent = JSON.parse(await fileData.text());
  const result = jsonContent.apiInfo.result.result;

  // Transform the product data
  const product = {
    name: result.product.name,
    price: parseFloat(result.product.price.replace(/[^0-9.]/g, "")) || 0,
    url: result.product.detailsUrl,
    sku: result.product.sku,
    intuned_run_id: jsonContent.apiInfo.runId,
  };

  const { error } = await supabase
    .from("products")
    .upsert([product], { onConflict: "name" });

  if (error) {
    console.error("Insert error:", error);
    return new Response(
      JSON.stringify({ error: error.message }),
      { status: 500 }
    );
  }

  return new Response(
    JSON.stringify({ processed: true, inserted: 1 }),
    { status: 200 }
  );
});
Select Deploy function to save. See Supabase Edge Functions for more details.
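Before wiring up the webhook, you can smoke-test the function by posting a simulated storage payload to it. A sketch, assuming a result file already exists in your bucket at the path in record.name; replace <project-ref> with your own project ref:

// Simulates the payload Supabase sends when a row lands in storage.objects.
const res = await fetch(
  "https://<project-ref>.supabase.co/functions/v1/intuned-ingest-bucket",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${Deno.env.get("SUPABASE_ANON_KEY")}`,
    },
    body: JSON.stringify({
      type: "INSERT",
      table: "objects",
      schema: "storage",
      record: {
        id: "00000000-0000-0000-0000-000000000000",
        bucket_id: "ecommerce-ingest",
        // Hypothetical file name; point this at a real file in your bucket
        name: "ecommerce-quickstart/supabase-bucket/example.json",
        metadata: { size: 123, mimetype: "application/json" },
      },
      old_record: null,
    }),
  }
);
console.log(res.status, await res.json());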
Step 7: Create the database webhook

Set up a database webhook to trigger your Edge Function when files are added to storage.
  1. Go to Database > Webhooks in your Supabase dashboard
  2. If prompted, select Enable Webhooks
  3. Select Create a new hook
  4. Configure the webhook:
    • Name: intuned-bucket-trigger
    • Table: Select storage.objects from the dropdown
    • Events: Select Insert
    • Type: Select Supabase Edge Functions
    • Edge Function: Select intuned-ingest-bucket
  5. Select Create webhook
See Supabase database webhooks for more details.
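For reference, the dashboard creates a database trigger on storage.objects under the hood. A rough SQL equivalent is sketched below; the exact statement Supabase generates may differ, and you’d substitute your own function URL:

create trigger intuned_bucket_trigger
  after insert on storage.objects
  for each row
  execute function supabase_functions.http_request(
    'https://<project-ref>.supabase.co/functions/v1/intuned-ingest-bucket',
    'POST',
    '{"Content-Type":"application/json"}',
    '{}',
    '5000'
  );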
Step 8: Verify the results

After triggering the Job, check your Supabase products table. As files are written to storage, the database webhook fires and your Edge Function processes each one.
Supabase products table populated with scraped data
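A quick query in the SQL editor confirms rows are landing:

select name, price, sku, intuned_run_id, created_at
from products
order by created_at desc
limit 10;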

Data format

Intuned sends Run results as JSON. The payload structure includes metadata about the Run and the extracted data:
{
  "apiInfo": {
    "runId": "run_abc123",
    "apiName": "list",
    "result": {
      "status": "success",
      "result": {
        "product": {
          "name": "Example Product",
          "price": "$29.99",
          "detailsUrl": "https://example.com/product",
          "sku": "SKU-12345"
        }
      }
    }
  }
}
Your Edge Function code extracts the nested result object and transforms it for your database schema.
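If you want type safety when parsing, you can model the file with an interface derived from the sample above (a sketch; fields beyond those shown aren’t guaranteed):

// Shape of an Intuned result file, as inferred from the sample payload above.
interface IntunedResultFile {
  apiInfo: {
    runId: string;
    apiName: string;
    result: {
      status: string;
      result: {
        product: {
          name: string;
          price: string; // e.g. "$29.99" — parse to a number before inserting
          detailsUrl: string;
          sku: string;
        };
      };
    };
  };
}

// Usage in the Edge Function:
// const jsonContent: IntunedResultFile = JSON.parse(await fileData.text());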

Troubleshooting

Edge Function not receiving data

Cause: The Edge Function isn’t receiving webhook requests from Intuned. Common reasons include an incorrect endpoint URL, missing authentication headers, or the function not being deployed.
Solution: Verify the endpoint URL matches your Edge Function URL exactly. Check that the apikey and Authorization headers are set correctly with your Supabase anon key. Confirm the function is deployed in your Supabase dashboard. Ensure Verify JWT with legacy secret is turned off.

Files not appearing in storage

Cause: Intuned can’t write files to your Supabase Storage bucket. This typically happens with incorrect S3 credentials, missing path-style URL configuration, or a wrong bucket name.
Solution: Regenerate S3 access keys in Supabase and update your Job configuration. Enable Force path-style URLs in the Job sink settings. Verify the bucket name and endpoint match your Supabase project exactly.
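To confirm whether files ever reached the bucket, you can also query the storage catalog directly from the SQL editor:

select name, created_at
from storage.objects
where bucket_id = 'ecommerce-ingest'
order by created_at desc
limit 5;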

Database webhook not triggering

Cause: The database webhook isn’t firing when files are added to storage. The webhook may be configured for the wrong table or event type.
Solution: Verify the webhook targets the storage.objects table. Ensure the Insert event is selected. Check Edge Function logs for errors. Make sure the Edge Function is selected as the webhook target.

Data not inserting into products table

Cause: The Edge Function is receiving data but can’t insert it into your database. Common issues include a table schema mismatch, unique constraint violations, or missing database permissions.
Solution: Verify your products table matches the schema from the “Create a products table” step. Check if products with the same name already exist (the upsert should handle this, but verify the onConflict clause). Ensure the Edge Function uses the service role key for database access.

Job paused with sink error

Cause: For S3-compatible sinks, Intuned automatically pauses the Job when it fails to write data to storage.
Solution: Check the Job status in the Intuned dashboard (shows as “Paused”). Fix the underlying credential or configuration issue. Update the Job configuration if needed, then select Resume from the dashboard. The Job continues from where it paused.