
Overview

By the end of this guide, you’ll have an Intuned project that triggers an RPA automation (via standalone Runs) and sinks its results to S3. You’ll:
  1. Create an S3 bucket and configure AWS credentials for Intuned.
  2. Trigger a Standalone Run with an S3 sink.

Prerequisites

You’ll need:
  • An AWS account with S3 access.
  • An Intuned account.
This guide assumes familiarity with Intuned Projects and standalone Runs. If you’re new to Intuned, start with the getting started guide.

When to use S3 integration with Standalone Runs

Standalone Runs expose start and result APIs for executing a single API call on demand. Without a sink, you poll the result endpoint to check a Run's status and retrieve its output. With S3 integration, results are delivered automatically to your S3 bucket as a JSON file. From there, you can process results using AWS tools like Lambda - or connect to other services.
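The polling pattern can be sketched as follows. This is a minimal sketch, not Intuned's client library: the `fetch_result` callable and the terminal status values (`"completed"`, `"failed"`) are assumptions for illustration - consult the Runs API reference for the actual endpoint and response shape.

```python
import time

def poll_run_result(fetch_result, run_id, interval_s=5, max_attempts=60):
    """Poll a Run's result endpoint until it reaches a terminal status.

    `fetch_result` is any callable that takes a run ID and returns the
    parsed JSON response (e.g., a thin wrapper around an HTTP client
    calling the Run's result endpoint -- URL omitted here on purpose).
    """
    for _ in range(max_attempts):
        payload = fetch_result(run_id)
        # Assumed terminal statuses; check the API reference for real values
        if payload.get("status") in ("completed", "failed"):
            return payload
        time.sleep(interval_s)
    raise TimeoutError(f"Run {run_id} did not finish in time")
```

An S3 sink removes this loop entirely: instead of your code asking for the result, the result file arrives in your bucket when the Run finishes.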

Guide

1. Create an S3 bucket and access credentials

Create an S3 bucket and IAM credentials that Intuned can use to write data:

Create an S3 bucket

  1. Log in to the AWS Management Console
  2. Navigate to the S3 service
  3. Select Create bucket
  4. Enter a unique bucket name (e.g., my-intuned-data)
Choose a descriptive bucket name that makes it easy to identify its purpose (e.g., company-intuned-production).

Configure bucket settings

When creating your bucket:
  1. Object Ownership: Set to “Access Control Lists (ACLs) disabled”
  2. Block Public Access: Keep all public access blocked (recommended for security)
  3. Bucket Versioning: Optional - enable if you want to keep historical versions of files
  4. Encryption: Optional - enable default encryption for data at rest
  5. Select Create bucket to finish
Intuned only needs write access to your bucket, so keeping public access blocked is safe and recommended.

Create an IAM user for Intuned

Create a dedicated IAM user with limited permissions for Intuned:
  1. Navigate to IAM in the AWS Console
  2. Select Users in the left sidebar, then Create user
  3. Enter a username (e.g., intuned-s3-writer)
  4. Select Next, which takes you to the permissions page
On the permissions page:
  1. Select Attach existing policies directly
  2. Select Create policy (opens in new tab)
  3. Select the JSON tab and paste this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
  4. Replace YOUR-BUCKET-NAME with your actual bucket name
  5. Select Next, which takes you to the Review page
  6. Name the policy IntunedS3WritePolicy
  7. Select Create policy
Don't use root account credentials - always create a dedicated IAM user with only the permissions Intuned needs.
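If you manage AWS resources from code, the policy above can be generated rather than hand-edited, which avoids typos in the ARN. A small sketch (the helper name is ours, not part of any library):

```python
import json

def intuned_write_policy(bucket_name: str) -> str:
    """Generate the minimal write-only policy shown above for a bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                # Grants write access to objects in the bucket ("/*"),
                # not to the bucket itself
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(intuned_write_policy("my-intuned-data"))
```

Paste the printed JSON into the policy editor's JSON tab.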

Attach policy and generate access keys

Back in the user creation flow:
  1. Refresh the policies list
  2. Search for IntunedS3WritePolicy
  3. Select the checkbox next to the policy
  4. Select Next to go to the Review page
  5. Select Create user
Then open the newly created user page:
  1. Go to the Security credentials tab
  2. Select Create access key
  3. Choose Application running outside AWS and select Next
  4. Select Create access key
  5. Copy the Access key ID - you’ll need this for Intuned
  6. Copy the Secret access key - you’ll need this for Intuned (only shown once)
  7. Download the CSV or save these credentials securely
Store your credentials securely. The secret access key is only shown once and cannot be retrieved later. Never commit credentials to version control.

Note your configuration details

You now have everything needed to configure S3 in Intuned. Save these details:
  • Bucket name: Your S3 bucket name
  • Region: Your AWS region (e.g., us-west-2)
  • Access key ID: From the IAM user
  • Secret access key: From the IAM user
You’ll use these in the next section to trigger your Run.

2. Trigger a Run with an S3 sink

Now that your S3 bucket is ready, add an S3 sink to a Run so results are delivered to your bucket.

Prepare a Project

You can use an existing Project or create a new one. For this example, we’ll use the book-consultations-quickstart project that you can deploy using the Deploy your first RPA quickstart tutorial.

Trigger a Run with S3 sink

  1. Go to app.intuned.io
  2. Open your book-consultations-quickstart project
  3. Select the Runs tab
  4. Select Start Run
  5. Fill in the Run details:
    • API: book-consultations
    • Parameters:
{
  "name": "Jane Smith",
  "email": "[email protected]",
  "phone": "+1(555)123-4567",
  "date": "2025-12-10",
  "time": "14:30",
  "topic": "web-scraping"
}
  6. Enable sink configuration and add your S3 details:
    • Type: s3
    • Bucket: Your S3 bucket name (e.g., my-intuned-data)
    • Region: Your AWS region (e.g., us-west-2)
    • Access Key ID: Your IAM user access key
    • Secret Access Key: Your IAM user secret key
    • Prefix (optional): A path prefix to organize files (e.g., book-consultations-data/)
    • Skip On Fail (optional): Check to skip writing if the Run fails.
(Screenshot: Standalone Run sink configuration)
  7. Select Start Run
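For reference, the same sink settings expressed as JSON might look like the fragment below. The field names mirror the form fields above, but the exact payload shape for triggering Runs programmatically is an assumption here - see the S3 Sink API Reference for the authoritative schema.

```json
{
  "sink": {
    "type": "s3",
    "bucket": "my-intuned-data",
    "region": "us-west-2",
    "accessKeyId": "<access-key-id>",
    "secretAccessKey": "<secret-access-key>",
    "prefix": "book-consultations-data/",
    "skipOnFail": false
  }
}
```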

Inspect data in S3

After the Run completes, view your data in S3:
  1. Navigate to the S3 Console
  2. Open your bucket (e.g., my-intuned-data)
  3. Navigate to {prefix}/runs/{runId}.json and examine the file. It will look similar to:
{
  "workspaceId": "<workspace-id>",
  "apiInfo": {
    "name": "book-consultations",
    "runId": "<run-id>",
    "parameters": {
      "name": "Jane Smith",
      "email": "[email protected]",
      "phone": "+1(555)123-4567",
      "date": "2025-12-10",
      "time": "14:30",
      "topic": "web-scraping"
    },
    "result": { 
      "status": "completed",
      "result": {
        "success": true,
        "date": "2025-12-10",
        "message": "Consultation successfully booked for 2025-12-10 at 14:30"
      },
      "statusCode": 200 
    }
  },
  "project": {
    "id": "<project-id>",
    "name": "book-consultations-quickstart"
  }
}
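Once you know the file layout, pulling the API result out of it is a few lines of Python. A sketch, assuming the structure shown above (`extract_run_result` is our own helper name):

```python
import json

def extract_run_result(payload: dict) -> dict:
    """Pull the interesting fields out of a sink file like the one above."""
    api_info = payload["apiInfo"]
    return {
        "run_id": api_info["runId"],
        "status": api_info["result"]["status"],
        "result": api_info["result"]["result"],
    }

# To read the file straight from S3 (requires boto3 and read permission,
# which the write-only IAM user created earlier does NOT have):
#   obj = boto3.client("s3").get_object(
#       Bucket="my-intuned-data",
#       Key="book-consultations-data/runs/<run-id>.json")
#   payload = json.loads(obj["Body"].read())
```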

Configuration options

For full details on S3 sink configuration and available options, see the S3 Sink API Reference. Key configuration fields:
Field | Required | Description
bucket | Yes | S3 bucket name
region | Yes | AWS region (e.g., us-west-2)
accessKeyId | Yes | AWS access key ID
secretAccessKey | Yes | AWS secret access key
prefix | No | Path prefix for organizing files
skipOnFail | No | Skip writing failed Runs to S3 (default: false)
endpoint | No | Custom endpoint for S3-compatible services
forcePathStyle | No | Use path-style URLs for S3-compatible services

Processing data from S3

Once data lands in S3, you can process it in various ways depending on your needs. A common pattern is using an AWS Lambda function that triggers automatically when a new file arrives. Typical processing tasks include:
  • Normalizing the data structure
  • Removing empty fields
  • Validating against a schema
  • Triggering workflows such as sending emails, updating billing systems, or invoking other services
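A Lambda handler for this pattern might look like the sketch below. It assumes the bucket is configured to send `s3:ObjectCreated` event notifications to the function; the normalization step (dropping empty parameters) is just one example of the tasks listed above.

```python
import json
import urllib.parse

def handler(event, context, s3_client=None):
    """Process new Run result files as they land in S3.

    `s3_client` is injectable so the function can be exercised without
    AWS access; inside Lambda it defaults to a real boto3 client.
    """
    if s3_client is None:
        import boto3  # available in the Lambda runtime
        s3_client = boto3.client("s3")

    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3_client.get_object(Bucket=bucket, Key=key)
        payload = json.loads(obj["Body"].read())
        # Example normalization: drop empty fields from the Run parameters
        params = payload["apiInfo"].get("parameters") or {}
        payload["apiInfo"]["parameters"] = {k: v for k, v in params.items() if v}
        processed.append(payload)
    return processed
```

From here you could write the cleaned record to a database, validate it against a schema, or call downstream services.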

Best practices

  • Use least privilege IAM policies: Create a dedicated IAM user for Intuned with only s3:PutObject permission. Restrict access to specific bucket paths using resource ARNs. Never use root account credentials.
  • Organize data with prefixes: Use meaningful prefix structures like {environment}/{project-name}/{date}/ to make data easier to find, manage, and set lifecycle policies on.
  • Set up lifecycle policies: Reduce storage costs by transitioning older data to S3 Glacier and deleting data you no longer need. This can reduce costs significantly for infrequently accessed data.
  • Monitor usage and costs: Enable S3 Storage Lens for bucket-level insights, set up CloudWatch alarms for unexpected growth, and use Cost Explorer to track costs by bucket.
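As an example of the lifecycle-policy practice above, the following configuration transitions Run results under a prefix to Glacier after 90 days and deletes them after a year (the prefix and day counts are illustrative - tune them to your retention needs):

```json
{
  "Rules": [
    {
      "ID": "archive-old-run-results",
      "Status": "Enabled",
      "Filter": { "Prefix": "book-consultations-data/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

You can apply it with `aws s3api put-bucket-lifecycle-configuration --bucket my-intuned-data --lifecycle-configuration file://lifecycle.json`.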