Amazon S3 (Simple Storage Service) is a storage service that you can use to store and retrieve data from anywhere on the web. Similar to FTP/SFTP, it lets external tools and services store and retrieve large amounts of data.
Use Amazon S3 to upload data to ODP from external systems on a scheduled basis. Examples of recurring uploads include:
- Product feeds
- Customer updates
- List subscriptions
- Consent updates
CSV imports using Amazon S3 must adhere to the file format and name requirements outlined in Import data using CSV.
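As a hedged illustration only (the authoritative column and file-name rules live in Import data using CSV, not here), a minimal customer import file might look like the following, created from the shell:

```shell
# Hypothetical example file -- the real field and naming requirements
# are defined in the "Import data using CSV" documentation.
cat > zaius_customers.csv <<'EOF'
email,first_name,last_name
jane@example.com,Jane,Doe
john@example.com,John,Smith
EOF
# Show the header row, which ODP maps to fields on import.
head -n 1 zaius_customers.csv
```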
To use Amazon S3, you need the bucket (location) you want to access and the access key/secret for authentication. To retrieve these:
- Go to Account Settings > Integrations.
- Select the AWS integration.
- Click Generate Access Keys.
Your Key ID and Secret Access Key display below Access Keys in Account Settings > Integrations > AWS. Copy and paste them into the command-line interface or a third-party application to complete the Amazon S3 integration with ODP.
Your Amazon S3 bucket URLs also display here. All clients have an Amazon S3 bucket for imports and another for exports. The structure of the bucket URLs is shown below:
- Data imports –
- Data exports –
Data in the Amazon S3 buckets expires after 7 days.
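The bucket URLs themselves are not reproduced above. Based on the upload and download commands later in this article, they appear to follow a tracker-ID-scoped pattern; the following is an inference, not an authoritative reference (your actual URLs are shown in Account Settings):

```shell
# Assumed URL shape, inferred from the cp/sync commands in this
# article; the tracker ID below is a placeholder example.
TRACKER_ID="lz3CnPijk15xYhTw7DU4wx"
IMPORTS="s3://zaius-incoming/${TRACKER_ID}/"
EXPORTS="s3://zaius-outgoing/${TRACKER_ID}/"
printf '%s\n%s\n' "$IMPORTS" "$EXPORTS"
```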
After you place a file in your zaius-incoming bucket, ODP automatically imports it. If you regularly generate data for import and can schedule adding it to your S3 bucket, you can fully automate the import process.
Use the following command to copy a local file to Amazon S3:
aws s3 cp zaius_customers.csv s3://zaius-incoming/<your tracker ID>/ --sse
Use the following command to copy a directory of files to Amazon S3:
aws s3 sync /tmp/yourlocaldir/ s3://zaius-incoming-temp/<your tracker ID>/ --sse
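The upload commands above can be wrapped in a script and run on a schedule (for example, from cron). A minimal sketch under assumptions — the tracker ID, file path, and the way your pipeline produces the file are all placeholders, not part of ODP:

```shell
#!/bin/sh
# Hypothetical scheduled upload script, e.g. saved as upload_to_odp.sh
# and run nightly from cron: 0 2 * * * /path/to/upload_to_odp.sh
TRACKER_ID="<your tracker ID>"          # placeholder: your ODP tracker ID
SRC="/tmp/odp/zaius_customers.csv"      # placeholder: file your pipeline writes
DEST="s3://zaius-incoming/${TRACKER_ID}/"
# Print the upload command; drop the leading `echo` for a real run once
# the AWS CLI is configured with the keys from Account Settings.
echo aws s3 cp "$SRC" "$DEST" --sse
```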
1. Run a Start Export Job API request for your desired export format (CSV or Parquet), delimiter (comma, tab, or pipe), and objects.
2. Copy the `path` value from the API response body. For example: `s3://zaius-outgoing/lz3CnPijk15xYhTw7DU4wx/data-exports/3a44cik3-e981-53bf-6499-f9fc6851fae`
3. Run the following command, replacing `<PATH>` with the `path` value you copied in step 2. This retrieves all Amazon S3 files for the export you requested in step 1.
aws s3 cp <PATH> . --recursive --sse
Using the example from step 2:
aws s3 cp s3://zaius-outgoing/lz3CnPijk15xYhTw7DU4wx/data-exports/3a44cik3-e981-53bf-6499-f9fc6851fae . --recursive --sse
This command outputs the contents of your requested export ID to your current directory. To specify a different location, replace `.` with the directory path. For example, to output the export to your desktop, specify one of the following, depending on your operating system:
- OS X –
- Windows –
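The exact desktop paths are not reproduced above; as an assumed illustration only (default desktop locations, with `<PATH>` standing in for the export path from step 2):

```shell
# Assumption: default desktop locations on each OS.
MAC_DEST="$HOME/Desktop/"            # OS X / macOS
WIN_DEST='C:\Users\<you>\Desktop\'   # Windows (placeholder user folder)
# Print the macOS form of the command; drop `echo` to run it.
echo aws s3 cp '<PATH>' "$MAC_DEST" --recursive --sse
```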
ODP exports are a set of files in a directory identified by the export ID you provide. Be sure to transfer the entire directory to get the complete export.
You can use whichever third-party application you prefer. Common developer tools you can use to perform uploads to Amazon S3 include:
- Cyberduck (Windows and Mac) – marketer friendly
- AWS CLI (Windows, Mac, and Linux)
- AWS SDKs (Java, Python, Node.js, PHP, and more)
The following instructions are for Cyberduck, which is a free cloud storage browser that you can use with Amazon S3.
1. Download and launch a free tool, like Cyberduck.
2. In Cyberduck, expand Action and select New Bookmark.
3. Select Amazon S3 from the drop-down list at the top of the pop-up window.
4. (Optional) Enter a Nickname.
5. Enter the Access Key ID (the Key ID from ODP).
6. Enter the Secret Access Key (the Secret Access Key from ODP).
7. Expand More Options and enter the Path (the Data Exports bucket URL from ODP). Remove `s3:/` from the beginning of the URL, leaving only one forward slash. To locate your exact URL, go to Account Settings > Integrations > AWS. Under Bucket URL(s), copy the Data Exports value.
8. Close the pop-up window to save the new bookmark.
For information about using Cyberduck to import AWS files, see their documentation.