AWS S3

The @sme-uploader/aws-s3 plugin can be used to upload files directly to an S3 bucket.
Uploads can be signed using either Companion or a custom signing function.

const AwsS3 = require('@sme-uploader/aws-s3')
const ms = require('ms')

uploader.use(AwsS3, {
  limit: 2,
  timeout: ms('1 minute'),
  companionUrl: 'https://uploader-companion.myapp.com/'
})

There are broadly two ways of uploading to S3 in a browser. A server can generate a presigned URL for a PUT upload, or a server can generate form data for a POST upload. Companion uses a POST upload. See POST Uploads for some caveats if you would like to use POST uploads without Companion. See Generating a presigned upload URL server-side for an example of a PUT upload.

There is also a separate plugin for S3 Multipart uploads. Multipart in this sense refers to Amazon’s proprietary chunked, resumable upload mechanism for large files. See the @sme-uploader/aws-s3-multipart documentation.

Installation

This plugin is published as the @sme-uploader/aws-s3 package.

Install from NPM:

npm install @sme-uploader/aws-s3

In the CDN package, it is available on the SmeUploader global object:

const AwsS3 = SmeUploader.AwsS3

Options

The @sme-uploader/aws-s3 plugin has the following configurable options:

id: 'AwsS3'

A unique identifier for this plugin. Defaults to 'AwsS3'.

companionUrl

When using Companion to sign S3 uploads, set this option to the root URL of the Companion instance.

uploader.use(AwsS3, {
  companionUrl: 'https://uploader-companion.my-app.com/'
})

companionHeaders: {}

Note: This only applies when using Companion to sign S3 uploads.

Custom headers that should be sent along to Companion on every request.
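
For example, to send a custom header with every request to Companion (the header name and value here are hypothetical):

uploader.use(AwsS3, {
  companionUrl: 'https://uploader-companion.my-app.com/',
  companionHeaders: {
    // A hypothetical header; use whatever your Companion setup expects.
    'x-sme-uploader-client': 'my-app/1.0.0'
  }
})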

metaFields: []

Pass an array of field names to specify the metadata fields that should be stored in S3 as Object Metadata. This takes values from each file’s file.meta property.

  • Set this to ['name'] to only send the name field.
  • Set this to an empty array [] (the default) to not send any fields.
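
For example, assuming each file's file.meta contains a name and a (hypothetical) caption field:

uploader.use(AwsS3, {
  companionUrl: 'https://uploader-companion.my-app.com/',
  // Store the `name` and `caption` values from `file.meta`
  // as S3 Object Metadata.
  metaFields: ['name', 'caption']
})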

getUploadParameters(file)

Note: When using Companion to sign S3 uploads, do not define this option.

A function that returns upload parameters for a file.
Parameters should be returned as an object, or a Promise for an object, with keys { method, url, fields, headers }.

The method field is the HTTP method to be used for the upload.
This should be either PUT or POST, depending on the type of upload used.

The url field is the URL to which the upload request will be sent.
When using a presigned PUT upload, this should be the URL to the S3 object with signing parameters included in the query string.
When using a POST upload with a policy document, this should be the root URL of the bucket.

The fields field is an object with form fields to send along with the upload request.
For presigned PUT uploads, this should be left empty.

The headers field is an object with request headers to send along with the upload request.
When using a presigned PUT upload, it’s a good idea to provide headers['content-type']. That will ensure that the request uses the same content-type that was used to generate the signature. Without it, the browser may decide on a different content-type instead, causing S3 to reject the upload.
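
To illustrate the expected shape, here is a minimal sketch for a presigned PUT upload. fetchSignedUrl is a hypothetical helper that requests a presigned URL for the file from your own server; see the full example at the end of this page.

uploader.use(AwsS3, {
  getUploadParameters (file) {
    // `fetchSignedUrl` is a hypothetical helper that asks your own
    // server for a presigned PUT URL for this file.
    return fetchSignedUrl(file).then((url) => ({
      method: 'PUT',
      url: url,
      // For presigned PUT uploads, leave `fields` empty.
      fields: {},
      // Use the same content-type that was used to generate the
      // signature, so S3 does not reject the upload.
      headers: { 'content-type': file.type }
    }))
  }
})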

timeout: 30 * 1000

When no upload progress events have been received for this many milliseconds, assume the connection has an issue and abort the upload. This is passed through to XHR Upload; see its documentation page for details.
Set to 0 to disable this check.

The default is 30 seconds.

limit: 0

Limit the number of uploads that can be in progress at the same time. This is passed through to XHR Upload; see its documentation page for details.
Set to 0 to disable limiting.

getResponseData(responseText, response)

This is an advanced option intended for use with storage services that are almost, but not fully, S3-compatible.

Customize response handling once an upload is completed. This passes the function through to @sme-uploader/xhr-upload, see its documentation for API details.

This option is useful when uploading to an S3-like service that doesn’t reply with an XML document, but with something else such as JSON.
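
For example, here is a sketch for a hypothetical S3-like service that replies with a JSON body, assuming it reports the file URL in a location property:

uploader.use(AwsS3, {
  // ... other options, such as getUploadParameters ...
  getResponseData (responseText, response) {
    // Parse the JSON body instead of the XML document S3 would send.
    const data = JSON.parse(responseText)
    return {
      // The publicly accessible URL of the uploaded object
      // (assuming the service reports it as `location`).
      location: data.location
    }
  }
})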

locale: {}

Localize text that is shown to the user.

The default English strings are:

strings: {
  // Shown in the StatusBar while the upload is being signed.
  preparingUpload: 'Preparing upload...'
}
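
For example, to override the default string:

uploader.use(AwsS3, {
  companionUrl: 'https://uploader-companion.my-app.com/',
  locale: {
    strings: {
      preparingUpload: 'Signing upload...'
    }
  }
})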

S3 Bucket configuration

S3 buckets do not allow public uploads by default.
To allow SME Uploader to upload directly to a bucket, you must at least configure its CORS permissions, and you may need to change some of the Public access settings, which provide an extra layer of protection even when the correct CORS permissions are in place.

CORS permissions can be found in the S3 Management Console.
Click the bucket that will receive the uploads, then go into the “Permissions” tab and select the “CORS configuration” button.
An XML document will be shown that contains the CORS configuration.

It is good practice to use two CORS rules: one for viewing the uploaded files, and one for uploading files.

Depending on which settings were enabled during bucket creation, AWS S3 may have defined a CORS rule that allows public reading already.
This rule looks like:

<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>GET</AllowedMethod>
  <MaxAgeSeconds>3000</MaxAgeSeconds>
</CORSRule>

If uploaded files should be publicly viewable, but a rule like this is not present, add it.

A different <CORSRule> is necessary to allow uploading.
This rule should come before the existing rule, because S3 only uses the first rule that matches the origin of the request.

At minimum, the domain from which the uploads will happen must be whitelisted, and the definitions from the previous rule must be added:

<AllowedOrigin>https://my-app.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>

When using Companion, which generates a POST policy document, the following permissions must be granted:

<AllowedMethod>POST</AllowedMethod>
<AllowedHeader>Authorization</AllowedHeader>
<AllowedHeader>x-amz-date</AllowedHeader>
<AllowedHeader>x-amz-content-sha256</AllowedHeader>
<AllowedHeader>content-type</AllowedHeader>

When using a presigned upload URL, the following permissions must be granted:

<AllowedMethod>PUT</AllowedMethod>

The final configuration should look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://my-app.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
    <AllowedHeader>x-amz-date</AllowedHeader>
    <AllowedHeader>x-amz-content-sha256</AllowedHeader>
    <AllowedHeader>content-type</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>

Even with these CORS rules in place, your browser might still receive HTTP 403 responses with AccessDenied in the response body when it tries to POST to your bucket. In this case, within the “Permissions” tab of the S3 Management Console, choose “Public access settings”.

It will list general Public access settings for this bucket, which can override the rules imposed by your CORS settings. Click on edit to manage these settings. Under Manage public access control lists (ACLs) for this bucket, make sure that Block new public ACLs and uploading public objects (Recommended) is unchecked, and Save these settings.

In-depth documentation about CORS rules is available on the AWS documentation site.

POST uploads

Companion uses POST uploads by default, but you can also use them with your own endpoints. There are a few things to be aware of when doing so:

  • The @sme-uploader/aws-s3 plugin attempts to read the <Location> XML tag from POST upload responses. S3 does not respond with an XML document by default. When generating the form data for POST uploads, you must set the success_action_status field to 201.
    // `s3` is an instance of the AWS JavaScript SDK's S3 client
    s3.createPresignedPost({
      ...,
      Fields: {
        ...,
        success_action_status: '201'
      }
    })
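
    For reference, here is a fuller server-side sketch using the AWS SDK for JavaScript (v2). The bucket name and object key are placeholders:

    const AWS = require('aws-sdk')
    const s3 = new AWS.S3()

    s3.createPresignedPost({
      Bucket: 'my-bucket', // placeholder bucket name
      Expires: 300, // the POST policy expires after five minutes
      Fields: {
        key: 'uploads/example.jpg', // placeholder object key
        // Make S3 respond with status 201 and an XML document
        // containing the <Location> tag the plugin looks for.
        success_action_status: '201'
      }
    }, (err, data) => {
      if (err) throw err
      // `data.url` and `data.fields` map onto the `url` and `fields`
      // values returned by getUploadParameters().
      console.log(data.url, data.fields)
    })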

S3 alternatives

Many other object storage providers offer an API that is compatible with S3, so you can use the @sme-uploader/aws-s3 plugin with them as well. To use them with Companion, you can set the COMPANION_AWS_ENDPOINT variable to the endpoint of your preferred service.

DigitalOcean Spaces

For example, with DigitalOcean Spaces, you could do something like this:

export COMPANION_AWS_ENDPOINT="https://{region}.digitaloceanspaces.com"
export COMPANION_AWS_BUCKET="my-space-name"

The {region} string will be replaced by the contents of the COMPANION_AWS_REGION environment variable.

For a working example that you can run and play around with, see the digitalocean-spaces folder in the SME Uploader repository.

Google Cloud Storage

For Google Cloud Storage, you need to take a few more steps. For the @sme-uploader/aws-s3 plugin to be able to upload to a GCS bucket, it needs the Interoperability setting enabled. You can enable the Interoperability setting and generate interoperable storage access keys by going to Google Cloud Storage » Settings » Interoperability. Then set the environment variables for Companion like this:

export COMPANION_AWS_ENDPOINT="https://storage.googleapis.com"
export COMPANION_AWS_BUCKET="YOUR-GCS-BUCKET-NAME"
export COMPANION_AWS_KEY="GOOGxxxxxxxxx" # The Access Key
export COMPANION_AWS_SECRET="YOUR-GCS-SECRET" # The Secret

You do not need to configure the region with GCS.

You also need to configure CORS differently. Unlike Amazon, Google does not offer a UI for CORS configurations. Instead, an HTTP API must be used. If you haven’t done this already, see Configuring CORS on a Bucket in the GCS documentation, or follow the steps below to do it using Google’s API playground.

GCS has multiple CORS formats, both XML and JSON. Unfortunately, their XML format is different from Amazon’s, so we can’t simply use the one from the S3 Bucket configuration section. Google appears to favour the JSON format, so we will use that.

JSON CORS configuration

The JSON format consists of an array of CORS configuration objects. An example using POST policy document uploads is shown here:

{
  "cors": [
    {
      "origin": ["https://my-app.com"],
      "method": ["GET", "POST"],
      "maxAgeSeconds": 3000
    },
    {
      "origin": ["*"],
      "method": ["GET"],
      "maxAgeSeconds": 3000
    }
  ]
}

Most AWS configurations should be fairly simple to port to this format. When using presigned PUT uploads, replace "POST" with "PUT" in the first entry.
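
The first entry then becomes:

{
  "origin": ["https://my-app.com"],
  "method": ["GET", "PUT"],
  "maxAgeSeconds": 3000
}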

If you have the gsutil command-line tool, you can apply this configuration using the gsutil cors command:

gsutil cors set THAT-FILE.json gs://BUCKET-NAME

Otherwise, you can manually apply it through the OAuth playground:

  1. Get a temporary API token from the Google OAuth 2.0 Playground
    1. Select the “Cloud Storage JSON API v1” » “devstorage.full_control” scope
    2. Press “Authorize APIs” and allow access
  2. Click “Step 3 - Configure request to API”
  3. Configure it as follows:
    • HTTP Method: PATCH
    • Request URI: https://www.googleapis.com/storage/v1/b/YOUR_BUCKET_NAME
    • Content-Type: application/json (should be the default)
    • Press “Enter request body” and input your CORS configuration
  4. Then, finally, press “Send the request”.

Examples

Generating a presigned upload URL server-side

The getUploadParameters function can return a Promise, so upload parameters can be prepared server-side.
That way, no private keys to the S3 bucket need to be shared on the client.
For example, there could be a PHP server endpoint that prepares a presigned URL for a file:

uploader.use(AwsS3, {
  getUploadParameters (file) {
    // Send a request to our PHP signing endpoint.
    return fetch('/s3-sign.php', {
      method: 'post',
      // Send and receive JSON.
      headers: {
        accept: 'application/json',
        'content-type': 'application/json'
      },
      body: JSON.stringify({
        filename: file.name,
        contentType: file.type
      })
    }).then((response) => {
      // Parse the JSON response.
      return response.json()
    }).then((data) => {
      // Return an object in the correct shape.
      return {
        method: data.method,
        url: data.url,
        fields: data.fields,
        // Provide content type header required by S3
        headers: {
          'Content-Type': file.type
        }
      }
    })
  }
})
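
The signing endpoint can be written in any server-side language. For illustration, a rough Node.js equivalent using the AWS SDK for JavaScript (v2) and an Express-style handler might look like this (the bucket name is a placeholder):

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

// Hypothetical handler for the signing endpoint called above.
function signUpload (req, res) {
  s3.getSignedUrl('putObject', {
    Bucket: 'my-bucket', // placeholder bucket name
    Key: req.body.filename,
    ContentType: req.body.contentType,
    Expires: 300 // the URL expires after five minutes
  }, (err, url) => {
    if (err) return res.status(500).json({ error: err.message })
    // Respond in the shape that getUploadParameters() expects.
    res.json({ method: 'PUT', url: url, fields: {} })
  })
}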

See the aws-presigned-url example in the uploader repository for a small example that implements both the server-side and the client-side.

Retrieving presign parameters of the uploaded file

Once the file is uploaded, it is possible to retrieve the parameters that were generated in getUploadParameters(file) via the file.meta field:

uploader.on('upload-success', (file, data) => {
  const s3Key = file.meta['key'] // the S3 object key of the uploaded file
})