
Using the Log Transfer API with Amazon Web Services

Log Transfer lets you transfer your Analytics logs to your favorite cloud provider. This guide walks you through the process of creating a Log Transfer job for Amazon S3.

Algolia doesn’t cover the extra costs of storing your logs on Amazon S3. For an estimate of storage costs, check out the S3 pricing page.

Log Transfer is only available for Enterprise users.

Giving Log Transfer access to your AWS bucket

This guide assumes you’ve already created an S3 bucket for your analytics logs. If you haven’t, create one first.

Log Transfer needs write access to your S3 bucket. You can set this up with the AWS Policy Generator.

  • Select the “S3 Bucket Policy” policy.
  • In the Principal field, add the following service account: arn:aws:iam::809847575144:user/algolia-logs. This is the account we use to transfer logs to our customers.
  • In the Actions menu, select s3:PutObject. This lets our account write files to your bucket, and it’s the only permission we need.
  • In the Amazon Resource Name (ARN) field, paste your bucket’s ARN in the following form: arn:aws:s3:::<bucket_name>/<key_name> (for example, arn:aws:s3:::mybucket/logs/* if you want your logs transferred to the logs directory of the mybucket bucket).

Generate the policy, then copy and paste it into the Bucket Policy section of the Permissions tab in your S3 management console.
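
For reference, the generated policy should look something like the following, assuming the arn:aws:s3:::mybucket/logs/* resource from the example above (the Sid is an arbitrary identifier):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAlgoliaLogTransfer",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::809847575144:user/algolia-logs"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::mybucket/logs/*"
    }
  ]
}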

Create a job

Once your bucket is ready, make a POST request to /1/logtransfer/searchapi. Authenticate the request with your Algolia application ID and an API key that has the logs ACL, and include a JSON payload.

The payload should contain:

  • name: the name of your job,
  • provider: your target cloud provider (in this case S3),
  • destination: the bucket and sub-directory you want to push your logs to.
curl -X POST \
http://logs.us.algolia.com/1/logtransfer/searchapi \
-H 'Content-Type: application/json' \
-H "X-Algolia-API-Key: ${API_KEY}" \
-H "X-Algolia-Application-Id: ${APP_ID}" \
-d '{
"name": "my job",
"provider": "S3",
"destination": "mybucket/logs/"
}'

The request above uploads your logs to the “logs” sub-directory of the “mybucket” bucket. The response contains the ID of your job. However, you must validate the job before it starts running; until then, its status is pending.

{
  "id": "e2020b7f-4a0a-4722-a0e7-45ac973b14fd",
  "name": "my job",
  "status": "pending",
  "provider": "S3",
  "destination": "mybucket/logs/",
  "createdAt": "2019-12-30T09:41:26.586146813Z"
}

You can list your jobs by making a GET request to /1/logtransfer/searchapi, or get information on a specific job with a GET request to /1/logtransfer/searchapi/JOB_ID.
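
For example, to retrieve the job created above (using the same authentication headers and the returned job ID):

curl -X GET \
http://logs.us.algolia.com/1/logtransfer/searchapi/e2020b7f-4a0a-4722-a0e7-45ac973b14fd \
-H "X-Algolia-API-Key: ${API_KEY}" \
-H "X-Algolia-Application-Id: ${APP_ID}"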

Validate your job

Initially, a job is marked as “pending”. This means that it’s waiting for user validation. The logs aren’t pushed at this stage.

First, make a GET request to the /1/logtransfer/searchapi/JOB_ID/validate route.
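
Using the same authentication headers as before, and replacing JOB_ID with the ID returned when you created the job:

curl -X GET \
http://logs.us.algolia.com/1/logtransfer/searchapi/JOB_ID/validate \
-H "X-Algolia-API-Key: ${API_KEY}" \
-H "X-Algolia-Application-Id: ${APP_ID}"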

When you validate a job, we try to write a token to the destination you specified.

If we fail to write to the destination, we return an error message.

{
  "status": 422,
  "message": "The specified S3 bucket does not exist"
}

If the request is successful, we return a status 200 OK with the name of the token file:

{
  "file": "the-file-containing-the-token"
}
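
The token file is written to the destination you configured for the job. One way to read it, assuming the mybucket/logs/ destination from the examples above and an AWS CLI configured with access to the bucket, is to stream the file to your terminal:

aws s3 cp s3://mybucket/logs/the-file-containing-the-token -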

Copy the token from the file, and make a POST request to /1/logtransfer/searchapi/JOB_ID/validate with the token as the payload:

{
  "token": "the-token-in-the-file"
}
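
Putting it together, the validation request looks like this, with the token value copied from the file:

curl -X POST \
http://logs.us.algolia.com/1/logtransfer/searchapi/JOB_ID/validate \
-H 'Content-Type: application/json' \
-H "X-Algolia-API-Key: ${API_KEY}" \
-H "X-Algolia-Application-Id: ${APP_ID}" \
-d '{
"token": "the-token-in-the-file"
}'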

If the token you sent matches the one we wrote, the validation is successful and we return the status 204 No Content. Your job is assigned a “running” status. You can verify this with a GET request to /1/logtransfer/searchapi/JOB_ID.

{
  "id": "e2020b7f-4a0a-4722-a0e7-45ac973b14fd",
  "name": "my job",
  "status": "running",
  "provider": "S3",
  "destination": "mybucket/logs/",
  "createdAt": "2019-12-30T09:41:26.586146813Z"
}

Your logs should now be pushed to your AWS bucket.
