{"_id":"5845bbe063c11b2500379662","project":"56ba46e2ce5d540d00e2d7a7","__v":1,"category":{"_id":"582601f155b1060f00ec4173","project":"56ba46e2ce5d540d00e2d7a7","__v":0,"version":"56ba46e2ce5d540d00e2d7aa","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-11-11T17:37:53.355Z","from_sync":false,"order":1,"slug":"guides","title":"Guides"},"parentDoc":null,"user":"5732062ad720220e008ea1d2","version":{"_id":"56ba46e2ce5d540d00e2d7aa","project":"56ba46e2ce5d540d00e2d7a7","__v":13,"createdAt":"2016-02-09T20:06:58.727Z","releaseDate":"2016-02-09T20:06:58.727Z","categories":["56ba46e3ce5d540d00e2d7ab","5771a6b145c7080e0072927f","5771a72eb0ea6b0e006a5221","5772e5b20a6d610e00dea073","577c3006b20f211700593629","57ae587bca3e310e00538155","57ae593a7c93fa0e001e6b50","57b1f8263ff6c519005cf074","582601f155b1060f00ec4173","582a62857a96051b0070b011","58ebfae58d5a860f00851fb9","590a75a1ec0d5e190095ab38","59e5253fd460b50010237bed"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"updates":["5a27314f16f74700122b35aa"],"next":{"pages":[],"description":""},"createdAt":"2016-12-05T19:11:28.566Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":9,"body":"This guide covers how to use the [archive feature](https://app.logdna.com/manage/archiving) located under the Settings pane of the [LogDNA web app](https://app.logdna).\n\n## Overview\n\nArchiving is an automatic function that exports your logs from LogDNA to an external source. Archived logs are in JSON format and preserve metadata associated with each line. Once archiving is configured for your account, your logs will be exported daily in a compressed format (.json.gz). The first time you configure archiving, your archived logs will typically appear within 24-48 hours.\n\n## AWS S3\n\nTo export your logs to an S3 bucket, ensure that you have an AWS account with access to S3.\n\n### Create a bucket\n1. In [AWS S3](https://console.aws.amazon.com/s3/), click the Create bucket button\n2. Give your bucket a unique name and select a region for it to reside in.\n3. Click the Next button until you create your bucket and exit the bucket creation model.\n\n### Configure your bucket\n1. Click on your bucket and select the Permissions section\n2. Click the Add users button and enter `logdna:::at:::logdna.com` as the email. Alternatively, you can also use this identifier: `659c621e261e7ffa5d8f925bbe9fe1698f3637878e96bc1a9e7216838799b71a`\n3. Check both the Read and Write permission boxes for Object access and click \n\n### Configure LogDNA\n1. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving)\n2. Under the S3 Archiving section, input the name of your newly created S3 bucket, and click Save.\n\n## Azure Blob Storage\n\nTo export your logs to Azure Blob Storage, ensure that you have an Azure account with access to storage accounts.\n\n1. [Create a Storage Account](https://docs.microsoft.com/en-us/azure/storage/storage-create-storage-account) on Microsoft Azure\n2. Once created, click your storage account and then click Access Keys under the heading Settings\n3. Create a key if you do not already have one\n4. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving)\n5. 
## Azure Blob Storage

To export your logs to Azure Blob Storage, ensure that you have an Azure account with access to storage accounts.

1. [Create a Storage Account](https://docs.microsoft.com/en-us/azure/storage/storage-create-storage-account) on Microsoft Azure.
2. Once created, click your storage account and then click Access Keys under the Settings heading.
3. Create a key if you do not already have one.
4. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving).
5. Under the Azure Blob Storage Archiving section, input your storage account name and key and then click Save.
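As with S3, you can optionally confirm that archives are being written once the first export completes. The sketch below assumes the `azure-storage-blob` (v12) Python package; the account name and access key placeholders are the same values entered in the Archiving pane.

```python
# Minimal sketch: walk the storage account and print any archived .json.gz
# blobs. Assumes the azure-storage-blob (v12) package; the account name and
# key below are placeholders for your own values.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",
    credential="<access-key>",
)

for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    for blob in container_client.list_blobs():
        if blob.name.endswith(".json.gz"):
            print(container.name, blob.name)
```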
## Google Cloud Storage

To export your logs to Google Cloud Storage, ensure that you have a Google Cloud Platform account and project with access to storage.

1. Ensure that the [Google Cloud Storage JSON API](https://console.cloud.google.com/apis/library/storage-api.googleapis.com/) is enabled.
2. Create a new bucket (or use an existing one) in [Google Cloud Storage](https://console.cloud.google.com/storage/).
3. Update the permissions of the bucket and add a new member `archiver@logdna-internal-oauth.iam.gserviceaccount.com` with the role of `Storage Admin` (this step can also be scripted, as sketched after this list).
4. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving).
5. Under the Google Cloud Storage Archiving section, input your ProjectId and Bucket and then click Save.
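Step 3 can be done from the Cloud Console UI, but if you manage bucket permissions in code, the following sketch shows the equivalent grant using the `google-cloud-storage` Python client (an assumption; gsutil or Terraform work just as well). The project and bucket names are placeholders.

```python
# Minimal sketch: grant the LogDNA archiver service account the Storage Admin
# role on the bucket. Assumes the google-cloud-storage package and application
# default credentials; "my-project-id" and "my-logdna-archive" are placeholders.
from google.cloud import storage

client = storage.Client(project="my-project-id")
bucket = client.bucket("my-logdna-archive")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.admin",
    "members": {"serviceAccount:archiver@logdna-internal-oauth.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)
```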
## OpenStack Swift

To export your logs to OpenStack Swift, ensure that you have an OpenStack account with access to Swift.

1. Set up Swift by following [these instructions](https://www.swiftstack.com/docs/cookbooks/swift_usage/auth.html#v2-auth).
2. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving).
3. Under the OpenStack Swift Archiving section, input your Username, Password, Auth URL, and Tenant Name and then click Save.

## Digital Ocean Spaces

To export your logs to Digital Ocean Spaces, ensure that you have a Digital Ocean account with access to storage.

1. Create a new Space (or use an existing one) in [Digital Ocean Spaces](https://cloud.digitalocean.com/spaces).
2. Create a new Spaces access key in [Digital Ocean Applications & API](https://cloud.digitalocean.com/settings/api/tokens). Make sure to save the access key and secret key.
3. Go to the [Archive pane of the LogDNA web app](https://app.logdna.com/manage/archiving).
4. Under the Digital Ocean Spaces Archiving section, input your Bucket, Region, AccessKey, and SecretKey. Note that your region can be found in your Spaces URL, e.g. `https://my-logdna-bucket.nyc3.digitaloceanspaces.com` has the region `nyc3`.
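Because Spaces is S3-compatible, the same verification approach used for AWS S3 applies here — point an S3 client at the Spaces endpoint for your region. A minimal sketch with boto3 (an assumption), using placeholder values for the bucket, region, and keys:

```python
# Minimal sketch: list archived objects in a Space via its S3-compatible API.
# Assumes boto3; the region, endpoint, keys, and bucket name are placeholders.
import boto3

session = boto3.session.Session()
spaces = session.client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id="<spaces-access-key>",
    aws_secret_access_key="<spaces-secret-key>",
)

response = spaces.list_objects_v2(Bucket="my-logdna-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```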
## Security

By default, LogDNA encrypts your archived data in transit and requests server-side encryption where possible, including using the `x-amz-server-side-encryption` header when uploading logs to S3.

## Reading archived logs

Log files are stored in a zipped JSON lines format. While we do not currently support re-ingesting historical data, there are a number of tools we can recommend for parsing your archived logs.

### Amazon Athena

Amazon Athena is a serverless interactive query service that can analyze large datasets residing in S3 buckets. You can use Amazon Athena to define a schema and query results using SQL. More information about Amazon Athena is available [here](https://aws.amazon.com/athena/).

### Google BigQuery

Google BigQuery is a serverless enterprise data warehouse that can analyze large datasets. One of our customers, Life.Church, has generously shared a command line utility, [DNAQuery](https://github.com/lifechurch/dnaquery), that loads LogDNA archived data into Google BigQuery. More information about Google BigQuery is available [here](https://cloud.google.com/bigquery/).

### jq

jq is a handy command line tool used to parse JSON data. Once your archive has been uncompressed, you can use jq to parse your archived log files. More information about jq is available [here](https://stedolan.github.io/jq/).
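If you just want a quick look at an archive without setting up any of the tools above, the files can also be read directly, since each one is a gzipped stream of one JSON object per line. Below is a minimal sketch using only the Python standard library — the file name is a placeholder, and the field names printed (`_ts`, `_host`, `_line`) are assumptions, so inspect a line from your own archive to confirm the exact keys.

```python
# Minimal sketch: read a LogDNA archive as gzipped JSON lines.
# The file name is a placeholder; the field names below are assumptions —
# print an entire entry first to see exactly which keys your archive contains.
import gzip
import json

with gzip.open("2018-01-01.json.gz", "rt") as archive:
    for raw_line in archive:
        entry = json.loads(raw_line)  # one JSON object per archived log line
        print(entry.get("_ts"), entry.get("_host"), entry.get("_line"))
```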