Archiving Log Files

Learn how to use the log archiving feature in LogDNA, the easiest, fastest cloud log management software.

This guide covers how to use the archive feature located under the Settings pane of the LogDNA web app.

Overview

Archiving is an automatic function that exports your logs from LogDNA to an external storage provider. Archived logs are in JSON Lines format and preserve the metadata associated with each log line. Once archiving is configured for your account, your logs are exported daily as compressed files (.json.gz). The first time you configure archiving, your archived logs will typically appear within 24 to 48 hours.

AWS S3

To export your logs to an S3 bucket, ensure that you have an AWS account with access to S3.

Create a bucket

  1. In AWS S3, click the Create bucket button.
  2. Give your bucket a unique name and select a region for it to reside in.
  3. Click the Next button until you create your bucket and exit the bucket creation modal.

Configure your bucket

  1. Click on your bucket and select the Permissions section.
  2. Click the Add users button and enter logdna@logdna.com as the email. Alternatively, you can use this canonical ID: 659c621e261e7ffa5d8f925bbe9fe1698f3637878e96bc1a9e7216838799b71a
  3. Check both the Read and Write permission boxes for Object access and click Save. (A scripted alternative is sketched below.)
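
If you prefer to script steps 2 and 3, here is a minimal sketch using boto3, the AWS SDK for Python. The bucket name is a placeholder; the canonical ID is the LogDNA identifier from step 2:

    # pip install boto3
    import boto3

    BUCKET = "my-logdna-archive"  # hypothetical bucket name; replace with your own
    # LogDNA's canonical user ID from step 2 above.
    LOGDNA_ID = "659c621e261e7ffa5d8f925bbe9fe1698f3637878e96bc1a9e7216838799b71a"

    s3 = boto3.client("s3")

    # Fetch the current ACL so existing grants are preserved.
    acl = s3.get_bucket_acl(Bucket=BUCKET)

    # Add READ and WRITE grants for LogDNA's canonical user.
    grants = acl["Grants"] + [
        {"Grantee": {"Type": "CanonicalUser", "ID": LOGDNA_ID}, "Permission": "READ"},
        {"Grantee": {"Type": "CanonicalUser", "ID": LOGDNA_ID}, "Permission": "WRITE"},
    ]

    s3.put_bucket_acl(
        Bucket=BUCKET,
        AccessControlPolicy={"Grants": grants, "Owner": acl["Owner"]},
    )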

Configure LogDNA

  1. Go to the Archive pane of the LogDNA web app
  2. Under the S3 Archiving section, input the name of your newly created S3 bucket, and click Save.

Azure Blob Storage

To export your logs to Azure Blob Storage, ensure that you have an Azure account with access to storage accounts.

  1. Create a Storage Account on Microsoft Azure.
  2. Once created, click your storage account and then click Access Keys under the Settings heading.
  3. Create a key if you do not already have one.
  4. Go to the Archive pane of the LogDNA web app.
  5. Under the Azure Blob Storage Archiving section, input your storage account name and key and then click Save. (A quick way to verify these credentials is sketched below.)
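
To sanity-check the account name and key before saving them, a minimal sketch using the azure-storage-blob Python package (the account name is a placeholder):

    # pip install azure-storage-blob
    from azure.storage.blob import BlobServiceClient

    ACCOUNT_NAME = "mylogdnastorage"  # hypothetical storage account name
    ACCOUNT_KEY = "<access key from step 3>"

    client = BlobServiceClient(
        account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
        credential=ACCOUNT_KEY,
    )

    # Listing containers fails fast if the name/key pair is wrong.
    for container in client.list_containers():
        print(container.name)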

Google Cloud Storage

To export your logs to Google Cloud Storage, ensure that you have a Google Cloud Platform account and project with access to storage.

  1. Ensure that the Google Cloud Storage JSON API is enabled.
  2. Create a new bucket (or use an existing one) in Google Cloud Storage.
  3. Update the permissions of the bucket and add a new member archiver@logdna-internal-oauth.iam.gserviceaccount.com with the role of Storage Admin.
  4. Go to the Archive pane of the LogDNA web app.
  5. Under the Google Cloud Storage Archiving section, input your ProjectId and Bucket and then click Save.
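
Step 3 above (granting the bucket permission) can also be done from code. A minimal sketch using the google-cloud-storage Python package; the project and bucket names are placeholders:

    # pip install google-cloud-storage
    from google.cloud import storage

    PROJECT = "my-project"        # hypothetical project ID
    BUCKET = "my-logdna-archive"  # hypothetical bucket name
    MEMBER = "serviceAccount:archiver@logdna-internal-oauth.iam.gserviceaccount.com"

    client = storage.Client(project=PROJECT)
    bucket = client.bucket(BUCKET)

    # Append a Storage Admin binding for the LogDNA archiver account.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({"role": "roles/storage.admin", "members": {MEMBER}})
    bucket.set_iam_policy(policy)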

OpenStack Swift

To export your logs to OpenStack Swift, ensure that you have an OpenStack account with access to Swift.

  1. Set up Swift by following the OpenStack Swift documentation.
  2. Go to the Archive pane of the LogDNA web app.
  3. Under the OpenStack Swift Archiving section, input your Username, Password, Auth URL, and Tenant Name and then click Save.
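
To verify these four credentials before saving them, a minimal sketch using the python-swiftclient package (all values shown are placeholders):

    # pip install python-swiftclient python-keystoneclient
    import swiftclient

    conn = swiftclient.Connection(
        authurl="https://auth.example.com:5000/v2.0",  # hypothetical Auth URL
        user="my-user",
        key="my-password",
        tenant_name="my-tenant",
        auth_version="2",
    )

    # get_account() authenticates and returns (headers, list of containers).
    headers, containers = conn.get_account()
    print(f"{len(containers)} container(s) visible")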

DigitalOcean Spaces

To export your logs to DigitalOcean Spaces, ensure that you have a DigitalOcean account with access to storage.

  1. Create a new Space (or use an existing one) in DigitalOcean Spaces.
  2. Create a new Spaces access key in DigitalOcean Applications & API. Make sure to save the access key and secret key.
  3. Go to the Archive pane of the LogDNA web app.
  4. Under the DigitalOcean Spaces Archiving section, input your Bucket, Region, AccessKey, and SecretKey and then click Save. Note that your region can be found in your Spaces URL; e.g., https://my-logdna-bucket.nyc3.digitaloceanspaces.com has the region nyc3. (A quick credentials check is sketched below.)
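
Because Spaces is S3-compatible, you can sanity-check the same four values with boto3 by pointing it at the regional Spaces endpoint. A minimal sketch (names and keys are placeholders):

    # pip install boto3
    import boto3

    REGION = "nyc3"                # from your Spaces URL
    BUCKET = "my-logdna-bucket"    # hypothetical Space name
    ACCESS_KEY = "<access key from step 2>"
    SECRET_KEY = "<secret key from step 2>"

    spaces = boto3.client(
        "s3",
        region_name=REGION,
        endpoint_url=f"https://{REGION}.digitaloceanspaces.com",
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
    )

    # A simple list call confirms the key pair and region are correct.
    print(spaces.list_objects_v2(Bucket=BUCKET).get("KeyCount", 0), "object(s)")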

IBM Cloud Object Storage

To export your logs to IBM Cloud Object Storage, ensure that you have an IBM Cloud account with access to storage.

  1. Create a new object storage service (or use an existing one) in IBM Cloud Object Storage.
  2. Create a new bucket (or use an existing one) in your service for LogDNA dump files.
  3. Go to the Archive pane of the LogDNA web app.
  4. Under the IBM Cloud Object Storage Archiving section, input your Bucket, Endpoint, API Key, and Resource Instance ID and then click Save.
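
To check these values before saving them, a minimal sketch using IBM's ibm-cos-sdk Python package (the endpoint and bucket name are placeholders):

    # pip install ibm-cos-sdk
    import ibm_boto3
    from ibm_botocore.client import Config

    cos = ibm_boto3.client(
        "s3",
        ibm_api_key_id="<API key>",
        ibm_service_instance_id="<resource instance ID>",
        config=Config(signature_version="oauth"),
        # Hypothetical endpoint; use the one listed for your bucket.
        endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    )

    # Listing the bucket confirms the credentials and endpoint line up.
    print(cos.list_objects_v2(Bucket="my-logdna-archive").get("KeyCount", 0), "object(s)")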

Security

By default, LogDNA encrypts your archived data in transit and requests server-side encryption where possible, including setting the x-amz-server-side-encryption header when uploading logs to S3.

Reading archived logs

Log files are stored in a gzipped JSON Lines format. While we do not currently support re-ingesting historical data, there are a number of tools we can recommend for parsing your archived logs.
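
For example, the following minimal Python sketch reads a single archive file line by line using only the standard library (the filename is a placeholder):

    import gzip
    import json

    # Hypothetical archive filename; actual names depend on your account and date.
    ARCHIVE = "logdna-archive.json.gz"

    # Each line of the uncompressed file is a standalone JSON object.
    with gzip.open(ARCHIVE, "rt", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            print(record)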

Amazon Athena

Amazon Athena is a serverless interactive query service that can analyze large datasets residing in S3 buckets. You can use Amazon Athena to define a schema and query results using SQL. More information is available in the Amazon Athena documentation.

Google BigQuery

Google BigQuery is a serverless enterprise data warehouse that can analyze large datasets. One of our customers, Life.Church, has generously shared a command-line utility, DNAQuery, that loads LogDNA archived data into Google BigQuery. More information is available in the Google BigQuery documentation.

jq

jq is a handy command-line tool for parsing JSON data. Once your archive has been uncompressed, you can use jq to parse your archived log files. More information is available on the jq website.
