Ingestion

LogDNA offers a variety of integrations for data ingestion, including AWS CloudWatch, Kubernetes, Docker, Heroku, Elastic Beanstalk, and more for simplified, centralized log management.

What is ingestion?

Ingestion refers to the process of formatting and uploading data from external sources like applications, platforms, and servers. LogDNA automatically ingests log data for fast, real-time log management and analysis. Learn how to format log lines, make use of LogDNA's automatic parsing, and upload log line metadata.

Line components

Nearly all log line strings contain the three components below.

Timestamp

Timestamp is required for all ingested log lines. As a general rule, if a timestamp follows the ISO 8601 format, it will be parsed correctly. LogDNA also accepts most other timestamp formats, but if your timestamp is not picked up correctly, let us know and we'll see what we can do.
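As a quick sanity check before sending lines, you can verify that your timestamps are ISO 8601 using Python's standard library. This is only an illustrative client-side check, not LogDNA's actual parser:

```python
from datetime import datetime

def is_iso8601(ts: str) -> bool:
    """Return True if ts parses as an ISO 8601 timestamp."""
    try:
        # fromisoformat covers common ISO 8601 shapes; normalize a
        # trailing 'Z' suffix for Python versions before 3.11.
        datetime.fromisoformat(ts.replace("Z", "+00:00"))
        return True
    except ValueError:
        return False

print(is_iso8601("2023-05-01T12:34:56Z"))  # True
print(is_iso8601("May 1st, noonish"))      # False
```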

Log Level

Log level typically follows timestamp and is automatically parsed. We look for common formats, such as a timestamp followed by a separator followed by the log level. Common log levels include:

  • CRITICAL
  • DEBUG
  • EMERGENCY
  • ERROR
  • FATAL
  • INFO
  • SEVERE
  • TRACE
  • WARN

Message

Message is a string that represents the core descriptive component of a log line and is usually preceded by timestamp and level. A message typically contains a mixture of static and variable substrings and allows for easy human interpretation. For example:

User myemail@email.com requested /API/accountdetails/
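The three components above can be pulled apart with a simple pattern. The regex below is a sketch for one common line shape (timestamp, bracketed level, message); real log formats vary, and LogDNA's own parsing handles many more layouts than this:

```python
import re

LEVELS = "CRITICAL|DEBUG|EMERGENCY|ERROR|FATAL|INFO|SEVERE|TRACE|WARN"

# Matches lines shaped like:
#   2023-05-01T12:34:56Z [INFO] User myemail@email.com requested /API/accountdetails/
LINE_RE = re.compile(
    rf"^(?P<timestamp>\S+)\s+\[?(?P<level>{LEVELS})\]?\s+(?P<message>.*)$"
)

line = "2023-05-01T12:34:56Z [INFO] User myemail@email.com requested /API/accountdetails/"
m = LINE_RE.match(line)
print(m.group("timestamp"))  # 2023-05-01T12:34:56Z
print(m.group("level"))      # INFO
print(m.group("message"))    # User myemail@email.com requested /API/accountdetails/
```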

Source information

Source information metadata is also ingested alongside the log line and is displayed in the All Sources menu in the web app.

Hostname

A hostname is the name of the source of the log line and is automatically picked up by the LogDNA agent as well as by syslog-based ingestion. However, a hostname must be specified when submitting lines via the REST API or code libraries.
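As a sketch of supplying the hostname yourself, the snippet below builds a REST ingestion request. The endpoint URL, the hostname/now query parameters, and the lines payload shape follow the LogDNA ingestion API as commonly documented, but treat them as assumptions and confirm against the REST API reference; no request is actually sent here:

```python
import json
import time
from urllib.parse import urlencode

INGEST_URL = "https://logs.logdna.com/logs/ingest"  # assumed ingestion endpoint

def build_ingest_request(hostname: str, lines: list) -> tuple:
    """Return (url, body) for a REST ingestion call.

    hostname is mandatory for REST ingestion; the agent and
    syslog-based ingestion supply it automatically.
    """
    query = urlencode({"hostname": hostname, "now": int(time.time() * 1000)})
    body = json.dumps({"lines": lines})
    return f"{INGEST_URL}?{query}", body

url, body = build_ingest_request(
    "my-host-01",
    [{"line": "User logged in", "app": "auth.log", "level": "INFO"}],
)
# POST `body` to `url`, authenticating with your ingestion key.
```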

Tags

A tag can be used to group lines, and more than one tag can be applied to a given line. Tags appear under the All Tags menu in the web app. Tagging is supported by the LogDNA agent as well as by custom-template syslog-based ingestion, such as rsyslog or syslog-ng. At the time of writing, only source tags are supported, but more tag types are planned.

Other information

Other optional source information can be specified, such as:

  • IP address
  • MAC address

The above information is automatically picked up by the LogDNA agent and can be specified for the REST API. The LogDNA agent also picks up some instance metadata, such as instance type.

Application information

In addition to source information, app information is also ingested. The LogDNA agent automatically parses the app name as the filename (e.g. error.log) while syslog-based ingestion uses the syslog-generated APP-NAME tag. For the REST API and code libraries, the app name must be specified.

Log Parsing

LogDNA automatically parses certain types of log lines, enabling field search on those lines.

Supported Types

LogDNA automatically parses the following log line types:

  • Apache
  • AWS ELB
  • AWS S3
  • Cron
  • HAProxy
  • Heroku
  • JSON
  • Logfmt
  • MongoDB
  • Nagios
  • Nginx
  • PostgreSQL
  • Ruby/Rails
  • Syslog
  • Tomcat
  • Windows Events

JSON Parsing

As long as the log message ends in a }, your JSON object will be parsed, even if the JSON object does not span the entire message. If you do not want your JSON object to be parsed, simply append an additional character, such as a period (.), after the ending }.
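The trailing-} rule can be modeled as below. This is an illustrative sketch of the behavior described above, not LogDNA's actual implementation:

```python
import json

def maybe_parse_json(message: str):
    """Sketch of the trailing-} rule: attempt JSON parsing only when
    the message ends in '}', even if the object starts mid-message."""
    text = message.rstrip()
    if not text.endswith("}"):
        return None  # e.g. a trailing '.' opts the line out of parsing
    # Try progressively later '{' positions until a valid object parses.
    start = text.find("{")
    while start != -1:
        try:
            return json.loads(text[start:])
        except json.JSONDecodeError:
            start = text.find("{", start + 1)
    return None

print(maybe_parse_json('request done {"status": 200}'))   # {'status': 200}
print(maybe_parse_json('request done {"status": 200}.'))  # None
```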

If your JSON contains a message field, that field will be used for display and search in the log viewer. We also parse out (and override any existing) log levels if you include a level field.

Reserved fields

For JSON parsed lines, LogDNA uses a number of reserved fields to keep track of specific types of data. Please note that using the following reserved fields in your root JSON object will result in an underscore (_) prepended to those fields inside the context menu (e.g. internally status is stored as _status). However, you can still search normally inside our web app without being aware of this storage behavior (e.g. you can still just search status:200 as we will automatically search both status and _status). For reference, common reserved fields can be found below:

  • _source
  • _type
  • auth
  • bytes
  • connect
  • method
  • namespace
  • path
  • pod
  • request
  • response
  • service
  • space
  • status
  • timestamp
  • user
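The underscore-prefixing behavior described above can be modeled like this. The snippet is a sketch of the storage rule only; the search-time expansion (e.g. status:200 matching both status and _status) happens server-side:

```python
RESERVED_FIELDS = {
    "_source", "_type", "auth", "bytes", "connect", "method", "namespace",
    "path", "pod", "request", "response", "service", "space", "status",
    "timestamp", "user",
}

def store_fields(parsed: dict) -> dict:
    """Sketch: prefix reserved root-level keys with '_' before storage."""
    return {
        ("_" + k if k in RESERVED_FIELDS and not k.startswith("_") else k): v
        for k, v in parsed.items()
    }

print(store_fields({"status": 200, "elapsed_ms": 12}))
# {'_status': 200, 'elapsed_ms': 12}
```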

Metadata

Metadata is a field reserved for custom information associated with a log line. Sending metadata is currently supported by the REST API, as well as by our Node.js and Python code libraries.
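A line payload carrying a custom meta object might look like the following. The field names inside meta are hypothetical; the outer line/app/level/meta keys follow the payload shape used for REST ingestion, which you should confirm against the REST API reference:

```python
import json

# Hypothetical log line with a custom `meta` object attached.
line = {
    "line": "checkout completed",
    "app": "shop",
    "level": "INFO",
    "meta": {"order_id": "A-1021", "items": 3},  # custom, queryable metadata
}
payload = json.dumps({"lines": [line]})
print(payload)
```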

Caveats

WARNING: If your parsed fields contain inconsistent value types, field parsing may fail, but we will keep the line if possible. For example, if a line is ingested with a meta object containing meta.myfield of type String, then any subsequent lines must also use a String value for meta.myfield. This caveat applies to all parsed fields, including JSON.
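A client-side check for this caveat might look like the sketch below, which flags lines whose meta field types conflict with the first type seen for that field. This mirrors the constraint described above rather than LogDNA's internal behavior:

```python
def check_meta_types(lines: list) -> list:
    """Flag (line_index, field) pairs whose meta value type conflicts
    with the first type seen for that field."""
    seen = {}       # field name -> first-seen type
    conflicts = []
    for i, line in enumerate(lines):
        for field, value in line.get("meta", {}).items():
            first = seen.setdefault(field, type(value))
            if type(value) is not first:
                conflicts.append((i, field))
    return conflicts

lines = [
    {"meta": {"myfield": "abc"}},  # establishes String
    {"meta": {"myfield": 42}},     # conflicting Number -> flagged
]
print(check_meta_types(lines))  # [(1, 'myfield')]
```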

Ingestion Delays

From time to time, there will be delays in processing new log data due to the volume of incoming logs. There are two types of delays: Live Tail latency and indexing latency.

Live Tail Latency

We strive for a Live Tail latency of 1s in all cases; typically, our Live Tail latency averages about 10s. You can check the current latency at https://status.logdna.com.

Indexing Latency

Indexing latency refers to the time between when a line is ingested and when it is available for search. By default, our indexing process runs every 15s, so once a line appears in Live Tail, it should be indexed within 15s.

Service limits

Please be aware of the following service limits for ingestion:

  • Message size: 16 KB
  • Hostname length: 256 characters
  • App name length: 512 characters
  • Level: 80 characters
  • Tags: 80 characters
  • Depth of parsed nested fields: 3
  • # Unique parsed fields: 1000 per day
  • Domains within hostnames are truncated. FQDN settings available upon request.
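The string-length limits above can be enforced client-side before sending, as in the sketch below. The truncation behavior here is illustrative; the service may reject or handle over-limit values differently:

```python
# Per-field character limits from the service limits table above.
LIMITS = {"line": 16 * 1024, "hostname": 256, "app": 512, "level": 80, "tags": 80}

def clamp_entry(entry: dict) -> dict:
    """Truncate string fields to their documented ingestion limits."""
    return {
        k: (v[: LIMITS[k]] if k in LIMITS and isinstance(v, str) else v)
        for k, v in entry.items()
    }

entry = {"line": "x" * 20_000, "app": "shop", "level": "INFO"}
print(len(clamp_entry(entry)["line"]))  # 16384
```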