Ingestion refers to the process of formatting and uploading data from external sources like applications, platforms, and servers. LogDNA automatically ingests log data for fast, real-time log management and analysis. Learn how to format log lines, make use of LogDNA's automatic parsing, and upload log line metadata.
To start adding and ingesting a log source, go to the app: head to the "Add a Source" page to get your account-specific installation instructions and LogDNA ingestion key.
Nearly all log line strings contain the three components below, though only the message is required.
Message is a string that represents the core descriptive component of a log line and is usually preceded by timestamp and level. A message typically contains a mixture of static and variable substrings and allows for easy human interpretation. For example:
User [email protected] requested /API/accountdetails/
Timestamp is required for all ingested log lines. As a general rule, if a timestamp follows the ISO 8601 format, it will be parsed correctly. LogDNA also accepts most other timestamp formats, but if your timestamp is not picked up correctly, let us know and we'll see what we can do.
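As a minimal sketch, an ISO 8601 UTC timestamp can be produced like this in Python (the overall line format here is illustrative, not a LogDNA requirement):

```python
from datetime import datetime, timezone

def format_log_line(level: str, message: str) -> str:
    """Prefix a message with an ISO 8601 UTC timestamp and a log level."""
    # %Y-%m-%dT%H:%M:%SZ yields e.g. 2024-06-01T12:00:00Z, which follows ISO 8601
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"{ts} {level} {message}"

print(format_log_line("INFO", "User [email protected] requested /API/accountdetails/"))
```

A timestamp in this shape should be parsed correctly under the rule above.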
Log level typically follows the timestamp and is automatically parsed. We look for common formats, such as a timestamp followed by a separator followed by the log level. Common log levels include DEBUG, INFO, WARN, ERROR, and FATAL.
Source information metadata is also ingested alongside the log line and is displayed in the All Sources menu in the web app. The only required parameter is the hostname.
A hostname is the name of the source of the log line, and is automatically picked up by the LogDNA agent as well as syslog-based ingestion. However, a host must be specified when submitting lines via the REST API or code libraries.
A tag can be used to group lines, and more than one tag can be applied to a given line. Tags show up under the All Tags menu in the web app. Tagging is supported by both the LogDNA agent and custom-template syslog-based ingestion, such as rsyslog or syslog-ng. At the time of writing, only source tags are supported, but more types are planned.
Other optional source information can be specified, such as:
- IP address
- MAC address
In addition to source information, app information is also ingested. The LogDNA agent automatically parses the app name as the filename (e.g. error.log) while syslog-based ingestion uses the syslog-generated APP-NAME tag. For the REST API and code libraries, the app name must be specified.
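When submitting lines via the REST API, the hostname, app name, and any tags are supplied by the client. The sketch below builds such a request; the endpoint URL and parameter names are assumptions based on this doc, so check the current API reference before relying on them (authentication with your ingestion key is omitted):

```python
import json

# Assumed ingestion endpoint; hostname and tags travel as query parameters,
# log lines as a JSON body. Verify against current LogDNA/Mezmo API docs.
INGEST_URL = "https://logs.logdna.com/logs/ingest"

def build_payload(hostname, app, message, level="INFO", tags=None):
    """Build the query params and JSON body for one log line."""
    params = {"hostname": hostname}      # hostname is required for REST ingestion
    if tags:
        params["tags"] = ",".join(tags)  # multiple tags group lines together
    body = {
        "lines": [
            {
                "app": app,              # app name must be specified via REST
                "level": level,
                "line": message,
            }
        ]
    }
    return params, body

params, body = build_payload("web-01", "checkout", "User signed in", tags=["prod"])
print(json.dumps(body))
```

A real client would POST this body to INGEST_URL with the params and the ingestion key as credentials.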
LogDNA automatically parses the following log line types:
- AWS CloudFront
- AWS CloudWatch
- AWS ELB
- AWS ECS
- AWS S3
- Docker Swarm
- Docker Cloud/Compose
- IIS Log
- Windows Events
As long as the log message ends in a closing brace (}), the last JSON object in the message will be parsed, even if the JSON object does not span the entire message. If you do not want your JSON object to be parsed, simply append an additional character after the ending brace, such as a period.
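For example, the trailing JSON object in a hypothetical line like this would be parsed, even though the message starts with plain text:

```
2021-03-01T12:00:00Z INFO Request complete {"status": 200, "path": "/API/accountdetails/"}
```

Appending a character after the closing brace (e.g. a trailing period) would leave the same line unparsed.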
If your JSON contains a message field, that field will be used for display and search in the log viewer. We also parse out (and override any existing) log levels if you include a level field.
For JSON-parsed lines, LogDNA uses a number of reserved fields to keep track of specific types of data. Please note that using the following reserved fields in your root JSON object will result in an underscore (_) being prepended to those fields inside the context menu (e.g. internally, status is stored as _status). However, you can still search normally in our web app without being aware of this storage behavior (e.g. you can still just search status:200, as we automatically search both status and _status). For reference, common reserved fields can be found below:
WARNING: If your parsed fields contain inconsistent value types, field parsing may fail, but we will keep the line if possible. For example, if a line is passed with a meta object whose meta.myfield is of type String, any subsequent lines must also use a String value for meta.myfield. This caveat applies to all parsed fields, including JSON.
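The constraint can be illustrated with a hypothetical client-side pre-flight check (the helper name and line shape are our own) that verifies a meta field keeps the same value type across lines:

```python
def meta_types_consistent(lines, field):
    """Return True if `field` has the same value type in every line's meta object."""
    types = {type(line.get("meta", {}).get(field)) for line in lines}
    types.discard(type(None))  # lines that omit the field don't conflict
    return len(types) <= 1

ok = [{"meta": {"myfield": "abc"}}, {"meta": {"myfield": "def"}}]
bad = ok + [{"meta": {"myfield": 123}}]  # int after String breaks consistency

print(meta_types_consistent(ok, "myfield"))   # True
print(meta_types_consistent(bad, "myfield"))  # False
```

Running a check like this before sending avoids silently losing parsed fields on the server side.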
From time to time, there will be delays in processing new log data due to the volume of incoming logs. There are two types of delays: Live Tail latency and indexing latency.
We strive for a Live Tail latency of 1s in all cases; typically, our Live Tail latency averages about 10s. You can see the current latency at https://status.logdna.com.
Indexing refers to the time between when a line is ingested and when it becomes available for search. By default, our indexing process updates every 30s, so once a line is available in Live Tail, it should be indexed within 30s.
Review the following service limits for ingestion.
- Body size: 10 MB*
- Message size: 16 KB
- Metadata size: 32 KB
- Hostname length: 256 characters
- App name length: 512 characters
- Level: 80 characters
- Tags: 80 characters
- Depth of parsed nested fields: 3
- Number of unique parsed fields: Typically 500 per day
- Domains within hostnames are truncated. FQDN settings available upon request.
*This is the server-enforced maximum body size. Ingestion clients may further reduce this.
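These limits can also be enforced client-side before sending. A minimal sketch (limit values taken from the list above; truncating rather than rejecting oversized fields is our own assumption, not documented LogDNA behavior):

```python
MAX_MESSAGE_BYTES = 16 * 1024   # Message size: 16 KB
MAX_HOSTNAME_CHARS = 256        # Hostname length: 256 characters
MAX_APP_CHARS = 512             # App name length: 512 characters

def clamp_line(hostname: str, app: str, message: str) -> dict:
    """Truncate fields that exceed the documented ingestion limits."""
    return {
        "hostname": hostname[:MAX_HOSTNAME_CHARS],
        "app": app[:MAX_APP_CHARS],
        # Truncate on bytes, then drop any partial multi-byte sequence at the cut.
        "line": message.encode("utf-8")[:MAX_MESSAGE_BYTES].decode("utf-8", "ignore"),
    }

entry = clamp_line("web-01", "checkout", "x" * 20000)
print(len(entry["line"]))  # 16384
```

Clamping up front keeps a single oversized field from causing the whole request body to be rejected at the server.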
Data size measurement
Please be aware that log data can increase in size after the JSON string is parsed in Node.js. Data size is measured on the LogDNA side, after the line is parsed as JSON, not by how much data is sent in the line.