{"_id":"582cb011e290ea0f00ec08ed","version":{"_id":"56ba46e2ce5d540d00e2d7aa","project":"56ba46e2ce5d540d00e2d7a7","__v":13,"createdAt":"2016-02-09T20:06:58.727Z","releaseDate":"2016-02-09T20:06:58.727Z","categories":["56ba46e3ce5d540d00e2d7ab","5771a6b145c7080e0072927f","5771a72eb0ea6b0e006a5221","5772e5b20a6d610e00dea073","577c3006b20f211700593629","57ae587bca3e310e00538155","57ae593a7c93fa0e001e6b50","57b1f8263ff6c519005cf074","582601f155b1060f00ec4173","582a62857a96051b0070b011","58ebfae58d5a860f00851fb9","590a75a1ec0d5e190095ab38","59e5253fd460b50010237bed"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"project":"56ba46e2ce5d540d00e2d7a7","parentDoc":null,"category":{"_id":"582601f155b1060f00ec4173","project":"56ba46e2ce5d540d00e2d7a7","__v":0,"version":"56ba46e2ce5d540d00e2d7aa","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-11-11T17:37:53.355Z","from_sync":false,"order":1,"slug":"guides","title":"Guides"},"user":"5732062ad720220e008ea1d2","__v":0,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-11-16T19:14:25.399Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":0,"body":"Ingestion refers to the process of formatting and uploading log lines to LogDNA. This guide covers how to format log lines, make use of LogDNA's automatic parsing, and upload log line metadata.\n\n## Line components\n\nNearly all log line strings contain the three components below.\n\n### Timestamp\n\nTimestamp is required for all ingested log lines. As a general rule, if a timestamp follows the [ISO 8601](https://xkcd.com/1179/) format, it will be parsed correctly. LogDNA also accepts most other timestamp formats, but if your timestamp is not picked up correctly, [let us know](mailto:support:::at:::logdna.com) and we'll see what we can do.\n\n### Level\n\nLog level typically follows timestamp and is automatically parsed. We look for common formats, such as a timestamp followed by a separator followed by the log level. Common log levels include:\n* CRITICAL\n* DEBUG\n* EMERGENCY\n* ERROR\n* FATAL\n* INFO\n* SEVERE\n* TRACE\n* WARN\n\n### Message\n\nMessage is a string that represents the core descriptive component of a log line and is usually preceded by timestamp and level. A message typically contains a mixture of static and variable substrings and allows for easy human interpretation. For example:\n```\nUser myemail@email.com requested /API/accountdetails/\n```\n\n## Source information \n\nSource information metadata is also ingested alongside the log line and is displayed in the All Sources menu in the web app.\n\n### Hostname\nA hostname is the name of the source of the log line, and is automatically picked up by the [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) as well as syslog-based ingestion. However, a host must be specified when submitting lines via the [REST API](https://docs.logdna.com/docs/api) or [code libraries].\n\n### Tags\n\nA host tag can be used to group hosts of a similar type and more than one tag can be applied to a given host. Tagged hosts show up under the All Hosts menu in the web app as a dynamic group. 
## Source information

Source information metadata is also ingested alongside the log line and is displayed in the All Sources menu in the web app.

### Hostname

A hostname is the name of the source of the log line. It is automatically picked up by the [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) as well as by syslog-based ingestion. However, a host must be specified when submitting lines via the [REST API](https://docs.logdna.com/docs/api) or the code libraries.

### Tags

A host tag can be used to group hosts of a similar type, and more than one tag can be applied to a given host. Tagged hosts show up under the All Hosts menu in the web app as a dynamic group. Host tagging is supported by the [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) as well as by syslog-based ingestion that supports custom templates, such as [rsyslog](https://docs.logdna.com/docs/rsyslog) or [syslog-ng](https://docs.logdna.com/docs/syslog-ng).

### Other information

Other optional source information can be specified, such as:
* IP address
* MAC address

The above information is automatically picked up by the [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) and can be specified for the [REST API](https://docs.logdna.com/docs/api). The [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) also picks up some instance metadata, such as instance type.

## App information

In addition to source information, app information is also ingested. The [LogDNA agent](https://docs.logdna.com/docs/logdna-agent) automatically uses the filename (e.g. error.log) as the app name, while syslog-based ingestion uses the [syslog-generated APP-NAME tag](https://tools.ietf.org/html/rfc5424#section-6.2.5). For the [REST API](https://docs.logdna.com/docs/api) and code libraries, the app name must be specified.

## Parsing

LogDNA automatically parses certain types of log lines, which enables [field search](https://docs.logdna.com/docs/search#section-field-search) for those lines.

### Supported Types

LogDNA automatically parses the following log line types:
* Apache
* AWS ELB
* AWS S3
* Cron
* HAProxy
* Heroku
* JSON
* Logfmt
* MongoDB
* Nagios
* Nginx
* PostgreSQL
* Ruby/Rails
* Syslog
* Tomcat

#### JSON Parsing

As long as the log message ends in a `}`, your JSON object will be parsed, even if the JSON object does not span the entire message. If you do not want your JSON object to be parsed, you can simply append an additional character after the ending `}`, such as a period (`.`).

If your JSON contains a `message` field, that field will be used for display and search in the log viewer. We also parse out (and override any existing) log levels if you include a `level` field.
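For example, here is a hedged sketch of building such a JSON line in Python. The `customer` and `duration_ms` fields are made-up custom fields, not part of any LogDNA schema.

```
import json

# `message` is what the log viewer displays and searches; `level` overrides the
# parsed log level; other fields become searchable parsed fields.
line = json.dumps({
    "level": "ERROR",
    "message": "User myemail@email.com requested /API/accountdetails/",
    "customer": "example-corp",  # made-up custom field
    "duration_ms": 250,          # made-up custom field
})

print(line)  # emit the line wherever your logs are collected from
```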
### Reserved fields

For JSON-parsed lines, LogDNA uses a number of reserved fields to keep track of specific types of data. Please note that using any of the following reserved fields in your root JSON object will result in an underscore (_) being prepended to that field inside the context menu (e.g. internally `status` is stored as `_status`). However, you can still search normally inside our web app without being aware of this storage behavior (e.g. you can still just search `status:200`, as we automatically search both `status` and `_status`). For reference, common reserved fields can be found below:
* _source
* _type
* auth
* bytes
* connect
* method
* namespace
* path
* pod
* request
* response
* service
* space
* status
* timestamp
* user

## Metadata

Metadata is a field reserved for custom information associated with a log line. Sending metadata is currently supported by the [REST API](https://docs.logdna.com/docs/api), as well as our [Node.js](https://github.com/logdna/nodejs) and [Python](https://github.com/logdna/python) code libraries.
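As an illustration, the sketch below sends a line with a `meta` object through the REST API using Python's `requests` library. It assumes the ingestion endpoint, the `hostname` and `now` query parameters, basic auth with the ingestion key, and the `lines` payload shape described in the [REST API](https://docs.logdna.com/docs/api) reference; confirm those details there before relying on this.

```
import time
import requests

INGESTION_KEY = "YOUR_INGESTION_KEY"  # placeholder

# Hedged sketch: endpoint, query parameters, and payload shape should be
# confirmed against the REST API reference before use.
resp = requests.post(
    "https://logs.logdna.com/logs/ingest",
    auth=(INGESTION_KEY, ""),  # ingestion key via basic auth
    params={"hostname": "my-host", "now": int(time.time() * 1000)},
    json={
        "lines": [
            {
                "timestamp": int(time.time() * 1000),
                "line": "User myemail@email.com requested /API/accountdetails/",
                "app": "accounts-api",  # hypothetical app name
                "level": "INFO",
                "meta": {"customer": "example-corp"},  # custom metadata
            }
        ]
    },
)
resp.raise_for_status()
```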
## Caveats

**WARNING**: If your parsed fields contain inconsistent value types, field parsing may fail, but we will keep the line if possible. For example, if a line is passed with a meta object containing `meta.myfield` of type String, any subsequent lines with `meta.myfield` must also have a String as the value type for `meta.myfield`. This caveat applies to all parsed fields, including JSON.
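To illustrate with the hypothetical `myfield` from the warning above, the first line in this sketch establishes a String value, so a later line that switches the same field to a Number risks failing field parsing (the line itself is still kept where possible).

```
import json

# First line establishes myfield as a String...
ok_line = json.dumps({"message": "checkout complete", "myfield": "abc123"})

# ...so a later line that switches myfield to a Number may fail field parsing.
risky_line = json.dumps({"message": "checkout complete", "myfield": 42})
```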
## Service limits

Please be aware of the following service limits for ingestion:
* Maximum message size: *16 KB*
* Maximum hostname length: *80 characters*
* Maximum depth of parsed nested fields: *3*
* Maximum number of unique parsed fields: *500 per day*
* Domains within hostnames are truncated. FQDN settings available [upon request](mailto:support@logdna.com).
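If your application can produce very long messages, a client-side guard like this hedged sketch keeps each message within the 16 KB limit before it is sent; the truncation marker is just an example.

```
MAX_MESSAGE_BYTES = 16 * 1024  # 16 KB service limit on message size

def truncate_message(message: str) -> str:
    """Trim a log message so its UTF-8 encoding stays within the 16 KB limit."""
    encoded = message.encode("utf-8")
    if len(encoded) <= MAX_MESSAGE_BYTES:
        return message
    # Leave room for a marker showing the message was cut, then decode safely.
    marker = "...[truncated]"
    keep = MAX_MESSAGE_BYTES - len(marker.encode("utf-8"))
    return encoded[:keep].decode("utf-8", errors="ignore") + marker
```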