The v2 Agent is now generally available for Kubernetes. Other platforms and OSes will follow.
The fundamental change from our v1, Node.js-based Agent is that the v2 Agent relies on the Linux kernel to watch log files and directories for changes. The Agent is notified when log files are changed or added, rather than having to poll those files constantly. This frees up CPU, improves stability, eliminates duplicate lines from symbolically linked log files, and improves correctness and accuracy. The new Agent is written in Rust and adds ConfigMap support.
The v2 Agent is open-sourced under the MIT license. Please check it out and offer your contributions in the GitHub repo.
Anybody running Kubernetes 1.9+ can install this Agent.
kubectl apply -f https://assets.logdna.com/clients/agent-namespace.yaml
kubectl create secret generic logdna-agent-key -n logdna-agent --from-literal=logdna-agent-key=<YOUR LOGDNA INGESTION KEY>
kubectl apply -f https://assets.logdna.com/clients/agent-resources.yaml
Updating requires a complete reinstallation: delete your current Agent, then follow the installation steps above.
We highly recommend keeping a backup of your current Agent configuration. If you don't have one handy, run:
kubectl get ds logdna-agent -o yaml > old-logdna-agent.yaml
Upgrading from v1.X.X:
kubectl delete -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml
Upgrading from v2.X.X:
kubectl delete -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-v2.yaml
Upgrading from the v2 beta:
kubectl delete -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-v2-beta.yaml
Upgrading from v2.1.7+:
kubectl delete -f https://assets.logdna.com/clients/agent-namespace.yaml
To copy over your old configuration to your new Agent, first make a local copy of the new Agent's yaml:
curl https://raw.githubusercontent.com/logdna/logdna-agent-v2/master/k8s/logdna-agent.yaml --output logdna-agent.yaml
Then copy the env section of your old yaml to the local copy you just made.
Now, use this to install your new Agent:
kubectl create -f logdna-agent.yaml
kubectl create secret generic logdna-agent-key -n logdna-agent --from-literal=logdna-agent-key=<YOUR LOGDNA INGESTION KEY>
If you do not intend to keep any part of your old configuration, then you can use the default provided yaml, as in the How to install your Agent section above.
Configuration is done through environment variables, which are found in the env section of your LogDNA Agent's Kubernetes YAML.
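As an illustrative excerpt (not the official manifest; the container name and values here are placeholders), the env section sits under the container spec of the DaemonSet:

```yaml
# Illustrative DaemonSet excerpt -- names and values are placeholders.
spec:
  template:
    spec:
      containers:
        - name: logdna-agent
          env:
            - name: LOGDNA_INGESTION_KEY
              valueFrom:
                secretKeyRef:
                  name: logdna-agent-key
                  key: logdna-agent-key
            - name: LOGDNA_TAGS
              value: prod,backend
```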
By default, everything under /var/log is sent to LogDNA. To add other directories:
- Use LOGDNA_INCLUSION_RULES for glob patterns. Refer to this for the glob syntax we support.
- Use LOGDNA_INCLUSION_REGEX_RULES if you would like to use regex.
To exclude directories:
- Use LOGDNA_EXCLUSION_RULES for glob patterns. Refer to this for the glob syntax we support.
- Use LOGDNA_EXCLUSION_REGEX_RULES if you would like to use regex.
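As a sketch of what exclusion settings look like in the env section (the paths and patterns here are hypothetical examples, not defaults):

```yaml
env:
  # Glob pattern (hypothetical path)
  - name: LOGDNA_EXCLUSION_RULES
    value: "/var/log/containers/kube-system_*"
  # Or a regex rule instead
  - name: LOGDNA_EXCLUSION_REGEX_RULES
    value: ".*debug\\.log"
```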
- LOGDNA_HOST - LogDNA host where you'll send the logs. Defaults to logs.logdna.com
- LOGDNA_ENDPOINT - Endpoint to send logs to. Defaults to /logs/ingest/
- LOGDNA_INGESTION_KEY - Your LogDNA ingestion key.
- LOGDNA_USE_SSL - Use TLS 1.2 when sending logs. Defaults to true
- LOGDNA_USE_COMPRESSION - Use compression when sending logs. Defaults to true
- LOGDNA_GZIP_LEVEL - Compression level for gzip, from 1 to 9. Default is
- LOGDNA_HOSTNAME - Hostname of your server, e.g. my-server
- LOGDNA_IP - IP address of your server, e.g. 127.0.0.1
- LOGDNA_TAGS - Comma-separated tags to add to each line, e.g. prod,ussouth,backend
- LOGDNA_MAC - MAC address of the device.
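Putting a few of these together, a hypothetical env section might look like the following (all values are placeholders, not recommended settings):

```yaml
env:
  - name: LOGDNA_HOST
    value: logs.logdna.com
  - name: LOGDNA_TAGS
    value: prod,ussouth,backend
  - name: LOGDNA_USE_COMPRESSION
    value: "true"
  - name: LOGDNA_GZIP_LEVEL
    value: "2"
```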
If your Kubernetes setup has RBAC enabled, the v2 Agent will automatically and by default pull in Kubernetes metadata, like labels and annotations.
In the v2.0.x versions of the Agent that were live during our private beta, this Kubernetes metadata (and the corresponding calls to the Kubernetes API) was not enabled by default. Earlier versions of our Agent had also relied on docker.sock rather than the official Kubernetes APIs for retrieving this data; we've replaced that with RBAC-based access.
Without RBAC enabled, the Agent will still work, and it may still be able to grab some metadata and apply it to your logs, but it's not guaranteed. And while we're quite parsimonious with how often we call the Kubernetes API, you'll still see the occasional error log noting that the Kubernetes API call has failed. For the highest fidelity data and the best experience, make sure you enable RBAC.
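The official agent-resources.yaml already includes the RBAC objects the Agent needs. Purely as an illustration of the kind of read-only access involved (the resource names and verbs below are assumptions, not the shipped manifest), an RBAC grant for pod metadata looks like this:

```yaml
# Illustrative only -- use the shipped agent-resources.yaml in practice.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: logdna-agent
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```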
There is an edge case where log lines may be missed while the LogDNA Agent is restarting. Because the v2 Agent relies on kernel notifications to know which files have changed, it does not receive those notifications during the time it takes to restart. When it is back up, the Agent tries to infer what happened while it was down. For example, if a log file that was 10MB before the restart is now 5MB, the Agent has to guess whether the file is new, unchanged, or truncated. That inference is not as reliable as receiving notifications from the kernel and may lead to some duplicated or lost logs.
Because the Agent now depends on the kernel, only unique files are tracked and log lines are de-duplicated. Metadata from symbolic links is lost and won't appear in the LogDNA app.
When the Agent is unable to send log lines to LogDNA, by default it writes those log lines to temporary .retry files, which it then reads from and attempts to resend. The oldest .retry files are cleared once they accumulate to 50% of the allocated temporary space.
Writing .retry files to a temporary directory protects the Agent from filling the disk with too many .retry files and causing the server to fail catastrophically. The tradeoff is that if the Agent pod restarts, the temporary files are deleted and that log data is lost. We're working on a feature that enables the Agent to write to disk without those risks.
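The cleanup behavior can be sketched as follows. This is not the Agent's actual implementation (which is in Rust); it is a minimal shell illustration, with a made-up size budget, of clearing the oldest .retry files once usage passes 50% of the allocated space:

```shell
# Minimal illustration (not the Agent's real code) of oldest-first
# cleanup of .retry files at 50% of an allocated budget.
set -eu

RETRY_DIR=$(mktemp -d)
BUDGET_BYTES=8192                 # hypothetical allocated temporary space
LIMIT=$((BUDGET_BYTES / 2))       # clear oldest files beyond 50% usage

# Simulate four 2 KiB .retry files with increasing timestamps.
for i in 1 2 3 4; do
  head -c 2048 /dev/zero > "$RETRY_DIR/retry-$i.retry"
  touch -t "20200101010$i" "$RETRY_DIR/retry-$i.retry"
done

# While total size exceeds the limit, delete the oldest file.
total() { cat "$RETRY_DIR"/*.retry | wc -c | tr -d ' '; }
while [ "$(total)" -gt "$LIMIT" ]; do
  oldest=$(ls -t "$RETRY_DIR"/*.retry | tail -n 1)
  rm -- "$oldest"
done

ls "$RETRY_DIR"    # retry-1 and retry-2 (the oldest) are gone
```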