Merge sending logs through otel-collector #147
This sends logs collected by fluentd to otelcol via the fluentforward receiver. There are some limitations, noted by TODOs, that I will file issues to track, but they should not affect the common cases; they are mostly around configuring the splunk_hec exporter's TLS settings. Kubernetes metadata is still attached on the fluentd side, since various annotations are used to construct source/sourcetype in some cases. This may not be worth fixing given the planned move to the filelog receiver.
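For reference, a minimal collector pipeline of the shape described above (fluentforward receiver feeding the splunk_hec exporter) might look like the sketch below. The endpoint addresses and token are illustrative placeholders, not the chart's actual rendered values:

```yaml
receivers:
  fluentforward:
    # fluentd's out_forward plugin sends events here
    endpoint: 0.0.0.0:8006

exporters:
  splunk_hec:
    # placeholder values; the chart templates these from values.yaml
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://hec.example.com:8088/services/collector"
    # TLS settings here are among the TODOs mentioned in the description

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [splunk_hec]
```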
- Remove hec token
- Remove ingestHost, ingestPort, ingestProtocol
- Add back extraEnvs for fluentd
- Fix changelog ordering
- Don't include the fluentd configmap when the agent is not enabled
- Enable http-forwarder for all telemetry types, since the signalfx exporter sends metadata updates
helm-charts/splunk-otel-collector/templates/config/_otel-agent.tpl
helm-charts/splunk-otel-collector/templates/config/_otel-collector.tpl
LGTM
- Removed `logsBackend`, configure `splunk_hec` exporter directly (#123)
- Removed `splunk.com/index` annotation for logs (#123)
- Removed `fluentd.config.indexFields` as all fields sent are indexed (#123)
- Removed `fluentforward` receiver from gateway (#127)
Wonder if this is desired overall, since there may be other users than our td-agent? Maybe keep it for now with a deprecation warning?
Seems unlikely. Would rather not encourage its use; I've run into several issues with the receiver. Users can always add it to their own config.
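If a user does depend on the gateway's fluentforward endpoint, restoring it through a values override could look roughly like the sketch below. The top-level values key and the other pipeline receivers are assumptions here, so check the chart's actual values schema before using it:

```yaml
# values.yaml override -- key names are assumptions, verify against the chart
otelCollector:
  config:
    receivers:
      fluentforward:
        endpoint: 0.0.0.0:8006
    service:
      pipelines:
        logs:
          receivers: [fluentforward, otlp]
```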
container_id ${record.dig("docker","container_id")}
pod_uid ${record.dig("kubernetes","pod_id")}
node_name "#{ENV['K8S_NODE_NAME']}"
cluster_name {{ .Values.clusterName }}
why drop node and cluster names?
It'll get attached by the resource processor in the agent now.
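A sketch of how the agent can attach these attributes instead of fluentd, using standard collector processor config. The exact attribute keys and the use of the resource processor's upsert action are assumptions about the chart's approach, not its verified config:

```yaml
processors:
  resource:
    attributes:
      # attribute keys below are illustrative; the chart may use different ones
      - key: k8s.node.name
        value: "${K8S_NODE_NAME}"
        action: upsert
      - key: k8s.cluster.name
        value: "{{ .Values.clusterName }}"
        action: upsert
```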
🎉