OTLP HTTP Log Output Plugin #1084
Below is an example OTEL collector config. It's contrived, but hopefully illustrates why it might be desirable. It's also intentionally small/limited. In the example, Falco Sidekick would send OTLP logs, either via HTTP or gRPC, to an OTEL collector, such as opentelemetry-collector-contrib.
It also sets
Ideally, the
While the example assumes Kubernetes, you should be able to easily adapt it to other environments.

extensions:
  basicauth:
    client_auth:
      username: elastic
      password: changeme

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
      http:
        endpoint: ${env:MY_POD_IP}:4318

processors:
  memory_limiter:
    check_interval: 2s
    limit_percentage: 80
    spike_limit_percentage: 20
  filter/keep-falco:
    logs:
      include:
        match_type: strict
        record_attributes:
          - key: service.name
            value: "falco"
  attributes/psirt-standard:
    actions:
      - key: service.owner
        value: psirt
        action: upsert
      - key: location.kind
        value: on-premises
        action: upsert
      - key: location.site
        value: yvr7
        action: upsert
  transform/falco:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          - set(cache, ParseJSON(body)) # ParseJSON once, into the cache, so we don't have to ParseJSON on every statement
          - set(attributes["level"], cache["Priority"]) # Set the level based on the Falco Sidekick Priority field
          - set(attributes["k8s.namespace.name"], cache["ns"]) # Align with standard OTEL attributes
  batch: {}

exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    auth:
      authenticator: basicauth
  otlphttp/loki:
    endpoint: "http://loki-gateway.loki.svc:80/otlp"

service:
  extensions: [basicauth] # the basicauth extension must be listed here for the elasticsearch exporter's auth setting to work
  pipelines:
    logs/falco:
      receivers: [otlp]
      processors:
        - filter/keep-falco
        - transform/falco
        - memory_limiter
        - attributes/psirt-standard
        - batch
      exporters:
        - otlphttp/loki
        - elasticsearch

Please note, I haven't tested this configuration as I haven't written anything to do this. It's just an illustrative example of something that I might reasonably like to accomplish, but currently can't.
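To make the request concrete, here is a hypothetical sketch of what the Sidekick side could look like; every key under otlp.logs is an assumption for this feature request, not an existing option:

# Hypothetical falcosidekick settings for the requested OTLP log output.
# None of these otlp.logs keys exist today; they only mirror the shape of
# the other OTLP outputs discussed in this issue.
otlp:
  logs:
    endpoint: "http://otel-collector.observability.svc:4318" # OTLP/HTTP receiver, e.g. the collector config above
    protocol: http # HTTP transport as requested; gRPC could be an alternative
    servicename: falco # the configurable identifying field (service.name) that the collector filters on above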
Hi, in the last release we introduced the OTLP metrics. After a discussion with the PR author, we agreed that the logs will be the next step. I hope to have them in v2.31.0.
@supertylerc I successfully integrated the OTLP logs in PR #1109. It will be available in the upcoming 2.31 release.
Motivation
I'm frustrated by the inflexibility of the logging output options. I want to be able to ship to a centralized logging collector that can then further manipulate the data. By supporting OTLP as the protocol with HTTP as the transport, I could have virtually unlimited log output options, either via vendors or systems that support OTLP directly or by shipping first to an OTEL collector that can then export to $VENDOR.
Feature
Support OTLP with HTTP transport for outputting log messages. Ideally, include some (preferably configurable) field that uniquely identifies Falco, such as service.name=falco. This would allow for easier identification when parsing/manipulating data in an OTEL pipeline. The OTLP HTTP output should be validated with an OTEL collector, such as otel-collector-contrib, with an otlp/http receiver; a minimal validation sketch follows.
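A minimal sketch of such a validation setup, assuming a recent collector build that ships the otlp receiver and the debug exporter (the endpoint and verbosity values are just placeholders):

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318 # OTLP/HTTP listener the Falco Sidekick output would target
exporters:
  debug:
    verbosity: detailed # print every received log record to the collector's stdout
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [debug]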
Alternatives
Add more logging backends and update existing logging backends to be more flexible. For example, update the Loki backend to let me set additional, arbitrary labels instead of supporting only filters for predefined fields (e.g., what if I want to add an owner label to the log entry, because all of my logs must have an associated owner?).
Additional context
By supporting OTLP log outputs, end users could potentially gain additional features. Beyond the normal OTEL features, such as easily creating custom metrics based on the logs, injecting new fields to improve standards adherence, etc., with a little bit more work, it might be possible to link a log message to an OTLP trace.
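As a concrete illustration of the "custom metrics based on the logs" point, a sketch using the count connector from opentelemetry-collector-contrib could derive an alert counter from the same OTLP log stream (the metric name, condition, and debug exporter are illustrative assumptions):

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
connectors:
  count:
    logs:
      falco.alerts.count:
        description: Number of Falco alerts received over OTLP
        conditions:
          - attributes["service.name"] == "falco"
exporters:
  debug: {}
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [count] # feed log records into the connector
    metrics:
      receivers: [count] # the connector emits the derived counter
      exporters: [debug]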