Configuration - open-telemetry/opentelemetry-demo

Configuration in the context of the OpenTelemetry demo project (https://github.com/open-telemetry/opentelemetry-demo/) involves configuring the individual demo services, the OpenTelemetry instrumentation they use, and shared components such as the OpenTelemetry Collector. The sections below cover common configuration areas with examples:
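In the demo itself, much of this configuration is surfaced through Docker Compose environment variables. As a minimal sketch of how a single demo service can be reconfigured locally, a docker-compose.override.yml might look like the following (the service name frontend and the collector host otel-collector are assumptions for illustration, not copied from the demo's compose files):

services:
  frontend:
    environment:
      # Logical service name reported by the SDK (assumed value).
      - OTEL_SERVICE_NAME=frontend
      # Where the SDK sends OTLP data (assumed host name for the demo's collector).
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317

Docker Compose merges such an override file with the main compose file, so individual services can be tuned without editing the demo's checked-in configuration.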

OpenTelemetry Collector Configuration

The OpenTelemetry Collector is a vendor-neutral daemon that receives, processes, and exports telemetry data (metrics, traces, and logs) from a variety of sources. Here’s an example of a basic valid OpenTelemetry Collector configuration file:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "localhost:4317"
      http:
        endpoint: "localhost:4318"
processors:
  batch:
exporters:
  # NOTE: Prior to v0.86.0, use the `logging` exporter instead of `debug`.
  debug:
connectors:
  example:
service:
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - batch
      exporters:
        - example
    metrics:
      receivers:
        - example
      exporters:
        - debug

This configuration defines OTLP receivers for both gRPC and HTTP, a batch processor, a debug exporter, and an example connector that acts as an exporter in the traces pipeline and as a receiver in the metrics pipeline.

Source: https://opentelemetry.io/docs/collector/build-connector/
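The example connector above is a placeholder. A concrete connector from the contrib distribution makes the data flow easier to see: the sketch below uses the spanmetrics connector, which consumes spans at the end of the traces pipeline and emits request metrics into the metrics pipeline. Treat it as an illustration of the pattern rather than the demo's exact configuration.

receivers:
  otlp:
    protocols:
      grpc:
      http:
connectors:
  # Derives call counts and latency metrics from incoming spans (contrib distribution).
  spanmetrics:
exporters:
  debug:
service:
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - spanmetrics
    metrics:
      receivers:
        - spanmetrics
      exporters:
        - debug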

Configuring OpenTelemetry Instrumentation

OpenTelemetry instrumentation can be configured using environment variables, which are supported across multiple programming languages and SDKs. For example, the OTEL_SERVICE_NAME environment variable sets the service.name resource attribute, i.e. the logical name of the application:

export OTEL_SERVICE_NAME=my-application

Source: https://grafana.com/docs/opentelemetry/instrumentation/configuration/environment-variables
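Environment variables are not limited to the SDKs: the Collector's own configuration file supports ${env:VAR} substitution, which helps keep endpoints and credentials out of the YAML. A small sketch follows; the variable name OTLP_GRPC_ENDPOINT is an arbitrary example, not a standard name.

receivers:
  otlp:
    protocols:
      grpc:
        # Resolved from the environment at startup, e.g. OTLP_GRPC_ENDPOINT=0.0.0.0:4317
        endpoint: "${env:OTLP_GRPC_ENDPOINT}"
exporters:
  debug:
service:
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - debug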

OpenTelemetry Collector for Grafana Cloud

To send traces to Grafana Cloud's Tempo service with the OpenTelemetry Collector, point an OTLP exporter at your Tempo endpoint (Tempo ingests OTLP directly; the Collector has no dedicated tempo exporter). A configuration file could look like this:

receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  otlphttp/tempo:
    endpoint: "https://<your-tempo-endpoint>"
    headers:
      authorization: "Bearer <your-tempo-token>"
service:
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - otlphttp/tempo

Replace <your-tempo-endpoint> and <your-tempo-token> with your actual Tempo endpoint and token.

Source: https://grafana.com/docs/opentelemetry/instrumentation/configuration/
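If the Grafana Cloud stack uses basic authentication (an instance ID plus an API token) instead of a bearer token, the contrib distribution's basicauth extension can supply the credentials. The following is a sketch under that assumption, with the angle-bracket placeholders again standing in for real values:

extensions:
  basicauth/tempo:
    client_auth:
      username: "<your-instance-id>"
      password: "<your-api-token>"
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  otlphttp/tempo:
    endpoint: "https://<your-tempo-endpoint>"
    auth:
      authenticator: basicauth/tempo
service:
  extensions:
    - basicauth/tempo
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - otlphttp/tempo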

Scaling the OpenTelemetry Collector

The OpenTelemetry Collector can be scaled horizontally by running multiple instances and distributing the data between them. A common first step is a two-tier layout in which an edge collector forwards traces to a second, gateway collector over OTLP:

receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  # Forward all traces to a second (gateway) collector over OTLP.
  otlp/gateway:
    endpoint: "collector-2:4317"
    tls:
      insecure: true
extensions:
  health_check:
service:
  extensions:
    - health_check
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - otlp/gateway

In this example, the otlp/gateway exporter forwards all trace data to a second Collector instance listening on collector-2:4317, and the health_check extension exposes a liveness endpoint that a load balancer or orchestrator can probe. To spread traffic across several downstream collectors instead of a single one, see the loadbalancing exporter sketch below.

Source: https://grafana.com/docs/opentelemetry/collector/how-to-scale/
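For genuine load balancing, the contrib distribution provides a loadbalancing exporter. The sketch below uses a static list of two backends (the host names are illustrative); spans are routed by trace ID so that all spans of a trace reach the same backend:

receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  loadbalancing:
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:
          - collector-1:4317
          - collector-2:4317
service:
  pipelines:
    traces:
      receivers:
        - otlp
      exporters:
        - loadbalancing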

Sending Logs to Loki using the OpenTelemetry Collector

To send logs to Loki using the OpenTelemetry Collector, configure a Loki exporter; a Loki receiver can additionally accept logs pushed by clients that speak the Loki push API (such as Promtail) and pass them through the same pipeline:

receivers:
  # Accepts logs pushed in the Loki format (for example by Promtail).
  loki:
    protocols:
      http:
        endpoint: "0.0.0.0:3500"
exporters:
  # Forwards log records to Loki's push API.
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
service:
  pipelines:
    logs:
      receivers:
        - loki
      exporters:
        - loki

This configuration accepts Loki-formatted log pushes on the receiver side and exports the resulting log records to a Loki instance reachable at loki:3100.

Source: https://grafana.com/docs/opentelemetry/collector/send-logs-to-loki/loki-receiver
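To control which attributes become Loki labels, the Loki exporter honors hint attributes (loki.resource.labels and loki.attribute.labels) that can be set with the resource and attributes processors. The following is a sketch that promotes service.name and http.status_code to labels, assuming the same Loki receiver and exporter as above:

receivers:
  loki:
    protocols:
      http:
        endpoint: "0.0.0.0:3500"
processors:
  resource:
    attributes:
      # Hint: expose the service.name resource attribute as a Loki label.
      - action: insert
        key: loki.resource.labels
        value: service.name
  attributes:
    actions:
      # Hint: expose the http.status_code log attribute as a Loki label.
      - action: insert
        key: loki.attribute.labels
        value: http.status_code
exporters:
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"
service:
  pipelines:
    logs:
      receivers:
        - loki
      processors:
        - resource
        - attributes
      exporters:
        - loki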