
I'm trying to create a Helm chart to provide tracing support in Azure Kubernetes Service (AKS). The chart includes:

  • The Jaeger Helm chart as a dependency:
dependencies:
- name: jaeger
  version: "3.1.2"
  repository: "https://jaegertracing.github.io/helm-charts"
  • OpenTelemetry Collector - ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      otlp/jaeger:
        endpoint: "http://test-jaeger-collector.traces-dapr-jaeger-01.svc.cluster.local:14268"
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          # List your exporter here.
          exporters: [otlp/jaeger]

  • OpenTelemetry Collector - Service:
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
    - name: otel # Default endpoint for OTEL receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
  selector:
    component: otel-collector
  • OpenTelemetry Collector - Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  replicas: 1 # scale out based on your usage
  selector:
    matchLabels:
      app: opentelemetry
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib-dev:latest
          command:
            - "/otelcontribcol"
            - "--config=/conf/otel-collector-config.yaml"
            - "--feature-gates=-component.UseLocalHostAsDefaultHost"
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 400Mi
          ports:
            - containerPort: 4317 # Default endpoint for OTEL receiver.  
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
          livenessProbe:
            httpGet:
              path: /
              port: 13133
          readinessProbe:
            httpGet:
              path: /
              port: 13133
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol

After installing the Helm chart, when I check the deployment/otel-collector logs:

 kubectl logs -n traces-dapr-jaeger-01 deployment/otel-collector

I get this error:

c/otlp.go:153   Starting HTTP server    {"kind": "receiver", "name": "otlp", "data_type": "traces", "endpoint": "0.0.0.0:4318"}
2024-08-26T08:28:50.915Z        info    service@v0.107.1-0.20240821120936-6764622672bc/service.go:230   Everything is ready. Begin running and processing data.
2024-08-26T08:29:14.190Z        info    otelcol@v0.107.1-0.20240821120936-6764622672bc/collector.go:325 Received signal from OS {"signal": "terminated"}
2024-08-26T08:29:14.190Z        info    service@v0.107.1-0.20240821120936-6764622672bc/service.go:267   Starting shutdown...
2024-08-26T08:29:14.190Z        info    extensions/extensions.go:64     Stopping extensions...
2024-08-26T08:29:14.190Z        info    service@v0.107.1-0.20240821120936-6764622672bc/service.go:281   Shutdown complete.
Lion

1 Answer


To set up OpenTelemetry and Jaeger for tracing in your AKS cluster, follow the steps below.

Add the Jaeger and OpenTelemetry Helm repositories, then install Jaeger:

helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install jaeger jaegertracing/jaeger --namespace tracing --create-namespace
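The steps above add the open-telemetry chart repository but then deploy the collector from hand-written manifests. As an alternative, the collector can be installed from that community chart. The sketch below writes a hypothetical values file; the `mode`, `image.repository`, and `config` keys follow the chart's documented values layout, but verify them against the chart version you use:

```shell
# Write a values file for the open-telemetry/opentelemetry-collector chart.
# The config block mirrors the ConfigMap built by hand in the steps below.
cat > otel-values.yaml <<'EOF'
mode: deployment
image:
  repository: otel/opentelemetry-collector-contrib
config:
  exporters:
    otlp/jaeger:
      endpoint: jaeger-collector.tracing.svc.cluster.local:4317
      tls:
        insecure: true
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [otlp/jaeger]
EOF
echo "wrote otel-values.yaml"
```

Then, against the cluster: `helm install otel-collector open-telemetry/opentelemetry-collector -n tracing -f otel-values.yaml`.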


Create a ConfigMap for the OpenTelemetry Collector

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: tracing
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      otlp/jaeger:
        # OTLP gRPC port on the Jaeger collector; 14268 (/api/traces) is the
        # legacy Thrift HTTP endpoint and does not accept OTLP.
        endpoint: "jaeger-collector.tracing.svc.cluster.local:4317"
        tls:
          insecure: true

    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp/jaeger]


Create a Deployment for the OpenTelemetry Collector

apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: tracing
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opentelemetry
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.91.0  # pin a stable version
          args:
            - "--config=/conf/otel-collector-config.yaml"  # point at the mounted ConfigMap; otherwise the image's default config is used
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 400Mi
          ports:
            - containerPort: 4317 
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
          livenessProbe:
            httpGet:
              path: /
              port: 13133
          readinessProbe:
            httpGet:
              path: /
              port: 13133
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
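Note that the liveness and readiness probes above target port 13133, which is only served when the collector's health_check extension is enabled. If that extension is missing from the config, the probes fail and the kubelet kills the pod, which matches the `Received signal from OS {"signal": "terminated"}` shutdown in the question's logs. A minimal sketch of the extra configuration, to be merged into the ConfigMap above:

```yaml
extensions:
  health_check:
    endpoint: "0.0.0.0:13133"  # the port the probes check

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
```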


Create a Service for the OpenTelemetry Collector

apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: tracing
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
    - name: otel
      port: 4317
      protocol: TCP
      targetPort: 4317
  selector:
    app: opentelemetry


Verify the Setup

kubectl get pods -n tracing


Now check the logs:

kubectl logs -n tracing deployment/otel-collector


The collector now starts and keeps running, without the error you described (a shutdown after receiving a termination signal from the OS). Your logs show active processing, including CPU and memory metrics.
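As a quick smoke test, you can hand-craft a minimal OTLP/JSON trace and POST it to the collector's HTTP receiver on port 4318. The payload below follows the OTLP/HTTP field names; the trace and span IDs are arbitrary hex strings:

```shell
# Minimal OTLP/JSON trace payload: one resource ("smoke-test" service)
# with one span named "smoke-span".
PAYLOAD='{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8aa5a2d2c872e8321cf37308d69df2","spanId":"051581bf3cb55c13","name":"smoke-span","kind":1,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}'

# Sanity-check the payload locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
```

With `kubectl -n tracing port-forward deployment/otel-collector 4318:4318` running in another terminal, send it with `curl -X POST http://localhost:4318/v1/traces -H "Content-Type: application/json" -d "$PAYLOAD"`; the span should then appear in the Jaeger UI under the `smoke-test` service.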

Arko