
Audit kagent prompts#

Audit all prompts (inputs) and replies (outputs) from your agents.

Such auditing helps security, compliance, and similar teams examine how users interact with your kagent environment. For example, you might check that users are not sharing personally identifiable information (PII) or other sensitive data when chatting with LLMs, or review past conversations with agents to understand which systems they interacted with.

About auditing prompts#

Kagent integrates with OpenTelemetry (OTel) systems to emit input/output messages as log events, which you can ingest into your logging or security information and event management (SIEM) systems for auditing purposes.

Kagent supports logging input/output messages for the following LLM providers:

  • OpenAI
  • Anthropic

Before you begin#

  1. Install kagent in your cluster.

  2. Add the OpenTelemetry Helm repository.

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm repo update
  3. Set up a logging backend that supports OpenTelemetry's OTLP protocol, such as Grafana Loki, Datadog, Splunk, or other OTLP-compatible systems. Example Loki configuration:

    helm upgrade --install loki loki \
    --repo https://grafana.github.io/helm-charts \
    --version 6.24.0 \
    --namespace telemetry \
    --create-namespace \
    --values - <<EOF
    loki:
      commonConfig:
        replication_factor: 1
      schemaConfig:
        configs:
          - from: 2024-04-01
            store: tsdb
            object_store: s3
            schema: v13
            index:
              prefix: loki_index_
              period: 24h
      auth_enabled: false
    singleBinary:
      replicas: 1
    minio:
      enabled: true
    gateway:
      enabled: false
    test:
      enabled: false
    monitoring:
      selfMonitoring:
        enabled: false
        grafanaAgent:
          installOperator: false
      lokiCanary:
        enabled: false
    limits_config:
      allow_structured_metadata: true
    memberlist:
      service:
        publishNotReadyAddresses: true
    deploymentMode: SingleBinary
    backend:
      replicas: 0
    read:
      replicas: 0
    write:
      replicas: 0
    ingester:
      replicas: 0
    querier:
      replicas: 0
    queryFrontend:
      replicas: 0
    queryScheduler:
      replicas: 0
    distributor:
      replicas: 0
    compactor:
      replicas: 0
    indexGateway:
      replicas: 0
    bloomCompactor:
      replicas: 0
    bloomGateway:
      replicas: 0
    EOF
  4. Install a tracing backend that supports OpenTelemetry's OTLP protocol, such as Tempo, Jaeger, Zipkin, or other OTLP-compatible systems. Example Tempo configuration:

    helm upgrade --install tempo tempo \
    --repo https://grafana.github.io/helm-charts \
    --version 1.16.0 \
    --namespace telemetry \
    --create-namespace \
    --values - <<EOF
    persistence:
      enabled: false
    tempo:
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
    EOF
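  5. Optional: Confirm that both backends are up before you continue. For example, check the pods in the telemetry namespace (namespace and release names as used in the previous steps):

    kubectl get pods -n telemetry

    The loki and tempo pods should be in the Running state.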

Optional: Install an OpenTelemetry collector#

For production environments, use an OpenTelemetry collector as an intermediary between kagent and your logging backend. The collector provides more control over what metadata and content you send to your logging systems.
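If you need to strip or mask sensitive content before it leaves the cluster, you can add a processor to the collector configuration that you create in the next step. The snippet below is a minimal sketch rather than part of the default setup: it assumes the contrib image (otel/opentelemetry-collector-contrib), which bundles the transform processor, and that the log body is a map with a content field, as in the verification output later on this page. The processor name transform/redact and the regular expression are illustrative only.

# Merge into the config section of the collector Helm values (requires the
# otel/opentelemetry-collector-contrib image, which includes the transform processor).
processors:
  transform/redact:
    log_statements:
      - context: log
        statements:
          # Mask US SSN-like strings in the message content before export.
          - replace_pattern(body["content"], "\\b\\d{3}-\\d{2}-\\d{4}\\b", "[REDACTED]")
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [transform/redact, batch]
      exporters: [debug, otlphttp]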

  1. Create a Helm values file for the OpenTelemetry collector with the following configuration.

    • Debug exporter to view detailed logs for testing and verification purposes.

    • Logging OTLP receiver, exporter, and pipeline that you can point to your logging backend. Replace <your-logging-backend-endpoint> with the endpoint of your logging backend. For example:

      • Grafana Loki in the same cluster: http://loki.telemetry.svc.cluster.local:3100
      • Datadog: https://api.datadoghq.com
      • A custom OTLP endpoint: https://your-otel-endpoint.com:4317
    • Tracing OTLP receiver, exporter, and pipeline that you can point to your tracing backend. Replace <your-tracing-backend-endpoint> with the endpoint of your tracing backend. For example:

      • Tempo in the same cluster: http://tempo.telemetry.svc.cluster.local:4317
      • Jaeger: http://jaeger.telemetry.svc.cluster.local:6831
      • Zipkin: http://zipkin.telemetry.svc.cluster.local:9411
      • A custom OTLP endpoint: https://your-otel-endpoint.com:4317
    cat > otel-collector-audit.yaml <<EOF
    mode: deployment
    image:
      repository: otel/opentelemetry-collector
    config:
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
      processors:
        batch:
          timeout: 10s
          send_batch_size: 1024
      exporters:
        debug:
          verbosity: detailed
        otlphttp:
          endpoint: "http://loki.telemetry.svc.cluster.local:3100/otlp"
          tls:
            insecure: true
        otlp/tempo:
          endpoint: "http://tempo.telemetry.svc.cluster.local:4317"
          tls:
            insecure: true
      service:
        pipelines:
          logs:
            receivers: [otlp]
            processors: [batch]
            exporters: [debug, otlphttp]
          traces:
            receivers: [otlp]
            processors: [batch]
            exporters: [otlp/tempo]
    EOF
  2. Install the OTel collector using the Helm values file that you created.

    helm install opentelemetry-collector-audit open-telemetry/opentelemetry-collector \
    --namespace telemetry \
    --create-namespace \
    --values otel-collector-audit.yaml
  3. Verify that the collector is running.

    kubectl get pods -n telemetry -l app.kubernetes.io/name=opentelemetry-collector

    Example output:

    NAME READY STATUS RESTARTS AGE
    opentelemetry-collector-audit-xxxxxxxxx-xxxxx 1/1 Running 0 30s

Configure kagent to use the collector#

After the OpenTelemetry collector is running, configure kagent to send logs and traces to it.

  1. Get your current Helm values for kagent.

    helm get values kagent -n kagent -o yaml > values.yaml
  2. Update the Helm values file to include the following settings. Replace the logging and tracing endpoints with the OTel collector endpoints that you previously set up. Make sure that you use a supported LLM provider, such as OpenAI or Anthropic.

    Note: If you find the traces emitted by default too verbose, you can disable them by setting otel.tracing.enabled=false. Logging still works even if tracing is disabled.

    providers:
      # OpenAI or Anthropic are supported for auditing
      default: openAI
      openAI:
        apiKey: $OPENAI_API_KEY
    otel:
      tracing:
        enabled: true
        exporter:
          otlp:
            endpoint: http://opentelemetry-collector-audit.telemetry.svc.cluster.local:4317
            timeout: 15
            insecure: true
      logging:
        enabled: true
        exporter:
          otlp:
            endpoint: http://opentelemetry-collector-audit.telemetry.svc.cluster.local:4317
            timeout: 15
            insecure: true
  3. Upgrade kagent with the updated Helm values. Set $VERSION to the version of kagent that you want to upgrade to.

    helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
    --namespace kagent \
    --version $VERSION \
    -f values.yaml
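  4. Optional: Check that the kagent pods are healthy after the upgrade.

    kubectl get pods -n kagent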

Verify the setup#

  1. Generate some traffic by invoking an agent that uses OpenAI or Anthropic. For example, you might ask the Helm agent how many Helm releases are currently deployed in your cluster.

  2. Check that logs are being collected by the OpenTelemetry collector.

    kubectl -n telemetry logs -l app.kubernetes.io/name=opentelemetry-collector

    Example output:

    Trace ID: c421bc11e93daaceee59b9e5ff8aa6d0
    Span ID: 02552aea76988990
    Flags: 1
    LogRecord #115
    ObservedTimestamp: 2026-01-12 16:02:13.278948633 +0000 UTC
    Timestamp: 2026-01-12 16:02:13.278943258 +0000 UTC
    SeverityText:
    SeverityNumber: Info(9)
    Body: Map({"content":"NAME \tNAMESPACE\tREVISION\tUPDATED \tSTATUS \tCHART \tAPP VERSION\nkagent \tkagent \t3 \t2026-01-12 09:40:29.568344 -0500 -0500\tdeployed\tkagent-0.7.8 \t \nkagent-crds\tkagent \t1 \t2026-01-09 11:31:00.362347 -0500 -0500\tdeployed\tkagent-crds-0.7.8\t \n"})
    Attributes:
    -> gen_ai.system: Str(openai)
    -> event.name: Str(gen_ai.tool.message)
    Trace ID: c421bc11e93daaceee59b9e5ff8aa6d0
    Span ID: 02552aea76988990
    Flags: 1
    LogRecord #116
    ObservedTimestamp: 2026-01-12 16:02:13.278959716 +0000 UTC
    Timestamp: 2026-01-12 16:02:13.278954424 +0000 UTC
    SeverityText:
    SeverityNumber: Info(9)
    Body: Map({"content":"NAME \tNAMESPACE\tREVISION\tUPDATED \tSTATUS \tCHART \tAPP VERSION\nloki \ttelemetry\t1 \t2026-01-09 11:08:29.179771 -0500 -0500\tdeployed\tloki-6.24.0 \t3.3.2 \nopentelemetry-collector-audit\ttelemetry\t2 \t2026-01-12 09:40:18.82713 -0500 -0500 \tdeployed\topentelemetry-collector-0.143.0\t0.143.0 \ntempo \ttelemetry\t1 \t2026-01-09 11:24:31.685855 -0500 -0500\tdeployed\ttempo-1.16.0 \t2.6.1 \n"})
    Attributes:
    -> gen_ai.system: Str(openai)
    -> event.name: Str(gen_ai.tool.message)
    Trace ID: c421bc11e93daaceee59b9e5ff8aa6d0
    Span ID: 02552aea76988990
    Flags: 1
    LogRecord #117
    ObservedTimestamp: 2026-01-12 16:02:13.278976383 +0000 UTC
    Timestamp: 2026-01-12 16:02:13.278964341 +0000 UTC
    SeverityText:
    SeverityNumber: Info(9)
    Body: Map({"content":"In the kagent and telemetry namespaces."})
    Attributes:
    -> gen_ai.system: Str(openai)
    -> event.name: Str(gen_ai.user.message)
    ...
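  3. Optional: Query the audited messages directly from the logging backend. The following is a minimal sketch for the Loki setup on this page; the service name created by the chart, the label mapping, and the default one-hour time range are assumptions based on Loki's OTLP ingestion defaults, so adjust them for your environment.

    # Port-forward the Loki service (release name from the earlier example)
    kubectl -n telemetry port-forward svc/loki 3100:3100

    # In another terminal: list user prompts from the last hour. Loki stores OTLP log
    # attributes as structured metadata with dots replaced by underscores, so the
    # gen_ai.user.message event shows up under the event_name label.
    curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
      --data-urlencode 'query={service_name=~".+"} | event_name="gen_ai.user.message"'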

Cleanup#

To remove the OpenTelemetry collector:

helm uninstall opentelemetry-collector-audit -n telemetry
kubectl delete namespace telemetry

To disable logging and tracing in kagent:

helm upgrade kagent oci://ghcr.io/kagent-dev/kagent/helm/kagent \
--namespace kagent \
--reuse-values \
--version $VERSION \
--set otel.logging.enabled=false \
--set otel.tracing.enabled=false