      -----------------------------------------------------------------
       C wrapper for the C++ OpenTelemetry API, enabling C integration
      -----------------------------------------------------------------


Summary
------------------------------------------------------------------------------

   1. Introduction
   2. Build instructions
     2.1. Prerequisites for building the OTel C wrapper library
     2.2. Compiling and installing the OTel C++ client
     2.3. Compiling and installing the OTel C wrapper library
   3. Testing the operation of the library
   4. Basic concepts of OpenTelemetry
   5. Library API overview
   6. YAML configuration
     6.1. Document structure
     6.2. Exporters
     6.3. Samplers
     6.4. Processors
     6.5. Readers
     6.6. Providers
     6.7. Signals
     6.8. Environment variables
   7. Tracing example
   8. Metrics example
   9. Logging example
  10. Thread safety
  11. Known bugs and limitations


1. Introduction
------------------------------------------------------------------------------

This document does not cover the internal structure of OpenTelemetry (often
abbreviated as OTel) or the background that motivated the creation of an
observability framework.  Detailed rationale and documentation are available
from many sources, especially the official OTel website
https://opentelemetry.io/ .

The OTel C wrapper library was primarily developed for use in the HAProxy OTel
filter to enable effective observability: it exports telemetry data so that the
performance and behavior of the software can be analyzed more easily.

This library is a C wrapper for the official OTel C++ client, with the source
repository available at https://github.com/open-telemetry/opentelemetry-cpp .


2. Build instructions
------------------------------------------------------------------------------

2.1. Prerequisites for building the OTel C wrapper library
------------------------------------------------------------------------------

To simplify the process of compiling all the libraries required by the OTel C
wrapper, several shell scripts have been created and are available in the
scripts/build directory.  All scripts have been tested on the following Linux
distributions for the amd64 architecture:

  * debian 11 / 12 / 13
  * ubuntu 20.04.6 / 22.04.5 / 24.04.3 / 25.10
  * tuxedo 24.04.3
  * rhel 8.10 / 9.5 / 10.0
  * rocky 9.5
  * opensuse-leap 15.5 / 15.6

Linux distributions for other architectures, as well as other operating systems
supported by the OTel C++ client (such as BSD, macOS, and Windows), have not
been tested.

To install all the required packages for compiling and installing the OTel C
wrapper library, execute the linux-update.sh script located in the scripts/build
directory:

  # cd scripts/build
  # ./linux-update.sh

Note: the '%' prompt indicates that the command is being executed by a non-
      privileged user, whereas the '#' prompt indicates that the command is
      being executed by the root user.

The script linux-update.sh does not accept any arguments.

To summarize, the script installs the necessary packages for compiling and
setting up all required libraries, with the most important ones being:

  * GNU GCC Compiler and Development Environment
  * GNU autoconf / automake / libtool / make
  * Kitware CMake
  * various developer packages of the system libraries (libc, curl, ssl, zlib,
    lzma, systemd)
  * various utilities for downloading source code repositories (git, wget, curl)


2.2. Compiling and installing the OTel C++ client
------------------------------------------------------------------------------

Installing all the prerequisite libraries can be quite demanding, which is why
several installation scripts have been created to simplify the process.  Each
prerequisite library has its own dedicated installation script, but it is not
recommended to run them separately.  Instead, it's advised to use one of the
following two scripts:

  * build.sh - each prerequisite library is compiled and installed individually,
    following a predefined order
  * build-bundle.sh - the OTel C++ client is compiled in such a way that it
    automatically downloads and compiles all the necessary prerequisite
    libraries

It is strongly recommended to use the provided build scripts rather than relying
on system-installed dependency packages, which are likely outdated or compiled
with options incompatible with the OTel C wrapper.

If none of the attached build-*.sh scripts is used, the patches in
scripts/build/ must be applied to the OpenTelemetry C++ source tree before
compilation and the same CMake configuration options found in
scripts/build/opentelemetry-cpp-1.26.0-install.sh must be used.

Whichever script is used, the result should be the same.  However, it is
recommended to use the build-bundle.sh script for this task.  In that script,
the AWS-LC and curl build steps are commented out by default, so the script uses
the OpenSSL and curl libraries present on the system.  An SSL library (OpenSSL
or another supported implementation) must be installed on the system; if
curl is not available, the OTel C++ build process downloads and compiles it
automatically.  Users can uncomment the AWS-LC and curl steps if they prefer to
build those from source as well.

Example of how to run an installation script:

  # cd scripts/build
  # ./build-bundle.sh

Both scripts accept multiple input arguments.  The first argument specifies the
destination directory where packages will be installed, and the second argument
defines the location of the root directory for installation.  However, it is not
recommended to set the 'install-dir' argument, as it is intended solely for
debugging purposes.

  # ./build-bundle.sh [ prefix-dir [ install-dir [ lib-type ] ] ]

The lib-type argument controls how the OTel C++ SDK libraries are built.
Accepted values are "dynamic" (default) and "static".  When set to "static", the
SDK is compiled as static archives (.a) instead of shared libraries (.so); this
is required when the OTel C wrapper itself will be linked statically.

By default, libraries are installed in the /opt directory (prefix = '/opt').

Finally, the installation script will verify that all library dependencies for
programs in the <prefix>/bin directory and libraries in the <prefix>/lib
directory are met, which is done by running the ldd utility.

Note: it is possible that some prerequisite libraries are already installed on
      the system (as part of the operating system).  This can lead to errors
      when compiling the OTel C++ client.  For this reason, it is recommended
      to use the provided script for installation and to install the libraries
      in a non-system directory of your choice, such as /opt or /usr/local.

If one prefers to use the AWS-LC cryptographic library instead of the
system-installed OpenSSL, the AWS-LC and curl build steps can be uncommented in
the installation scripts.

List of (almost) all dependencies for the OTel C++ client package:

  * AWS libcrypto (AWS-LC)
    https://github.com/aws/aws-lc
  * curl - a command-line tool for transferring data from or to a server using
    URLs
    https://github.com/curl/curl
  * Abseil - C++ Common Libraries
    https://github.com/abseil/abseil-cpp
  * c-ares - a modern DNS (stub) resolver library
    https://github.com/c-ares/c-ares
  * RE2, a regular expression library
    https://github.com/google/re2
  * Protocol Buffers - Google's data interchange format
    https://github.com/protocolbuffers/protobuf
  * JSON for Modern C++
    https://github.com/nlohmann/json
  * GoogleTest - Google's C++ test framework
    https://github.com/google/googletest
  * Benchmark - a library to benchmark code snippets, similar to unit tests
    https://github.com/google/benchmark
  * gRPC - an RPC library and framework
    https://github.com/grpc/grpc
  * Rapid YAML - a C++ library to parse and emit YAML
    https://github.com/biojppm/rapidyaml
  * OpenTelemetry C++ - the C++ OpenTelemetry client
    https://github.com/open-telemetry/opentelemetry-cpp

This is not a complete list of dependencies; other libraries present on the
operating system, such as libidn, libpsl, libunistring, zlib, and libzstd, are
also required.

Additional information on this topic can be found at:
https://github.com/open-telemetry/opentelemetry-cpp/blob/main/docs/dependencies.md


2.3. Compiling and installing the OTel C wrapper library
------------------------------------------------------------------------------

Along with the OTel C++ client, the OTel C wrapper library depends on a YAML
parsing library.  By default, rapidyaml (ryml) is used, and this is the
recommended configuration.  Since rapidyaml is already built and installed as
part of the OTel C++ client dependencies, no additional steps are required.

Alternatively, libfyaml can be used instead by explicitly specifying the
--with-libfyaml option (autotools) or -DWITH_LIBFYAML=ON (CMake).  The two
YAML backends are mutually exclusive; only one can be enabled at a time.

  * Rapid YAML - a C++ library to parse and emit YAML
    https://github.com/biojppm/rapidyaml
  * libfyaml - a fully-featured YAML 1.2 and JSON parser/writer
    https://github.com/pantoniou/libfyaml

Once the OTel C++ client is installed, the OTel C wrapper library can be
compiled and installed.  It can also be built in debug mode, which enables
detailed logging of the library's operations, such as internal function
calls.

In this example, we will install two versions of the library: the release
version first, followed by the debug version.  Both versions will be installed
in the /opt directory.

  % git clone https://github.com/haproxytech/opentelemetry-c-wrapper.git
  % cd opentelemetry-c-wrapper
  % ./scripts/bootstrap
  % ./configure --prefix=/opt --with-opentelemetry=/opt
  % make
  # make install

  % ./scripts/distclean
  % ./scripts/bootstrap
  % ./configure --prefix=/opt --with-opentelemetry=/opt --enable-debug
  % make
  # make install

Alternatively, the library can be compiled using CMake:

  % mkdir build && cd build
  % cmake -DCMAKE_INSTALL_PREFIX=/opt -DOPENTELEMETRY_DIR=/opt ..
  % make
  # make install

To build the debug version, add the -DENABLE_DEBUG=ON option to the cmake
command above.

The wrapper library can also be built as a static archive (.a) instead of a
shared library (.so).  This requires the OTel C++ SDK to be compiled as static
libraries as well (see section 2.2, lib-type argument).

With autotools, both static and shared libraries are built by default.  To build
only a static library, pass --disable-shared:

  % ./configure --prefix=/opt --with-opentelemetry=/opt \
        --disable-shared

With CMake, use the BUILD_STATIC option:

  % cmake -DCMAKE_INSTALL_PREFIX=/opt -DOPENTELEMETRY_DIR=/opt \
        -DBUILD_STATIC=ON ..


3. Testing the operation of the library
------------------------------------------------------------------------------

The library includes a test suite located in the test/ directory.  Test
programs are not compiled during the regular build; use 'make test' to compile
them.  The main test program simulates a worker process that generates traces,
metrics, and logs.

  % make test
  % cd test
  % ./otel-c-wrapper-test --help
  % ./otel-c-wrapper-test --runcount=10 --threads=8

The test program uses otel-cfg.yml as its library configuration file.

When built with the debug option, the test binary is named
otel-c-wrapper-test_dbg.  If present at install time, the test binary is
installed into the <prefix>/bin directory by 'make install'.

For integration testing with an OpenTelemetry Collector and a backend (such as
Elasticsearch/Kibana or ClickHouse), you can use the Docker Compose setup
provided in test/elastic-apm/.

A reference OpenTelemetry Collector configuration is provided in test/otelcol/.
The OpenTelemetry Collector source is available at
https://github.com/open-telemetry/opentelemetry-collector .  The configuration
collects all three signals (traces, metrics, and logs) over OTLP/gRPC and
OTLP/HTTP, and exports traces to a Jaeger instance reachable at a local IP
address via OTLP/HTTP.

  % cd test/elastic-apm
  % docker compose up -d

Ensure that the otel-cfg.yml configuration file is correctly set up to point
to the collector's endpoint.


4. Basic concepts of OpenTelemetry
------------------------------------------------------------------------------

OpenTelemetry (OTel) is an observability framework for cloud-native software.
It provides a standardized, vendor-neutral way to create and collect telemetry
data (traces, metrics, and logs).  Understanding its core concepts is key to
using this wrapper library effectively.

The main components of OTel are:

  * Signals: OTel classifies telemetry data into several categories, known as
    signals.  The three primary signals are traces, metrics, and logs, all of
    which this library supports.

  * API (Application Programming Interface): This is a set of interfaces that
    you use to instrument your code.  For example, you use the API to get a
    tracer, start a span, or record a metric.  The API is decoupled from the
    implementation.

  * SDK (Software Development Kit): This is the official implementation of the
    API.  The SDK provides the configuration and logic for processing telemetry
    data.  It allows you to configure exporters, processors, and samplers.

  * Exporter: An exporter is a component that sends telemetry data to a specific
    backend or collector.  For example, you might use an OTLP (OpenTelemetry
    Protocol) exporter to send data to an OTel Collector, or a Jaeger exporter
    to send data directly to a Jaeger instance.

  * Collector: The OTel Collector is a standalone service that can receive,
    process, and export telemetry data.  It acts as a flexible pipeline,
    allowing you to transform and filter data before it reaches your
    observability backend.

Key concepts related to tracing:

  * Trace: A trace represents the entire journey of a request as it moves
    through all the services in a distributed system.  A single trace is
    composed of one or more spans.

  * Span: A span represents a single unit of work or operation within a trace,
    such as an HTTP request, a database query, or a function call.  Spans have
    a start time, an end time, attributes (key-value pairs), events (timestamped
    log messages), links (references to related spans in other traces), and a
    status.

  * Context Propagation: This is the mechanism that allows OTel to correlate
    spans across different services.  When a service makes a call to another
    service, context (which includes the current trace ID and span ID) is
    injected into the request (e.g., as HTTP headers).  The receiving service
    extracts this context to create a new child span, linking it to the parent
    span in the calling service.

For a more in-depth understanding, it is highly recommended to review the
official OpenTelemetry documentation: https://opentelemetry.io/docs/concepts/


5. Library API overview
------------------------------------------------------------------------------

The library provides a pure C API on top of the OpenTelemetry C++ SDK.  The API
is organized around instance structs that each carry a pointer to an
operations vtable (a struct of function pointers), one pair per telemetry
signal:

  * struct otelc_tracer - creates trace spans and propagates context
  * struct otelc_meter  - creates and records metric instruments
  * struct otelc_logger - emits structured log records

Every instance struct carries an 'err' member (a character pointer holding the
last error message), a 'scope_name' member (the instrumentation scope name read
from the YAML configuration), and an 'ops' pointer to the operations vtable.
Operations are invoked through the ops pointer:

  tracer->ops->start_span(tracer, "name")

The header <opentelemetry-c-wrapper/define.h> provides the convenience macros
OTELC_OPS() and OTELC_OPSR() (the latter passes &ptr so the callee can set the
pointer to NULL on destroy/end):

  OTELC_OPS(tracer, start_span, "name")
  OTELC_OPSR(span, end)

The typical usage follows this lifecycle:

  1. otelc_init(cfgfile, &err)      - parse the YAML configuration file
  2. otelc_*_create(&err)           - allocate a signal instance
  3. instance->ops->start(instance) - start the signal pipeline
  4. (use the signal)               - create spans, record metrics, logs
  5. otelc_deinit(...)              - shut down all signals, free memory

The otelc_deinit() function accepts pointers to all three signal types and
destroys whichever ones are non-NULL:

  otelc_deinit(&tracer, &meter, &logger);

Functions that can fail return OTELC_RET_OK (0) on success or OTELC_RET_ERROR
(-1) on failure.  Functions that create resources return a pointer on success or
NULL on failure.

The library also provides several utility types for passing structured data to
the API:

  * struct otelc_value    - a tagged union for bool, integer, double, and
                            string values
  * struct otelc_kv       - a key-value pair (key string + otelc_value)
  * struct otelc_text_map - a dynamic array of key-value string pairs

Additional utility functions complete the public API:

  * otelc_ext_init()                - register custom malloc/free/thread-ID
  * otelc_log_set_handler()         - install an SDK diagnostic log callback
  * otelc_log_set_level()           - set the SDK internal log level
  * otelc_processor_dropped_count() - query dropped span/log counts
  * otelc_span_context_create()     - construct a span context from raw IDs

All public headers reside under include/opentelemetry-c-wrapper/.  Including
<opentelemetry-c-wrapper/include.h> pulls in all of them.


6. YAML configuration
------------------------------------------------------------------------------

The library reads its configuration from a YAML file whose path is passed to
the otelc_init() function.  Named pipeline components are defined in top-level
sections and bound together per signal type in the 'signals' section.  All
string values have a maximum length of 4095 characters.  Boolean values accept
"true" / "1" for true and "false" / "0" for false (case-insensitive).


6.1. Document structure
------------------------------------------------------------------------------

The YAML file contains the following top-level sections:

  * exporters  - define where telemetry data is sent
  * samplers   - control trace sampling strategy (traces only)
  * processors - define how telemetry is batched before export (traces and logs
                 only)
  * readers    - configure periodic metric collection intervals (metrics only)
  * providers  - set resource attributes attached to all telemetry
  * signals    - bind the above components together per signal type

Each section (except 'signals') contains named configuration blocks.  The
'signals' section references these blocks by name, either as a single name or
as a YAML list of names.
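
For instance, a signal can reference several exporters at once by using the
list form (this is only a fragment; the exporter names are illustrative):

```yaml
signals:
  traces:
    exporters:
      - otlp_exporter
      - debug_exporter
```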

Here is a minimal configuration that exports traces to stdout:

  exporters:
    my_exporter:
      type:     ostream
      filename: /dev/stdout

  processors:
    my_processor:
      type: single

  samplers:
    my_sampler:
      type: always_on

  providers:
    my_provider:
      resources:
        - service.name: "my-service"

  signals:
    traces:
      scope_name: "my-application"
      exporters:  my_exporter
      samplers:   my_sampler
      processors: my_processor
      providers:  my_provider

A complete configuration covering all three signals and multiple exporter types
can be found in the file test/otel-cfg.yml.

Components that spawn background threads (batch processors, OTLP File and HTTP
exporters, and periodic metric readers) accept two optional settings that
control the operating system thread properties:

  * thread_name - sets the OS thread name via pthread_setname_np(), truncated
                  to 15 characters.  Useful for identifying SDK threads in
                  debuggers and monitoring tools.

  * cpu_id      - binds the thread to a specific CPU core via
                  pthread_setaffinity_np().  Accepts a core number
                  (0-OTEL_MAX_CPU_ID) or -1 to leave the affinity unset
                  (the default).  This setting requires sched.h and is only
                  effective on Linux.

Both settings are configured per component.  For example:

  processors:
    my_processor:
      type:        batch
      thread_name: "proc/batch trac"
      cpu_id:      2

The OTLP/gRPC exporter accepts these settings in the YAML file for
configuration consistency, but they are not applied because the OTel
C++ SDK does not provide runtime options for gRPC exporter threads.


6.2. Exporters
------------------------------------------------------------------------------

Exporters define where telemetry data is sent.  Each named exporter block must
include a 'type' key that selects the exporter backend.  The available exporter
types and their signal support are:

  Type              Traces  Metrics  Logs
  ----              ------  -------  ----
  otlp_file         yes     yes      yes
  otlp_grpc         yes     yes      yes
  otlp_http         yes     yes      yes
  ostream           yes     yes      yes
  memory            yes     yes      no
  zipkin            yes     no       no
  elasticsearch     no      no       yes

Exporter availability depends on the build configuration; unsupported types
produce an error at startup.

The configuration keys for each exporter type are listed below.  All keys are
optional unless noted otherwise.


OTLP File exporter (otlp_file)

Writes telemetry data to rotating files in OTLP format.

  type (string, mandatory)
      Must be "otlp_file".

  thread_name (string, default: "")
      Name assigned to the exporter's background thread.

  file_pattern (string, default: "otel-logfile-%F-%N.txt")
      Output file naming pattern.

  alias_pattern (string, default: "")
      Symbolic link or alias pattern for the current output file.

  flush_interval (integer, default: 30000000, range: 100000-60000000)
      Flush interval in microseconds.

  flush_count (integer, default: 256, range: 16-8192)
      Number of records buffered before flushing to disk.

  file_size (integer, default: 20971520, range: 65536-2147483648)
      Maximum file size in bytes (64 KB to 2 GB).

  rotate_size (integer, default: 3, range: 1-256)
      Number of rotated files to retain.
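
A fragment combining these keys might look as follows (the exporter name and
values are illustrative; the valid ranges are listed above):

```yaml
exporters:
  file_exporter:
    type:           otlp_file
    file_pattern:   "otel-traces-%F-%N.txt"
    flush_interval: 1000000      # 1 second, in microseconds
    flush_count:    64
    file_size:      1048576      # rotate after 1 MB
    rotate_size:    5            # keep 5 rotated files
```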


OTLP gRPC exporter (otlp_grpc)

Sends telemetry data to a collector over gRPC.

  type (string, mandatory)
      Must be "otlp_grpc".

  thread_name (string, default: "")
      Read from YAML for configuration consistency but not used by the SDK, as
      gRPC exporters do not support runtime thread naming.

  endpoint (string)
      Server endpoint URL.  Signal-specific defaults:
        traces:  "http://localhost:4317/v1/traces"
        metrics: "http://localhost:4317/v1/metrics"
        logs:    "http://localhost:4317/v1/logs"

  use_ssl_credentials (boolean, default: false)
      Enable TLS for the gRPC connection.

  ssl_credentials_cacert_path (string, default: "")
      Path to a CA certificate file for server verification.

  ssl_credentials_cacert_as_string (string, default: "")
      CA certificate content provided as an inline string.

  ssl_client_key_path (string, default: "")
      Path to the client private key.  Requires the SDK MTLS build flag
      (ENABLE_OTLP_GRPC_SSL_MTLS_PREVIEW).

  ssl_client_key_string (string, default: "")
      Client private key as an inline string.  Requires the SDK MTLS build flag.

  ssl_client_cert_path (string, default: "")
      Path to the client certificate.  Requires the SDK MTLS build flag.

  ssl_client_cert_string (string, default: "")
      Client certificate as an inline string.  Requires the SDK MTLS build flag.

  timeout (integer, default: 10, range: 1-60)
      Request timeout in seconds.

  user_agent (string, default: "")
      Custom user-agent string sent with each request.

  max_threads (integer, default: 0, range: 1-1024)
      Maximum number of concurrent gRPC threads.  A value of 0 uses the SDK
      default.

  compression (string, default: "")
      Compression algorithm (e.g., "gzip").

  max_concurrent_requests (integer, default: 0, range: 1-1024)
      Maximum number of concurrent export requests.  A value of 0 uses the SDK
      default.
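
For example, a TLS-enabled gRPC exporter fragment (the exporter name, endpoint,
and certificate path are placeholders):

```yaml
exporters:
  grpc_exporter:
    type:                        otlp_grpc
    endpoint:                    "https://collector.example.com:4317"
    use_ssl_credentials:         true
    ssl_credentials_cacert_path: "/etc/ssl/certs/ca.pem"
    timeout:                     5
    compression:                 "gzip"
```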


OTLP HTTP exporter (otlp_http)

Sends telemetry data to a collector over HTTP using the curl library.

  type (string, mandatory)
      Must be "otlp_http".

  thread_name (string, default: "")
      Name assigned to the exporter's background thread.

  endpoint (string)
      Server endpoint URL.  Signal-specific defaults:
        traces:  "http://localhost:4318/v1/traces"
        metrics: "http://localhost:4318/v1/metrics"
        logs:    "http://localhost:4318/v1/logs"

  content_type (string, default: "json")
      HTTP request body encoding.  Accepted values: "json", "binary".

  json_bytes_mapping (string, default: "hexid")
      How binary data is encoded in JSON output.  Accepted values: "hexid",
      "hex", "base64".

  use_json_name (boolean, default: false)
      Use JSON field names instead of protobuf field names.

  debug (boolean, default: false)
      Enable debug logging for HTTP requests.  When enabled, the SDK internal
      log level is also raised to Debug.

  timeout (integer, default: 10, range: 1-60)
      Request timeout in seconds.

  http_headers (map, default: none)
      Custom HTTP headers sent with each request.  Specified as a YAML sequence
      of key-value pairs:

        http_headers:
          - X-Custom-Header: "value"
          - Authorization:   "Bearer token"

  max_concurrent_requests (integer, default: 64, range: 1-1024)
      Maximum number of concurrent HTTP requests.

  max_requests_per_connection (integer, default: 8, range: 1-1024)
      Maximum number of requests sent over a single HTTP connection before
      reconnecting.

  background_thread_wait_for (integer, default: 0, range: 0-3600000)
      How long the curl background thread waits (in milliseconds) after its last
      request before exiting.  A value of 0 keeps the thread alive indefinitely,
      which prevents crashes caused by thread respawn failures in the SDK's
      noexcept MaybeSpawnBackgroundThread() call.  Requires the corresponding
      SDK patch (see scripts/build/).

  ssl_insecure_skip_verify (boolean, default: false)
      Skip TLS certificate verification.

  ssl_ca_cert_path (string, default: "")
      Path to a CA certificate file for server verification.

  ssl_ca_cert_string (string, default: "")
      CA certificate content provided as an inline string.

  ssl_client_key_path (string, default: "")
      Path to the client private key.

  ssl_client_key_string (string, default: "")
      Client private key as an inline string.

  ssl_client_cert_path (string, default: "")
      Path to the client certificate.

  ssl_client_cert_string (string, default: "")
      Client certificate as an inline string.

  ssl_min_tls (string, default: "")
      Minimum TLS protocol version (e.g., "1.2").

  ssl_max_tls (string, default: "")
      Maximum TLS protocol version (e.g., "1.3").

  ssl_cipher (string, default: "")
      TLS cipher list for TLS 1.2 and earlier.

  ssl_cipher_suite (string, default: "")
      TLS ciphersuites for TLS 1.3.

  compression (string, default: "")
      Compression algorithm (e.g., "gzip").
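
Putting several of these keys together (the exporter name and values are
illustrative):

```yaml
exporters:
  http_exporter:
    type:         otlp_http
    endpoint:     "http://localhost:4318/v1/traces"
    content_type: binary
    compression:  "gzip"
    timeout:      15
    http_headers:
      - Authorization: "Bearer token"
```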


OStream exporter (ostream)

Writes human-readable telemetry data to a C++ output stream (stdout, stderr,
or a file).

  type (string, mandatory)
      Must be "ostream".

  filename (string, default: "stdout")
      Output destination.  Accepted values: "stdout", "stderr", or a file path.


In-Memory exporter (memory)

Stores telemetry data in a circular memory buffer.  Supported for traces and
metrics only.

  type (string, mandatory)
      Must be "memory".

  buffer_size (integer, range: 16-65536)
      Maximum number of records in the circular buffer.  Defaults to the SDK's
      MAX_BUFFER_SIZE value.


Zipkin exporter (zipkin)

Sends trace data to a Zipkin backend.  Supported for traces only.

  type (string, mandatory)
      Must be "zipkin".

  endpoint (string, default: "http://localhost:9411/api/v2/spans")
      Zipkin collector endpoint URL.

  format (string, default: "")
      Transport format.  Accepted values: "json", "protobuf".  When empty, the
      SDK default is used.

  service_name (string, default: "default-service")
      Service name reported to Zipkin.

  ipv4 (string, default: "")
      IPv4 address of the service endpoint.

  ipv6 (string, default: "")
      IPv6 address of the service endpoint.


Elasticsearch exporter (elasticsearch)

Sends log records to an Elasticsearch cluster.  Supported for logs only.

  type (string, mandatory)
      Must be "elasticsearch".

  host (string, default: "localhost")
      Elasticsearch server hostname.

  port (integer, default: 9200, range: 1-65535)
      Elasticsearch server port.

  index (string, default: "logs")
      Index name where log records are stored.

  response_timeout (integer, default: 30, range: 1-3600)
      Response timeout in seconds.

  debug (boolean, default: false)
      Enable debug logging for Elasticsearch requests.

  http_headers (map, default: none)
      Custom HTTP headers.  Uses the same YAML format as the OTLP HTTP
      exporter's http_headers option.
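
An illustrative fragment (the exporter name, host, and index are placeholders):

```yaml
exporters:
  es_exporter:
    type:             elasticsearch
    host:             "es.example.com"
    port:             9200
    index:            "app-logs"
    response_timeout: 10
```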


6.3. Samplers
------------------------------------------------------------------------------

Samplers control which traces are recorded and exported.  They are configured
in the top-level 'samplers' section and referenced from the traces signal.  Only
one sampler is active per tracer.

  type (string, default: "trace_id_ratio_based")
      Sampler type.  Accepted values: "always_on", "always_off",
      "trace_id_ratio_based", "parent_based".

Type "always_on" samples every trace unconditionally.  No additional
configuration keys.

Type "always_off" rejects every trace unconditionally.  No additional
configuration keys.

Type "trace_id_ratio_based" samples traces based on a configurable
probability.  Additional key:

  ratio (double, default: 1.0, range: 0.0-1.0)
      Probability that a trace is sampled.  A value of 0.0 means no traces are
      sampled; 1.0 means all traces are sampled.

Type "parent_based" delegates the sampling decision based on the parent span's
context.  When no parent span exists, the delegate sampler is used.  Additional
keys:

  delegate (string, default: "always_on")
      Root sampler used when there is no parent span.  Accepted values:
      "always_on", "always_off", "trace_id_ratio_based".

  ratio (double, default: 1.0, range: 0.0-1.0)
      Sampling ratio for the delegate, used only when delegate is set to
      "trace_id_ratio_based".

  remote_sampled (string, default: "always_on")
      Sampler applied when the remote parent has the sampled flag set.
      Accepted values: "always_on", "always_off".

  remote_not_sampled (string, default: "always_off")
      Sampler applied when the remote parent does not have the sampled flag.
      Accepted values: "always_on", "always_off".

  local_sampled (string, default: "always_on")
      Sampler applied when the local parent has the sampled flag set.
      Accepted values: "always_on", "always_off".

  local_not_sampled (string, default: "always_off")
      Sampler applied when the local parent does not have the sampled flag.
      Accepted values: "always_on", "always_off".
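
For example, a parent-based sampler that samples 25% of root traces while
honoring the parent's decision otherwise (the sampler name is illustrative;
the remote_*/local_* keys keep their defaults):

```yaml
samplers:
  my_sampler:
    type:     parent_based
    delegate: trace_id_ratio_based
    ratio:    0.25
```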


6.4. Processors
------------------------------------------------------------------------------

Processors control how telemetry records are batched and delivered to exporters.
They are used by the traces and logs signals.  Metrics use readers instead of
processors (see section 6.5).

  type (string, mandatory)
      Processor type.  Accepted values: "batch", "single".

  thread_name (string, default: "")
      Name assigned to the processor's background thread (only meaningful in
      batch mode).

Type "batch" buffers records in a bounded circular queue and exports them in
batches at regular intervals.  Additional keys:

  max_queue_size (integer, default: 2048, range: 64-65536)
      Maximum number of records the queue can hold.  Must be greater than or
      equal to max_export_batch_size.

  schedule_delay (integer, range: 1-60000)
      Time in milliseconds between consecutive export cycles.
      Default: 5000 for traces, 1000 for logs.

  export_timeout (integer, default: 30000, range: 1-60000)
      Maximum time in milliseconds allowed for a single export operation.

  max_export_batch_size (integer, default: 512, range: 1-65536)
      Maximum number of records per export batch.

When the queue fills up, new records are silently dropped.  The library provides
otelc_processor_dropped_count() to monitor drop rates (see section 11 for
details).

Type "single" exports each record immediately without batching.  No additional
configuration keys beyond 'type' and 'thread_name'.


6.5. Readers
------------------------------------------------------------------------------

Readers configure periodic metric collection and export.  They are used
exclusively by the metrics signal and pair with exporters to form metric
pipelines.

  thread_name (string, default: "")
      Name assigned to the reader's background thread.

  export_interval (integer, default: 60000, range: 100-3600000)
      Time in milliseconds between consecutive metric exports.
      Must be greater than or equal to export_timeout.

  export_timeout (integer, default: 30000, range: 100-3600000)
      Maximum time in milliseconds allowed for a single metric export operation.
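A reader definition using these keys might look as follows (the reader name
'my_reader' is hypothetical; the top-level layout is assumed to mirror the
'providers' example in section 6.6):

```yaml
readers:
  my_reader:
    thread_name:     "otel-reader"
    export_interval: 60000   # ms; must be >= export_timeout
    export_timeout:  30000   # ms allowed per metric export
```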


6.6. Providers
------------------------------------------------------------------------------

Providers define resource attributes that are attached to all telemetry emitted
by a signal.  These attributes identify the service producing the data and are
carried in every span, metric data point, and log record.

  resources (sequence)
      A YAML sequence of single-entry maps specifying resource attributes:

        providers:
          my_provider:
            resources:
              - service.name:        "my-service"
              - service.version:     "1.0.0"
              - service.namespace:   "production"
              - service.instance.id: "host-1"

      Any string key-value pair is accepted.  Common attributes follow
      the OpenTelemetry semantic conventions and include service.name,
      service.version, service.namespace, and service.instance.id.

Resource attributes from the YAML file are merged with attributes detected from
environment variables (see section 6.8).  YAML attributes take precedence over
environment attributes for duplicate keys.


6.7. Signals
------------------------------------------------------------------------------

The 'signals' section is the central binding point that ties pipeline components
together.  Each signal -- traces, metrics, or logs -- references the named
components it needs from the other top-level sections.

When exporters and processors (or readers) are specified as YAML lists, each
entry forms a separate pipeline within the same provider.  The library pairs
list entries by position: for traces and logs, processor[i] is paired with
exporter[i]; for metrics, exporter[i] is paired with reader[i].  When one list
is shorter than the other, its last entry is reused for the remaining pairs.
A single scalar name (not a list) is also accepted.
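As an illustration of positional pairing, the following traces signal (all
component names are hypothetical) forms two pipelines within one provider:
'proc_a' paired with 'exp_a', and 'proc_b' paired with 'exp_b'.  If the
processors list contained only 'proc_a', it would be reused for 'exp_b' as
well:

```yaml
signals:
  traces:
    scope_name: "my-scope"
    exporters:
      - exp_a
      - exp_b
    processors:
      - proc_a
      - proc_b
    samplers:  my_sampler
    providers: my_provider
```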


Traces signal

  scope_name (string, mandatory)
      Instrumentation scope name for the tracer.

  exporters (string or list of strings)
      Exporter name(s) from the top-level 'exporters' section.

  samplers (string)
      Sampler name from the top-level 'samplers' section.

  processors (string or list of strings)
      Processor name(s) from the top-level 'processors' section.

  providers (string)
      Provider name from the top-level 'providers' section.


Metrics signal

  scope_name (string, mandatory)
      Instrumentation scope name for the meter.

  exporters (string or list of strings)
      Exporter name(s) from the top-level 'exporters' section.

  readers (string or list of strings)
      Reader name(s) from the top-level 'readers' section.

  providers (string)
      Provider name from the top-level 'providers' section.


Logs signal

  scope_name (string, mandatory)
      Instrumentation scope name for the logger.

  exporters (string or list of strings)
      Exporter name(s) from the top-level 'exporters' section.

  processors (string or list of strings)
      Processor name(s) from the top-level 'processors' section.

  providers (string)
      Provider name from the top-level 'providers' section.

  min_severity (string, default: "TRACE")
      Minimum log severity level.  Log records below this severity are discarded
      before reaching the processor.  Accepted values (case-sensitive): TRACE,
      TRACE2, TRACE3, TRACE4, DEBUG, DEBUG2, DEBUG3, DEBUG4, INFO, INFO2, INFO3,
      INFO4, WARN, WARN2, WARN3, WARN4, ERROR, ERROR2, ERROR3, ERROR4, FATAL,
      FATAL2, FATAL3, FATAL4.


6.8. Environment variables
------------------------------------------------------------------------------

Resource attributes can also be set through environment variables, independently
of the YAML file.  Attributes from environment variables are detected at startup
and merged with those specified in the YAML providers section.  YAML values take
precedence for duplicate keys.

  OTEL_RESOURCE_ATTRIBUTES
      Comma-separated list of key=value pairs to use as resource attributes.

      Example:
        export OTEL_RESOURCE_ATTRIBUTES="service.name=test,version=1.0"

  OTEL_SERVICE_NAME
      Sets the service.name resource attribute.  Takes precedence over a
      service.name value in OTEL_RESOURCE_ATTRIBUTES.
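For example, assuming both variables are set as below, the merged resource
carries service.name from OTEL_SERVICE_NAME and keeps the remaining attributes
from OTEL_RESOURCE_ATTRIBUTES:

```yaml
# Shell environment (illustrative):
#   export OTEL_RESOURCE_ATTRIBUTES="service.name=from-attrs,team=web"
#   export OTEL_SERVICE_NAME="my-service"
#
# Resulting resource attributes:
#   service.name: "my-service"   # OTEL_SERVICE_NAME wins
#   team:         "web"
```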


7. Tracing example
------------------------------------------------------------------------------

Here is a simple example demonstrating how to use the C wrapper library to
create a tracer, start a span, and export it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <opentelemetry-c-wrapper/include.h>

int main(int argc, char **argv)
{
    struct otelc_tracer *tracer;
    struct otelc_span   *span;
    char                *err = NULL;

    /* Initialize the library */
    if (otelc_init("otel-cfg.yml", &err) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to init: %s\n", err);
        free(err);
        return 1;
    }

    /* Create and start a tracer */
    tracer = otelc_tracer_create(&err);
    if (tracer == NULL) {
        fprintf(stderr, "Failed to create tracer: %s\n", err);
        free(err);
        otelc_deinit(NULL, NULL, NULL);
        return 1;
    }

    if (tracer->ops->start(tracer) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to start tracer\n");
        otelc_deinit(&tracer, NULL, NULL);
        return 1;
    }

    /* Start a new span */
    span = tracer->ops->start_span(tracer, "my-operation");
    if (span == NULL) {
        fprintf(stderr, "Failed to start span\n");
        otelc_deinit(&tracer, NULL, NULL);
        return 1;
    }

    /* ... perform work ... */

    /* End the span */
    span->ops->end(&span);

    /* Clean up */
    otelc_deinit(&tracer, NULL, NULL);

    return 0;
}
```


8. Metrics example
------------------------------------------------------------------------------

Here is an example showing how to create a counter instrument and record a
measurement.

```c
#include <stdio.h>
#include <stdlib.h>
#include <opentelemetry-c-wrapper/include.h>

int main(int argc, char **argv)
{
    struct otelc_meter *meter;
    struct otelc_value  value;
    char               *err = NULL;
    int64_t             counter_id;

    /* Initialize the library */
    if (otelc_init("otel-cfg.yml", &err) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to init: %s\n", err);
        free(err);
        return 1;
    }

    /* Create and start a meter */
    meter = otelc_meter_create(&err);
    if (meter == NULL) {
        fprintf(stderr, "Failed to create meter: %s\n", err);
        free(err);
        otelc_deinit(NULL, NULL, NULL);
        return 1;
    }

    if (meter->ops->start(meter) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to start meter\n");
        otelc_deinit(NULL, &meter, NULL);
        return 1;
    }

    /* Create a counter instrument */
    counter_id = meter->ops->create_instrument(meter,
        "requests", "Total request count", "1",
        OTELC_METRIC_INSTRUMENT_COUNTER_UINT64, NULL);
    if (counter_id == OTELC_RET_ERROR) {
        fprintf(stderr, "Failed to create instrument\n");
        otelc_deinit(NULL, &meter, NULL);
        return 1;
    }

    /* Record a measurement */
    value.u_type         = OTELC_VALUE_UINT64;
    value.u.value_uint64 = 1;
    meter->ops->update_instrument(meter, counter_id, &value);

    /* Clean up */
    otelc_deinit(NULL, &meter, NULL);

    return 0;
}
```


9. Logging example
------------------------------------------------------------------------------

Here is an example showing how to emit a structured log record.

```c
#include <stdio.h>
#include <stdlib.h>
#include <opentelemetry-c-wrapper/include.h>

int main(int argc, char **argv)
{
    struct otelc_logger *logger;
    char                *err = NULL;

    /* Initialize the library */
    if (otelc_init("otel-cfg.yml", &err) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to init: %s\n", err);
        free(err);
        return 1;
    }

    /* Create and start a logger */
    logger = otelc_logger_create(&err);
    if (logger == NULL) {
        fprintf(stderr, "Failed to create logger: %s\n", err);
        free(err);
        otelc_deinit(NULL, NULL, NULL);
        return 1;
    }

    if (logger->ops->start(logger) != OTELC_RET_OK) {
        fprintf(stderr, "Failed to start logger\n");
        otelc_deinit(NULL, NULL, &logger);
        return 1;
    }

    /* Emit a log record */
    logger->ops->log_span(logger, OTELC_LOG_SEVERITY_INFO,
        0, NULL, NULL, NULL, NULL, 0,
        "Application started successfully");

    /* Clean up */
    otelc_deinit(NULL, NULL, &logger);

    return 0;
}
```


10. Thread safety
------------------------------------------------------------------------------

The library is designed for multi-threaded use.  All data-plane operations
(creating spans, recording metrics, emitting logs) are thread-safe and can be
called concurrently from any number of threads.

Spans are stored in a sharded map with 64 independently-locked shards,
distributing contention across threads.  Metric instrument operations are
serialized through a single lock, as the instrument set is typically small and
rarely modified after startup.

Lifecycle operations -- otelc_init(), otelc_*_create(), start(), destroy(), and
otelc_deinit() -- must be called from a single thread, typically the main
thread.  These are intended to run during program startup and shutdown, not
concurrently with data-plane operations.

Each thread is identified internally by a numeric ID.  Applications can provide
a custom thread-ID function via otelc_ext_init() before calling otelc_init();
otherwise, the library uses its own internal assignment.

SDK background threads (batch processor export threads, OTLP File and HTTP
exporter threads, periodic metric reader threads) can optionally be bound to
a specific CPU core via the cpu_id YAML setting.  See the YAML configuration
section for details.


11. Known bugs and limitations
------------------------------------------------------------------------------

Not all categories of telemetry (referred to as 'signals' in OTel terminology)
are supported by this library.  Baggage, traces, metrics, and logs are
currently supported, and span events are supported as part of the traces
signal.  The profiles signal is still experimental in the OTel specification
and is not supported by this library.

The OTel C++ SDK batch processors (BatchSpanProcessor for traces and
BatchLogRecordProcessor for logs) use bounded, lock-free circular buffers to
queue telemetry data for export.  When the application produces spans or log
records faster than the export thread can drain them, the buffer fills up and
subsequent items are silently dropped.  Several factors influence whether drops
occur:

  * Production rate vs. export throughput: High-concurrency workloads with
    many threads generating telemetry can overwhelm the export thread,
    especially when the exporter involves network I/O (OTLP/gRPC, OTLP/HTTP)
    rather than a local sink.

  * Batch processor configuration: The max_queue_size parameter controls the
    circular buffer capacity (default 2048).  Larger values absorb longer
    bursts but consume more memory.  The schedule_delay parameter controls
    how frequently the export thread wakes up to drain the buffer; shorter
    intervals reduce the chance of overflow but increase CPU overhead.

  * Export latency: Slow or unreachable collectors stall the export thread,
    preventing it from freeing buffer slots.  Under sustained load this
    quickly leads to drops.

Drops are an expected part of the SDK's back-pressure mechanism -- they keep the
application from blocking or running out of memory.  The wrapper library exposes
otelc_processor_dropped_count(type) (type 0 for traces, 1 for logs) so that
callers can monitor drop rates and adjust batch processor settings or collector
capacity accordingly.

Additional limitations:
  * Span lifecycle: Spans are stored in a shared map accessible from any thread.
    It is the caller's responsibility to end all spans before destroying the
    tracer.  If any spans remain when the tracer is destroyed, they are ended
    implicitly.
  * ABI compatibility: Some features (like `AddLink`) depend on the specific
    ABI version of the underlying C++ OpenTelemetry library.  The
    `start_span_with_options` operation provides an alternative for establishing
    span links at creation time that works on all ABI versions.
  * Drop reporting asymmetry in the C++ SDK: The BatchSpanProcessor logs a
    warning to stderr ("BatchSpanProcessor queue is full - dropping span.")
    when a span is dropped, but the BatchLogRecordProcessor silently
    discards log records without any diagnostic output.  The wrapper library
    provides otelc_processor_dropped_count() to query drop counts for both
    signal types uniformly.
