
Confluent Cloud Metrics API


Introduction

The Confluent Cloud Metrics API provides actionable operational metrics about your Confluent Cloud deployment. This is a queryable HTTP API in which the user will POST a query written in JSON and get back a time series of metrics specified by the query.

Comprehensive documentation is available on docs.confluent.io.

Available Metrics Reference

Please see the Metrics Reference for a list of available metrics.

This information is also available programmatically via the descriptors endpoint.

Authentication

Confluent uses API keys for integrating with Confluent Cloud. Applications must be authorized and authenticated before they can access or manage resources in Confluent Cloud. You can manage your API keys in the Confluent Cloud Dashboard or Confluent Cloud CLI.

An API key is owned by a User or Service Account and inherits the permissions granted to the owner.

Today, you can divide API keys into two classes:

  • Cloud API Keys - These grant access to the Confluent Cloud Control Plane APIs, such as for Provisioning and Metrics integrations.
  • Cluster API Keys - These grant access to a single Confluent cluster, such as a specific Kafka or Schema Registry cluster.

Cloud API Keys are required for the Metrics API. Cloud API Keys can be created using the Confluent Cloud CLI.

ccloud api-key create --resource cloud

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.
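
As an illustration, here is a minimal sketch of authenticating a Metrics API request with a Cloud API key over HTTPS using Python and the requests library. The host name comes from the Prometheus example later in this document; the descriptors path follows the Version 2 migration table below; the environment variable names are placeholders.

import os
import requests

# Cloud API key and secret (placeholders), created with: ccloud api-key create --resource cloud
API_KEY = os.environ["CONFLUENT_CLOUD_API_KEY"]
API_SECRET = os.environ["CONFLUENT_CLOUD_API_SECRET"]

# Cloud API keys are sent via HTTP Basic authentication, always over HTTPS.
resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics",
    auth=(API_KEY, API_SECRET),
    params={"page_size": 100},
)
resp.raise_for_status()
print(resp.json())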

The Confluent Cloud Metrics API also supports OAuth 2.0 by allowing Confluent Security Token Service (STS) tokens as credentials to authenticate to the Metrics API. See the steps to [Authenticate access to Confluent Cloud APIs using Confluent STS tokens](https://docs.confluent.io/cloud/current/access-management/authenticate/oauth/access-rest-apis-sts.html#authenticate-access-to-ccloud-apis-using-confluent-security-token-service-sts-tokens).

Versioning

Confluent APIs ensure stability for your integrations by avoiding the introduction of unexpected breaking changes. Confluent will make non-breaking API changes without advance notice. Thus, API clients must follow the Compatibility Policy below to ensure your integration remains stable. All APIs follow the API Lifecycle Policy described below, which describes the guarantees API clients can rely on.

Breaking changes will be widely communicated in advance in accordance with our Deprecation Policy. Confluent will provide timelines and a migration path for all API changes, where available. Be sure to subscribe to one or more communication channels so you don't miss any updates!

One exception to these guidelines is for critical security issues. We will take any necessary actions to mitigate any critical security issue as soon as possible, which may include disabling the vulnerable functionality until a proper solution is available.

Do not consume any Confluent API unless it is documented in the API Reference. All undocumented endpoints should be considered private, subject to change without notice, and not covered by any agreements.

Note: The "v1" in the URL is not a "major version" in the Semantic Versioning sense. It is a "generational version" or "meta version", as seen in other APIs like Github API or the Stripe API.

Changelog

2022-12-01

The dataset health-plus is now available in preview.

See the Datasets section for more details.

2022-10-18

The io.confluent.kafka.server/cluster_active_link_count metric is now deprecated. Please use the io.confluent.kafka.server/cluster_link_count metric instead.

New metrics available in /export

The following metrics are now available in the /export endpoint:

  • io.confluent.kafka.server/request_bytes
  • io.confluent.kafka.server/response_bytes
  • io.confluent.kafka.server/cluster_link_destination_response_bytes
  • io.confluent.kafka.server/cluster_link_source_response_bytes
  • io.confluent.kafka.server/cluster_link_count
  • io.confluent.kafka.server/cluster_link_mirror_topic_count
  • io.confluent.kafka.server/cluster_link_mirror_topic_offset_lag
  • io.confluent.kafka.server/cluster_link_mirror_topic_bytes

2022-10-17

API Version 1 is marked sunset

All API Version 1 endpoints are no longer supported as of 2022-10-17. API users should migrate to API Version 2.

2021-09-23

API Version 1 is now deprecated

All API Version 1 endpoints are now deprecated and will be removed on 2022-04-04. API users should migrate to API Version 2.

2021-08-24

Metric-specific aggregation functions

New metrics are being introduced that require alternative aggregation functions (e.g. MAX). When querying those metrics, using agg: "SUM" will return an error. It is recommended that clients omit the agg field in the request such that the required aggregation function for the specific metric is automatically applied on the backend.

Note: The initial version of Metrics API required clients to effectively hardcode agg: "SUM" in all queries. In early 2021, the agg field was made optional, but many clients have not been updated to omit the agg field.

Cursor-based pagination for /query endpoint

The /query endpoint now supports cursor-based pagination similar to the /descriptors and /attributes endpoints.

2021-02-10

API Version 2 is now Generally Available (GA)

See the Version 2 section below for a detailed description of changes and migration guide.

2020-12-04

API Version 2 (Preview)

Version 2 of the Metrics API is now available in Preview. See the Version 2 section below for a detailed description of changes.

2020-09-15

Retire /available endpoint

The /available endpoint (which was in Preview status) has been removed from the API. The /descriptors endpoint can still be used to determine the universe of available metrics for Metrics API.

2020-07-08

Correction for active_connection_count metric

A bug in the active_connection_count metric that affected a subset of customers was fixed. Customers exporting the metric to an external monitoring system may observe a discontinuity between historical results and current results due to this one-time correction.

2020-04-01

This release includes the following changes from the preview release:

New format request attribute

The /query request now includes a format attribute which controls the result structuring in the response body. See the /query endpoint definition for more details.

New /available endpoint

The new /available endpoint allows determining which metrics are available for a set of resources (defined by labels). This endpoint can be used to determine which subset of metrics are currently available for a specific resource (e.g. a Confluent Cloud Kafka cluster).

Metric type changes

The CUMULATIVE_(INT|DOUBLE) metric type enumeration was changed to COUNTER_(INT|DOUBLE). This was done to better align with OpenTelemetry conventions. In tandem with this change, several metrics that were improperly classified as GAUGEs were re-classified as COUNTERs.

Metric name changes

The /delta suffix has been removed from the following metrics:

  • io.confluent.kafka.server/received_bytes/delta
  • io.confluent.kafka.server/sent_bytes/delta
  • io.confluent.kafka.server/request_count/delta

The legacy metric names are deprecated and will stop functioning on 2020-07-01.

API Lifecycle Policy

The following status labels are applicable to APIs, features, and SDK versions, based on the current support status of each:

  • Early Access – May change at any time. Not recommended for production usage. Not officially supported by Confluent. Intended for user feedback only. Users must be granted explicit access to the API by Confluent.
  • Preview – Unlikely to change between Preview and General Availability. Not recommended for production usage. Officially supported by Confluent for non-production usage. For Closed Previews, users must be granted explicit access to the API by Confluent.
  • Generally Available (GA) – Will not change at short notice. Recommended for production usage. Officially supported by Confluent for non-production and production usage.
  • Deprecated – No longer supported. Will be removed in the future at the announced date. Use is discouraged and migration following the upgrade guide is recommended.
  • Sunset – Removed, and no longer supported or available.

Resources, operations, and individual fields in the OpenAPI spec will be annotated with x-lifecycle-stage, x-deprecated-at, and x-sunset-at. These annotations will appear in the corresponding API Reference Documentation. An API is "Generally Available" unless explicitly marked otherwise.

Compatibility Policy

Confluent APIs are governed by the Confluent Cloud Upgrade Policy, in which we will make backward-incompatible changes and deprecations approximately once per year, and will provide 180 days notice via email to all registered Confluent Cloud users.

Backward Compatibility

An API version is backwards-compatible if a program written against the previous version of the API will continue to work the same way, without modification, against this version of the API.

Confluent considers the following changes to be backwards-compatible:

  • Adding new API resources.
  • Adding new optional parameters to existing API requests (e.g., query string or body).
  • Adding new properties to existing API responses.
  • Changing the order of properties in existing API responses.
  • Changing the length or format of object IDs or other opaque strings.
    • Unless otherwise documented, you can safely assume object IDs we generate will never exceed 255 characters, but you should be able to handle IDs of up to that length. If you're using MySQL, for example, you should store IDs in a VARCHAR(255) COLLATE utf8_bin column.
    • This includes adding or removing fixed prefixes (such as lkc- on kafka cluster IDs).
    • This includes API keys, API tokens, and similar authentication mechanisms.
    • This includes all strings described as "opaque" in the docs, such as pagination cursors.
  • Omitting properties with null values from existing API responses.

Client Responsibilities

  • Resource and rate limits, and the default and maximum sizes of paginated data are not considered part of the API contract and may change (possibly dynamically). It is the client's responsibility to read the road signs and obey the speed limit.
  • If a property has a primitive type and the API documentation does not explicitly limit its possible values, clients must not assume the values are constrained to a particular set of possible responses.
  • If a property of an object is not explicitly declared as mandatory in the API, clients must not assume it will be present.
  • A resource may be modified to return a "redirection" response (e.g. 301, 307) instead of directly returning the resource. Clients must handle HTTP-level redirects, and respect HTTP headers (e.g. Location).

Deprecation Policy

Confluent will announce deprecations at least 180 days in advance of a breaking change and we will continue to maintain the deprecated APIs in their original form during this time.

Exceptions to this policy apply in case of critical security vulnerabilities or functional defects.

Communication

When a deprecation is announced, the details and any relevant migration information will be available on the following channels:

Object Model

The object model for the Metrics API is designed similarly to the OpenTelemetry standard.

Metrics

A metric is a numeric attribute of a resource, measured at a specific point in time, labeled with contextual metadata gathered at the point of instrumentation.

There are two types of metrics:

  • GAUGE: An instantaneous measurement of a value. Gauge metrics are implicitly averaged when aggregating over time.

    Example: io.confluent.kafka.server/retained_bytes

  • COUNTER: The count of occurrences in a single (one minute) sampling interval (unless otherwise stated in the metric description). Counter metrics are implicitly summed when aggregating over time.

    Example: io.confluent.kafka.server/received_bytes

The list of metrics and their labels is available at /docs/descriptors.

Resources

A resource represents an entity against which metrics are collected. For example, a Kafka cluster, a Kafka Connector, a ksqlDB application, etc.

Each metric descriptor is associated with one or more resource descriptors, representing the resource types to which that metric can apply. A metric data point is associated with a single resource instance, identified by the resource labels on that metric data point.

For example, metrics emitted by Kafka Connect are associated with the connector resource type. Data points for those metrics include resource labels identifying the specific connector instance that emitted the metric.

The list of resource types and labels are discoverable via the /descriptors/resources endpoint.

Labels

A label is a key-value attribute associated with a metric data point.

Labels can be used in queries to filter or group the results. Labels must be prefixed when used in queries:

  • metric.<label> (for metric labels), for example metric.topic
  • resource.<resource-type>.<label> (for resource labels), for example resource.kafka.id.

The set of valid label keys for a metric includes:

  • The label keys defined on that metric's descriptor itself
  • The label keys defined on the resource descriptor for the metric's associated resource type

For example, the io.confluent.kafka.server/received_bytes metric has the following labels:

  • resource.kafka.id - The Kafka cluster to which the metric pertains
  • metric.topic - The Kafka topic to which the bytes were produced
  • metric.partition - The partition to which the bytes were produced

Datasets

A dataset is a logical collection of metrics that can be queried together. The dataset is a required URL template parameter for every endpoint in this API. The following datasets are currently available:

  • cloud (generally-available) - Metrics originating from Confluent Cloud resources. Requests to this dataset require a resource filter (e.g. Kafka cluster ID, Connector ID, etc.) in the query for authorization purposes. The client's API key must be authorized for the resource referenced in the filter.

  • cloud-custom (generally-available) - Metrics originating from custom Confluent Cloud resources (e.g. a Custom Connector). Requests to this dataset require a resource filter (e.g. Custom Connector ID, etc.) in the query for authorization purposes. The client's API key must be authorized for the resource referenced in the filter.

  • health-plus (preview) - Metrics originating from Confluent Platform resources.

Client Considerations and Best Practices

Rate Limiting

To protect the stability of the API and keep it available to all users, Confluent employs multiple safeguards. Users who send many requests in quick succession or perform too many concurrent operations may be throttled or have their requests rejected with an error. When a rate limit is breached, an HTTP 429 Too Many Requests error is returned. The following headers are sent back to help you deal with rate limits.

Header Description
rateLimit-limit The maximum number of requests you're permitted to make per time period.
rateLimit-reset The relative time in seconds until the current rate limit window resets. Important: This differs from GitHub and Twitter's same-named header, which uses UTC epoch seconds. We use relative time to avoid client/server time synchronization issues.
rateLimit-remaining The number of requests remaining in the current rate-limit window.

Rate limits are enforced at multiple scopes. You get two sets of the headers above, each specifying the limit of one scope.
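
As a sketch (not an official client), a caller might inspect these headers after each response and pause when a window is exhausted. The header names follow the table above and are read case-insensitively by the requests library; the 60-second fallback is an assumption.

import time
import requests

def get_with_rate_limit(session: requests.Session, url: str, **kwargs) -> requests.Response:
    """Issue a request and, on HTTP 429, wait for the window to reset before retrying once."""
    resp = session.get(url, **kwargs)
    if resp.status_code == 429:
        # rateLimit-reset is a relative number of seconds until the current window resets.
        reset_seconds = int(resp.headers.get("rateLimit-reset", "60"))
        time.sleep(reset_seconds)
        resp = session.get(url, **kwargs)
    return resp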

Global Rate Limits

A global rate limit of 60 requests per IP address, per minute is enforced.

Per-endpoint Rate Limits

Additionally, some endpoint-specific rate limits are enforced.

Endpoint Rate limit
/v2/metrics/{dataset}/export 80 requests per resource, per hour, per principal.
See the export endpoint documentation for details.

Retries

Implement retry logic in your client to gracefully handle transient API failures. This should be done by watching for error responses and building in a retry mechanism. This mechanism should follow a capped exponential backoff policy to prevent retry amplification ("retry storms") and also introduce some randomness ("jitter") to avoid the thundering herd effect.
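
A minimal sketch of such a retry loop (capped exponential backoff with full jitter, retrying on 429 and transient 5xx responses) might look like the following; the attempt count, base delay, and cap are illustrative choices, not values prescribed by the API.

import random
import time
import requests

RETRYABLE_STATUS = {429, 500, 502, 503, 504}

def post_with_retries(url, json_body, auth, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """POST with capped exponential backoff and jitter to avoid retry storms."""
    for attempt in range(max_attempts):
        resp = requests.post(url, json=json_body, auth=auth)
        if resp.status_code not in RETRYABLE_STATUS:
            return resp
        # Full jitter: sleep a random duration up to the capped exponential backoff.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    return resp  # return the last response after exhausting retries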

Metric Data Latency

Metric data points are typically available for query in the API within 5 minutes of their origination at the source. This latency can vary based on network conditions and processing overhead. Clients that are polling (or "scraping") metrics into an external monitoring system should account for this latency in their polling requests. API requests that fail to incorporate the latency into the query interval may have incomplete data in the response.
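
For example, a polling client might end its query interval a few minutes in the past to allow for this latency. The 5-minute margin and 30-minute window below are illustrative assumptions based on the typical latency described above.

from datetime import datetime, timedelta, timezone

LATENCY_MARGIN = timedelta(minutes=5)   # assumed margin for metric data latency
POLL_WINDOW = timedelta(minutes=30)     # illustrative polling window

# End the interval before "now" so the last buckets are complete, aligned to a minute boundary.
end = (datetime.now(timezone.utc) - LATENCY_MARGIN).replace(second=0, microsecond=0)
start = end - POLL_WINDOW

# ISO-8601 interval string as expected by the "intervals" field of a query.
interval = f"{start.strftime('%Y-%m-%dT%H:%M:%SZ')}/{end.strftime('%Y-%m-%dT%H:%M:%SZ')}"
print(interval)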

Pagination

Cursors, tokens, and corresponding pagination links may expire after a short amount of time. In this case, the API will return a 400 Bad Request error and the client will need to restart from the beginning.

The client should have no trouble pausing between rate limiting windows, but persisting cursors for hours or days is not recommended.
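
A sketch of paging through results without persisting cursors, using the page_token query parameter; the exact location of the next-page token inside meta.pagination (here assumed to be next_page_token) should be verified against the API Reference.

import requests

def list_metric_descriptors(base_url, dataset, auth):
    """Page through the metric descriptors for a dataset, following page_token cursors."""
    url = f"{base_url}/v2/metrics/{dataset}/descriptors/metrics"
    params = {"page_size": 100}
    while True:
        resp = requests.get(url, params=params, auth=auth)
        resp.raise_for_status()  # an expired cursor surfaces as 400 Bad Request
        body = resp.json()
        yield from body.get("data", [])
        # The next-page token is returned under meta.pagination; the field name
        # "next_page_token" is an assumption here.
        token = body.get("meta", {}).get("pagination", {}).get("next_page_token")
        if not token:
            break
        params = {"page_size": 100, "page_token": token}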

Version 2

generally-available

Version 2 of the Metrics API adds the ability to query metrics for Kafka Connect, ksqlDB, and Schema Registry.

This capability is enabled by the introduction of a Resource abstraction into the API object model. Resources represent the entity against which metrics are collected.

Migration Guide

The following endpoint URLs have changed in version 2:

Endpoint Version 1 (Sunset) Version 2
Metrics discovery /metrics/{dataset}/descriptors /metrics/{dataset}/descriptors/metrics

The label prefix syntax has changed in version 2:

Label Version 1 (Sunset) Version 2
Resource labels (new) N/A resource.<label>
Kafka cluster ID metric.label.cluster_id resource.kafka.id
All other metric labels metric.label.<label> metric.<label>

This example shows a request to /v1/metrics/cloud/query migrated to the new v2 syntax.

Version 1 Request

{
  "group_by": [
    "metric.label.topic"
  ],
  "aggregations": [{
    "metric": "io.confluent.kafka.server/received_bytes",
    "agg": "SUM"
  }],
  "filter": {
    "field": "metric.label.cluster_id",
    "op": "EQ",
    "value": "lkc-00000"
  },
  "granularity": "ALL",
  "intervals" : [
    "2020-01-01T00:00:00Z/PT1H"
  ]
}

Version 2 Request

{
  "group_by": [
    "metric.topic"
  ],
  "aggregations": [{
    "metric": "io.confluent.kafka.server/received_bytes",
    "agg": "SUM"
  }],
  "filter": {
    "field": "resource.kafka.id",
    "op": "EQ",
    "value": "lkc-00000"
  },
  "granularity": "ALL",
  "intervals" : [
    "2020-01-01T00:00:00Z/PT1H"
  ]
}
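
As an illustration, the Version 2 request above could be submitted with a short Python script. The host name comes from the Prometheus example later in this document, the /v2/metrics/cloud/query path mirrors the v1 path shown above, and the credentials and cluster ID are placeholders.

import requests

query = {
    "group_by": ["metric.topic"],
    "aggregations": [
        {"metric": "io.confluent.kafka.server/received_bytes", "agg": "SUM"}
    ],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-00000"},
    "granularity": "ALL",
    "intervals": ["2020-01-01T00:00:00Z/PT1H"],
}

resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
    json=query,
    auth=("<CLOUD_API_KEY>", "<CLOUD_API_SECRET>"),  # placeholder Cloud API key credentials
)
resp.raise_for_status()
for point in resp.json()["data"]:
    print(point)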

List metric descriptors

Lists all the metric descriptors for a dataset.

A metric descriptor represents metadata for a metric, including its data type and labels. This metadata is provided programmatically to enable clients to dynamically adjust as new metrics are added to the dataset, rather than hardcoding metric names in client code.

Authorizations:
api-key, confluent-sts-access-token
path Parameters
dataset
required
string (Dataset)

The dataset to list metric descriptors for.

query Parameters
page_size
integer [ 1 .. 1000 ]
Default: 100

The maximum number of results to return. The page size is an integer in the range from 1 through 1000.

page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination.

resource_type
string (ResourceType)

The type of the resource to list metric descriptors for.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "links": null,
  "meta": {}
}

List resource descriptors

Lists all the resource descriptors for a dataset.

Authorizations:
api-key, confluent-sts-access-token
path Parameters
dataset
required
string (Dataset)

The dataset to list resource descriptors for.

query Parameters
page_size
integer [ 1 .. 1000 ]
Default: 100

The maximum number of results to return. The page size is an integer in the range from 1 through 1000.

page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "meta": {},
  "links": {}
}

Query metric values

Query for metric values in a dataset.

Authorizations:
api-key, confluent-sts-access-token
path Parameters
dataset
required
string (Dataset)

The dataset to query.

query Parameters
page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination. Pagination is only supported for requests containing a group_by element.

Request Body schema: application/json
required
aggregations
Array of objects (Aggregation) = 1 items

Specifies which metrics to query and the aggregation operator to apply across the group_by labels. Currently, only one aggregation per request is supported.

group_by
Array of strings

Specifies how data gets bucketed by label(s); see the Labels section above for details on using labels to group query results.

granularity
required
string <ISO-8601 duration (PnDTnHnMn.nS) or ALL> (Granularity)
Enum: "PT1M" "PT5M" "PT15M" "PT30M" "PT1H" "PT4H" "PT6H" "PT12H" "P1D" "ALL"

Defines the time buckets that the aggregation is performed for. Buckets are specified in ISO-8601 duration syntax, but only the enumerated values are supported. Buckets are aligned to UTC boundaries. The special ALL value defines a single bucket for all intervals.

The allowed granularity for a query is restricted by the length of that query's interval.

Do not confuse intervals with retention time. Confluent uses granularity and intervals to validate requests. Retention time is the length of time Confluent stores data. For example, requests with a granularity of PT1M can have a maximum interval of six hours. A request with a granularity of PT1M and an interval of 12 hours would fail validation. For more information about retention time, see the FAQ.

Granularity equal to or greater than PT1H can use any interval.

Granularity Maximum Interval Length
PT1M (1 minute) 6 hours
PT5M (5 minutes) 1 day
PT15M (15 minutes) 4 days
PT30M (30 minutes) 7 days
PT1H (1 hour) Any
PT4H (4 hours) Any
PT6H (6 hours) Any
PT12H (12 hours) Any
P1D (1 day) Any
ALL Any
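
As a client-side sanity check (a sketch, not part of the API), the limits in the table above can be encoded to validate a granularity/interval combination before sending a query:

from datetime import timedelta

# Maximum interval length per granularity, per the table above (None = any length).
MAX_INTERVAL = {
    "PT1M": timedelta(hours=6),
    "PT5M": timedelta(days=1),
    "PT15M": timedelta(days=4),
    "PT30M": timedelta(days=7),
    "PT1H": None, "PT4H": None, "PT6H": None, "PT12H": None, "P1D": None, "ALL": None,
}

def check_granularity(granularity: str, interval_length: timedelta) -> None:
    """Raise if the requested interval is too long for the chosen granularity."""
    limit = MAX_INTERVAL[granularity]
    if limit is not None and interval_length > limit:
        raise ValueError(f"{granularity} queries are limited to intervals of {limit} or less")

check_granularity("PT1M", timedelta(hours=12))  # raises ValueError, matching the example above
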
filter
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

order_by
Array of objects (OrderBy) <= 1 items

Sort ordering for result groups. Only valid for granularity: "ALL". If not specified, defaults to the first aggregation in descending order.

Note that this ordering applies to the groups. Within a group (or for ungrouped results), data points are always ordered by timestamp in descending order.

intervals
required
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) that the query runs over. A time range is an ISO-8601 interval.

The keyword now can be used in place of a timestamp to refer to the current time. Offset and truncation modifiers can also be applied to the now expression:

Modifier Syntax Examples
Offset (+|-)<amount>(m|h|d) -2m (minus 2 minutes), -1h (minus 1 hour)
Truncation |(m|h|d) |m (round down to start of minute), |h (round down to start of hour)

All hour/day truncation is performed against the UTC timezone.

If now is 2020-01-01T02:13:27Z, some examples are:

  • now-2m|m: now minus 2 minutes, truncated to start of minute.
    Resolves to 2020-01-01T02:11:00Z
  • now|h: now truncated to start of hour.
    Resolves to 2020-01-01T02:00:00Z
  • now-1d|d: now minus 1 day, truncated to start of day.
    Resolves to 2019-12-31T00:00:00Z

When using now, it is recommended to apply a negative offset to avoid incomplete data (see metric availability delays) and align to minute boundaries (e.g. now-2m|m).

limit
integer [ 1 .. 1000 ]
Default: 100

The maximum number of groups to return. Only supported with a non-empty group_by field. The maximum number of data points in the response is equal to limit * (interval / granularity). For example, with an interval of 1 day, granularity of PT1H, and limit of 2 there will be a maximum of 48 data points in the response (24 for each group). For granularity of ALL, the limit defaults to 1000.

format
string (ResponseFormat)
Default: "FLAT"
Enum: "FLAT" "GROUPED"

Desired response format for query results.

  • FLAT (default): Each item in the response data array represents a data point in the timeseries. Each data point contains the timestamp, metric aggregation value and attributes for the group_by labels.
  • GROUPED: Each item in the response data array represents a group. Each group contains attributes for the group_by labels and an array of points for the metric aggregation timeseries. Only allowed when group_by is non-empty.

Please see the response schema and accompanying examples for more details.

Responses

Request samples

Content type
application/json
{
  "group_by": [],
  "aggregations": [],
  "filter": {},
  "order_by": [],
  "granularity": "PT1H",
  "intervals": [],
  "limit": 5
}

Response samples

Content type
application/json
Example
{
  "data": []
}

Export metric values

Export current metric values in OpenMetrics format or Prometheus format, suitable for import into an external monitoring system. Returns the single most recent data point for each metric, for each distinct combination of labels.

Supported datasets and metrics

By default only the cloud and cloud-custom datasets are supported for this endpoint.

Some metrics and labels within the datasets may not be exportable. To request a particular metric or label be added, please contact Confluent Support.

Metric translation

Metric and label names are translated to adhere to Prometheus restrictions. The resource. and metric. prefixes from label names are also dropped to simplify consumption in downstream systems.

Counter metrics are classified as the Prometheus gauge type to conform to required semantics.

The counter type in Prometheus must be monotonically increasing, whereas Confluent Metrics API counters are represented as deltas.
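
As an illustration of this translation (based on the metric names visible in the response sample below, not an official mapping function), a client that needs to correlate Metrics API names with exported Prometheus names could do something like:

import re

def to_prometheus_name(metric: str) -> str:
    """Translate a Metrics API metric name into its exported Prometheus-style name.
    Illustrative only; the authoritative names are those returned by /export itself."""
    # e.g. "io.confluent.kafka.server/received_bytes" -> "confluent_kafka_server_received_bytes"
    name = metric.removeprefix("io.")
    return re.sub(r"[./]", "_", name)

assert to_prometheus_name("io.confluent.kafka.server/received_bytes") == "confluent_kafka_server_received_bytes"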

Timestamp offset

To account for metric data latency, this endpoint returns metrics from the current timestamp minus a fixed offset. The current offset is 5 minutes rounded down to the start of the minute. For example, if a request is received at 12:06:41, the returned metrics will have the timestamp 12:01:00 and represent the data for the interval 12:01:00 through 12:02:00 (exclusive).

NOTE: Confluent may choose to lengthen or shorten this offset based on operational considerations. Doing so is considered a backwards-compatible change.

To accommodate this offset, the timestamps in the response should be honored when importing the metrics. For example, in Prometheus this can be controlled using the honor_timestamps flag.

Rate limits

Since metrics are available at minute granularity, it is expected that clients scrape this endpoint at most once per minute. To allow for ad-hoc testing, the rate limit is enforced at hourly granularity. To accommodate retries, the rate limit is 80 requests per hour rather than 60 per hour.

The rate limit is evaluated on a per-resource basis. For example, the following requests would each be allowed an 80-requests-per-hour rate:

  • GET /v2/metrics/cloud/export?resource.kafka.id=lkc-1&resource.kafka.id=lkc-2
  • GET /v2/metrics/cloud/export?resource.kafka.id=lkc-3

Rate limits for this endpoint are also scoped to the authentication principal. This allows multiple systems to export metrics for the same resources by configuring each with a separate service account.

If the rate limit is exceeded, the response body will include a message indicating which resource exceeded the limit.

{
  "errors": [
    {
      "status": "429",
      "detail": "Too many requests have been made for the following resources: kafka.id:lkc-12345. Please see the documentation for current rate limits."
    }
  ]
}

Return limits

To ensure values are returned in a timely fashion, the number of data points returned per resource type per metric is limited to 30,000. If you have more than 30,000 unique combinations of labels, the response will be truncated to return the first 30,000 data points.

For example, consider a request made for 4 Kafka clusters (resource type 1) and 3 Kafka connectors (resource type 2):

  • Assuming each Kafka cluster has 8,000 topics and there are 5 metrics with only the topic label, this translates to 32,000 unique label combinations per metric (capped to 30,000), which in turn translates to 150,000 data points for the 5 metrics.
  • Assuming each Kafka connector has 3 metrics, this translates to 9 data points for the 3 Kafka connectors.

The data points returned for this request total 150,009.

In this case, we recommend splitting the export into separate scrape jobs of up to 3 Kafka clusters each in order to stay within the return limits for a resource type.

Example Prometheus scrape configuration

Here is an example Prometheus configuration for scraping this endpoint:

scrape_configs:
  - job_name: Confluent Cloud
    scrape_interval: 1m
    scrape_timeout: 1m
    honor_timestamps: true
    static_configs:
      - targets:
        - api.telemetry.confluent.cloud
    scheme: https
    basic_auth:
      username: <Cloud API Key>
      password: <Cloud API Secret>
    metrics_path: /v2/metrics/cloud/export
    params:
      "resource.kafka.id":
        - lkc-1
        - lkc-2

Ignoring failed metrics

Using the ignore_failed_metrics parameter, you can get a partial response consisting of the successful metrics. Unsuccessful metrics are ignored if the failure count is below the configured failure threshold percentage; otherwise an error is returned.

When a partial response is returned, it contains an additional StateSet metric, export_status, with one sample for each requested metric. If the fetch for a metric was unsuccessful, the value of the sample for that metric is 1; otherwise it is 0. Each sample has a metric label and a resource label, denoting the name of the metric for which the status is reported and the name of the resource to which the metric belongs, respectively.

A sample partial response is shown in the Responses section wherein two metrics were requested and one of them was unsuccessful.
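
A minimal sketch of scraping this endpoint directly from Python (instead of Prometheus), with placeholder cluster IDs and credentials; the query parameters mirror those documented below.

import requests

params = [
    ("resource.kafka.id", "lkc-1"),
    ("resource.kafka.id", "lkc-2"),           # repeat the parameter for multiple resources
    ("metric", "io.confluent.kafka.server/received_bytes"),
    ("ignore_failed_metrics", "true"),        # request a partial response on metric failures
]

resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    params=params,
    auth=("<CLOUD_API_KEY>", "<CLOUD_API_SECRET>"),  # placeholder Cloud API key credentials
)
resp.raise_for_status()
print(resp.text)  # OpenMetrics/Prometheus exposition text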

Authorizations:
api-key, confluent-sts-access-token
path Parameters
dataset
required
string (Dataset)

The dataset to export metrics for.

query Parameters
resource.kafka.id
Array of strings

The ID of the Kafka cluster to export metrics for. This parameter can be specified multiple times (e.g. ?resource.kafka.id=lkc-1&resource.kafka.id=lkc-2).

resource.connector.id
Array of strings

The ID of the Connector to export metrics for. This parameter can be specified multiple times.

resource.custom_connector.id
Array of strings

The ID of the custom Connector to export metrics for. This parameter can be specified multiple times.

resource.ksql.id
Array of strings

The ID of the ksqlDB application to export metrics for. This parameter can be specified multiple times.

resource.schema_registry.id
Array of strings

The ID of the Schema Registry to export metrics for. This parameter can be specified multiple times.

metric
Array of strings

The metric to export. If this parameter is not specified, all metrics for the resource will be exported. This parameter can be specified multiple times.

ignore_failed_metrics
boolean

Ignore failed metrics and export only successful metrics if allowed failure threshold is not breached. If this parameter is set to true, a StateSet metric (export_status) will be included in the response to report which metrics were successful and which failed.

Responses

Response samples

Content type
Example
# HELP confluent_kafka_server_received_bytes The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_bytes gauge
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicA"} 10.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicB"} 20.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-2",topic="topicA"} 30.0 1609459200
# HELP confluent_kafka_server_sent_bytes The delta count of bytes of the customer's data sent to the network. Each sample is the number of bytes sent since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_sent_bytes gauge
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicA"} 90.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicB"} 80.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-2",topic="topicA"} 70.0 1609459200

Query label values

Enumerates label values for a single metric.

Authorizations:
api-key, confluent-sts-access-token
path Parameters
dataset
required
string (Dataset)

The dataset to query.

query Parameters
page_token
string

The next page token. The token is returned by the previous request as part of meta.pagination.

Request Body schema: application/json
metric
string

The metric that the label values are enumerated for.

group_by
required
Array of strings non-empty

The label(s) that the values are enumerated for.

filter
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

intervals
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) for which available metrics will be listed. A time range is an ISO-8601 interval. When unspecified, the value defaults to the last hour before the request was made.

limit
integer [ 1 .. 1000 ]
Default: 100

Responses

Request samples

Content type
application/json
{
  "metric": "io.confluent.kafka.server/sent_bytes",
  "group_by": [],
  "filter": {},
  "limit": 3,
  "intervals": []
}

Response samples

Content type
application/json
{
  "data": [],
  "meta": {}
}
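
To close, here is a sketch of calling this endpoint to enumerate topics for a metric. The /attributes path is inferred from the changelog entry on cursor-based pagination, and the credentials and cluster ID are placeholders.

import requests

body = {
    "metric": "io.confluent.kafka.server/sent_bytes",
    "group_by": ["metric.topic"],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-00000"},
    "intervals": ["PT1H/now-2m|m"],   # last hour, ending 2 minutes ago on a minute boundary
    "limit": 3,
}

resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/attributes",
    json=body,
    auth=("<CLOUD_API_KEY>", "<CLOUD_API_SECRET>"),  # placeholder Cloud API key credentials
)
resp.raise_for_status()
print(resp.json()["data"])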