
Confluent Cloud Metrics API


Introduction

The Confluent Cloud Metrics API provides actionable operational metrics about your Confluent Cloud deployment. It is a queryable HTTP API: clients POST a query written in JSON and receive back a time series of the metrics specified by the query.

Comprehensive documentation is available on docs.confluent.io.

Authentication

Confluent uses API keys for integrating with Confluent Cloud. Applications must be authorized and authenticated before they can access or manage resources in Confluent Cloud. You can manage your API keys in the Confluent Cloud Dashboard or Confluent Cloud CLI.

An API key is owned by a User or Service Account and inherits the permissions granted to the owner.

Today, you can divide API keys into two classes:

  • Cloud API Keys - These grant access to the Confluent Cloud Control Plane APIs, such as for Provisioning and Metrics integrations.
  • Cluster API Keys - These grant access to a single Confluent cluster, such as a specific Kafka or Schema Registry cluster.

Cloud API Keys are required for the Metrics API. Cloud API Keys can be created using the Confluent Cloud CLI.

ccloud api-key create --resource cloud

All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.

api-key

API keys must be sent as an Authorization: Basic {key} header with the Key ID as the username and the Key Secret as the password. Remember that HTTP Basic authorization requires you to colon-separate and base64 encode your key. For example, if your API Key ID is ABCDEFGH123456789 and the corresponding API Key Secret is XNCIW93I2L1SQPJSJ823K1LS902KLDFMCZPWEO, then the authorization header will be

Authorization: Basic QUJDREVGR0gxMjM0NTY3ODk6WE5DSVc5M0kyTDFTUVBKU0o4MjNLMUxTOTAyS0xERk1DWlBXRU8=

This example header can be generated (using Mac OS X syntax) from the API key with

$ echo -n "ABCDEFGH123456789:XNCIW93I2L1SQPJSJ823K1LS902KLDFMCZPWEO" | base64
Security Scheme Type: HTTP
HTTP Authorization Scheme: basic
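The same header can also be constructed programmatically. A minimal Python sketch, using the placeholder key and secret from the example above:

```python
import base64

def basic_auth_header(key_id: str, key_secret: str) -> str:
    """Build the HTTP Basic Authorization header value for a Cloud API key:
    colon-separate the key ID and secret, then base64 encode."""
    token = base64.b64encode(f"{key_id}:{key_secret}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header(
    "ABCDEFGH123456789",
    "XNCIW93I2L1SQPJSJ823K1LS902KLDFMCZPWEO",
)
# Produces the example Authorization header value shown above.
```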

Versioning

Confluent APIs ensure stability for your integrations by avoiding the introduction of unexpected breaking changes. Confluent may make non-breaking API changes without advance notice, so API clients must follow the Compatibility Policy below to ensure their integrations remain stable. All APIs follow the API Lifecycle Policy below, which describes the guarantees API clients can rely on.

Breaking changes will be widely communicated in advance in accordance with our Deprecation Policy. Confluent will provide timelines and a migration path for all API changes, where available. Be sure to subscribe to one or more communication channels so you don't miss any updates!

One exception to these guidelines is for critical security issues. We will take any necessary actions to mitigate any critical security issue as soon as possible, which may include disabling the vulnerable functionality until a proper solution is available.

Do not consume any Confluent API unless it is documented in the API Reference. All undocumented endpoints should be considered private, subject to change without notice, and not covered by any agreements.

Note: The "v1" in the URL is not a "major version" in the Semantic Versioning sense. It is a "generational version" or "meta version", as seen in other APIs like the GitHub API or the Stripe API.

Changelog

2021-09-23

API Version 1 is now deprecated

All API Version 1 endpoints are now deprecated and will be removed on 2022-04-04. API users should migrate to API Version 2.

2021-08-24

Metric-specific aggregation functions

New metrics are being introduced that require alternative aggregation functions (e.g. MAX). When querying those metrics, using agg: "SUM" will return an error. It is recommended that clients omit the agg field in the request so that the required aggregation function for the specific metric is automatically applied on the backend.

Note: The initial version of Metrics API required clients to effectively hardcode agg: "SUM" in all queries. In early 2021, the agg field was made optional, but many clients have not been updated to omit the agg field.

Cursor-based pagination for /query endpoint

The /query endpoint now supports cursor-based pagination similar to the /descriptors and /attributes endpoints.

2021-02-10

API Version 2 is now Generally Available (GA)

See the Version 2 section below for a detailed description of changes and migration guide.

2020-12-04

API Version 2 (Preview)

Version 2 of the Metrics API is now available in Preview. See the Version 2 section below for a detailed description of changes.

2020-07-08

Correction for active_connection_count metric

A bug in the active_connection_count metric that affected a subset of customers was fixed. Customers exporting the metric to an external monitoring system may observe a discontinuity between historical results and current results due to this one-time correction.

2020-04-01

This release includes the following changes from the preview release:

New format request attribute

The /query request now includes a format attribute which controls the result structuring in the response body. See the /query endpoint definition for more details.

New /available endpoint

The new /available endpoint allows determining which metrics are available for a set of resources (defined by labels). This endpoint can be used to determine which subset of metrics are currently available for a specific resource (e.g. a Confluent Cloud Kafka cluster).

Metric type changes

The CUMULATIVE_(INT|DOUBLE) metric type enumeration was changed to COUNTER_(INT|DOUBLE). This was done to better align with OpenTelemetry conventions. In tandem with this change, several metrics that were improperly classified as GAUGEs were re-classified as COUNTERs.

Metric name changes

The /delta suffix has been removed from the following metrics:

  • io.confluent.kafka.server/received_bytes/delta
  • io.confluent.kafka.server/sent_bytes/delta
  • io.confluent.kafka.server/request_count/delta

The legacy metric names are deprecated and will stop functioning on 2020-07-01.

2020-09-15

Retire /available endpoint

The /available endpoint (which was in Preview status) has been removed from the API. The /descriptors endpoint can still be used to determine the universe of available metrics for Metrics API.

API Lifecycle Policy

The following status labels are applicable to APIs, features, and SDK versions, based on the current support status of each:

  • Early Access – May change at any time. Not recommended for production usage. Not officially supported by Confluent. Intended for user feedback only. Users must be granted explicit access to the API by Confluent.
  • Preview – Unlikely to change between Preview and General Availability. Not recommended for production usage. Officially supported by Confluent for non-production usage. For Closed Previews, users must be granted explicit access to the API by Confluent.
  • Generally Available (GA) – Will not change at short notice. Recommended for production usage. Officially supported by Confluent for non-production and production usage.
  • Deprecated – No longer supported. Will be removed in the future at the announced date. Use is discouraged and migration following the upgrade guide is recommended.
  • Sunset – Removed, and no longer supported or available.

Resources, operations, and individual fields in the OpenAPI spec will be annotated with x-lifecycle-stage, x-deprecated-at, and x-sunset-at. These annotations will appear in the corresponding API Reference Documentation. An API is "Generally Available" unless explicitly marked otherwise.

Compatibility Policy

Confluent APIs are governed by the Confluent Cloud Upgrade Policy, under which we will make backward-incompatible changes and deprecations approximately once per year, providing 180 days' notice via email to all registered Confluent Cloud users.

Backward Compatibility

An API version is backwards-compatible if a program written against the previous version of the API will continue to work the same way, without modification, against this version of the API.

Confluent considers the following changes to be backwards-compatible:

  • Adding new API resources.
  • Adding new optional parameters to existing API requests (e.g., query string or body).
  • Adding new properties to existing API responses.
  • Changing the order of properties in existing API responses.
  • Changing the length or format of object IDs or other opaque strings.
    • Unless otherwise documented, you can safely assume object IDs we generate will never exceed 255 characters, but you should be able to handle IDs of up to that length. If you're using MySQL, for example, you should store IDs in a VARCHAR(255) COLLATE utf8_bin column.
    • This includes adding or removing fixed prefixes (such as lkc- on kafka cluster IDs).
    • This includes API keys, API tokens, and similar authentication mechanisms.
    • This includes all strings described as "opaque" in the docs, such as pagination cursors.
  • Omitting properties with null values from existing API responses.

Client Responsibilities

  • Resource and rate limits, and the default and maximum sizes of paginated data are not considered part of the API contract and may change (possibly dynamically). It is the client's responsibility to read the road signs and obey the speed limit.
  • If a property has a primitive type and the API documentation does not explicitly limit its possible values, clients must not assume the values are constrained to a particular set of possible responses.
  • If a property of an object is not explicitly declared as mandatory in the API, clients must not assume it will be present.
  • A resource may be modified to return a "redirection" response (e.g. 301, 307) instead of directly returning the resource. Clients must handle HTTP-level redirects, and respect HTTP headers (e.g. Location).

Deprecation Policy

Confluent will announce deprecations at least 180 days in advance of a breaking change and we will continue to maintain the deprecated APIs in their original form during this time.

Exceptions to this policy apply in case of critical security vulnerabilities or functional defects.

Communication

When a deprecation is announced, the details and any relevant migration information will be available on the following channels:

Object Model

The object model for the Metrics API is designed similarly to the OpenTelemetry standard.

Metrics

A metric is a numeric attribute of a resource, measured at a specific point in time, labeled with contextual metadata gathered at the point of instrumentation.

There are two types of metrics:

  • GAUGE: An instantaneous measurement of a value. Gauge metrics are implicitly averaged when aggregating over time.

    Example: io.confluent.kafka.server/retained_bytes

  • COUNTER: The count of occurrences in a single (one minute) sampling interval (unless otherwise stated in the metric description). Counter metrics are implicitly summed when aggregating over time.

    Example: io.confluent.kafka.server/received_bytes

The list of metrics and their labels is discoverable via the /descriptors/metrics endpoint. The table below documents the metrics currently available in the API:

Resource: kafka (resource labels: id)

  • io.confluent.kafka.server/received_bytes (labels: topic, partition)
  • io.confluent.kafka.server/sent_bytes (labels: topic, partition)
  • io.confluent.kafka.server/received_records (labels: topic, partition)
  • io.confluent.kafka.server/sent_records (labels: topic, partition)
  • io.confluent.kafka.server/retained_bytes (labels: topic, partition)
  • io.confluent.kafka.server/request_count (labels: type)
  • io.confluent.kafka.server/active_connection_count
  • io.confluent.kafka.server/partition_count
  • io.confluent.kafka.server/successful_authentication_count
  • io.confluent.kafka.server/cluster_link_destination_response_bytes (labels: link_name)
  • io.confluent.kafka.server/cluster_link_source_response_bytes
  • io.confluent.kafka.server/cluster_active_link_count (labels: mode)
  • io.confluent.kafka.server/cluster_link_mirror_topic_count (labels: link_name, topic)
  • io.confluent.kafka.server/cluster_link_mirror_topic_offset_lag (labels: link_name, topic)
  • io.confluent.kafka.server/cluster_link_mirror_topic_bytes (labels: link_name, topic)

Resource: connector (resource labels: id)

  • io.confluent.kafka.connect/received_bytes
  • io.confluent.kafka.connect/sent_bytes
  • io.confluent.kafka.connect/received_records
  • io.confluent.kafka.connect/sent_records
  • io.confluent.kafka.connect/dead_letter_queue_records

Resource: ksql (resource labels: id)

  • io.confluent.kafka.ksql/streaming_unit_count

Resource: schema_registry (resource labels: id)

  • io.confluent.kafka.schema_registry/schema_count

Resources

A resource represents an entity against which metrics are collected. For example, a Kafka cluster, a Kafka Connector, a ksqlDB application, etc.

Each metric descriptor is associated with one or more resource descriptors, representing the resource types to which that metric can apply. A metric data point is associated with a single resource instance, identified by the resource labels on that metric data point.

For example, metrics emitted by Kafka Connect are associated to the connector resource type. Data points for those metrics include resource labels identifying the specific connector instance that emitted the metric.

The list of resource types and labels are discoverable via the /descriptors/resources endpoint.

Labels

A label is a key-value attribute associated with a metric data point.

Labels can be used in queries to filter or group the results. Labels must be prefixed when used in queries:

  • metric.<label> (for metric labels)
  • resource.<label> (for resource labels). Resource labels are typically of the form <resource-type>.<label>, for example kafka.id.

The set of valid label keys for a metric includes:

  • The label keys defined on that metric's descriptor itself
  • The label keys defined on the resource descriptor for the metric's associated resource type

For example, the io.confluent.kafka.server/received_bytes metric has the following labels:

  • resource.kafka.id - The Kafka cluster to which the metric pertains
  • metric.topic - The Kafka topic to which the bytes were produced
  • metric.partition - The partition to which the bytes were produced

Datasets

A dataset is a logical collection of metrics that can be queried together. The dataset is a required URL template parameter for every endpoint in this API. The following datasets are currently available:

Dataset: cloud (generally-available)

Metrics originating from Confluent Cloud resources.

Requests to this dataset require a resource filter (e.g. Kafka cluster ID, Connector ID, etc.) in the query for authorization purposes. The client's API key must be authorized for the resource referenced in the filter.

Dataset: hosted-monitoring (preview)

Metrics originating from self-managed Confluent Platform components sent via Confluent Telemetry Reporter.

Requests to this dataset do not require a filter for authorization purposes. Instead, they are implicitly scoped to resources for which the client's API key is authorized.

Client Considerations and Best Practices

Rate Limiting

To protect the stability of the API and keep it available to all users, Confluent employs multiple safeguards. Users who send many requests in quick succession or perform too many concurrent operations may be throttled or have their requests rejected with an error. When a rate limit is breached, an HTTP 429 Too Many Requests error is returned. Our current limit is 50 requests per minute per IP address.

Retries

Implement retry logic in your client to gracefully handle transient API failures. This should be done by watching for error responses and building in a retry mechanism. This mechanism should follow a capped exponential backoff policy to prevent retry amplification ("retry storms") and also introduce some randomness ("jitter") to avoid the thundering herd effect.
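As a sketch of the approach described above (the function and parameter names are illustrative, not part of any Confluent SDK):

```python
import random
import time

def backoff_delays(max_attempts=5, base=1.0, cap=30.0):
    """Yield capped exponential backoff delays with full jitter, in seconds.
    The cap prevents retry amplification; the jitter avoids thundering herds."""
    for attempt in range(max_attempts):
        yield random.uniform(0, min(cap, base * (2 ** attempt)))

def with_retries(send_request, max_attempts=5, base=1.0, cap=30.0):
    """Call send_request (a placeholder for the actual HTTP call), retrying
    throttled (429) or transient server failures with jittered backoff."""
    response = None
    for delay in backoff_delays(max_attempts, base, cap):
        response = send_request()
        if response.status_code not in (429, 500, 502, 503):
            return response
        time.sleep(delay)  # back off before the next attempt
    return response  # last response after exhausting all attempts
```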

Metric Availability Delays

Metric data points are typically available for query in the API within a few minutes of their origination at the source. This delay can vary based on network conditions and processing overhead. Clients that are polling (or "scraping") metrics into an external monitoring system should account for this delay in their polling requests. API requests that fail to incorporate the availability delay into the query interval may have incomplete data in the response.
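One way to account for the delay is to end the query interval a few minutes in the past. A Python sketch, assuming an illustrative 5-minute safety margin (the actual delay varies and is not a documented constant):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def delayed_interval(window: timedelta,
                     delay: timedelta = timedelta(minutes=5),
                     now: Optional[datetime] = None) -> str:
    """Build an ISO-8601 <start>/<end> interval that ends `delay` before now,
    truncated to a minute boundary, so the query avoids data points that may
    not yet be available. The 5-minute default is an assumed safety margin."""
    now = now or datetime.now(timezone.utc)
    end = (now - delay).replace(second=0, microsecond=0)
    start = end - window
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return f"{start.strftime(fmt)}/{end.strftime(fmt)}"
```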

Pagination

Cursors, tokens, and corresponding pagination links may expire after a short amount of time. In this case, the API will return a 400 Bad Request error and the client will need to restart from the beginning.

The client should have no trouble pausing between rate limiting windows, but persisting cursors for hours or days is not recommended.
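The guidance above can be sketched as a pagination loop. Here `post_query` is a hypothetical stand-in for the HTTP call, and the `meta.pagination.next_page_token` path is an illustrative shape for where the token is returned:

```python
def fetch_all(post_query, body):
    """Collect all pages of results, restarting from the beginning if a
    cursor expires (HTTP 400). `post_query` is a placeholder callable that
    takes the request body and an optional page_token and returns a
    (status_code, parsed_response) pair."""
    results, token = [], None
    while True:
        status, page = post_query(body, page_token=token)
        if status == 400 and token is not None:
            results, token = [], None  # cursor expired: restart from the beginning
            continue
        results.extend(page["data"])
        token = page.get("meta", {}).get("pagination", {}).get("next_page_token")
        if not token:
            return results
```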

Version 2

generally-available

Version 2 of the Metrics API adds the ability to query metrics for Kafka Connect, ksqlDB, and Schema Registry.

This capability is enabled by the introduction of a Resource abstraction into the API object model. Resources represent the entity against which metrics are collected.

Migration Guide

The following endpoint URLs have changed in version 2:

Endpoint: Metrics discovery
  Version 1: /metrics/{dataset}/descriptors
  Version 2: /metrics/{dataset}/descriptors/metrics

The label prefix syntax has changed in version 2:

Resource labels (new): N/A -> resource.<label>
Kafka cluster ID: metric.label.cluster_id -> resource.kafka.id
All other metric labels: metric.label.<label> -> metric.<label>

This example shows a request to /v1/metrics/cloud/query migrated to the new v2 syntax.

Version 1 Request

{
  "group_by": [
    "metric.label.topic"
  ],
  "aggregations": [{
    "metric": "io.confluent.kafka.server/received_bytes",
    "agg": "SUM"
  }],
  "filter": {
    "field": "metric.label.cluster_id",
    "op": "EQ",
    "value": "lkc-00000"
  },
  "granularity": "ALL",
  "intervals" : [
    "2020-01-01T00:00:00Z/PT1H"
  ]
}

Version 2 Request

{
  "group_by": [
    "metric.topic"
  ],
  "aggregations": [{
    "metric": "io.confluent.kafka.server/received_bytes",
    "agg": "SUM"
  }],
  "filter": {
    "field": "resource.kafka.id",
    "op": "EQ",
    "value": "lkc-00000"
  },
  "granularity": "ALL",
  "intervals" : [
    "2020-01-01T00:00:00Z/PT1H"
  ]
}
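The label renames in the migration table above can be captured in a small helper; a hedged sketch:

```python
def translate_label(v1_label: str) -> str:
    """Map a version 1 label name to its version 2 equivalent, following
    the migration table: the cluster ID becomes a resource label, and the
    metric.label. prefix is shortened to metric. for all other labels."""
    if v1_label == "metric.label.cluster_id":
        return "resource.kafka.id"
    if v1_label.startswith("metric.label."):
        return "metric." + v1_label[len("metric.label."):]
    return v1_label  # already v2 syntax; pass through unchanged
```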

List metric descriptors

Lists all the metric descriptors for a dataset.

A metric descriptor represents metadata for a metric, including its data type and labels. This metadata is provided programmatically to enable clients to dynamically adjust as new metrics are added to the dataset, rather than hardcoding metric names in client code.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to list metric descriptors for. Currently the only supported dataset name is cloud.

query Parameters
page_size
integer [ 1 .. 1000 ]
Default: 100

The maximum number of results to return. The page size is an integer in the range from 1 through 1000.

page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination.

resource_type
required
string (ResourceType)

The type of the resource to list metric descriptors for.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "links": null,
  "meta": {}
}

List resource descriptors

Lists all the resource descriptors for a dataset.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to list resource descriptors for. Currently the only supported dataset name is cloud.

query Parameters
page_size
integer [ 1 .. 1000 ]
Default: 100

The maximum number of results to return. The page size is an integer in the range from 1 through 1000.

page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "meta": {},
  "links": {}
}

Query metric values

Query for metric values in a dataset.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to query. Currently the only supported dataset name is cloud.

query Parameters
page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination. Pagination is only supported for requests containing a group_by element.

Request Body schema: application/json
aggregations
required
Array of objects (Aggregation) 1 items

Specifies which metrics to query and the aggregation operator to apply across the group_by labels. Currently, only one aggregation per request is supported.

group_by
Array of strings

Specifies how data gets bucketed by label(s).

granularity
required
string <ISO-8601 duration (PnDTnHnMn.nS) or ALL> (Granularity)
Enum: "PT1M" "PT5M" "PT15M" "PT30M" "PT1H" "PT4H" "PT6H" "PT12H" "P1D" "ALL"

Defines the time buckets that the aggregation is performed for. Buckets are specified in ISO-8601 duration syntax, but only the enumerated values are supported. Buckets are aligned to UTC boundaries. The special ALL value defines a single bucket for all intervals.

The allowed granularity for a query is restricted by the length of that query's interval.

Granularity Maximum Interval Length
PT1M (1 minute) 6 hours
PT5M (5 minutes) 1 day
PT15M (15 minutes) 4 days
PT30M (30 minutes) 7 days
PT1H (1 hour) Unlimited
PT4H (4 hours) Unlimited
PT6H (6 hours) Unlimited
PT12H (12 hours) Unlimited
P1D (1 day) Unlimited
ALL Unlimited
filter
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

order_by
Array of objects (OrderBy) non-empty

Sort ordering for result groups. Only valid for granularity: "ALL". If not specified, defaults to the first aggregation in descending order.

Note that this ordering applies to the groups. Within a group (or for ungrouped results), data points are always ordered by timestamp in descending order.

intervals
required
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) that the query runs over. A time range is an ISO-8601 interval.

The keyword now can be used in place of a timestamp to refer to the current time. Offset and truncation modifiers can also be applied to the now expression:

Modifier Syntax Examples
Offset (+|-)<amount>(m|h|d) -2m (minus 2 minutes)
-1h (minus 1 hour)
Truncation |(m|h|d) |m (round down to start of minute)
|h (round down to start of hour)

All hour/day truncation is performed against the UTC timezone.

If now is 2020-01-01T02:13:27Z, some examples are:

  • now-2m|m: now minus 2 minutes, truncated to start of minute.
    Resolves to 2020-01-01T02:11:00Z
  • now|h: now truncated to start of hour.
    Resolves to 2020-01-01T02:00:00Z
  • now-1d|d: now minus 1 day, truncated to start of day.
    Resolves to 2019-12-31T00:00:00Z

When using now, it is recommended to apply a negative offset to avoid incomplete data (see metric availability delays) and align to minute boundaries (e.g. now-2m|m).
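The resolution rules can be expressed in a few lines of Python; this sketch implements the offset and truncation modifiers described above:

```python
import re
from datetime import datetime, timedelta

UNITS = {"m": "minutes", "h": "hours", "d": "days"}

def resolve_now_expr(expr: str, now: datetime) -> datetime:
    """Resolve a `now` expression with optional offset and truncation
    modifiers (e.g. now-2m|m) against the given UTC timestamp."""
    match = re.fullmatch(r"now(?:([+-])(\d+)([mhd]))?(?:\|([mhd]))?", expr)
    if not match:
        raise ValueError(f"bad now expression: {expr}")
    sign, amount, unit, trunc = match.groups()
    result = now
    if amount:
        delta = timedelta(**{UNITS[unit]: int(amount)})
        result = result + delta if sign == "+" else result - delta
    if trunc:  # truncation is performed against UTC boundaries
        result = result.replace(second=0, microsecond=0)
        if trunc in ("h", "d"):
            result = result.replace(minute=0)
        if trunc == "d":
            result = result.replace(hour=0)
    return result
```

Running this against the documented examples (with now fixed at 2020-01-01T02:13:27Z) reproduces the resolved timestamps listed above.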

limit
integer [ 1 .. 1000 ]
Default: 100

The maximum number of groups to return. The maximum number of data points in the response is equal to limit * (interval / granularity). For example, with an interval of 1 day, granularity of PT1H, and limit of 2 there will be a maximum of 48 data points in the response (24 for each group).
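The data-point bound can be computed directly; a small helper reproducing the documented arithmetic:

```python
from datetime import timedelta

def max_data_points(limit: int, interval: timedelta, granularity: timedelta) -> int:
    """Upper bound on data points in a query response:
    limit * (interval / granularity)."""
    return limit * int(interval / granularity)

# Documented example: 1-day interval, PT1H granularity, limit 2 -> 48 points.
```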

format
string (ResponseFormat)
Default: "FLAT"
Enum: "FLAT" "GROUPED"

Desired response format for query results.

  • FLAT (default): Each item in the response data array represents a data point in the timeseries. Each data point contains the timestamp, metric aggregation value and attributes for the group_by labels.
  • GROUPED: Each item in the response data array represents a group. Each group contains attributes for the group_by labels and an array of points for the metric aggregation timeseries. Only allowed when group_by is non-empty.

Please see the response schema and accompanying examples for more details.

Responses

Request samples

Content type
application/json
{
  "group_by": [],
  "aggregations": [],
  "filter": {},
  "order_by": [],
  "granularity": "PT1H",
  "intervals": [],
  "limit": 5
}

Response samples

Content type
application/json
Example
{
  "data": []
}

Export metric values

Early Access Request Access

Export current metric values in Prometheus format, suitable for import into an external monitoring system. Returns the single most recent data point for each metric, for each distinct combination of labels.

Supported datasets and metrics

Only the cloud dataset is supported for this endpoint.

Only a subset of metrics and labels from the dataset are included in the export response. To request a particular metric or label be added, please contact Confluent Support.

Metric translation

Metric and label names are translated to adhere to Prometheus restrictions. The resource. and metric. prefixes from label names are also dropped to simplify consumption in downstream systems.

Counter metrics are classified as the Prometheus gauge type to conform to required semantics: the counter type in Prometheus must be monotonically increasing, whereas Confluent Metrics API counters are represented as deltas.

Timestamp offset

To account for metric availability delays, this endpoint returns metrics from the current timestamp minus a fixed offset. The current offset is 2 minutes rounded down to the start of the minute. For example, if a request is received at 12:03:41, the returned metrics will have the timestamp 12:01:00 and represent the data for the interval 12:01:00 through 12:02:00 (exclusive).

NOTE: Confluent may choose to lengthen or shorten this offset based on operational considerations. Doing so is considered a backwards-compatible change.

To accommodate this offset, the timestamps in the response should be honored when importing the metrics. For example, in Prometheus this can be controlled using the honor_timestamps flag.
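The offset arithmetic can be sketched as follows (the 2-minute value reflects the currently documented offset and may change):

```python
from datetime import datetime, timedelta

def export_timestamp(request_time: datetime,
                     offset: timedelta = timedelta(minutes=2)) -> datetime:
    """Timestamp of the data returned by the export endpoint: the request
    time minus the offset, rounded down to the start of the minute."""
    return (request_time - offset).replace(second=0, microsecond=0)

# Documented example: a request at 12:03:41 yields data stamped 12:01:00.
```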

Rate limits

Since metrics are available at minute granularity, it is expected that clients scrape this endpoint at most once per minute. To allow for ad-hoc testing, the rate limit is enforced at hourly granularity. A small buffer is included to accommodate retries.

The rate limit is evaluated on a per-resource basis. For example, the following requests would each be allowed a once-per-minute rate:

  • GET /v2/metrics/cloud/export?resource.kafka.id=lkc-1&resource.kafka.id=lkc-2
  • GET /v2/metrics/cloud/export?resource.kafka.id=lkc-3

Example Prometheus scrape configuration

Here is an example Prometheus configuration for scraping this endpoint:

scrape_configs:
  - job_name: Confluent Cloud
    scrape_interval: 1m
    scrape_timeout: 1m
    honor_timestamps: true
    static_configs:
      - targets:
        - api.telemetry.confluent.cloud
    scheme: https
    basic_auth:
      username: <Cloud API Key>
      password: <Cloud API Secret>
    metrics_path: /v2/metrics/cloud/export
    params:
      "resource.kafka.id":
        - lkc-1
        - lkc-2
Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to export metrics for. Currently the only supported dataset name is cloud.

query Parameters
resource.kafka.id
string

The ID of the Kafka cluster to export metrics for.

resource.connector.id
string

The ID of the Connector to export metrics for.

resource.ksql.id
string

The ID of the ksqlDB application to export metrics for.

resource.schema_registry.id
string

The ID of the Schema Registry to export metrics for.

Responses

Response samples

Content type
# HELP confluent_kafka_server_received_bytes The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_bytes gauge
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicA"} 10.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicB"} 20.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-2",topic="topicA"} 30.0 1609459200

# HELP confluent_kafka_server_sent_bytes The delta count of bytes of the customer's data sent to the network. Each sample is the number of bytes sent since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_sent_bytes gauge
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicA"} 90.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicB"} 80.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-2",topic="topicA"} 70.0 1609459200

Query label values

Enumerates label values for a single metric.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to query.

query Parameters
page_token
string

The next page token. The token is returned by the previous request as part of meta.pagination.

Request Body schema: application/json
metric
string

The metric that the label values are enumerated for.

group_by
required
Array of strings 1 items

The label(s) that the values are enumerated for.

filter
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

intervals
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) for which available metrics will be listed. A time range is an ISO-8601 interval. When unspecified, the value defaults to the last hour before the request was made.

limit
integer [ 1 .. 1000 ]
Default: 100

Responses

Request samples

Content type
application/json
{
  "metric": "io.confluent.kafka.server/sent_bytes",
  "group_by": [],
  "filter": {},
  "limit": 3,
  "intervals": []
}

Response samples

Content type
application/json
{
  "data": [],
  "meta": {}
}

Version 1

deprecated

Version 1 of the Metrics API supports querying metrics for Kafka clusters.

List all metric descriptors Deprecated

Lists all the metric descriptors for a dataset.

A metric descriptor represents metadata for a metric, including its data type and labels. This metadata is provided programmatically to enable clients to dynamically adjust as new metrics are added to the dataset, rather than hardcoding metric names in client code.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to list metric descriptors for. Currently the only supported dataset name is cloud.

query Parameters
page_size
integer [ 1 .. 1000 ]
Default: 100

The maximum number of results to return. The page size is an integer in the range from 1 through 1000.

page_token
string (PageToken)

The next page token. The token is returned by the previous request as part of meta.pagination.

Responses

Response samples

Content type
application/json
{
  "data": [],
  "links": null,
  "meta": {}
}

Query metric values Deprecated

Queries metrics in a dataset.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to query. Currently the only supported dataset name is cloud.

Request Body schema: application/json
aggregations
required
Array of objects (Aggregation) 1 items

Specifies which metrics to query and the aggregation operator to apply across the group_by labels. Currently, only one aggregation per request is supported.

group_by
Array of strings

Specifies how data gets bucketed by label(s).

granularity
required
string <ISO-8601 duration (PnDTnHnMn.nS) or ALL> (Granularity)
Enum: "PT1M" "PT5M" "PT15M" "PT30M" "PT1H" "PT4H" "PT6H" "PT12H" "P1D" "ALL"

Defines the time buckets that the aggregation is performed for. Buckets are specified in ISO-8601 duration syntax, but only the enumerated values are supported. Buckets are aligned to UTC boundaries. The special ALL value defines a single bucket for all intervals.

The allowed granularity for a query is restricted by the length of that query's interval.

Granularity          Maximum Interval Length
PT1M  (1 minute)     6 hours
PT5M  (5 minutes)    1 day
PT15M (15 minutes)   4 days
PT30M (30 minutes)   7 days
PT1H  (1 hour)       Unlimited
PT4H  (4 hours)      Unlimited
PT6H  (6 hours)      Unlimited
PT12H (12 hours)     Unlimited
P1D   (1 day)        Unlimited
ALL                  Unlimited
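A client can enforce these limits before submitting a query. The sketch below encodes the table as a lookup (None meaning unlimited); it is a local validation helper, not part of the API.

```python
from datetime import timedelta

# The granularity/interval limits from the table above, as a lookup.
# None means the interval length is unlimited for that granularity.
MAX_INTERVAL = {
    "PT1M": timedelta(hours=6),
    "PT5M": timedelta(days=1),
    "PT15M": timedelta(days=4),
    "PT30M": timedelta(days=7),
    "PT1H": None, "PT4H": None, "PT6H": None, "PT12H": None,
    "P1D": None, "ALL": None,
}

def granularity_allowed(granularity, interval_length):
    """Return True if a query with this interval length may use the granularity."""
    limit = MAX_INTERVAL[granularity]
    return limit is None or interval_length <= limit
```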
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

Array of objects (OrderBy) non-empty

Sort ordering for result groups. Only valid for granularity: "ALL". If not specified, defaults to the first aggregation in descending order.

Note that this ordering applies to the groups. Within a group (or for ungrouped results), data points are always ordered by timestamp in descending order.

intervals
required
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) that the query runs over. A time range is an ISO-8601 interval.

The keyword now can be used in place of a timestamp to refer to the current time. Offset and truncation modifiers can also be applied to the now expression:

Modifier     Syntax                 Examples
Offset       (+|-)<amount>(m|h|d)   -2m (minus 2 minutes), -1h (minus 1 hour)
Truncation   |(m|h|d)               |m (round down to start of minute), |h (round down to start of hour)

All hour/day truncation is performed against the UTC timezone.

If now is 2020-01-01T02:13:27Z, some examples are:

  • now-2m|m: now minus 2 minutes, truncated to start of minute.
    Resolves to 2020-01-01T02:11:00Z
  • now|h: now truncated to start of hour.
    Resolves to 2020-01-01T02:00:00Z
  • now-1d|d: now minus 1 day, truncated to start of day.
    Resolves to 2019-12-31T00:00:00Z

When using now, it is recommended to apply a negative offset to avoid incomplete data (see metric availability delays) and align to minute boundaries (e.g. now-2m|m).
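The offset and truncation semantics can be reproduced locally, which is useful for predicting what interval a now expression resolves to. This is a sketch of the documented behavior, not API code; truncation is performed in UTC as stated above.

```python
from datetime import datetime, timedelta, timezone

def resolve(now, offset=timedelta(0), truncate=None):
    """Apply an offset, then truncate to minute/hour/day (UTC), per the modifier table."""
    t = now + offset
    if truncate == "m":
        t = t.replace(second=0, microsecond=0)
    elif truncate == "h":
        t = t.replace(minute=0, second=0, microsecond=0)
    elif truncate == "d":
        t = t.replace(hour=0, minute=0, second=0, microsecond=0)
    return t

# The documented example time: now = 2020-01-01T02:13:27Z
now = datetime(2020, 1, 1, 2, 13, 27, tzinfo=timezone.utc)
# now-2m|m -> 2020-01-01T02:11:00Z
# now|h    -> 2020-01-01T02:00:00Z
# now-1d|d -> 2019-12-31T00:00:00Z
```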

limit
integer [ 1 .. 1000 ]
Default: 100

The maximum number of groups to return. The maximum number of data points in the response is equal to limit * (interval / granularity). For example, with an interval of 1 day, granularity of PT1H, and limit of 2 there will be a maximum of 48 data points in the response (24 for each group).
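The data-point bound above is a simple product, shown here as a worked calculation reproducing the documented example (1-day interval, PT1H granularity, limit of 2 gives 48 points).

```python
from datetime import timedelta

def max_data_points(limit, interval, granularity):
    """Upper bound on response size: limit * (interval / granularity)."""
    return limit * int(interval / granularity)

# 1-day interval at hourly granularity with limit=2 -> 2 * 24 = 48 points.
bound = max_data_points(2, timedelta(days=1), timedelta(hours=1))
```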

format
string (ResponseFormat)
Default: "FLAT"
Enum: "FLAT" "GROUPED"

Desired response format for query results.

  • FLAT (default): Each item in the response data array represents a data point in the timeseries. Each data point contains the timestamp, metric aggregation value and attributes for the group_by labels.
  • GROUPED: Each item in the response data array represents a group. Each group contains attributes for the group_by labels and an array of points for the metric aggregation timeseries. Only allowed when group_by is non-empty.

Please see the response schema and accompanying examples for more details.
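Putting the pieces together, a query body might be assembled as below. The field names inside the aggregation and field-filter objects, the cluster id, and the label names are assumptions made for this sketch and should be checked against the request schema; the metric name and the now-expression interval come from the examples above.

```python
import json

# Illustrative Version 1 query body. Aggregation/filter field names and
# the cluster id are assumptions for this sketch, not confirmed schema.
query = {
    "aggregations": [
        {"metric": "io.confluent.kafka.server/sent_bytes", "agg": "SUM"}
    ],
    "filter": {"field": "metric.label.cluster_id", "op": "EQ", "value": "lkc-XXXXX"},
    "group_by": ["metric.label.topic"],
    "granularity": "PT1H",
    "intervals": ["now-1d|d/now|d"],  # yesterday, aligned to UTC day boundaries
    "limit": 5,
}

payload = json.dumps(query, indent=2)  # body for the POST request
```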

Responses

Request samples

Content type
application/json
{
  "group_by": [],
  "aggregations": [],
  "filter": {},
  "order_by": [],
  "granularity": "PT1H",
  "intervals": [],
  "limit": 5
}

Response samples

Content type
application/json
Example
{
  "data": []
}

Query label values Deprecated

Enumerates label values for a single metric.

Authorizations:
path Parameters
dataset
required
string (Dataset)

The dataset to query.

query Parameters
page_token
string

The next page token. The token is returned by the previous request as part of meta.pagination.

Request Body schema: application/json
metric
required
string

The metric that the label values are enumerated for.

group_by
required
Array of strings 1 item

The label(s) that the values are enumerated for.

Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter)

Metric filter.

intervals
Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty

Defines the time range(s) for which available metrics will be listed. A time range is an ISO-8601 interval. When unspecified, the value defaults to the last hour before the request was made.

limit
integer [ 1 .. 1000 ]
Default: 100

The maximum number of label values to return.
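A minimal label-values request needs only the metric and the label(s) to enumerate. The sketch below builds such a body; the label name "metric.label.cluster_id" is an assumption for illustration, and intervals is omitted so the server applies its last-hour default.

```python
import json

# Illustrative label-values request body. The label name is an assumed
# example; intervals is omitted to use the default (the last hour).
body = {
    "metric": "io.confluent.kafka.server/sent_bytes",
    "group_by": ["metric.label.cluster_id"],
    "limit": 3,
}

request_json = json.dumps(body)
```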

Responses

Request samples

Content type
application/json
{
  "metric": "io.confluent.kafka.server/sent_bytes",
  "group_by": [],
  "filter": {},
  "limit": 3,
  "intervals": []
}

Response samples

Content type
application/json
{
  "data": [],
  "meta": {}
}