The Confluent Cloud Metrics API provides actionable operational metrics about your Confluent Cloud deployment. This is a queryable HTTP API in which the user will POST a query written in JSON and get back a time series of metrics specified by the query.
Comprehensive documentation is available on docs.confluent.io.
This information is also available programmatically via the descriptors endpoint.
Confluent uses API keys for integrating with Confluent Cloud. Applications must be authorized and authenticated before they can access or manage resources in Confluent Cloud. You can manage your API keys in the Confluent Cloud Dashboard or Confluent Cloud CLI.
An API key is owned by a User or Service Account and inherits the permissions granted to the owner.
API keys are divided into two classes; for the Metrics API, Cloud API Keys are required. Cloud API Keys can be created using the Confluent Cloud CLI:
ccloud api-key create --resource cloud
All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication will also fail.
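As a minimal sketch of an authenticated request (assuming the Python requests library; the host and path match the query and Prometheus examples elsewhere in this document, and the credentials and cluster ID are placeholders):

import requests

# Placeholders for a Cloud API key owned by a user or service account.
API_KEY = "<Cloud API Key>"
API_SECRET = "<Cloud API Secret>"

# A small /query body in the format shown later in this document.
query = {
    "aggregations": [{"metric": "io.confluent.kafka.server/received_bytes"}],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-00000"},
    "granularity": "PT1H",
    "intervals": ["2020-01-01T00:00:00Z/PT1H"],
}

# All requests go over HTTPS and carry the API key via HTTP basic auth.
resp = requests.post(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query",
    auth=(API_KEY, API_SECRET),
    json=query,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"])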
The Confluent Cloud Metrics API also supports OAuth 2.0 by accepting Confluent Security Token Service (STS) tokens as credentials for authenticating to the Metrics API. See the steps to [Authenticate access to Confluent Cloud APIs using Confluent STS tokens](https://docs.confluent.io/cloud/current/access-management/authenticate/oauth/access-rest-apis-sts.html#authenticate-access-to-ccloud-apis-using-confluent-security-token-service-sts-tokens).
Confluent APIs ensure stability for your integrations by avoiding unexpected breaking changes. Confluent will make non-breaking API changes without advance notice. Thus, API clients must follow the Compatibility Policy below to ensure your integration remains stable. All APIs follow the API Lifecycle Policy described below, which describes the guarantees API clients can rely on.
Breaking changes will be widely communicated in advance in accordance with our Deprecation Policy. Confluent will provide timelines and a migration path for all API changes, where available. Be sure to subscribe to one or more communication channels so you don't miss any updates!
One exception to these guidelines is for critical security issues. We will take any necessary actions to mitigate any critical security issue as soon as possible, which may include disabling the vulnerable functionality until a proper solution is available.
Do not consume any Confluent API unless it is documented in the API Reference. All undocumented endpoints should be considered private, subject to change without notice, and not covered by any agreements.
Note: The "v1" in the URL is not a "major version" in the Semantic Versioning sense. It is a "generational version" or "meta version", as seen in other APIs like Github API or the Stripe API.
The health-plus dataset is now available in preview. See the Datasets section for more details.
The io.confluent.kafka.server/cluster_active_link_count metric is deprecated. Please use the io.confluent.kafka.server/cluster_link_count metric instead.
The following metrics are now available in the /export endpoint:
io.confluent.kafka.server/request_bytes
io.confluent.kafka.server/response_bytes
io.confluent.kafka.server/cluster_link_destination_response_bytes
io.confluent.kafka.server/cluster_link_source_response_bytes
io.confluent.kafka.server/cluster_link_count
io.confluent.kafka.server/cluster_link_mirror_topic_count
io.confluent.kafka.server/cluster_link_mirror_topic_offset_lag
io.confluent.kafka.server/cluster_link_mirror_topic_bytes
All API Version 1 endpoints are no longer supported as of 2022-10-17. API users should migrate to API Version 2.
All API Version 1 endpoints are now deprecated and will be removed on 2022-04-04. API users should migrate to API Version 2.
New metrics are being introduced that require alternative aggregation functions (e.g. MAX). When querying those metrics, using agg: "SUM" will return an error. It is recommended that clients omit the agg field in the request so that the required aggregation function for the specific metric is automatically applied on the backend.
Note: The initial version of the Metrics API required clients to effectively hardcode agg: "SUM" in all queries. In early 2021, the agg field was made optional, but many clients have not been updated to omit the agg field.
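For illustration, a query body that omits the agg field (using the same request shape as the examples later in this document; the cluster ID is a placeholder) could look like this sketch:

# Sketch of a /query request body with no "agg" specified, so the backend
# applies the aggregation function required by the metric.
query = {
    "aggregations": [
        {"metric": "io.confluent.kafka.server/received_bytes"}
    ],
    "filter": {"field": "resource.kafka.id", "op": "EQ", "value": "lkc-00000"},
    "granularity": "PT1H",
    "intervals": ["2020-01-01T00:00:00Z/PT1H"],
}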
The /query endpoint now supports cursor-based pagination, similar to the /descriptors and /attributes endpoints.
See the Version 2 section below for a detailed description of changes and migration guide.
Version 2 of the Metrics API is now available in Preview. See the Version 2 section below for a detailed description of changes.
A bug in the active_connection_count metric that affected a subset of customers was fixed. Customers exporting the metric to an external monitoring system may observe a discontinuity between historical results and current results due to this one-time correction.
This release includes the following changes from the preview release:
The /query request now includes a format attribute which controls the result structuring in the response body. See the /query endpoint definition for more details.
The new /available endpoint allows determining which metrics are available for a set of resources (defined by labels). This endpoint can be used to determine which subset of metrics is currently available for a specific resource (e.g. a Confluent Cloud Kafka cluster).
The CUMULATIVE_(INT|DOUBLE) metric type enumeration was changed to COUNTER_(INT|DOUBLE). This was done to better align with OpenTelemetry conventions. In tandem with this change, several metrics that were improperly classified as GAUGEs were re-classified as COUNTERs.
The /delta suffix has been removed from the following metrics:
io.confluent.kafka.server/received_bytes/delta
io.confluent.kafka.server/sent_bytes/delta
io.confluent.kafka.server/request_count/delta
The /available endpoint (which was in Preview status) has been removed from the API. The /descriptors endpoint can still be used to determine the universe of available metrics for the Metrics API.
The legacy metric names are deprecated and will stop functioning on 2020-07-01.
The following status labels are applicable to APIs, features, and SDK versions, based on the current support status of each:
Resources, operations, and individual fields in the OpenAPI spec will be annotated with x-lifecycle-stage, x-deprecated-at, and x-sunset-at. These annotations will appear in the corresponding API Reference documentation. An API is "Generally Available" unless explicitly marked otherwise.
Confluent APIs are governed by the Confluent Cloud Upgrade Policy, under which we will make backward-incompatible changes and deprecations approximately once per year, providing 180 days notice via email to all registered Confluent Cloud users.
An API version is backwards-compatible if a program written against the previous version of the API will continue to work the same way, without modification, against this version of the API.
Confluent considers the following changes to be backwards-compatible:
Changes to the length or format of opaque strings, such as object IDs. You can safely store these fields in your systems as a VARCHAR(255) COLLATE utf8_bin column. Do not depend on a specific ID format (e.g. the lkc- prefix on Kafka cluster IDs).
Sending redirects (301, 307) instead of directly returning the resource. Clients must handle HTTP-level redirects, and respect HTTP headers (e.g. Location).
Confluent will announce deprecations at least 180 days in advance of a breaking change and we will continue to maintain the deprecated APIs in their original form during this time.
Exceptions to this policy apply in case of critical security vulnerabilities or functional defects.
When a deprecation is announced, the details and any relevant migration information will be available on the following channels:
The object model for the Metrics API is designed similarly to the OpenTelemetry standard.
A metric is a numeric attribute of a resource, measured at a specific point in time, labeled with contextual metadata gathered at the point of instrumentation.
There are two types of metrics:
GAUGE: An instantaneous measurement of a value. Gauge metrics are implicitly averaged when aggregating over time. Example: io.confluent.kafka.server/retained_bytes
COUNTER: The count of occurrences in a single (one minute) sampling interval (unless otherwise stated in the metric description). Counter metrics are implicitly summed when aggregating over time. Example: io.confluent.kafka.server/received_bytes
The list of metrics and their labels is available at /docs/descriptors.
A resource represents an entity against which metrics are collected. For example, a Kafka cluster, a Kafka Connector, a ksqlDB application, etc.
Each metric descriptor is associated with one or more resource descriptors, representing the resource types to which that metric can apply. A metric data point is associated with a single resource instance, identified by the resource labels on that metric data point.
For example, metrics emitted by Kafka Connect are associated with the connector resource type. Data points for those metrics include resource labels identifying the specific connector instance that emitted the metric.
The list of resource types and labels is discoverable via the /descriptors/resources endpoint.
A label is a key-value attribute associated with a metric data point.
Labels can be used in queries to filter or group the results. Labels must be prefixed when used in queries:
metric.<label> (for metric labels), for example metric.topic
resource.<resource-type>.<label> (for resource labels), for example resource.kafka.id
The set of valid label keys for a metric includes the labels defined on the metric's own descriptor and the labels of the resource types associated with that metric.
For example, the io.confluent.kafka.server/received_bytes metric has the following labels:
resource.kafka.id - The Kafka cluster to which the metric pertains
metric.topic - The Kafka topic to which the bytes were produced
metric.partition - The partition to which the bytes were produced
A dataset is a logical collection of metrics that can be queried together. The dataset is a required URL template parameter for every endpoint in this API. The following datasets are currently available:
Dataset | Description |
---|---|
cloud | Metrics originating from Confluent Cloud resources. Requests to this dataset require a resource |
cloud-custom | Metrics originating from custom Confluent Cloud resources (e.g. a Custom Connector). Requests to this dataset require a resource |
health-plus | Metrics originating from Confluent Platform resources. |
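For instance, the dataset name is interpolated directly into the endpoint path. A trivial sketch (the host matches the Prometheus scrape example later in this document; the v2 paths follow the endpoint table in the Version 2 section):

# The dataset is a URL template parameter, e.g. /v2/metrics/{dataset}/query.
BASE = "https://api.telemetry.confluent.cloud/v2/metrics"
dataset = "cloud"
query_url = f"{BASE}/{dataset}/query"
export_url = f"{BASE}/{dataset}/export"
descriptors_url = f"{BASE}/{dataset}/descriptors/metrics"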
To protect the stability of the API and keep it available to all users, Confluent employs multiple safeguards. Users who send many requests in quick succession or perform too many concurrent operations may be throttled or have their requests rejected with an error.
When a rate limit is breached, an HTTP 429 Too Many Requests error is returned. The following headers are sent back to provide assistance in dealing with rate limits.
Header | Description |
---|---|
rateLimit-limit | The maximum number of requests you're permitted to make per time period. |
rateLimit-reset | The relative time in seconds until the current rate limit window resets. |
rateLimit-remaining | The number of requests remaining in the current rate-limit window. Important: This differs from GitHub's and Twitter's same-named headers, which use UTC epoch seconds. We use relative time to avoid client/server time synchronization issues. |
Rate limits are enforced at multiple scopes. You get two sets of the headers above, each specifying the limit of one scope.
A global rate limit of 60 requests per IP address, per minute is enforced.
Additionally, some endpoint-specific rate limits are enforced.
Endpoint | Rate limit |
---|---|
/v2/metrics/{dataset}/export | 80 requests per resource, per hour, per principal. See the export endpoint documentation for details. |
Implement retry logic in your client to gracefully handle transient API failures. This should be done by watching for error responses and building in a retry mechanism. This mechanism should follow a capped exponential backoff policy to prevent retry amplification ("retry storms") and also introduce some randomness ("jitter") to avoid the thundering herd effect.
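A minimal sketch of such a retry loop (assuming the Python requests library; the retry count, backoff cap, and jitter range are illustrative choices, not values prescribed by the API):

import random
import time
import requests

def post_with_retries(url, body, auth, max_retries=5):
    """Retry transient failures with capped exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        resp = requests.post(url, json=body, auth=auth, timeout=30)
        # Retry on rate limiting (429) and transient server errors (5xx).
        if resp.status_code == 429 or resp.status_code >= 500:
            if attempt == max_retries:
                resp.raise_for_status()
            # Exponential backoff capped at 60 seconds, plus jitter to avoid
            # synchronized retries; honor the relative rateLimit-reset header
            # when the server provides it.
            backoff = min(2 ** attempt, 60)
            reset = resp.headers.get("rateLimit-reset")
            if reset:
                backoff = max(backoff, float(reset))
            time.sleep(backoff + random.uniform(0, 1))
            continue
        resp.raise_for_status()
        return resp.json()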
Metric data points are typically available for query in the API within 5 minutes of their
origination at the source. This latency can vary based on network conditions and processing
overhead. Clients that are polling (or "scraping") metrics into an external monitoring system
should account for this latency in their polling requests. API requests that fail to
incorporate the latency into the query interval
may have incomplete data in the response.
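For example, a polling client might end its query interval a few minutes in the past to allow for this latency. A rough sketch (the 5-minute allowance is illustrative, chosen to match the typical latency described above):

from datetime import datetime, timedelta, timezone

# End the query interval in the past so data points that have not yet been
# ingested do not make the most recent bucket look artificially low.
LATENCY_ALLOWANCE = timedelta(minutes=5)  # illustrative value
WINDOW = timedelta(hours=1)

end = datetime.now(timezone.utc).replace(second=0, microsecond=0) - LATENCY_ALLOWANCE
start = end - WINDOW

# ISO-8601 interval in the <start>/<end> form accepted by the intervals field.
interval = f"{start:%Y-%m-%dT%H:%M:%SZ}/{end:%Y-%m-%dT%H:%M:%SZ}"
print(interval)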
Cursors, tokens, and corresponding pagination links may expire after a short amount of time.
In this case, the API will return a 400 Bad Request
error and the client will need to restart
from the beginning.
The client should have no trouble pausing between rate limiting windows, but persisting cursors for hours or days is not recommended.
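A rough sketch of consuming paginated results while tolerating expired cursors (assuming the page_token parameter and the meta.pagination.next_page_token response field shown in the examples in this document):

import requests

def query_all_pages(url, body, auth):
    """Fetch every page of a query, restarting if the cursor expires."""
    results, page_token = [], None
    while True:
        params = {"page_token": page_token} if page_token else None
        resp = requests.post(url, json=body, auth=auth, params=params, timeout=30)
        if resp.status_code == 400 and page_token:
            # The cursor likely expired; restart pagination from the beginning.
            results, page_token = [], None
            continue
        resp.raise_for_status()
        payload = resp.json()
        results.extend(payload.get("data", []))
        pagination = (payload.get("meta") or {}).get("pagination") or {}
        page_token = pagination.get("next_page_token")
        if not page_token:
            return results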
Version 2 of the Metrics API adds the ability to query metrics for Kafka Connect, ksqlDB, and Schema Registry.
This capability is enabled by the introduction of a Resource abstraction into the API object model. Resources represent the entity against which metrics are collected.
The following endpoint URLs have changed in version 2:
Endpoint | Version 1 (Sunset) | Version 2 |
---|---|---|
Metrics discovery | /metrics/{dataset}/descriptors | /metrics/{dataset}/descriptors/metrics |
The label prefix syntax has changed in version 2:
Label | Version 1 (Sunset) | Version 2 |
---|---|---|
Resource labels (new) | N/A | resource.<label> |
Kafka cluster ID | metric.label.cluster_id | resource.kafka.id |
All other metric labels | metric.label.<label> | metric.<label> |
This example shows a request to /v1/metrics/cloud/query migrated into the new v2 syntax.

Version 1 request:
{
"group_by": [
"metric.label.topic"
],
"aggregations": [{
"metric": "io.confluent.kafka.server/received_bytes",
"agg": "SUM"
}],
"filter": {
"field": "metric.label.cluster_id",
"op": "EQ",
"value": "lkc-00000"
},
"granularity": "ALL",
"intervals" : [
"2020-01-01T00:00:00Z/PT1H"
]
}
Version 2 request:
{
"group_by": [
"metric.topic"
],
"aggregations": [{
"metric": "io.confluent.kafka.server/received_bytes",
"agg": "SUM"
}],
"filter": {
"field": "resource.kafka.id",
"op": "EQ",
"value": "lkc-00000"
},
"granularity": "ALL",
"intervals" : [
"2020-01-01T00:00:00Z/PT1H"
]
}
Lists all the metric descriptors for a dataset.
A metric descriptor represents metadata for a metric, including its data type and labels. This metadata is provided programmatically to enable clients to dynamically adjust as new metrics are added to the dataset, rather than hardcoding metric names in client code.
dataset required | string (Dataset) The dataset to list metric descriptors for. |
page_size | integer [ 1 .. 1000 ] Default: 100 The maximum number of results to return. The page size is an integer in the range from 1 through 1000. |
page_token | string (PageToken) The next page token. The token is returned by the previous request as part of |
resource_type | string (ResourceType) The type of the resource to list metric descriptors for. |
{- "data": [
- {
- "description": "The delta count of bytes received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.",
- "labels": [
- {
- "description": "The name of the Kafka topic.",
- "key": "topic",
- "exportable": true
}
], - "name": "io.confluent.kafka.server/received_bytes",
- "type": "COUNTER_INT64",
- "exportable": true,
- "unit": "By",
- "lifecycle_stage": "GENERAL_AVAILABILITY",
- "resources": [
- "kafka"
]
}, - {
- "description": "The delta count of bytes sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds.",
- "labels": [
- {
- "description": "The name of the Kafka topic.",
- "key": "topic",
- "exportable": true
}
], - "name": "io.confluent.kafka.server/sent_bytes",
- "type": "COUNTER_INT64",
- "exportable": true,
- "unit": "By",
- "lifecycle_stage": "GENERAL_AVAILABILITY",
- "resources": [
- "kafka"
]
}
], - "links": null,
- "meta": {
- "pagination": {
- "page_size": 3,
- "total_size": 3
}
}
}
Lists all the resource descriptors for a dataset.
dataset required | string (Dataset) The dataset to list resource descriptors for. |
page_size | integer [ 1 .. 1000 ] Default: 100 The maximum number of results to return. The page size is an integer in the range from 1 through 1000. |
page_token | string (PageToken) The next page token. The token is returned by the previous request as part of |
{- "data": [
- {
- "type": "kafka",
- "description": "A Kafka cluster.",
- "labels": [
- {
- "key": "kafka.id",
- "description": "ID of the kafka cluster",
- "exportable": true
}
]
}
], - "meta": {
- "pagination": {
- "page_size": 10,
- "total_size": 25
}
}, - "links": {
- "next": "string"
}
}
Query for metric values in a dataset.
dataset required | string (Dataset) The dataset to query. |
page_token | string (PageToken) The next page token. The token is returned by the previous request as part of |
aggregations required | Array of objects (Aggregation) = 1 items Specifies which metrics to query and the aggregation operator to apply. |
group_by | Array of strings Specifies how data gets bucketed by label(s); see the Labels section on using labels for grouping query results. |
granularity required | string <ISO-8601 duration (PnDTnHnMn.nS) or ALL> (Granularity) Enum: "PT1M" "PT5M" "PT15M" "PT30M" "PT1H" "PT4H" "PT6H" "PT12H" "P1D" "ALL" Defines the time buckets that the aggregation is performed for. Buckets are specified in ISO-8601 duration syntax, but only the enumerated values are supported. Buckets are aligned to UTC boundaries. The special ALL value defines a single bucket for the entire interval. The allowed granularity for a query is restricted by the length of that query's interval. Do not confuse intervals with retention time. Confluent uses granularity and intervals to validate requests; retention time is the length of time Confluent stores data. |
filter | Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter) Metric filter. |
order_by | Array of objects (OrderBy) <= 1 items Sort ordering for result groups. Note that this ordering applies to the groups. Within a group (or for ungrouped results), data points are always ordered by timestamp. |
intervals required | Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty Defines the time range(s) that the query runs over. A time range is an ISO-8601 interval. All hour/day truncation is performed against the UTC timezone. |
limit | integer [ 1 .. 1000 ] Default: 100 The maximum number of groups to return. Only supported with a non-empty group_by. |
format | string (ResponseFormat) Default: "FLAT" Enum: "FLAT" "GROUPED" Desired response format for query results. Please see the response schema and accompanying examples for more details. |
{- "group_by": [
- "metric.topic"
], - "aggregations": [
- {
- "metric": "io.confluent.kafka.server/sent_bytes",
- "agg": "SUM"
}
], - "filter": {
- "op": "AND",
- "filters": [
- {
- "field": "resource.kafka.id",
- "op": "EQ",
- "value": "lkc-1234"
}, - {
- "op": "NOT",
- "filter": {
- "field": "metric.topic",
- "op": "EQ",
- "value": "topicA"
}
}
]
}, - "order_by": [
- {
- "metric": "io.confluent.kafka.server/sent_bytes",
- "agg": "SUM",
- "order": "DESCENDING"
}
], - "granularity": "PT1H",
- "intervals": [
- "2019-10-17T20:17:00.000Z/PT2H"
], - "limit": 5
}
{- "data": [
- {
- "timestamp": "2019-10-17T20:17:00.000Z",
- "metric.topic": "foo",
- "value": 9741
}, - {
- "timestamp": "2019-10-17T20:18:00.000Z",
- "metric.topic": "foo",
- "value": 9246
}, - {
- "timestamp": "2019-10-17T20:17:00.000Z",
- "metric.topic": "bar",
- "value": 844.1
}, - {
- "timestamp": "2019-10-17T20:18:00.000Z",
- "metric.topic": "bar",
- "value": 821.1
}
]
}
Export current metric values in OpenMetrics format or Prometheus format, suitable for import into an external monitoring system. Returns the single most recent data point for each metric, for each distinct combination of labels.
By default only the cloud and cloud-custom datasets are supported for this endpoint.
Some metrics and labels within the datasets may not be exportable. To request a particular metric or label be added, please contact Confluent Support.
Metric and label names are translated to adhere to Prometheus restrictions.
The resource. and metric. prefixes from label names are also dropped to simplify consumption in downstream systems.
Counter metrics are classified as the Prometheus gauge type to conform to required semantics: the counter type in Prometheus must be monotonically increasing, whereas Confluent Metrics API counters are represented as deltas.
To account for metric data latency, this endpoint returns metrics from the current timestamp minus a fixed offset. The current offset is 5 minutes rounded down to the start of the minute. For example, if a request is received at 12:06:41, the returned metrics will have the timestamp 12:01:00 and represent the data for the interval 12:01:00 through 12:02:00 (exclusive).
NOTE: Confluent may choose to lengthen or shorten this offset based on operational considerations. Doing so is considered a backwards-compatible change.
To accommodate this offset, the timestamps in the response should be honored when importing the metrics. For example, in Prometheus this can be controlled using the honor_timestamps flag.
Since metrics are available at minute granularity, it is expected that clients scrape this endpoint at most once per minute. To allow for ad-hoc testing, the rate limit is enforced at hourly granularity. To accommodate retries, the rate limit is 80 requests per hour rather than 60 per hour.
The rate limit is evaluated on a per-resource basis. For example, the following requests would each be allowed an 80-requests-per-hour rate:
GET /v2/metrics/cloud/export?resource.kafka.id=lkc-1&resource.kafka.id=lkc-2
GET /v2/metrics/cloud/export?resource.kafka.id=lkc-3
Rate limits for this endpoint are also scoped to the authentication principal. This allows multiple systems to export metrics for the same resources by configuring each with a separate service account.
If the rate limit is exceeded, the response body will include a message indicating which resource exceeded the limit.
{
"errors": [
{
"status": "429",
"detail": "Too many requests have been made for the following resources: kafka.id:lkc-12345. Please see the documentation for current rate limits."
}
]
}
To ensure values are returned in a timely fashion, the number of data points returned per resource type, per metric is limited to 30000. If you have more than 30000 unique combinations of labels, the response will be truncated to return the first 30000 data points.
For example, suppose a request is made for 4 Kafka clusters (resource type 1) and 3 Kafka connectors (resource type 2), where the clusters have 8000 topics and there are 5 metrics with only the topic label:
The 4 clusters translate to 32000 unique label combinations per metric (capped to 30000), which in turn translates to 150,000 data points for the 5 metrics.
The 3 Kafka connectors translate to 9 data points for their 3 metrics.
The data points returned for this request total 150,009.
In this case, we would recommend making separate scrape jobs of up to 3 Kafka clusters each in order to stay within the return limits for a resource type.
Here is an example prometheus configuration for scraping this endpoint:
scrape_configs:
- job_name: Confluent Cloud
scrape_interval: 1m
scrape_timeout: 1m
honor_timestamps: true
static_configs:
- targets:
- api.telemetry.confluent.cloud
scheme: https
basic_auth:
username: <Cloud API Key>
password: <Cloud API Secret>
metrics_path: /v2/metrics/cloud/export
params:
"resource.kafka.id":
- lkc-1
- lkc-2
Using the ignore_failed_metrics parameter, you can get a partial response consisting of successful metrics. Unsuccessful metrics are ignored if the failure count is below the configured failure threshold percentage; otherwise an error is returned.
If a partial response is returned, it contains an additional StateSet metric, export_status, with one sample for each metric. If the associated metric fetch was unsuccessful, the value of the sample for that metric will be 1; otherwise it will be 0. Each sample has a metric label and a resource label, which denote the name of the metric for which the status is shown and the name of the resource to which the metric belongs, respectively.
A sample partial response is shown in the Responses section, in which two metrics were requested and one of them was unsuccessful.
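An illustrative sketch of an export request that uses this parameter (assuming the Python requests library; the cluster IDs and credentials are placeholders, and the parameter names follow the table below):

import requests

# Export metrics for two Kafka clusters, tolerating partial failures.
resp = requests.get(
    "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export",
    auth=("<Cloud API Key>", "<Cloud API Secret>"),
    params=[
        ("resource.kafka.id", "lkc-1"),
        ("resource.kafka.id", "lkc-2"),
        ("ignore_failed_metrics", "true"),
    ],
    timeout=60,
)
resp.raise_for_status()
# The body is OpenMetrics/Prometheus exposition text; on a partial success it
# also contains the export_status StateSet metric described above.
print(resp.text)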
dataset required | string (Dataset) The dataset to export metrics for. |
resource.kafka.id | Array of strings The ID of the Kafka cluster to export metrics for. This parameter can be specified multiple times (e.g. |
resource.connector.id | Array of strings The ID of the Connector to export metrics for. This parameter can be specified multiple times. |
resource.custom_connector.id | Array of strings The ID of the custom Connector to export metrics for. This parameter can be specified multiple times. |
resource.ksql.id | Array of strings The ID of the ksqlDB application to export metrics for. This parameter can be specified multiple times. |
resource.schema_registry.id | Array of strings The ID of the Schema Registry to export metrics for. This parameter can be specified multiple times. |
metric | Array of strings The metric to export. If this parameter is not specified, all metrics for the resource will be exported. This parameter can be specified multiple times. |
ignore_failed_metrics | boolean Ignore failed metrics and export only successful metrics if allowed failure threshold is not breached. If this parameter is set to true, a StateSet metric (export_status) will be included in the response to report which metrics were successful and which failed. |
# HELP confluent_kafka_server_received_bytes The delta count of bytes of the customer's data received from the network. Each sample is the number of bytes received since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_received_bytes gauge
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicA"} 10.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-1",topic="topicB"} 20.0 1609459200
confluent_kafka_server_received_bytes{kafka_id="lkc-2",topic="topicA"} 30.0 1609459200
# HELP confluent_kafka_server_sent_bytes The delta count of bytes of the customer's data sent to the network. Each sample is the number of bytes sent since the previous data sample. The count is sampled every 60 seconds.
# TYPE confluent_kafka_server_sent_bytes gauge
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicA"} 90.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-1",topic="topicB"} 80.0 1609459200
confluent_kafka_server_sent_bytes{kafka_id="lkc-2",topic="topicA"} 70.0 1609459200
Enumerates label values for a single metric.
dataset required | string (Dataset) The dataset to query. |
page_token | string The next page token. The token is returned by the previous request as part of |
metric | string The metric that the label values are enumerated for. |
group_by required | Array of strings non-empty The label(s) that the values are enumerated for. |
Field Filter (object) or Compound Filter (object) or Unary Filter (object) (Filter) Metric filter. | |
intervals | Array of strings <ISO-8601 interval (<start>/<end> | <start>/<duration> | <duration>/<end>)> (Interval) non-empty Defines the time range(s) for which available metrics will be listed. A time range is an ISO-8601 interval. When unspecified, the value defaults to the last hour before the request was made |
limit | integer [ 1 .. 1000 ] Default: 100 |
{- "metric": "io.confluent.kafka.server/sent_bytes",
- "group_by": [
- "metric.topic"
], - "filter": {
- "field": "resource.kafka.id",
- "op": "EQ",
- "value": "lkc-09d19"
}, - "limit": 3,
- "intervals": [
- "2019-10-16T16:30:20Z/2019-10-24T18:57:00Z"
]
}
{- "data": [
- {
- "metric.topic": "bar"
}, - {
- "metric.topic": "baz"
}, - {
- "metric.topic": "foo"
}
], - "meta": {
- "pagination": {
- "page_size": 3,
- "next_page_token": "dG9waWNC"
}
}
}