List
Retrieves the list of Kafka topics in the specified cluster.
- TypeScript

```typescript
import { cloudApi, serviceClients, Session } from "@yandex-cloud/nodejs-sdk";

const ListTopicsRequest = cloudApi.mdb.kafka_topic_service.ListTopicsRequest;

(async () => {
  const authToken = process.env["YC_OAUTH_TOKEN"];
  const session = new Session({ oauthToken: authToken });
  const client = session.client(serviceClients.TopicServiceClient);

  const result = await client.list(
    ListTopicsRequest.fromPartial({
      clusterId: "clusterId",
      // pageSize: 0,
      // pageToken: "pageToken"
    })
  );
  console.log(result);
})();
```
- Python

```python
import os

import grpc
import yandexcloud

from yandex.cloud.mdb.kafka.v1.topic_service_pb2 import ListTopicsRequest
from yandex.cloud.mdb.kafka.v1.topic_service_pb2_grpc import TopicServiceStub

token = os.getenv("YC_OAUTH_TOKEN")
sdk = yandexcloud.SDK(token=token)

service = sdk.client(TopicServiceStub)
response = service.List(
    ListTopicsRequest(
        cluster_id="clusterId",
        # page_size = 0,
        # page_token = "pageToken"
    )
)
print(response)
```
ListTopicsRequest
clusterId
: string
ID of the Apache Kafka® cluster to list topics in.
To get the cluster ID, make a ClusterService.List request; a minimal lookup sketch follows this field list.
pageSize
: int64
The maximum number of results per page to return.
If the number of available results is larger than page_size, the service returns a ListTopicsResponse.next_page_token that can be used to get the next page of results in subsequent list requests.
pageToken
: string
Page token.
To get the next page of results, set page_token to the ListTopicsResponse.next_page_token returned by the previous list request.
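If you do not already know the cluster ID, it can be looked up with a ClusterService.List call. Below is a minimal Python sketch, not a verbatim part of this method's reference; it assumes the cluster service modules mirror the topic service import paths used in the Python sample above and that the folder ID is provided in a YC_FOLDER_ID environment variable.

```python
import os

import yandexcloud

# Assumption: cluster service modules mirror the topic service layout above.
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2 import ListClustersRequest
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2_grpc import ClusterServiceStub

sdk = yandexcloud.SDK(token=os.getenv("YC_OAUTH_TOKEN"))
cluster_service = sdk.client(ClusterServiceStub)

# List the Apache Kafka clusters in the folder and print their IDs and names.
# YC_FOLDER_ID is an assumed environment variable holding the folder ID.
response = cluster_service.List(
    ListClustersRequest(folder_id=os.getenv("YC_FOLDER_ID"))
)
for cluster in response.clusters:
    print(cluster.id, cluster.name)
```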
ListTopicsResponse
topics
: Topic
List of Kafka topics.
nextPageToken
: string
This token allows you to get the next page of results for list requests.
If the number of results is larger than ListTopicsRequest.page_size, use the next_page_token as the value for the ListTopicsRequest.page_token parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results.
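Together, page_size, page_token, and next_page_token allow paging through all topics of a cluster. A minimal Python sketch of such a loop, reusing the service client from the Python sample above; "clusterId" is a placeholder for a real cluster ID.

```python
# Page through all topics of the cluster; the loop ends when the service
# returns an empty next_page_token.
page_token = ""
while True:
    response = service.List(
        ListTopicsRequest(
            cluster_id="clusterId",  # placeholder, substitute a real cluster ID
            page_size=100,
            page_token=page_token,
        )
    )
    for topic in response.topics:
        print(topic.name)
    if not response.next_page_token:
        break
    page_token = response.next_page_token
```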
Topic
A Kafka topic. For more information, see the Concepts - Topics and partitions section of the documentation.
name
: string
Name of the topic.
clusterId
: string
ID of an Apache Kafka® cluster that the topic belongs to.
To get the Apache Kafka® cluster ID, make a ClusterService.List request.
partitions
: google.protobuf.Int64Value
The number of the topic's partitions.
replicationFactor
: google.protobuf.Int64Value
The number of data copies (replicas) for the topic in the cluster.
One of topicConfig
User-defined settings for the topic.
topicConfig_2_8
: TopicConfig2_8
topicConfig_3
: TopicConfig3
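Because topic_config_2_8 and topic_config_3 form a oneof, at most one of them is populated on a returned Topic. A minimal Python sketch of inspecting it, assuming the oneof group is named topic_config in the proto and that topic is an element of ListTopicsResponse.topics:

```python
# Check which per-topic config variant the service returned.
# WhichOneof returns the populated field name, or None if neither is set;
# "topic_config" is the assumed oneof group name.
which = topic.WhichOneof("topic_config")
if which == "topic_config_3":
    config = topic.topic_config_3
elif which == "topic_config_2_8":
    config = topic.topic_config_2_8
else:
    config = None  # no user-defined settings returned
```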
TopicConfig2_8
Topic settings for Apache Kafka® 2.8.
CleanupPolicy
CLEANUP_POLICY_UNSPECIFIED
CLEANUP_POLICY_DELETE
This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig2_8.log_retention_ms and other similar parameters.
CLEANUP_POLICY_COMPACT
This policy compacts messages in the log.
CLEANUP_POLICY_COMPACT_AND_DELETE
This policy uses both compaction and deletion of messages and log segments.
cleanupPolicy
: CleanupPolicy
Retention policy to use on old log messages.
compressionType
: CompressionType
The compression type for a given topic.
deleteRetentionMs
: google.protobuf.Int64Value
The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.
fileDeleteDelayMs
: google.protobuf.Int64Value
The time to wait before deleting a file from the filesystem.
flushMessages
: google.protobuf.Int64Value
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_messages setting on the topic level.
flushMs
: google.protobuf.Int64Value
The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_ms setting on the topic level.
minCompactionLagMs
: google.protobuf.Int64Value
The minimum time in milliseconds a message will remain uncompacted in the log.
retentionBytes
: google.protobuf.Int64Value
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanup_policy is in effect.
It is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig2_8.log_retention_bytes setting on the topic level.
retentionMs
: google.protobuf.Int64Value
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig2_8.log_retention_ms setting on the topic level.
maxMessageBytes
: google.protobuf.Int64Value
The largest record batch size allowed in the topic.
minInsyncReplicas
: google.protobuf.Int64Value
This configuration specifies the minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").
segmentBytes
: google.protobuf.Int64Value
This configuration controls the segment file size for the log. Retention and cleaning are always done a file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig2_8.log_segment_bytes setting on the topic level.
preallocate
: google.protobuf.BoolValue
True if we should preallocate the file on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig2_8.log_preallocate setting on the topic level.
TopicConfig3
Topic settings for Apache Kafka® 3.x.
CleanupPolicy
CLEANUP_POLICY_UNSPECIFIED
CLEANUP_POLICY_DELETE
This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig3.log_retention_ms and other similar parameters.
CLEANUP_POLICY_COMPACT
This policy compacts messages in the log.
CLEANUP_POLICY_COMPACT_AND_DELETE
This policy uses both compaction and deletion of messages and log segments.
cleanupPolicy
: CleanupPolicy
Retention policy to use on old log messages.
compressionType
: CompressionType
The compression type for a given topic.
deleteRetentionMs
: google.protobuf.Int64Value
The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.
fileDeleteDelayMs
: google.protobuf.Int64Value
The time to wait before deleting a file from the filesystem.
flushMessages
: google.protobuf.Int64Value
The number of messages accumulated on a log partition before messages are flushed to disk.
This setting overrides the cluster-level KafkaConfig3.log_flush_interval_messages setting on the topic level.
flushMs
: google.protobuf.Int64Value
The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.
This setting overrides the cluster-level KafkaConfig3.log_flush_interval_ms setting on the topic level.
minCompactionLagMs
: google.protobuf.Int64Value
The minimum time in milliseconds a message will remain uncompacted in the log.
retentionBytes
: google.protobuf.Int64Value
The maximum size a partition can grow to before Kafka discards old log segments to free up space, if the delete cleanup_policy is in effect.
It is helpful if you need to control the size of the log due to limited disk space.
This setting overrides the cluster-level KafkaConfig3.log_retention_bytes setting on the topic level.
retentionMs
: google.protobuf.Int64Value
The number of milliseconds to keep a log segment's file before deleting it.
This setting overrides the cluster-level KafkaConfig3.log_retention_ms setting on the topic level.
maxMessageBytes
: google.protobuf.Int64Value
The largest record batch size allowed in the topic.
minInsyncReplicas
: google.protobuf.Int64Value
This configuration specifies the minimum number of replicas that must acknowledge a write to the topic for the write to be considered successful (when a producer sets acks to "all").
segmentBytes
: google.protobuf.Int64Value
This configuration controls the segment file size for the log. Retention and cleaning are always done a file at a time, so a larger segment size means fewer files but less granular control over retention.
This setting overrides the cluster-level KafkaConfig3.log_segment_bytes setting on the topic level.
preallocate
: google.protobuf.BoolValue
True if we should preallocate the file on disk when creating a new log segment.
This setting overrides the cluster-level KafkaConfig3.log_preallocate setting on the topic level.
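Most TopicConfig2_8 and TopicConfig3 fields use google.protobuf wrapper types (Int64Value, BoolValue), so their values are read through .value, and HasField shows whether a setting was returned at all. A minimal Python sketch, assuming config is the topic_config_3 message obtained as in the oneof example above:

```python
# Wrapper-typed settings expose their payload via .value; HasField() tells
# whether the wrapper message is present on the config at all.
if config.HasField("retention_ms"):
    print("retention.ms:", config.retention_ms.value)
if config.HasField("retention_bytes"):
    print("retention.bytes:", config.retention_bytes.value)
if config.HasField("preallocate"):
    print("preallocate:", config.preallocate.value)
```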