List
Retrieves the list of Apache Kafka® clusters that belong to the specified folder.
- TypeScript
- Python
import { cloudApi, serviceClients, Session } from "@yandex-cloud/nodejs-sdk";

const ListClustersRequest =
  cloudApi.mdb.kafka_cluster_service.ListClustersRequest;

(async () => {
  const authToken = process.env["YC_OAUTH_TOKEN"];
  const session = new Session({ oauthToken: authToken });
  const client = session.client(serviceClients.KafkaClusterServiceClient);

  const result = await client.list(
    ListClustersRequest.fromPartial({
      folderId: "folderId",
      // pageSize: 0,
      // pageToken: "pageToken",
      // filter: "filter"
    })
  );
  console.log(result);
})();
import os

import grpc
import yandexcloud

from yandex.cloud.mdb.kafka.v1.cluster_service_pb2_grpc import ClusterServiceStub
from yandex.cloud.mdb.kafka.v1.cluster_service_pb2 import ListClustersRequest

token = os.getenv("YC_OAUTH_TOKEN")
sdk = yandexcloud.SDK(token=token)

service = sdk.client(ClusterServiceStub)
response = service.List(
    ListClustersRequest(
        folder_id="folderId",
        # page_size = 0,
        # page_token = "pageToken",
        # filter = "filter"
    )
)
print(response)
ListClustersRequest
folderId
: string
ID of the folder to list Apache Kafka® clusters in.
To get the folder ID, make a yandex.cloud.resourcemanager.v1.FolderService.List request.
pageSize
: int64
The maximum number of results per page to return.
If the number of available results is larger than page_size, the service returns a ListClustersResponse.next_page_token that can be used to get the next page of results in subsequent list requests.
pageToken
: string
Page token.
To get the next page of results, set page_token to the ListClustersResponse.next_page_token returned by the previous list request.
filter
: string
Filter support is not currently implemented. Any filters are ignored.
ListClustersResponse
clusters
: Cluster
List of Apache Kafka® clusters.
nextPageToken
: string
Token that allows you to get the next page of results for list requests.
If the number of results is larger than ListClustersRequest.page_size, use next_page_token as the value for the ListClustersRequest.page_token parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results.
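For example, the whole list can be read by passing next_page_token back as page_token until it comes back empty. A minimal sketch in Python, reusing service and ListClustersRequest from the example above; the page size of 100 is an arbitrary choice for this sketch.
page_token = ""
while True:
    response = service.List(
        ListClustersRequest(
            folder_id="folderId",
            page_size=100,        # arbitrary page size for this sketch
            page_token=page_token,
        )
    )
    for cluster in response.clusters:
        print(cluster.id, cluster.name)
    if not response.next_page_token:  # an empty token means this was the last page
        break
    page_token = response.next_page_token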
Cluster
An Apache Kafka® cluster resource. For more information, see the Concepts section of the documentation.
Environment
ENVIRONMENT_UNSPECIFIED
PRODUCTION
Stable environment with a conservative update policy: only hotfixes are applied during regular maintenance.
PRESTABLE
Environment with a more aggressive update policy: new versions are rolled out irrespective of backward compatibility.
Health
HEALTH_UNKNOWN
State of the cluster is unknown (Host.health of all hosts in the cluster is UNKNOWN).
ALIVE
Cluster is alive and well (Host.health of all hosts in the cluster is ALIVE).
DEAD
Cluster is inoperable (Host.health of all hosts in the cluster is DEAD).
DEGRADED
Cluster is in a degraded state (Host.health of at least one of the hosts in the cluster is not ALIVE).
Status
STATUS_UNKNOWN
Cluster state is unknown.
CREATING
Cluster is being created.
RUNNING
Cluster is running normally.
ERROR
Cluster encountered a problem and cannot operate.
UPDATING
Cluster is being updated.
STOPPING
Cluster is stopping.
STOPPED
Cluster stopped.
STARTING
Cluster is starting.
id
: string
ID of the Apache Kafka® cluster. This ID is assigned at creation time.
folderId
: string
ID of the folder that the Apache Kafka® cluster belongs to.
createdAt
: google.protobuf.Timestamp
Creation timestamp.
name
: string
Name of the Apache Kafka® cluster.
The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]*.
description
: string
Description of the Apache Kafka® cluster. 0-256 characters long.
labels
: string
Custom labels for the Apache Kafka® cluster as key:value pairs.
A maximum of 64 labels per resource is allowed.
environment
: Environment
Deployment environment of the Apache Kafka® cluster.
monitoring
: Monitoring
Description of monitoring systems relevant to the Apache Kafka® cluster.
- The field is ignored in the response of the List method.
config
: ConfigSpec
Configuration of the Apache Kafka® cluster.
- The field is ignored in the response of the List method.
networkId
: string
ID of the network that the cluster belongs to.
health
: Health
Aggregated cluster health.
status
: Status
Current state of the cluster.
securityGroupIds
: string
User security groups.
hostGroupIds
: string
Host groups hosting VMs of the cluster.
deletionProtection
: bool
Deletion Protection inhibits deletion of the cluster.
maintenanceWindow
: MaintenanceWindow
Window of maintenance operations.
plannedOperation
: MaintenanceOperation
Scheduled maintenance operation.
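The Health and Status values above can be read off each cluster returned by List. A minimal sketch in Python, continuing the example above; the cluster_pb2 import path is an assumption that mirrors the cluster_service_pb2 path.
from yandex.cloud.mdb.kafka.v1.cluster_pb2 import Cluster  # assumed module path

for cluster in response.clusters:
    health = Cluster.Health.Name(cluster.health)    # e.g. ALIVE, DEGRADED, DEAD
    status = Cluster.Status.Name(cluster.status)    # e.g. RUNNING, STOPPED
    print(f"{cluster.name} ({cluster.id}): health={health}, status={status}")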
Monitoring
Metadata of a monitoring system.
name
: string
Name of the monitoring system.
description
: string
Description of the monitoring system.
link
: string
Link to the monitoring system charts for the Apache Kafka® cluster.
ConfigSpec
version
: string
Version of Apache Kafka® used in the cluster. Possible values: 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6.
kafka
: Kafka
Configuration and resource allocation for Kafka brokers.
zookeeper
: Zookeeper
Configuration and resource allocation for ZooKeeper hosts.
zoneId
: string
IDs of availability zones where Kafka brokers reside.
brokersCount
: google.protobuf.Int64Value
The number of Kafka brokers deployed in each availability zone.
assignPublicIp
: bool
The flag that defines whether a public IP address is assigned to the cluster.
If the value is true, the Apache Kafka® cluster is available on the Internet via its public IP address.
unmanagedTopics
: bool
Allows topics to be managed via the Admin API. Deprecated: the feature is now enabled permanently.
schemaRegistry
: bool
Enables the managed Schema Registry on the cluster.
access
: Access
Access policy for external services.
restApiConfig
: RestAPIConfig
Configuration of the REST API.
diskSizeAutoscaling
: DiskSizeAutoscaling
Disk size autoscaling settings.
kraft
: KRaft
Configuration and resource allocation for KRaft-controller hosts.
MaintenanceWindow
One of policy
anytime
: AnytimeMaintenanceWindow
weeklyMaintenanceWindow
: WeeklyMaintenanceWindow
MaintenanceOperation
info
: string
delayedUntil
: google.protobuf.Timestamp
Resources
resourcePresetId
: string
ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.
diskSize
: int64
Volume of the storage available to a host, in bytes. Must be greater than 2 × partition segment size (in bytes) × partition count, so that each partition can have one active segment file and one closed segment file that can be deleted (see the example below).
diskTypeId
: string
Type of the storage environment for the host.
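To make the bound on diskSize concrete, here is a small arithmetic sketch in Python; the segment size and partition count are made-up numbers, not service defaults.
segment_bytes = 1024 * 2**20          # log segment size per partition, e.g. 1 GiB (hypothetical)
partition_count = 50                  # total partitions hosted on the broker (hypothetical)
min_disk_size = 2 * segment_bytes * partition_count   # one active + one closed segment per partition
print(min_disk_size)                  # 107374182400 bytes, so disk_size must exceed ~100 GiB here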
KafkaConfig2_8
Kafka version 2.8 broker configuration.
compressionType
: CompressionType
Cluster topics compression type.
logFlushIntervalMessages
: google.protobuf.Int64Value
The number of messages accumulated on a log partition before messages are flushed to disk.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting.
logFlushIntervalMs
: google.protobuf.Int64Value
The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting.
logFlushSchedulerIntervalMs
: google.protobuf.Int64Value
The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.
logRetentionBytes
: google.protobuf.Int64Value
Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy (TopicConfig2_8.cleanup_policy) is in effect.
This setting is helpful if you need to control the size of a log due to limited disk space.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting.
logRetentionHours
: google.protobuf.Int64Value
The number of hours to keep a log segment file before deleting it.
logRetentionMinutes
: google.protobuf.Int64Value
The number of minutes to keep a log segment file before deleting it.
If not set, the value of log_retention_hours is used.
logRetentionMs
: google.protobuf.Int64Value
The number of milliseconds to keep a log segment file before deleting it.
If not set, the value of log_retention_minutes is used.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting.
logSegmentBytes
: google.protobuf.Int64Value
The maximum size of a single log file.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting.
logPreallocate
: google.protobuf.BoolValue
Whether to preallocate the file when creating a new log segment.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.
socketSendBufferBytes
: google.protobuf.Int64Value
The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socketReceiveBufferBytes
: google.protobuf.Int64Value
The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
autoCreateTopicsEnable
: google.protobuf.BoolValue
Enables automatic topic creation on the server.
numPartitions
: google.protobuf.Int64Value
Default number of partitions per topic across the whole cluster.
defaultReplicationFactor
: google.protobuf.Int64Value
Default replication factor for topics across the whole cluster.
messageMaxBytes
: google.protobuf.Int64Value
The largest record batch size allowed by Kafka. Default value: 1048588.
replicaFetchMaxBytes
: google.protobuf.Int64Value
The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
sslCipherSuites
: string
A list of cipher suites.
offsetsRetentionMinutes
: google.protobuf.Int64Value
Offset storage time (in minutes) after a consumer group loses all its consumers. Default: 10080.
saslEnabledMechanisms
: SaslMechanism
The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.
KafkaConfig3
Kafka version 3.x broker configuration.
compressionType
: CompressionType
Cluster topics compression type.
logFlushIntervalMessages
: google.protobuf.Int64Value
The number of messages accumulated on a log partition before messages are flushed to disk.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting.
logFlushIntervalMs
: google.protobuf.Int64Value
The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting.
logFlushSchedulerIntervalMs
: google.protobuf.Int64Value
The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.
logRetentionBytes
: google.protobuf.Int64Value
Partition size limit; Kafka will discard old log segments to free up space if the delete cleanup policy (TopicConfig3.cleanup_policy) is in effect.
This setting is helpful if you need to control the size of a log due to limited disk space.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting.
logRetentionHours
: google.protobuf.Int64Value
The number of hours to keep a log segment file before deleting it.
logRetentionMinutes
: google.protobuf.Int64Value
The number of minutes to keep a log segment file before deleting it.
If not set, the value of log_retention_hours is used.
logRetentionMs
: google.protobuf.Int64Value
The number of milliseconds to keep a log segment file before deleting it.
If not set, the value of log_retention_minutes is used.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting.
logSegmentBytes
: google.protobuf.Int64Value
The maximum size of a single log file.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting.
logPreallocate
: google.protobuf.BoolValue
Whether to preallocate the file when creating a new log segment.
This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.
socketSendBufferBytes
: google.protobuf.Int64Value
The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
socketReceiveBufferBytes
: google.protobuf.Int64Value
The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
autoCreateTopicsEnable
: google.protobuf.BoolValue
Enables automatic topic creation on the server.
numPartitions
: google.protobuf.Int64Value
Default number of partitions per topic across the whole cluster.
defaultReplicationFactor
: google.protobuf.Int64Value
Default replication factor for topics across the whole cluster.
messageMaxBytes
: google.protobuf.Int64Value
The largest record batch size allowed by Kafka. Default value: 1048588.
replicaFetchMaxBytes
: google.protobuf.Int64Value
The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.
sslCipherSuites
: string
A list of cipher suites.
offsetsRetentionMinutes
: google.protobuf.Int64Value
Offset storage time (in minutes) after a consumer group loses all its consumers. Default: 10080.
saslEnabledMechanisms
: SaslMechanism
The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.
Kafka
resources
: Resources
Resources allocated to Kafka brokers.
One of kafkaConfig
Kafka broker configuration.
kafkaConfig_2_8
: KafkaConfig2_8
kafkaConfig_3
: KafkaConfig3
Zookeeper
resources
: Resources
Resources allocated to ZooKeeper hosts.
Access
dataTransfer
: bool
Allow access for DataTransfer.
RestAPIConfig
enabled
: bool
Whether the REST API is enabled for this cluster.
DiskSizeAutoscaling
plannedUsageThreshold
: int64
Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A value of 0 disables the threshold.
emergencyUsageThreshold
: int64
Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A value of 0 disables the threshold.
diskSizeLimit
: int64
New storage size (in bytes) that is set when one of the thresholds is reached.
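As a worked illustration of how the three fields interact, the sketch below uses made-up numbers: scaling is triggered during the maintenance window at the planned threshold, immediately at the emergency threshold, and never grows the disk past disk_size_limit.
disk_size = 100 * 2**30                    # current storage size: 100 GiB (hypothetical)
planned_usage_threshold = 80               # percent; scale up during the maintenance window
emergency_usage_threshold = 90             # percent; scale up immediately
disk_size_limit = 200 * 2**30              # automatic scaling never exceeds this size

planned_trigger = disk_size * planned_usage_threshold // 100      # scaling planned once 80 GiB is used
emergency_trigger = disk_size * emergency_usage_threshold // 100  # immediate scaling once 90 GiB is used
print(planned_trigger, emergency_trigger, disk_size_limit)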
KRaft
resources
: Resources
Resources allocated to KRaft controller hosts.
AnytimeMaintenanceWindow
WeeklyMaintenanceWindow
WeekDay
WEEK_DAY_UNSPECIFIED
MON
TUE
WED
THU
FRI
SAT
SUN
day
: WeekDay
hour
: int64
Hour of the day in UTC.