
Update

Updates the specified Apache Kafka® cluster.

import {
  cloudApi,
  decodeMessage,
  serviceClients,
  Session,
  waitForOperation,
} from "@yandex-cloud/nodejs-sdk";

const Cluster = cloudApi.mdb.kafka_cluster.Cluster;
const UpdateClusterRequest =
  cloudApi.mdb.kafka_cluster_service.UpdateClusterRequest;

(async () => {
  const authToken = process.env["YC_OAUTH_TOKEN"];
  const session = new Session({ oauthToken: authToken });
  const client = session.client(serviceClients.KafkaClusterServiceClient);

  const operation = await client.update(
    UpdateClusterRequest.fromPartial({
      // clusterId: "clusterId",
      // updateMask: {
      //   paths: ["paths"]
      // },
      // description: "description",
      // labels: {"key": "labels"},
      // configSpec: {
      //   version: "version",
      //   kafka: {
      //     resources: {
      //       resourcePresetId: "resourcePresetId",
      //       diskSize: 0,
      //       diskTypeId: "diskTypeId"
      //     }
      //   },
      //   zookeeper: {
      //     resources: {
      //       resourcePresetId: "resourcePresetId",
      //       diskSize: 0,
      //       diskTypeId: "diskTypeId"
      //     }
      //   },
      //   zoneId: ["zoneId"],
      //   brokersCount: 0,
      //   assignPublicIp: true,
      //   schemaRegistry: true,
      //   access: {
      //     dataTransfer: true
      //   },
      //   restApiConfig: {
      //     enabled: true
      //   },
      //   diskSizeAutoscaling: {
      //     plannedUsageThreshold: 0,
      //     emergencyUsageThreshold: 0,
      //     diskSizeLimit: 0
      //   }
      // },
      // name: "name",
      // securityGroupIds: ["securityGroupIds"],
      // deletionProtection: true,
      // maintenanceWindow: {
      //   anytime: {}
      // },
      // networkId: "networkId",
      // subnetIds: ["subnetIds"]
    })
  );
  const finishedOp = await waitForOperation(operation, session);

  if (finishedOp.response) {
    const result = decodeMessage<typeof Cluster>(finishedOp.response);
    console.log(result);
  }
})();

UpdateClusterRequest

clusterId : string

ID of the Apache Kafka® cluster to update.

To get the Apache Kafka® cluster ID, make a ClusterService.List request.

updateMask : google.protobuf.FieldMask

Field mask that specifies which fields of the Apache Kafka® cluster should be updated.
description : string

New description of the Apache Kafka® cluster.

labels : string

Custom labels for the Apache Kafka® cluster as key:value pairs.

For example, "project": "mvp" or "source": "dictionary".

The new set of labels completely replaces the old one. To add a label, request the current set with the ClusterService.Get method, then send a ClusterService.Update request with the new label added to the set.
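Because the new set replaces the old one, adding a single label means merging it into the current set client-side first. A minimal sketch of the merge step (the label values are illustrative, and `currentLabels` stands in for a real ClusterService.Get response):

```typescript
// Labels as they would be returned by a ClusterService.Get call.
const currentLabels: Record<string, string> = { project: "mvp" };

// Merge the new label into the existing set; the full merged result is what
// gets sent in the Update request, since partial label updates are not supported.
const updatedLabels: Record<string, string> = {
  ...currentLabels,
  source: "dictionary",
};
```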

configSpec : ConfigSpec

New configuration and resources for hosts in the Apache Kafka® cluster.

Use update_mask to prevent reverting all cluster settings that are not listed in config_spec to their default values.
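For example, to change only the description and labels while leaving the rest of the cluster configuration untouched, list just those fields in the mask. A sketch of the request payload (the cluster ID is hypothetical; mask paths use the snake_case field names):

```typescript
// Only the fields listed in updateMask.paths are modified; fields of
// config_spec that are not listed would otherwise revert to defaults.
const updateRequest = {
  clusterId: "my-cluster-id", // hypothetical cluster ID
  updateMask: { paths: ["description", "labels"] },
  description: "Kafka cluster for the MVP project",
  labels: { project: "mvp" },
};
```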

name : string

New name for the Apache Kafka® cluster.

securityGroupIds : string

User security groups.

deletionProtection : bool

Deletion protection prevents the cluster from being deleted.

maintenanceWindow : MaintenanceWindow

New maintenance window settings for the cluster.

networkId : string

ID of the network to move the cluster to.

subnetIds : string

IDs of subnets where the hosts are located or where a new host will be created.

ConfigSpec

version : string

Version of Apache Kafka® used in the cluster. Possible values: 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6.

kafka : Kafka

Configuration and resource allocation for Kafka brokers.

zookeeper : Zookeeper

Configuration and resource allocation for ZooKeeper hosts.

zoneId : string

IDs of availability zones where Kafka brokers reside.

brokersCount : google.protobuf.Int64Value

The number of Kafka brokers deployed in each availability zone.

assignPublicIp : bool

The flag that defines whether a public IP address is assigned to the cluster. If the value is true, the Apache Kafka® cluster is available on the Internet via its public IP address.

unmanagedTopics : bool

Allows topic management via the Admin API. Deprecated: the feature is now enabled permanently.

schemaRegistry : bool

Enables managed Schema Registry on the cluster.

access : Access

Access policy for external services.

restApiConfig : RestAPIConfig

Configuration of REST API.

diskSizeAutoscaling : DiskSizeAutoscaling

DiskSizeAutoscaling settings

kraft : KRaft

Configuration and resource allocation for KRaft-controller hosts.

MaintenanceWindow

One of policy

  • anytime : AnytimeMaintenanceWindow
  • weeklyMaintenanceWindow : WeeklyMaintenanceWindow

Resources

resourcePresetId : string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

diskSize : int64

Volume of the storage available to a host, in bytes. Must be greater than 2 × partition segment size (in bytes) × partition count, so that each partition can have one active segment file and one closed segment file that can be deleted.
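A quick sanity check of that lower bound, assuming a 1 GiB segment size and 12 partitions on the host (both values are illustrative):

```typescript
const segmentBytes = 1024 ** 3; // log.segment.bytes: 1 GiB per segment
const partitionCount = 12;      // partitions hosted on this broker

// Each partition needs room for one active plus one closed segment.
const minDiskSize = 2 * segmentBytes * partitionCount;
// 2 * 1073741824 * 12 = 25769803776 bytes, i.e. 24 GiB
```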

diskTypeId : string

Type of the storage environment for the host.

KafkaConfig2_8

Kafka version 2.8 broker configuration.

compressionType : CompressionType

Cluster topics compression type.

logFlushIntervalMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting.

logFlushIntervalMs : google.protobuf.Int64Value

The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting.

logFlushSchedulerIntervalMs : google.protobuf.Int64Value

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

logRetentionBytes : google.protobuf.Int64Value

Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting.

logRetentionHours : google.protobuf.Int64Value

The number of hours to keep a log segment file before deleting it.

logRetentionMinutes : google.protobuf.Int64Value

The number of minutes to keep a log segment file before deleting it.

If not set, the value of log_retention_hours is used.

logRetentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of log_retention_minutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting.
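The three retention settings form a fallback chain: milliseconds take precedence, then minutes, then hours. A hypothetical helper illustrating the resolution order (not part of the SDK):

```typescript
interface RetentionConfig {
  logRetentionMs?: number;
  logRetentionMinutes?: number;
  logRetentionHours?: number;
}

// Resolve the effective retention in milliseconds, mirroring Kafka's
// ms -> minutes -> hours fallback order.
function effectiveRetentionMs(cfg: RetentionConfig): number | undefined {
  if (cfg.logRetentionMs !== undefined) return cfg.logRetentionMs;
  if (cfg.logRetentionMinutes !== undefined) return cfg.logRetentionMinutes * 60_000;
  if (cfg.logRetentionHours !== undefined) return cfg.logRetentionHours * 3_600_000;
  return undefined; // fall back to the broker default
}
```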

logSegmentBytes : google.protobuf.Int64Value

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting.

logPreallocate : google.protobuf.BoolValue

Whether to preallocate the file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.

socketSendBufferBytes : google.protobuf.Int64Value

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

socketReceiveBufferBytes : google.protobuf.Int64Value

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

autoCreateTopicsEnable : google.protobuf.BoolValue

Enables automatic topic creation on the server.

numPartitions : google.protobuf.Int64Value

Default number of partitions per topic across the whole cluster.

defaultReplicationFactor : google.protobuf.Int64Value

Default replication factor for topics across the whole cluster.

messageMaxBytes : google.protobuf.Int64Value

The largest record batch size allowed by Kafka. Default value: 1048588.

replicaFetchMaxBytes : google.protobuf.Int64Value

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

sslCipherSuites : string

A list of cipher suites.

offsetsRetentionMinutes : google.protobuf.Int64Value

Offset storage time after a consumer group loses all its consumers. Default: 10080.

saslEnabledMechanisms : SaslMechanism

The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.

KafkaConfig3

Kafka version 3.x broker configuration.

compressionType : CompressionType

Cluster topics compression type.

logFlushIntervalMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting.

logFlushIntervalMs : google.protobuf.Int64Value

The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting.

logFlushSchedulerIntervalMs : google.protobuf.Int64Value

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

logRetentionBytes : google.protobuf.Int64Value

Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting.

logRetentionHours : google.protobuf.Int64Value

The number of hours to keep a log segment file before deleting it.

logRetentionMinutes : google.protobuf.Int64Value

The number of minutes to keep a log segment file before deleting it.

If not set, the value of log_retention_hours is used.

logRetentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of log_retention_minutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting.

logSegmentBytes : google.protobuf.Int64Value

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting.

logPreallocate : google.protobuf.BoolValue

Whether to preallocate the file when creating a new segment.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.

socketSendBufferBytes : google.protobuf.Int64Value

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

socketReceiveBufferBytes : google.protobuf.Int64Value

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

autoCreateTopicsEnable : google.protobuf.BoolValue

Enables automatic topic creation on the server.

numPartitions : google.protobuf.Int64Value

Default number of partitions per topic across the whole cluster.

defaultReplicationFactor : google.protobuf.Int64Value

Default replication factor for topics across the whole cluster.

messageMaxBytes : google.protobuf.Int64Value

The largest record batch size allowed by Kafka. Default value: 1048588.

replicaFetchMaxBytes : google.protobuf.Int64Value

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

sslCipherSuites : string

A list of cipher suites.

offsetsRetentionMinutes : google.protobuf.Int64Value

Offset storage time after a consumer group loses all its consumers. Default: 10080.

saslEnabledMechanisms : SaslMechanism

The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.

Kafka

resources : Resources

Resources allocated to Kafka brokers.

One of kafkaConfig

Kafka broker configuration.

  • kafkaConfig_2_8 : KafkaConfig2_8
  • kafkaConfig_3 : KafkaConfig3

Zookeeper

resources : Resources

Resources allocated to ZooKeeper hosts.

Access

dataTransfer : bool

Allow access for DataTransfer.

RestAPIConfig

enabled : bool

Is REST API enabled for this cluster.

DiskSizeAutoscaling

plannedUsageThreshold : int64

Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. Zero value means disabled threshold.

emergencyUsageThreshold : int64

Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. Zero value means disabled threshold.

diskSizeLimit : int64

New storage size (in bytes) that is set when one of the thresholds is achieved.
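Putting the three fields together, a sketch of an autoscaling policy (the threshold and limit values are illustrative):

```typescript
const diskSizeAutoscaling = {
  plannedUsageThreshold: 80,      // grow during the maintenance window at 80% usage
  emergencyUsageThreshold: 90,    // grow immediately at 90% usage
  diskSizeLimit: 100 * 1024 ** 3, // upper bound for growth: 100 GiB, in bytes
};
```

Setting a threshold to zero disables that trigger.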

KRaft

resources : Resources

Resources allocated to KRaft controller hosts.

AnytimeMaintenanceWindow

WeeklyMaintenanceWindow

WeekDay
  • WEEK_DAY_UNSPECIFIED

  • MON

  • TUE

  • WED

  • THU

  • FRI

  • SAT

  • SUN

day : WeekDay

Day of the week.
hour : int64

Hour of the day in UTC.
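As a sketch, a weekly window on Mondays at 03:00 UTC would look like this in the request payload (the string day value stands in for the SDK's WeekDay enum):

```typescript
const maintenanceWindow = {
  weeklyMaintenanceWindow: {
    day: "MON", // WeekDay enum value
    hour: 3,    // hour of the day in UTC
  },
};
```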

Operation

An Operation resource. For more information, see Operation.

id : string

ID of the operation.

description : string

Description of the operation. 0-256 characters long.

createdAt : google.protobuf.Timestamp

Creation timestamp.

createdBy : string

ID of the user or service account who initiated the operation.

modifiedAt : google.protobuf.Timestamp

The time when the Operation resource was last modified.

done : bool

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

metadata : google.protobuf.Any

Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any.

One of result

The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true, exactly one of error or response is set.

  • error : google.rpc.Status

    The error result of the operation in case of failure or cancellation.

  • response : google.protobuf.Any
    The normal response of the operation in case of success.

    If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
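The branching described above can be summarized by a small helper over a plain operation-shaped object (hypothetical, for illustration only; the SDK returns typed Operation messages):

```typescript
interface OperationLike {
  done: boolean;
  error?: unknown;
  response?: unknown;
}

// Classify an operation according to the "one of result" rule:
// an error may be reported even before done == true; once done == true,
// exactly one of error or response is set.
function operationState(op: OperationLike): "in-progress" | "failed" | "succeeded" {
  if (op.error !== undefined) return "failed";
  if (!op.done) return "in-progress";
  return "succeeded";
}
```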