
types

Access

dataTransfer : bool

Allows access for DataTransfer.

AnytimeMaintenanceWindow

Cluster

An Apache Kafka® cluster resource. For more information, see the Concepts section of the documentation.

Environment

  • ENVIRONMENT_UNSPECIFIED

  • PRODUCTION

    Stable environment with a conservative update policy: only hotfixes are applied during regular maintenance.

  • PRESTABLE

    Environment with a more aggressive update policy: new versions are rolled out irrespective of backward compatibility.

Health

  • HEALTH_UNKNOWN

    State of the cluster is unknown (Host.health of all hosts in the cluster is UNKNOWN).

  • ALIVE

    Cluster is alive and well (Host.health of all hosts in the cluster is ALIVE).

  • DEAD

    Cluster is inoperable (Host.health of all hosts in the cluster is DEAD).

  • DEGRADED

    Cluster is in a degraded state (Host.health of at least one of the hosts in the cluster is not ALIVE).
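The aggregation rules above can be sketched as a small helper. This is a hypothetical client-side illustration, not part of the API; it assumes `Host.health` values arrive as plain strings:

```python
def aggregate_cluster_health(host_healths: list) -> str:
    """Derive cluster Health from Host.health values, per the enum rules above."""
    if host_healths and all(h == "ALIVE" for h in host_healths):
        return "ALIVE"
    if host_healths and all(h == "DEAD" for h in host_healths):
        return "DEAD"
    if host_healths and all(h == "UNKNOWN" for h in host_healths):
        return "HEALTH_UNKNOWN"
    if any(h != "ALIVE" for h in host_healths):
        # At least one host is not ALIVE (and not all are DEAD/UNKNOWN).
        return "DEGRADED"
    return "HEALTH_UNKNOWN"
```

Note that the all-DEAD and all-UNKNOWN checks must run before the DEGRADED check, since those states also satisfy "at least one host is not ALIVE".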

Status

  • STATUS_UNKNOWN

    Cluster state is unknown.

  • CREATING

    Cluster is being created.

  • RUNNING

    Cluster is running normally.

  • ERROR

    Cluster encountered a problem and cannot operate.

  • UPDATING

    Cluster is being updated.

  • STOPPING

    Cluster is stopping.

  • STOPPED

    Cluster stopped.

  • STARTING

    Cluster is starting.

id : string

ID of the Apache Kafka® cluster. This ID is assigned at creation time.

folderId : string

ID of the folder that the Apache Kafka® cluster belongs to.

createdAt : google.protobuf.Timestamp

Creation timestamp.

name : string

Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]*.
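The name constraints above (1-63 characters, matching `[a-zA-Z0-9_-]*`) can be checked client-side before issuing a request. A minimal sketch, with a hypothetical helper name:

```python
import re

# Documented constraint: value must match [a-zA-Z0-9_-]* and be 1-63 chars long.
NAME_RE = re.compile(r"[a-zA-Z0-9_-]*")

def is_valid_cluster_name(name: str) -> bool:
    return 1 <= len(name) <= 63 and NAME_RE.fullmatch(name) is not None
```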

description : string

Description of the Apache Kafka® cluster. 0-256 characters long.

labels : string

Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed.

environment : Environment

Deployment environment of the Apache Kafka® cluster.

monitoring : Monitoring

Description of monitoring systems relevant to the Apache Kafka® cluster.

  • This field is ignored in the response of the List method.

config : ConfigSpec

Configuration of the Apache Kafka® cluster.

  • This field is ignored in the response of the List method.

networkId : string

ID of the network that the cluster belongs to.

health : Health

Aggregated cluster health.

status : Status

Current state of the cluster.

securityGroupIds : string

IDs of user security groups.

hostGroupIds : string

Host groups hosting VMs of the cluster.

deletionProtection : bool

Deletion protection inhibits deletion of the cluster.

maintenanceWindow : MaintenanceWindow

Window of maintenance operations.

plannedOperation : MaintenanceOperation

Scheduled maintenance operation.

ClusterConnection

alias : string

Alias of cluster connection configuration. Examples: source, target.

One of clusterConnection

Type of connection to Apache Kafka® cluster.

  • thisCluster : ThisCluster

    Connection configuration of the cluster the connector belongs to. As all credentials are already known, leave this parameter empty.

  • externalCluster : ExternalClusterConnection

    Configuration of connection to an external cluster with all the necessary credentials.

ClusterConnectionSpec

alias : string

Alias of cluster connection configuration. Examples: source, target.

  • thisCluster : ThisClusterSpec

    Connection configuration of the cluster the connector belongs to. As all credentials are already known, leave this parameter empty.

  • externalCluster : ExternalClusterConnectionSpec

    Configuration of connection to an external cluster with all the necessary credentials.

ConfigSpec

Kafka

resources : Resources

Resources allocated to Kafka brokers.

One of kafkaConfig

Kafka broker configuration.

  • kafkaConfig_2_8 : KafkaConfig2_8
  • kafkaConfig_3 : KafkaConfig3

Zookeeper

resources : Resources

Resources allocated to ZooKeeper hosts.

KRaft

resources : Resources

Resources allocated to KRaft controller hosts.

RestAPIConfig

enabled : bool

Whether the REST API is enabled for this cluster.

version : string

Version of Apache Kafka® used in the cluster. Possible values: 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6.

kafka : Kafka

Configuration and resource allocation for Kafka brokers.

zookeeper : Zookeeper

Configuration and resource allocation for ZooKeeper hosts.

zoneId : string

IDs of availability zones where Kafka brokers reside.

brokersCount : google.protobuf.Int64Value

The number of Kafka brokers deployed in each availability zone.

assignPublicIp : bool

The flag that defines whether a public IP address is assigned to the cluster. If true, the Apache Kafka® cluster is available on the Internet via its public IP address.

unmanagedTopics : bool

Allows topic management via the Admin API. Deprecated: the feature is now enabled permanently.

schemaRegistry : bool

Enables the managed Schema Registry on the cluster.

access : Access

Access policy for external services.

restApiConfig : RestAPIConfig

Configuration of REST API.

diskSizeAutoscaling : DiskSizeAutoscaling

Disk size autoscaling settings.

kraft : KRaft

Configuration and resource allocation for KRaft-controller hosts.

Connector

Health

  • HEALTH_UNKNOWN

    Health of the connector is unknown.

  • ALIVE

    Connector is running.

  • DEAD

    Connector has failed to start.

Status

  • STATUS_UNKNOWN

    Connector state is unknown.

  • RUNNING

    Connector is running normally.

  • ERROR

    Connector has encountered a problem and cannot operate.

  • PAUSED

    Connector is paused.

name : string

Name of the connector.

tasksMax : google.protobuf.Int64Value

Maximum number of connector tasks. Default value is the number of brokers.

properties : string

A set of properties passed to Managed Service for Apache Kafka® with the connector configuration. Example: sync.topics.config.enabled: true.

health : Health

Connector health.

status : Status

Current status of the connector.

clusterId : string

ID of the Apache Kafka® cluster that the connector belongs to.

One of connectorConfig

Additional settings for the connector.

  • connectorConfigMirrormaker : ConnectorConfigMirrorMaker

    Configuration of the MirrorMaker connector.

  • connectorConfigS3Sink : ConnectorConfigS3Sink

    Configuration of S3-Sink connector.

ConnectorConfigMirrorMaker

sourceCluster : ClusterConnection

Source cluster connection configuration.

targetCluster : ClusterConnection

Target cluster connection configuration.

topics : string

List of Kafka topics, separated by ,.

replicationFactor : google.protobuf.Int64Value

Replication factor for automatically created topics.

ConnectorConfigMirrorMakerSpec

sourceCluster : ClusterConnectionSpec

Source cluster configuration for the MirrorMaker connector.

targetCluster : ClusterConnectionSpec

Target cluster configuration for the MirrorMaker connector.

topics : string

List of Kafka topics, separated by ,.

replicationFactor : google.protobuf.Int64Value

Replication factor for automatically created topics.

ConnectorConfigS3Sink

An Apache Kafka® S3-Sink connector resource.

topics : string

List of Kafka topics, separated by ','.

fileCompressionType : string

The compression type used for files put in S3 storage. The supported values are: gzip, snappy, zstd, none. Optional, the default is none.

fileMaxRecords : google.protobuf.Int64Value

Max records per file.

s3Connection : S3Connection

Credentials for connecting to S3 storage.

ConnectorConfigS3SinkSpec

Specification for Kafka S3-Sink Connector.

topics : string

List of Kafka topics, separated by ','.

fileCompressionType : string

The compression type used for files put in S3 storage. The supported values are: gzip, snappy, zstd, none. Optional, the default is none.

fileMaxRecords : google.protobuf.Int64Value

Max records per file.

s3Connection : S3ConnectionSpec

Credentials for connecting to S3 storage.

ConnectorSpec

An object that represents an Apache Kafka® connector.

See the documentation for details.

name : string

Name of the connector.

tasksMax : google.protobuf.Int64Value

Maximum number of connector tasks. Default value is the number of brokers.

properties : string

A set of properties passed to Managed Service for Apache Kafka® with the connector configuration. Example: sync.topics.config.enabled: true.

  • connectorConfigMirrormaker : ConnectorConfigMirrorMakerSpec

    Configuration of the MirrorMaker connector.

  • connectorConfigS3Sink : ConnectorConfigS3SinkSpec

    Configuration of S3-Sink connector.

DiskSizeAutoscaling

plannedUsageThreshold : int64

Threshold of storage usage (in percent) that triggers automatic scaling of the storage during the maintenance window. A zero value means the threshold is disabled.

emergencyUsageThreshold : int64

Threshold of storage usage (in percent) that triggers immediate automatic scaling of the storage. A zero value means the threshold is disabled.

diskSizeLimit : int64

New storage size (in bytes) that is set when one of the thresholds is reached.
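The two-threshold behavior described above can be sketched as a small decision helper. This is a hypothetical illustration of the documented semantics, not service code; the return strings are made-up labels:

```python
def autoscaling_action(used_bytes: int, total_bytes: int,
                       planned_pct: int, emergency_pct: int) -> str:
    """Which scaling (if any) the thresholds would trigger.
    A zero threshold means it is disabled."""
    usage_pct = used_bytes / total_bytes * 100
    if emergency_pct and usage_pct >= emergency_pct:
        return "scale_now"                        # immediate scaling
    if planned_pct and usage_pct >= planned_pct:
        return "scale_in_maintenance_window"      # deferred scaling
    return "none"
```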

ExternalClusterConnection

bootstrapServers : string

List of bootstrap servers of the cluster, separated by ,.
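Since `bootstrapServers` is a single comma-separated string, a client typically splits it into `host:port` pairs before use. A minimal sketch with a hypothetical helper name:

```python
def parse_bootstrap_servers(servers: str) -> list:
    """Split the comma-separated bootstrapServers string into (host, port) pairs."""
    pairs = []
    for entry in servers.split(","):
        # rpartition tolerates hostnames that themselves contain no colon
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs
```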

saslUsername : string

SASL username to use for connection to the cluster.

saslMechanism : string

SASL mechanism to use for connection to the cluster.

securityProtocol : string

Security protocol to use for connection to the cluster.

ExternalClusterConnectionSpec

bootstrapServers : string

List of bootstrap servers of the cluster, separated by ,.

saslUsername : string

SASL username to use for connection to the cluster.

saslPassword : string

SASL password to use for connection to the cluster.

saslMechanism : string

SASL mechanism to use for connection to the cluster.

securityProtocol : string

Security protocol to use for connection to the cluster.

sslTruststoreCertificates : string

CA certificate in PEM format used to connect to the external cluster. Certificate lines are separated by the '\n' symbol.

ExternalS3Storage

accessKeyId : string
endpoint : string
region : string

Default is 'us-east-1'.

ExternalS3StorageSpec

accessKeyId : string
secretAccessKey : string
endpoint : string
region : string

Default is 'us-east-1'.

Host

Cluster host metadata.

Role

  • ROLE_UNSPECIFIED

    Role of the host is unspecified. Default value.

  • KAFKA

    The host is a Kafka broker.

  • ZOOKEEPER

    The host is a ZooKeeper server.

Health

  • UNKNOWN

    Health of the host is unknown. Default value.

  • ALIVE

    The host is performing all its functions normally.

  • DEAD

    The host is inoperable and cannot perform any of its essential functions.

  • DEGRADED

    The host is degraded and can perform only some of its essential functions.

name : string

Name of the host.

clusterId : string

ID of the Apache Kafka® cluster.

zoneId : string

ID of the availability zone where the host resides.

role : Role

Host role. If the field has default value, it is not returned in the response.

resources : Resources

Computational resources allocated to the host.

health : Health

Aggregated host health data. If the field has default value, it is not returned in the response.

subnetId : string

ID of the subnet the host resides in.

assignPublicIp : bool

The flag that defines whether a public IP address is assigned to the node.

If the value is true, then this node is available on the Internet via its public IP address.

KafkaConfig2_8

Kafka version 2.8 broker configuration.

compressionType : CompressionType

Cluster topics compression type.

logFlushIntervalMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting.

logFlushIntervalMs : google.protobuf.Int64Value

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting.

logFlushSchedulerIntervalMs : google.protobuf.Int64Value

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

logRetentionBytes : google.protobuf.Int64Value

Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting.

logRetentionHours : google.protobuf.Int64Value

The number of hours to keep a log segment file before deleting it.

logRetentionMinutes : google.protobuf.Int64Value

The number of minutes to keep a log segment file before deleting it.

If not set, the value of log_retention_hours is used.

logRetentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of log_retention_minutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting.
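The fallback chain described above (log_retention_ms → log_retention_minutes → log_retention_hours) can be expressed as a small resolver. A hypothetical sketch, assuming unset fields arrive as `None`:

```python
def effective_retention_ms(retention_ms=None, retention_minutes=None,
                           retention_hours=None):
    """Resolve effective log retention in ms with the documented fallback order."""
    if retention_ms is not None:
        return retention_ms
    if retention_minutes is not None:
        return retention_minutes * 60 * 1000
    if retention_hours is not None:
        return retention_hours * 60 * 60 * 1000
    return None  # no retention limit configured at this level
```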

logSegmentBytes : google.protobuf.Int64Value

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting.

logPreallocate : google.protobuf.BoolValue

Whether to preallocate the file on disk when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting.

socketSendBufferBytes : google.protobuf.Int64Value

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

socketReceiveBufferBytes : google.protobuf.Int64Value

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

autoCreateTopicsEnable : google.protobuf.BoolValue

Enables automatic topic creation on the server.

numPartitions : google.protobuf.Int64Value

Default number of partitions per topic across the whole cluster.

defaultReplicationFactor : google.protobuf.Int64Value

Default replication factor for topics across the whole cluster.

messageMaxBytes : google.protobuf.Int64Value

The largest record batch size allowed by Kafka. Default value: 1048588.

replicaFetchMaxBytes : google.protobuf.Int64Value

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

sslCipherSuites : string

A list of cipher suites.

offsetsRetentionMinutes : google.protobuf.Int64Value

Offset storage time after a consumer group loses all its consumers. Default: 10080.

saslEnabledMechanisms : SaslMechanism

The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.

KafkaConfig3

Kafka version 3.x broker configuration.

compressionType : CompressionType

Cluster topics compression type.

logFlushIntervalMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting.

logFlushIntervalMs : google.protobuf.Int64Value

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting.

logFlushSchedulerIntervalMs : google.protobuf.Int64Value

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

logRetentionBytes : google.protobuf.Int64Value

Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting.

logRetentionHours : google.protobuf.Int64Value

The number of hours to keep a log segment file before deleting it.

logRetentionMinutes : google.protobuf.Int64Value

The number of minutes to keep a log segment file before deleting it.

If not set, the value of log_retention_hours is used.

logRetentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of log_retention_minutes is used.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting.

logSegmentBytes : google.protobuf.Int64Value

The maximum size of a single log file.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting.

logPreallocate : google.protobuf.BoolValue

Whether to preallocate the file on disk when creating a new log segment.

This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting.

socketSendBufferBytes : google.protobuf.Int64Value

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

socketReceiveBufferBytes : google.protobuf.Int64Value

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

autoCreateTopicsEnable : google.protobuf.BoolValue

Enables automatic topic creation on the server.

numPartitions : google.protobuf.Int64Value

Default number of partitions per topic across the whole cluster.

defaultReplicationFactor : google.protobuf.Int64Value

Default replication factor for topics across the whole cluster.

messageMaxBytes : google.protobuf.Int64Value

The largest record batch size allowed by Kafka. Default value: 1048588.

replicaFetchMaxBytes : google.protobuf.Int64Value

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

sslCipherSuites : string

A list of cipher suites.

offsetsRetentionMinutes : google.protobuf.Int64Value

Offset storage time after a consumer group loses all its consumers. Default: 10080.

saslEnabledMechanisms : SaslMechanism

The list of SASL mechanisms enabled in the Kafka server. Default: SCRAM_SHA_512.

MaintenanceOperation

info : string
delayedUntil : google.protobuf.Timestamp

MaintenanceWindow

One of policy

  • anytime : AnytimeMaintenanceWindow
  • weeklyMaintenanceWindow : WeeklyMaintenanceWindow

Monitoring

Metadata of monitoring system.

name : string

Name of the monitoring system.

description : string

Description of the monitoring system.

link : string

Link to the monitoring system charts for the Apache Kafka® cluster.

Permission

AccessRole

  • ACCESS_ROLE_UNSPECIFIED

  • ACCESS_ROLE_PRODUCER

    Producer role for the user.

  • ACCESS_ROLE_CONSUMER

    Consumer role for the user.

  • ACCESS_ROLE_ADMIN

    Admin role for the user.

topicName : string

Name or prefix-pattern with wildcard for the topic that the permission grants access to.

To get the topic name, make a TopicService.List request.

role : AccessRole

Access role type to grant to the user.

allowHosts : string

Lists the hosts allowed by this permission. Only IP addresses are allowed as values; when not defined, access from any host is allowed.

Bear in mind that the same host may appear in multiple permissions at the same time, so removing an individual permission does not automatically restrict access from that permission's allow_hosts. If the same host(s) are listed in another permission for the same principal/topic, the host(s) remain allowed.
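The union semantics described above can be sketched as follows. This is a hypothetical illustration of how effective host access combines across the permissions of one principal/topic, with made-up helper and key names mirroring the `allowHosts` field:

```python
def allowed_hosts_for(permissions: list):
    """Effective allowed hosts for one principal/topic: the union of allowHosts
    across its permissions. Returns None to mean 'any host', since a permission
    without allowHosts allows access from any host."""
    hosts = set()
    for perm in permissions:
        allow = perm.get("allowHosts")
        if not allow:
            return None  # one unrestricted permission opens access from any host
        hosts.update(allow)
    return hosts
```

This is why deleting a single permission does not necessarily cut off a host: the host stays allowed as long as any remaining permission still lists it.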

ResourcePreset

A ResourcePreset resource for describing hardware configuration presets.

id : string

ID of the resource preset.

zoneIds : string

IDs of availability zones where the resource preset is available.

cores : int64

Number of CPU cores for a Kafka broker created with the preset.

memory : int64

RAM volume for a Kafka broker created with the preset, in bytes.

Resources

resourcePresetId : string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

diskSize : int64

Volume of the storage available to a host, in bytes. Must be greater than 2 × partition segment size (in bytes) × partition count, so that each partition can have one active segment file and one closed segment file that can be deleted.
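The constraint above amounts to a simple lower bound on `diskSize`. A hypothetical helper for pre-flight checks:

```python
def min_disk_size_bytes(segment_bytes: int, partitions: int) -> int:
    """Lower bound for diskSize: each partition needs room for one active
    segment plus one closed segment awaiting deletion."""
    return 2 * segment_bytes * partitions
```

For example, with the default 1 GiB segment size and 12 partitions, the disk must exceed 2 × 1 GiB × 12 = 24 GiB.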

diskTypeId : string

Type of the storage environment for the host.

S3Connection

S3Connection resource: settings for connecting to AWS-compatible S3 storage that serves as the source or target of Kafka S3 connectors. YC Object Storage is AWS-compatible.

bucketName : string

One of storage

  • externalS3 : ExternalS3Storage

S3ConnectionSpec

Specification for S3Connection: settings for connecting to AWS-compatible S3 storage that serves as the source or target of Kafka S3 connectors. YC Object Storage is AWS-compatible.

bucketName : string
  • externalS3 : ExternalS3StorageSpec

ThisCluster

ThisClusterSpec

Topic

A Kafka topic. For more information, see the Concepts - Topics and partitions section of the documentation.

name : string

Name of the topic.

clusterId : string

ID of an Apache Kafka® cluster that the topic belongs to.

To get the Apache Kafka® cluster ID, make a ClusterService.List request.

partitions : google.protobuf.Int64Value

The number of the topic's partitions.

replicationFactor : google.protobuf.Int64Value

Number of data copies (replicas) for the topic in the cluster.

One of topicConfig

User-defined settings for the topic.

  • topicConfig_2_8 : TopicConfig2_8
  • topicConfig_3 : TopicConfig3

TopicConfig2_8

Topic settings for Kafka 2.8.

CleanupPolicy

  • CLEANUP_POLICY_UNSPECIFIED

  • CLEANUP_POLICY_DELETE

    This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig2_8.log_retention_ms and other similar parameters.

  • CLEANUP_POLICY_COMPACT

    This policy compacts messages in log.

  • CLEANUP_POLICY_COMPACT_AND_DELETE

    This policy uses both compaction and deletion for messages and log segments.

cleanupPolicy : CleanupPolicy

Retention policy to use on old log messages.

compressionType : CompressionType

The compression type for a given topic.

deleteRetentionMs : google.protobuf.Int64Value

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

fileDeleteDelayMs : google.protobuf.Int64Value

The time to wait before deleting a file from the filesystem.

flushMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_messages setting on the topic level.

flushMs : google.protobuf.Int64Value

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_ms setting on the topic level.

minCompactionLagMs : google.protobuf.Int64Value

The minimum time in milliseconds a message will remain uncompacted in the log.

retentionBytes : google.protobuf.Int64Value

The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanup_policy is in effect. It is helpful if you need to control the size of a log due to limited disk space.

This setting overrides the cluster-level KafkaConfig2_8.log_retention_bytes setting on the topic level.

retentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level KafkaConfig2_8.log_retention_ms setting on the topic level.

maxMessageBytes : google.protobuf.Int64Value

The largest record batch size allowed in topic.

minInsyncReplicas : google.protobuf.Int64Value

This configuration specifies the minimum number of replicas that must acknowledge a write to topic for the write to be considered successful (when a producer sets acks to "all").
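The interaction between `acks` and `min.insync.replicas` described above can be sketched as a small predicate. This is a hypothetical illustration of the documented semantics, not broker code:

```python
def write_accepted(acks: str, in_sync_replicas: int,
                   min_insync_replicas: int) -> bool:
    """With acks='all', a write succeeds only while the current in-sync replica
    count is at least min.insync.replicas; other acks values ignore it."""
    if acks == "all":
        return in_sync_replicas >= min_insync_replicas
    return True
```

For instance, with replication factor 3 and `min.insync.replicas=2`, the topic tolerates one replica being down while still accepting `acks=all` writes.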

segmentBytes : google.protobuf.Int64Value

This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level KafkaConfig2_8.log_segment_bytes setting on the topic level.

preallocate : google.protobuf.BoolValue

True if we should preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level KafkaConfig2_8.log_preallocate setting on the topic level.

TopicConfig3

Topic settings for Kafka 3.x.

CleanupPolicy

  • CLEANUP_POLICY_UNSPECIFIED

  • CLEANUP_POLICY_DELETE

    This policy discards log segments when either their retention time or log size limit is reached. See also: KafkaConfig3.log_retention_ms and other similar parameters.

  • CLEANUP_POLICY_COMPACT

    This policy compacts messages in log.

  • CLEANUP_POLICY_COMPACT_AND_DELETE

    This policy uses both compaction and deletion for messages and log segments.

cleanupPolicy : CleanupPolicy

Retention policy to use on old log messages.

compressionType : CompressionType

The compression type for a given topic.

deleteRetentionMs : google.protobuf.Int64Value

The amount of time in milliseconds to retain delete tombstone markers for log compacted topics.

fileDeleteDelayMs : google.protobuf.Int64Value

The time to wait before deleting a file from the filesystem.

flushMessages : google.protobuf.Int64Value

The number of messages accumulated on a log partition before messages are flushed to disk.

This setting overrides the cluster-level KafkaConfig3.log_flush_interval_messages setting on the topic level.

flushMs : google.protobuf.Int64Value

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk.

This setting overrides the cluster-level KafkaConfig3.log_flush_interval_ms setting on the topic level.

minCompactionLagMs : google.protobuf.Int64Value

The minimum time in milliseconds a message will remain uncompacted in the log.

retentionBytes : google.protobuf.Int64Value

The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanup_policy is in effect. It is helpful if you need to control the size of a log due to limited disk space.

This setting overrides the cluster-level KafkaConfig3.log_retention_bytes setting on the topic level.

retentionMs : google.protobuf.Int64Value

The number of milliseconds to keep a log segment's file before deleting it.

This setting overrides the cluster-level KafkaConfig3.log_retention_ms setting on the topic level.

maxMessageBytes : google.protobuf.Int64Value

The largest record batch size allowed in topic.

minInsyncReplicas : google.protobuf.Int64Value

This configuration specifies the minimum number of replicas that must acknowledge a write to topic for the write to be considered successful (when a producer sets acks to "all").

segmentBytes : google.protobuf.Int64Value

This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.

This setting overrides the cluster-level KafkaConfig3.log_segment_bytes setting on the topic level.

preallocate : google.protobuf.BoolValue

True if we should preallocate the file on disk when creating a new log segment.

This setting overrides the cluster-level KafkaConfig3.log_preallocate setting on the topic level.

TopicSpec

name : string

Name of the topic.

partitions : google.protobuf.Int64Value

The number of the topic's partitions.

replicationFactor : google.protobuf.Int64Value

Number of copies (replicas) of the topic's data kept in the cluster.

  • topicConfig_2_8 : TopicConfig2_8
  • topicConfig_3 : TopicConfig3

UpdateConnectorConfigS3SinkSpec

Specification for updating the Kafka S3-Sink connector.

topics : string

List of Kafka topics, separated by ','.

fileMaxRecords : google.protobuf.Int64Value

Max records per file.

s3Connection : S3ConnectionSpec

Credentials for connecting to S3 storage.

UpdateConnectorSpec

tasksMax : google.protobuf.Int64Value

Maximum number of connector tasks to update.

properties : string

A set of new or changed properties to update for the connector. They are passed with the connector configuration to Managed Service for Apache Kafka®. Example: sync.topics.config.enabled: false.

  • connectorConfigMirrormaker : ConnectorConfigMirrorMakerSpec

    Configuration of the MirrorMaker connector.

  • connectorConfigS3Sink : UpdateConnectorConfigS3SinkSpec

    Update specification for S3-Sink Connector.

User

A Kafka user. For more information, see the Operations - Accounts section of the documentation.

name : string

Name of the Kafka user.

clusterId : string

ID of the Apache Kafka® cluster the user belongs to.

To get the Apache Kafka® cluster ID, make a ClusterService.List request.

permissions : Permission

Set of permissions granted to this user.

UserSpec

name : string

Name of the Kafka user.

password : string

Password of the Kafka user.

permissions : Permission

Set of permissions granted to the user.

WeeklyMaintenanceWindow

WeekDay

  • WEEK_DAY_UNSPECIFIED

  • MON

  • TUE

  • WED

  • THU

  • FRI

  • SAT

  • SUN

day : WeekDay
hour : int64

Hour of the day in UTC.
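Given a `WeeklyMaintenanceWindow` (a WeekDay plus an hour of the day in UTC), a client can compute the next window start. A hypothetical sketch using only the standard library:

```python
from datetime import datetime, timedelta, timezone

# Index order matches Python's datetime.weekday() (Monday == 0).
WEEKDAYS = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]

def next_window_start(now: datetime, day: str, hour: int) -> datetime:
    """Next start of the weekly maintenance window at or after `now` (UTC)."""
    now = now.astimezone(timezone.utc)
    days_ahead = (WEEKDAYS.index(day) - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if candidate < now:
        candidate += timedelta(days=7)  # window already passed this week
    return candidate
```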