89 changes: 48 additions & 41 deletions docs/apis/_index.md
Original file line number Diff line number Diff line change
Expand Up @@ -43,13 +43,14 @@ The Producer API allows applications to send streams of data to topics in the Ka
Examples of using the producer are shown in the [javadocs](/{version}/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html "Kafka 4.3 Javadoc").

To use the producer, add the following Maven dependency to your project:


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>
```
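
With the dependency in place, a minimal producer can be sketched as follows. This is an illustrative sketch, not an official example: the broker address, topic, key, and value are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources ensures buffered records are flushed and the producer is closed
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    }
                });
        }
    }
}
```

Running this requires a reachable broker; see the javadoc linked above for the full producer API.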

# Consumer API

Expand All @@ -58,13 +59,14 @@ The Consumer API allows applications to read streams of data from topics in the
Examples of using the consumer are shown in the [javadocs](/{version}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html "Kafka 4.3 Javadoc").

To use the consumer, add the following Maven dependency to your project:


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>
```
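
A minimal consumer poll loop might look like the sketch below. The broker address, group id, and topic are placeholders, and a real application would add shutdown handling.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "my-group");                // placeholder consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                // Poll for new records, blocking for up to 100 ms
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```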

# Share Consumer API

Expand All @@ -73,13 +75,14 @@ The Share Consumer API enables applications in a share group to cooperatively co
Examples of using the share consumer are shown in the [javadocs](/{version}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaShareConsumer.html "Kafka 4.3 Javadoc").

To use the share consumer, add the following Maven dependency to your project:


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>
```
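
Usage parallels the classic consumer, as in the sketch below; the broker address, share group name, and topic are placeholders, and the acknowledgement comment reflects the default implicit mode. Consult the javadoc linked above for the exact acknowledgement semantics.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ShareConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "my-share-group");          // a share group, not a classic consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                // By default, records delivered in one poll are implicitly
                // acknowledged when the next poll() is issued
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(100))) {
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```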

# Streams API

Expand All @@ -90,13 +93,14 @@ Examples of using this library are shown in the [javadocs](/{version}/javadoc/in
Additional documentation on using the Streams API is available [here](/43/documentation/streams).

To use Kafka Streams, add the following Maven dependency to your project:


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>4.3.0</version>
</dependency>
```
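
A minimal topology, sketched below, reads from one topic, transforms each value, and writes to another. The application id, broker address, and topic names are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read from an input topic, upper-case each value, write to an output topic
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```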

When using Scala, you may optionally include the `kafka-streams-scala` library. Additional documentation on using the Kafka Streams DSL for Scala is available [in the developer guide](/43/documentation/streams/developer-guide/dsl-api.html#scala-dsl).

Expand All @@ -105,12 +109,14 @@ To use Kafka Streams DSL for Scala 2.13, add the following Maven dependency to y
> **⚠️ DEPRECATION NOTICE**: The `kafka-streams-scala` library is deprecated as of Kafka 4.3
> and will be removed in Kafka 5.0. Please migrate to using the Java Streams API directly from Scala.
> See the [migration guide](/{version}/streams/developer-guide/scala-migration) for details.

<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams-scala_2.13</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams-scala_2.13</artifactId>
<version>4.3.0</version>
</dependency>
```

# Connect API

Expand All @@ -125,12 +131,13 @@ Those who want to implement custom connectors can see the [javadoc](/{version}/j
The Admin API supports managing and inspecting topics, brokers, ACLs, and other Kafka objects.

To use the Admin API, add the following Maven dependency to your project:


<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>

```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>4.3.0</version>
</dependency>
```
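
The sketch below shows two common administrative tasks: creating a topic and listing topic names. The broker address, topic name, and partition/replication settings are placeholders.

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class AdminExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (Admin admin = Admin.create(props)) {
            // Create a topic with 3 partitions and replication factor 1
            admin.createTopics(List.of(new NewTopic("my-topic", 3, (short) 1))).all().get();

            // Print the names of all topics in the cluster
            admin.listTopics().names().get().forEach(System.out::println);
        }
    }
}
```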

For more information about the Admin APIs, see the [javadoc](/{version}/javadoc/index.html?org/apache/kafka/clients/admin/Admin.html "Kafka 4.3 Javadoc").
40 changes: 23 additions & 17 deletions docs/configuration/broker-configs.md
Expand Up @@ -47,34 +47,40 @@ From Kafka version 1.1 onwards, some of the broker configs can be updated withou
* `cluster-wide`: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.

To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):


$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2

```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2
```

To describe the current dynamic broker configs for broker id 0:


$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe

```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe
```

To delete a config override and revert to the statically configured or default value for broker id 0 (for example, the number of log cleaner threads):


$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads

To update the log level for a logger on broker id 0:
```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads
```

To update the log level for a logger on broker id 0:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --broker-logger 0 --add-config org.apache.kafka.server.quota.ClientQuotaManager\$ThrottledChannelReaper=DEBUG --alter
```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --broker-logger 0 --add-config org.apache.kafka.server.quota.ClientQuotaManager\$ThrottledChannelReaper=DEBUG --alter
```

Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster. All brokers in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:


$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2

```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
```

To describe the currently configured dynamic cluster-wide default configs:


$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe

```bash
$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe
```

All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing). If a config value is defined at different levels, the following order of precedence is used:

Expand Down
112 changes: 62 additions & 50 deletions docs/configuration/configuration-providers.md
Expand Up @@ -45,26 +45,29 @@ To use a configuration provider, specify it in your configuration using the `con
Configuration providers allow you to pass parameters and retrieve configuration data from various sources.

To specify configuration providers, you use a comma-separated list of aliases and the fully-qualified class names that implement the configuration providers:


config.providers=provider1,provider2
config.providers.provider1.class=com.example.Provider1
config.providers.provider2.class=com.example.Provider2

```properties
config.providers=provider1,provider2
config.providers.provider1.class=com.example.Provider1
config.providers.provider2.class=com.example.Provider2
```

Each provider can have its own set of parameters, which are passed in a specific format:


config.providers.<provider_alias>.param.<name>=<value>

```properties
config.providers.<provider_alias>.param.<name>=<value>
```

The `ConfigProvider` interface serves as a base for all configuration providers. Custom implementations of this interface can be created to retrieve configuration data from various sources. You can package the implementation as a JAR file, add the JAR to your classpath, and reference the provider's class in your configuration.

**Example custom provider configuration**


config.providers=customProvider
config.providers.customProvider.class=com.example.customProvider
config.providers.customProvider.param.param1=value1
config.providers.customProvider.param.param2=value2

```properties
config.providers=customProvider
config.providers.customProvider.class=com.example.customProvider
config.providers.customProvider.param.param1=value1
config.providers.customProvider.param.param2=value2
```
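
A custom implementation of the `ConfigProvider` interface might look like the sketch below. `MapConfigProvider` is a hypothetical class that simply serves back the parameters it was configured with; a real provider would fetch values from an external source.

```java
import java.io.IOException;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

// Hypothetical provider that serves values from its own configuration parameters
public class MapConfigProvider implements ConfigProvider {
    private Map<String, String> values;

    @Override
    public void configure(Map<String, ?> configs) {
        // Parameters declared as config.providers.<alias>.param.<name>=<value>
        // arrive here keyed by <name>
        values = configs.entrySet().stream()
            .collect(Collectors.toMap(Map.Entry::getKey, e -> String.valueOf(e.getValue())));
    }

    @Override
    public ConfigData get(String path) {
        // Return all known key/value pairs for this path
        return new ConfigData(values);
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        // Return only the requested keys
        return new ConfigData(values.entrySet().stream()
            .filter(e -> keys.contains(e.getKey()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));
    }

    @Override
    public void close() throws IOException {
        // No resources to release in this sketch
    }
}
```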

## DirectoryConfigProvider

Expand All @@ -75,16 +78,18 @@ Each file represents a key, and its content is the value. This provider is usefu
To restrict the files that the `DirectoryConfigProvider` can access, use the `allowed.paths` parameter. This parameter accepts a comma-separated list of paths that the provider is allowed to access. If not set, all paths are allowed.

**Example `DirectoryConfigProvider` configuration**


config.providers=dirProvider
config.providers.dirProvider.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
config.providers.dirProvider.param.allowed.paths=/path/to/dir1,/path/to/dir2

```properties
config.providers=dirProvider
config.providers.dirProvider.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
config.providers.dirProvider.param.allowed.paths=/path/to/dir1,/path/to/dir2
```

To reference a value supplied by the `DirectoryConfigProvider`, use the correct placeholder syntax:


${dirProvider:<path_to_file>:<file_name>}

```text
${dirProvider:<path_to_file>:<file_name>}
```
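
For example, if `/path/to/dir1` contains a file named `keystore-password`, its contents could be referenced as follows (the property name and paths are illustrative):

```properties
ssl.keystore.password=${dirProvider:/path/to/dir1:keystore-password}
```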

## EnvVarConfigProvider

Expand All @@ -97,16 +102,18 @@ This provider is useful for configuring applications running in containers, for
To restrict which environment variables the `EnvVarConfigProvider` can access, use the `allowlist.pattern` parameter. This parameter accepts a regular expression that environment variable names must match to be used by the provider.

**Example `EnvVarConfigProvider` configuration**


config.providers=envVarProvider
config.providers.envVarProvider.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.envVarProvider.param.allowlist.pattern=^MY_ENVAR1_.*

```properties
config.providers=envVarProvider
config.providers.envVarProvider.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.envVarProvider.param.allowlist.pattern=^MY_ENVAR1_.*
```

To reference a value supplied by the `EnvVarConfigProvider`, use the correct placeholder syntax:


${envVarProvider:<enVar_name>}

```text
${envVarProvider:<enVar_name>}
```
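
For example, with the allowlist pattern above, an environment variable named `MY_ENVAR1_DB_PASSWORD` could be referenced as follows (the property name is illustrative):

```properties
database.password=${envVarProvider:MY_ENVAR1_DB_PASSWORD}
```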

## FileConfigProvider

Expand All @@ -117,41 +124,46 @@ This provider is useful for loading configuration data from mounted files.
To restrict the file paths that the `FileConfigProvider` can access, use the `allowed.paths` parameter. This parameter accepts a comma-separated list of paths that the provider is allowed to access. If not set, all paths are allowed.

**Example `FileConfigProvider` configuration**


config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider
config.providers.fileProvider.param.allowed.paths=/path/to/config1,/path/to/config2

```properties
config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider
config.providers.fileProvider.param.allowed.paths=/path/to/config1,/path/to/config2
```

To reference a value supplied by the `FileConfigProvider`, use the correct placeholder syntax:


${fileProvider:<path_and_filename>:<property>}

```text
${fileProvider:<path_and_filename>:<property>}
```

## Example: Referencing files

Here’s an example that uses a file configuration provider with Kafka Connect to supply a connector with authentication credentials for a database.

First, create a `connector-credentials.properties` configuration file with the following credentials:


dbUsername=my-username
dbPassword=my-password

```properties
dbUsername=my-username
dbPassword=my-password
```

Specify a `FileConfigProvider` in the Kafka Connect configuration:

**Example Kafka Connect configuration with a `FileConfigProvider`**


config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider

```properties
config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider
```

Next, reference the properties from the file in the connector configuration.

**Example connector configuration referencing file properties**


database.user=${fileProvider:/path/to/connector-credentials.properties:dbUsername}
database.password=${fileProvider:/path/to/connector-credentials.properties:dbPassword}

```properties
database.user=${fileProvider:/path/to/connector-credentials.properties:dbUsername}
database.password=${fileProvider:/path/to/connector-credentials.properties:dbPassword}
```

At runtime, the configuration provider reads and extracts the values from the properties file.