Configure CPU and Memory for Confluent Platform in Confluent for Kubernetes
Setting appropriate requests and limits is important for the performance of Confluent Platform clusters and the applications that depend on them.
Before configuring CPU and memory resource requirements for Confluent Platform, review Cluster sizing for Confluent Platform for resource allocation planning.
Specify CPU and memory requests
Requests and limits depend on your workload. As a best practice, start from the minimum requirements for Confluent Platform, then benchmark and tune as needed for your environment.
For more information about CPU and memory resources in Kubernetes, see Resource Management for Pods and Containers.
Confluent for Kubernetes (CFK) allows you to define custom pod resource requirements for Confluent Platform
components it deploys. You specify these requirements using the requests and
limits properties for components in their custom resources (CRs).
spec:
  podTemplate:
    resources:      --- [1]
      limits:       --- [2]
        cpu:        --- [3]
        memory:     --- [4]
      requests:     --- [5]
        cpu:        --- [6]
        memory:     --- [7]
[1] resources describes the compute resource requirements for this component CR.
[2] limits describes the maximum amount of compute resources allowed. Your Confluent Platform component will throttle if it tries to use more resources than the values set here.
[3] [6] Limits and requests for CPU resources are measured in CPU units. 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core. Fractional CPU requests are allowed. For example, resources.requests.cpu: 0.5 requests half as much CPU time. For CPU resource units, the quantity expression 0.1 is equivalent to the expression 100m, which means "one hundred millicpu".
[4] [7] Limits and requests for memory are measured in bytes.
[5] requests describes the minimum amount of compute resources required. If the requests section is omitted, it defaults to limits if that is explicitly specified in the same CR, and otherwise to the values defined for the Kubernetes cluster.
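The unit rules above can be sketched with a small converter. This is an illustrative helper only, not part of the Kubernetes or CFK API:

```python
# Illustrative only: converts Kubernetes-style CPU and memory quantity
# strings to millicores and bytes, following the unit rules described above.

def cpu_to_millicores(quantity: str) -> int:
    """'0.5' -> 500, '100m' -> 100, '1' -> 1000."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def memory_to_bytes(quantity: str) -> int:
    """'64Mi' -> 64 * 1024**2 (binary); '128M' -> 128 * 10**6 (decimal)."""
    binary = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    decimal = {"K": 10**3, "M": 10**6, "G": 10**9}
    for suffix, factor in binary.items():   # check two-letter suffixes first
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    for suffix, factor in decimal.items():
        if quantity.endswith(suffix):
            return int(quantity[:-1]) * factor
    return int(quantity)  # plain number means bytes

print(cpu_to_millicores("0.1"))   # 100, the same quantity as "100m"
print(memory_to_bytes("64Mi"))    # 67108864
```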
The following example CR specifies requests for 0.25 CPU and 64 MiB of memory. The limits are set to 0.5 CPU and 128 MiB of memory.
spec:
  podTemplate:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
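For capacity planning, per-pod requests multiply across replicas. A quick sketch of the arithmetic, using the example values above (the 3-replica count is hypothetical):

```python
# Hypothetical sizing arithmetic: total scheduling footprint of the
# example requests (250m CPU, 64Mi memory) across a 3-replica cluster.
replicas = 3                          # hypothetical replica count
cpu_request_millicores = 250          # "250m" from the example CR
memory_request_bytes = 64 * 1024**2   # "64Mi" from the example CR

total_cpu = replicas * cpu_request_millicores
total_memory = replicas * memory_request_bytes
print(f"{total_cpu}m CPU, {total_memory // 1024**2}Mi memory")
# 750m CPU, 192Mi memory
```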
Resource definitions for CFK Init Container
CFK does not allow you to configure the resource requests or limits for the Init Container. For each Confluent Platform deployment, CFK sets the following for the Init Container:
resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 512Mi
Define Java heap size
In addition to memory sizing, you can configure the Java (JVM) heap size for
Confluent components using the configuration override feature
(spec.configOverrides.jvm).
For guidance on setting JVM heap size, see the Tuning defaults for containers article.
Remove the default JVM settings
To auto scale the JVM heap size
(UseContainerSupport) or to set the JVM heap size (MaxRAMPercentage), you must remove the
existing JVM -Xms and -Xmx settings that CFK sets by default.
When only the pod memory limits value is set, use the limits value in the removal entries.
When both the memory limits and requests are set, use the requests value in the removal entries.
For example, to remove the -Xmx and -Xms settings, add the following to
the component CR:
kind: <component>
spec:
  configOverrides:
    jvm:
      - "---Xmx2G"
      - "---Xms2G"
Note
To remove -Xms and -Xmx, specify the value in megabytes (M) or
gigabytes (G). Mebibytes (Mi) or gibibytes (Gi) are not
supported.
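The two notations are not interchangeable, because decimal (M, G) and binary (Mi, Gi) units name different byte counts:

```python
# Decimal (G) and binary (Gi) units denote different quantities:
two_G = 2 * 10**9     # "2G"  = 2,000,000,000 bytes
two_Gi = 2 * 2**30    # "2Gi" = 2,147,483,648 bytes
print(two_Gi - two_G)  # 147483648 bytes difference
```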
Auto scale JVM heap size
To auto scale the JVM heap size, enable UseContainerSupport by adding the
following to the component CR:
kind: <component>
spec:
  configOverrides:
    jvm:
      - "-XX:+UseContainerSupport"
Set JVM heap size
To set the JVM heap size, in the component CR, set MaxRAMPercentage to the
desired JVM heap size as a percentage of the total memory limit
(spec.podTemplate.resources.limits.memory) you set in Specify CPU and memory requests above.
kind: <component>
spec:
configOverrides:
jvm:
- "-XX:MaxRAMPercentage=<percentage value>"
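The resulting heap ceiling is straightforward arithmetic. For example, a 4Gi memory limit with MaxRAMPercentage=50.0 yields roughly a 2Gi heap:

```python
# Illustrative arithmetic: the JVM caps the heap at roughly
# MaxRAMPercentage percent of the container memory limit.
memory_limit_bytes = 4 * 2**30    # resources.limits.memory: "4Gi"
max_ram_percentage = 50.0         # -XX:MaxRAMPercentage=50.0
heap_bytes = int(memory_limit_bytes * max_ram_percentage / 100)
print(heap_bytes / 2**30)         # 2.0 (GiB)
```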
Examples for JVM heap size configuration
The following examples show the JVM MaxRAMPercentage set to 50% of the
memory and auto-scaling enabled.
Example 1: In this example, only limits is set (4Gi) for the pod. To
remove -Xmx and -Xms, specify the pod's memory limit value, such as
---Xmx4G and ---Xms4G.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  podTemplate:
    resources:
      limits:
        memory: "4Gi"
  configOverrides:
    jvm:
      - "---Xmx4G"  # This removes the -Xmx parameter
      - "---Xms4G"  # This removes the -Xms parameter
      - "-XX:+UseContainerSupport"
      - "-XX:MaxRAMPercentage=50.0"
Example 2: In this example, both limits and requests are set for the
pod. To remove -Xmx and -Xms, use the pod's requests memory value, such as
---Xmx2G and ---Xms2G.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  podTemplate:
    resources:
      limits:
        memory: "4Gi"
      requests:
        memory: "2Gi"
  configOverrides:
    jvm:
      - "---Xmx2G"  # This removes the -Xmx parameter
      - "---Xms2G"  # This removes the -Xms parameter
      - "-XX:+UseContainerSupport"
      - "-XX:MaxRAMPercentage=50.0"
For more details about the configuration overrides feature, see Configuration overrides.