Scheduling Hazelcast Pods
Kubernetes allows assigning Pods to certain nodes. The Hazelcast Platform Operator accepts the same policies to explicitly allow or disallow certain assignments. Detailed documentation about scheduling policies can be found in the Kubernetes docs.
The ManagementCenter specification also supports the same scheduling capabilities.
You can make sure that pods for Hazelcast members run on certain nodes by using the following Kubernetes scheduling mechanisms: node selectors, node affinity, pod affinity and anti-affinity, taints and tolerations, and topology spread constraints.
Node Selector
A node selector is a hard requirement that defines the nodes on which pods should run.
If no node matches the requirements, the pods are not scheduled. For example, you may want to make sure that Hazelcast pods run only on nodes in a specific region.
To define the nodes on which Hazelcast pods should run, use the nodeSelector field. For example, the following configuration uses the built-in topology.kubernetes.io/region label to run Hazelcast pods only on nodes in the us-west1 region.
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  scheduling:
    nodeSelector:
      topology.kubernetes.io/region: us-west1
apiVersion: hazelcast.com/v1alpha1
kind: ManagementCenter
metadata:
  name: managementcenter
spec:
  scheduling:
    nodeSelector:
      topology.kubernetes.io/region: us-west1
For more information about node selectors, see the Kubernetes docs.
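The topology.kubernetes.io/region label is normally set by your cloud provider. On self-managed clusters you can attach it, or any custom label, yourself; the following sketch assumes a node named my-node, which is a placeholder.

```shell
# Attach the region label to a node so the nodeSelector above can match it.
# "my-node" is a placeholder; substitute a real node name from your cluster.
kubectl label nodes my-node topology.kubernetes.io/region=us-west1

# Confirm which nodes now match the selector used in the example above.
kubectl get nodes -l topology.kubernetes.io/region=us-west1
```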
Node Affinity
Node affinity allows you to define both hard and soft requirements for the nodes on which pods should run. For example, you can set a hard requirement for Hazelcast pods to run only on nodes with AMD64 architecture and Linux operating system, and a soft requirement to prefer nodes in us-west1 or us-west2 regions.
This scenario cannot be expressed with a node selector because of the soft requirement.
To define a node affinity, use the nodeAffinity field. The example below satisfies the scenario above:
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  scheduling:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64
                - key: kubernetes.io/os
                  operator: In
                  values:
                    - linux
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 15
            preference:
              matchExpressions:
                - key: topology.kubernetes.io/region
                  operator: In
                  values:
                    - us-west1
                    - us-west2
apiVersion: hazelcast.com/v1alpha1
kind: ManagementCenter
metadata:
  name: managementcenter
spec:
  scheduling:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64
                - key: kubernetes.io/os
                  operator: In
                  values:
                    - linux
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 15
            preference:
              matchExpressions:
                - key: topology.kubernetes.io/region
                  operator: In
                  values:
                    - us-west1
                    - us-west2
For more information about node affinity, see the Kubernetes docs.
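To check which nodes satisfy the affinity terms above before applying the resource, you can list the nodes with the relevant labels shown as columns:

```shell
# Show one column per label referenced by the affinity terms above,
# so you can see which nodes satisfy the hard and soft requirements.
kubectl get nodes -L kubernetes.io/arch,kubernetes.io/os,topology.kubernetes.io/region
```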
Pod Affinity and Pod Anti-Affinity
You can specify pod affinity and pod anti-affinity to control where Hazelcast pods run relative to other pods.
For example, you may want to run Hazelcast pods on the same nodes as the pods of the Deployment app1, because co-locating the pods improves your application's performance. You may also prefer that Hazelcast pods run on different nodes from each other, but because the number of nodes may be small, you don't want to block the scheduler when there are more Hazelcast pods than nodes.
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  scheduling:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - app1
            topologyKey: kubernetes.io/hostname
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/instance
                    operator: In
                    values:
                      - hazelcast
              topologyKey: kubernetes.io/hostname
apiVersion: hazelcast.com/v1alpha1
kind: ManagementCenter
metadata:
  name: managementcenter
spec:
  scheduling:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - app1
            topologyKey: kubernetes.io/hostname
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 10
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/instance
                    operator: In
                    values:
                      - hazelcast
              topologyKey: kubernetes.io/hostname
For more information about pod affinity and anti-affinity, see the Kubernetes docs.
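After applying the configuration, you can verify the placement by listing both sets of pods with their node names; the label selectors below are the ones used in the affinity terms above:

```shell
# List the application pods and the Hazelcast pods with their node names.
# The NODE column should show Hazelcast members co-located with app1 pods.
kubectl get pods -l app=app1 -o wide
kubectl get pods -l app.kubernetes.io/instance=hazelcast -o wide
```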
Tolerations
Taints repel pods from certain nodes, and tolerations allow pods to be scheduled onto tainted nodes despite the taints. A node is tainted with a key, an optional value, and an effect.
This example taints the node node1:
kubectl taint nodes node1 forbidden:NoSchedule
With the following configuration, no Hazelcast pods can be scheduled on node1:
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast-sample
spec:
  clusterSize: 3
  repository: 'docker.io/hazelcast/hazelcast-enterprise'
  version: '5.5.2-slim'
apiVersion: hazelcast.com/v1alpha1
kind: ManagementCenter
metadata:
  name: managementcenter-sample
spec:
  repository: 'hazelcast/management-center'
  version: '5.6.0'
  externalConnectivity:
    type: LoadBalancer
  hazelcastClusters:
    - address: hazelcast-sample
      name: dev
To allow the pods to run on node1, add tolerations to the Hazelcast pods:
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  scheduling:
    tolerations:
      - key: "forbidden"
        operator: "Exists"
        effect: "NoSchedule"
apiVersion: hazelcast.com/v1alpha1
kind: ManagementCenter
metadata:
  name: managementcenter
spec:
  scheduling:
    tolerations:
      - key: "forbidden"
        operator: "Exists"
        effect: "NoSchedule"
For more information about taints and tolerations, see the Kubernetes docs.
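When you no longer need the taint, you can remove it with the same kubectl taint command and a trailing hyphen, then check where the pods landed:

```shell
# Remove the taint added earlier; the trailing "-" deletes it, so node1
# accepts pods again without requiring any toleration.
kubectl taint nodes node1 forbidden:NoSchedule-

# Verify which nodes the Hazelcast pods were scheduled on.
kubectl get pods -o wide
```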
Topology Spread Constraints
Topology spread constraints control how pods are spread across the cluster.
The following configuration ensures that Hazelcast pods are spread evenly across all nodes, with a difference of at most one pod between any two nodes.
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  scheduling:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: hazelcast
            app.kubernetes.io/instance: hazelcast
            app.kubernetes.io/managed-by: hazelcast-platform-operator
Management Center runs as a single instance, so topology spread constraints are not useful for the ManagementCenter Custom Resource.
For more information about topology spread constraints, see the Kubernetes docs.
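To confirm the spread, you can count Hazelcast pods per node using the managed-by label from the constraint above:

```shell
# Count Hazelcast pods per node; with maxSkew: 1 the per-node counts
# should differ by at most one.
kubectl get pods -l app.kubernetes.io/managed-by=hazelcast-platform-operator \
  -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c
```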