Kubernetes Jobs
When a Job starts and a pod fails (exits with a non-zero exit status), the Job controller will, by default, retry the pod based on the restartPolicy:
- If restartPolicy is OnFailure, the same pod is restarted on the same node.
- If restartPolicy is Never, a new pod is created, which could be scheduled on a different node.
Each time a pod fails and is scheduled to restart, this counts as a retry.
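As an illustrative sketch (the name, image, and command are placeholders), here is a Job that uses restartPolicy: Never, so each failed attempt is replaced by a brand-new pod that may land on a different node:
apiVersion: batch/v1
kind: Job
metadata:
  name: retry-demo             # placeholder name
spec:
  template:
    spec:
      restartPolicy: Never     # failed pods are replaced by new pods instead of being restarted in place
      containers:
      - name: worker
        image: busybox         # placeholder image
        command: ["sh", "-c", "exit 1"]   # always fails, only to demonstrate the retry behavior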
The backoffLimit defines how many of these retries the Job should attempt before giving up. If not set, the default value is 6. Once the Job reaches the backoffLimit, the Job controller creates or restarts no further pods, the Job is marked as failed, and any pods that are still running are terminated. If a pod completes successfully, it does not count toward the backoffLimit; only failed attempts are counted.
Here is an example of how you can set the backoffLimit in your Job manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-processing-job
spec:
  parallelism: 3
  completions: 9
  backoffLimit: 4  # Specifies the number of retries before marking the Job as failed
  template:
    spec:
      containers:
      - name: processor
        image: processor-image
      restartPolicy: OnFailure
In this example, if a pod fails to complete successfully, the Job controller will retry it up to 4 times before marking the Job as failed. Remember that backoffLimit applies to the Job as a whole, not to individual pods. In the context of a parallel Job, it is the total number of retries for the entire Job, not per pod. If one pod fails, is restarted four times, and fails on each attempt, the Job reaches its backoffLimit and will not create or restart any more pods.
Kubernetes Resources and Limits:
When creating a LimitRange, if you don't specify default request and limit values (for memory or CPU), then the max value is assigned to the defaults.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr
spec:
  limits:
  - max:
      memory: 1Gi
    min:
      memory: 500Mi
    type: Container
In the above example, we specified the min and max memory values for containers. If we describe the LimitRange, we will see something like this:
limits:
- default:
    memory: 1Gi
  defaultRequest:
    memory: 1Gi
  max:
    memory: 1Gi
  min:
    memory: 500Mi
  type: Container
If you don't specify resource fields for a container, then default request and limit values of 1Gi get assigned. The same applies to CPU.
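For illustration, assuming the mem-min-max-demo-lr LimitRange above exists in the Pod's namespace (the pod name and image below are placeholders), a Pod created without any resources section ends up with the defaults applied:
apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo         # placeholder name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image; no resources section is specified
# After admission, the LimitRange fills in the container's resources, roughly:
#   resources:
#     requests:
#       memory: 1Gi           # defaultRequest, taken from the LimitRange max
#     limits:
#       memory: 1Gi           # default, taken from the LimitRange max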
Now let's say we set default memory values using a LimitRange object. Then the memory request in the pod spec should not exceed the default memory limit, i.e. 512Mi in this case (if only the request is specified).
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
resources:
  requests:
    memory: "600Mi"
In the above case, only the request is specified, so the limit is filled in from the LimitRange object, which is 512Mi. The request (600Mi) is then greater than the limit, which is not allowed, so we get an error.
resources:
  requests:
    memory: "600Mi"
  limits:
    memory: "1Gi"
This is a valid case because we specified the limit as well, so no value is taken from the LimitRange.
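For completeness, here is a minimal, hypothetical Pod manifest (name and image are placeholders) showing where that valid resources block sits, assuming it is created in the namespace that has the mem-limit-range LimitRange:
apiVersion: v1
kind: Pod
metadata:
  name: valid-request-demo    # placeholder name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    resources:
      requests:
        memory: "600Mi"       # explicit request
      limits:
        memory: "1Gi"         # explicit limit, so the 512Mi default from the LimitRange is not applied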
Kubernetes Network Policy:
Question:
There are existing Pods in Namespace space1 and space2.
We need a new NetworkPolicy named np that restricts all Pods in Namespace space1 to only have outgoing traffic to Pods in Namespace space2. Incoming traffic not affected.
The NetworkPolicy should still allow outgoing DNS traffic on port 53 TCP and UDP.
Answer:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space2
Note: Observe the egress rules. There are two rules. The first rule allows egress traffic over TCP and UDP on port 53, which is typically used for DNS queries. The second rule allows traffic to any pod in namespaces that have the label kubernetes.io/metadata.name: space2. This is done through a namespaceSelector, which selects the destination namespaces by their labels.
Now compare this with the following spec, where the namespaceSelector and the DNS ports are combined into a single rule:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: space2
  ports:
  - port: 53
    protocol: TCP
  - port: 53
    protocol: UDP
If we use this modified spec, the meaning changes completely: pods in space1 can only communicate on the DNS ports with pods in space2, and no other egress traffic is permitted to any other destination.
The correct solution, i.e. the answer above, can also be written as follows:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space2