
Replication controller and replica set

Replication controllers and replica sets both manage a group of pods identified by a label selector and ensure that a specified number of them is always up and running. The main difference between them is that replication controllers only support equality-based label matching (key = value), while replica sets can also use set-based selection (for example, a key whose value is in a given set). Replica sets are newer and designated as the next-generation replication controllers. They are still in beta and are not fully supported by all the tools at the time of writing. Hopefully, by the time you read this, they will be full-fledged members.

Kubernetes guarantees that you will always have the same number of pods running as you specified in a replication controller or a replica set. Whenever the number drops, due to a problem with the hosting node or the pod itself, Kubernetes fires up new instances. Note that if you manually start pods and exceed the specified number, the controller will kill the extra pods. Replication controllers used to be central to many workflows, such as rolling updates and running one-off jobs. As Kubernetes evolved, it introduced direct support for many of these workflows through dedicated objects such as Deployment, Job, and DaemonSet. We will meet them all later.
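As a rough illustration of the selector difference, here is a minimal replica set manifest using a set-based matchExpressions selector; a replication controller's selector only accepts plain key: value pairs. All names here (nginx-rs, the app label, the image tag) are placeholders, and on older clusters where replica sets were still in beta the apiVersion would be extensions/v1beta1 rather than apps/v1.

# replica-set.yaml -- a minimal sketch; apply with: kubectl apply -f replica-set.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3                 # Kubernetes keeps exactly this many pods running
  selector:
    matchExpressions:         # set-based selection; a replication controller's
      - key: app              # selector supports only exact key: value equality
        operator: In
        values:
          - nginx
          - nginx-canary
  template:
    metadata:
      labels:
        app: nginx            # must satisfy the selector above
    spec:
      containers:
        - name: nginx
          image: nginx:1.21

If one of the three pods dies, or you delete it, the replica set immediately creates a replacement; if you manually start an extra pod that matches the selector, one of them will be killed to get back to three.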

Question Of The Day!

Restart pods when configmap updates in Kubernetes? [closed]

I know there's been talk about the ability to automatically restart pods when a config map changes, but to my knowledge this is not yet available in Kubernetes 1.2.

So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?

BEST ANSWER:

Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).

You can always write a custom pid1 that notices the config map has changed and restarts your app.

You can also, for example, mount the same config map in two containers, expose an HTTP health check in the second container that fails if the hash of the config map contents changes, and wire that up as the liveness probe of the first container (containers in a pod share the same network namespace). The kubelet will then restart the first container for you when the probe fails.
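A rough sketch of that setup might look like the pod below; all names are invented for illustration, and Kubernetes does not ship such a sidecar, so you would have to build the config-watcher image yourself.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watch          # hypothetical names throughout
spec:
  volumes:
    - name: config
      configMap:
        name: my-config
  containers:
    - name: app                        # the real application container
      image: my-app:latest
      volumeMounts:
        - name: config
          mountPath: /etc/my-app
      livenessProbe:
        httpGet:                       # hits port 8080 in the shared pod network
          path: /healthz               # namespace, i.e. the watcher container below
          port: 8080
        periodSeconds: 10
    - name: config-watcher             # serves /healthz; starts returning errors
      image: config-watcher:latest     # once the hash of the mounted config changes
      volumeMounts:
        - name: config
          mountPath: /etc/my-app

Because the liveness probe is declared on the app container, the kubelet restarts app (not the watcher) once /healthz starts failing, which effectively reloads the configuration.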

Of course, if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
