I have been experiencing some strange behaviour with the HPA v2 on v1.14.9, where it scales up a deployment unnecessarily during rolling updates. The deployment has a maxSurge of 1 and a maxUnavailable of 0, and while the deployment is completely idle at 1 replica, a rolling update makes it scale up to 4 replicas or more very rapidly, and then scale down gradually. This seems to be related to the fact that it scales on a custom metric, as the same does not occur when scaling only on CPU. The custom metric that it scales on may not be available at Pod startup, as it is generated per Pod with a rate function, and the Prometheus scrape interval is 30 seconds. The problem seems to happen when the outdated Pod is deleted, and it gets more exaggerated if the Pod takes longer to shut down. After triggering a rolling update, the deployment scales up to 4. I tried to keep this issue as simple as possible, but adding a long terminationGracePeriodSeconds increases the Pod count to 8 or 10 sometimes.
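
For context, here is a minimal sketch of the kind of configuration described above. All names, the metric name, image, and target values are placeholders I have filled in for illustration, not taken from the actual manifests:

```yaml
# Hypothetical Deployment and HPA illustrating the reported setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during a rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      terminationGracePeriodSeconds: 120   # a long shutdown exaggerates the issue
      containers:
      - name: app
        image: example/app:latest
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: example_requests_per_second   # per-Pod rate metric scraped by Prometheus every 30s
      target:
        type: AverageValue
        averageValue: "100"
```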