
Kubernetes v1.36 Elevates Pod-Level Resource Scaling to Beta – No Restart Required

2026-05-04 00:12:51

The Kubernetes community has marked a major milestone: In-Place Pod-Level Resources Vertical Scaling has graduated to Beta in version 1.36, now enabled by default. This means operators can adjust the aggregate resource budget of a running pod without necessarily restarting its containers – a leap forward for dynamic workload management.

“This feature closes a critical gap for complex pods, especially those with sidecars or multiple containers sharing a resource pool,” said a senior Kubernetes SIG Node maintainer. “It offers a safe, automated path to scale up under load while minimizing disruption.”

Background

The journey began in v1.34, when Pod-Level Resources graduated to Beta, allowing an overall resource budget per pod rather than per container. v1.35 made In-Place Vertical Scaling generally available for individual containers. Now v1.36 combines these into a unified capability: in-place scaling of the pod-level budget, often without a container restart.


The feature is controlled by the InPlacePodLevelResourcesVerticalScaling feature gate, which is now turned on by default. This enables updates to .spec.resources at the pod level while the pod is running.
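
To illustrate, a pod-level budget is declared directly under spec.resources, alongside the container list. A minimal sketch of such a manifest follows; the pod and image names are placeholders, and container-level limits are omitted so both containers draw from the shared pool:

```yaml
# Hypothetical pod declaring an aggregate, pod-level resource budget.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-demo
spec:
  resources:          # pod-level budget, resizable in place in v1.36
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: app
    image: registry.example.com/app:latest
  - name: sidecar
    image: registry.example.com/sidecar:latest
```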

How It Works

When a pod-level resize is initiated, the kubelet evaluates each container’s resizePolicy for the affected resource. Containers whose policy is NotRequired get their cgroup limits updated on the fly via the Container Runtime Interface (CRI); containers whose policy is RestartContainer are restarted to apply the new limits safely.

This per-container policy allows operators to mix zero-downtime and disruptive updates within the same pod. For example, a main application may accept live resource changes while a sidecar requires a restart for certain adjustments.
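
A sketch of such a mixed container list, assuming the main application tolerates live CPU changes while the sidecar must restart on memory changes (container and image names are illustrative):

```yaml
# Hypothetical container list mixing live and disruptive resize behavior.
containers:
- name: app
  image: registry.example.com/app:latest
  resizePolicy:
  - resourceName: cpu
    restartPolicy: NotRequired       # CPU limit applied live via CRI
- name: sidecar
  image: registry.example.com/sidecar:latest
  resizePolicy:
  - resourceName: memory
    restartPolicy: RestartContainer  # sidecar restarts on memory change
```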

Example: Scaling a Shared Pool

Consider a pod with a 2 CPU limit at the pod level and no per-container limits. Applying a patch to double the CPU to 4 CPUs triggers the kubelet to resize the shared pool. The kubelet first checks node capacity, then updates cgroups for containers that allow non-restart updates, and finally restarts those that require it.

  1. Initial state: Pod spec with resources.limits.cpu: "2" and two containers whose resizePolicy for CPU is restartPolicy: NotRequired.
  2. Resize operation: kubectl patch pod ... --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'
  3. Outcome: Both containers inherit the new 4 CPU limit without restart, as long as the resize policy allows it.
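
The steps above can be reproduced with kubectl against a cluster where the feature gate is enabled; the pod name shared-pool-demo is a placeholder:

```shell
# 1. Inspect the current pod-level CPU limit (placeholder pod name).
kubectl get pod shared-pool-demo -o jsonpath='{.spec.resources.limits.cpu}'

# 2. Double the shared CPU limit in place via the resize subresource.
kubectl patch pod shared-pool-demo --subresource resize \
  --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

# 3. Confirm the containers were not restarted.
kubectl get pod shared-pool-demo \
  -o jsonpath='{.status.containerStatuses[*].restartCount}'
```

Note that the resize subresource, rather than a plain kubectl patch, is what routes the change through the kubelet’s in-place resize path.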

What This Means

For cluster operators, the beta graduation reduces operational friction. Previously, adjusting a pod’s resource pool often required a rolling update or manual per-container recalculations. Now, a simple API call adjusts the shared budget, and the system handles the rest.

This is particularly powerful for sidecar-heavy deployments, logging aggregators, and service meshes where containers need to flex together under traffic spikes. The kubelet’s built-in safety checks – node capacity, feasibility, and sequence validation – ensure node stability even during rapid scaling events.

Maintainers expect the feature to move toward general availability in a future release, but v1.36 already offers production-grade capabilities for many use cases. “We encourage users to test in non-critical workloads first,” the SIG Node maintainer added, “but the feedback from early adopters has been very positive.”
