Server-Side Apply and managedFields: the Kubernetes feature that ends controller fights
How the API server tracks per-field ownership, why your operator and ArgoCD stopped overwriting each other, and the pitfalls you'll hit once you turn SSA on
Most Kubernetes users never read the metadata.managedFields block in a YAML dump. It's boilerplate noise at the bottom of kubectl get -o yaml. Until the day ArgoCD and your operator start fighting over the same Deployment, the spec oscillates every five minutes, and the only signal is three-minute-old pods being recreated that nobody asked for. That's the day managedFields matters.
The structure exists because the client-side kubectl apply model had a fundamental flaw: the client held the "last-applied" annotation and the server had no idea who owned what. When two different actors both tried to be authoritative, whichever called apply most recently won. Server-Side Apply (SSA) moved that ownership tracking into the API server itself, per-field, so conflicts are visible, resolvable, and auditable.
Here's the full story.
What managedFields actually contains
Every object in a cluster running SSA has a managedFields array in its metadata. Each entry describes one "manager" and the fields it owns:
```yaml
metadata:
  managedFields:
  - manager: "kubectl"
    operation: "Apply"
    apiVersion: "apps/v1"
    fieldsType: "FieldsV1"
    fieldsV1:
      f:spec:
        f:replicas: {}
        f:template:
          f:metadata:
            f:labels:
              f:app: {}
  - manager: "kube-controller-manager"
    operation: "Update"
    fieldsType: "FieldsV1"
    fieldsV1:
      f:status:
        f:availableReplicas: {}
        f:readyReplicas: {}
```

Three things worth noticing.
First, ownership is per-field, not per-object. kubectl owns spec.replicas and specific labels. kube-controller-manager owns parts of status. Both coexist on the same Deployment without stepping on each other.
Second, the operation field distinguishes Apply (SSA explicit intent) from Update (legacy imperative PUT). The merge algorithm treats them differently.
Third, the field paths use a compact encoding (f: prefix, nested objects, {} for leaf fields). That's the FieldsV1 format. It's verbose in YAML but efficient for the server to diff.
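If you want to decode the block on a live object, jq makes it readable. A minimal sketch, assuming a Deployment named web (the name is illustrative; any SSA-tracked object works):

```bash
# Pull a single manager's entry out of the managedFields array to see
# exactly which fields it claims.
kubectl get deployment web -o json \
  | jq '.metadata.managedFields[] | select(.manager == "kube-controller-manager")'
```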
The conflict a client-side apply user never saw
Imagine three actors touching the same Deployment:

- A GitOps tool (ArgoCD) reconciles the Deployment from a manifest in git.
- An HPA autoscaler updates spec.replicas based on load.
- A platform operator adds a sidecar container to spec.template.spec.containers.
Under client-side apply, every one of them sends a full PUT. Last write wins. ArgoCD says "replicas should be 3", HPA says "replicas should be 17", the operator says "add sidecar". Every few minutes the values thrash. You see pods being recreated. Nobody logs anything useful because from each tool's perspective its own apply succeeded.
Under SSA, each actor declares which fields it intends to own:
- ArgoCD applies with fieldManager=argocd and declares ownership of everything except spec.replicas.
- HPA applies with fieldManager=horizontal-pod-autoscaler and owns only spec.replicas.
- The operator applies with fieldManager=sidecar-injector and owns the sidecar container entry.
Now when ArgoCD reconciles, the API server sees ArgoCD doesn't claim spec.replicas, so HPA's value is preserved. When HPA scales up, the rest of the spec is untouched. When the operator injects a sidecar, ArgoCD's spec.template.spec.containers list is merged, not replaced.
This is the feature. Per-field ownership turns three fighting actors into three cooperating ones.
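You can reproduce the cooperation with nothing but kubectl. A minimal sketch, with hypothetical manager names, a throwaway Deployment called web, and an illustrative image:

```bash
# GitOps-style apply as manager "argocd": full spec, but deliberately
# no spec.replicas, so argocd never claims that field.
kubectl apply --server-side --field-manager=argocd -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.27
EOF

# Autoscaler-style apply as manager "scaler": a partial object that
# claims only spec.replicas. The rest of the spec is untouched, and
# neither apply conflicts with the other.
kubectl apply --server-side --field-manager=scaler -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 17
EOF
```

Run the argocd apply again and replicas stays at 17: that manifest never claims the field, so the server leaves it alone.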
Force, conflict, and what actually happens on collision
When two managers both try to own the same field, SSA raises a conflict. The request fails with a 409 response and a message listing the conflicting paths. The client has three options:
1. Drop the conflicting field from its apply. The other manager keeps ownership.
2. Send force=true to take ownership. The other manager loses that field.
3. Abort and investigate. Usually the right answer when the conflict is surprising.
kubectl apply --server-side --force-conflicts is the escape hatch for admins. Tools like ArgoCD and Flux offer equivalent flags. Using force without understanding the cause of the conflict is how three-way ownership oscillation starts.
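Continuing the sketch from above (manager names still hypothetical), here's what the collision and the takeover look like:

```bash
# Manager "ops" applies a value for spec.replicas, which "scaler"
# already owns. The server rejects this with a 409 listing the
# conflicting path and the current owner.
kubectl apply --server-side --field-manager=ops -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
EOF

# The escape hatch: the same apply with --force-conflicts. "ops" now
# owns spec.replicas and "scaler" loses it.
kubectl apply --server-side --field-manager=ops --force-conflicts -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
EOF
```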
The more disciplined pattern: if you repeatedly hit conflicts on a specific field, that's a signal the ownership model is wrong. Two managers shouldn't both be writing to the same field unless one of them is actively handing ownership off.
List merge strategies: where subtle bugs live
Lists are where SSA gets interesting. By default, a list is treated as atomic: the whole list is owned by whoever applied it. But most Kubernetes lists in practice have semantic identity per element (containers by name, ports by number, volumes by name). The API declares merge strategies for each such field using x-kubernetes-list-type and x-kubernetes-list-map-keys markers in the OpenAPI schema.
For example, spec.template.spec.containers is declared as list-type: map with list-map-keys: [name]. When two managers apply to the containers list, SSA merges element-by-element by container name. Manager A can own container "app", manager B can own container "sidecar", both survive.
If a list has the wrong marker, merge breaks. A common failure is a Custom Resource Definition that declares a list without map keys. Two actors apply different elements; the second apply replaces the entire list. The first actor's element silently disappears. You learn about this when something that used to be there is gone and no manager claims it.
Operationally: when writing a CRD, always think through the list-type markers for every list field. When debugging a vanishing config, look at the OpenAPI schema of the resource and check if the list has proper identity markers.
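Here's what the markers look like in practice: a minimal sketch of a hypothetical CRD (group example.com, invented field names) whose endpoints list merges element-by-element by name. Drop the two x-kubernetes-* lines and the list becomes atomic, with exactly the vanishing-element failure described above:

```bash
kubectl apply --server-side -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gateways.example.com
spec:
  group: example.com
  names:
    plural: gateways
    singular: gateway
    kind: Gateway
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              endpoints:
                type: array
                # Merge per element, keyed by "name", instead of
                # treating the whole list as atomic.
                x-kubernetes-list-type: map
                x-kubernetes-list-map-keys: ["name"]
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    url:
                      type: string
EOF
```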
How ownership transfers happen
Ownership isn't permanent. It transfers when a manager stops claiming a field. Three ways:

1. A manager applies without the field. If actor A previously owned spec.replicas by applying it and then applies a new manifest that omits spec.replicas, ownership is released. The next actor to apply the field becomes the owner.
2. Explicit removal. A manager applies with the field present but flagged for removal (via a patch, not SSA directly). Useful for operators that want to clean up after themselves.
3. Force takeover. Already discussed: force=true reassigns ownership to the caller.
The subtle one is case 1. Many GitOps tools don't realize that adding or removing a line in a manifest changes the ownership declaration on the next reconcile. If you remove a field from git thinking it'll revert to default, ArgoCD stops claiming it, and whatever default the controller computes is now owned by the controller. Sometimes that's what you want. Sometimes it's a surprise.
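A minimal sketch of case 1, assuming the hypothetical web Deployment from earlier exists, its other fields are owned elsewhere, and nobody currently owns spec.replicas:

```bash
# Manager "gitops" claims spec.replicas by applying it.
kubectl apply --server-side --field-manager=gitops -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
EOF

# The next apply omits spec entirely, so "gitops" stops claiming
# replicas. With no owner left, the value falls back to whatever
# default the controller computes, as described above.
kubectl apply --server-side --field-manager=gitops -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
EOF

# Check who, if anyone, claims spec.replicas now.
kubectl get deployment web -o json \
  | jq '.metadata.managedFields[] | select(.fieldsV1."f:spec"."f:replicas" != null) | .manager'
```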
The operator pattern that gets SSA right
Operators that coexist well with GitOps and HPAs follow a specific pattern:
- Apply only what you own. Don't include fields in your SSA apply that belong to other actors. If your operator manages sidecars, send only the sidecar fields, not the whole spec.template.spec. (A minimal sketch follows this list.)
- Use a unique field manager name: my-operator-v2, not my-operator. This keeps ownership traceable across operator versions.
- Fail loudly on conflicts. Don't default to force=true. Surface conflicts as events on the managed resource so cluster operators can see what happened.
- Design for shared resources. Assume another actor might own adjacent fields. Don't assume you're alone.
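As promised above, a sketch of the first rule, with a hypothetical manager name and image. Because containers merges by name, this apply adds or updates only the sidecar entry and claims nothing else:

```bash
kubectl apply --server-side --field-manager=sidecar-injector -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      containers:
      # Only this list entry is claimed; the "app" container stays
      # owned by whoever applied it.
      - name: sidecar
        image: example.com/sidecar:1.0
EOF
```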
Crossplane (Podo #010) is an example of a controller family that uses SSA this way by default. Karpenter's NodePool reconciliation also follows the pattern, which is part of why NodePool configurations coexist cleanly with kubectl edits.
When you still want client-side apply
SSA isn't free. The server does more work per request. managedFields adds size to every object (sometimes multi-kilobyte blocks). The merge logic has edge cases (especially around map-typed fields with heterogeneous keys).
For one-shot scripts, kubectl admin fiddling, ephemeral test clusters, client-side apply remains simpler. The cost of tracking field ownership across actors is overhead you don't need when there's only one actor.
For anything with more than one source of truth (GitOps plus operator plus HPA, the common production shape), SSA is non-optional. The bugs you'll hit without it are worse than the overhead.
Practical diagnostics
Three commands that pay for themselves:
- kubectl get <resource> -o yaml | yq '.metadata.managedFields' shows the full ownership map. Use it when something changed and you don't know who changed it.
- kubectl get <resource> -o json | jq '.metadata.managedFields[] | {manager, operation, fields: .fieldsV1}' gives a structured view for scripting. Audit pipelines can parse this.
- kubectl diff --server-side -f <manifest> shows what SSA will do before you actually apply, including conflict warnings. Indispensable for GitOps CI pipelines.
One warning: managedFields can balloon. If dozens of actors with unique field manager names apply to the same resource over a long period, the block grows. Kubernetes 1.24+ has cleanup logic that removes entries for managers that haven't touched the object recently. Earlier clusters sometimes need manual pruning on long-lived resources.
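A quick audit sketch for spotting ballooned objects (the namespace and resource kind are illustrative):

```bash
# Count managedFields entries per Deployment; unusually large counts
# are candidates for pruning or for consolidating field manager names.
kubectl get deployments -n default -o json \
  | jq -r '.items[] | "\(.metadata.name)\t\(.metadata.managedFields | length) managers"'
```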
Summary
managedFields is the piece of Kubernetes metadata that makes cooperation between controllers actually work. Per-field ownership, declared intent, explicit conflict surfacing. Every GitOps-meets-operator-meets-HPA production cluster depends on it, usually without realizing it.
If you're writing operators, use a unique fieldManager name and apply only what you own. If you're running GitOps, choose a tool that uses SSA natively and set conflict policies consciously. If you're debugging a resource that keeps changing without a clear cause, kubectl get -o yaml and read the managedFields. The answer is usually there.
For the broader GitOps context where this matters most, see Your Infrastructure Has an API Now. For image pipeline conflicts specifically, Karpenter Beyond the Demo walks through NodePool/EC2NodeClass ownership cleanly.