
If you ask most engineers what Kubernetes does, they'll tell you: it runs containers. Schedules workloads. Handles scaling and restarts. Makes sure your pods are up.
That's not wrong. It's just the smallest part of what Kubernetes is.
The container orchestration surface is where most people interact with Kubernetes. It's also the least interesting part of it. The reason Kubernetes became the foundation for an entire industry of platform tooling (ArgoCD, Kyverno, cert-manager, Crossplane, External Secrets Operator, Argo Workflows, and dozens more) has nothing to do with container scheduling.
It has everything to do with the API.
What Kubernetes Actually Is
Kubernetes is a distributed API server with a declarative resource model and a watch-reconcile loop.
Unpack that sentence and the whole picture changes.
Declarative resource model means you don't tell Kubernetes what to do. You tell it what you want. You declare the desired state. Kubernetes figures out how to get there and continuously works to close the gap between desired and actual.
Distributed API server means every resource in Kubernetes (Pods, Deployments, Services, ConfigMaps) is an object managed through a consistent REST API. Every resource supports the same verbs: create, get, list, watch, update, patch, delete. Whether you're talking to a Pod or a ConfigMap or a custom resource you invented this morning, the API surface is identical.
Watch-reconcile loop means any piece of software can watch any resource for changes and react to them. This is the heartbeat of the entire Kubernetes ecosystem. It's how controllers work, how operators work, and how the entire platform maintains itself without human intervention.
That's the real model. Container orchestration is just one of the many things you can build on top of it.
The Kubernetes API Is the Platform
Every Kubernetes resource is accessed the same way. When you run kubectl get deployment, you're making an API call. When ArgoCD deploys your application, it's making API calls. When Kyverno enforces a policy, it's watching API calls. The consistency is total.
This matters because it means the entire operational model is uniform. Any tool that understands the Kubernetes API can read, write, and watch any resource. Any resource can be extended. Any capability can be expressed as a resource.
The API is not just the interface to Kubernetes. The API is Kubernetes. Everything else (the scheduler, the kubelet, the controllers) is a client of that API, operating on shared state.
Custom Resource Definitions Change Everything
Here's where it gets interesting.
Kubernetes ships with a fixed set of built-in resources: Pods, Deployments, StatefulSets, Services, ConfigMaps, Secrets, and so on. But the resource model itself is extensible via Custom Resource Definitions (CRDs).
A CRD is a schema that you register with the Kubernetes API, telling it: "I'm introducing a new resource type. Here's what it looks like." Once registered, that resource type becomes a first-class citizen of the cluster, fully accessible via the standard API, storable in etcd, and watchable by controllers.
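As a sketch of what that registration looks like, here is a minimal CRD for a hypothetical Backup resource type (the group, kind, and schema fields are invented for illustration), followed by an instance of it:

```yaml
# Hypothetical CRD registering a new resource type "Backup".
# Group name, kind, and schema are illustrative, not a real project's API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.platform.example.com
spec:
  group: platform.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                retentionDays:
                  type: integer
---
# Once the CRD is registered, instances are ordinary API objects:
# create, get, list, watch, update, patch, delete all work on them.
apiVersion: platform.example.com/v1
kind: Backup
metadata:
  name: nightly-db-backup
spec:
  schedule: "0 2 * * *"
  retentionDays: 14
```

After applying the first document, kubectl get backups behaves exactly like kubectl get deployments: same verbs, same storage in etcd, same watchability.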
This is how ArgoCD, Kyverno, cert-manager, and every other major Kubernetes-native tool works. They introduce CRDs that express their concepts as Kubernetes resources:
- ArgoCD introduces Application and AppProject
- Kyverno introduces ClusterPolicy and Policy
- cert-manager introduces Certificate and Issuer
- External Secrets Operator introduces ExternalSecret and SecretStore
- Kong introduces HTTPRoute and KongPlugin
Once those CRDs exist in your cluster, you interact with them exactly like built-in resources. kubectl get application works the same way as kubectl get deployment. The API doesn't know or care that Application wasn't part of the original Kubernetes spec.
This is the mechanism that turned Kubernetes from a container scheduler into an extensible platform for encoding operational knowledge.
Custom Resources Are Operational Intent
A Custom Resource is an instance of a CRD. When you create an ArgoCD Application, you're not writing deployment scripts. You're declaring intent: this Git repository at this revision should be in sync with this namespace in this cluster.
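That intent looks something like this (repository URL, names, and namespaces are placeholders):

```yaml
# An ArgoCD Application: "keep this Git path in sync with this namespace".
# All names and the repo URL are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-service.git
    targetRevision: main
    path: deploy/
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to Git state
```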
When you create a Kyverno ClusterPolicy, you're not writing admission webhook code. You're declaring: any resource matching these conditions must satisfy these rules.
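A minimal sketch of such a policy, assuming an illustrative rule that requires a team label on every Deployment:

```yaml
# Kyverno ClusterPolicy: reject any Deployment without a "team" label.
# The specific rule is an example, not from the original text.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Every Deployment must carry a team label."
        pattern:
          metadata:
            labels:
              team: "?*"   # any non-empty value
```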
When you create a cert-manager Certificate, you're not writing ACME challenge handlers. You're declaring: I need a TLS certificate for this domain, issued by this CA, rotated before it expires.
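Expressed as a resource, that declaration might look like this (the domain and issuer name assume a ClusterIssuer already exists in the cluster):

```yaml
# cert-manager Certificate: "I need TLS for this domain, rotated early".
# Domain and issuer name are placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-tls
  namespace: web
spec:
  secretName: example-com-tls   # where the issued key pair is stored
  dnsNames:
    - example.com
  issuerRef:
    name: letsencrypt-prod      # assumes this ClusterIssuer exists
    kind: ClusterIssuer
  renewBefore: 720h             # renew 30 days before expiry
```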
The operational complexity (syncing Git state, enforcing admission rules, rotating certificates) is hidden inside the controller that watches the resource. You never see it. You only see the intent, expressed as a Kubernetes resource.
This is what "encoding operational knowledge as code" means. The runbook your SRE wrote for certificate rotation doesn't live in a Confluence page anymore. It lives in a cert-manager controller, executing continuously, without a human touching it.
The Operator Pattern
An Operator is a controller that watches a CRD and reconciles actual cluster state to the desired state expressed in Custom Resources. (The CNCF Operator Whitepaper goes deep on how this pattern works in production.)
The reconciliation loop is simple in concept:
watch for changes to resource X
read desired state from resource X
compare to actual state in the cluster
if actual ≠ desired: take action to close the gap
repeat
In practice, that loop encodes the operational expertise of however many engineers built the operator. cert-manager's reconciliation loop encodes everything a platform engineer knows about certificate lifecycle management. ArgoCD's reconciliation loop encodes everything a DevOps team knows about Git-based deployment.
When you run cert-manager in your cluster, you're not just installing software. You're installing the accumulated operational knowledge of the cert-manager community: every edge case handled, every failure mode covered, every expiry scenario automated.
The GoldenPath IDP runs approximately ten operators. Together, they encode the platform governance that would otherwise require constant human intervention: GitOps deployment, policy enforcement, secret synchronisation, certificate management, DNS record management, cluster autoscaling. The platform maintains itself. The human work is in declaring intent.
What Reveals the Model in Practice
The theory becomes visible through specific tools. There's a progression most platform engineers encounter, though rarely in the order that makes the underlying model clear.
The kube-prometheus-stack is often where it starts. A single Helm chart deploying Prometheus, Grafana, Alertmanager, node-exporter, kube-state-metrics, and the Prometheus Operator itself as one coordinated unit. The operator isn't reconciling a single resource. It manages the lifecycle of an entire observability stack through CRDs: Prometheus instances, ServiceMonitors, PrometheusRules, Alertmanager configurations. That's the first encounter with Kubernetes managing something far more complex than containers.
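One of those CRDs makes the point concretely: a ServiceMonitor tells the Prometheus Operator which Services to scrape, with all names here illustrative:

```yaml
# ServiceMonitor: declarative scrape configuration, reconciled by the
# Prometheus Operator. Labels and port name are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-service     # scrape Services carrying this label
  endpoints:
    - port: metrics       # named port on the target Service
      interval: 30s
```

No Prometheus config file is edited by hand; the operator watches these resources and regenerates the scrape configuration itself.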
Crossplane is where the reconciliation model tends to click. Kubernetes isn't just managing containers and monitoring components. It's managing cloud infrastructure through the same watch-reconcile loop. The same API. The same pattern. An RDS instance declared in YAML, reconciled by a controller, indistinguishable in model from a Deployment or a Service.
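As a sketch (the exact API group and field names depend on which Crossplane provider and version is installed), that RDS instance could be declared like this:

```yaml
# Crossplane managed resource for an RDS instance. API group, version,
# and fields vary by provider release; treat this as an illustrative shape.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: my-postgres
spec:
  forProvider:
    region: eu-west-1
    dbInstanceClass: db.t3.small
    engine: postgres
    allocatedStorage: 20
    masterUsername: admin
  writeConnectionSecretToRef:   # credentials land in a normal Secret
    name: my-postgres-conn
    namespace: default
```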
ArgoCD makes Custom Resources concrete. A Git repository declared as an Application resource, continuously reconciled against the cluster. The deployment pipeline isn't a script. It's a Kubernetes resource with a controller behind it.
The thread connecting all of these becomes visible through platform engineering: building IDPs, working with tools like Kratix designed explicitly to extend the Kubernetes API for platform teams. At that point, the pattern is unmistakable: these tools aren't just using Kubernetes. They're extending it. The API isn't infrastructure plumbing. It's the orchestration layer the entire platform is built on.
"Container scheduler with useful add-ons" is a stage in understanding Kubernetes. It isn't the destination.
Why This Changes How You Think About Platform Engineering
Once you understand that Kubernetes is an extensible API, not a container scheduler, platform design looks different.
The question isn't "how do I configure Kubernetes to run my containers?" The question is "what operational knowledge can I encode as Kubernetes resources, and what controllers will reconcile them?"
Every piece of manual operational work is a candidate for an operator. Every runbook is a candidate for a CRD. Every approval process is a candidate for a controller that watches for human input and takes action.
This is why the best platform teams don't think of Kubernetes as infrastructure. They think of it as the API their platform is built on. The infrastructure runs Kubernetes. The platform is expressed in Kubernetes resources. Platform engineering as a design philosophy is exactly this distinction.
The Agentic AI Connection
There's one more implication worth naming.
If agents are going to operate on your platform (running deployments, checking compliance, managing infrastructure), the Kubernetes API is the natural interface. An agent that can read and write Kubernetes resources doesn't just have access to pod scheduling. It has access to every CRD in your cluster: every ArgoCD Application, every Kyverno Policy, every ExternalSecret, every custom resource your platform team has defined.
The governance model for agents operating on Kubernetes is the same governance model that applies to humans: RBAC on the Kubernetes API, namespace scoping, admission webhooks (Kyverno), audit logging. You don't need to build a separate security model for agent access. The API already has one.
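For example, scoping an agent's service account to read-only access on ArgoCD Applications in a single namespace takes nothing more than standard RBAC (all names here are illustrative):

```yaml
# Role: read-only access to ArgoCD Applications in one namespace.
# Service account and namespace names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent-argocd-reader
  namespace: team-a
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["applications"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: grant that Role to the agent's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agent-argocd-reader
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: platform-agent
    namespace: team-a
roleRef:
  kind: Role
  name: agent-argocd-reader
  apiGroup: rbac.authorization.k8s.io
```

The same mechanism that constrains a human's kubeconfig constrains the agent; custom resources get no special treatment.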
That's not a coincidence. It's the same extensibility that makes Kubernetes a platform, not just an orchestrator.
What to Do With This
If your team uses Kubernetes primarily for running containers, you're using about 20% of what it offers.
Start by reading what CRDs exist in your cluster: kubectl get crd. You'll likely find dozens, each one representing operational knowledge that's been encoded into your platform without you necessarily knowing it. Each one is a controller running somewhere, reconciling state, doing work that would otherwise require human intervention.
Then ask: what else in your platform should be expressed as a Kubernetes resource? What runbooks should be operators? What manual processes should be reconciliation loops?
That's the question that turns a container platform into an Internal Developer Platform.
At Scaletific, we build Internal Developer Platforms on Kubernetes-native tooling, not just to run containers, but to encode platform governance as code. Talk to us about building yours.