Nomad
Nomad for Kubernetes practitioners
This page compares Kubernetes and Nomad as workload schedulers, including how their workloads are defined and scheduled and how different components map to the other's architecture. The comparisons on this page are for Kubernetes developers and admins. Refer to the "Nomad versus Kubernetes" section of the Nomad introduction page for a high-level overview of Kubernetes compared to Nomad.
Cluster architectures
Kubernetes and Nomad both implement a server and client node system where the server manages scheduling and maintains cluster operations, while the client runs the workload.
Kubernetes
The two main components of a Kubernetes cluster are the servers, known as the control plane, and the clients, known as the worker nodes.
The control plane is responsible for cluster-wide operations like scheduling workloads, as well as detecting and responding to cluster events. It runs on separate nodes from the workers. You may deploy the control plane on a single node or on multiple nodes for higher availability.
The Kubernetes control plane includes these important components:
- kube-apiserver acts as the frontend for the control plane and exposes the Kubernetes API server.
- etcd is a key-value store that contains all of the cluster data.
- kube-scheduler watches for newly created pods and assigns them to a worker node.
- kube-controller-manager runs controller processes for the cluster, such as the node, job, and service account controllers. Each controller is a separate component responsible for its respective process.
Worker nodes are the infrastructure that host the Pods, which are groups of one or more application containers. Containers in a Pod are generally tightly coupled, so each application component typically runs in its own Pod. For example, in a three-tier application with a frontend, backend, and database, each component runs in its own Pod and communicates with the other components through a Service.
Each worker node has the following components:
- kubelet is responsible for making sure containers run in a Pod. It is a long-lived agent that runs on the worker node.
- kube-proxy is an optional, long-lived agent that is responsible for maintaining networking rules and facilitating communication between Pods inside and outside of the cluster.
- Container runtime manages the execution of the containers. Popular runtimes include containerd and Docker engine.
Nomad
The two main components of a Nomad cluster are the server nodes and client nodes.
The server nodes are responsible for accepting jobs, managing clients, and computing task placements. A task is a unit of work in Nomad that has constraints and requires compute resources. Servers schedule tasks to find the optimal placement on a client given the required resources of the task, in addition to other constraints defined in the job specification.
You may run Nomad servers in a cluster in different cloud regions, unlike control plane nodes in Kubernetes. These regions are independent of each other, and data is not replicated between them. Instead, servers in different regions use the gossip protocol to support job submissions from anywhere in the cluster and query the state of other regions.
The client nodes are responsible for running the workloads. Clients communicate with their region's servers using RPC. They register themselves with the servers, send liveness heartbeats, wait for allocations, and update allocation status. An allocation is a mapping of tasks from a workload specification to a client node, with a declaration that a set of tasks should be run on a particular node. Client nodes also contain task drivers, which are the application runtimes, such as Docker, that are necessary to execute the workload.
Architecture comparison
Kubernetes requires that you install a number of different processes on the control plane and worker nodes. Nomad, however, is a single binary that you configure as either a server or client agent.
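For example, you can start the same `nomad` binary in either role by pointing it at different configuration files. The following is a minimal sketch; the paths, filenames, and server address are illustrative, not required values:

```hcl
# server.hcl - start this agent as a server
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3   # expected number of servers in the region
}
```

```hcl
# client.hcl - start this agent as a client
data_dir = "/opt/nomad/data"

client {
  enabled = true
  servers = ["10.0.0.10:4647"]   # illustrative server address
}
```

You then start either role with the same command, for example `nomad agent -config server.hcl`.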
Terminology
The following chart lists the equivalencies between the architecture terminologies used by Kubernetes and Nomad.
| Kubernetes | Nomad |
|---|---|
| Control plane | Server nodes |
| Worker node | Client node |
| kube-apiserver | Nomad HTTP API, built into the Nomad binary |
| etcd | No direct counterpart. Servers replicate cluster state with Raft and store it in a local data_dir. |
| kube-scheduler | Built into the Nomad binary |
| kube-controller-manager | Built into the Nomad binary |
| kubelet | Nomad agent, built into the Nomad binary |
| kube-proxy | No direct counterpart. Many operations covered by Nomad service discovery. |
| Container runtime | Task driver |
Server node
This diagram illustrates the differences between the Kubernetes control plane and the Nomad server.

The Nomad server agent is a single, lightweight process that maintains cluster state and performs workload scheduling.
Client node
This diagram illustrates the differences between the Kubernetes worker node and the Nomad client, which is a single process.

The client agent fingerprints the node and provides information to the Nomad servers for scheduling. The client makes sure that the task is running and manages its lifecycle.
Task drivers are the runtime components. Nomad provides built-in drivers such as Docker, Java, exec, and QEMU. Additionally, the task driver system is pluggable so that you may use any community plugin or create your own.
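For example, a task selects its runtime with the `driver` attribute, and the `config` block holds driver-specific settings. The task name and image below are illustrative:

```hcl
task "web" {
  driver = "docker"        # could also be "exec", "java", "qemu", or a plugin

  config {
    image = "nginx:1.25"   # driver-specific configuration
  }
}
```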
Production control plane deployment
This diagram illustrates the differences between a typical Kubernetes deployment and a Nomad deployment.

Networking
A Kubernetes cluster normally has these IP networks:
- Node network: The physical network that your nodes are connected to.
- Pod network: In Kubernetes, each pod gets its own unique IP. The pod network is separate from the node network and many users choose to implement an overlay network to route traffic between pods and nodes.
- Service network: A service is a Kubernetes resource that represents a group of pods, with a permanent IP address. The service network is a system of virtual IPs that are generally separate from Pod and Node networks.
Because of these separate networks, external applications cannot directly communicate with applications within a Kubernetes cluster. Most users choose to deploy an ingress controller and expose it as the single entry point to a Kubernetes cluster.
This diagram illustrates the differences between Kubernetes and Nomad networking.

Nomad's default network is the Node network. Each task group instance uses the Node IP network and gets its own port through dynamic port assignment. Since no virtual IP or additional overlay network is required, the Nomad cluster network can be part of an existing enterprise network.
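Dynamic port assignment is declared in the group's `network` block. In this sketch (the group, task, and image names are illustrative), an empty port label asks Nomad to pick a free port on the node's IP:

```hcl
group "web" {
  network {
    port "http" {}           # empty block = dynamic port on the node's IP
  }

  task "web" {
    driver = "docker"

    config {
      image = "nginx:1.25"
      ports = ["http"]       # map the container to the assigned port
    }
  }
}
```

The assigned port is exposed to the task through environment variables such as `NOMAD_PORT_http`.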
In Kubernetes, Service and kube-proxy are responsible for tracking the pods and routing the traffic to them. In Nomad production deployments, we recommend integrating HashiCorp Consul for this functionality.
Workload definitions
Kubernetes and Nomad both support workloads defined as declarative specification files. In these configurations, you declare the state that you want your workload to be in, and the orchestrator schedules the workload so that it matches that declared state.
Kubernetes
Workloads in Kubernetes are defined in YAML files, with applications declared in a series of specifications. These can include but are not limited to the following:
- Deployments: These specs deploy one or more application containers.
- Services: These specs deploy services to expose one or more containers.
- Config Maps: These specs define non-sensitive application data.
- Secrets: These specs define sensitive application data.
An example three-tiered application with frontend, backend, and database would consist of:
- A Deployment spec for each tier
- A Service spec for each tier
- A Config Map spec for each tier
- A Secret spec for the database
The number of required specifications can grow further, because Kubernetes accommodates other types of workloads with additional specifications:
- Deployment and ReplicaSet declare ongoing stateless workloads, such as an application proxy that does not store any data.
- StatefulSets are required for ongoing stateful workloads, such as a database that stores its data to a disk for access by other application services.
- DaemonSets are required for ongoing workloads that are local to a specific node, such as a garbage collection workload that periodically deletes cache files in a directory of a specific node.
- Jobs are required for workloads that run to completion, such as a workload that tests connectivity to an upstream application and then exits.
Nomad
Workloads in Nomad are defined in job specification (jobspec) files using the HashiCorp Configuration Language (HCL). A jobspec contains the declarative configuration for each component of a workload, including:
- Tasks that define container details
- Network settings to expose application ports
- Services that make applications available on a port
- Metadata about the job
An example three-tiered application with frontend, backend, and database is defined in one jobspec file. To logically separate the different tiers in the jobspec, use the group block. When you submit the jobspec to a Nomad server, Nomad allocates the declared components to a compatible node.
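A skeleton of such a jobspec might look like the following. The job, group, and image names are illustrative, not a prescribed layout:

```hcl
job "three-tier" {
  datacenters = ["dc1"]

  # Each tier is a separate group, scheduled independently.
  group "frontend" {
    count = 2

    task "frontend" {
      driver = "docker"
      config {
        image = "example/frontend:1.0"
      }
    }
  }

  group "backend" {
    task "backend" {
      driver = "docker"
      config {
        image = "example/backend:1.0"
      }
    }
  }

  group "database" {
    task "database" {
      driver = "docker"
      config {
        image = "postgres:16"
      }
    }
  }
}
```

Submitting this single file with `nomad job run` schedules all three tiers, whereas the equivalent Kubernetes application spans multiple specification files.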
Specification comparison
This chart shows the mapping between workload terminology used by Kubernetes and Nomad.
| Kubernetes | Nomad |
|---|---|
| Deployment | Service Job |
| StatefulSet | Service Job |
| DaemonSet | System Job |
| Job | Batch or System batch job |
| CronJob | Batch with periodic block |
| Pod | Task |
| Init containers | Prestart tasks |
| Sidecar containers | Sidecar tasks |
| Service | service block definition |
| Ingress | None. Use external load balancers instead. |
| Volumes | Host volumes or CSI volumes |
| ConfigMaps | Variables |
| Secrets | secret block (uses Variables) |
| Liveness, readiness, startup probes | check block |
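As an example of one of these mappings, a Kubernetes CronJob corresponds to a Nomad batch job with a `periodic` block. This is a minimal sketch with illustrative names:

```hcl
job "nightly-report" {
  type = "batch"

  periodic {
    cron             = "0 2 * * *"   # run daily at 02:00
    prohibit_overlap = true          # skip a run if the previous one is still active
  }

  group "report" {
    task "report" {
      driver = "docker"
      config {
        image = "example/report:1.0"
      }
    }
  }
}
```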
Workload scheduling
Scheduling is the process that evaluates the requirements of a workload and the state of the cluster to find an appropriate worker or client node to place the workload.
Kubernetes
The kube-scheduler watches for newly created Pods that are not assigned to a node. If a node meets the Pod's resource, affinity or anti-affinity, and data locality requirements, it is marked feasible. The process filters out nodes that do not meet the requirements, and then scores the remaining nodes to determine a node assignment. If multiple nodes have the same score, then one is randomly selected. Finally, a binding is created by notifying the API server when a node is selected.
Nomad
Nomad servers run scheduler workers that process evaluations from the leader server, which runs the evaluation broker. These scheduler workers then generate an allocation plan that includes a set of allocations to create, update, and evict. The scheduling process in Nomad has four primary components:
- Node: The Nomad client that runs the workload.
- Job: A declarative description of the tasks to run, bound by constraints and resource requirements.
- Allocation: A mapping of tasks to a client node that includes a declaration that a set of tasks should run on a particular node.
- Evaluation: The unit of scheduling work, created anytime the external state, either desired or emergent, changes. The scheduler processes an evaluation to produce an allocation plan.
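Constraints and resource requirements in the jobspec feed directly into this placement decision. The following sketch (names and values are illustrative) restricts a job to Linux clients and declares the resources the scheduler reserves on the chosen node:

```hcl
job "constrained" {
  datacenters = ["dc1"]

  # Only place this job on Linux client nodes,
  # using an attribute from client fingerprinting.
  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "app" {
    task "app" {
      driver = "docker"
      config {
        image = "example/app:1.0"
      }

      # Resource requirements used when ranking feasible nodes.
      resources {
        cpu    = 500   # MHz
        memory = 256   # MB
      }
    }
  }
}
```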
Next steps
Read the Turn a Kubernetes manifest into a Nomad job specification guide.
Review these blogs for in-depth comparisons between Nomad and Kubernetes: