Metrics Reference
The Nomad agent collects various runtime metrics about the performance of different libraries and subsystems. These metrics are aggregated on a ten-second interval and are retained for one minute.
This data can be accessed via an HTTP endpoint or by sending a signal to the
Nomad process. Via HTTP, the data is available at /metrics. See
Metrics for more information.
To view this data by sending a signal to the Nomad process: on Unix the
signal is USR1, while on Windows it is BREAK. Once Nomad receives the signal,
it will dump the current telemetry information to the agent's stderr.
This telemetry information can be used for debugging or otherwise getting a better view of what Nomad is doing.
Telemetry information can be streamed to both statsite and statsd by providing the appropriate configuration options.
To configure the telemetry output please see the agent configuration.
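As a sketch, a telemetry block that streams metrics to both sinks might look like the following (the addresses are placeholder values; refer to the agent configuration for the full set of options):

```hcl
telemetry {
  # Stream aggregated metrics to statsite and/or statsd.
  statsite_address = "statsite.example.com:8125" # placeholder address
  statsd_address   = "statsd.example.com:8125"   # placeholder address
}
```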
Below is sample output of a telemetry dump:
```
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.broker.total_pending': 0.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.plan.queue_depth': 0.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.malloc_count': 7568.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.total_gc_runs': 8.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.broker.total_ready': 0.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.num_goroutines': 56.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.sys_bytes': 3999992.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.heap_objects': 4135.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.heartbeat.active': 1.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.broker.total_unacked': 0.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.nomad.broker.total_waiting': 0.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.alloc_bytes': 634056.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.free_count': 3433.000
[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.total_gc_pause_ns': 6572135.000
[2015-09-17 16:59:40 -0700 PDT][C] 'nomad.memberlist.msg.alive': Count: 1 Sum: 1.000
[2015-09-17 16:59:40 -0700 PDT][C] 'nomad.serf.member.join': Count: 1 Sum: 1.000
[2015-09-17 16:59:40 -0700 PDT][C] 'nomad.raft.barrier': Count: 1 Sum: 1.000
[2015-09-17 16:59:40 -0700 PDT][C] 'nomad.raft.apply': Count: 1 Sum: 1.000
[2015-09-17 16:59:40 -0700 PDT][C] 'nomad.nomad.rpc.query': Count: 2 Sum: 2.000
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.serf.queue.Query': Count: 6 Sum: 0.000
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.fsm.register_node': Count: 1 Sum: 1.296
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.serf.queue.Intent': Count: 6 Sum: 0.000
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.runtime.gc_pause_ns': Count: 8 Min: 126492.000 Mean: 821516.875 Max: 3126670.000 Stddev: 1139250.294 Sum: 6572135.000
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.raft.leader.dispatchLog': Count: 3 Min: 0.007 Mean: 0.018 Max: 0.039 Stddev: 0.018 Sum: 0.054
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.leader.reconcileMember': Count: 1 Sum: 0.007
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.leader.reconcile': Count: 1 Sum: 0.025
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.raft.fsm.apply': Count: 1 Sum: 1.306
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.client.get_allocs': Count: 1 Sum: 0.110
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.worker.dequeue_eval': Count: 29 Min: 0.003 Mean: 363.426 Max: 503.377 Stddev: 228.126 Sum: 10539.354
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.serf.queue.Event': Count: 6 Sum: 0.000
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.raft.commitTime': Count: 3 Min: 0.013 Mean: 0.037 Max: 0.079 Stddev: 0.037 Sum: 0.110
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.leader.barrier': Count: 1 Sum: 0.071
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.client.register': Count: 1 Sum: 1.626
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.nomad.eval.dequeue': Count: 21 Min: 500.610 Mean: 501.753 Max: 503.361 Stddev: 1.030 Sum: 10536.813
[2015-09-17 16:59:40 -0700 PDT][S] 'nomad.memberlist.gossip': Count: 12 Min: 0.009 Mean: 0.017 Max: 0.025 Stddev: 0.005 Sum: 0.204
```
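For illustration, the gauge ([G]) lines in a dump like the one above can be parsed into structured records. This is a minimal sketch assuming the `[timestamp][type] 'name': value` layout shown in the sample; it is not part of Nomad itself.

```python
import re

# Matches gauge lines such as:
# [2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.num_goroutines': 56.000
GAUGE_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\[(?P<kind>G)\] '(?P<name>[^']+)': (?P<value>[\d.]+)"
)

def parse_gauge(line: str) -> dict:
    """Parse a single gauge line from a telemetry dump into a record."""
    m = GAUGE_RE.match(line)
    if m is None:
        raise ValueError(f"unrecognized gauge line: {line!r}")
    return {
        "timestamp": m.group("ts"),
        "name": m.group("name"),
        "value": float(m.group("value")),
    }

record = parse_gauge(
    "[2015-09-17 16:59:40 -0700 PDT][G] 'nomad.runtime.num_goroutines': 56.000"
)
```

Counter ([C]) and sample ([S]) lines carry aggregate fields (Count, Sum, Min, Mean, Max, Stddev) and would need a separate pattern.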
Metric Types
| Type | Description | Quantiles | 
|---|---|---|
| Gauge | Gauges report an absolute number at the end of the aggregation interval | false | 
| Counter | Counters are incremented and flushed at the end of the aggregation interval, then reset to zero | true | 
| Timer | Timers measure the time to complete a task and include quantiles, means, standard deviation, etc. per interval | true | 
Tagged Metrics
Nomad emits metrics in a tagged format. Each metric can support more than one tag, making it possible to match metrics on dimensions such as a particular datacenter and return all datapoints carrying that tag. Nomad supports labels for namespaces as well.
Key Metrics
The metrics in the table below are the most important metrics for monitoring the overall health of a Nomad cluster.
When telemetry is being streamed to statsite or statsd, interval in the
table below refers to their flush interval. Otherwise, the interval can
be assumed to be 10 seconds when retrieving metrics via the signal
described above.
| Metrics | Description | Unit | Type | 
|---|---|---|---|
| nomad.runtime.alloc_bytes | Memory utilization | # of bytes | Gauge | 
| nomad.runtime.heap_objects | Number of objects on the heap. General memory pressure indicator | # of heap objects | Gauge | 
| nomad.runtime.num_goroutines | Number of goroutines and general load pressure indicator | # of goroutines | Gauge | 
| nomad.nomad.broker.total_pending | Evaluations that are pending until an existing evaluation for the same job completes | # of evaluations | Gauge | 
| nomad.nomad.broker.total_ready | Number of evaluations ready to be processed | # of evaluations | Gauge | 
| nomad.nomad.broker.total_unacked | Evaluations dispatched for processing but incomplete | # of evaluations | Gauge | 
| nomad.nomad.heartbeat.active | Number of active heartbeat timers. Each timer represents a Nomad Client connection | # of heartbeat timers | Gauge | 
| nomad.nomad.heartbeat.invalidate | The length of time it takes to invalidate a Nomad Client due to failed heartbeats | ms / Heartbeat Invalidation | Timer | 
| nomad.nomad.plan.evaluate | Time to validate a scheduler Plan. Higher values cause lower scheduling throughput. Similar to nomad.nomad.plan.submit but does not include RPC time or time in the Plan Queue | ms / Plan Evaluation | Timer | 
| nomad.nomad.plan.node_rejected | Number of times a node has had a plan rejected. A node with a high rate of rejections may have an underlying issue causing it to be unschedulable. Refer to this link for more information | # of rejected plans | Counter | 
| nomad.nomad.plan.queue_depth | Number of scheduler Plans waiting to be evaluated | # of plans | Gauge | 
| nomad.nomad.plan.submit | Time to submit a scheduler Plan. Higher values cause lower scheduling throughput | ms / Plan Submit | Timer | 
| nomad.nomad.rpc.query | Number of RPC queries | RPC Queries / interval | Counter | 
| nomad.nomad.rpc.request_error | Number of RPC requests being handled that result in an error | RPC Errors / interval | Counter | 
| nomad.nomad.rpc.request | Number of RPC requests being handled | RPC Requests / interval | Counter | 
| nomad.nomad.vault.token_last_renewal | Time since last successful Vault token renewal | Milliseconds | Gauge | 
| nomad.nomad.vault.token_next_renewal | Time until next Vault token renewal attempt | Milliseconds | Gauge | 
| nomad.nomad.worker.invoke_scheduler.<type> | Time to run the scheduler of the given type | ms / Scheduler Run | Timer | 
| nomad.nomad.worker.wait_for_index | Time waiting for Raft log replication from leader. High delays result in lower scheduling throughput | ms / Raft Index Wait | Timer | 
| nomad.raft.apply | Number of Raft transactions | Raft transactions / interval | Counter | 
| nomad.raft.leader.lastContact | Time since last contact to leader. General indicator of Raft latency | ms / Leader Contact | Timer | 
| nomad.raft.replication.appendEntries | Raft transaction commit time | ms / Raft Log Append | Timer | 
| nomad.license.expiration_time_epoch | Time as epoch (seconds since Jan 1 1970) at which license will expire | Seconds | Gauge | 
Client Metrics
The Nomad client emits metrics related to the resource usage of the allocations
and tasks running on it, as well as the node itself. Operators must explicitly
turn on publishing host and allocation metrics by setting
publish_allocation_metrics and publish_node_metrics to true.
By default the collection interval is 1 second, but it can be changed by
setting the collection_interval key in the telemetry
configuration block.
Please see the agent configuration page for more details.
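As a sketch, enabling client metrics with a custom collection interval might look like this (the 5s value is only an example):

```hcl
telemetry {
  # Explicitly enable host and allocation metrics (off by default).
  publish_allocation_metrics = true
  publish_node_metrics       = true

  # Override the default 1s collection interval (example value).
  collection_interval = "5s"
}
```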
Additional labels are emitted for parameterized and
periodic jobs. Nomad
emits the parent job ID as a new label, parent_id. The labels dispatch_id
and periodic_id are also emitted, containing the ID of the specific invocation of the
parameterized or periodic job respectively. For example, a dispatch job with the ID
myjob/dispatch-1312323423423 will have the following labels:
| Label | Value | 
|---|---|
| job | myjob/dispatch-1312323423423 | 
| parent_id | myjob | 
| dispatch_id | 1312323423423 | 
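As an illustrative sketch (not Nomad's actual implementation), the parent_id and dispatch_id label values in the table above can be derived by splitting the job ID on the dispatch marker:

```python
def dispatch_labels(job_id: str) -> dict:
    """Derive parent_id and dispatch_id labels from a dispatch job ID
    of the form '<parent>/dispatch-<id>'. Illustrative only."""
    parent_id, _, suffix = job_id.partition("/dispatch-")
    if not suffix:
        raise ValueError(f"not a dispatch job ID: {job_id!r}")
    return {"job": job_id, "parent_id": parent_id, "dispatch_id": suffix}

labels = dispatch_labels("myjob/dispatch-1312323423423")
```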
Host Metrics
Nomad emits tagged host metrics in the format below:
| Metric | Description | Unit | Type | Labels | 
|---|---|---|---|---|
| nomad.client.allocated.cpu | Total amount of CPU shares the scheduler has allocated to tasks | Mhz | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocated.memory | Total amount of memory the scheduler has allocated to tasks | Megabytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocated.disk | Total amount of disk space the scheduler has allocated to tasks | Megabytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.blocked | Number of allocations waiting for previous versions to exit | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.migrating | Number of allocations migrating data from previous versions (see sticky) | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.pending | Number of allocations pending (received by the client but not yet running) | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.running | Number of allocations running | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.start | Number of allocations starting | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocations.terminal | Number of allocations terminal | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.allocs.oom_killed | Number of allocations OOM killed | Integer | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.idle | CPU utilization in idle state | Percentage | Gauge | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.system | CPU utilization in system space | Percentage | Gauge | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.total_percent | Total CPU utilization in percentage | Percentage | Gauge | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.total_ticks | Total CPU utilization in ticks | Integer | Gauge | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.total_ticks_count | Total CPU utilization in ticks since startup | Integer | Counter | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.cpu.user | CPU utilization in user space | Percentage | Gauge | cpu, datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.disk.available | Amount of space which is available | Bytes | Gauge | datacenter, disk, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.disk.inodes_percent | Disk space consumed by the inodes | Percentage | Gauge | datacenter, disk, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.disk.size | Total size of the device | Bytes | Gauge | datacenter, disk, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.disk.used_percent | Percentage of disk space used | Percentage | Gauge | datacenter, disk, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.disk.used | Amount of space which has been used | Bytes | Gauge | datacenter, disk, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.memory.available | Total amount of memory available to processes which includes free and cached memory | Bytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.memory.free | Amount of memory which is free | Bytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.memory.total | Total amount of physical memory on the node | Bytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.host.memory.used | Amount of memory used by processes | Bytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.unallocated.cpu | Total amount of CPU shares free for the scheduler to allocate to tasks | Mhz | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.unallocated.disk | Total amount of disk space free for the scheduler to allocate to tasks | Megabytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.unallocated.memory | Total amount of memory free for the scheduler to allocate to tasks | Megabytes | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
| nomad.client.uptime | Uptime of the host running the Nomad client | Seconds | Gauge | datacenter, host, node_class, node_id, node_pool, node_scheduling_eligibility, node_status | 
Allocation Metrics
The following metrics are emitted for each allocation if allocation metrics are enabled. Note that the allocation metrics available may depend on factors such as the task driver and control group (cgroup) version in use.
| Metric | Description | Unit | Type | Labels | 
|---|---|---|---|---|
| nomad.client.allocs.complete | Number of complete allocations | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.allocated | Total CPU resources allocated by the task across all cores | MHz | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.system | Total CPU resources consumed by the task in system space | Percentage | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.throttled_periods | Total number of CPU periods that the task was throttled | Nanoseconds | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.throttled_time | Total time that the task was throttled | Nanoseconds | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.total_percent | Total CPU resources consumed by the task across all cores | Percentage | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.total_ticks | CPU ticks consumed by the process in the last collection interval | Integer | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.total_ticks_count | Total CPU ticks consumed by the task since startup | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.cpu.user | Total CPU resources consumed by the task in the user space | Percentage | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.failed | Number of failed allocations | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.allocated | Amount of memory allocated by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.cache | Amount of memory cached by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.kernel_max_usage | Maximum amount of memory ever used by the kernel for this task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.kernel_usage | Amount of memory used by the kernel for this task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.max_allocated | Maximum amount of oversubscription memory allocated by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.max_usage | Maximum amount of memory ever used by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.rss | Amount of RSS memory consumed by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.swap | Amount of memory swapped by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.memory.usage | Total amount of memory used by the task | Bytes | Gauge | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.oom_killed | Number of oom-killed allocations | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.restart | Number of task restarts | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
| nomad.client.allocs.running | Number of running allocations | Integer | Counter | alloc_id, host, job, namespace, task, task_group | 
Job Summary Metrics
Job summary metrics are emitted by the Nomad leader server.
| Metric | Description | Unit | Type | Labels | 
|---|---|---|---|---|
| nomad.nomad.job_summary.complete | Number of complete allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.failed | Number of failed allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.lost | Number of lost allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.unknown | Number of unknown allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.queued | Number of queued allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.running | Number of running allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
| nomad.nomad.job_summary.starting | Number of starting allocations for a job | Integer | Gauge | host, job, namespace, task_group | 
Job Status Metrics
Job status metrics are emitted by the Nomad leader server.
| Metric | Description | Unit | Type | Labels | 
|---|---|---|---|---|
| nomad.nomad.job_status.dead | Number of dead jobs | Integer | Gauge | host | 
| nomad.nomad.job_status.pending | Number of pending jobs | Integer | Gauge | host | 
| nomad.nomad.job_status.running | Number of running jobs | Integer | Gauge | host | 
Server Metrics
The following table includes metrics for overall cluster health in addition to those listed in Key Metrics above.
| Metric | Description | Unit | Type | Labels | 
|---|---|---|---|---|
| nomad.memberlist.gossip | Time elapsed to broadcast gossip messages | Milliseconds | Timer | host | 
| nomad.nomad.acl.bootstrap | Time elapsed for ACL.Bootstrap RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.delete_policies | Time elapsed for ACL.DeletePolicies RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.delete_tokens | Time elapsed for ACL.DeleteTokens RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.get_policies | Time elapsed for ACL.GetPolicies RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.get_policy | Time elapsed for ACL.GetPolicy RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.get_token | Time elapsed for ACL.GetToken RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.get_tokens | Time elapsed for ACL.GetTokens RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.list_policies | Time elapsed for ACL.ListPolicies RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.list_tokens | Time elapsed for ACL.ListTokens RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.resolve_token | Time elapsed for ACL.ResolveToken RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.upsert_policies | Time elapsed for ACL.UpsertPolicies RPC call | Milliseconds | Timer | host | 
| nomad.nomad.acl.upsert_tokens | Time elapsed for ACL.UpsertTokens RPC call | Milliseconds | Timer | host | 
| nomad.nomad.alloc.exec | Time elapsed to establish alloc exec | Milliseconds | Timer | host | 
| nomad.nomad.alloc.get_alloc | Time elapsed for Alloc.GetAlloc RPC call | Milliseconds | Timer | host | 
| nomad.nomad.alloc.get_allocs | Time elapsed for Alloc.GetAllocs RPC call | Milliseconds | Timer | host | 
| nomad.nomad.alloc.list | Time elapsed for Alloc.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.alloc.stop | Time elapsed for Alloc.Stop RPC call | Milliseconds | Timer | host | 
| nomad.nomad.alloc.update_desired_transition | Time elapsed for Alloc.UpdateDesiredTransition RPC call | Milliseconds | Timer | host | 
| nomad.nomad.blocked_evals.cpu | Amount of CPU shares requested by blocked evals | Integer | Gauge | datacenter, host, node_class | 
| nomad.nomad.blocked_evals.memory | Amount of memory requested by blocked evals | Integer | Gauge | datacenter, host, node_class | 
| nomad.nomad.blocked_evals.job.cpu | Amount of CPU shares requested by blocked evals of a job | Integer | Gauge | host, job, namespace | 
| nomad.nomad.blocked_evals.job.memory | Amount of memory requested by blocked evals of a job | Integer | Gauge | host, job, namespace | 
| nomad.nomad.blocked_evals.total_blocked | Count of evals in the blocked state for any reason (cluster resource exhaustion or quota limits) | Integer | Gauge | host | 
| nomad.nomad.blocked_evals.total_escaped | Count of evals that have escaped computed node classes. This indicates a scheduler optimization was skipped and is not usually a source of concern. | Integer | Gauge | host | 
| nomad.nomad.blocked_evals.total_quota_limit | Count of blocked evals due to quota limits (the resources for these jobs are not counted in other blocked_evals metrics, except for total_blocked) | Integer | Gauge | host | 
| nomad.nomad.broker.batch_ready | Count of batch evals ready to be scheduled | Integer | Gauge | host | 
| nomad.nomad.broker.batch_unacked | Count of unacknowledged batch evals | Integer | Gauge | host | 
| nomad.nomad.broker.eval_waiting | Time elapsed with evaluation waiting to be enqueued | Milliseconds | Gauge | eval_id, job, namespace | 
| nomad.nomad.broker.service_ready | Count of service evals ready to be scheduled | Integer | Gauge | host | 
| nomad.nomad.broker.service_unacked | Count of unacknowledged service evals | Integer | Gauge | host | 
| nomad.nomad.broker.system_ready | Count of system evals ready to be scheduled | Integer | Gauge | host | 
| nomad.nomad.broker.system_unacked | Count of unacknowledged system evals | Integer | Gauge | host | 
| nomad.nomad.broker.total_ready | Count of evals in the ready state | Integer | Gauge | host | 
| nomad.nomad.broker.total_waiting | Count of evals waiting to be enqueued | Integer | Gauge | host | 
| nomad.nomad.client.batch_deregister | Time elapsed for Node.BatchDeregister RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.deregister | Time elapsed for Node.Deregister RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.derive_si_token | Time elapsed for Node.DeriveSIToken RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.derive_vault_token | Time elapsed for Node.DeriveVaultToken RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.emit_events | Time elapsed for Node.EmitEvents RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.evaluate | Time elapsed for Node.Evaluate RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.get_allocs | Time elapsed for Node.GetAllocs RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.get_client_allocs | Time elapsed for Node.GetClientAllocs RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.get_node | Time elapsed for Node.GetNode RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.list | Time elapsed for Node.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.register | Time elapsed for Node.Register RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.stats | Time elapsed for Client.Stats RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.update_alloc | Time elapsed for Node.UpdateAlloc RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.update_drain | Time elapsed for Node.UpdateDrain RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.update_eligibility | Time elapsed for Node.UpdateEligibility RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client.update_status | Time elapsed for Node.UpdateStatus RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_allocations.garbage_collect_all | Time elapsed for ClientAllocations.GarbageCollectAll RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_allocations.garbage_collect | Time elapsed for ClientAllocations.GarbageCollect RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_allocations.restart | Time elapsed for ClientAllocations.Restart RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_allocations.signal | Time elapsed for ClientAllocations.Signal RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_allocations.stats | Time elapsed for ClientAllocations.Stats RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_csi_controller.attach_volume | Time elapsed for Controller.AttachVolume RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_csi_controller.detach_volume | Time elapsed for Controller.DetachVolume RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_csi_controller.validate_volume | Time elapsed for Controller.ValidateVolume RPC call | Milliseconds | Timer | host | 
| nomad.nomad.client_csi_node.detach_volume | Time elapsed for Node.DetachVolume RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.allocations | Time elapsed for Deployment.Allocations RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.cancel | Time elapsed for Deployment.Cancel RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.fail | Time elapsed for Deployment.Fail RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.get_deployment | Time elapsed for Deployment.GetDeployment RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.list | Time elapsed for Deployment.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.pause | Time elapsed for Deployment.Pause RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.promote | Time elapsed for Deployment.Promote RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.reap | Time elapsed for Deployment.Reap RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.run | Time elapsed for Deployment.Run RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.set_alloc_health | Time elapsed for Deployment.SetAllocHealth RPC call | Milliseconds | Timer | host | 
| nomad.nomad.deployment.unblock | Time elapsed for Deployment.Unblock RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.ack | Time elapsed for Eval.Ack RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.allocations | Time elapsed for Eval.Allocations RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.create | Time elapsed for Eval.Create RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.dequeue | Time elapsed for Eval.Dequeue RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.get_eval | Time elapsed for Eval.GetEval RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.list | Time elapsed for Eval.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.nack | Time elapsed for Eval.Nack RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.reap | Time elapsed for Eval.Reap RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.reblock | Time elapsed for Eval.Reblock RPC call | Milliseconds | Timer | host | 
| nomad.nomad.eval.update | Time elapsed for Eval.Update RPC call | Milliseconds | Timer | host | 
| nomad.nomad.file_system.list | Time elapsed for FileSystem.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.file_system.logs | Time elapsed to establish FileSystem.Logs RPC | Milliseconds | Timer | host | 
| nomad.nomad.file_system.stat | Time elapsed for FileSystem.Stat RPC call | Milliseconds | Timer | host | 
| nomad.nomad.file_system.stream | Time elapsed to establish FileSystem.Stream RPC | Milliseconds | Timer | host | 
| nomad.nomad.fsm.alloc_client_update | Time elapsed to apply AllocClientUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.alloc_update_desired_transition | Time elapsed to apply AllocUpdateDesiredTransition raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_acl_policy_delete | Time elapsed to apply ApplyACLPolicyDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_acl_policy_upsert | Time elapsed to apply ApplyACLPolicyUpsert raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_acl_token_bootstrap | Time elapsed to apply ApplyACLTokenBootstrap raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_acl_token_delete | Time elapsed to apply ApplyACLTokenDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_acl_token_upsert | Time elapsed to apply ApplyACLTokenUpsert raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_csi_plugin_delete | Time elapsed to apply ApplyCSIPluginDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_csi_volume_batch_claim | Time elapsed to apply ApplyCSIVolumeBatchClaim raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_csi_volume_claim | Time elapsed to apply ApplyCSIVolumeClaim raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_csi_volume_deregister | Time elapsed to apply ApplyCSIVolumeDeregister raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_csi_volume_register | Time elapsed to apply ApplyCSIVolumeRegister raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_deployment_alloc_health | Time elapsed to apply ApplyDeploymentAllocHealth raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_deployment_delete | Time elapsed to apply ApplyDeploymentDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_deployment_promotion | Time elapsed to apply ApplyDeploymentPromotion raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_deployment_status_update | Time elapsed to apply ApplyDeploymentStatusUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_job_stability | Time elapsed to apply ApplyJobStability raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_namespace_delete | Time elapsed to apply ApplyNamespaceDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_namespace_upsert | Time elapsed to apply ApplyNamespaceUpsert raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_node_pool_upsert | Time elapsed to apply ApplyNodePoolUpsert raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_node_pool_delete | Time elapsed to apply ApplyNodePoolDelete raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_plan_results | Time elapsed to apply ApplyPlanResults raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.apply_scheduler_config | Time elapsed to apply ApplySchedulerConfig raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.autopilot | Time elapsed to apply Autopilot raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.batch_deregister_job | Time elapsed to apply BatchDeregisterJob raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.batch_deregister_node | Time elapsed to apply BatchDeregisterNode raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.batch_node_drain_update | Time elapsed to apply BatchNodeDrainUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.cluster_meta | Time elapsed to apply ClusterMeta raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.delete_eval | Time elapsed to apply DeleteEval raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.deregister_job | Time elapsed to apply DeregisterJob raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.deregister_node | Time elapsed to apply DeregisterNode raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.deregister_si_accessor | Time elapsed to apply DeregisterSITokenAccessor raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.deregister_vault_accessor | Time elapsed to apply DeregisterVaultAccessor raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.node_drain_update | Time elapsed to apply NodeDrainUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.node_eligibility_update | Time elapsed to apply NodeEligibilityUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.node_status_update | Time elapsed to apply NodeStatusUpdate raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.persist | Time elapsed to apply Persist raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.register_job | Time elapsed to apply RegisterJob raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.register_node | Time elapsed to apply RegisterNode raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.update_eval | Time elapsed to apply UpdateEval raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.upsert_node_events | Time elapsed to apply UpsertNodeEvents raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.upsert_scaling_event | Time elapsed to apply UpsertScalingEvent raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.upsert_si_accessor | Time elapsed to apply UpsertSITokenAccessors raft entry | Milliseconds | Timer | host | 
| nomad.nomad.fsm.upsert_vault_accessor | Time elapsed to apply UpsertVaultAccessor raft entry | Milliseconds | Timer | host | 
| nomad.nomad.job.allocations | Time elapsed for Job.Allocations RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.batch_deregister | Time elapsed for Job.BatchDeregister RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.deployments | Time elapsed for Job.Deployments RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.deregister | Time elapsed for Job.Deregister RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.dispatch | Time elapsed for Job.Dispatch RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.evaluate | Time elapsed for Job.Evaluate RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.evaluations | Time elapsed for Job.Evaluations RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.get_job_versions | Time elapsed for Job.GetJobVersions RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.get_job | Time elapsed for Job.GetJob RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.latest_deployment | Time elapsed for Job.LatestDeployment RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.list | Time elapsed for Job.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.plan | Time elapsed for Job.Plan RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.register | Time elapsed for Job.Register RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.revert | Time elapsed for Job.Revert RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.scale_status | Time elapsed for Job.ScaleStatus RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.scale | Time elapsed for Job.Scale RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.stable | Time elapsed for Job.Stable RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job.validate | Time elapsed for Job.Validate RPC call | Milliseconds | Timer | host | 
| nomad.nomad.job_summary.get_job_summary | Time elapsed for Job.Summary RPC call | Milliseconds | Timer | host | 
| nomad.nomad.leader.barrier | Time elapsed to establish a raft barrier during leader transition | Milliseconds | Timer | host | 
| nomad.nomad.leader.reconcileMember | Time elapsed to reconcile a serf peer with the state store | Milliseconds | Timer | host | 
| nomad.nomad.leader.reconcile | Time elapsed to reconcile all serf peers with the state store | Milliseconds | Timer | host | 
| nomad.nomad.namespace.delete_namespaces | Time elapsed for Namespace.DeleteNamespaces | Milliseconds | Timer | host | 
| nomad.nomad.namespace.get_namespace | Time elapsed for Namespace.GetNamespace | Milliseconds | Timer | host | 
| nomad.nomad.namespace.get_namespaces | Time elapsed for Namespace.GetNamespaces | Milliseconds | Timer | host | 
| nomad.nomad.namespace.list_namespace | Time elapsed for Namespace.ListNamespaces | Milliseconds | Timer | host | 
| nomad.nomad.namespace.upsert_namespaces | Time elapsed for Namespace.UpsertNamespaces | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.list | Time elapsed for NodePool.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.list_jobs | Time elapsed for NodePool.ListJobs RPC call | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.list_nodes | Time elapsed for NodePool.ListNodes RPC call | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.get_node_pool | Time elapsed for NodePool.GetNodePool RPC call | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.upsert_node_pools | Time elapsed for NodePool.UpsertNodePools RPC call | Milliseconds | Timer | host | 
| nomad.nomad.node_pool.delete_node_pools | Time elapsed for NodePool.DeleteNodePools RPC call | Milliseconds | Timer | host | 
| nomad.nomad.periodic.force | Time elapsed for Periodic.Force RPC call | Milliseconds | Timer | host | 
| nomad.nomad.plan.apply | Time elapsed to apply a plan | Milliseconds | Timer | host | 
| nomad.nomad.plan.evaluate | Time elapsed to evaluate a plan | Milliseconds | Timer | host | 
| nomad.nomad.plan.node_rejected | Number of times a node has had a plan rejected | Integer | Counter | host, node_id | 
| nomad.nomad.plan.rejection_tracker.node_score | Number of times a node has had a plan rejected within the tracker window | Integer | Gauge | host, node_id | 
| nomad.nomad.plan.queue_depth | Count of evals in the plan queue | Integer | Gauge | host | 
| nomad.nomad.plan.submit | Time elapsed for Plan.Submit RPC call | Milliseconds | Timer | host | 
| nomad.nomad.plan.wait_for_index | Time elapsed while the planner waits for the raft index of the plan to be processed | Milliseconds | Timer | host | 
| nomad.nomad.plugin.delete | Time elapsed for CSIPlugin.Delete RPC call | Milliseconds | Timer | host | 
| nomad.nomad.plugin.get | Time elapsed for CSIPlugin.Get RPC call | Milliseconds | Timer | host | 
| nomad.nomad.plugin.list | Time elapsed for CSIPlugin.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.scaling.get_policy | Time elapsed for Scaling.GetPolicy RPC call | Milliseconds | Timer | host | 
| nomad.nomad.scaling.list_policies | Time elapsed for Scaling.ListPolicies RPC call | Milliseconds | Timer | host | 
| nomad.nomad.search.prefix_search | Time elapsed for Search.PrefixSearch RPC call | Milliseconds | Timer | host | 
| nomad.nomad.vault.create_token | Time elapsed to create Vault token | Milliseconds | Timer | host | 
| nomad.nomad.vault.distributed_tokens_revoked | Count of revoked tokens | Integer | Gauge | host | 
| nomad.nomad.vault.lookup_token | Time elapsed to lookup Vault token | Milliseconds | Timer | host | 
| nomad.nomad.vault.renew_failed | Count of failed attempts to renew Vault token | Integer | Gauge | host | 
| nomad.nomad.vault.renew | Time elapsed to renew Vault token | Milliseconds | Timer | host | 
| nomad.nomad.vault.revoke_tokens | Time elapsed to revoke Vault tokens | Milliseconds | Timer | host | 
| nomad.nomad.vault.token_last_renewal | Time since last successful Vault token renewal | Milliseconds | Timer | host | 
| nomad.nomad.vault.token_next_renewal | Time until next Vault token renewal attempt | Milliseconds | Timer | host | 
| nomad.nomad.vault.token_ttl | Time to live for Vault token | Milliseconds | Timer | host | 
| nomad.nomad.vault.undistributed_tokens_abandoned | Count of abandoned tokens | Integer | Gauge | host | 
| nomad.nomad.volume.claim | Time elapsed for CSIVolume.Claim RPC call | Milliseconds | Timer | host | 
| nomad.nomad.volume.deregister | Time elapsed for CSIVolume.Deregister RPC call | Milliseconds | Timer | host | 
| nomad.nomad.volume.get | Time elapsed for CSIVolume.Get RPC call | Milliseconds | Timer | host | 
| nomad.nomad.volume.list | Time elapsed for CSIVolume.List RPC call | Milliseconds | Timer | host | 
| nomad.nomad.volume.register | Time elapsed for CSIVolume.Register RPC call | Milliseconds | Timer | host | 
| nomad.nomad.volume.unpublish | Time elapsed for CSIVolume.Unpublish RPC call | Milliseconds | Timer | host | 
| nomad.nomad.worker.create_eval | Time elapsed for worker to create an eval | Milliseconds | Timer | host | 
| nomad.nomad.worker.dequeue_eval | Time elapsed for worker to dequeue an eval | Milliseconds | Timer | host | 
| nomad.nomad.worker.invoke_scheduler.<type> | Time elapsed for worker to invoke the scheduler of type <type> | Milliseconds | Timer | host | 
| nomad.nomad.worker.send_ack | Time elapsed for worker to send acknowledgement | Milliseconds | Timer | host | 
| nomad.nomad.worker.submit_plan | Time elapsed for worker to submit plan | Milliseconds | Timer | host | 
| nomad.nomad.worker.update_eval | Time elapsed for worker to submit updated eval | Milliseconds | Timer | host | 
| nomad.nomad.worker.wait_for_index | Time elapsed while the worker waits for the raft index of the eval to be processed | Milliseconds | Timer | host | 
| nomad.raft.appliedIndex | Current index applied to FSM | Integer | Gauge | host | 
| nomad.raft.barrier | Count of blocking raft API calls | Integer | Counter | host | 
| nomad.raft.commitNumLogs | Count of logs enqueued | Integer | Gauge | host | 
| nomad.raft.commitTime | Time elapsed to commit writes | Milliseconds | Timer | host | 
| nomad.raft.fsm.apply | Time elapsed to apply write to FSM | Milliseconds | Timer | host | 
| nomad.raft.fsm.enqueue | Time elapsed to enqueue write to FSM | Milliseconds | Timer | host | 
| nomad.raft.lastIndex | Most recent index seen | Integer | Gauge | host | 
| nomad.raft.leader.dispatchLog | Time elapsed to write log, mark in flight, and start replication | Milliseconds | Timer | host | 
| nomad.raft.leader.dispatchNumLogs | Count of logs dispatched | Integer | Gauge | host | 
| nomad.raft.replication.appendEntries | Raft transaction commit time per raft log append | Milliseconds | Timer | |
| nomad.raft.state.candidate | Count of entering candidate state | Integer | Gauge | host | 
| nomad.raft.state.follower | Count of entering follower state | Integer | Gauge | host | 
| nomad.raft.state.leader | Count of entering leader state | Integer | Gauge | host | 
| nomad.raft.transition.heartbeat_timeout | Count of heartbeat timeouts that triggered an election | Integer | Gauge | host | 
| nomad.raft.transition.leader_lease_timeout | Count of times the leader stepped down after losing quorum | Integer | Gauge | host | 
| nomad.runtime.free_count | Count of objects freed from heap by go runtime GC | Integer | Gauge | host | 
| nomad.runtime.gc_pause_ns | Go runtime GC pause times | Nanoseconds | Summary | host | 
| nomad.runtime.sys_bytes | Go runtime GC metadata size | Bytes | Gauge | host | 
| nomad.runtime.total_gc_pause_ns | Total elapsed go runtime GC pause times | Nanoseconds | Gauge | host | 
| nomad.runtime.total_gc_runs | Count of go runtime GC runs | Integer | Gauge | host | 
| nomad.serf.queue.Event | Count of memberlist events received | Integer | Summary | host | 
| nomad.serf.queue.Intent | Count of memberlist changes | Integer | Summary | host | 
| nomad.serf.queue.Query | Count of memberlist queries | Integer | Summary | host | 
| nomad.scheduler.allocs.rescheduled.attempted | Count of attempts to reschedule an allocation | Integer | Counter | alloc_id, job, namespace, task_group | 
| nomad.scheduler.allocs.rescheduled.limit | Maximum number of attempts to reschedule an allocation | Integer | Counter | alloc_id, job, namespace, task_group | 
| nomad.scheduler.allocs.rescheduled.wait_until | Time that a rescheduled allocation will be delayed | Float | Gauge | alloc_id, job, namespace, task_group, follow_up_eval_id | 
| nomad.state.snapshotIndex | Current snapshot index | Integer | Gauge | host | 
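As a quick illustration, the gauges in the tables above can be read programmatically from the agent's metrics endpoint, which returns JSON in the go-metrics in-memory sink shape (a `Gauges` array of `Name`/`Value`/`Labels` objects, plus `Counters` and `Samples`). This is a minimal Python sketch, not an official client; the payload below is a hand-written example of that shape, and field names should be verified against your Nomad version:

```python
import json

# Example payload in the shape returned by the agent's /v1/metrics
# endpoint; the metric names and values here are illustrative only.
payload = json.loads("""
{
  "Timestamp": "2015-09-17 16:59:40 -0700 PDT",
  "Gauges": [
    {"Name": "nomad.nomad.plan.queue_depth", "Value": 0,
     "Labels": {"host": "server-1"}},
    {"Name": "nomad.raft.lastIndex", "Value": 4135,
     "Labels": {"host": "server-1"}}
  ],
  "Counters": [],
  "Samples": []
}
""")

def gauge(metrics, name):
    """Return the value of the first gauge matching name, or None."""
    for g in metrics.get("Gauges", []):
        if g["Name"] == name:
            return g["Value"]
    return None

print(gauge(payload, "nomad.nomad.plan.queue_depth"))  # 0
print(gauge(payload, "nomad.raft.lastIndex"))          # 4135
```

In a live setup the same helper would be applied to the parsed response of an HTTP GET against the agent's metrics endpoint instead of the inline sample.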
Raft BoltDB Metrics
Raft database metrics are emitted by the raft-boltdb library.
| Metric | Description | Unit | Type | 
|---|---|---|---|
| nomad.raft.boltdb.numFreePages | Number of free pages | Integer | Gauge | 
| nomad.raft.boltdb.numPendingPages | Number of pending pages | Integer | Gauge | 
| nomad.raft.boltdb.freePageBytes | Number of free page bytes | Integer | Gauge | 
| nomad.raft.boltdb.freelistBytes | Number of freelist bytes | Integer | Gauge | 
| nomad.raft.boltdb.totalReadTxn | Count of total read transactions | Integer | Counter | 
| nomad.raft.boltdb.openReadTxn | Number of current open read transactions | Integer | Gauge | 
| nomad.raft.boltdb.txstats.pageCount | Number of pages in use | Integer | Gauge | 
| nomad.raft.boltdb.txstats.pageAlloc | Number of page allocations | Integer | Gauge | 
| nomad.raft.boltdb.txstats.cursorCount | Count of total database cursors | Integer | Counter | 
| nomad.raft.boltdb.txstats.nodeCount | Count of total database nodes | Integer | Counter | 
| nomad.raft.boltdb.txstats.nodeDeref | Count of total database node dereferences | Integer | Counter | 
| nomad.raft.boltdb.txstats.rebalance | Count of total rebalance operations | Integer | Counter | 
| nomad.raft.boltdb.txstats.rebalanceTime | Sample of rebalance operation times | Nanoseconds | Summary | 
| nomad.raft.boltdb.txstats.split | Count of total split operations | Integer | Counter | 
| nomad.raft.boltdb.txstats.spill | Count of total spill operations | Integer | Counter | 
| nomad.raft.boltdb.txstats.spillTime | Sample of spill operation times | Nanoseconds | Summary | 
| nomad.raft.boltdb.txstats.write | Count of total write operations | Integer | Counter | 
| nomad.raft.boltdb.txstats.writeTime | Sample of write operation times | Nanoseconds | Summary | 
Agent Metrics
Agent metrics are emitted by all Nomad agents running in either client or server mode.
| Metric | Description | Unit | Type | 
|---|---|---|---|
| nomad.agent.http.exceeded | Count of HTTP connections exceeding concurrency limit | Integer | Counter |