operator raft
This command groups subcommands for operators to manage the Integrated Storage Raft backend.
Usage: vault operator raft <subcommand> [options] [args]
 This command groups subcommands for operators interacting with the Vault
 integrated Raft storage backend. Most users will not need to interact with these
 commands.
Subcommands:
    join           Joins a node to the Raft cluster
    list-peers     Returns the Raft peer set
    remove-peer    Removes a node from the Raft cluster
    snapshot       Restores and saves snapshots from the Raft cluster
join
This command is used to join a new node as a peer to the Raft cluster. In order to join, there must be at least one existing member of the cluster. If a Shamir seal is in use, the unseal keys must be supplied before or after the join process, depending on whether Raft is being used exclusively for HA.
If raft is used for storage, the node must be joined before unsealing and the
leader-api-addr argument must be provided. If raft is used for ha_storage,
the node must first be unsealed before joining and the leader-api-addr must
not be provided.
Usage: vault operator raft join [options] <leader-api-addr>
  Join the current node as a peer to the Raft cluster by providing the address
  of the Raft leader node.
      $ vault operator raft join "http://127.0.0.2:8200"
The join command also allows operators to specify cloud auto-join configuration
instead of a static IP address or hostname. When provided, Vault will attempt to
automatically discover and resolve potential leader addresses based on the provided
auto-join configuration.
Vault uses go-discover to support the auto-join functionality. Please see the go-discover README for details on the format.
By default, Vault will attempt to reach discovered peers using HTTPS and port 8200.
Operators may override these through the --auto-join-scheme and --auto-join-port
CLI flags respectively.
Usage: vault operator raft join [options] <auto-join-configuration>
  Join the current node as a peer to the Raft cluster by providing cloud auto-join
  metadata configuration.
    $ vault operator raft join "provider=aws region=eu-west-1 ..."
Parameters
The following flags are available for the operator raft join command.
- -leader-ca-cert (string: "") - CA cert to communicate with Raft leader.
- -leader-client-cert (string: "") - Client cert to authenticate to Raft leader.
- -leader-client-key (string: "") - Client key to authenticate to Raft leader.
- -non-voter (bool: false) (enterprise) - This flag makes the server not participate in the Raft quorum; it only receives the data replication stream. This can be used to add read scalability to a cluster in cases where a high volume of reads to servers is needed. The default is false. See retry_join_as_non_voter for the equivalent config option when using retry_join stanzas instead.
- -retry (bool: false) - Continuously retry joining the Raft cluster upon failures. The default is false.
Note: The -leader-ca-cert, -leader-client-cert, and -leader-client-key parameters expect the content of the certificate or key, not the path to the file.
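For example, assuming the CA certificate and client credentials are stored in local PEM files (the paths below are illustrative), shell command substitution can pass the file contents rather than the paths:
    $ vault operator raft join \
        -leader-ca-cert="$(cat /path/to/ca.pem)" \
        -leader-client-cert="$(cat /path/to/client.pem)" \
        -leader-client-key="$(cat /path/to/client-key.pem)" \
        "https://127.0.0.2:8200"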
list-peers
This command is used to list the full set of peers in the Raft cluster.
Usage: vault operator raft list-peers
  Provides the details of all the peers in the Raft cluster.
      $ vault operator raft list-peers
Example output
{
 ...
  "data": {
    "config": {
      "index": 62,
      "servers": [
        {
          "address": "127.0.0.2:8201",
          "leader": true,
          "node_id": "node1",
          "protocol_version": "3",
          "voter": true
        },
        {
          "address": "127.0.0.4:8201",
          "leader": false,
          "node_id": "node3",
          "protocol_version": "3",
          "voter": true
        }
      ]
    }
  }
}
Use the output of list-peers to ensure that your cluster is in an expected state.
If you've removed a server using remove-peer, the server should no longer be
listed in the list-peers output. If you've added a server using join or
through retry_join, check the list-peers output to see that it has been added
to the cluster and (if the node has not been added as a non-voter)
that it has been promoted to a voter.
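The same check can be scripted against the JSON form of the output shown above; this is a minimal sketch, assuming jq is available on the operator's machine:
    $ vault operator raft list-peers -format=json \
        | jq -r '.data.config.servers[] | select(.voter == false) | .node_id'
Any node IDs printed are peers that have not yet been promoted to voters.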
remove-peer
This command is used to remove a node from being a peer to the Raft cluster. In certain cases where a peer may be left behind in the Raft configuration even though the server is no longer present and known to the cluster, this command can be used to remove the failed server so that it no longer affects the Raft quorum.
Usage: vault operator raft remove-peer <server_id>
  Removes a node from the Raft cluster.
      $ vault operator raft remove-peer node1
Note
Once a node is removed, its Raft data must be deleted before it can be joined back into an existing cluster. This requires shutting down the Vault process, deleting the data, then restarting the Vault process on the removed node, as sketched below.
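A minimal sketch of that sequence on a systemd-managed node; the unit name and data path are assumptions that vary by installation (the Raft data lives in the raft subdirectory of the configured storage path):
    $ systemctl stop vault                # shut down the Vault process
    $ rm -rf /opt/vault/data/raft         # delete the node's Raft data (illustrative path)
    $ systemctl start vault               # restart the Vault process
    $ vault operator raft join "https://active-node.example.com:8200"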
snapshot
This command groups subcommands for operators interacting with the snapshot functionality of the integrated Raft storage backend.
Backup and restore data
All Vault editions support snapshot save and restore features for data backup and restoration. Vault Enterprise users benefit from automated snapshots to local or cloud storage, and individual secret recovery.
See the manage snapshots system administration guides for more information.
Usage: vault operator raft snapshot <subcommand> [options] [args]
  This command groups subcommands for operators interacting with the snapshot
  functionality of the integrated Raft storage backend. Here are a few examples of
  the Raft snapshot operator commands:
  Installs the provided snapshot, returning the cluster to the state defined in it:
      $ vault operator raft snapshot restore raft.snap
  Saves a snapshot of the current state of the Raft cluster into a file:
      $ vault operator raft snapshot save raft.snap
  Inspects a snapshot based on a file:
      $ vault operator raft snapshot inspect raft.snap
  Please see the individual subcommand help for detailed usage information.
Subcommands:
    inspect    Inspects raft snapshot
    load       Loads the provided local or cloud storage snapshot
    restore    Installs the provided snapshot, returning the cluster to the state defined in it
    save       Saves a snapshot of the current state of the Raft cluster into a file
    unload     Unloads the snapshot with the provided ID
snapshot inspect
Inspects a snapshot file taken from a Vault Raft cluster and prints a table showing the number of keys and the amount of space used.
Usage: vault operator raft snapshot inspect <snapshot_file>
For example:
$ vault operator raft snapshot inspect raft.snap
snapshot load
Enterprise
Appropriate Vault Enterprise license required
Load a new snapshot into the Vault cluster without overwriting the cluster with the snapshot's data.
Restrictions for loading snapshots
- The snapshot must be from the same cluster and have the same unseal keys as the current cluster.
- You can only load one snapshot into a cluster at a time.
- Vault automatically unloads snapshots 72 hours after they are loaded.
- If your cluster's active node changes, the loaded snapshot's status updates
to error and it cannot be used for any recovery operations. You must unload and reload the snapshot to use it again.
- You cannot load a snapshot if your cluster has mlock enabled in its Vault configuration.
Usage: vault operator raft snapshot load < SNAPSHOT_FILE | auto_snapshot_config=CONFIG_NAME url=URL >
  Loads the provided snapshot for reading or recovering data on supported paths.
  This command supports two modes of operation:
  1. Loading a local snapshot file from the filesystem.
     In this mode, the file path must be provided as the only positional argument.
         $ vault operator raft snapshot load raft.snap
  2. Loading a snapshot that was created by Vault's automated snapshot utility from cloud storage.
     In this case, the command accepts two K=V arguments: auto_snapshot_config and url.
         $ vault operator raft snapshot load auto_snapshot_config=foo url=https://foo.com/blob/snapshot
snapshot restore
Restores a snapshot of Vault data taken with vault operator raft snapshot save.
Usage: vault operator raft snapshot restore <snapshot_file>
  Installs the provided snapshot, returning the cluster to the state defined in it.
      $ vault operator raft snapshot restore raft.snap
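  Restoring a snapshot taken from a different cluster additionally requires the
  -force flag; after a forced restore, the unseal or recovery keys of the
  cluster the snapshot was taken from are needed:
      $ vault operator raft snapshot restore -force raft.snap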
snapshot save
Takes a snapshot of your Vault data. If you use integrated storage, you can use the snapshot to restore Vault to the point in time when the snapshot was taken. Vault does not support snapshots for deployments that use Raft only for high-availability storage.
Usage: vault operator raft snapshot save <snapshot_file>
  Saves a snapshot of the current state of the Raft cluster into a file.
      $ vault operator raft snapshot save raft.snap
snapshot unload
Enterprise
Appropriate Vault Enterprise license required
Unload a snapshot previously loaded with vault operator raft snapshot load.
Usage: vault operator raft snapshot unload <SNAPSHOT_ID>
  Unloads the snapshot with the provided ID.
      $ vault operator raft snapshot unload 8a4f31cc-7bb2-227a-aa18-8f140fb40f10
autopilot
This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend. Three subcommands are
supported: get-config, set-config, and state.
For a more detailed overview of autopilot features, see the concepts page.
Usage: vault operator raft autopilot <subcommand> [options] [args]
This command groups subcommands for operators interacting with the autopilot
functionality of the integrated Raft storage backend.
Subcommands:
    get-config    Returns the configuration of the autopilot subsystem under integrated storage
    set-config    Modify the configuration of the autopilot subsystem under integrated storage
    state         Displays the state of the raft cluster under integrated storage as seen by autopilot
autopilot state
Displays the state of the raft cluster under integrated storage as seen by autopilot. It shows whether autopilot considers the cluster healthy.
State includes a list of all servers by node ID and IP address.
Usage: vault operator raft autopilot state
  Displays the state of the raft cluster under integrated storage as seen by autopilot.
    $ vault operator raft autopilot state
Example output
Healthy:                         true
Failure Tolerance:               1
Leader:                          vault_1
Voters:
   vault_1
   vault_2
   vault_3
Servers:
   vault_1
      Name:              vault_1
      Address:           127.0.0.1:8201
      Status:            leader
      Node Status:       alive
      Healthy:           true
      Last Contact:      0s
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
   vault_2
      Name:              vault_2
      Address:           127.0.0.1:8203
      Status:            voter
      Node Status:       alive
      Healthy:           true
      Last Contact:      564.765375ms
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
   vault_3
      Name:              vault_3
      Address:           127.0.0.1:8205
      Status:            voter
      Node Status:       alive
      Healthy:           true
      Last Contact:      3.814017875s
      Last Term:         3
      Last Index:        61
      Version:           1.17.3
      Node Type:         voter
The "Failure Tolerance" of a cluster is the number of nodes in the cluster that could fail gradually without causing an outage.
When verifying the health of your cluster, check the following fields of each server:
- Healthy: whether Autopilot considers this node healthy or not
- Status: the voting status of the node. This will be voter, leader, or non-voter.
- Last Index: the index of the last applied Raft log. This should be close to the "Last Index" value of the leader.
- Version: the version of Vault running on the server
- Node Type: the type of node. On CE, this will always be voter. See below for an explanation of Enterprise node types.
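These checks can also be scripted against the JSON form of the state output. This is a sketch, not an exact recipe: it assumes jq is available and that the response nests the servers under data.servers with lowercase field names such as healthy and name:
    $ vault operator raft autopilot state -format=json \
        | jq -r '.data.servers[] | select(.healthy == false) | .name'
Any names printed are servers that Autopilot currently considers unhealthy.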
Vault Enterprise will include additional output related to automated upgrades, optimistic failure tolerance, and redundancy zones.
Example Vault Enterprise output
Redundancy Zones:
   a
      Servers: vault_1, vault_2, vault_5
      Voters: vault_1
      Failure Tolerance: 2
   b
      Servers: vault_3, vault_4
      Voters: vault_3
      Failure Tolerance: 1
Upgrade Info:
   Status: await-new-voters
   Target Version: 1.17.5
   Target Version Voters:
   Target Version Non-Voters: vault_5
   Other Version Voters: vault_1, vault_3
   Other Version Non-Voters: vault_2, vault_4
   Redundancy Zones:
      a
         Target Version Voters:
         Target Version Non-Voters: vault_5
         Other Version Voters: vault_1
         Other Version Non-Voters: vault_2
      b
         Target Version Voters:
         Target Version Non-Voters:
         Other Version Voters: vault_3
         Other Version Non-Voters: vault_4
"Optimistic Failure Tolerance" describes the number of healthy active and back-up voting servers that can fail gradually without causing an outage.
Enterprise Node Types
- voter: The server is a Raft voter and contributing to quorum.
- read-replica: The server is not a Raft voter, but receives a replica of all data.
- zone-voter: The main Raft voter in a redundancy zone.
- zone-extra-voter: An additional Raft voter in a redundancy zone.
- zone-standby: A non-voter in a redundancy zone that can be promoted to a voter, if needed.
autopilot get-config
Returns the configuration of the autopilot subsystem under integrated storage.
Usage: vault operator raft autopilot get-config
  Returns the configuration of the autopilot subsystem under integrated storage.
    $ vault operator raft autopilot get-config
autopilot set-config
Modify the configuration of the autopilot subsystem under integrated storage.
Usage: vault operator raft autopilot set-config [options]
  Modify the configuration of the autopilot subsystem under integrated storage.
      $ vault operator raft autopilot set-config -server-stabilization-time 10s
The following flags are available for the operator raft autopilot set-config command.
- -cleanup-dead-servers (bool: false) - Controls whether to remove dead servers from the Raft peer list periodically or when a new server joins. This requires that min-quorum is also set.
- -last-contact-threshold (string: "10s") - Limit on the amount of time a server can go without leader contact before being considered unhealthy.
- -dead-server-last-contact-threshold (string: "24h") - Limit on the amount of time a server can go without leader contact before being considered failed. This takes effect only when cleanup_dead_servers is set. When adding new nodes to your cluster, the dead_server_last_contact_threshold needs to be larger than the amount of time that it takes to load a Raft snapshot, otherwise the newly added nodes will be removed from your cluster before they have finished loading the snapshot and starting up. If you are using an HSM, your dead_server_last_contact_threshold needs to be larger than the response time of the HSM.
Warning
  We strongly recommend keeping dead_server_last_contact_threshold at a high
  duration, such as a day, as setting it too low could result in the removal of
  nodes that aren't actually dead.
- -max-trailing-logs (int: 1000) - Number of entries in the Raft log that a server can be behind before being considered unhealthy. If this value is too low, it can cause the cluster to lose quorum if a follower falls behind. This value only needs to be increased from the default if you have a very high write load on Vault and you see that it takes a long time to promote new servers to becoming voters. This is an unlikely scenario and most users should not modify this value.
- -min-quorum (int) - The minimum number of servers that should always be present in a cluster. Autopilot will not prune servers below this number. There is no default for this value and it should be set to the expected number of voters in your cluster when cleanup_dead_servers is set to true. Use the quorum size guidance to determine the proper minimum quorum size for your cluster.
- -server-stabilization-time (string: "10s") - Minimum amount of time a server must be in a healthy state before it can become a voter. Until that happens, it will be visible as a peer in the cluster, but as a non-voter, meaning it won't contribute to quorum.
- -disable-upgrade-migration (bool: false) - Controls whether to disable automated upgrade migrations, an Enterprise-only feature.
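As a fuller example, several of these flags are often combined when enabling dead-server cleanup; the values below are illustrative and should be sized to your own cluster:
    $ vault operator raft autopilot set-config \
        -cleanup-dead-servers=true \
        -min-quorum=5 \
        -dead-server-last-contact-threshold=24h \
        -server-stabilization-time=30s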