# Upgrade Existing Clusters to Use Custom Resource Definitions
Upgrading to consul-helm versions >= 0.30.0 will require some changes if you use any of the following:

- `connectInject.centralConfig.enabled`
- `connectInject.centralConfig.defaultProtocol`
- `connectInject.centralConfig.proxyDefaults`
- `meshGateway.globalMode`
- The Connect annotation `consul.hashicorp.com/connect-service-protocol`
## Central Config Enabled
If you were previously setting `centralConfig.enabled` to `false`:

```yaml
connectInject:
  centralConfig:
    enabled: false
```
Then instead you must use `server.extraConfig` and `client.extraConfig`:

```yaml
client:
  extraConfig: |
    {"enable_central_service_config": false}
server:
  extraConfig: |
    {"enable_central_service_config": false}
```
If you were previously setting it to `true`, no changes are required because it now defaults to `true`, but you can remove the setting from your config if you wish.
## Default Protocol
If you were previously setting:

```yaml
connectInject:
  centralConfig:
    defaultProtocol: 'http' # or any value
```
Now you must use custom resources to manage the protocol for new and existing services. To upgrade:

1. First, ensure you're running Consul >= 1.9.0; this version is required to support custom resources. See Consul Version Upgrade for more information on how to upgrade Consul versions.
2. Next, modify your Helm values:

   1. Remove the `defaultProtocol` config. This won't affect existing services.
   2. Set:

      ```yaml
      controller:
        enabled: true
      ```

3. Now you can upgrade your Helm chart to the latest version with the new Helm values.
From now on, any new service will require a `ServiceDefaults` resource to set its protocol:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: my-service-name
spec:
  protocol: 'http'
```
Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service's `service-defaults` config entry to a `ServiceDefaults` resource. See Migrating Config Entries.
Note: This setting was removed because it didn't support changing the protocol after a service was first run and because it didn't work in secondary datacenters.
## Proxy Defaults
If you were previously setting:

```yaml
connectInject:
  centralConfig:
    proxyDefaults: |
      {
        "key": "value" // or any values
      }
```
You will need to perform the following steps to upgrade:

1. Remove the setting from your Helm values. This won't have any effect on your existing cluster because this config is only read when the cluster is first created.
2. Upgrade the Helm chart.
3. If you later wish to change any of the proxy defaults settings, follow the Migrating Config Entries instructions for your `proxy-defaults` config entry. This will require Consul >= 1.9.0.
Note: This setting was removed because it couldn't be changed after initial installation.
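As a sketch of where you'd end up after that migration: assuming your old `proxyDefaults` JSON contained, say, a `local_connect_timeout_ms` key (illustrative here), the same keys would move under `spec.config` of a `ProxyDefaults` custom resource named `global`:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  config:
    # Keys under config use the exact same names as in the old proxyDefaults JSON.
    local_connect_timeout_ms: 1000
```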
## Mesh Gateway Mode
If you were previously setting:

```yaml
meshGateway:
  globalMode: 'local' # or any value
```
You will need to perform the following steps to upgrade:

1. Remove the setting from your Helm values. This won't have any effect on your existing cluster because this config is only read when the cluster is first created.
2. Upgrade the Helm chart.
3. If you later wish to change the mode or any other setting in `proxy-defaults`, follow the Migrating Config Entries instructions to migrate your `proxy-defaults` config entry to a `ProxyDefaults` resource. This will require Consul >= 1.9.0.
Note: This setting was removed because it couldn't be changed after initial installation.
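For illustration, after migrating, the old `globalMode: 'local'` value would be expressed in the `ProxyDefaults` resource's `spec.meshGateway.mode` field (a sketch based on the `ProxyDefaults` example later in this guide):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local # was meshGateway.globalMode in the Helm values
```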
## `connect-service-protocol` Annotation
If any of your Connect services had the `consul.hashicorp.com/connect-service-protocol` annotation set, e.g.

```yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-protocol": "http"
...
```
You will need to perform the following steps to upgrade:

1. Ensure you're running Consul >= 1.9.0; this version is required to support custom resources. See Consul Version Upgrade for more information on how to upgrade Consul versions.
2. Remove this annotation from existing deployments. This will have no effect on the deployments because the annotation was only used when the service was first created.
3. Modify your Helm values and add:

   ```yaml
   controller:
     enabled: true
   ```

4. Now you can upgrade your Helm chart to the latest version.
From now on, any new service will require a `ServiceDefaults` resource to set its protocol:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: my-service-name
spec:
  protocol: 'http'
```
Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service's `service-defaults` config entry to a `ServiceDefaults` resource. See Migrating Config Entries.
Note: The annotation was removed because it didn't support changing the protocol and it wasn't supported in secondary datacenters.
## Migrating Config Entries
A config entry that already exists in Consul must be migrated into a Kubernetes custom resource in order to manage it from Kubernetes:

1. Determine the `kind` and `name` of the config entry. For example, the protocol would be set by a config entry with `kind: service-defaults` and `name` equal to the name of the service. In another example, a `proxy-defaults` config has `kind: proxy-defaults` and `name: global`.
2. Once you've determined the `kind` and `name`, query Consul to get its contents:

   ```shell
   $ consul config read -kind <kind> -name <name>
   ```

   This will require `kubectl exec`'ing into a Consul server or client pod. If you're using ACLs, you will also need an ACL token, passed via the `-token` flag.

   For example:

   ```shell
   $ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults
   {
     "Kind": "service-defaults",
     "Name": "foo",
     "Protocol": "http",
     "MeshGateway": {},
     "Expose": {},
     "CreateIndex": 60,
     "ModifyIndex": 60
   }
   ```
3. Now we're ready to construct a Kubernetes resource for the config entry. It will look something like:

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceDefaults
   metadata:
     name: foo
     annotations:
       'consul.hashicorp.com/migrate-entry': 'true'
   spec:
     protocol: 'http'
   ```
   - The `apiVersion` will always be `consul.hashicorp.com/v1alpha1`.
   - The `kind` will be the CamelCase version of the Consul kind, e.g. `proxy-defaults` becomes `ProxyDefaults`.
   - `metadata.name` will be the `name` of the config entry.
   - `metadata.annotations` will contain the `"consul.hashicorp.com/migrate-entry": "true"` annotation.
   - The namespace should be whatever namespace the service is deployed in. For `ProxyDefaults`, we recommend the namespace that Consul is deployed in.
   - The contents of `spec` will be a transformation from JSON keys to YAML keys.
   - The following keys can be ignored: `CreateIndex`, `ModifyIndex`, and any key that has an empty object, e.g. `"Expose": {}`.
   For example:

   ```json
   {
     "Kind": "service-defaults",
     "Name": "foo",
     "Protocol": "http",
     "MeshGateway": {},
     "Expose": {},
     "CreateIndex": 60,
     "ModifyIndex": 60
   }
   ```

   Becomes:

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceDefaults
   metadata:
     name: foo
     annotations:
       'consul.hashicorp.com/migrate-entry': 'true'
   spec:
     protocol: 'http'
   ```
   And:

   ```json
   {
     "Kind": "proxy-defaults",
     "Name": "global",
     "MeshGateway": {
       "Mode": "local"
     },
     "Config": {
       "local_connect_timeout_ms": 1000,
       "handshake_timeout_ms": 10000
     },
     "CreateIndex": 60,
     "ModifyIndex": 60
   }
   ```

   Becomes:

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ProxyDefaults
   metadata:
     name: global
     annotations:
       'consul.hashicorp.com/migrate-entry': 'true'
   spec:
     meshGateway:
       mode: local
     config:
       # Note that anything under config for ProxyDefaults will use the exact
       # same keys.
       local_connect_timeout_ms: 1000
       handshake_timeout_ms: 10000
   ```
4. Run `kubectl apply` to apply the Kubernetes resource.
5. Next, check that it synced successfully:

   ```shell
   $ kubectl get servicedefaults foo
   NAME   SYNCED   AGE
   foo    True     1s
   ```

   If its `SYNCED` status is `True`, then the migration for this config entry was successful.

   If its `SYNCED` status is `False`, use `kubectl describe` to view the reason syncing failed:

   ```shell
   $ kubectl describe servicedefaults foo
   ...
   Status:
     Conditions:
       Last Transition Time:  2021-01-12T21:03:29Z
       Message:               migration failed: Kubernetes resource does not match existing Consul config entry: consul={...}, kube={...}
       Reason:                MigrationFailedError
       Status:                False
       Type:                  Synced
   ```
   The most likely reason is that the contents of the Kubernetes resource don't match the Consul resource. Make changes to the Kubernetes resource to match the Consul resource (ignoring the `CreateIndex`, `ModifyIndex`, and `Meta` keys).

6. Once the `SYNCED` status is `True`, you can make changes to the resource and they will get synced to Consul.
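For instance, a hypothetical edit to the migrated `foo` resource from the example above: to change the service's protocol, you would now update the custom resource and re-apply it with `kubectl apply`, rather than editing the config entry in Consul directly (the `grpc` value here is illustrative):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: foo
  annotations:
    'consul.hashicorp.com/migrate-entry': 'true'
spec:
  protocol: 'grpc' # changed from 'http'; the controller syncs the change to Consul
```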