Looking to move from CLI-managed to Helm-managed Linkerd

Hello! We’ve been running Linkerd for a bit using the CLI to manage it, but have since decided that it makes more sense for us to use Helm. Are there recommendations for how to proceed with this transition? Any particular gotchas we should be aware of?

Hi neal, it definitely makes more sense to use Helm in production, as it unlocks a whole ecosystem of tools and gives you extras like a record of deployed releases, the ability to roll back, etc.

The docs provide instructions on how to install Linkerd with Helm for the two core charts: linkerd-crds and linkerd-control-plane. Note that extensions also have their own separate charts. Besides that, these are some things to take into account:

  • Installing via Helm no longer generates default root and issuer certificates, so you have to provide them yourself (rough sketch after this list)
  • Make sure you migrate all the flags you used with the CLI into a values.yaml file that you feed into the helm install commands
  • You need to remove the Linkerd instance you installed via the CLI before replacing it with Helm’s, which can result in downtime for your data plane. If you really require a no-downtime migration, it’s possible, but not trivial. We can give some pointers in that case.
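
For reference, here’s roughly what that can look like. If you haven’t already added the Helm repo:

helm repo add linkerd https://helm.linkerd.io/stable
helm repo update

Then generate the trust anchor and issuer certificates and feed them, along with your migrated values.yaml, into the linkerd-control-plane install (which comes after linkerd-crds). This follows the Linkerd docs and uses step; the file names, expiry, and values keys are illustrative, so double-check them against your chart version:

step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key

helm install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  -f values.yaml \
  linkerd/linkerd-control-plane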

Let us know how it goes!

Thanks for this. We definitely require a no-downtime migration, and would appreciate those pointers!

From reading the docs, I would have thought that:

  • removing the annotation that injects Linkerd into the meshed workloads
  • restarting them
  • uninstalling the control plane

would provide for a no-downtime uninstallation. But I guess from your post that I must be missing something.

Err, maybe I misunderstand what you mean by downtime for the data plane. Do you mean that there will be some time during which the workloads that are usually in the Linkerd mesh will not be part of the mesh? Or do you mean that there will be some time during which the workloads are completely unavailable? If the former, that’s okay with us. If the latter, I’m more nervous. (But again, I thought after reading the docs that it’s the former.)

If you uninstall the control plane, the proxies in the data plane will continue to work with cached discovery information but won’t be able to connect to any new pods that get created. So it’s really not recommended to run without a healthy control plane.
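
Whatever route you end up taking, it’s worth running the checks between steps to confirm that the control plane and the proxies are healthy:

linkerd check
linkerd check --proxy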

To migrate a control plane installed with the CLI to a Helm-managed one, you’ll have to have Helm “adopt” the control plane resources. This is an undocumented Helm feature: you have to add two annotations and one label to each resource before Helm is able to adopt it; otherwise helm install will fail and complain about “invalid ownership metadata”.

For example, to adopt the resources for the linkerd-crds chart, you’d have to inspect the chart definition, list all the resources it defines, and annotate and label them like so:

# add Helm ownership metadata to each CRD defined by the linkerd-crds chart
for crd in \
  authorizationpolicies.policy.linkerd.io \
  httproutes.policy.linkerd.io \
  meshtlsauthentications.policy.linkerd.io \
  networkauthentications.policy.linkerd.io \
  serverauthorizations.policy.linkerd.io \
  servers.policy.linkerd.io \
  serviceprofiles.linkerd.io; do
  kubectl annotate crd "$crd" meta.helm.sh/release-name=linkerd-crds
  kubectl annotate crd "$crd" meta.helm.sh/release-namespace=linkerd
  kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm
done

Then you can install the chart with

helm install linkerd-crds -n linkerd linkerd/linkerd-crds

You’d have to perform the same operation for the linkerd-control-plane chart.
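
For that chart the resources are namespace-scoped and the release name changes accordingly. One way to enumerate them is to render the chart and walk through the output (a sketch; the exact set of resources depends on your chart version):

helm template linkerd-control-plane -n linkerd linkerd/linkerd-control-plane \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key \
  | grep -E '^kind:|^  name:'

# then for each resource, e.g. the destination deployment:
kubectl annotate -n linkerd deploy linkerd-destination meta.helm.sh/release-name=linkerd-control-plane
kubectl annotate -n linkerd deploy linkerd-destination meta.helm.sh/release-namespace=linkerd
kubectl label -n linkerd deploy linkerd-destination app.kubernetes.io/managed-by=Helm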

Note that this method is not officially supported, so test it in a staging environment first to make sure you’ve got it right.

On second thought, unmeshing the data plane like you describe might be simpler, as long as you don’t depend on policies or routing rules for your operation. For example, if you have an AuthorizationPolicy allowing traffic only from a given ServiceAccount in the mesh, you’d naturally have to unmesh the service attached to that policy first.
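
If you go that route, something like this could serve as a starting point (the namespace and deployment names are placeholders, and it assumes the inject annotation sits on the workload’s pod template rather than on the namespace):

# check which policy resources are in play before unmeshing anything
kubectl get authorizationpolicies.policy.linkerd.io,serverauthorizations.policy.linkerd.io,servers.policy.linkerd.io -A

# remove the inject annotation from the pod template; the patch itself triggers a rollout
kubectl -n my-ns patch deploy my-app --type=json \
  -p='[{"op":"remove","path":"/spec/template/metadata/annotations/linkerd.io~1inject"}]'
kubectl -n my-ns rollout status deploy my-app

Once everything is unmeshed, you can remove the CLI-installed control plane (linkerd uninstall | kubectl delete -f -) and install fresh with Helm.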