Deprovision
To destroy your kubefirst cluster, complete the following steps.
Prerequisites
kubefirst CLI
If you are coming from a cloud marketplace and didn't use the kubefirst CLI, you will need to install it first.
brew install konstructio/taps/kubefirst
More information in the installation steps from our documentation.
Kubernetes CLI
If you haven't already installed it, you will need kubectl to retrieve some information from your cluster.
brew install kubectl
More information in the Kubernetes documentation.
Cloud provider CLIs
If the command-line tool for the chosen cloud provider is not installed, consult the following documentation for installation steps:
- Akamai
- AWS
- Civo
- DigitalOcean
- Google Cloud
- Vultr
brew install linode-cli
More information in the Linode documentation.
brew install awscli
More information in the AWS documentation.
brew tap civo/tools
brew install civo
More information in the Civo documentation.
brew install doctl
More information in the DigitalOcean documentation.
brew install google-cloud-sdk
gcloud components install gke-gcloud-auth-plugin
More information in the Google Cloud documentation.
brew tap vultr/vultr-cli
brew install vultr-cli
More information in the Vultr documentation.
Terraform CLI
To complete the deprovisioning process, you also need to install the Terraform CLI.
brew install terraform
More information in the Terraform documentation.
Obtain the kubeconfig
Before continuing, use the command-line tool for the chosen cloud provider to get the kubeconfig
for your cluster (replace <my-cluster>
with your cluster name):
- Akamai
- AWS
- Civo
- DigitalOcean
- Google Cloud
- Vultr
cluster_name="YOUR_CLUSTER_NAME"
# Look up the cluster ID by its label, then decode the kubeconfig into ~/.kube/config (this overwrites any existing file there)
cluster_id=$(linode lke clusters-list --json | jq '.[] | select(.label=="'$cluster_name'") | .id')
linode lke kubeconfig-view $cluster_id --json | jq -r '.[].kubeconfig | @base64d' > ~/.kube/config
unset cluster_name
unset cluster_id
aws eks update-kubeconfig --name <my-cluster> --region <my-cluster-region>
civo kubernetes config <my-cluster> --save
doctl kubernetes cluster kubeconfig save <my-cluster>
gcloud container clusters get-credentials <my-cluster> --region=<my-cluster-region>
cluster_name="YOUR_CLUSTER_NAME"
# Look up the cluster ID by its label, then decode the kubeconfig into ~/.kube/config (this overwrites any existing file there)
cluster_id=$(vultr-cli kubernetes list --output json | jq -r '.vke_clusters[] | select(.label=="'$cluster_name'") | .id')
vultr-cli kubernetes config $cluster_id | base64 -d > ~/.kube/config
Steps for Deprovisioning
Once you have the kubeconfig
file for your cluster, retrieve the Vault token:
kubectl -n vault get secrets/vault-unseal-secret --template='{{index .data "root-token"}}' | base64 -d
This assumes you've exported the environment variable KUBECONFIG=/path/to/my/kubeconfig; if not, you can add --kubeconfig /path/to/my/kubeconfig just after kubectl.
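For example, either of the following works:
# Option 1: export the kubeconfig path for the whole shell session
export KUBECONFIG=/path/to/my/kubeconfig
kubectl -n vault get secrets/vault-unseal-secret --template='{{index .data "root-token"}}' | base64 -d
# Option 2: pass the kubeconfig on a single command
kubectl --kubeconfig /path/to/my/kubeconfig -n vault get secrets/vault-unseal-secret --template='{{index .data "root-token"}}' | base64 -d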
Once you have the Vault root token, run the following kubefirst
command to retrieve the required environment variables for deprovisioning:
kubefirst terraform set-env \
--vault-token <vault-token> \
--vault-url https://vault.<your-domain> \
--output-file .env
source .env
This will collect the required variables from the necessary secret path and output them to the file referenced by the --output-file flag. The second command sets the environment variables from that file's content.
If, for some reason, Vault wasn't correctly deployed and initialized when you created your cluster, this step won't generate a proper .env file. You still need to continue the deprovisioning process, as this doesn't mean the cluster or other resources weren't created properly. You will either need to set some environment variables manually (see the tip at the Cloud Provider step for all the values needed) or provide them to Terraform when asked.
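If you do need to set values manually, it looks like the following sketch; the variable names come from the list in the tip further down, and the values shown are placeholders to replace with your own:
# Placeholders only; substitute your real values before running terraform
export VAULT_ADDR="https://vault.<your-domain>"
export VAULT_TOKEN="<vault-root-token>"
export TF_VAR_vault_token="$VAULT_TOKEN"
export TF_VAR_civo_token="<civo-api-token>"   # the cloud credential variable differs per provider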
Next, you will need to clone the gitops
repository generated by kubefirst during the initial cluster creation:
# GitHub
git clone git@github.com:<my-org>/gitops.git
# GitLab
git clone git@gitlab.com:<my-group>/gitops.git
Terraform
If you have added custom resources to the terraform
section of your gitops
repository, these resources will show up in the plan. Please exercise caution when destroying, and consult the official documentation before proceeding.
Switch to the terraform
directory inside of the cloned gitops
repository. For example:
cd gitops/terraform
Within the terraform
directory, there are several subdirectories that contain the infrastructure-as-code declarations for your kubefirst resources.
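For example, you can list them; the names below are only illustrative, since the exact set depends on your cloud and git provider:
ls
# e.g. civo  github  users  vault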
Cloud Provider
To deprovision the cloud provider resources, switch to the cloud provider subdirectory - for example:
cd <cloud-folder>
You can then use standard terraform
commands:
terraform init
terraform destroy
Note that on certain providers, such as Google Cloud, this command can take up to an hour because of the number of resources that were created. Once the destroy command completes successfully, the cluster and all of its resources are fully destroyed.
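If you added custom resources and want to review exactly what will be removed before confirming, a standard Terraform preview works here:
terraform plan -destroy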
If the init command is not working, it's probably related to the .env file not being sourced or created properly. You can validate it was sourced correctly by running echo $TF_VAR_kbot_ssh_private_key, which should return a value. If you close your terminal or reload your ZSH or Bash configuration files, the values will be lost: you will need to source the .env file again. You can also validate that the file contains all the necessary environment variables:
- ARGO_SERVER_URL
- ATLANTIS_GH_HOSTNAME
- ATLANTIS_GH_TOKEN
- ATLANTIS_GH_USER
- ATLANTIS_GH_WEBHOOK_SECRET
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- CIVO_TOKEN
- GITHUB_OWNER
- GITHUB_TOKEN
- TF_VAR_atlantis_repo_webhook_secret
- TF_VAR_atlantis_repo_webhook_url
- TF_VAR_aws_access_key_id
- TF_VAR_aws_secret_access_key
- TF_VAR_b64_docker_auth
- TF_VAR_civo_token
- TF_VAR_cloudflare_api_key
- TF_VAR_cloudflare_origin_ca_api_key
- TF_VAR_github_token
- TF_VAR_kbot_ssh_private_key
- TF_VAR_kbot_ssh_public_key
- TF_VAR_vault_addr
- TF_VAR_vault_token
- VAULT_ADDR
- VAULT_TOKEN
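As a quick sanity check, a small shell loop can flag any of these that are still unset; this is only a sketch, and it assumes the .env file exports the variables so printenv can see them (extend the list with the remaining names above as needed):
# Report any expected variable that is not visible in the current shell
for var in VAULT_ADDR VAULT_TOKEN GITHUB_OWNER GITHUB_TOKEN TF_VAR_kbot_ssh_private_key TF_VAR_kbot_ssh_public_key; do
  printenv "$var" > /dev/null || echo "missing: $var"
done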
Git
To deprovision the git provider resources, switch to the git provider subdirectory - for example:
# GitHub
cd ../github
# GitLab
cd ../gitlab
You can then use standard terraform
commands:
terraform init
terraform destroy
Once you've destroyed terraform
resources for the cloud and git providers, the only resource left to clean up is the state storage objects that kubefirst created on your behalf. If you'd like to remove these, this can be achieved by using the cloud console or the command-line utility for your chosen cloud provider.
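As an example on AWS, where kubefirst stores the Terraform state in an S3 bucket, removal could look like the sketch below; <state-store-bucket> is a placeholder for the bucket name shown in your cloud console, and other providers have equivalent object storage commands:
# Replace <state-store-bucket> with the bucket kubefirst created for Terraform state
aws s3 rm s3://<state-store-bucket> --recursive
aws s3 rb s3://<state-store-bucket>
# If the bucket has versioning enabled, leftover object versions may need to be deleted from the console first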
You can now delete the gitops
repository you cloned on your computer, and the .env
file:
cd ../../..
rm -rf gitops
rm .env
Console UI
Whether you created your new cluster using the CLI directly (kubefirst <cloud> create) or by using the console UI (kubefirst launch up), kubefirst will have created a k3d cluster in Docker. We call it cluster 0; it is used either to display the console UI or to connect to our API and create your new kubefirst cluster directly from the CLI. Since it's no longer needed, destroy it by running:
kubefirst launch down
This command will also reset any local configuration files so you can create new clusters.
You don't have to wait until deprovisioning to run this command: as soon as your cluster on the public cloud of your choice is created, you can get rid of cluster 0.