Version: 2.7

Overview

The DigitalOcean provisioning process will:

  • Create a Kubernetes management cluster in the DigitalOcean cloud.
  • Create three virtual workload clusters, one for each default environment (development, staging & production).
  • Create a gitops Git repository from our gitops-template and store it in your selected Git provider.
  • Install Argo CD, bootstrapped against your gitops repository, so that repository powers the platform and becomes your source of truth.
  • Install all the platform applications using GitOps (from the /registry folder in the gitops repository).
  • Apply Terraform to configure Vault (from the /terraform/vault folder in the gitops repository).
  • Configure the gitops repository to automatically run Terraform executions through Atlantis.
  • Integrate Argo Workflows with your selected Git provider.
  • Install Argo Workflows cluster workflow templates to build containers, publish Helm charts, and provide the GitOps delivery pipelines.
  • Install Metaphor, a sample application that uses this automation to demonstrate application delivery.

Installation Diagram

Applications

kubefirst digitalocean create provisions a DigitalOcean Kubernetes cluster to host your cloud native environment.
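
The command below is a minimal sketch of how that provisioning run might be started. The flag names and values shown (--alerts-email, --domain-name, --github-org, --cluster-name) are illustrative assumptions and may differ by version; run kubefirst digitalocean create --help to see the exact options available in your installation.

```shell
# Hypothetical invocation; flag names and values are assumptions,
# verify against `kubefirst digitalocean create --help`.
kubefirst digitalocean create \
  --alerts-email you@yourcompany.com \
  --domain-name yourdomain.com \
  --github-org your-github-org \
  --cluster-name kubefirst
```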

Your DigitalOcean cluster will include:

| Application | Description |
| --- | --- |
| Argo CD | GitOps Continuous Delivery |
| Argo Workflows | Application Continuous Integration |
| Atlantis | Terraform Workflow Automation |
| cert-manager | Certificate Automation Utility |
| ChartMuseum | Helm Chart Registry |
| External Secrets Operator | Syncs Kubernetes secrets with Vault secrets |
| GitHub Action Runner Controller | GitHub Self-Hosted CI Executor |
| HashiCorp Vault | Secrets Management |
| Metaphor | (development, staging, production) instances of a sample Next.js app |
| Ingress Nginx | Ingress Controller |
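
Once provisioning finishes, you can confirm these platform applications were delivered through GitOps by listing what Argo CD is managing. This is a small sketch that assumes Argo CD runs in the argocd namespace and that your kubeconfig points at the new management cluster:

```shell
# List the Argo CD Application resources synced from the gitops repository
# (assumes Argo CD is installed in the `argocd` namespace).
kubectl get applications -n argocd

# Check that the platform workloads themselves are running.
kubectl get pods --all-namespaces
```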

Known Limitations

General

  • Let's Encrypt is limited to 50 certificates per week, with an additional limit of 5 per subdomain. We use Let's Encrypt to automatically create certificates for your domains. In most cases this won't be an issue, but you may reach the limit if you frequently create and destroy Kubefirst clusters using the same domain over a short period. You can use the Let's Debug Toolkit to check your usage, but note that its results aren't always accurate; a rough in-cluster check is sketched below.
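
As a rough way to see which certificates cert-manager has already requested in a running cluster (this won't reflect usage from clusters you have since destroyed), you can list the cert-manager Certificate resources. The sketch assumes the cert-manager CRDs are installed, as they are on a Kubefirst cluster:

```shell
# List cert-manager Certificate resources across all namespaces,
# including whether each one has been successfully issued (READY column).
kubectl get certificates --all-namespaces
```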