Install Azure Container Storage for use with Azure Kubernetes Service

Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built for containers. Use this tutorial to install the latest production version of Azure Container Storage on an Azure Kubernetes Service (AKS) cluster, whether you create a new cluster or enable the service on an existing cluster.

This article explains how to install Azure Container Storage using two supported flows (installer-only or installer + storage type), how driver installation is triggered, and how to verify and troubleshoot your deployment. Azure Container Storage installs drivers that implement the Container Storage Interface (CSI).

If you prefer the open-source version of Azure Container Storage, see the local-csi-driver repository for alternate installation instructions.

By the end of this tutorial, you can:

  • Prepare your Azure CLI environment
  • Create or select a resource group for your cluster
  • Confirm your node pool virtual machine types meet the installation criteria
  • Install Azure Container Storage by creating a new AKS cluster or enabling it on an existing cluster

Important

This article applies to Azure Container Storage (version 2.x.x). For earlier versions, see Azure Container Storage (version 1.x.x) documentation. If you already have Azure Container Storage (version 1.x.x) installed on your AKS cluster, remove it by following these steps.

Prerequisites

  • If you don't have an Azure subscription, create a free account before you begin.

  • This article requires Azure CLI version 2.83.0 or later. For more information, see How to install the Azure CLI. If you run into installation issues, disable preview extensions such as aks-preview. Install or update extensions as needed:

    • az extension add --upgrade --name k8s-extension
    • az extension add --upgrade --name elastic-san (Elastic SAN only)
  • You need the Kubernetes command-line client, kubectl. It's already installed if you're using Azure Cloud Shell. You can install it locally by running the az aks install-cli command.

  • Check if your target region is supported in Azure Container Storage regions.

  • Plan your node pool configuration:

    • Use Linux as the OS type (Windows isn't supported).
    • If you plan to use the local NVMe storage type, select a virtual machine (VM) SKU that supports local NVMe data disks, such as storage-optimized or GPU-accelerated VMs.
    • For existing clusters, ensure node pools already use a supported VM SKU before enabling Azure Container Storage.
  • If you use Elastic SAN for the first time in the subscription, run this one-time registration command:

    az provider register --namespace Microsoft.ElasticSan
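    Provider registration is asynchronous and can take a few minutes. You can poll its state before proceeding; this check assumes only the Azure CLI is installed:

    ```shell
    # Wait until the Microsoft.ElasticSan provider reports "Registered"
    az provider show --namespace Microsoft.ElasticSan \
      --query registrationState --output tsv
    ```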
    

Set subscription context

Set your Azure subscription context using the az account set command. You can view subscription IDs by running az account list --output table. Replace <subscription-id> with your subscription ID.

az account set --subscription <subscription-id>

Create a resource group

An Azure resource group is a logical container for resources. When you create a resource group, you specify a location. That location stores resource group metadata and serves as the default region for resources you create without an explicit region.

Create a resource group using az group create. Replace <resource-group-name> with your resource group name and <location> with an Azure region such as eastus, westus2, westus3, or westeurope. If you enable Azure Container Storage on an existing AKS cluster, use the resource group that already hosts the cluster.

az group create --name <resource-group-name> --location <location>

If the resource group is created successfully, you see output similar to this example:

{
  "id": "/subscriptions/<guid>/resourceGroups/myContainerStorageRG",
  "location": "eastus",
  "managedBy": null,
  "name": "myContainerStorageRG",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}

Install Azure Container Storage on a new AKS cluster

Choose the scenario that matches your environment.

Important

Azure Container Storage installs the latest major version by default. You can pin a major version with --container-storage-version. You can't pin minor or patch versions.

Installer-only installation

Run the following command to create a new AKS cluster and install Azure Container Storage. Replace <cluster-name> and <resource-group> with your own values, and specify which VM type you want to use.

az aks create -n <cluster-name> -g <resource-group> --node-vm-size Standard_L8s_v3 --enable-azure-container-storage --generate-ssh-keys

The deployment can take up to 5 minutes. CSI driver installation is deferred until you create a storage class or enable a storage type later.

Follow the instructions for creating a local NVMe storage class or Elastic SAN storage class.
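To run the verification commands later in this article, first merge the new cluster's credentials into your kubeconfig. This sketch uses the same placeholders as the command above:

```shell
# Download kubeconfig credentials for the new cluster
az aks get-credentials -n <cluster-name> -g <resource-group>

# Confirm the installer components are present
kubectl get deploy -n kube-system | grep acstor
```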

Installer + storage type installation

Run the following command to create a new AKS cluster and install Azure Container Storage. Replace <cluster-name> and <resource-group> with your own values, and specify which VM type you want to use.

az aks create -n <cluster-name> -g <resource-group> --node-vm-size Standard_L8s_v3 --enable-azure-container-storage ephemeralDisk --generate-ssh-keys

This command installs the installer, deploys the ephemeralDisk driver, and creates a default storage class. You can install and use both local NVMe and Elastic SAN by providing comma-separated values such as ephemeralDisk,elasticSan.
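For example, to create a cluster with both storage types enabled in a single step (same placeholders as above):

```shell
az aks create -n <cluster-name> -g <resource-group> \
  --node-vm-size Standard_L8s_v3 \
  --enable-azure-container-storage ephemeralDisk,elasticSan \
  --generate-ssh-keys
```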

Install Azure Container Storage on an existing AKS cluster

Installer-only installation

Run the following command to enable Azure Container Storage on an existing AKS cluster. Replace <cluster-name> and <resource-group> with your own values.

az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-storage

The deployment can take up to 5 minutes. When it completes, the cluster has the Azure Container Storage installer component installed. CSI driver installation is deferred until you create a storage class or enable a storage type later. Follow the instructions for creating a local NVMe storage class or Elastic SAN storage class.
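As an illustration of the deferred-driver flow, creating a storage class that references a supported provisioner is what triggers CSI driver installation. The exact parameters depend on your configuration; this sketch assumes the `localdisk.csi.acstor.io` provisioner and the attributes shown in the verification output later in this article:

```shell
# Illustrative example: applying this storage class triggers
# installation of the local NVMe CSI driver
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
provisioner: localdisk.csi.acstor.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```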

Installer + storage type installation

Run the following command to enable Azure Container Storage on an existing AKS cluster. Replace <cluster-name> and <resource-group> with your own values, and specify which storage type you want to use.

az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-storage elasticSan

This command installs the installer, deploys the Elastic SAN CSI driver, and creates a default storage class. You can install and use both local NVMe and Elastic SAN by providing comma-separated values such as ephemeralDisk,elasticSan.
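For example, to enable both storage types on an existing cluster in a single step (same placeholders as above):

```shell
az aks update -n <cluster-name> -g <resource-group> \
  --enable-azure-container-storage ephemeralDisk,elasticSan
```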

If you use Terraform instead of the Azure CLI, by the end of this tutorial, you can:

  • Prepare Terraform and authenticate to Azure
  • Define your resource group and AKS cluster configuration
  • Confirm your node pool virtual machine types meet the installation criteria
  • Apply Terraform to deploy Azure Container Storage or enable it on an existing cluster

Important

This article applies to Azure Container Storage (version 2.x.x). For earlier versions, see Azure Container Storage (version 1.x.x) documentation. If you already have Azure Container Storage (version 1.x.x) installed on your AKS cluster, remove it by following these steps.

Prerequisites

  • If you don't have an Azure subscription, create a free account before you begin.

  • This article requires Azure CLI version 2.83.0 or later. For more information, see How to install the Azure CLI. If you run into installation issues, disable preview extensions such as aks-preview. Install or update extensions as needed:

    • az extension add --upgrade --name k8s-extension
    • az extension add --upgrade --name elastic-san (Elastic SAN only)
  • You need the Kubernetes command-line client, kubectl. It's already installed if you're using Azure Cloud Shell. You can install it locally by running the az aks install-cli command.

  • Check if your target region is supported in Azure Container Storage regions.

  • Plan your node pool configuration:

    • Use Linux as the OS type (Windows isn't supported).
    • If you plan to use the local NVMe storage type, select a virtual machine (VM) SKU that supports local NVMe data disks, such as storage-optimized or GPU-accelerated VMs.
    • For existing clusters, ensure node pools already use a supported VM SKU before enabling Azure Container Storage.
  • Install Terraform version 1.5 or later and confirm the installation with terraform version. Terraform can reuse your Azure CLI authentication.

  • If you use Elastic SAN for the first time in the subscription, run this one-time registration command:

    az provider register --namespace Microsoft.ElasticSan
    

Set subscription context

Terraform can determine a target Azure subscription via multiple methods:

  • subscription_id in the provider block
  • ARM_SUBSCRIPTION_ID environment variable
  • Azure CLI default subscription
  • Managed identity (when running in Azure)

For local use, set the Azure CLI context:

az account set --subscription <subscription-id>
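Alternatively, set the environment variable that the AzureRM provider reads. This leaves your Azure CLI default subscription unchanged:

```shell
# The AzureRM provider reads ARM_SUBSCRIPTION_ID when subscription_id
# isn't set in the provider block
export ARM_SUBSCRIPTION_ID="<subscription-id>"
```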

Install Azure Container Storage on a new AKS cluster

Important

Azure Container Storage installs the latest available version and updates itself automatically. Manual version selection isn't supported.

  1. In an empty working directory, create a main.tf file with the following minimal configuration of an AKS cluster. Update the resource names, locations, and VM sizes to meet your requirements.

    terraform {
      required_version = ">= 1.5.0"
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~> 4.0"
        }
      }
    }
    
    provider "azurerm" {
      features {}
    }
    
    resource "azurerm_resource_group" "rg" {
      name     = "demo-aks-rg"
      location = "eastus"
    }
    
    resource "azurerm_kubernetes_cluster" "aks" {
      name                = "demo-aks-cluster"
      dns_prefix          = "demo-aks"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
    
      default_node_pool {
        name       = "systempool"
        vm_size    = "Standard_L8s_v3"
        node_count = 3
      }
    
      identity {
        type = "SystemAssigned"
      }
    }
    
    resource "azurerm_kubernetes_cluster_extension" "container_storage" {
      # NOTE: the `name` parameter must be "acstor" for Azure CLI compatibility
      name           = "acstor"
      cluster_id     = azurerm_kubernetes_cluster.aks.id
      extension_type = "microsoft.azurecontainerstoragev2"
    }
    
  2. Initialize the working directory to download the AzureRM provider.

    terraform init
    
  3. Review the planned changes.

    terraform plan
    
  4. Apply the configuration to create the resource group, AKS cluster, and Azure Container Storage extension. Deployment typically takes 5 minutes.

    terraform apply
    

When it completes, the cluster has the Azure Container Storage installer component installed. CSI driver installation is deferred until you create a storage class. Follow the instructions for creating a local NVMe storage class or Elastic SAN storage class.

Install Azure Container Storage on an existing AKS cluster

If your AKS cluster already exists and you manage it outside of Terraform, you can still enable Azure Container Storage by authoring only the extension resource. Use a data source to look up the cluster ID.

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_kubernetes_cluster" "existing" {
  name                = "existing-aks"
  resource_group_name = "existing-aks-rg"
}

resource "azurerm_kubernetes_cluster_extension" "container_storage" {
  # NOTE: the `name` parameter must be "acstor" for Azure CLI compatibility
  name           = "acstor"
  cluster_id     = data.azurerm_kubernetes_cluster.existing.id
  extension_type = "microsoft.azurecontainerstoragev2"
}

Run terraform init (if this is a new working directory) followed by terraform apply to install Azure Container Storage on the targeted cluster. Deployment typically takes 5 minutes.
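The workflow is the same as for a new cluster; you can optionally review the plan before applying:

```shell
terraform init
terraform plan
terraform apply
```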

When it completes, the cluster has the Azure Container Storage installer component installed. CSI driver installation is deferred until you create a storage class. Follow the instructions for creating a local NVMe storage class or Elastic SAN storage class.

Verify installation

Verify installer (installer-only mode)

After an installer-only enable, verify that the installer is present:

kubectl get deploy -n kube-system | grep acstor

Expected output:

acstor-cluster-manager                2/2     2            2           4d9h
acstor-geneva                         2/2     2            2           4d9h

Verify storage class presence

After you create a storage class or enable a storage type, verify the storage class:

kubectl get sc

Example output:

NAME                    PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azuresan                san.csi.azure.com         Delete          Immediate              false                  4d7h
local                   localdisk.csi.acstor.io   Delete          WaitForFirstConsumer   true                   4d5h
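To exercise a storage class end to end, you can create a test PersistentVolumeClaim against it. The claim name and size here are illustrative, and the `local` storage class name matches the example output above:

```shell
# Illustrative PVC bound to the "local" storage class; with
# WaitForFirstConsumer it stays Pending until a pod consumes it
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local
  resources:
    requests:
      storage: 10Gi
EOF
```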

Verify driver installation

Verify the components expected after storage class creation or storage type installation:

kubectl get pod -n kube-system | grep acstor

Example output:

acstor-azuresan-csi-driver-jrqd2                       7/7     Running   0               142m
acstor-azuresan-csi-driver-tcdp8                       7/7     Running   0               142m
acstor-cluster-manager-76c67496f9-8ln5d                2/2     Running   0               3h54m
acstor-cluster-manager-76c67496f9-b4c8q                2/2     Running   0               3h54m
acstor-geneva-588bcbcc67-4tr5d                         3/3     Running   0               3h54m
acstor-geneva-588bcbcc67-k7j7k                         3/3     Running   0               3h54m
acstor-node-agent-46v47                                1/1     Running   0               142m
acstor-node-agent-6c99m                                1/1     Running   0               142m
acstor-otel-collector-4lfgz                            1/1     Running   0               142m
acstor-otel-collector-hw9nd                            1/1     Running   0               142m

Debugging

For debugging, watch the system as components roll out:

kubectl get events -n kube-system --watch
kubectl get pod -n kube-system --watch

Inspect the HelmRelease and OCIRepository custom resources used by the installer:

kubectl describe helmreleases.helm.installer.acstor.io -n kube-system
kubectl describe ocirepositories.source.installer.acstor.io -n kube-system

Next steps