Key Vault, Functions, Kubernetes: Securely refresh storage keys and update them in a Kubernetes cluster

Any organization should plan to rotate keys. The bad news is that almost no one actually does it; the good news is that it is really simple to securely refresh storage keys and use temporary shared access signatures in a Kubernetes cluster using Key Vault and Azure Functions.

There are a lot of security benefits in key rotation. Imagine that your master key has been compromised, or that an unauthorized employee had access to it. Now you have to change the key, an operation that even in a utopian world would be done manually, by issuing a key regeneration command and changing an environment variable value in some CI/CD pipeline.

The aim of this post is to show you how to properly configure some Azure services in order to refresh the primary and secondary keys of an Azure Storage Account and generate time and permission limited Shared Access Signatures you can use in a Kubernetes cluster.

Proposed architecture

I know what you are thinking: the diagram is awful. Nevertheless, this picture allows me to explain to you, in eight simple points, what you will need to achieve our goal.

Prerequisites (a.k.a. deploying the needed services):

First of all you will need the following services deployed to your subscription:

  • An Azure Storage Account
    az storage account create -n damaggiostoragekv -g StorageKeyVaultRG -l northeurope --assign-identity --sku Standard_LRS
  • An Azure Function with an assigned Managed Service Identity (the Managed Service Identity, MSI from now on, lets the Function authenticate to other cloud services without you having to store credentials anywhere):
    az functionapp create -n damaggiofuncstorkv -g StorageKeyVaultRG -s damaggiostoragekv -c northeurope
    az functionapp identity assign -n damaggiofuncstorkv -g StorageKeyVaultRG

    Important: take note of the “principalId” of your Function App, as it will be needed later on

  • An Azure Key Vault (in order to keep things clean, I suggest you create it with no predefined access policies):
    #As of 8th June 2018 the KeyVault CLI extension which gives access to Storage Permissions is in preview
    az extension add --name keyvault-preview
    az keyvault create --name damaggioKVStorage --resource-group StorageKeyVaultRG --location northeurope --no-self-perms true

Now that you have all the required services, we can start.

1) Assigning the “Storage Account Key Operator Service Role” to Key Vault

The “Storage Account Key Operator Service Role” allows Key Vault to list and regenerate keys on Storage Accounts, which is exactly the access it needs on the newly created Storage Account.

#Retrieve the storage id for the scope
az storage account show -n damaggiostoragekv -g StorageKeyVaultRG --query "id" 

#The command returns the storage account resource id, used below as the --scope of the role assignment
#(the --assignee GUID is Key Vault's own well-known application id):

az role assignment create --assignee cfa8b339-82a2-471a-a3c9-0fc0be7a4093 --role "Storage Account Key Operator Service Role" --scope "/subscriptions/<subscription-id>/resourceGroups/StorageKeyVaultRG/providers/Microsoft.Storage/storageAccounts/damaggiostoragekv"
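If you want to double-check that the assignment took effect, you can list the role assignments on the same scope (a sketch; the exact output shape may vary with CLI versions):

```shell
#List role assignments on the storage account scope, showing only the role names;
#"Storage Account Key Operator Service Role" should appear in the output
az role assignment list \
  --scope "/subscriptions/<subscription-id>/resourceGroups/StorageKeyVaultRG/providers/Microsoft.Storage/storageAccounts/damaggiostoragekv" \
  --query "[].roleDefinitionName"
```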

2) Assigning an access policy to the current user

Before adding the Storage Account to the jurisdiction of Key Vault, you have to assign the following permissions to your own user:

  • Storage Permissions: list, regeneratekey, set
  • Secret Permissions: set, list (needed only for the SSH Private Key part we will discuss in step 4)
#Retrieve your objectId (fill in your user principal name) and assign the access policy
az ad user show --upn-or-object-id "" --query objectId
az keyvault set-policy -n damaggioKVStorage --object-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --storage-permissions list regeneratekey set --secret-permissions set list

3) Adding an Azure Storage Account to Azure Key Vault

Now that every user and service has the proper permissions, you can finally add the storage account to the Key Vault jurisdiction.

The “az keyvault storage add” command, which you see below, takes a few notable parameters:

  • --active-key-name: allows you to specify the primary key which all the operations will use
  • --auto-regenerate-key: if enabled, Key Vault will regenerate the active key on a specified schedule
  • --regeneration-period: if auto-regenerate-key is set, you have to specify the regeneration period of the active key (in ISO 8601 duration format, like P1D for one day)
  • --resource-id: the storage account resource id retrieved in step 1
az keyvault storage add --vault-name damaggioKVStorage -n damaggiostorage --active-key-name key1 --auto-regenerate-key --regeneration-period P1D --resource-id "/subscriptions/<subscription-id>/resourceGroups/StorageKeyVaultRG/providers/Microsoft.Storage/storageAccounts/damaggiostoragekv"
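Once the command succeeds, the account is under Key Vault management. A quick way to inspect the managed account entry and its rotation settings is the show counterpart of the same preview command group (a sketch, assuming the keyvault-preview extension from the prerequisites is still installed):

```shell
#Show the managed storage account entry inside Key Vault,
#including the active key name and the regeneration period
az keyvault storage show --vault-name damaggioKVStorage -n damaggiostorage
```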

4) Setting an access policy in Key Vault for Azure Function App

You have added the storage account to Key Vault. You still need to securely store the SSH Private Key for the Kubernetes master node in a Key Vault Secret:

#Import in KeyVault the SSH Private Key for reaching Kubernetes Cluster
az keyvault secret set -n privatekey --vault-name damaggioKVStorage -f privkey.key

The next step is to set the proper permissions so that the Function can access both the shared access signatures for the storage account, generated by Key Vault, and the freshly created secret containing the SSH Private Key for connecting to the Kubernetes cluster.

#Get Function App Identity and set-policy for keyvault
az functionapp identity show --name damaggiofuncstorkv --resource-group StorageKeyVaultRG
az keyvault set-policy -n damaggioKVStorage --object-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --storage-permissions setsas getsas --secret-permissions get

5,6) Writing and deploying the Azure Function

The final step consists of writing an Azure Function App that retrieves the Shared Access Signature from Key Vault and stores it as a Kubernetes secret in the cluster.

To do so, your function should:

  • Instantiate a KeyVaultClient object using the Managed Service Identity feature
    AzureServiceTokenProvider azureServiceTokenProvider = new AzureServiceTokenProvider();
    KeyVaultClient kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
  • Define and set a SAS policy in Key Vault (this step could and should be done in a separate function/application)
    //signedPermissions allowed values: (a)dd, (c)reate, (d)elete, (l)ist, (p)rocess, (r)ead, (u)pdate, (w)rite
    //signedServices allowed values: (b)lob, (f)ile, (q)ueue, (t)able
    //signedResourceTypes allowed values: (s)ervice, (c)ontainer, (o)bject
    string _sasName = "blobrwu4hours";
    Dictionary<string, string> _sasProperties = new Dictionary<string, string>() {
    	{"sasType", "account"},
    	{"signedProtocols", "https"},
    	{"signedServices", "b"},
    	{"signedResourceTypes", "sco"},
    	{"signedPermissions", "rwu"},
    	{"signedVersion", "2017-11-09"},
    	{"validityPeriod", "PT4H"}
    };
    SasDefinitionAttributes _sasDefinitionAttributes = new SasDefinitionAttributes(enabled: true);
    var setSas = Task.Run(
    	() => kv.SetSasDefinitionAsync(_vaultBaseUrl, _storageAccountName, _sasName, _sasProperties, _sasDefinitionAttributes)).Result;
    log.Info("Sas definition created!");
  • Retrieve a SAS token corresponding to that policy
    SecretBundle secret = Task.Run(
    	() => kv.GetSecretAsync(_vaultBaseUrl, $"{_storageAccountName}-{_sasName}")).Result;
    string base64sas = Convert.ToBase64String(Encoding.UTF8.GetBytes(secret.Value));
  • Get the SSH Private Key from the Key Vault Secret
    SecretBundle sshPrivateKey = Task.Run(
    	() => kv.GetSecretAsync(_vaultBaseUrl, "privatekey")).Result;
  • Connect to the Kubernetes Cluster via SSH and issue a “kubectl create secret” command
    //using Renci.SshNet;
    try
    {
    	//For this example the key stored in Key Vault has no passphrase; in production there should be one
    	PrivateKeyFile privKey = new PrivateKeyFile(new MemoryStream(Encoding.UTF8.GetBytes(sshPrivateKey.Value)));
    	using (var client = new SshClient(host, 22, sshUsername, privKey))
    	{
    		byte[] expectedFingerPrint = StringToByteArray(sshPubKeyFingerprint);
    		//Reject the host key if it does not match the expected fingerprint
    		client.HostKeyReceived += (sender, e) =>
    		{
    			if (expectedFingerPrint.Length == e.FingerPrint.Length)
    			{
    				for (var i = 0; i < expectedFingerPrint.Length; i++)
    					if (expectedFingerPrint[i] != e.FingerPrint[i])
    						e.CanTrust = false;
    			}
    			else
    				e.CanTrust = false;
    		};
    		client.Connect();
    		var delete = client.CreateCommand($"kubectl delete secret {kubernetesSecretName}").Execute();
    		var create = client.CreateCommand($"kubectl create secret generic {kubernetesSecretName} --from-literal=secretKey={base64sas}").Execute();
    		client.Disconnect();
    	}
    }
    catch (Exception ex)
    {
    	log.Error($"Something went wrong with Kubernetes: {ex.Message}");
    }
    public static byte[] StringToByteArray(string hex)
    {
    	return Enumerable.Range(0, hex.Length)
    		.Where(x => x % 2 == 0)
    		.Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
    		.ToArray();
    }
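To make the cluster-side effect of the function concrete, here is a minimal shell sketch of what the SSH session ends up doing; the SAS value is a made-up placeholder, and the secret name mirrors the KUBERNETES_SECRET_NAME setting used later:

```shell
#Hypothetical SAS token; the real one is fetched from Key Vault
SAS='sv=2017-11-09&ss=b&srt=sco&sp=rwu&sig=EXAMPLE'

#Mirror of Convert.ToBase64String(Encoding.UTF8.GetBytes(secret.Value))
BASE64SAS=$(printf '%s' "$SAS" | base64 | tr -d '\n')

#Commands the function issues over SSH on the master node:
# kubectl delete secret thesecretname
# kubectl create secret generic thesecretname --from-literal=secretKey=$BASE64SAS
printf '%s\n' "$BASE64SAS"
```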

If you want, you can find a complete sample Function here; it needs a few environment variables set in order to work correctly:

az functionapp config appsettings set --name damaggiofuncstorkv --resource-group StorageKeyVaultRG --settings KEYVAULT_BASEURL="https://<keyvaultname>" KEYVAULT_STORAGE_NAME="damaggiostorage" KUBERNETES_DNS="<something>" KUBERNETES_SSH_USERNAME="azureuser" KUBERNETES_SSH_FINGERPRINT="3ec2167eb65d55fd9a707425ef0ce5ax" KUBERNETES_SECRET_NAME="thesecretname"

7,8) [Optional] Deploy a simple web application to view the result of your work

To verify the result, you can issue a kubectl get secret command on the cluster.
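Keep in mind that Kubernetes returns secret data base64-encoded, and the function already base64-encoded the SAS before storing it, so you need to decode twice to see the raw token (a sketch; "thesecretname" is the hypothetical secret name from the settings):

```shell
#First base64 -d undoes Kubernetes' own encoding of secret data;
#the second undoes the function's Convert.ToBase64String
kubectl get secret thesecretname -o jsonpath='{.data.secretKey}' | base64 -d | base64 -d
```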

Alternatively, you could slightly modify the Node web application from my previous post to retrieve a wonderful jpeg from your storage and get a view like this one

If you want to know where this amazing beach is, or if you have any doubts, feel free to reach out to me.
