Any organization should plan to rotate its keys. The bad news is that almost no one does it; the good news is that it is really simple to securely refresh storage keys and use temporary shared access signatures in a Kubernetes cluster using Key Vault and Azure Functions.
Key rotation brings a lot of security benefits. Imagine that your master key has been compromised, or that an unauthorized employee had access to it. Now you have to change the key, an operation that in a utopian world could be done manually by issuing a key regeneration command and changing an environment variable value in some CI/CD pipeline.
The aim of this post is to show you how to properly configure some Azure services in order to refresh the primary and secondary keys of an Azure Storage Account and to generate time- and permission-limited Shared Access Signatures you can use in a Kubernetes cluster.
Proposed architecture
I know what you are thinking: the diagram is awful. Nevertheless, it allows me to explain in eight simple points what you will need to achieve our goal.
Prerequisites (a.k.a. deploying the needed services):
First of all, you will need the following services deployed to your subscription:
- An Azure Storage Account
az storage account create -n damaggiostoragekv -g StorageKeyVaultRG -l northeurope --assign-identity --sku Standard_LRS
- An Azure Function App with an assigned Managed Service Identity (the Managed Service Identity, MSI from now on, lets you authenticate to other cloud services without keeping credentials in your code):
Important: take note of the “principalId” of your Function App, as it will be needed later on
az functionapp create -n damaggiofuncstorkv -g StorageKeyVaultRG -s damaggiostoragekv -c northeurope
az functionapp identity assign -n damaggiofuncstorkv -g StorageKeyVaultRG
- An Azure Key Vault (in order to keep things clean, I suggest you create it with no predefined access policies):
# As of 8th June 2018 the Key Vault CLI extension which gives access to storage permissions is in preview
az extension add --name keyvault-preview
az keyvault create --name damaggioKVStorage --resource-group StorageKeyVaultRG --location northeurope --no-self-perms true
Now that you have all the required services, we can start.
1) Assigning the “Storage Account Key Operator Service Role” to Key Vault
The “Storage Account Key Operator Service Role” allows its holder to list and regenerate keys on Storage Accounts, which is exactly what Key Vault needs in order to manage the newly created Storage Account.
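A sketch of the commands, assuming the well-known application id of the Key Vault first-party service principal in the Azure public cloud (cfa8b339-82a2-471a-a3c9-0fc0be7a4093) and a placeholder subscription id:

```shell
# Retrieve the resource id of the storage account (also needed in step 3)
az storage account show -n damaggiostoragekv -g StorageKeyVaultRG --query id --output tsv

# Grant the role to the Key Vault first-party service principal;
# cfa8b339-82a2-471a-a3c9-0fc0be7a4093 is its well-known application id
# in the Azure public cloud. <subscription-id> is a placeholder.
az role assignment create \
  --role "Storage Account Key Operator Service Role" \
  --assignee cfa8b339-82a2-471a-a3c9-0fc0be7a4093 \
  --scope "/subscriptions/<subscription-id>/resourceGroups/StorageKeyVaultRG/providers/Microsoft.Storage/storageAccounts/damaggiostoragekv"
```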
2) Assigning an access policy to the current user
Before adding the Storage Account to the jurisdiction of Key Vault, you have to assign the following permissions to your own user:
- Storage Permissions: list, regeneratekey, set
- Secret Permissions: set, list (needed only for the SSH private key part we will discuss in step 4)
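A sketch of this policy assignment, using `az keyvault set-policy` with a placeholder for your own object id:

```shell
# Find the object id of the currently signed-in user
az ad signed-in-user show --query objectId --output tsv

# Grant the storage and secret permissions listed above
# (<current-user-object-id> is a placeholder)
az keyvault set-policy --name damaggioKVStorage \
  --object-id <current-user-object-id> \
  --storage-permissions list regeneratekey set \
  --secret-permissions set list
```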
3) Adding an Azure Storage Account to Azure Key Vault
Now that every user and service has the proper permissions, you can finally add the storage account to the Key Vault jurisdiction.
The “az keyvault storage add” command requires some parameters:
- --active-key-name: allows you to specify the key (e.g. key1) that all operations will use
- --auto-regenerate-key: if enabled, Key Vault will regenerate the active key after the specified time period
- --regeneration-period: if auto-regenerate-key is true, the regeneration period of the active key (in ISO 8601 duration format, e.g. P1D for one day)
- --resource-id: the storage account resource id retrieved in step 1
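Putting the parameters above together, the command can be sketched as follows (`<subscription-id>` is a placeholder; the resource id is the one retrieved in step 1):

```shell
az keyvault storage add --vault-name damaggioKVStorage \
  -n damaggiostoragekv \
  --active-key-name key1 \
  --auto-regenerate-key \
  --regeneration-period P1D \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/StorageKeyVaultRG/providers/Microsoft.Storage/storageAccounts/damaggiostoragekv"
```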
4) Setting an access policy in Key Vault for Azure Function App
You have added the storage account to Key Vault. You still need to securely store the SSH private key for the Kubernetes master node in a Key Vault secret.
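A sketch of storing the key as a secret, assuming the secret name "privatekey" (the name the Function code below reads) and an example path to the key file:

```shell
# Store the master node SSH private key in a secret named "privatekey";
# ~/.ssh/id_rsa is an example path, use the key for your cluster
az keyvault secret set --vault-name damaggioKVStorage \
  --name privatekey \
  --file ~/.ssh/id_rsa
```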
The next step is to set the proper permissions to allow the Function to access both the shared access signatures for the storage account, generated by Key Vault, and the freshly created secret containing the SSH private key for connecting to the Kubernetes cluster.
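A sketch of the policy, using the principalId noted when the MSI was assigned; the permission set here is an assumption covering what the Function does below (reading secrets, creating and reading a SAS definition):

```shell
# <function-principal-id> is the principalId of the Function App's MSI
az keyvault set-policy --name damaggioKVStorage \
  --object-id <function-principal-id> \
  --secret-permissions get \
  --storage-permissions getsas setsas
```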
5,6) Writing and deploying the Azure Function
The final step consists of writing an Azure Function to retrieve the Shared Access Signature from Key Vault and store it as a Kubernetes secret in the cluster.
To do so, your function should:
- Instantiate a KeyVaultClient object using the Managed Service Identity feature:
AzureServiceTokenProvider azureServiceTokenProvider = new AzureServiceTokenProvider();
KeyVaultClient kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
- Define and set a SAS policy in Key Vault (this step could, and should, be done in a separate function/application):
// signedPermissions: allowed values: (a)dd (c)reate (d)elete (l)ist (p)rocess (r)ead (u)pdate (w)rite
// signedServices: allowed values: (b)lob (f)ile (q)ueue (t)able
// signedResourceTypes: allowed values: (s)ervice (c)ontainer (o)bject
string _sasName = "blobrwu4hours";
Dictionary<string, string> _sasProperties = new Dictionary<string, string>()
{
    {"sasType", "account"},
    {"signedProtocols", "https"},
    {"signedServices", "b"},
    {"signedResourceTypes", "sco"},
    {"signedPermissions", "rwu"},
    {"signedVersion", "2017-11-09"},
    {"validityPeriod", "PT4H"}
};
SasDefinitionAttributes _sasDefinitionAttributes = new SasDefinitionAttributes(enabled: true);
var setSas = Task.Run(
        () => kv.SetSasDefinitionAsync(_vaultBaseUrl, _storageAccountName, _sasName, _sasProperties, _sasDefinitionAttributes))
    .ConfigureAwait(false).GetAwaiter().GetResult();
log.Info("Sas definition created!");
- Retrieve a SAS token corresponding to that policy:
SecretBundle secret = Task.Run(
        () => kv.GetSecretAsync(_vaultBaseUrl, $"{_storageAccountName}-{_sasName}"))
    .ConfigureAwait(false).GetAwaiter().GetResult();
base64sas = Convert.ToBase64String(Encoding.UTF8.GetBytes(secret.Value));
- Get the SSH private key from the Key Vault secret:
SecretBundle sshPrivateKey = Task.Run(
        () => kv.GetSecretAsync(_vaultBaseUrl, "privatekey"))
    .ConfigureAwait(false).GetAwaiter().GetResult();
- Connect to the Kubernetes cluster via SSH and issue a “kubectl create secret” command:
// using Renci.SshNet;
try
{
    // For this example the key stored in Key Vault has no passphrase;
    // in a real deployment it should be protected by one
    PrivateKeyFile privKey = new PrivateKeyFile(new MemoryStream(Encoding.UTF8.GetBytes(sshPrivateKey.Value)));
    using (var client = new SshClient(host, 22, sshUsername, privKey))
    {
        // Only trust the host if its key matches the expected fingerprint
        byte[] expectedFingerPrint = StringToByteArray(sshPubKeyFingerprint);
        client.HostKeyReceived += (sender, e) =>
        {
            if (expectedFingerPrint.Length == e.FingerPrint.Length)
            {
                for (var i = 0; i < expectedFingerPrint.Length; i++)
                {
                    if (expectedFingerPrint[i] != e.FingerPrint[i])
                    {
                        e.CanTrust = false;
                        break;
                    }
                }
            }
            else
            {
                e.CanTrust = false;
            }
        };
        client.Connect();
        // Replace the old secret with one containing the fresh SAS
        var delete = client.CreateCommand($"kubectl delete secret {kubernetesSecretName}").Execute();
        log.Info(delete);
        var create = client.CreateCommand($"kubectl create secret generic {kubernetesSecretName} --from-literal=secretKey={base64sas}").Execute();
        log.Info(create);
        client.Disconnect();
    }
}
catch (Exception ex)
{
    log.Error($"Something went wrong with Kubernetes: {ex.Message}");
}
If you want, you can find a complete sample Function here; it will need some environment variables set in order to work correctly.
7,8) [Optional] Deploy a simple web application to view the result of your work
To verify the result, you can issue a kubectl get secret command.
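For example, on the master node (using whatever name you set in the kubernetesSecretName environment variable as a placeholder):

```shell
# Inspect the secret created by the Function
kubectl get secret <kubernetesSecretName> -o yaml

# Decode the stored value: the Function stored the SAS base64-encoded,
# and Kubernetes adds its own base64 layer on top, hence the double decode
kubectl get secret <kubernetesSecretName> -o jsonpath='{.data.secretKey}' | base64 --decode | base64 --decode
```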
Alternatively, you could slightly modify the node web application from my previous post to retrieve a wonderful jpeg from your storage and get a view like this one.
If you want to know where this amazing beach is, or if you have any doubts, feel free to reach out to me.