Microsoft Azure available from UK data centres

We at CoreAzure are delighted by Microsoft’s announcement of the general availability of its UK data centres, enabling businesses and government bodies to keep their information secure within the UK.

Notable first customers of these UK cloud services include the Ministry of Defence, whose 230,000 employees will use Office 365 and Azure, and the South London and Maudsley NHS Foundation Trust, the largest mental health trust in the UK.

Microsoft’s decision to bring its services to UK data centres will, in our opinion, prompt many organisations to review their current business and IT strategies in relation to public cloud adoption in the UK.

The new facilities in Cardiff, Durham and London will host Microsoft’s Azure cloud platform and Office 365 productivity suite. Dynamics CRM Online will be added in the first half of 2017. Not every Azure service is generally available yet within the UK, but it is worth reviewing what Microsoft Azure can offer your organisation.

The CoreAzure team prides itself on being specialists in Microsoft Azure, with an ability to maximise your existing investment in Microsoft technologies whilst realising your vision to adopt cloud. If you are thinking of moving to Microsoft Azure, let us help you on that journey.


Microsoft launches new Azure Information Protection service

Microsoft has announced the launch of the Microsoft Azure Information Protection service, which builds on the existing Microsoft Azure Rights Management (Azure RMS) service and Microsoft’s recent acquisition of Secure Islands.

Microsoft have now combined Secure Islands’ industry-leading data classification and labelling technology with Azure RMS.

What does Azure Information Protection provide?

Document tracking and revocation
Documents can be tracked to show who has opened the document and from which geographical location. Users can choose to revoke access to the document in the event of unexpected activity.

Protection using encryption, authentication, and use rights
For sensitive data, protection can be applied after classification and labelling. This includes encrypting the document, which requires authentication of the user and enforces user rights that define what can be done with the data.

Automatic, user-driven, and recommended classifications
Data can be classified and labelled automatically through content detection rules. Users can also manually classify data or be prompted to make an informed classification decision.

Classification overrides and justifications
Based on policies and rules, users can be empowered to override a classification and optionally be required to provide a justification.

Flexible policy and rules engine
A set of default sensitivity labels are available with options to define custom labels based on business needs. Rules can also be configured for actions to take based on classification.

This is an excellent addition to Azure RMS, and we look forward to Microsoft Azure Information Protection becoming available in public preview in July 2016.

If you are interested in learning more about this innovative solution then please visit the website.

Microsoft Azure Active Directory: Preview

For customers who are struggling with federating Active Directory and other directory stores with Microsoft Online Services (Windows Azure and Office 365), Microsoft has made a confession: “integrating your on premises identities with Azure AD is harder than it should be” and requires “too many pages of documentation to read, too many different tools to download and configure, and far too much on premises hardware required.”

The good news? It has done (and is continuing to do) something about it, in the form of a new, “four-clicks-and-you’re-done” tool: Azure Active Directory Preview.

The tool is currently in Beta and is billed as “a single wizard that performs all of the steps you would otherwise have to do manually for connecting Active Directory and local directories to Azure Active Directory.”

That means it installs all the required bits of .NET Framework, the Azure Active Directory PowerShell Module and the Microsoft Online Services Sign-In Assistant, then gets DirSync up and running between your on-premises environment and Microsoft Azure.

For now, the tool only supports syncing a single Active Directory forest with Windows Azure Active Directory, but Microsoft promises to bring more forests into the cloud in future.

Customers wishing to join the program will find the following information useful:

To join the program through Microsoft Connect:

For more information about AADSync:

Azure DocumentDB is finally here

At last Microsoft has taken the wraps off its fully managed, JSON document database service. As of today (21st August) Microsoft has placed Azure DocumentDB into public preview and made it available in the following regions:

  • US West
  • Europe North
  • Europe West

There is an increasing demand for NoSQL databases, but invariably developers crave features and capabilities inherent to relational database systems. Unfortunately NoSQL means tough choices:

  • strong OR eventual consistency
  • schema-free with limited query capability OR schematised and rich query capability
  • transactions OR scale

Wouldn’t it be great if you could have a massively scalable, schema-free database with rich query and transaction processing using the most ubiquitous programming language (JavaScript), data model (JSON) and transport protocol (HTTP)? That is exactly what Microsoft have given us with DocumentDB.

DocumentDB automatically indexes documents without requiring any schema or secondary indices, supports SQL-based relational and hierarchical queries over heterogeneous JSON values, integrates database transactions with JavaScript exceptions, and operates seamlessly over JSON documents.

Above all they’ve provided a multi-tenant database service which is blazingly fast and (via tenant isolation) safe and secure.


DocumentDB supports SQL queries without forcing developers to create explicit schemas, secondary indices or views. It is able to efficiently index, query and process heterogeneous documents through its deep commitment to the JSON data model.

DocumentDB’s SQL language is based on the JavaScript type system and expression semantics, and can invoke JavaScript UDFs. The result is a query grammar with a familiar SQL dialect, giving developers an efficient and natural way to query over JSON documents.
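To give a flavour of the dialect, here is a hedged sketch of a query over a hypothetical collection of person documents (the document shape and the alias `p` are invented for illustration, not taken from any real schema):

```sql
-- Query a hypothetical collection of JSON documents shaped like:
--   { "name": "Alice", "age": 34, "address": { "city": "London" } }
SELECT p.name, p.address.city
FROM p
WHERE p.age >= 21
```

Note how nested properties are addressed with JavaScript-style dot notation rather than joins across normalised tables.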

There is (of course) a downloadable .NET SDK which includes a LINQ provider – there is even a rumour that Microsoft are considering native JavaScript mapping to their SQL query language.

Here is a great link for understanding how to query DocumentDB:

JavaScript as a modern day T-SQL

As we adopt NoSQL systems for their simplicity, speed and scalability, we’re often forced to give up the transactional processing capabilities of traditional RDBMS systems. Support for transactions provides a performant and robust programming model for dealing with concurrent changes, resulting in faster apps that are easy to maintain. JavaScript is an obvious choice when considering that you want application code execution within the database, but you don’t want to invent yet another procedural language.

DocumentDB has JavaScript execution deeply embedded within the database engine. All application JavaScript execution is sandboxed, resource governed, and fully isolated. As a developer you can write stored procedures and triggers natively in JavaScript, allowing you to write application logic which can be shipped using HTTP POST and executed directly on the storage partition within a transaction boundary. JSON can be materialised as JavaScript objects, and transactions can be aborted by throwing an exception. This approach frees application developers from the complexity of OR mapping technologies.
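As a hedged illustration of what such a stored procedure might look like (the procedure name, document shape and error handling here are simplified sketches, not production code), a minimal create-document procedure written against the DocumentDB server-side JavaScript API could read:

```javascript
// Minimal DocumentDB stored procedure sketch: creates a document and
// returns it in the response body. This runs inside the database engine;
// getContext() is supplied by the DocumentDB server-side API, not by
// the browser or Node.js.
function createSample(doc) {
    var context = getContext();
    var collection = context.getCollection();

    // Aborting the transaction is as simple as throwing an exception.
    if (!doc) {
        throw new Error("A document to create must be supplied.");
    }

    var accepted = collection.createDocument(
        collection.getSelfLink(),
        doc,
        function (err, created) {
            if (err) throw err;                  // rolls back the transaction
            context.getResponse().setBody(created);
        });

    if (!accepted) throw new Error("Request was not accepted; retry later.");
}
```

If any part of the procedure throws, the whole transaction is rolled back, which is exactly the RDBMS-style guarantee the paragraph above describes.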

DocumentDB Open and Approachable

We really don’t need more data formats, languages or protocols – let’s face it, the learning curve for new systems can be steep, and we’re all pressed for time. The DocumentDB product team say their mantra was to resist the urge to be inventive wherever it didn’t deliver real value to the developer.

Programming against DocumentDB is really simple, approachable and doesn’t require you to buy into a specific tool-chain or require custom encodings or extensions to JSON or JavaScript. All functionality including CRUD, query and JavaScript processing is exposed over a RESTful HTTP interface. By offering simple, scalable JavaScript and JSON over HTTP, DocumentDB doesn’t invent in the area of data models, application models or protocols.

In my opinion DocumentDB is unique in how it embraces and builds on standards that are already abundantly available and established, yet adds huge value and capabilities on top – it feels like DocumentDB gives the developer the very best of all worlds!

You can find out lots more about Azure DocumentDB at the product page here – please bear in mind though that at present DocumentDB is in preview (online cloud speak for Beta).

Feel free to drop me a line if you have any questions.

How to Stop an Azure VM

Recently I’ve had a few people ask me why they’ve continued to be charged for a VM in Azure, even though the VM wasn’t actually running.

Well – it’s all to do with the difference between “Stopped” and “Stopped (Deallocated)”. Since June 2013 Microsoft has not charged a subscription for a VM with the status “Stopped (Deallocated)”, but it continues to charge for a VM with the status “Stopped”.

Allow me to explain…

If you power down a running VM from within the guest OS, or you use the Stop-AzureVM command with the -StayProvisioned parameter, then the VM remains allocated within Azure – that is to say it will keep the IP address that was allocated to it from DHCP. From within the Azure Portal the VM will report a status of “Stopped”, but because it is still consuming Azure resources it will continue to be billed.

However if you power down a running VM from the Azure Portal using the Shut Down button, or you use the Stop-AzureVM without the -StayProvisioned parameter, then the VM will be de-allocated from any Azure resources and the Azure Portal will report a status of “Stopped (Deallocated)”, and your subscription will no longer be billed.
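The difference is easiest to see side by side. Here is a hedged sketch using the classic Azure PowerShell module (the cloud service and VM names are placeholders you would substitute with your own):

```powershell
# Stops the VM but keeps it provisioned - status "Stopped", billing continues.
Stop-AzureVM -ServiceName "mycloudservice" -Name "myvm" -StayProvisioned

# Stops AND deallocates the VM - status "Stopped (Deallocated)", billing stops.
# -Force suppresses the prompt shown when stopping the last VM in the service.
Stop-AzureVM -ServiceName "mycloudservice" -Name "myvm" -Force
```

Both commands leave the VM powered off; only the second releases the compute resources that drive the billing.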

One word of warning though…

VMs belong to a cloud service, and a cloud service has a VIP (Virtual IP) which is its external IP address. If all VMs in a particular cloud service have a status of “Stopped (Deallocated)” then it is highly likely that the cloud service will lose its VIP and will instead be allocated a different VIP when one of the VMs is restarted. This may not cause you an issue, but if you are reliant on a consistent VIP for your cloud service, then you must ensure that at least one of your VMs remains in an allocated state, either by leaving it powered on or by leaving it with a status of “Stopped”.

If you have any questions or comments, feel free to contact me.

Snapshot VMs in Azure

Unlike traditional (on-premises) virtualisation infrastructures, where it is a relatively simple process to ‘snapshot’ a virtual machine at any one point in time (thereby allowing you to restore that virtual machine back to that specific point in time), Microsoft Azure does not natively offer this functionality.

Probably the easiest way of backing up an Azure VM is to use “Server Backup” from within the Windows OS (backing up to an attached disk via blob storage), but as we all know this is nowhere near as convenient as being able to simply restore a server image back to a specific point in time.

Fortunately through the use of a series of PowerShell commands, it is possible to provide this type of backup/restore functionality – and in this blog I’ll show you how…

Install Azure PowerShell

First things first: if you haven’t already, you need to install the Azure PowerShell extensions. The easiest way to do this is to run the Microsoft Web Platform Installer, following the prompts to complete installation.

Connect To Your Azure Subscription

Next we need to connect (through Azure PowerShell) to our Azure subscription. There are two methods of connecting to your subscription:

  • Azure AD method
  • Azure Certificate method

The Azure Certificate method is the more convoluted of the two, so for the purpose of this exercise we’ll use the Azure AD method. After opening Azure PowerShell type the following command:

Add-AzureAccount

In the pop-up window that appears, type the email address and password associated with the Azure account that the VMs (that you wish to back up/restore) are running under.

Azure will authenticate, save the credential information, and subsequently close the window.

Your Azure PowerShell session will stay authenticated for 12 hours (when using the Azure AD method of authentication), after which time you will be forced to re-authenticate. Remember you can have more than one Azure subscription per account, and in fact you can add as many accounts to your Azure PowerShell session as you like. To get a list of Azure accounts and/or subscriptions type one of the following commands:

Get-AzureAccount
Get-AzureSubscription
Backup VM

To backup a VM in Azure we need to step through the following activities: –

  • Create a cloud storage container for storing backups
  • Select a virtual machine to backup
  • Identify each virtual hard disk for the virtual machine
  • Backup those virtual hard disks to the cloud storage container

Create a Cloud Storage Container

For the purpose of this exercise I have assumed that we want to keep our VM backups in Azure. There would be nothing stopping us from storing the backups locally on-premise. One benefit of this would be that you would not be paying for the storage costs in Azure, and this may become a consideration if you intend to make several backups, at different points in time, of the same machine. But for the purpose of this exercise we’re going to store our backups in Azure in the same subscription.

Prior to performing any backups we need to make sure that we have a cloud storage container in which to store them, and the first step in creating one is to ascertain the name of our Azure Storage Account. The easiest way of determining this is from the MediaLink property of an existing VM’s OS disk. In Azure PowerShell type the following commands:

Get-AzureVM

This will give you a list of the Azure VMs currently assigned to the Azure Subscription you are connected to.

$vmOSDisk = Get-AzureVM -ServiceName me-agresso-dc01 -Name me-agresso-dc01 | Get-AzureOSDisk

This will assign the operating system disk object to the variable $vmOSDisk


me-agresso-dc01 is the name of the VM that I have selected from my list of VM’s – you will need to substitute this with the name of your own VM.

$StorageAccountName = $vmOSDisk.MediaLink.Host.Split('.')[0]

Now if you type the following command you will see your perfectly parsed Azure Storage Account Name:

$StorageAccountName
Before we actually create our storage container, let’s take a look at what storage containers currently exist by typing the following command:

Get-AzureStorageContainer
To create a new storage container to hold our backups type the following command: –

New-AzureStorageContainer -Name backups -Permission off

That last command will have created a storage container named backups. Just to prove it has been successfully created, let’s take another look at what storage containers currently exist in our subscription:

Get-AzureStorageContainer
Select a VM to Backup

Next up we need to select the VM that we wish to back up – we can remind ourselves of the list of VMs assigned to our current Azure Subscription with the following command:

Get-AzureVM
Now let’s choose one of the VM’s and assign it to the variable $vm for future use within PowerShell:

$vm = Get-AzureVM -ServiceName me-agresso-dc01 -Name me-agresso-dc01


me-agresso-dc01 is the name of the VM that I have selected from my list of VM’s – you will need to substitute this with the name of your own VM.

It is best practice to ensure the virtual machine is stopped prior to backup. Type the following command to report all of the relevant properties for your selected VM:

$vm | Format-List *
Note the Power State of the VM. If it is not set to “Stopped” then you will need to stop the VM using the following command:

$vm | Stop-AzureVM -StayProvisioned

Identify Virtual Hard Disks

Next we need to identify all of the VHDs (Virtual Hard Disks) allocated to the VM. VMs in Azure are provisioned with two general types of virtual hard disk:

  • Operating System Disks
  • Data Disks

Every VM will have an Operating System Disk, from which it boots and runs the OS. In addition, each VM may have one or more Data Disks (although it must be noted that some VMs may not have a Data Disk at all).

In order to perform a complete Virtual Machine backup it is necessary to locate ALL of the VHDs that are currently being used by our VM.

Using the variable $vmOSDisk let’s store the location of our OS disk for the selected VM that we wish to backup:

$vmOSDisk = $vm | Get-AzureOSDisk

Using another variable $vmDataDisks let’s store the location of our Data Disks: –

$vmDataDisks = $vm | Get-AzureDataDisk


Because a VM may have more than one Data Disk, the value returned (and stored in the variable) is actually a Collection – when working with a collection we will need to use a ForEach loop.

Perform Backup

First off we’ll create a backup of the Operating System Disk, and then we’ll make a backup of any Data Disks.

We need to identify the blob and container names for the VHD’s we want to backup, and assign them to local variables. To do this type the following commands:

$vmOSBlobName = $vmOSDisk.MediaLink.Segments[-1]
$vmOSContainerName = $vmOSDisk.MediaLink.Segments[-2].Split('/')[0]

Now we have the blob and container names for the Operating System Disk, let’s go ahead and perform the backup:

Start-AzureStorageBlobCopy -SrcContainer $vmOSContainerName -SrcBlob $vmOSBlobName -DestContainer backups -Force

Start-AzureStorageBlobCopy is an asynchronous process which runs in the background on the Azure platform. To determine when the process has completed you can use the command Get-AzureStorageBlobCopyState like this:

Get-AzureStorageBlobCopyState -Container backups -Blob $vmOSBlobName -WaitForComplete


backups is the name of the storage container I created earlier, obviously you will need to substitute this with the name of the storage container you created.

Now we’ve created a backup of the Operating System Disk, we need to create the backups for all (if any) existing Data Disks attached to the VM. However because this time we are working with a Collection we will need to run the backup command from within a ForEach loop:

ForEach ($vmDataDisk in $vmDataDisks) {
    $vmDataBlobName = $vmDataDisk.MediaLink.Segments[-1]
    $vmDataContainerName = $vmDataDisk.MediaLink.Segments[-2].Split('/')[0]
    Start-AzureStorageBlobCopy -SrcContainer $vmDataContainerName -SrcBlob $vmDataBlobName -DestContainer backups -Force
    Get-AzureStorageBlobCopyState -Container backups -Blob $vmDataBlobName -WaitForComplete
}


backups is the name of the storage container I created earlier, obviously you will need to substitute this with the name of the storage container you created.

From a backup perspective – that’s it! You’ve successfully backed up the Operating System Disk and any attached Data Disks to the Storage Container that you created earlier.
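For convenience, the backup steps above can be collected into a single hedged sketch. The service name, VM name and container name below are placeholders for your own values, and the script assumes you are already authenticated with Add-AzureAccount and that the backups container already exists:

```powershell
# Sketch: back up the OS disk and all data disks of a classic Azure VM
# to a storage container named "backups".
$serviceName = "me-agresso-dc01"   # substitute your cloud service name
$vmName      = "me-agresso-dc01"   # substitute your VM name

$vm = Get-AzureVM -ServiceName $serviceName -Name $vmName
$vm | Stop-AzureVM -StayProvisioned

# OS disk: parse the blob and container names, then copy to "backups".
$osDisk      = $vm | Get-AzureOSDisk
$osBlob      = $osDisk.MediaLink.Segments[-1]
$osContainer = $osDisk.MediaLink.Segments[-2].Split('/')[0]
Start-AzureStorageBlobCopy -SrcContainer $osContainer -SrcBlob $osBlob -DestContainer backups -Force
Get-AzureStorageBlobCopyState -Container backups -Blob $osBlob -WaitForComplete

# Data disks: the collection may be empty, in which case the loop is a no-op.
ForEach ($dataDisk in ($vm | Get-AzureDataDisk)) {
    $dataBlob      = $dataDisk.MediaLink.Segments[-1]
    $dataContainer = $dataDisk.MediaLink.Segments[-2].Split('/')[0]
    Start-AzureStorageBlobCopy -SrcContainer $dataContainer -SrcBlob $dataBlob -DestContainer backups -Force
    Get-AzureStorageBlobCopyState -Container backups -Blob $dataBlob -WaitForComplete
}
```

This is the same sequence as the walkthrough, just in one place so it can be adapted into an automated script.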

Restore VM

To restore a VM in Azure we need to step through the following activities:

  • Select the VM to restore
  • Identify all VHD’s to be restored
  • De-provision the VM
  • Restore the Azure VM OS disk
  • Restore the Azure VM Data disk(s)
  • Re-provision the VM

Select the VM to Restore

First off we need to select the VM to restore (I am assuming you have opened up a session in Azure PowerShell and connected to your relevant subscription – if not then refer back to the top of this blog entry).

Now type the following command:

Get-AzureVM

This will give you a list of all the VMs for the subscription that you are currently connected to. Now let’s select the VM we wish to restore:

$vm = Get-AzureVM -ServiceName me-agresso-dc01 -Name me-agresso-dc01


me-agresso-dc01 is the name of the VM that I have selected to restore from my list of VMs – you will need to substitute this with the name of your own VM (note that we’ve used the variable $vm to hold the details of the selected VM).

Now that we’ve selected our VM, we need to make sure that it’s powered off and that its configuration is kept in a provisioned state – to do this type the following command:

$vm | Stop-AzureVM -StayProvisioned

Identify Virtual Hard Disks to Restore

Next we need to identify all of the VHDs (Virtual Hard Disks) allocated to the VM. VMs in Azure are provisioned with two general types of virtual hard disk:

  • Operating System Disk
  • Data Disks

Every VM will have an Operating System Disk, from which it boots and runs the OS. In addition, each VM may have one or more Data Disks (although it must be noted that some VMs may not have a Data Disk at all).

In order to perform a complete Virtual Machine restore it is necessary to locate ALL of the VHDs that are currently being used by our VM.

To store the location of the OS Disk and Data Disks using local variables type the following command:

$vmOSDisk = $vm | Get-AzureOSDisk
$vmDataDisks = $vm | Get-AzureDataDisk


Because there may be more than one Data Disk attached to the VM, the Get-AzureDataDisk command actually returns a Collection, which we will need to iterate through using a ForEach loop (assuming there are disks in the collection, of course).

The two properties that we specifically require are the DiskName and MediaLink values; these provide the specific information we need when performing a restore.
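Before going any further, it can help to eyeball those two values. A quick hedged check, assuming the $vmOSDisk and $vmDataDisks variables set above:

```powershell
# Show only the properties the restore steps depend on.
$vmOSDisk    | Select-Object DiskName, MediaLink
$vmDataDisks | Select-Object DiskName, MediaLink
```

MediaLink is the full URI of the VHD blob, which is why we can parse the storage account, container and blob names out of it.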

De-Provision VM

When a VM is provisioned in Azure the platform places a lease on each VHD to ensure the disk is not inadvertently deleted. Therefore it’s necessary to completely remove the VM in order to delete and restore the VHD. However, we need to keep the VM configuration in order to recreate it once the VHD has been restored.

The easiest way to achieve this is to create a local folder on your machine and copy the VM config from Azure, thereby allowing us to delete the VM from Azure whilst maintaining the original VM configuration. In order to achieve this, type the following commands:

$exportFolder = "C:\ExportVMs"
if (!(Test-Path -Path $exportFolder)) {
    New-Item -Path $exportFolder -ItemType Directory
}
$exportPath = $exportFolder + "\" + $vm.Name + ".xml"
$vm | Export-AzureVM -Path $exportPath


I chose the folder C:\ExportVMs but you will need to replace this with the folder of your choice.

If you look in the local folder you will now see an XML file containing the configuration of your selected Azure VM.

Now that the Azure VM configuration has been successfully exported, it is time to remove the VM from Azure, allowing the system to release the lock held on any VHDs. To do this type the following command:

Remove-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name

Restore the VM OS Disk

In order to restore the selected VM’s OS disk from the storage container, we must first define a few local variables:

$vmOSDiskName = $vmOSDisk.DiskName
$vmOSDiskuris = $vmOSDisk.MediaLink
$StorageAccountName = $vmOSDiskuris.Host.Split('.')[0]
$vmOSBlobName = $vmOSDiskuris.Segments[-1]
$vmOSOrigContainerName = $vmOSDiskuris.Segments[-2].Split('/')[0]
$backupContainerName = "backups"


backups is the name of the storage container I created earlier, obviously you will need to substitute this with the name of the storage container you created.

After removing an Azure VM there is sometimes a short period of time where the VHDs are still listed as being attached to a VM (i.e. the lock is still in place). We just need to wait until the virtual hard disk reports that it is no longer attached to a VM. The easiest way to do this is to use the Get-AzureDisk command from within a While loop. To do this, type the following command:

While ( (Get-AzureDisk -DiskName $vmOSDiskName).AttachedTo ) { Start-Sleep 5 }

Once you have run this command to ensure the OS Disk is detached, you will need to remove the current disk in preparation for restoring the disk from backup – type the following command:

Remove-AzureDisk -DiskName $vmOSDiskName -DeleteVHD

You’re now ready to restore the OS Disk from backup – type the following command:

Start-AzureStorageBlobCopy -SrcContainer $backupContainerName -SrcBlob $vmOSBlobName -DestContainer $vmOSOrigContainerName -Force

Remember that a Storage Blob Copy is an asynchronous operation, so it is prudent to check the status of the copy process and to wait until it is complete. This can be achieved using the following command:

Get-AzureStorageBlobCopyState -Container $vmOSOrigContainerName -Blob $vmOSBlobName -WaitForComplete

Once the copy has completed you can add the disk back into your subscription for the restored VM OS Disk – type the following command:

Add-AzureDisk -DiskName $vmOSDiskName -MediaLocation $vmOSDiskuris.AbsoluteUri -OS Windows

Restore the VM Data Disk(s)

Assuming your VM has one or more Data Disks attached to it (and you wish to restore them too), we use a similar process to the one we used for restoring the OS Disk. However, since the Data Disks are returned to us as a Collection, we need to run the relevant commands inside a ForEach loop to iterate through each disk in turn.


It is possible that your VM does NOT have any Data Disks attached, or that you do not wish to restore your Data Disks from a previous version. In either of these cases then you can ignore this section and move straight to “Re-provision Virtual Machine”.

Type the following commands:

ForEach ( $vmDataDisk in $vmDataDisks ) {
    $vmDataDiskName = $vmDataDisk.DiskName
    $vmDataDiskuris = $vmDataDisk.MediaLink
    $vmDataBlobName = $vmDataDiskuris.Segments[-1]
    $vmDataOrigContainerName = $vmDataDiskuris.Segments[-2].Split('/')[0]
    While ( (Get-AzureDisk -DiskName $vmDataDiskName).AttachedTo ) { Start-Sleep 5 }
    Remove-AzureDisk -DiskName $vmDataDiskName -DeleteVHD
    Start-AzureStorageBlobCopy -SrcContainer $backupContainerName -SrcBlob $vmDataBlobName -DestContainer $vmDataOrigContainerName -Force
    Get-AzureStorageBlobCopyState -Container $vmDataOrigContainerName -Blob $vmDataBlobName -WaitForComplete
    Add-AzureDisk -DiskName $vmDataDiskName -MediaLocation $vmDataDiskuris.AbsoluteUri
}

You will now have successfully iterated through the Data Disk Collection and restored each of the Data Disks that were attached to your VM.

Re-Provision the VM

So here we are – at the final stage. Once we’ve completed the restore of all the VHDs (OS Disks and Data Disks), we need to re-provision the VM using the VM config that we saved locally earlier, via the Import-AzureVM command.

Type the following command:

Import-AzureVM -Path $exportPath | New-AzureVM -ServiceName $vm.ServiceName

When the import process has completed the VM will have been restored and will automatically be started.


If your VM is assigned to a custom Virtual Network then you MUST specify that network as part of the Import-AzureVM command, otherwise you will get an error message. If you do have a custom virtual network that the VM is assigned to, then replace the above command with this one:

Import-AzureVM -Path $exportPath | New-AzureVM -ServiceName $vm.ServiceName -VNetName ca_me_test_agresso_01


ca_me_test_agresso_01 is the name of our custom virtual network that the Azure VM is assigned to.

And that’s it – using this process you can back up and restore (snapshot a point in time, and restore back to that point in time) any Azure VM. Although you could follow this blog each and every time, I would highly recommend you use the commands within it to build your own automated PowerShell scripts. Feel free to drop me a line if you have any questions, or if you wish to share any of those automated PowerShell scripts.