Posts

Highly Available Cross-Premises and VNet-to-VNet Connectivity

A couple of days ago, Microsoft announced that new Azure VPN gateways are now 6x faster, which is fantastic news.

It gets even better when you dig a little deeper: not only have the gateways become faster, but you can now create an Azure VPN gateway in an active-active configuration, where both instances of the gateway VMs establish S2S VPN tunnels to your on-premises VPN device, as shown in the following diagram:

Taking this a step further, the most reliable option is to combine active-active gateways on both your on-premises network and Azure, as shown in the diagram below:

Here you create and set up the Azure VPN gateway in an active-active configuration, and create two local network gateways and two connections for your two on-premises VPN devices. The result is full-mesh connectivity of four IPsec tunnels between your Azure virtual network and your on-premises network, as sketched below.
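For reference, this configuration can be scripted with the AzureRM PowerShell module. The following is a minimal sketch only: the resource names, ASNs, addresses and keys are placeholders for illustration, and you would repeat the local network gateway and connection commands for your second on-premises VPN device.

# Active-active gateways need two public IPs, one per gateway instance
$gwpip1 = New-AzureRmPublicIpAddress -Name "gwpip1" -ResourceGroupName $rgName -Location $location -AllocationMethod Dynamic
$gwpip2 = New-AzureRmPublicIpAddress -Name "gwpip2" -ResourceGroupName $rgName -Location $location -AllocationMethod Dynamic
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipconf1 = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf1" -SubnetId $subnet.Id -PublicIpAddressId $gwpip1.Id
$ipconf2 = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf2" -SubnetId $subnet.Id -PublicIpAddressId $gwpip2.Id
# The -EnableActiveActiveFeature switch creates the gateway in active-active mode
$gw = New-AzureRmVirtualNetworkGateway -Name "vnetgw" -ResourceGroupName $rgName -Location $location -IpConfigurations $ipconf1,$ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn 65010 -EnableActiveActiveFeature
# One local network gateway and one connection per on-premises VPN device (repeat for device 2)
$lng1 = New-AzureRmLocalNetworkGateway -Name "onprem1" -ResourceGroupName $rgName -Location $location -GatewayIpAddress "<device 1 public IP>" -Asn 65050 -BgpPeeringAddress "<device 1 BGP peer IP>"
New-AzureRmVirtualNetworkGatewayConnection -Name "conn1" -ResourceGroupName $rgName -Location $location -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng1 -ConnectionType IPsec -SharedKey "<shared key>" -EnableBgp $true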

The same active-active configuration can also apply to Azure VNet-to-VNet connections. You can create active-active VPN gateways for both virtual networks, and connect them together to form the same full mesh connectivity of 4 tunnels between the two VNets, as shown in the diagram below:

This ensures there is always a pair of tunnels between the two virtual networks during planned maintenance events, providing even better availability.
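Again as a hedged sketch (assuming $gw1 and $gw2 are the two active-active gateways, created as in the previous example, and that BGP is enabled on both), the VNet-to-VNet link is made with a pair of connections, one in each direction:

New-AzureRmVirtualNetworkGatewayConnection -Name "vnet1-to-vnet2" -ResourceGroupName $rgName1 -Location $location1 -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -ConnectionType Vnet2Vnet -SharedKey "<shared key>" -EnableBgp $true
New-AzureRmVirtualNetworkGatewayConnection -Name "vnet2-to-vnet1" -ResourceGroupName $rgName2 -Location $location2 -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -ConnectionType Vnet2Vnet -SharedKey "<shared key>" -EnableBgp $true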

If you are interested in taking this to the next level and considering highly available cross-premises connections, please see the Microsoft article ‘Configure active-active S2S VPN connections with Azure VPN Gateways’. If you would like help and assistance, please do contact us.


CoreAzure and Vuzion announce CSP partnership

CoreAzure are delighted to announce a new partnership with Vuzion to become a tier-two Cloud Solution Provider (CSP). The partnership will enhance our capability to offer Microsoft solutions and services to our customers in the public sector, including discounted pricing and enhanced managed service capabilities.

Microsoft created CSP indirect partners with the aim of strengthening the partner customer relationship, and with a specific objective of making it easier for customers to purchase the solutions and services they need through enabling them to build a long-lasting relationship with a partner they can trust.

CSP partners commit to providing a high-quality service, to bring their own value and services to final solutions, and provide efficient and accurate billing along with expert customer support.

Charlie La Foret, Pre-Sales Consultant at CoreAzure, says: “We are looking forward to passing on the benefits of becoming a CSP to our existing and prospective public and private sector clients, in particular helping them to realise the full potential of the Azure platform.”

Julian Dyer, Vuzion CTO, says: “CoreAzure has an excellent reputation as an established provider of Microsoft Cloud services. The partnership with Vuzion will help strengthen our respective services and bring value to Microsoft customers.

“For Vuzion, CoreAzure’s skills and expertise represent a valuable addition to our partner ecosystem, and we’re very pleased to welcome them on board.”

About CoreAzure:

CoreAzure are leaders in the Microsoft technology stack, specialising in Microsoft Azure, with an ability to maximise our clients’ investment in Microsoft technology whilst supporting their vision to adopt the cloud. We believe that technology needs to align with business vision, and we aim to be the partner that always goes the extra mile. Our customers see us as the “go-to experts” when it comes to Microsoft technology, and time and again we prove to them that our knowledge, skills and experience are second to none.

About Vuzion:

Vuzion is an innovative cloud aggregator born from Cobweb, the number-one independent cloud hosting provider in Europe, which has been liberating technology since 1996. We strive to deliver the best hand-picked cloud solutions through our one-of-a-kind partner ecosystem and to achieve our mission: to future-proof business.

For more information, please contact:

Charlie La Foret

07557372803

charlie.laforet@coreazure.com

Snapshot VMs in Azure Resource Manager

One of our most popular blog posts (drawing almost a quarter of all visitors to our site) has been Mark Briggs’ excellent Snapshot VMs in Azure guide from August 2014. Although much of that article is still relevant for virtual machines created using the classic Azure portal, there have been a number of major changes to the Microsoft Azure platform in the interim, including the introduction of the Azure Resource Manager (ARM) deployment model and significant changes to the Azure PowerShell modules, which may result in unexpected behaviour and warnings about deprecated features.

I thought it would be useful to provide an updated guide for creating snapshots from virtual machines which have been created in the new Azure portal (https://portal.azure.com) using the Azure Resource Manager deployment model.


Warning

The following guide has been provided purely for informational purposes and should be thoroughly tested on non-critical systems. CoreAzure cannot be held responsible for any consequences arising from the use of this information.

Step 1: Install Azure PowerShell

If you haven’t already installed the latest Azure modules for PowerShell, you can do so using the following steps:

  1. Open an administrative Command Prompt. The easiest way is to right-click on the Start button and select Command Prompt (Admin).
  2. Enter the command powershell and press Return to start a Windows PowerShell session.
  3. Enter the command Install-Module AzureRM and press Return.
  4. If you are prompted to install the NuGet provider, type Y and press Return to confirm.
  5. If you receive a warning about an untrusted repository, type Y and press Return to add PSGallery to the list of trusted repositories.
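For reference, the commands from the list above look like this when entered in the elevated PowerShell session (the final line is just an optional check that the module installed correctly):

Install-Module AzureRM
# Optional: confirm the module is present and check its version
Get-Module AzureRM -ListAvailable | Select-Object Name, Version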

Step 2: Validate current VM configuration

We now need to take a look at the configuration of the virtual machine we wish to snapshot, as the script we will be using needs details such as the resource group which contains the VM, its network configuration, storage account and disk configuration, and the Azure region which hosts the VM.

The easiest way to do this is to log in to the new Azure portal (https://portal.azure.com) and open the resource group containing the VM you wish to snapshot. This will provide a list of all the resources associated with the virtual machine.

For this demonstration, I have created a test resource group named snapshotrg in the UK South region containing the resources shown in the screenshot below:

Resource Group Properties

As you can see, the resource group contains a virtual machine named snapshotvm connected to a virtual network called snapshotvn via a network interface named snapshotvm156. This network interface has a public IP address assigned named snapshotpip, along with a Network Security Group called snapshotnsg which contains a single rule to allow incoming RDP connections.

The virtual machine is connected to a storage account named snapshotsa which contains a single container named vhds. Inside this container is the virtual machine’s OS disk named snapshotvm20170221112834.vhd.

We can ignore the other storage account (snapshotrgdiag735) listed in the resource group as this has been automatically generated by Azure for boot logging purposes and does not need to be included in any snapshot operations.

Virtual Machine Properties

If we look at the properties of the virtual machine itself, we can see that it is a Windows server with a VM size of Standard F1s. The public IP address it has been assigned is 51.140.29.163 which has been given a DNS name of snapshotvm.uksouth.cloudapp.azure.com. We can also make a note of the Subscription Name.

The final piece of information we need to gather is the storage account key used to communicate with the storage account (as our script will be creating snapshots in the same container as the original OS disk). To do this, simply open the properties of the storage account (in my case, snapshotsa) and select the Access keys option. You should now be presented with two access keys, either of which can be copied into our script in Step 3.
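If you prefer to stay in PowerShell, the access keys can also be retrieved with the Get-AzureRmStorageAccountKey cmdlet once you are logged in (see Step 4). This is a sketch using my example names; note that the shape of the output differs between older and newer versions of the AzureRM module:

$keys = Get-AzureRmStorageAccountKey -ResourceGroupName "snapshotrg" -Name "snapshotsa"
# In recent AzureRM versions this returns an array of key objects
$keys[0].Value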

Step 3: Populate local variables

We can now use the information gathered in Step 2 to start creating our snapshot script. Launch Windows PowerShell ISE and enter the following into the script pane (replacing the text in angle brackets with the details relevant to your environment). To see the code I used for the snapshotvm virtual machine, click on the Example tab:

Code:
$resourceGroupName = "<Insert Resource Group Name Here>"
$location = "<Insert Azure Region Here>"
$vmName = "<Insert VM Name Here>"
$vmSize = "<Insert VM Size Here>" 
$vnetName = "<Insert vNet Name Here>"
$nicName = "<Insert NIC Name Here>" 
$dnsName = "<Insert DNS Name Here>" 
$diskName = "<Insert Disk Name Here (omitting the .vhd extension)>" 
$storageAccount = "<Insert Storage Account Name Here>" 
$storageAccountKey = "<Insert Storage Account Key Here>" 
$subscriptionName = "<Insert Subscription Name Here>" 
$publicIpName = "<Insert Public IP Address Name Here>"
Example:
$resourceGroupName = "snapshotrg" 
$location = "UK South" 
$vmName = "snapshotvm" 
$vmSize = "Standard_F1s" 
$vnetName = "snapshotvn" 
$nicName = "snapshotvm156" 
$dnsName = "snapshotvm" 
$diskName = "snapshotvm20170221112834" 
$storageAccount = "snapshotsa" 
$storageAccountKey = "<OBFUSCATED>" 
$subscriptionName = "Pay-As-You-Go" 
$publicIpName = "snapshotpip"

We can concatenate some of the information provided above to give us the full name of the disk blob, target backup disk blob and full path to the VHD which will be stored in the following variables:

$diskBlob = "$diskName.vhd"
$backupDiskBlob = "$diskName-backup.vhd"
$vhdUri = "https://$storageAccount.blob.core.windows.net/vhds/$diskBlob"
$subnetIndex = 0

Step 4: Login to your Azure subscription

We now need to configure our script to login to Microsoft Azure and connect to the right subscription. For maximum security, Microsoft recommend using a service principal and certificate to login to Azure. This is especially true when you have created batch scripts or apps which need to run without prompting for additional credentials (and which you wouldn’t necessarily want to run under your own credentials). However, configuring this method of authentication falls outside the scope of this tutorial, so we will be using Azure AD credentials for simplicity.

To log in to Azure and connect to the required subscription, we can use the Add-AzureRmAccount (also available under its Login-AzureRmAccount alias) and Set-AzureRmContext cmdlets in conjunction with the $subscriptionName variable we defined earlier:

Add-AzureRmAccount
Set-AzureRmContext -SubscriptionName $subscriptionName

When these commands are run, a window should automatically appear prompting you to log in to Azure. Assuming the correct credentials are provided, the window will disappear and return you to the active PowerShell session.

Step 5: Create backup disk

To create a snapshot of the disk, we first need to power off the virtual machine using the following command:

Stop-AzureRmVM -ResourceGroupName $resourceGroupName -Name $vmName -Force -Verbose

We can then check to see if a backup has already been created using the following commands:

$ctx = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageAccountKey
# Count blobs in the vhds container whose name matches the backup disk
$blobCount = Get-AzureStorageBlob -Container vhds -Context $ctx | where { $_.Name -eq $backupDiskBlob } | Measure | % { $_.Count }

If no backup disk is currently found in the container, we can proceed with creating a copy. Although the copy operation should be relatively quick (as the target file is located in the same storage container), I’ve included a while loop to report the copy status every 10 seconds. This might prove useful if you want the snapshot to be copied to a different region or to a local file server:

if ($blobCount -eq 0)
{
    # Start a copy of the OS disk blob within the same vhds container
    $copy = Start-AzureStorageBlobCopy -SrcBlob $diskBlob -SrcContainer "vhds" -DestBlob $backupDiskBlob -DestContainer "vhds" -Context $ctx -Verbose
    $status = $copy | Get-AzureStorageBlobCopyState
    $status
    # Poll the copy status every 10 seconds until it is no longer pending
    While ($status.Status -eq "Pending") {
        $status = $copy | Get-AzureStorageBlobCopyState
        Start-Sleep 10
        $status
    }
}

We can now check the vhds storage container in the portal to confirm that the copy has been created:

Original VHD and Backup

Step 6: Delete original resources

With our snapshot created, we can test the restore process by deleting some of the original resources. The following script should delete the original virtual machine, along with its disk, network interface and public IP address:

Remove-AzureRmVM -ResourceGroupName $resourceGroupName -Name $vmName -Force -Verbose
Remove-AzureStorageBlob -Blob $diskBlob -Container "vhds" -Context $ctx -Verbose
Remove-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $resourceGroupName -Force -Verbose
Remove-AzureRmPublicIpAddress -Name $publicIpName -ResourceGroupName $resourceGroupName -Force -Verbose

To validate that the resources have been deleted, log back into the Azure portal and open the properties of your resource group. The only resources remaining should be the Network Security Group, Virtual Network and storage accounts:

Resource Group After Deletion

Step 7: Recreate original disk

We still have the name of the original disk recorded in our $diskBlob variable, so we can easily recreate this disk by creating a copy of our backup disk using this name. Again, I’ve included a 10-second status check loop in case the copy operation takes longer than expected:

$copy = Start-AzureStorageBlobCopy -SrcBlob $backupDiskBlob -SrcContainer "vhds" -DestBlob $diskBlob -DestContainer "vhds" -Context $ctx -Verbose
$status = $copy | Get-AzureStorageBlobCopyState 
$status 
While($status.Status -eq "Pending"){
  $status = $copy | Get-AzureStorageBlobCopyState 
  Start-Sleep 10
  $status
}

Step 8: Recreate resources

With the original disk now back in place, we can proceed with recreating the virtual machine and its associated network resources:

$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroupName
$pip = New-AzureRmPublicIpAddress -Name $publicIpName -ResourceGroupName $resourceGroupName -DomainNameLabel $dnsName -Location $location -AllocationMethod Dynamic -Verbose
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $resourceGroupName -Location $location -SubnetId $vnet.Subnets[$subnetIndex].Id -PublicIpAddressId $pip.Id -Verbose
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $diskName -VhdUri $vhdUri -CreateOption attach -Windows
# Deploy the virtual machine from the assembled configuration
New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm -Verbose

Step 9: Examine the result

We can now repeat the checks we carried out in Step 2 to see how our environment has changed. If we open the properties of the snapshotrg resource group, we can see that the snapshotvm machine and the public IP address are present and correct, although the machine is now attached to a newly created network interface which reuses the name snapshotvm156:

Resource Group After Restore

If we look at the properties of the public IP address snapshotpip, we can see that a new public IP address of 51.140.27.233 has been assigned. However, it has retained the correct DNS name (so anything which communicates with the server by DNS will still operate correctly). If we didn’t want the public IP address of the server to change, we could have left the old public IP address in place and reassigned it after the new machine was created:

Public IP Address After Restore
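As a hedged illustration of that alternative approach, creating the public IP address with a static allocation method in the first place would keep the address fixed across restores (this sketch reuses the variables from Step 3; note that static public IPs carry a small additional cost):

$pip = New-AzureRmPublicIpAddress -Name $publicIpName -ResourceGroupName $resourceGroupName -DomainNameLabel $dnsName -Location $location -AllocationMethod Static -Verbose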

If we look at the properties of the virtual machine itself, we can see that the new virtual machine has been created with the correct VM size, location and subscription name:

Virtual Machine Properties After Restore

Finally, looking at the disk configuration of the new virtual machine confirms that it is using the original disk name (although the backup file is still available in the vhds container should you need to restore the snapshot again in future):

Disk Configuration After Restore

I hope this guide proved useful. You can download the full PowerShell script using the button below:

Please do not hesitate to contact me using the form below if you have any queries.



XenApp, XenDesktop and XenMobile are now tightly integrated with Microsoft Azure

Citrix and Microsoft have further strengthened their partnership this week by making it easier for customers to use Citrix’s application and desktop virtualization products in the Microsoft Azure cloud.

Citrix has kicked off its annual partner Summit in Anaheim this week with news of new products available in Azure.

- XenApp Essentials: This new version of Citrix’s core application virtualization product lets customers host applications in Microsoft Azure’s IaaS public cloud and manage them with existing XenApp tools.

- XenDesktop Essentials: The same idea as XenApp Essentials, but for full virtual desktops. It’s targeted specifically at running and managing Windows 10 remote desktops from Azure.

- XenMobile Essentials: This product integrates Citrix’s mobile management software with Microsoft’s Intune mobile management platform.

If you would like some more information or help in looking at the opportunities available, please get in touch.



Microsoft’s customers are now able to use private internet connections with the company’s UK data centers.

Azure ExpressRoute combined with PSN/N3 connectivity is just one of the new services Microsoft has unveiled since the company launched its Azure and Office 365 cloud offering in this country three months ago.

Since then, thousands of customers – including the Ministry of Defence, the Met Police and parts of the NHS – have signed up to take advantage of the sites, which offer UK data residency, security and reliability.

Microsoft has also announced the availability of a second ExpressRoute location in the UK – in Newport, Wales. This second location allows Azure customers to benefit from path diversity for high availability and disaster recovery within the UK.
Azure ExpressRoute lets you create private connections between Azure datacenters and infrastructure on your premises or in a colocation environment. ExpressRoute connections don’t go over the public Internet; they offer more reliability, faster speeds, lower latencies, and higher security than typical Internet connections. In some cases, using ExpressRoute connections to transfer data between on-premises systems and Azure can yield significant cost benefits.

With ExpressRoute, you can establish connections to Azure at an ExpressRoute location, such as an Exchange provider facility, or connect directly to Azure from your existing WAN, such as a multi-protocol label switching (MPLS) VPN provided by a network service provider.
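For those scripting their deployments, circuit provisioning can be started with the AzureRM PowerShell module. The following is a minimal sketch only: the resource names, provider, peering location and bandwidth are illustrative placeholders, and the circuit still has to be provisioned by your connectivity provider before it can be used:

New-AzureRmExpressRouteCircuit -Name "ercircuit" -ResourceGroupName "ernetworkrg" -Location "UK West" -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName "<your provider>" -PeeringLocation "<peering location, e.g. London>" -BandwidthInMbps 200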

If you would like more information on ExpressRoute and how we have configured it for our customers, please get in contact with us.

Microsoft Azure – Single Instance SLA

Microsoft announced on Monday a significant improvement to their Service Level Agreements in respect of a single instance Virtual Machine in Azure – now offering 99.9% availability!

Many organisations with legacy line of business applications are nervous about moving those workloads to the Cloud owing to the SLA provided by the cloud vendor. Often those legacy applications are unable to take advantage of scale-out features and therefore reside on a single instance server, which in turn suffers from relatively poor SLA commitments.

Microsoft have done extensive work to improve the availability of the Azure infrastructure, including innovative machine learning to predict failing hardware early, and premium storage to improve the reliability and performance of attached disks. The net effect of this work is that they can now offer single instance virtual machines in Azure with 99.9% availability, allowing organisations to take advantage of the agility of the cloud without compromising on their expectations of availability.

To qualify for the single instance virtual machine SLA, all storage disks attached to the VM must use Azure Premium Storage, which offers both high availability and performance (up to 80,000 IOPS and 2,000 MBps throughput per VM).
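As a hedged sketch of what that means in practice for unmanaged disks, the VM’s VHDs need to live in a Premium (SSD-backed) storage account, created with the Premium_LRS SKU (the names below are placeholders; older versions of the AzureRM module use -Type instead of -SkuName):

# Create a Premium Storage account to hold the VM's disks
New-AzureRmStorageAccount -ResourceGroupName "myrg" -Name "mypremiumsa" -Location "UK South" -SkuName Premium_LRS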

To put that SLA into context, 99.9% availability equates to less than nine hours of downtime per year (0.1% of 8,760 hours is roughly 8.8 hours) – there are very few organisations running their own infrastructure with single instance line of business applications that can provide that level of availability.

Of course, if your legacy line of business applications do support scale-out then you can continue to build multi-machine high availability by having two or more virtual machines deployed in the same Availability Set or by utilising VM Scale Sets – both of which provide machine isolation, network isolation, and power unit isolation across multiple virtual machines.

If you want to know more about how you can migrate applications and workloads safely, securely and efficiently to Azure then please feel free to contact us here at CoreAzure.

Microsoft Azure Backup – If you have a backup strategy in place, are your backups secure?

 

What is the cost of recovering from the business impact of a cyber-attack? If you have a backup strategy in place, are your backups secure?

While data backups are essential to effective IT security, mismanagement and mishandling of backups can often add to an organisation’s security woes. With the CRN Quarterly Ransomware Report identifying over 120 separate ransomware families and a 3,500% increase in cybercriminal internet infrastructure for launching attacks since the beginning of the year, it’s probably a good time to review what measures you have in place to ensure that your backups remain safe.

Having recently reviewed Microsoft Azure Backup for a client engagement, its new security features are further testament that Microsoft continues to build security into its cloud services, providing capabilities to protect an organisation’s cloud backups.

These security features ensure that customers can secure their backups and recover data using cloud backups if production and backup servers have been compromised.

These features are built on three principles – Prevention, Alerting and Recovery – to enable organisations to increase their preparedness against attacks and equip them with a robust backup solution.

1. Prevention – An additional layer of authentication is added whenever a critical operation such as Change Passphrase is performed. This validation ensures that such operations can be performed only by users with valid Azure credentials.

2. Alerting – An email notification is sent to the subscription admin whenever a critical operation such as Delete Backup Data is performed. This email ensures that the user is notified promptly about such actions.

3. Recovery – Deleted backup data is retained for an additional 14 days from the date of deletion. This ensures the data remains recoverable within that period, so there is no data loss even if an attack occurs. A higher minimum number of recovery points is also maintained to guard against corrupt data.

If you would like to discuss this further, then please contact us using the form below. For more information on how to turn these features on within the Azure platform, simply visit this link.



The Cloud changes everything

The single biggest business opportunity to impact government is the arrival of ubiquitous internet.

Take an example of an organisation that ignored the business implications of the internet: Blockbuster Video. One of the last actions of its board before it went into receivership was to order a revamp of its website and stores: a new front end on an old business model. For Netflix, the arrival of the internet meant an entirely new business – Netflix didn’t duck the business implication of the internet. Public services are just the same: in my view, the single biggest failing would be to duck the business implication. So what is it?

Just as in the private sector, what the cloud (which is just utility, rather than private, internet) means for large vertically-integrated public corporates is progressive dis-integration and separation of business from technology layers. As a result, the business focuses increasingly on outcomes and consumes standard processes from the cloud; and because this stuff is standard, it attracts lots of customers, and so gets lots of investment and innovation. Take a look at the annual Meeker report (published on the internet – where else): Slack (comms); Stripe (online payments); Square (offline payments); DocuSign (transaction mgmt.); Zenefits (HR); Directly (customer service); Anaplan (ERP); Greenhouse (recruiting); Checkr (background checks). Increasingly these are becoming low-code and avoiding the need for ‘IT people’ altogether. In fact, TechCrunch reckoned in 2015 that pretty much all of the software and services a reasonably-sized organisation needs can be consumed from the cloud for approximately $75 per month.

Organisations using these are cheap to run, agile, data-driven, and responsive to customers – all the things we want our public services to be.

Industry agrees: here’s the TechMarketView editor on 26 September:

We believe infrastructure services providers will play a crucial role in enabling the digitisation of the enterprise. The underlying infrastructure (e.g. networks, processing, storage) is the platform for the emerging software and services that will fuel the digital enterprise and will therefore be an investment priority for many CIOs.

However, cloud and the digitisation of processes and services will both drive and suppress overall IS market growth at the same time. On the one hand it will draw new investment into the market but by the same token we will see cannibalisation of traditional outsourcing services and an increase in automation leading to cheaper IT services and consequently lower revenue.

So what does all this mean for government?  Three simple things:

  1. We need to start rethinking our business models to be ‘of the net’, not just ‘on the net’; that means taking cloud-first and PaaS much more seriously.
  2. Local service providers need to evolve business models, and procure tech, based on accessibility, re-usability, and consumption of data and capability over that cloud/PaaS. That means proportionate approaches to risk, open data assets, open business logic, open APIs, opening all those organisational black boxes to expose those uncomfortably inefficient value chains, all those ‘commercial-in-confidence’ clauses that hide some of those really embarrassing contracts.
  3. Local public services (health, social care, housing, third sector, and local government) need, somehow, to evolve consistent approaches to consumption & re-use; we’re not competing orgs with different shareholders, yet we behave as if we are. So we need proactive initiatives from the centre that make consumption and re-use easy to specify, procure, & bolt together. Let’s start with commodity tech; then move to commodity business capability.

Imagine this: a place on Gov.uk where you could see the ‘live DNA’ of local services: drill into a local authority’s service to see its service architecture: the actual value chain, with its common service patterns, consumed modules, the supplier, and the cost.  Then imagine, at a click, looking horizontally across other local service providers: exposing everyone that’s currently consuming that service pattern and underlying modules, whose price lowers with each new additional customer.  Imagine the innovation and investment such a consolidated, accessible service architecture would trigger across public, private, and third sectors.  Imagine, too, the democratic impact such accountability would have on our local services.

In fact, probably the single biggest test of the public value of most of us in this room as technologists in government is whether we’ve enabled cloud-based service architectures that make ourselves redundant within a generation or so! Let’s divert more resources to the front line; as far as the public are concerned, management and admin is little more than a necessary evil.  This would be a reasonable indicator of whether we’ve achieved Netflix, or Blockbuster, local government; whether we’re lots of public corporates polishing our websites or whether we’ve the courage to rethink our business model – and even our own future.

 

Microsoft Azure available from UK data centres

We at CoreAzure are delighted at Microsoft’s announcement of the general availability of their UK data centres, enabling businesses and government bodies to keep their information secure within the UK.

Notable first customers of these UK cloud services include the Ministry of Defence, whose 230,000 employees will use Office 365 and Azure, and the South London and Maudsley NHS Foundation Trust, the largest mental health trust in the UK, among others.

Microsoft’s announcement to move its services to UK data centres will in our opinion enable many organisations to review their current business and IT strategies in relation to public cloud adoption in the UK.

The new facilities in Cardiff, Durham and London will host Microsoft’s Azure cloud platform and Office 365 productivity suite, with Dynamics CRM Online to be added in the first half of 2017. Not every Azure service is generally available yet within the UK, but it is worth reviewing what Microsoft Azure can offer your organisation. https://lnkd.in/d2Pna-j

The CoreAzure team pride themselves on being specialists in Microsoft Azure, with an ability to maximise your existing investment in Microsoft technologies whilst realising your vision to adopt the cloud. If you are thinking of moving to Microsoft Azure, let us help you on that journey.


Microsoft launches new Azure Information Protection service

Microsoft has announced the launch of the Microsoft Azure Information Protection service, which builds on the existing Azure Rights Management (Azure RMS) offering and the company’s recent acquisition of Secure Islands.

Microsoft have now combined Secure Islands’ industry-leading data classification and labelling technology with Azure RMS.

What does Azure Information Protection provide?

Document tracking and revocation
Documents can be tracked to show who has opened the document and from which geographical location. Users can choose to revoke access to the document in the event of unexpected activity.

Protection using encryption, authentication, and use rights
For sensitive data, protection can be applied after classification and labelling. This includes encrypting the document, which requires authentication of the user and enforces user rights that define what can be done with the data.

Automatic, user-driven, and recommended classifications
Data can be classified and labelled automatically through content detection rules. Users can also manually classify data or be prompted to make an informed classification decision.

Classification overrides and justifications
Based on policies and rules, users can be empowered to override a classification and optionally be required to provide a justification.

Flexible policy and rules engine
A set of default sensitivity labels is available, with options to define custom labels based on business needs. Rules can also be configured for actions to take based on classification.
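Although Azure Information Protection itself is configured through the portal, the underlying Azure RMS service it builds on can already be activated and inspected from PowerShell. A minimal sketch, assuming the AADRM administration module is installed:

Connect-AadrmService   # prompts for tenant administrator credentials
Get-Aadrm              # shows whether Azure RMS is enabled for the tenant
Enable-Aadrm           # activates the service if it is not already enabled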

What an excellent addition to Azure RMS – we look forward to Microsoft Azure Information Protection becoming available in public preview in July 2016.

If you are interested in learning more about this innovative solution then please visit the website https://www.microsoft.com/en-us/cloud-platform/azure-information-protection