SQL Server data files in Azure


The next theme in my Hybrid Cloud journey is “SQL Server data files in Azure”.  From SQL Server 2014 onwards (including 2016) you can put your database data files (and log files, in case you thought it was only the data files) from your on-premises SQL Server (or from your Azure VM) into blob storage in Azure.

Need to take a breath here … and ask … for an on-premises instance, why on earth would you do this?  Who in their right mind would do this, and what would be the benefit?  However, I can see there might be a few benefits for Azure VMs.  Anyway, I looked up the reasons on MSDN:

  • It can make migration to Azure easier – move your data files to Azure first, and then it won’t feel like such a stretch to create a SQL Server VM in Azure and attach the same database files.
  • Cost and storage benefits.  The storage in Azure is pretty large, so you shouldn’t run out of space. It’s also relatively cheap compared to your on-premises storage, so if your SAN is full and the piggy bank is empty it might offer an alternative.
  • HA.  This could be a nice alternative.  If your on-premises SQL Server machine has crashed, or your Azure VM has gone walkabout, you can just re-point a new instance at the data files in the blob storage.
  • Snapshot benefits.  Especially nice for SQL Server running on Azure VMs.  This allows pretty much instantaneous backups and fast restores. I’ve got this down to check out as the next step in my Hybrid Cloud journey.

High Level Steps

  1. Set up a Storage Account in Azure and create a Shared Access Signature (SAS) key. I had hoped we might be able to use a storage account identity and access key, as I did in the set-up for SQL Server backup to URL, but that option isn’t supported here.
  2. Create a Credential using the SAS key.
  3. Create a Database that uses the blob storage.

Code and run through

OK – this was a bit messier than I thought and I had to do it twice.  My fault – I initially created a brand new Storage Account in one of my Azure Resource Groups.  However, I created it as a Blob storage account rather than as a General storage account.

I got this error message:

Msg 5120, Level 16, State 156, Line 32
Unable to open the physical file "https://sikfinkdatafiles.blob.core.windows.net/sqldata/SandboxBlobStorageData.mdf". Operating system error 13: "13(The data is invalid.)".

I guessed it might be to do with the type of Storage Account, so I created another one, this time a General storage account, and then created a Blob container. Once I got that sorted it all worked as expected.

In the Azure Portal – Create a general storage account and create a container (folder) to hold your files.

In the screenshot below you can see that I have created a storage account called sikfinkdata

Data File in Azure 0

and added a blob container called sandboxdb:

Data File in Azure 1

In the Azure Portal – Generate a Shared Access Signature (SAS)

If you click on Shared access signature, as outlined in the image above, the following dialogue box pops up:

Data File in Azure 2

I gave it all the permissions as above and then clicked Generate SAS.

Data File in Azure 3

If you copy the key it looks like this:


Note: remove the ? from the beginning of the key

In SSMS – Create a credential based on the SAS Key

IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'https://sikfinkdata.blob.core.windows.net/sandboxdb')
CREATE CREDENTIAL [https://sikfinkdata.blob.core.windows.net/sandboxdb]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
,SECRET = 'sv=2015-04-05&ss=bfqt&srt=sco&sp=rwdlacup&se=2016-09-03T01:40:59Z&st=2016-09-02T17:40:59Z&spr=https&sig=XXXXXXXXXXXXXXXXXXXXXXXXXX';

If you leave the ? in the key you may get an error message like this – you need to remove it:

Msg 5123, Level 16, State 10, Line 31
CREATE FILE encountered operating system error 1117(The request could not be performed because of an I/O device error.) while attempting to open or create the physical file 'https://sikfinkdata.blob.core.windows.net/sandboxdb/SandboxBlobStorageData.mdf'.

Create the database with the files on the blob storage

CREATE DATABASE Sandbox_Blob_Storage
ON
( NAME = Sandbox_Blob,
  FILENAME = 'https://sikfinkdata.blob.core.windows.net/sandboxdb/SandboxBlobStorageData.mdf' )
LOG ON
( NAME = Sandbox_Blob_Log,
  FILENAME = 'https://sikfinkdata.blob.core.windows.net/sandboxdb/SandboxBlobStorageData.ldf' );

You can now see the files via the Azure Portal

Data File in Azure 4

And the database in SSMS

Data File in Azure 5

If you like your management in one place you can open the Storage Account in SSMS.

Data File in Azure 6

And you can always look in Azure Storage Explorer, which is a great tool available on CodePlex.

Data File in Azure 7



SQL Server backup to URL


This lets you use Azure Blob Storage as a backup destination.

Benefits:

  • Potentially reduces the amount of on-premises storage you need, which could give cost savings.
  • Rather than waiting n hours for your backups to be returned from your favourite offsite storage, you have immediate access.
  • Once in Azure you could use geo-replication to further enhance your redundancy. You know, in case of the loss of a data centre, country, region … nuclear war …
  • It can act as a migration path for moving data to and from the cloud.

Costs:

  • Although you don’t pay for putting data into Azure, you most certainly pay for taking data out (egress) as well as the actual storage cost. Just factor that into your overall costs.

Limitations:

  • You can’t use premium storage in Azure.
  • The maximum backup size is 1 TB, which is the maximum size of a single page blob in Azure, although you could stripe the backup set to support backup sizes up to 12.8 TB.
  • There are others – please check the documentation.
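For a flavour of what striping looks like, here is a hypothetical sketch – the database, container and file names are made up, and it assumes a SAS-based credential named after the container URL already exists (as in the data files walkthrough) plus SQL Server 2016’s block blob support:

```sql
-- Hypothetical example: stripe a large backup across several block blobs.
-- Assumes a credential named 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups'
-- created WITH IDENTITY = 'SHARED ACCESS SIGNATURE' already exists.
BACKUP DATABASE BigDatabase
TO URL = 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups/big_1.bak',
   URL = 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups/big_2.bak',
   URL = 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups/big_3.bak'
WITH COMPRESSION, STATS = 5;
```

Each stripe becomes its own blob, which is how the overall backup can exceed the single-blob size limit.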

How it works

  1. Once you have a Storage Account set up in Azure you have 2 choices:
    • You either need to use a storage account identity and access key
    • Or use a Shared Access Signature (SAS) token
  2. Create a Credential. This is used to store the authentication information required to connect to the Azure Blob storage.
  3. Back up the database using the TO URL clause of the BACKUP command.


I’m running a simple example here using the storage account identity and the access key from Azure rather than the Shared Access Signature.

Go to your Storage Account:


and Click on Access Keys.  Against Key1 use the Copy icon to copy the access key.


Create a container in the Storage Account to organize your files.  To do this click on Blobs inside your Storage Account


And then click on + Container.  I’ve created one here called sandboxbackups.  Note that the full URL of the container can be seen here.

Back in SSMS create a credential with the access key.  The IDENTITY is the name of the Storage Account and the SECRET is the access key copied above.  The CREDENTIAL is a name you make up for your credential.

USE master;
-- Create a credential using the access key
IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'sikfink_credential')
CREATE CREDENTIAL sikfink_credential WITH IDENTITY = 'sikfinkbackups'
,SECRET = '<access key here>';

And then run the backup as below.  Note the format of the URL is:

https://<storage account name>.blob.core.windows.net/<container name>/<backup name>.bak

Although https is optional, it’s recommended.  Use the CREDENTIAL created above.

-- Backup the database
BACKUP DATABASE Sandbox
TO URL = 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups/sandbox.bak'
WITH CREDENTIAL = 'sikfink_credential'
,STATS = 5;

and in the results pane you should get the usual confirmation that the database has been backed up:


And if you look in the container you can see the backup file:


To restore the database:

-- Restore the database
RESTORE DATABASE Sandbox
FROM URL = 'https://sikfinkbackups.blob.core.windows.net/sandboxbackups/sandbox.bak'
WITH CREDENTIAL = 'sikfink_credential';


SQL Server in the Hybrid Cloud

Hybrid Cloud.  I hear the term a lot and always wonder: why bother?  Why not just jump straight in? What’s the point of doing something half-heartedly?  I guess that’s the developer in me – always wanting to play with the latest toys and trusting the underlying technology will deliver.

That is, until I have to manage something in production – then I want everything developed to in-house standards, unit tested, reviewed, put through at least SIT and UAT, and then load tested before going through a rigorous release process and being authorised by CAB.  Just in case I get called out at 3am, and again at 3.30, and … still … not doing that job right now. Breathe, relax …

So the Hybrid Cloud is a marketing term designed to tempt people towards the cloud: do a little bit in the cloud, gain some confidence, and – the hope is – eventually move over fully.

SQL Server has a range of Hybrid Cloud offerings on the Microsoft cloud, Azure, to tempt you in, and over the next few weeks I’m planning to go over them here.  This is mainly for my sanity and to give me somewhere to go and remind myself what they are!

SQL Server backup to URL

You can back up directly to Azure blob storage, or back up on-premises and copy the backups to Azure. You can then take advantage of the relatively inexpensive storage in Azure as well as all the cloud benefits of multiple copies, geo-replication etc.  This can also be used as a path for migrating to the cloud.

SQL Server data files in Azure

Amazingly, you can use Azure blob storage for the database files of an on-premises instance.  I guess you wouldn’t want to do this all the time, but again it might help in a migration strategy, or if you literally had no disk left on-premises.  An alternative use is as a place to put the data files if you are using Azure VMs.

File Snapshot Backups

This requires that the database files are already in Azure: either SQL Server on-premises with the database files hosted on Azure storage as above, or an Azure VM with its data files in Azure.  This is supposed to give near-instantaneous backups and quicker restores, so it could be useful as an HA/DR alternative.
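I haven’t tried this one yet, but from the documentation a file-snapshot backup is just a regular backup to URL with the FILE_SNAPSHOT option – something along these lines, reusing the names from the data files walkthrough:

```sql
-- Sketch of a file-snapshot backup (SQL Server 2016) of a database
-- whose data files already live in Azure blob storage.
BACKUP DATABASE Sandbox_Blob_Storage
TO URL = 'https://sikfinkdata.blob.core.windows.net/sandboxdb/Sandbox_Blob_Storage.bak'
WITH FILE_SNAPSHOT;
```

Instead of streaming the data pages, it takes Azure blob snapshots of the underlying files, which is what makes it so fast.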

Stretch SQL Server table to Azure SQL DB

Stretch enables a table to store older (cold or warm) data in Azure, keeping newer, more frequently queried (hot) data in the on-premises table, with all the data available to query. The optimiser knows where the data is and adjusts accordingly.  This could be a good way to archive infrequently accessed data – but watch the cost.
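For a flavour of the syntax – the table name below is invented, I haven’t run this myself, and the database must first be enabled for Stretch (via the wizard or ALTER DATABASE … SET REMOTE_DATA_ARCHIVE) before the table-level step works – it looks roughly like this:

```sql
-- Instance level: allow remote data archiving (SQL Server 2016).
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Table level: start migrating eligible (cold) rows to Azure.
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```

Once MIGRATION_STATE is OUTBOUND, rows trickle up to Azure in the background while the table remains fully queryable.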

Transaction Replication to Azure

Transactional Replication can be used to replicate data from an on-premises or IaaS database to Azure SQL DB. This could be used to replicate data closer to users to improve query performance, or as part of a migration strategy.

Always On Availability Group Replica in Azure IaaS

You can configure an on-premises AOAG to have an asynchronous replica in an Azure VM. Potentially this is a low-cost disaster recovery option, or it could be used to offload reporting to Azure.  In that way it can feel a bit like transactional replication, the difference being that this replicates the whole database, whereas with transactional replication you could choose an individual table to replicate.

Live Query Statistics

So you work in an organisation that runs ‘mission critical’ applications but doesn’t want to pay for tooling.  You know – they want you to keep the lights on but won’t invest in any of the great 3rd-party monitoring tools that Microsoft’s partners have written to make your life easier. So you do your best and write your own suite of monitoring and alerting tools.  Sometimes these are pretty sophisticated, and if you have the time to invest in them they can be really powerful.  Or you may be at the other end, and you just have your ‘Top 5 worst performing queries’ script that you run when things are going south.

Either way, you are going to love some of the new features in SQL Server 2016 that have been put in place to help those monitoring and maintaining SQL Server instances.  Actually, these are great features for developers too, and will help you tune your queries (you do that, right?).

One of those new features is Live Query Statistics, a really nice little feature that gives you a live view of a query’s execution plan while it is running.

You can turn it on in several different ways.  The easiest is to hit its icon in Management Studio, which sits next to Include Actual Execution Plan:


But you can also right click inside the text of a query:


And you get a new tab alongside Results and Messages with your Live Query Statistics information in it, like the screenshot below.  How cool is that!


So you get a running progress of the query, both at the top and in the bottom left.  It’s useful in both places because if you switch to the Results or Messages pane you can still see how far through you are in the bottom left.  Even better, each step in the execution plan has its own little percentage counter, so you can see in real time where the hold-up or the pain is.

For example, in the screenshot below you can see that the hold-up is in the bottom right, in the read of the FactResellerSalesXL table.


It’s also worth noting that the tool is available via Activity Monitor.  If you right-click on a query in the Active Expensive Queries pane you can activate Live Query Statistics by clicking Show Live Execution Plan, and it will open in its own tab.


Just a final note: there is a warning on the MSDN page reminding the user that this feature is primarily intended for troubleshooting, and that using it can ‘moderately slow the overall query performance’. You have been warned!!
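Under the covers the live data comes from the sys.dm_exec_query_profiles DMV, so if you prefer T-SQL you can poll it yourself – the session id below is just an example, and the target session must have an active profiled query (e.g. one started with Live Query Statistics, or with SET STATISTICS XML ON):

```sql
-- Watch the per-operator progress of a running query.
SELECT session_id,
       node_id,
       physical_operator_name,
       row_count,            -- rows produced so far by this operator
       estimate_row_count    -- the optimiser's estimate
FROM sys.dm_exec_query_profiles
WHERE session_id = 53        -- example spid; substitute your own
ORDER BY node_id;
```

Comparing row_count against estimate_row_count per operator is a quick way to spot where the estimates have gone wrong.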

Hope you enjoy this new feature and it makes your life a little easier.

Row Level Security

SQL Server 2016 introduces Row Level Security, or RLS – a feature that has been long anticipated by many. It lets developers and DBAs set up security so that only certain users can access data at the row level.  In effect it’s row-level filtering, depending on the security policy you set up.

Where would this be useful?

Several places potentially:

  1. If you have a business requirement that defines who can select, insert, update or delete data.
  2. Different departments using the same database, where each should only be able to access or modify its own data.
  3. Multi-tenant databases.  With the move to Azure SQL Database there is a strong Software as a Service (SaaS) model evolving, and if you are administering one of those this could be right up your street.

How’s it work?

  1. Create an inline table-valued function that does the filtering
  2. Bind the function to a table
  3. Create a security policy


Set up: create a user account for department 1 that can only see/modify its own data, and a company account that will be able to see ALL data.

-- Create some users

CREATE USER Dept1 WITHOUT LOGIN; -- Should only be able to see dept1 data
CREATE USER Company WITHOUT LOGIN; -- Should be able to see all departmental data

-- Create a table to hold some data

IF EXISTS(SELECT * FROM sys.tables WHERE name = 'Customers_RLS')
	DROP TABLE Customers_RLS;

CREATE TABLE Customers_RLS (
	CustomerID INT IDENTITY(1,1) PRIMARY KEY,
	CustomerName VARCHAR(100) NOT NULL,
	CustomerExpediture DECIMAL(10,2) NOT NULL,
	DeptID INT NOT NULL
);

IF EXISTS(SELECT * FROM sys.tables WHERE name = 'Depatment_RLS')
	DROP TABLE Depatment_RLS;

CREATE TABLE Depatment_RLS (
	DeptID INT IDENTITY(1,1) PRIMARY KEY,
	DepartmentName VARCHAR(100) NOT NULL
);

-- Let the test users read and update the data (the RLS policy will do the filtering)
GRANT SELECT, UPDATE ON dbo.Customers_RLS TO Dept1, Company;
GRANT SELECT ON dbo.Depatment_RLS TO Dept1, Company;

-- Populate the table

INSERT INTO Customers_RLS (CustomerName, CustomerExpediture, DeptID)
VALUES
('John Bird Logisitics',1000.00,1),
('The Ink Group',25.00,1),
('Fish Supplies',3000.00,2),
('Astro Space Ltd',400.00,2),
('Screens R Us',600.00,3);

INSERT INTO Depatment_RLS (DepartmentName)
VALUES ('Dept1'), ('Dept2'), ('Dept3');

-- Check data

SELECT c.CustomerID, c.CustomerName, c.DeptID, d.DepartmentName
FROM Customers_RLS c
INNER JOIN Depatment_RLS d on c.DeptID = d.DeptID;

At this stage we have loaded some data into a couple of tables joined together below:


If we select this data as the Dept1 user:

-- Check data as Dept1 user
EXECUTE AS USER = 'Dept1';
SELECT * FROM Customers_RLS ;
REVERT;

We will see all the rows. At this stage we haven’t added anything.


Here I am adding a specific schema called Security to keep the function in. The function itself acts as a filter depending on the user name it’s being run as and the DeptID it picks up from the department table.

-- create a new schema (best practice advises putting RLS objects in a security schema)
CREATE SCHEMA Security;
GO

-- create the inline table-valued function

CREATE FUNCTION Security.DepartmentalSecurityPolicy(@DeptID int)
	RETURNS TABLE
	WITH SCHEMABINDING
AS
	RETURN SELECT 1 AS securityResult
	FROM dbo.Customers_RLS c
	INNER JOIN dbo.Depatment_RLS d ON c.DeptID = d.DeptID
	WHERE d.DepartmentName = USER_NAME()
	AND c.DeptID = @DeptID;


Finally, I need to create a security policy that links the table with the function. Note the filter syntax and the block syntax.

-- create a security policy

CREATE SECURITY POLICY Security.customerPolicy
	ADD FILTER PREDICATE Security.DepartmentalSecurityPolicy(DeptID) ON dbo.Customers_RLS,
	ADD BLOCK PREDICATE Security.DepartmentalSecurityPolicy(DeptID) ON dbo.Customers_RLS
WITH (STATE = ON, SCHEMABINDING = ON); -- these are the defaults

Now if I run the select again as the Dept1 user:

-- Check data as Dept1 user
EXECUTE AS USER = 'Dept1'; -- stay as Dept1 for the updates below

-- Run a select
SELECT * FROM Customers_RLS ;

I only see dept1 data.


If, as Dept1, I update a row that belongs to Dept1, it will let me:

-- Run an update against their own customer - 1 row affected
UPDATE Customers_RLS SET CustomerExpediture = 2000.00 WHERE CustomerID = 1;

If I run an update against a customer in a different DeptID, it will not update the row:

-- Run an update against another dept customer - 0 rows affected.
UPDATE Customers_RLS SET CustomerExpediture = 2000.00 WHERE CustomerID = 3;


It’s interesting at this point to run the query as me:

-- Check data as me
REVERT; -- back from Dept1

-- Run a select - nothing returned
SELECT * FROM Customers_RLS ;


and you can see I get no rows back either – the filter doesn’t match my user name to any department.

-- OK, I could change the function to let the Company user access everything directly,
-- but I'm going to show that you can do this with roles.

-- What can Company see now? Nothing.

EXECUTE AS USER = 'Company';
SELECT * FROM Customers_RLS ;
REVERT;

Like me, Company has no access via the filter yet.

-- Create a new database role
CREATE ROLE CompanyRole;

-- Assign Company to the role

ALTER ROLE CompanyRole ADD MEMBER Company;

-- Alter function

-- Drop policy first
IF EXISTS(SELECT * FROM sys.security_policies WHERE name = 'customerPolicy')
	DROP SECURITY POLICY Security.customerPolicy;

ALTER FUNCTION Security.DepartmentalSecurityPolicy(@DeptID int)
	RETURNS TABLE
	WITH SCHEMABINDING
AS
	RETURN SELECT 1 AS securityResult
	FROM dbo.Customers_RLS c
	INNER JOIN dbo.Depatment_RLS d ON c.DeptID = d.DeptID
	WHERE (		d.DepartmentName = USER_NAME()
			AND c.DeptID = @DeptID )
	-- Role check
	OR IS_MEMBER('CompanyRole') = 1;


So I’ve changed the inline table function to check either the DeptID filter or, if they are a member of the CompanyRole, let them see everything. Just reapply the policy:

-- Recreate Policy

CREATE SECURITY POLICY Security.customerPolicy
ADD FILTER PREDICATE Security.DepartmentalSecurityPolicy(DeptID) ON dbo.Customers_RLS,
ADD BLOCK PREDICATE Security.DepartmentalSecurityPolicy(DeptID) ON dbo.Customers_RLS
WITH (STATE = ON, SCHEMABINDING = ON); -- these are the defaults

-- check access - see everything.

EXECUTE AS USER = 'Company';
SELECT * FROM Customers_RLS ;
REVERT;



A dacpac file is a Data-tier Application package – a collection of files that describe a database.  You can either create one directly from SSOX (SQL Server Object Explorer), or they are created as a by-product of the build process in Visual Studio. You can use one to deploy databases, or add it as a reference in another Visual Studio project that references the database.

To create one for a database, locate the database in SSOX in Visual Studio, right-click on it and select “Extract Data-tier Application”.


Select File on disk and choose a location.


I recommend you put your dacpac files in a central, reusable place so other projects can reference them easily.  In the example above I’ve only extracted the schema and I’ve excluded all the user logins and permissions.

Data Tools Operations confirms that it has worked


If you locate the file on disk and double-click it,


it unpacks the dacpac into a series of files that make up the definition of the database.  This includes XML files that describe the package as well as a SQL file that contains a scripted version of all the objects.


REMOTE Join Hint

You may have seen or used the more common join hints – LOOP, HASH and MERGE – but what about the REMOTE hint?


I had a problem at work today where I wanted to look up a field on a large table (253 million rows) accessed via a linked server.  The local table doing the lookup was paltry in size (333 rows to be updated out of a table of 5,000 rows).  The remote table was joined on an indexed field, but I was struggling to get the query to complete in what I thought was a reasonable time (sub 10 seconds in this instance).  Re-running it in a test for this blog, I cancelled the query after 30 minutes without it finishing.

Turning on the STATISTICS IO and TIME events and monitoring:


30 minutes before I cancelled the query.


Table ”. Scan count 1, logical reads 787
Table ‘Worktable’. Scan count 1, logical reads 668
(330 row(s) affected)

SQL Server Execution Times:   CPU time = 172 ms,  elapsed time = 4468 ms.

So from 30 minutes+ down to around 4 seconds by using this join hint!  What’s happened is that we have told SQL Server to perform the join on the remote table’s server instead of on the local server.
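The query itself isn’t reproduced here, but the shape of the fix – with invented table, column and linked server names – was an update along these lines; the REMOTE keyword goes between INNER and JOIN:

```sql
-- Hypothetical sketch: names are invented.  REMOTE tells SQL Server to
-- perform the join on the remote server, shipping the small local input
-- across instead of pulling the 253-million-row table back.
UPDATE l
SET l.LookupValue = r.LookupValue
FROM dbo.SmallLocalTable AS l
INNER REMOTE JOIN LinkedSrv.RemoteDb.dbo.BigRemoteTable AS r
    ON l.JoinKey = r.JoinKey
WHERE l.LookupValue IS NULL;
```

With only a few hundred local rows to ship, the indexed lookup then happens on the server that actually holds the big table.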

Note you get the following warning message:

Warning: The join order has been enforced because a local join hint is used.

reminding you that you have chosen to override the optimizer.  This should always be done with care, but in this case the query fulfils the criteria Microsoft recommend for using this join hint: “This is useful when the left table is a local table and the right table is a remote table. REMOTE should be used only when the left table has fewer rows than the right table.”

It’s also worth noting that the REMOTE hint can only be used on an INNER JOIN.