
Monday, 17 September 2012

T-SQL: Allow low privileged user to Truncate table

The problem with TRUNCATE is that it is a DDL statement, rather than a DML statement like the regular DELETE. As such, it's not covered by the db_datawriter database role, so on the face of it you're left with two options:

1) Add the user to the db_ddladmin role (or db_owner)
2) Grant ALTER permissions on the table(s) in question

I'm not really a fan of either of these approaches: the first gives far too much control to the user, and I try to avoid granting object-level permissions where possible.
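For reference, option 2 is a single object-level grant; using the demo objects created below, it would look like this:

-- option 2: object-level grant (shown against the demo table from the walkthrough below)
GRANT ALTER ON OBJECT::dbo.TestData TO NoRights
GO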

An alternative is to make use of the EXECUTE AS clause and wrap your TRUNCATE statement within a stored procedure and grant permissions on that instead.

-- create a test database 
CREATE DATABASE TruncateTest
GO

USE TruncateTest
GO

-- create a low privilege user to test with
CREATE USER NoRights WITHOUT LOGIN
GO

-- create a table to test against
CREATE TABLE dbo.TestData (Id INT IDENTITY(1,1))
GO
-- add some dummy data
INSERT INTO dbo.TestData DEFAULT VALUES
GO 20
-- check the data is ok
SELECT COUNT(*) FROM dbo.TestData
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- try to issue the truncate command
TRUNCATE TABLE dbo.TestData
GO

Msg 1088, Level 16, State 7, Line 1
Cannot find the object "TestData" because it does not exist or you do not have permissions.
-- switch back to my superuser
REVERT
GO

-- create a wrapper procedure to truncate the table
CREATE PROCEDURE dbo.TruncateTestData
AS
TRUNCATE TABLE dbo.TestData
GO

-- grant execute to the test user
GRANT EXECUTE ON dbo.TruncateTestData TO NoRights
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- execute the procedure
EXEC dbo.TruncateTestData
GO

Msg 1088, Level 16, State 7, Procedure TruncateTestData, Line 4
Cannot find the object "TestData" because it does not exist or you do not have permissions.
-- switch back to my superuser
REVERT
GO

-- modify the procedure to execute in the context of the owner (dbo)
ALTER PROCEDURE dbo.TruncateTestData
WITH EXECUTE AS OWNER
AS
TRUNCATE TABLE dbo.TestData
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- execute the procedure
EXEC dbo.TruncateTestData
GO

-- switch back to the superuser
REVERT
GO

-- check the data
SELECT COUNT(*) FROM dbo.TestData
GO

-- tidy up
DROP DATABASE TruncateTest
GO

Tuesday, 17 July 2012

T-SQL Tuesday #32 - A Day In The Life

T-SQL Tuesday has been a little off my radar for the last few months (as has all blogging activity!), so I was delighted when Twitter reminded me that it was that time of the month. Even better, the topic chosen by Erin Stellato (Blog | Twitter) was a great one that I felt I could really engage with.

So here we go. For the record, my job title is Database Administrator.

The task as laid out in the invitation suggested talking through your day on the 11th or 12th July, which just so happened to be the day I resigned from my current role. Writing down a handover list was an interesting task in itself as it gave me the opportunity to put down in words what the key parts of my job were and what skills gap would be left. I have to say, I was surprised (and pleased) that the number of tasks on the list requiring significant handover to another colleague was minimal, as I have been very diligent in ensuring that I was not a Single Point of Failure in my organisation. As such, I have written plenty of documentation and made extensive use of Source Control and Task Tracking systems to ensure that even if people do not know what I've done, there is a paper trail to explain it.

What was apparent was the variety of skills I employ day to day, many of which are not even related directly to SQL Server. Much of my day is spent in project meetings, planning or analysing roadmaps, and although the applications typically have a database backend, it's not the focus of my attention. I am fortunate to be able to dedicate some time each day to reading blogs and technical articles to help stay up to date, although I also spend my fair share of time in firefighting mode. The traditional DBA element probably takes up less than one day a month, partly due to the small size of the estate but also because I've been effective (backslap for myself here!) in automating many of the tasks.

Is my job really a Database Administrator role? I don't think so, and in truth it wasn't from the outset, but this hasn't been a bad thing. I enjoy the variety of being involved in the different aspects of SQL Server and feel it makes me a much more effective professional, and I imagine this is the case for many other fellow SQL Server pros. However, I've encouraged my company not to advertise the role under this job description as I believe it may attract candidates with unrealistic expectations.

As for me, I'm not sure what my next role has in store but I hope it affords me the time to participate in future T-SQL Tuesdays!!

Tuesday, 1 May 2012

T-SQL: Change Schema of User Defined Type

I've been working with one of our developers to try and rationalise some database objects into some sensible schemas using the ALTER SCHEMA syntax:

http://msdn.microsoft.com/en-us/library/ms173423.aspx

Just out of interest, is it me or is the syntax for changing an object's schema a bit strange? Are you altering the schema or the object? Anyhow, I digress...

In our example, we have several User Defined Types which require transfer. Here is some sample code illustrating the problem:

CREATE SCHEMA NewSchema
GO

CREATE SCHEMA OldSchema
GO

CREATE TYPE OldSchema.MyUDT FROM VARCHAR(20);
GO
ALTER SCHEMA NewSchema TRANSFER OldSchema.MyUDT
GO



This gives the rather unhelpful error:

Cannot find the object 'MyUDT', because it does not exist or you do not have permission.

I checked the documentation and there isn't anything that suggests types should be treated any differently so I was stumped. Fortunately, I managed to unearth this post which provides a workaround to the problem.

ALTER SCHEMA NewSchema TRANSFER type::OldSchema.MyUDT
GO
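A quick query against the catalog views confirms the type has moved:

-- confirm the UDT now lives in NewSchema
SELECT t.name AS TypeName, s.name AS SchemaName
FROM sys.types t
    INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name = 'MyUDT';
GO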



I'm not sure why this is not documented in Books Online (or at least not anywhere I can see). One to keep an eye out for.

Thursday, 2 February 2012

T-SQL: SQL2012 Code Snippets

I've never really bought into the idea of object templates in SQL Server Management Studio and in my experience, not many other database professionals have either!

Just recently however, I've been getting increasingly frustrated at the non-standard development patterns of database objects and (just as importantly) the lack of documentation for procedures and functions. I've written in the past about using Extended Properties to help version your database, and this could easily be extended to document objects too. However, in a previous role I picked up quite a nice habit of adding documentation headers to each procedure and function, which gives some basic information to anyone looking at the procedure to give them a headstart in debugging/understanding. A header may look like this:


/*
created by: Richard Brown
date created: 02/02/2012
description: this procedure just returns information about customers
debug:
   declare @custid int = 1
   exec dbo.getcustomers @custid

*/


Encouraging developers to do this in practice is a real problem and this is where code snippets can come in. There are numerous default snippets available in SQL2012 Management Studio and you can access them by hitting CTRL-K, CTRL-X and then tabbing to the appropriate snippet:



Here you can see there are already a few default stored procedure snippets including a "create a simple stored procedure" snippet. You can create/add new snippets as you see fit but for my purposes, I want to just modify the existing snippets so they include my header as default.

Take a note of the location of the snippets by navigating to Tools/Code Snippets Manager:


You can then find the .snippet file and make the appropriate modifications. It's just an XML file, and for my needs I just needed to make the following change:

<Code Language="SQL">
    <![CDATA[
/*
created by:
date created:
description:
debug:

*/
CREATE PROCEDURE $SchemaName$.$storedprocname$
    @$Param1$ $datatype1$ = $DefValue1$,
    @$Param2$ $datatype2$
AS
    SELECT @$Param1$, @$Param2$
RETURN 0 $end$]]>
</Code>


When I hit the snippet shortcut, I now get the updated template for my stored procedure. Hopefully, this will encourage the devs to follow the standards.

Friday, 20 January 2012

T-SQL: Find missing gaps in data

I recently had to solve a problem of finding missing values in a data series and, at the same time, came across this article on the new Analytic Functions in SQL2012.

It turns out that there is a new function which can simplify my solution, although as the problem is on a SQL2008 database, I won't be able to implement it. Still, another piece of the jigsaw.

Here, I'll present queries for each version:

USE TempDB
GO
 


-- setup the table 
CREATE TABLE [dbo].[HoleyTable](
    [Date] [datetime] NOT NULL,
    [Country] [char](3) NOT NULL,
    [Val] [decimal](22, 10) NULL
) ON [PRIMARY]
GO


 -- and some sample data
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100131','GBP', 40) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100331','GBP', 30) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100531','GBP', 20)
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20101031','GBP', 50) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20101231','GBP', 55) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100228','USD', 20)
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100331','USD', 10)
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100430','USD', 15) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100630','USD', 25)
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20100731','USD', 55) 
INSERT INTO dbo.HoleyTable (Date, Country, Val) VALUES ('20101130','USD', 30) 
GO 

-- SQL2008
WITH RankedData
AS
(
    SELECT *, ROW_NUMBER() OVER (PARTITION BY Country ORDER BY Date) AS rn
    FROM dbo.HoleyTable
)
SELECT a.Country,
    b.Date AS StartOfGap,
    a.Date AS EndOfGap,
    DATEDIFF(m, b.Date, a.Date) - 1 AS MissingDatapoints
FROM RankedData a
    INNER JOIN RankedData b
        ON a.rn = b.rn + 1
        AND a.Country = b.Country
WHERE DATEDIFF(m, b.Date, a.Date) > 1;
GO

-- SQL2012
WITH PeekAtData
AS
(
    SELECT *,
        curr = Date,
        nxt = LEAD(Date, 1, NULL) OVER (PARTITION BY Country ORDER BY Date)
    FROM dbo.HoleyTable
)
SELECT Country,
    curr AS StartOfGap,
    nxt AS EndOfGap,
    DATEDIFF(m, curr, nxt) - 1 AS MissingDatapoints
FROM PeekAtData p
WHERE DATEDIFF(m, curr, nxt) > 1
GO
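For reference, both queries should report the same gaps for the sample data above (a handy sanity check when comparing the two versions):

-- expected output for the sample data (both queries)
-- Country  StartOfGap  EndOfGap    MissingDatapoints
-- GBP      2010-01-31  2010-03-31  1
-- GBP      2010-03-31  2010-05-31  1
-- GBP      2010-05-31  2010-10-31  4
-- GBP      2010-10-31  2010-12-31  1
-- USD      2010-04-30  2010-06-30  1
-- USD      2010-07-31  2010-11-30  3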
 


--tidy up
DROP TABLE dbo.HoleyTable
GO


And how about the all-important question of performance? Well, here is a screenshot of the execution plans for the two queries, where you can see the 2012 implementation outperforms its 2008 counterpart by a ratio of about 3:1.


Wednesday, 11 January 2012

Admin: Start and Stop Local Instance of SQL Server

As a database professional, I'm accustomed to having a local SQL Server instance installed. However, it seems that SQL Server (typically Express) is finding its way onto more and more desktops, often unused and unknown to the user! With the web browser Firefox taking up almost half your memory, you don't want any other applications taking your valuable resources.

I recommend changing the service settings of your SQL services to manual and only turning them on when you are actually going to use them. If you're using a shared database server, you may find that, even as a SQL developer, you don't use your local instance as often as you might have thought.

I have created two simple batch files on my desktop which I can use for stopping and starting my local SQL services, meaning it's a mere double click to get your instance available.

StartSQLServices.bat

NET START mssqlserver
NET START sqlserveragent
NET START ReportServer
NET START msdtsserver100


StopSQLServices.bat

NET STOP mssqlserver
NET STOP sqlserveragent
NET STOP ReportServer
NET STOP msdtsserver100


* Your service names may vary depending on instance names and SQL version; for a named instance they typically take the form MSSQL$InstanceName and SQLAgent$InstanceName.

Thursday, 8 December 2011

Admin: Windows Group Login Ambiguity

It's a typical scenario to grant login access to SQL Server via Windows groups as this eases the administrative burden of setting up multiple logins all with the same access. However, it's not uncommon for a user to belong to multiple Windows groups, and it's possible that these groups could each have a SQL login. So what's the problem? Well, consider this example:

User - DOMAIN\PeterPan
Groups - DOMAIN\ReadOnlyGroup, DOMAIN\WriteOnlyGroup

We have two Windows groups, both of which have access to SQL Server, albeit for different purposes. Peter is a member of both groups as he needs access to do both actions. My question is, when Peter logs in, under which Windows group has he been granted authentication?

Why does it matter, I hear you ask?

From a pure permissions point of view, there is no issue as the usual hierarchy applies. Permissions are built on top of each other, so SQL Server will combine the permissions from both groups (DENY overriding GRANT etc.) and that gives us the behaviour we want. You can validate this by simply adding a differing set of permissions to the different groups and then adding or removing the member from them:

EXEC sp_addrolemember N'db_datareader', N'DOMAIN\ReadOnlyGroup'
EXEC sp_addrolemember N'db_datawriter', N'DOMAIN\WriteOnlyGroup'

You can view the permission set in the following way:

SELECT *, DB_NAME(), SUSER_SNAME() FROM fn_my_permissions(NULL, 'DATABASE')

However, this works because there is a defined hierarchy in place which allows SQL Server to make the correct choice based upon permission settings.

But what about a setting where no such hierarchy exists, like a default language? If the two groups have different default languages, then how do we determine which default language the login connection should take? This is particularly important when dealing with dates. US_English and British_English interpret dates as MDY and DMY respectively, so the date 30/11/2011 will throw an error in US_English but work correctly in British_English. Worse still, you may not get an error at all and the date is valid but not what the user wants, e.g. 01/02/2011. Of course, best practice would see us using ISO-formatted dates (YYYYMMDD), but I dare say not every application adheres to this.
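You can see the difference with a quick test (a minimal sketch; the language names are as they appear in sys.syslanguages):

SET LANGUAGE us_english;
SELECT CAST('30/11/2011' AS DATETIME); -- fails with an out-of-range error
GO
SET LANGUAGE British;
SELECT CAST('30/11/2011' AS DATETIME); -- returns 30 November 2011
GO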

I understand that this is a difficult (if not impossible) issue to rectify. I mean, there is no logical hierarchy or set of rules that can be put in place when this situation occurs. However, I'd like to understand how SQL Server does determine which language it uses in this scenario, as I couldn't find a method of doing so. Is it the Windows group which was last granted access to SQL? Does anyone know?
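In the meantime, xp_logininfo will at least show you all the candidate permission paths for a user, even if it doesn't tell you which one wins for settings like default language:

-- list every route by which the user gains access, including group permission paths
EXEC xp_logininfo @acctname = 'DOMAIN\PeterPan', @option = 'all';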

Thursday, 1 December 2011

T-SQL: Audit Data Changes

I was recently asked about how to implement auditing of data changes for a database. I know SQL Server contains the Change Data Capture feature but (as is often the case with things I want!) it's an Enterprise-only feature, meaning it's out of reach for the application in question.

As it happened, it turned out that there was already an application in-house (that I hadn't developed) which implemented its own auditing of changes based around a simple Audit table and triggers. I'm still yet to find out who wrote it, but it certainly does the trick. The database it runs on is relatively static so I do have reservations about how scalable it is, but for some tables you want control over, it may be suitable.

CREATE TABLE [dbo].[Audit](
    [AuditID] [int] IDENTITY(1,1) NOT NULL,
    [Type] [char](1) NULL,
    [TableName] [varchar](128) NULL,
    [PrimaryKeyField] [varchar](1000) NULL,
    [PrimaryKeyValue] [varchar](1000) NULL,
    [FieldName] [varchar](128) NULL,
    [OldValue] [varchar](1000) NULL,
    [NewValue] [varchar](1000) NULL,
    [UpdateDate] [datetime] NULL,
    [UserName] [varchar](128) NULL
) ON [PRIMARY]

GO
 

ALTER TABLE [dbo].[Audit]
ADD CONSTRAINT [DF_Audit_UpdateDate] DEFAULT (GETDATE()) FOR [UpdateDate]

ALTER TABLE [dbo].[Audit]
ADD CONSTRAINT [DF_Audit_UserName] DEFAULT (SUSER_SNAME()) FOR [UserName]
GO

CREATE TRIGGER dbo.MyTableChangeTracking
ON dbo.MyTable 
FOR INSERT, UPDATE, DELETE
AS

DECLARE @bit INT
DECLARE @field INT
DECLARE @maxfield INT
DECLARE @char INT
DECLARE @fieldname VARCHAR(128)
DECLARE @TableName VARCHAR(128) = 'MyTable'
DECLARE @PKCols VARCHAR(1000)
DECLARE @sql VARCHAR(2000)
DECLARE @UserName VARCHAR(128) = SUSER_SNAME()
DECLARE @Type CHAR(1)
DECLARE @PKFieldSelect VARCHAR(1000)
DECLARE @PKValueSelect VARCHAR(1000)

-- Action
IF EXISTS (SELECT TOP 1 * FROM inserted)
BEGIN
    IF EXISTS (SELECT TOP 1 * FROM deleted)
        SET @Type = 'U' -- update
    ELSE
        SET @Type = 'I' -- insert
END
ELSE
    SET @Type = 'D' -- delete

-- get all the changed values (need both tables for update)
SELECT * INTO #ins FROM inserted
SELECT * INTO #del FROM deleted

-- Get primary key columns for full outer join
SELECT @PKCols = COALESCE(@PKCols + ' AND', ' ON') + ' i.' + c.COLUMN_NAME + ' = d.' + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
    INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        ON c.TABLE_NAME = pk.TABLE_NAME
        AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
WHERE pk.TABLE_NAME = @TableName
    AND CONSTRAINT_TYPE = 'PRIMARY KEY'

-- Get primary key fields select for insert
SELECT @PKFieldSelect = COALESCE(@PKFieldSelect + '+', '') + '''' + COLUMN_NAME + ''''
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
    INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        ON c.TABLE_NAME = pk.TABLE_NAME
        AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
WHERE pk.TABLE_NAME = @TableName
    AND CONSTRAINT_TYPE = 'PRIMARY KEY'

SELECT @PKValueSelect = COALESCE(@PKValueSelect + '+', '') + 'convert(varchar(100), coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk
    INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        ON c.TABLE_NAME = pk.TABLE_NAME
        AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
WHERE pk.TABLE_NAME = @TableName
    AND CONSTRAINT_TYPE = 'PRIMARY KEY'

-- raise an error if there is no Primary Key
IF @PKCols IS NULL
BEGIN
    RAISERROR('no PK on table %s', 16, -1, @TableName)
    RETURN
END

SELECT @field = 0,
    @maxfield = MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName

-- now loop through the fields to log the values from each field
WHILE @field < @maxfield
BEGIN
    SELECT @field = MIN(ORDINAL_POSITION)
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @TableName AND ORDINAL_POSITION > @field

    SELECT @bit = (@field - 1) % 8 + 1
    SELECT @bit = POWER(2, @bit - 1)
    SELECT @char = ((@field - 1) / 8) + 1

    -- only log fields that were actually updated (or everything for insert/delete)
    IF SUBSTRING(COLUMNS_UPDATED(), @char, 1) & @bit > 0 OR @Type IN ('I', 'D')
    BEGIN
        SELECT @fieldname = COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @TableName
            AND ORDINAL_POSITION = @field

        -- dynamically build and execute the audit insert statement
        SELECT @sql = 'INSERT INTO dbo.Audit (Type, TableName, PrimaryKeyField, PrimaryKeyValue, FieldName, OldValue, NewValue)'
        SELECT @sql = @sql + ' SELECT ''' + @Type + ''''
        SELECT @sql = @sql + ',''' + @TableName + ''''
        SELECT @sql = @sql + ',' + @PKFieldSelect
        SELECT @sql = @sql + ',' + @PKValueSelect
        SELECT @sql = @sql + ',''' + @fieldname + ''''
        SELECT @sql = @sql + ',CONVERT(VARCHAR(1000),d.' + @fieldname + ')'
        SELECT @sql = @sql + ',CONVERT(VARCHAR(1000),i.' + @fieldname + ')'
        SELECT @sql = @sql + ' FROM #ins i FULL OUTER JOIN #del d'
        SELECT @sql = @sql + @PKCols
        SELECT @sql = @sql + ' WHERE i.' + @fieldname + ' <> d.' + @fieldname
        SELECT @sql = @sql + ' OR (i.' + @fieldname + ' IS NULL AND d.' + @fieldname + ' IS NOT NULL)'
        SELECT @sql = @sql + ' OR (i.' + @fieldname + ' IS NOT NULL AND d.' + @fieldname + ' IS NULL)'

        EXEC (@sql)
    END
END

GO


It's quite a complicated trigger and took some time to get my head round, but it certainly does the trick, and you just need to implement this for each table you wish to track changes on. The data is readable too:
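If you want to see it in action, a quick smoke test along these lines does the job (MyTable here is an illustrative two-column table; note the trigger insists on a primary key):

-- illustrative smoke test: assumes the trigger above is attached to this table
CREATE TABLE dbo.MyTable (Id INT IDENTITY(1,1) PRIMARY KEY, Name VARCHAR(50));
INSERT INTO dbo.MyTable (Name) VALUES ('Peter');
UPDATE dbo.MyTable SET Name = 'Pan' WHERE Id = 1;

SELECT Type, PrimaryKeyValue, FieldName, OldValue, NewValue, UserName
FROM dbo.Audit;
-- expect 'I' rows for the insert, plus a 'U' row for Name: 'Peter' -> 'Pan'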


But I wasn't comfortable with the overhead of the trigger, the dynamic SQL, nor the fact that I hadn't written it myself! So I considered a second approach, still using triggers but this time just dumping the changed records to an audit table in XML format. This would have the advantage of a much simpler trigger, even at the cost of readability of results and potentially higher storage costs.


CREATE TABLE dbo.tblAudit(
    AuditID INT NOT NULL IDENTITY(1,1),
    TableName VARCHAR(255),
    DataInserted XML,
    DataDeleted XML,
    DateCreated DATETIME,
    CreatedBy VARCHAR(2000)
)
 

GO 

CREATE TRIGGER dbo.MyTable_ChangeTracking 
ON dbo.MyTable 
FOR INSERT, UPDATE, DELETE
AS

SET NOCOUNT ON

-- set the table name
DECLARE @TableName VARCHAR(255) = 'dbo.MyTable'

-- grab the changed data in XML format
DECLARE @InsertXML XML = (SELECT * FROM inserted FOR XML AUTO);
DECLARE @DeleteXML XML = (SELECT * FROM deleted FOR XML AUTO);

-- log it
INSERT INTO dbo.tblAudit (TableName, DataInserted, DataDeleted, DateCreated, CreatedBy)
SELECT @TableName, @InsertXML, @DeleteXML, GETDATE(), SUSER_SNAME()
GO

To interrogate the results, you'd need to be comfortable working with XML, but the simplicity of the trigger is something I really like. Hopefully, you wouldn't need to be looking at this data too often, so the ease of logging is the main factor when implementing something like this. You could easily deploy this across multiple tables/servers too with a simple Powershell script.
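For example, shredding the captured XML back into columns might look something like this (a sketch: FOR XML AUTO names the row elements after the inserted/deleted pseudo-tables, and [Name] is a hypothetical column on MyTable):

-- pull the new value of a hypothetical [Name] column out of the audited XML
SELECT a.AuditID,
    a.TableName,
    a.DateCreated,
    a.CreatedBy,
    x.i.value('@Name', 'VARCHAR(100)') AS NewName
FROM dbo.tblAudit a
    CROSS APPLY a.DataInserted.nodes('/inserted') AS x(i)
WHERE a.TableName = 'dbo.MyTable';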

Now, I wouldn't recommend either of these approaches for a highly transactional database, and you would want to think twice about implementing them if the performance of your DML statements is critical or your storage quite tight. I'm going to experiment a bit with CDC to see just what benefits it gives over a bespoke implementation (I suspect it's much more lightweight) while also hoping that the feature gets downgraded to the lower editions of SQL Server.

Saturday, 26 November 2011

Denali: Now gets a proper name!

Since I last blogged about Denali, Microsoft has announced the official name for this version of SQL Server to be... (drum roll)... SQL2012. Ok, ok. I'm pretty underwhelmed too, but hey ho. Project Crescent now has the rather grand title of Power View, and Juneau has the slightly less impressive Data Tools.

More exciting and interesting though are the changes to the Editions and also the licensing costs, in particular the introduction of a BI edition and the move to per core licensing. You can read more about this in this blog from Geoff Hiten.

I also came across this blog from the SQL Express guys which talks of the new LocalDB edition aimed at Developers. A lightweight database server with less management overhead than a full SQL Express edition. Nice.


Tuesday, 8 November 2011

SSRS: Use Stored Procedures in Datasets


This is my contribution to T-SQL Tuesday #24 hosted by Brad Schulz (blog) on the subject of Prox ‘n’ Funx (Stored Procedures and Functions to you and me :-) ).

I'm a big believer in using Stored Procedures (or at the very least, UDFs) for your Reporting Services datasets, separating your presentation layer from your data layer and moving the SQL code away from the RDL.

The benefit of this is that as long as the meta-data of your Stored Procedure stays the same, you are able to modify and enhance your SQL code without having to touch the RDL. You essentially abstract the source code away from the report.

Perhaps you're improving performance by moving from cursors to JOINs, extending the business logic to only return rows that meet new criteria, or simply refactoring SQL code to standardise your table names. None of these affect the presentation layer, and having them reside as Stored Procedures on the database gives huge maintenance benefits.

Other advantages include having all your T-SQL held in one place and knowing the impact of any changes without having to worry about dependencies elsewhere. Also, you will often be re-using code (e.g. for parameter datasets) and using a single stored procedure helps reduce duplication of effort (and probably brings performance benefits too).

Of course, there are downsides to this approach. If you need to introduce a new parameter to a report (which is passed to your dataset) then you have to change both the RDL and the stored procedure. I can see this being a slight irritation as you now have two deployments, whereas holding the SQL code "inline" means a simple upload of the new report.

For me though, the former approach still wins and I advocate using Stored Procedures for Reporting Services datasets. I've been experimenting recently with putting Stored Procedures used for Reports into their own schema (acting as a namespace) although I can't categorically say whether this has been a success or not (Jamie Thomson (Blog | Twitter) has an interesting blog post which touches on Schema usage here).
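To make the idea concrete, a report dataset procedure might look something like this (a sketch with illustrative names, using a dedicated rpt schema as the namespace):

-- illustrative only: the RDL binds to the proc name and parameter,
-- so the query below can be refactored freely without touching the report
CREATE PROCEDURE rpt.GetCustomerOrders
    @CustomerId INT
AS
SET NOCOUNT ON;

SELECT o.OrderId, o.OrderDate, o.TotalValue
FROM dbo.Orders o
WHERE o.CustomerId = @CustomerId;
GO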

Friday, 21 October 2011

Admin: Generate Restore Scripts

In order to speed up the database recovery process (and to give the unintentional DBAs in my organisation a helping hand), I recently started experimenting with generating restore scripts for our production databases.

As is my wont, my technology preference was again Powershell (although it did make use of T-SQL too), as looping through databases on a server and writing to a file are all pretty trivial.

I wrote a simple TVF to generate the restore script for a particular database, based upon the system tables in MSDB and closely modelled on the script found on MSSQLTIPS.


CREATE FUNCTION [dbo].[tvf_GetRestoreCommands](@DatabaseName SYSNAME)
RETURNS @AllCommands TABLE (
    [backup_set_id] INT NULL,
    [Command] NVARCHAR (MAX) NULL
)
AS
BEGIN

    DECLARE @backupStartDate DATETIME
    DECLARE @backup_set_id_start INT
    DECLARE @backup_set_id_end INT
    DECLARE @IncludeMoveClause BIT = 1
    DECLARE @MoveClause NVARCHAR(255) = ''

    -- get the most recent full backup
    SELECT @backup_set_id_start = MAX(backup_set_id)
    FROM msdb.dbo.backupset
    WHERE database_name = @DatabaseName
        AND type = 'D'

    SELECT @backup_set_id_end = MIN(backup_set_id)
    FROM msdb.dbo.backupset
    WHERE database_name = @DatabaseName
        AND type = 'D'
        AND backup_set_id > @backup_set_id_start

    IF @backup_set_id_end IS NULL SET @backup_set_id_end = 999999999

    -- do you want to include the move clause in case of having to go to a different server
    IF @IncludeMoveClause = 1
        SELECT @MoveClause = COALESCE(@MoveClause + ',', '') + ' MOVE ' + QUOTENAME(name, '''') + ' TO ' + QUOTENAME(physical_name, '''')
        FROM sys.master_files
        WHERE database_id = DB_ID(@DatabaseName);

    INSERT INTO @AllCommands
    SELECT backup_set_id,
        'RESTORE DATABASE ' + @DatabaseName + ' FROM DISK = ''' + mf.physical_device_name + ''' WITH NORECOVERY, STATS = 5 ' + @MoveClause
    FROM msdb.dbo.backupset b
        INNER JOIN msdb.dbo.backupmediafamily mf
            ON b.media_set_id = mf.media_set_id
    WHERE b.database_name = @DatabaseName
        AND b.backup_set_id = @backup_set_id_start
    UNION
    SELECT backup_set_id,
        'RESTORE LOG ' + @DatabaseName + ' FROM DISK = ''' + mf.physical_device_name + ''' WITH NORECOVERY'
    FROM msdb.dbo.backupset b
        INNER JOIN msdb.dbo.backupmediafamily mf
            ON b.media_set_id = mf.media_set_id
    WHERE b.database_name = @DatabaseName
        AND b.backup_set_id >= @backup_set_id_start
        AND b.backup_set_id < @backup_set_id_end
        AND b.type = 'L'
    UNION
    SELECT 999999999 AS backup_set_id,
        'RESTORE DATABASE ' + @DatabaseName + ' WITH RECOVERY'
    ORDER BY backup_set_id

    RETURN
END


This was the sort of thing I needed, and then all that was left to do was write a Powershell script to execute this command for each database and persist the results to a file:

# Load the SQL Management Objects assembly (pipe to out-null suppresses output)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
add-pssnapin SqlServerCmdletSnapin100

$server = hostname;
$backupdir = "E:\SQLBackup\Scripts\Restore\";
$sql = new-object "Microsoft.SqlServer.Management.SMO.Server" $server;

# Get user databases on our server
$databases = $sql.Databases | Where-object {$_.IsSystemObject -eq $false};

# generate the restore commands for each database
foreach ($db in $databases)
{
    $filePath = $backupdir + 'Restore_' + $db.Name + '.sql'
    $dbname = $db.Name

    $sqlresults = Invoke-Sqlcmd -Query "SELECT [Command] FROM GlobalDB.dbo.tvf_GetRestoreCommands('$dbname')" -ServerInstance $server

    # nasty hack here to force the output to fit in. Hope my restore command isn't > 3000 chars!
    $sqlresults | out-file -filepath $filePath -width 3000
}


The key thing to note here is that I've created the TVF in a central database GlobalDB which I use for admin/logging tasks on the server. This saves me from having to keep a copy of the script in each database (although I could achieve the same behaviour by installing it into the Master database or MSDB database).

Also, there is reference to a hack which I talked about in a previous post.

The final part of the jigsaw was being able to schedule this to run every time there was a backup so it would pick up the latest backup sets. SQL Agent in SQL2008 has a Powershell step type, but this will actually execute code held within the SQL job. For me, I want to be able to manage my Powershell scripts independently (keeping them under source control etc.), so really I just want to point SQL Agent at a script and execute it. Turns out this can be achieved using the CmdExec step:

powershell "& D:\SQLAdmin\CreateDBRestoreCommands.ps1"

The main downside to this is permissions, as running this job step type means the job runs under the context of the SQL Agent and, unless I set up a proxy, I will need to grant some database permissions (SELECT on the TVF) to the SQL Agent login account. As a quick fix, I granted SELECT permissions on the TVF to the public role and off we went.
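That quick fix is a one-liner, although a proxy remains the cleaner long-term option:

-- quick fix: allow any database principal (including the agent account) to SELECT from the TVF
GRANT SELECT ON dbo.tvf_GetRestoreCommands TO public
GO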

Thursday, 13 October 2011

Denali SSIS: Loop through a list of servers

It's been tough to get any time recently to play with Denali, so I thought I'd put together a quick tutorial on SSIS using Denali to give people a tiny flavour of what it looks like. I intend (time permitting) to use this tutorial in the future to extend the package to show off a few of the new features of Denali.

Note: this task can be achieved in previous versions of SSIS with minimal changes.

The aim of this package is a simple one: to iterate over a list of servers and execute a SQL task against each one. I've thrown in a Script Task too for good measure.

Setup

First up, we'll set up the test data in Management Studio (look at that cool Denali syntax colouring!):
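The test data amounts to something like this (table and server names are illustrative stand-ins for what was shown in the screenshot):

-- illustrative test data: a table holding the list of servers to loop over
CREATE TABLE dbo.ServerList (ServerName VARCHAR(128) NOT NULL);

INSERT INTO dbo.ServerList (ServerName)
VALUES ('SERVER01'), ('SERVER02'), ('SERVER03');

SELECT ServerName FROM dbo.ServerList;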


Then we move into Visual Studio to start our package:

Although I've not configured the tasks yet, you can immediately get a feel for what the purpose of this package is going to be. As you might expect, there is improved "kerb appeal" from MS in this release, with smoother graphics and features such as the magnifier which allow you to make your design more readable.



Configuration

1) T-SQL - Get Servers:

The Execute SQL Task dialog box is straightforward enough to configure. The main thing to note on the General tab is that you need to set the ResultSet property appropriately, to Full result set.


 On the Result Set tab, you need to set your Result to a variable of type Object. If you haven't already created your variable, you can create one from this dialog box.






2) For Each Loop - Servers

We just need to choose the Foreach ADO Enumerator and select the object source variable to be the one you populated in the previous task. Simple.




Click onto the Variable Mappings tab and here is where you'll pull out the relevant details from your object. In our case, we just need to grab the ServerName and populate a simple string variable to use in the tasks within the container. You need to map these variables by zero-based index, and we're only interested in the first column, hence Index 0.



3) T-SQL - Get Version

Another T-SQL task here, but the clever part is that we need the connection to be dynamic. In other words, for each server in our collection, we need to connect to that server and get its version.

First, we add an extra connection using the Connection Manager (note, I also changed my connection name to show that it's dynamic - I've called it DynamicSQLConn).


We can make the connection dynamic by changing the ConnectionString property on each iteration of the loop. To do this, we dive into the Properties window and click on Expressions:


We just then set the ConnectionString property to something like the following:

"Data Source=" + @[User::ServerName]  + ";Initial Catalog=master;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;"


Now we can just use this connection string in our T-SQL task. I've chosen to return the results of @@VERSION. It's a single column, single row result set, so I've chosen the Single row result set.


Now we just need to configure the Result Set by sending the output to another string variable


4) Script Task - Show Version

Now typically, you'd want to do something more appropriate than just showing your results via a message box, but this is exactly what I'm going to do. If nothing else, it illustrates the use of the Script Task.

We just open up the task and we need to pass in the variables we wish to use. You can type them in, or just use the select dialog box that's provided. As we're only displaying them, they just need to be ReadOnlyVariables.


When you click Edit Script, a new instance of Visual Studio opens up for you to add your code. We only need to modify the Main method:


And that's it!!

Execution

You can test and execute the package through Visual Studio and see even more of that "kerb appeal" I talked about earlier. Gone are the garish colours associated with previous versions of SSIS; they've been replaced by more subtle and sexy icons:




Hopefully this has been helpful in giving a quick glance at the look and feel of Denali SSIS while also showing how you can use SSIS to loop over a dataset.

Specifically the main points to take away are:
1) For Each Loop Container with an ADO Enumerator, using the Object variable and accessing properties of the object
2) T-SQL Task - using a dynamic connection with Expressions, using different Result Sets and populating variables from result sets
3) Script Task - passing in variables and writing a simple task.
