Showing posts with label SQL 2008. Show all posts

Monday, 17 September 2012

T-SQL: Allow low privileged user to Truncate table

The problem with TRUNCATE is that it is a DDL statement rather than the regular DELETE statement, which is DML. As such, it's not covered by the db_datawriter database role and so, on the face of it, you're left with two options:

1) Add the user to db_ddladmin role (or db_owner)
2) Grant ALTER table permissions to the table(s) in question

I'm not really a fan of either of these approaches: the first gives far too much control to the user, and I try to avoid granting object-level permissions where possible.

An alternative is to make use of the EXECUTE AS clause and wrap your TRUNCATE statement within a stored procedure and grant permissions on that instead.

-- create a test database 
CREATE DATABASE TruncateTest
GO

USE TruncateTest
GO

-- create a low privilege user to test with
CREATE USER NoRights WITHOUT LOGIN
GO

-- create a table to test against
CREATE TABLE dbo.TestData (Id INT IDENTITY(1,1))
GO
-- add some dummy data
INSERT INTO dbo.TestData DEFAULT VALUES
GO 20
-- check the data is ok
SELECT COUNT(*) FROM dbo.TestData
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- try to issue the truncate command
TRUNCATE TABLE dbo.TestData
GO

Msg 1088, Level 16, State 7, Line 1
Cannot find the object "TestData" because it does not exist or you do not have permissions.
-- switch back to my superuser
REVERT
GO

-- create a wrapper procedure to truncate the table
CREATE PROCEDURE dbo.TruncateTestData
AS
TRUNCATE TABLE dbo.TestData
GO

-- grant execute to the test user
GRANT EXECUTE ON dbo.TruncateTestData TO NoRights
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- execute the procedure
EXEC dbo.TruncateTestData
GO

Msg 1088, Level 16, State 7, Procedure TruncateTestData, Line 4
Cannot find the object "TestData" because it does not exist or you do not have permissions.
-- switch back to my superuser
REVERT
GO

-- modify the procedure to execute in the context of the owner (dbo)
ALTER PROCEDURE dbo.TruncateTestData
WITH EXECUTE AS OWNER
AS
TRUNCATE TABLE dbo.TestData
GO

-- impersonate the test user
EXECUTE AS USER = 'NoRights'
GO
-- execute the procedure
EXEC dbo.TruncateTestData
GO

-- switch back to the superuser
REVERT
GO

-- check the data
SELECT COUNT(*) FROM dbo.TestData
GO

-- tidy up
DROP DATABASE TruncateTest
GO

Tuesday, 1 May 2012

T-SQL: Change Schema of User Defined Type

I've been working with one of our developers to try and rationalise some database objects into some sensible schemas using the ALTER SCHEMA syntax:

http://msdn.microsoft.com/en-us/library/ms173423.aspx

Just out of interest, is it me or is this syntax for changing an object's schema a bit strange? Are you altering the schema or the object? Anyhow, I digress...

In our example, we have several User Defined Types which require transfer. Here is some sample code illustrating the problem:

CREATE SCHEMA NewSchema
GO

CREATE SCHEMA OldSchema
GO

CREATE TYPE OldSchema.MyUDT FROM VARCHAR(20);
GO
ALTER SCHEMA NewSchema TRANSFER OldSchema.MyUDT
GO



This gives the rather unhelpful error:

Cannot find the object 'MyUDT', because it does not exist or you do not have permission.

I checked the documentation and there isn't anything that suggests types should be treated any differently so I was stumped. Fortunately, I managed to unearth this post which provides a workaround to the problem.

ALTER SCHEMA NewSchema TRANSFER type::OldSchema.MyUDT
GO
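To confirm the transfer has worked, you can check the catalog views. A quick sketch:

```sql
-- verify the UDT now lives in the new schema
SELECT SCHEMA_NAME(schema_id) AS SchemaName, name
FROM sys.types
WHERE is_user_defined = 1;
```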



I'm not sure why this is not documented in Books Online (or at least not anywhere I can see). One to keep an eye out for.

Thursday, 9 February 2012

T-SQL: INSERT with XML shredding is very slow

Back in 2010, I came across a performance bug with SQL 2008 SP1. When shredding XML using the nodes() method, performance is fine with a SELECT, but when you want to INSERT the data into a table/temp table/table variable the performance becomes very poor.

The query I was having problems with was something like this (Thanks for the sample data wBob!):

DECLARE @xml XML;
-- Spin up some test data
WITH cte AS
(
    SELECT 65 x
    UNION ALL
    SELECT x + 1
    FROM cte
    WHERE x < 90
)
SELECT @xml = (
    SELECT CHAR(a.x) + ' & co.' AS "name",
        a.x AS "value",
        'FIELD' AS "paramdata/field"
    FROM cte a
        CROSS JOIN cte b
        CROSS JOIN cte c
    FOR XML PATH('root'), TYPE
)


DECLARE @ConfigData TABLE (ID INT, Name NVARCHAR(255), Value NVARCHAR(255), ParamData XML)

INSERT INTO @ConfigData (ID, Name, Value, ParamData)
SELECT 1,
    tbl.cols.value('name[1]', 'varchar(1000)'),
    tbl.cols.value('value[1]', 'varchar(1000)'),
    tbl.cols.query('./paramdata[1]')
FROM @xml.nodes('//root') AS tbl(cols)
GO

I raised this with MS via Connect (#562092) and went on to implement my own workaround using the following CLR shredding function:

using System.Collections;
using System.Data.SqlTypes;
using System.Xml;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [SqlFunction(FillRowMethodName = "tvf_clr_FillParameterData",
        TableDefinition = "Name nvarchar(255), Value nvarchar(255), ParamData nvarchar(255)")]
    public static IEnumerable tvf_clr_ParameterDataShredder(SqlXml parameterData)
    {
        XmlDocument document = new XmlDocument();
        document.LoadXml(parameterData.Value);
        return document.SelectNodes("//parameter");
    }

    public static void tvf_clr_FillParameterData(object row, out string outName, out string outValue, out string outParamData)
    {
        XmlNode document = (XmlNode)row;

        outName = document.SelectSingleNode("name").InnerXml;
        outValue = document.SelectSingleNode("value").InnerXml;

        outParamData = null;
        if (document.SelectSingleNode("paramdata") != null)
            outParamData = document.SelectSingleNode("paramdata").OuterXml;
    }
}

 
This served me well and solved the performance issue and all was fine. Until now.
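For completeness, registering a CLR table-valued function like this looks roughly as follows. This is a sketch only: the assembly name and DLL path below are assumptions, not something from my actual deployment.

```sql
-- hypothetical registration: assembly name and path are illustrative
CREATE ASSEMBLY XmlShredder
FROM 'C:\CLR\XmlShredder.dll'
WITH PERMISSION_SET = SAFE;
GO
CREATE FUNCTION dbo.tvf_clr_ParameterDataShredder(@parameterData XML)
RETURNS TABLE (Name NVARCHAR(255), Value NVARCHAR(255), ParamData NVARCHAR(255))
AS EXTERNAL NAME XmlShredder.UserDefinedFunctions.tvf_clr_ParameterDataShredder;
GO
```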

The thing is, this method doesn't handle decoding of escaped characters in XML, so any &s that exist in the XML string being shredded will be returned as &amp;. Not ideal. My first thought was to implement a decoding function within the CLR routine, but the main candidates use System.Web, which is not available within SQL Server. I eventually stumbled upon this post from Jonathan Kehayias which provided a SQL-safe encoding implementation, and I was going to use this as a basis for writing my own decoding function. While waiting the 20 minutes for Visual Studio 2008 to open, it occurred to me to revisit the Connect case and, fortunately, there were a number of workarounds there; a combination of the suggestions proved the most effective.

INSERT INTO @ConfigData (ID, Name, Value, ParamData)
SELECT 1,
    tbl.cols.value('(name/text())[1]', 'nvarchar(255)'),
    tbl.cols.value('(value/text())[1]', 'nvarchar(255)'),
    tbl.cols.query('./paramdata[1]')
FROM @xml.nodes('//root') AS tbl(cols)
OPTION ( OPTIMIZE FOR ( @xml = NULL ) )

With this simple modification to my INSERT statement, performance was acceptable and, more importantly, the escaped characters were decoded correctly. Saved me a lot of work.

Wednesday, 11 January 2012

Admin: Start and Stop Local Instance of SQL Server

As a database professional, I'm accustomed to having a local SQL Server instance installed. However, it seems that SQL Server (typically Express) is finding its way onto more and more desktops, often unused and unknown to the user! With a web browser like Firefox able to take up almost half your memory, you don't want other applications eating into your valuable resources.

I recommend changing the service settings of your SQL services to manual and only turning them on when you are actually going to use them. If you're using a shared database server, you may find that even as a SQL developer you may not use your local instance as often as you might have thought.

I have created two simple batch files on my desktop which I use for stopping and starting my local SQL services, meaning it's a mere double-click to get your instance available.

StartSQLServices.bat

NET START mssqlserver
NET START sqlserveragent
NET START ReportServer
NET START msdtsserver100


StopSQLServices.bat

NET STOP mssqlserver
NET STOP sqlserveragent
NET STOP ReportServer
NET STOP msdtsserver100


* your Service names may vary depending on instance names and SQL version.

Thursday, 8 December 2011

Admin: Windows Group Login Ambiguity

It's a typical scenario to grant login access to SQL Server via Windows groups, as this eases the administrative burden of setting up multiple logins all with the same access. However, it's not uncommon for a user to belong to multiple Windows groups, and it's possible that these groups could each have a SQL login. So what's the problem? Well, consider this example:

User - DOMAIN\PeterPan
Groups - DOMAIN\ReadOnlyGroup, DOMAIN\WriteOnlyGroup

We have two Windows groups, both of which have access to SQL Server, albeit for different purposes. Peter is a member of both groups as he needs to perform both actions. My question is: when Peter logs in, under which Windows group has he been authenticated?

Why does it matter, I hear you ask?

From a pure permissions point of view, there is no issue as the usual hierarchy applies. Permissions are built on top of each other so SQL Server will combine the permissions from both groups (DENY overriding GRANT etc) and that gives us the behaviour we want. You can validate this by simply adding a differing set of permissions to the different groups and then adding or removing the member from it:

EXEC sp_addrolemember N'db_datareader', N'DOMAIN\ReadOnlyGroup'
EXEC sp_addrolemember N'db_datawriter', N'DOMAIN\WriteOnlyGroup'

You can view the permission set in the following way:

SELECT *, DB_NAME(), SUSER_SNAME() FROM fn_my_permissions(NULL, 'DATABASE')

However, this works because there is a defined hierarchy in place which allows SQL Server to make the correct choice based upon permission settings.

But what about something where no such hierarchy exists, like a default language? If the two groups have different default languages, how do we determine which default language the login connection should take? This is particularly important when dealing with dates. US_English and British_English interpret dates in MDY and DMY respectively, so the date 30/11/2011 will throw an error in US_English but work correctly in British_English. Worse still, you may not get an error at all: the date is valid but not what the user intended, e.g. 01/02/2011. Of course, best practice would see us using ISO-formatted dates (YYYYMMDD), but I dare say not every application adheres to this.
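You can see the effect with a quick sketch (run each batch separately; the first CAST fails with an out-of-range error under us_english):

```sql
SET LANGUAGE us_english;
SELECT CAST('30/11/2011' AS DATETIME); -- fails: MDY interpretation, month 30
GO
SET LANGUAGE British;
SELECT CAST('30/11/2011' AS DATETIME); -- succeeds: DMY interpretation
GO
```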

I understand that this is a difficult (if not impossible) issue to rectify. I mean, there is no logical hierarchy or set of rules that can be put in place when this situation occurs. However, I'd like to understand how SQL Server does determine which language it uses in this scenario as I couldn't find a method of doing so. Is it the windows group which was last granted access to SQL? Does anyone know?

Thursday, 3 November 2011

T-SQL: Why have NULL bit fields?

I was asked the question the other day: why does SQL Server allow NULLs for the BIT datatype? What you need to bear in mind is what NULL represents - a value which is unknown - and so there are three states you need to cater for: True, False or Unknown.

As is my wont, I like to explain things with analogies and this is no different. Consider a table holding job application details for a job board website.

CREATE TABLE dbo.tblJobApplications(
    CandidateID INT,
    JobID INT,
    DateApplied DATE,
    Successful BIT)


-- Application in progress
INSERT INTO dbo.tblJobApplications SELECT 1, 1, GETDATE(), NULL

-- Application unsuccessful
INSERT INTO dbo.tblJobApplications SELECT 2, 1, GETDATE(), 0

-- Application successful
INSERT INTO dbo.tblJobApplications SELECT 3, 1, GETDATE(), 1


The key field to note here is of course the BIT field, which indicates the success or failure of the application. Obviously, when a candidate first applies for a job, the success of the application isn't known - the candidate has neither been accepted nor rejected. It's only at the end of the lifecycle of the application that this field can take on a meaningful value.
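Querying the three states then looks something like this - remembering that NULL needs IS NULL rather than = NULL:

```sql
-- applications still in progress (unknown outcome)
SELECT COUNT(*) AS InProgress
FROM dbo.tblJobApplications
WHERE Successful IS NULL;

-- rejected applications only; rows where Successful is NULL are not returned
SELECT CandidateID
FROM dbo.tblJobApplications
WHERE Successful = 0;
```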

Hopefully, this contrived example helps explain just when you might require a NULL bit field.

Thursday, 27 October 2011

Admin: A day of "a day" series...

There really are some incredible bloggers out there. Despite my recent 5-day series on Continuous Integration, I typically struggle to blog more frequently than once a week. Yet these guys have committed to a whole month of blogging on a single topic, and the quality of the posts isn't diluted at all, giving a real "deep dive" into a topic.

So here are some of the "A Day" series that I've dipped into recently - I would bet that there will be more to add. Credit to the guys and I doff my hat to you.

-- A DMV a day
http://sqlserverperformance.wordpress.com/2010/04/

-- XE a day
http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/12/01/a-xevent-a-day-31-days-of-extended-events.aspx

-- A SQL Myth a day
http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-%28130%29-in-flight-transactions-continue-after-a-failover.aspx

-- SSIS a day
http://www.jasonstrate.com/2011/01/31-days-of-ssis-raw-files-are-awesome-131/

Tuesday, 4 October 2011

T-SQL: Helping the optimiser can hinder

I've been meaning to blog about this for a while and, noticing the (earlier than usual) T-SQL Tuesday #23 hosted by Stuart Ainsworth (blog | twitter) on the topic of joins, I thought I'd cobble something together.

Essentially, this post is about reading the results of SHOWPLAN_TEXT, being aware of the different physical join operators and also how the optimiser can be influenced (poorly) by a bad WHERE clause.

I have the following query on a view which abstracts the 4 tables joined:
 
SELECT *, CAST(Val AS NUMERIC(22,16)) FROM DDL.vwRateData

The tables behind the view aren't really that important, as I just want to concentrate on the plans generated, but basically there is a lookup table for RateID and a main data table which joins to it (via other intermediate tables). The key thing to note in the query is the CAST - sometimes the column Val (from the main data table) has a value which can't be converted - indeed, running this query as-is gives the error:

Msg 8115, Level 16, State 8, Line 2
Arithmetic overflow error converting numeric to data type numeric.

I know the data for RateID = 1 does have Vals which are CASTable to that decimal precision/scale, and changing the query to only return those records where RateID = 1 returns a result set without errors:

SELECT *, CAST(Val AS NUMERIC(22,16)) FROM DDL.vwRateData WHERE RateId = 1

However, if I "improve" the query to filter these records further to only return data from the last year using a non-sargable clause, I get that pesky overflow error again:

SELECT *, CAST(Val AS NUMERIC(22,16)) FROM DDL.vwRateData WHERE RateId = 1 AND DATEDIFF(D,GETDATE(),[Date])< 365

So what's different? First glance suggests something is wrong: if the whole dataset for RateId = 1 returns without error, then choosing a further subset of this data should also work.

I delved into the query plans to find out and the clue was there:

The Good query generated a plan that was using a Nested Loops inner join. In a nutshell, the first thing this query does is filter out the correct rows from the lookup table (RateID = 1) and then iterates through the main table where there is a match on that RateId. In other words, it doesn't matter whether or not the Val column for other RateIDs is CASTable as this plan only touches RateID = 1.



On the other hand, the Bad query generated a plan that was using a Hash Match join. This differs in that it will read all the rows in both the tables, and thus attempting to run the CAST function over every record. Only later in the query plan does it do the filtering on the RateID.



The different types of JOIN (both logical and physical) are explained really well in this article, but the takeaway from this post is that you need to be careful with your query construction, as the optimiser may choose a plan that affects both performance and even robustness. Logic may tell you that adding extra filtering criteria will help the optimiser choose a more efficient plan, but in this example it has chosen a plan that caused the query to fail.
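One way to keep the extra filter while staying sargable is to move the function off the column. This is a sketch of the idea, assuming the intent of the original DATEDIFF predicate was a last-year filter:

```sql
-- the predicate compares the bare column to a computed constant,
-- so the optimiser can still seek on [Date]
SELECT *, CAST(Val AS NUMERIC(22,16))
FROM DDL.vwRateData
WHERE RateId = 1
  AND [Date] > DATEADD(DAY, -365, GETDATE());
```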

Thursday, 25 August 2011

T-SQL: Use GO to run the same batch more than once

A simple post this one, but something I only became aware of in the past few months. One of our devs hadn't seen it either, so I thought I'd pop it down on this blog.

Consider a scenario where you are generating test data, perhaps with GETDATE(), and you want to quickly generate lots of rows. By simply adding a number after the GO batch separator you can get your batch to execute multiple times. No need to write any crazy loops or joins.

So, something like the following will generate 55 rows:

CREATE TABLE t (id INT IDENTITY(1,1), dt DATETIME)
GO
INSERT INTO t SELECT GETDATE()
GO 55


Simple, but effective.

Thursday, 4 August 2011

Admin: can't drop impersonated login

This week, I came across an interesting behaviour while working with EXECUTE AS. I'd set up a test database to demo some work with a test login and when trying to tidy up the objects, I was faced with:

I can repro the behaviour with the following example. NB: there are 2 separate connections in this script.

/*
CONNECTION 1
*/

-- create a test database
CREATE DATABASE TestDropLogin;
GO
-- and a test login
CREATE LOGIN DropLogin
WITH PASSWORD = 'abc';
GO
USE TestDropLogin
GO
-- and finally a test user for that login
CREATE USER [DropLogin]
FOR LOGIN [DropLogin]
GO

EXECUTE AS LOGIN = 'DropLogin'
GO
SELECT ORIGINAL_LOGIN() -- 'DOMAIN\MyUser'
SELECT SUSER_SNAME() -- 'DropLogin'
GO
USE [master]
GO

/*
CONNECTION 2
*/
-- drop the database as we no longer need it
DROP DATABASE TestDropLogin;
GO
-- try and drop the login too
DROP LOGIN [DropLogin]
GO

Msg 15434, Level 16, State 1, Line 1
Could not drop login 'DropLogin' as the user is currently logged in.

That's because I didn't revert my execution context earlier and so I'm still logged in as DropLogin. If I go back to the 1st connection I can run the REVERT command.

Msg 15199, Level 16, State 1, Line 1
The current security context cannot be reverted. Please switch to the original database where 'Execute As' was called and try it again.

But I've dropped the database where EXECUTE AS was called, so issuing a USE command gives us:

Msg 911, Level 16, State 1, Line 1
Database 'TestDropLogin' does not exist. Make sure that the name is entered correctly.

So I don't appear to be able to REVERT the execution context and therefore can't drop the login. The only solution I found to this was to force the connection to close by issuing a KILL command.
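To find the session that is holding the impersonated context, you can query the sessions DMV; the session_id in the KILL below is a placeholder, not a real value:

```sql
-- find the offending session; it shows up under the impersonated login
SELECT session_id, login_name, original_login_name
FROM sys.dm_exec_sessions
WHERE login_name = 'DropLogin';

-- then terminate it (53 is an illustrative session_id)
KILL 53;
```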

Thursday, 21 July 2011

T-SQL: Service Broker to alert errors

I read this interesting blog post by Jamie Thomson a few weeks back and it inspired me to write up this article.

I've been "kicking the tyres" of Service Broker myself on and off over the past few months, as it's an area of SQL Server which I don't often see used in anger. I've put together a sample here which demonstrates how you might use Service Broker to alert your DBAs to any application errors in the database using Database Mail (specifically dbo.sp_send_dbmail).

1) Error gets logged to a central application error table
2) The error is passed to Service Broker for delivery
3) Service Broker delivers error via email in a nice format.

/*
#################################################################
-- configure Service Broker Objects
#################################################################
*/
ALTER DATABASE GlobalDB
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE
GO
USE GlobalDB
GO
-- initially a messagetype
CREATE MESSAGE TYPE [ApplicationError]
VALIDATION = NONE;
GO
-- and a contract to send messages of that type
CREATE CONTRACT [ApplicationErrorContract]
(
    [ApplicationError] SENT BY INITIATOR
);
GO
-- a queue to send the messages to
CREATE QUEUE [ApplicationError_Snd_Q];
GO
-- and a service that manages that queue
CREATE SERVICE [ApplicationError_Snd_Svc]
ON QUEUE [ApplicationError_Snd_Q];
GO
-- we also need a queue to receive messages into
CREATE QUEUE [ApplicationError_Rcv_Q];
GO
-- and a service that manages that queue
CREATE SERVICE [ApplicationError_Rcv_Svc]
ON QUEUE [ApplicationError_Rcv_Q]([ApplicationErrorContract]);
GO

/*
#################################################################
-- we'll need to configure database mail here
-- to use this ASIS you'll need a default profile configured
#################################################################
*/

/*
#################################################################
-- configure database objects
#################################################################
*/
USE GlobalDB
GO
-- a generic table for holding error logs
CREATE TABLE [dbo].[tblErrorLog]
(
    [LogID] [int] IDENTITY(1,1) NOT NULL,
    [ErrorMessage] [nvarchar](2000) NULL,
    [ErrorNumber] [int] NULL,
    [ErrorSeverity] [int] NULL,
    [ErrorState] [int] NULL,
    [ErrorLine] [int] NULL,
    [ErrorProcedure] [nvarchar](555) NULL,
    [DatabaseName] [nvarchar](555) NULL,
    [Date_Created] [datetime] NULL,
    [Created_By] [nvarchar](555) NULL
)
GO
-- this function will take the error message from SB, parse it and return it in
-- HTML format for the send mail task
CREATE FUNCTION dbo.GetErrorEmailBody(@Msg XML)
RETURNS NVARCHAR(MAX)
AS
BEGIN
    DECLARE @Errors TABLE (
        LogID INT,
        ErrorMessage NVARCHAR(1000),
        ErrorProcedure NVARCHAR(1000),
        Created_By NVARCHAR(1000),
        Date_Created DATETIME
    )

    INSERT INTO @Errors
    SELECT tbl.col.value('LogID[1]', 'int'),
        tbl.col.value('ErrorMessage[1]', 'nvarchar(1000)'),
        tbl.col.value('ErrorProcedure[1]', 'nvarchar(1000)'),
        tbl.col.value('Created_By[1]', 'nvarchar(1000)'),
        tbl.col.value('Date_Created[1]', 'datetime')
    FROM @Msg.nodes('//Errors') AS tbl(col)

    DECLARE @tableHTML NVARCHAR(MAX);

    SET @tableHTML =
        N'<H1>Application Errors</H1>' +
        N'<table border="1">' +
        N'<tr><th>Log ID</th><th>Error Message</th>' +
        N'<th>Procedure</th><th>Date Created</th>' +
        N'<th>Created By</th></tr>' +
        CAST(( SELECT TOP 50 td = LogID, '',
                    td = ISNULL(ErrorMessage, ''), '',
                    td = ISNULL(ErrorProcedure, ''), '',
                    td = Date_Created, '',
                    td = Created_By, ''
                FROM @Errors
                ORDER BY Date_Created DESC
                FOR XML PATH('tr'), TYPE
        ) AS NVARCHAR(MAX)) +
        N'</table>';

    RETURN @tableHTML;
END
GO

-- a stored procedure to send data to the send queue
CREATE PROCEDURE dbo.usp_SB_Snd_ApplicationError
(
    @LogID INT
)
AS
BEGIN
    DECLARE @MessageBody XML

    SELECT @MessageBody = (
        SELECT DISTINCT LogID, ErrorProcedure, ErrorNumber, ErrorMessage, Date_Created, Created_By
        FROM dbo.tblErrorLog
        WHERE LogID = @LogID
        FOR XML PATH ('Errors'), TYPE
    )

    DECLARE @Handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE [ApplicationError_Snd_Svc]
        TO SERVICE 'ApplicationError_Rcv_Svc'
        ON CONTRACT [ApplicationErrorContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @Handle
        MESSAGE TYPE [ApplicationError](@MessageBody);
END
GO
-- this procedure will receive messages and process them
CREATE PROCEDURE dbo.usp_SB_Rcv_ApplicationError
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Handle UNIQUEIDENTIFIER;
    DECLARE @MessageType SYSNAME;
    DECLARE @Message XML
    DECLARE @BodyTable NVARCHAR(MAX);

    RECEIVE TOP (1) @Handle = conversation_handle,
        @MessageType = message_type_name,
        @Message = message_body
    FROM [ApplicationError_Rcv_Q];

    IF (@Handle IS NOT NULL AND @Message IS NOT NULL)
    BEGIN
        -- parse the message XML and build the HTML table for send_dbmail
        SET @BodyTable = (SELECT dbo.GetErrorEmailBody(@Message));

        -- send the mail on the default profile
        EXEC msdb.dbo.sp_send_dbmail
            @recipients = 'richard.brown@barrhibb.com',
            @subject = 'Errors',
            @body = @BodyTable,
            @body_format = 'HTML';

        END CONVERSATION @Handle;
    END
END
GO

-- this procedure will process acknowledgement messages on the send queue
CREATE PROCEDURE dbo.[usp_SB_Snd_ProcessResponses]
AS
DECLARE @conversation_handle UNIQUEIDENTIFIER
DECLARE @message_body XML
DECLARE @message_type_name SYSNAME
DECLARE @error_message NVARCHAR(3000)

WHILE 1 = 1
BEGIN
    SET @message_type_name = NULL;
    WAITFOR (
        RECEIVE TOP (1)
            @conversation_handle = conversation_handle,
            @message_body = CAST(message_body AS XML),
            @message_type_name = message_type_name
        FROM dbo.[ApplicationError_Snd_Q]), TIMEOUT 1000;

    IF @message_type_name IS NULL BREAK; -- no more messages

    IF @message_type_name = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    BEGIN
        END CONVERSATION @conversation_handle;
    END
    ELSE IF @message_type_name = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    BEGIN
        -- should log expected errors here
        END CONVERSATION @conversation_handle;
    END
    ELSE
    BEGIN
        -- all other message types are unexpected here - log error
        SET @error_message = N'<Error>Unexpected message type</Error>';
        END CONVERSATION @conversation_handle
            WITH ERROR = 1 DESCRIPTION = @error_message;
    END
END

GO

-- a generic logging procedure
CREATE PROCEDURE [dbo].[usp_LogError]
AS
   DECLARE
@LogID INT

   INSERT INTO
dbo.tblErrorLog
          
(
      
ErrorMessage, ErrorNumber, ErrorSeverity, ErrorState,
      
ErrorLine, ErrorProcedure, DatabaseName, Date_Created, Created_By
          
)
  
SELECT
          
ERROR_MESSAGE() AS ErrorMessage
          
,ERROR_NUMBER() AS ErrorNumber
          
,ERROR_SEVERITY() AS ErrorSeverity
          
,ERROR_STATE() AS ErrorState
          
,ERROR_LINE () AS ErrorLine
          
,ERROR_PROCEDURE() AS ErrorProcedure
          
,DB_NAME()
           ,
GETDATE()
           ,
SUSER_SNAME()
    
  
SET @LogID = SCOPE_IDENTITY();
    
  
EXEC dbo.usp_SB_Snd_ApplicationError @LogID;
      
GO

/*
#################################################################
-- activate the queues
#################################################################
*/

-- let's activate the receive queue
ALTER QUEUE [ApplicationError_Rcv_Q] WITH ACTIVATION
(
    STATUS = ON,
    MAX_QUEUE_READERS = 1,
    PROCEDURE_NAME = dbo.usp_SB_Rcv_ApplicationError,
    EXECUTE AS OWNER
);
GO

ALTER QUEUE [ApplicationError_Snd_Q] WITH ACTIVATION
(
    STATUS = ON,
    MAX_QUEUE_READERS = 1,
    PROCEDURE_NAME = dbo.[usp_SB_Snd_ProcessResponses],
    EXECUTE AS OWNER
);
GO

-- TEST --
CREATE PROCEDURE dbo.TestSB
AS
    -- force an error
    BEGIN TRY
        SELECT 1/0
    END TRY
    BEGIN CATCH
        EXEC dbo.usp_LogError
    END CATCH
GO
EXEC dbo.TestSB
GO

Tuesday, 5 July 2011

Denali: SSMS has Improved look and feel

One of the first things I noticed with the new version of SQL Server (Denali) was the changes to Management Studio and the development experience. While not as ground breaking as the move from Query Analyzer to SSMS when SQL 2005 was released, there is definitely further movement towards a common development platform for programmers.

Here is the initial splash page you get when booting up SSMS:



The main thing I noted here was the comment in the bottom right corner which explicitly states "Powered by Visual Studio". To me, this is a clear indication of the direction MS is heading with its database toolset.

And here we have a shot of what using SSMS for development looks like:



Notice here how we have much more varied colour coding for keywords, and specifically for variables and object names. This is a vast improvement on the generic black text in previous versions. Another feature you can see, at the bottom left of the shot above the results pane, is a zoom dropdown allowing you to quickly increase the size of the text in your query window. There is also one for the results pane, although this shot doesn't show it.

Development Environment Convergence

We've already seen Visual Studio creeping into the SQL Server arena with the Business Intelligence Development Studio (BIDS) which is essentially the VS shell with SQL Server plugged in and I can see MS trying to consolidate their development environments still further. This makes a lot of sense for software devs who will often be developing database enabled applications and need to flit between their .Net code and database code. Having a more uniform and familiar environment makes the experience that much smoother and hopefully more efficient.

Thursday, 30 June 2011

Admin: Securing your data with TDE

As SQL Server has developed over the past few versions, security has been a significant area of improvement, both in the ability to lock down a server and to encrypt your data. That said, despite all the obvious benefits, it's something I have rarely seen implemented on production systems. My guess is that a lot of legacy systems were built on previous versions without these features, and it would take a significant development effort to introduce them.

However, one feature that can be harnessed easily and will give you a level of encryption is Transparent Data Encryption (TDE), introduced in SQL 2008. The benefit is that, theoretically speaking, it can just be turned on and your data is encrypted. Sounds great, doesn't it? Well, it is, but it's perhaps not quite what you'd expect.

Let's delve into an example:

USE MASTER;
GO
-- we need a master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'BryanRiggs';
GO
-- and a server certificate
CREATE CERTIFICATE TestTDECert WITH SUBJECT = 'A Test TDE Certificate'
GO
-- and this will be our securable DB
CREATE DATABASE MySecureDB
GO
-- and we'll put in some sensitive data
USE MySecureDB
GO
CREATE TABLE dbo.Person
(Nm NVARCHAR(255), Phone NVARCHAR(50))
GO
INSERT INTO dbo.Person SELECT 'Andy', '07733403304'
INSERT INTO dbo.Person SELECT 'Mandy', '07970402401'
INSERT INTO dbo.Person SELECT 'Gandhi', '07652123957'
GO
-- we can obviously view this data
SELECT * FROM dbo.Person
GO
-- now we just need an encryption key for the securable database
USE MySecureDB
GO
CREATE DATABASE ENCRYPTION KEY
WITH
ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE TestTDECert
GO
-- look out for this warning

Warning: The certificate used for encrypting the database encryption key has not been backed up.
You should immediately back up the certificate and the private key associated with the certificate.
If the certificate ever becomes unavailable or if you must restore or attach the database on another server,
you must have backups of both the certificate and the private key or you will not be able to open the database.

-- now turn on TDE
ALTER DATABASE MySecureDB
SET ENCRYPTION ON
GO
-- but we can still view the data
SELECT * FROM dbo.Person
GO


So what's happened? Well, for me, the main crux of TDE is that it protects your data at the OS level. In other words, if you are connected to a database which has TDE enabled, you can view the data within the tables in the same way as you can in a database which has TDE turned off. The value comes in that a malicious user can't steal your data by copying a backup file and restoring it to their local machine.
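You can confirm the encryption state from the DMV; a quick sketch:

```sql
-- encryption_state = 3 means the database is encrypted
SELECT DB_NAME(database_id) AS DatabaseName, encryption_state
FROM sys.dm_database_encryption_keys;
```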

Let's look at that:



Essentially, because the database is encrypted the new server will not even accept a restore of the database. A similar error will occur if you just try and copy the MDF/LDF files and attach them to another server.

But of course, there are valid reasons why you would want to move your database to a new server, and fortunately this is not too arduous a process. Remember the warning we got when setting up TDE? It tells us that our certificate should be backed up, and it's with these backups that we can restore a valid database on an alternative server. So let's go through the steps:

-- on the source server, let's back up that certificate
USE MASTER
GO
BACKUP CERTIFICATE TestTDECert TO FILE = 'C:\SQLBackups\TestTDECert.cert'
WITH PRIVATE KEY
    (FILE = 'C:\SQLBackups\EncryptionPrivateKey.key', ENCRYPTION BY PASSWORD = 'SECUREME')
GO
-- now copy these backup files to the remote server

-- on the remote server
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S3cur£Me';
GO
-- create the certificate from the copied backup cert
CREATE CERTIFICATE MyTDECert
FROM FILE = 'C:\SQLBackups\TestTDECert.cert'
WITH PRIVATE KEY (FILE = 'C:\SQLBackups\EncryptionPrivateKey.key', DECRYPTION BY PASSWORD = 'SECUREME')
GO
-- now we should be good to go
RESTORE DATABASE MySecureDB
FROM DISK = 'C:\SQLBackups\MySecureDB.bak'
GO


And there you have it. A quick demonstration of TDE and how it helps secure your data. Roy Ernest has written a more thorough post on TDE over at SQL Server Central which is definitely worth a read.

Tuesday, 21 June 2011

SSMS: Intellisense Won't Let Me PARTITION BY!

Warning!

This post is really just a rant inspired by a throwaway comment in this blog post.

I write a lot of CTEs (because they're great!), often in conjunction with the PARTITION BY clause. I'm using SQL 2008 (although the behaviour is the same in 2008R2 and Denali) and get incredibly frustrated with SSMS when writing this type of query because it inevitably ends up with IntelliSense selecting PARTITION_FRAGMENT_ID. This is because hitting the space bar selects the highlighted keyword.



I haven't found a way of working around this effectively other than hitting ESC to close the dialog and then continuing. Ho hum.