
Tuesday, 26 June 2012

How Local Variables or OPTIMIZE FOR UNKNOWN Resolve Parameter Sniffing

In my last post, we went through parameter sniffing and the possible solutions for it. Among those solutions, all except the local variable and OPTIMIZE FOR UNKNOWN approaches are very straightforward, and we know how they help us resolve the issue. In this post we will see how the local variable and OPTIMIZE FOR UNKNOWN approaches resolve the parameter sniffing issue.

If the parameter values are known while compiling the stored procedure, the optimizer uses the statistics histogram and generates the best plan for those parameters. When we define a local variable and use it in the query, SQL Server is not able to use the parameter value to find the optimal plan. In this scenario the optimizer uses the density vector information of the statistics, and it will generate the same execution plan for any input parameter. Let us see how this works.


The below statement returns 13 records by doing an index seek and key lookup operation. This plan is generated based on the estimation that the query will return 44.5 records. The query optimizer made this estimation based on the histogram.

SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 744
DBCC SHOW_STATISTICS('Sales.SalesOrderDetail','IX_SalesOrderDetail_ProductID')


A portion of the output of the above query will look like below.

The estimated number of rows is calculated based on the EQ_ROWS and AVG_RANGE_ROWS columns. In this example, the parameter value 744 does not match any RANGE_HI_KEY, so the optimizer took the AVG_RANGE_ROWS value of the step with RANGE_HI_KEY 747 to calculate the estimated number of rows. The execution plan will be the same if you convert this statement into a procedure.
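If you only want the histogram section of the statistics output, DBCC SHOW_STATISTICS accepts a WITH HISTOGRAM option. A minimal sketch against the same index:

-- Show only the histogram steps (RANGE_HI_KEY, EQ_ROWS, AVG_RANGE_ROWS, ...)
DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID') WITH HISTOGRAM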

Let us see how it works with a procedure that uses a local variable.

CREATE PROCEDURE get_SalesOrderDetail
(
   @ProductId INT
)
AS
DECLARE @L_ProductId INT
SET @L_ProductId = @ProductId
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = @L_ProductId



If you execute this procedure with the parameter value 744, the execution plan will look like below.
EXEC get_SalesOrderDetail 744


This time the optimizer went for an index scan under the estimation that the query will return 456 records. As we have defined a local variable, the parameter value is not available at compilation time, and the optimizer used the density vector to estimate the number of rows. The estimated number of rows will be the same in the execution plan of this procedure for any parameter value, and hence the execution plan will be the same as well.


Let us see how the optimizer calculates the estimated number of rows in this case. As the parameter value is not available at the time of optimization, it assumes that records are distributed uniformly. In the SalesOrderDetail table we have 266 distinct values for ProductID and the total number of records is 121317. If you divide the total number of records by the number of distinct values of ProductID, you get 121317/266 = 456.08, which is the same as the estimated number of rows. All the data required for this calculation is available in the statistics. The total number of records is available in the first section. The density value 0.003759399 is available in the second section, which is equivalent to 1/266. So the estimated number of rows = 121317 x 0.003759399 = 456.079.
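You can verify both inputs of that calculation yourself. A minimal sketch: the density figure comes from the DENSITY_VECTOR section of the statistics, and the row and distinct counts come straight from the table.

-- Density vector only: the "All density" value for ProductID should be 1/266 = 0.003759399
DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID') WITH DENSITY_VECTOR

-- Cross-check the numbers behind the density-based estimate
SELECT  COUNT(*)                                   AS total_rows,        -- 121317
        COUNT(DISTINCT ProductID)                  AS distinct_products, -- 266
        COUNT(*) * 1.0 / COUNT(DISTINCT ProductID) AS density_estimate   -- ~456.08
FROM    Sales.SalesOrderDetail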

You can see the same execution plan if we change this procedure to use OPTIMIZE FOR UNKNOWN as given below.

ALTER PROCEDURE get_SalesOrderDetail
(
   @ProductId INT
)
AS
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = @ProductId OPTION (OPTIMIZE FOR UNKNOWN)


Hope you enjoyed reading this post. If you liked this post, do like my Facebook page: http://www.facebook.com/PracticalSqlDba






Sunday, 24 June 2012

SQL Server : Parameter Sniffing

Two days back one of my colleagues came to me and started complaining about the performance of a stored procedure. Her complaint was that the stored procedure takes a long time when called from the application but completes in two seconds in SSMS. I told her it might be due to parameter sniffing, but she was not ready to accept that because the stored procedure was running with the same parameters in the application and in SSMS. Let us try to understand parameter sniffing and the various solutions for it.

Understanding parameter sniffing

SQL Server uses statistics to estimate the cardinality of a predicate (the WHERE condition) and uses that information to produce an optimal execution plan for the current parameters. This execution plan is cached in the procedure cache and used for subsequent executions of the same parameterized query, even with different parameter values. This behavior is called parameter sniffing, and it is not always bad: it saves optimization time because the query does not need to be compiled again. The problem occurs when the plan generated for the first execution is not optimal for subsequent executions. For example, let us assume that we have a stored procedure which returns the details of customers based on an input range. In the first execution we pass the parameters 1 and 5000, and the execution plan might perform an index scan on the customer table. The next execution of the same procedure will use the same plan even if you pass the parameters 1 and 10. The optimal plan might be an index seek, and the scan chosen due to parameter sniffing will delay the response and use excess resources. Many people have the misconception that parameter sniffing happens only with stored procedures, but it can happen with parameterized queries as well.

Let us see the execution plans of the below two statements.

SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN 11000  AND 30118
GO
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN 11000  AND 11002

The first query is done with a clustered index scan and the second one with an index seek plus key lookup operation. These are the optimal execution plans. Let us see how it works if these statements are converted into a procedure/parameterized query.

EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End;',N'@Start INT,@End INT',@Start=11000,@End=30118
GO
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End;',N'@Start INT,@End INT',@Start=11000,@End=11002
GO


Surprisingly, the execution plans of both statements are the same! Right-click on the left-most operator of the second execution plan and select Properties. We can see the below information.


We can see that the compiled parameter values are the same as the parameters of the first statement. You can see the same information in the XML execution plan as well, as given below.
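The compiled values can also be pulled out of the cached plan programmatically. A minimal sketch, assuming the parameterized query above is still in the plan cache (the ParameterList/ColumnReference nodes of the showplan XML carry the ParameterCompiledValue attribute; adjust the LIKE filter to your query text):

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  cp.usecounts,
        pr.pv.value('@Column', 'NVARCHAR(128)')                 AS parameter_name,
        pr.pv.value('@ParameterCompiledValue', 'NVARCHAR(100)') AS compiled_value
FROM    sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS pr(pv)
WHERE   st.text LIKE '%Sales.SalesOrderHeader%CustomerID between @Start and @End%'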

Plan Re-usability

As we discussed earlier, query optimization and plan generation are costly operations. To avoid this cost, SQL Server tries to keep the generated execution plans in the plan cache and reuse them in subsequent executions. However, it is not possible to reuse the same plan if the connection has different SET options. This forces a new plan to be generated instead of using the cached plan, and that new plan will be reused by all connections having the same SET options. The following SET options are plan-reuse-affecting:

  • ANSI_NULL_DFLT_OFF
  • ANSI_NULL_DFLT_ON
  • ANSI_NULLS
  • ANSI_PADDING
  • ANSI_WARNINGS
  • ARITHABORT
  • CONCAT_NULL_YIELDS_NULL
  • DATEFIRST
  • DATEFORMAT
  • FORCEPLAN
  • LANGUAGE
  • NO_BROWSETABLE
  • NUMERIC_ROUNDABORT
  • QUOTED_IDENTIFIER

The above SET options are plan-reuse-affecting because SQL Server performs "constant folding" (evaluating constant expressions at compile time to enable some optimizations), and the settings of these options affect the results of such expressions.
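A tiny sketch of why the setting matters: the same constant expression produces different results under different CONCAT_NULL_YIELDS_NULL settings, so a plan folded under one setting cannot safely be reused under the other.

SET CONCAT_NULL_YIELDS_NULL ON
SELECT 'abc' + NULL AS folded_value   -- returns NULL

SET CONCAT_NULL_YIELDS_NULL OFF
SELECT 'abc' + NULL AS folded_value   -- returns 'abc'

Now let us see how different SET options lead to separate cached plans for the same procedure.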

CREATE PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @FromCustid AND @ToCustid


Run this procedure from one SSMS window:

SET ARITHABORT OFF
EXEC Get_SalesOrderHeader  11000,11002
GO

Run the below query to find out the plan details; it will return one row with a usecounts value of 1.

SELECT OBJECT_NAME(CONVERT(INT, PvtAttr.objectid)) AS procedure_name,
       plan_handle, usecounts, PvtAttr.set_options
FROM (
       SELECT plan_handle, usecounts, f.value, f.attribute
       FROM sys.dm_exec_cached_plans
       OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) f
       WHERE cacheobjtype = 'Compiled Plan'
     ) AS PlanAttr
PIVOT (MAX(value) FOR attribute IN ([set_options], [objectid])) AS PvtAttr
WHERE PvtAttr.objectid = OBJECT_ID('dbo.Get_SalesOrderHeader')


In another SSMS window, execute the procedure in the same way and run the query to find the plan details; it will again return only one row, now with a usecounts value of 2. In one of the SSMS windows change the ARITHABORT setting to ON and run the procedure. Run the query to get the plan details and it will return two rows with different set_options values. ADO.NET and SSMS have different default SET options, and this can cause the application and SSMS to use two different execution plans.
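To see which SET options an application connection is actually running with, sys.dm_exec_sessions exposes them per session. A minimal sketch (program_name values are whatever the clients report, so adjust the filter to your environment):

-- Compare plan-reuse-affecting session settings between the application and SSMS sessions
SELECT  session_id,
        program_name,
        arithabort,               -- 1 = ON, 0 = OFF
        ansi_nulls,
        ansi_padding,
        ansi_warnings,
        quoted_identifier,
        concat_null_yields_null
FROM    sys.dm_exec_sessions
WHERE   is_user_process = 1
ORDER BY program_name, session_id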

Possible Solutions for Parameter Sniffing

If you feel that parameter sniffing is affecting the performance of a procedure, you have a few choices.

Optimize for a specific parameter

This can be used if you feel that most executions of the procedure will benefit from the execution plan of a specific set of parameters. For example, in our case, let us assume that the procedure is going to return a maximum of two or three customers in most executions. Then you can create the procedure as given below.

ALTER PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @FromCustid AND @ToCustid
OPTION (OPTIMIZE FOR (@FromCustid = 11000, @ToCustid = 11002))

Recompile for all execution

This option forces a recompile of the procedure for every execution and generates the optimal plan for the current parameters. It has the overhead of recompilation; if the procedure has many statements, the cost of recompilation increases. This can be achieved in two ways.
ALTER PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @FromCustid AND @ToCustid
OPTION (RECOMPILE)
GO

ALTER PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
WITH RECOMPILE
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @FromCustid AND @ToCustid

An interesting difference between these two approaches is that with WITH RECOMPILE the plan is never put into the plan cache, whereas with OPTION (RECOMPILE) it is. Either way, think twice about forcing recompilation in the following situations:
  1. The procedure is called with a very high frequency, and the compilation overhead hurts the system.
  2. The query is very complex and the compilation time has a noticeable negative impact on the response time.

Local variable or OPTIMIZE FOR UNKNOWN

The traditional way of handling parameter sniffing is to assign the parameter values to local variables and use the local variables in the query. The OPTIMIZE FOR UNKNOWN hint works in the same way. When the query optimizer knows the parameter value, it can use the statistics histogram to estimate the number of records that will be returned by the query and generate the best plan based on that histogram. When the parameter values are assigned to local variables, the optimizer cannot use the histogram; instead it uses the density vector of the statistics. I will explain this in detail in my next post.
This can be implemented as given below:

ALTER PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @FromCustid AND @ToCustid
OPTION (OPTIMIZE FOR UNKNOWN)
GO

ALTER PROCEDURE Get_SalesOrderHeader
(
   @FromCustid INT,
   @ToCustid INT
)
AS
DECLARE @L_FromCustid INT
DECLARE @L_ToCustid INT
SET @L_FromCustid = @FromCustid
SET @L_ToCustid   = @ToCustid
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID BETWEEN @L_FromCustid AND @L_ToCustid


Apart from this, we can implement a decision-tree procedure which calls one of two separate procedures based on the difference between the two parameter values: for example, one for fetching fewer than 50 customers and another for fetching 50 or more, as sketched below.
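A minimal sketch of that decision-tree wrapper; the two inner procedure names (Get_SalesOrderHeader_Small and Get_SalesOrderHeader_Large) are hypothetical and would each simply contain the same SELECT, getting their own cached plan suited to their typical range.

CREATE PROCEDURE Get_SalesOrderHeader_Wrapper
(
   @FromCustid INT,
   @ToCustid INT
)
AS
-- Branch on the width of the requested range so each inner procedure
-- compiles and caches a plan appropriate for its typical workload.
IF (@ToCustid - @FromCustid) < 50
    EXEC Get_SalesOrderHeader_Small @FromCustid, @ToCustid   -- hypothetical: narrow range, seek-friendly plan
ELSE
    EXEC Get_SalesOrderHeader_Large @FromCustid, @ToCustid   -- hypothetical: wide range, scan-friendly plan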

Thank you for reading this post. If you liked this post, do like my page on Facebook: http://www.facebook.com/practicalSqlDba




Friday, 15 June 2012

Connection Pooling: At a Glance

What is a Connection Pool?

A connection pool is a group of database connections (with the same connection properties) maintained in the application server so that these connections can be reused when future requests to the database are required. Creating a connection to the database server is a time-consuming process: a physical channel such as a socket or named pipe must be established, the connection string information has to be parsed, the connection has to be authenticated by the server, and so on. A connection pool helps to reuse established connections to serve multiple database requests and hence improves the response time. In the practical world, most applications use only one or a few different connection configurations, and during application execution many identical connections are opened and closed. To minimize the cost of opening connections each time, ADO.NET uses a technique called connection pooling. Many people have the misunderstanding that the connection pool is managed in the database server; it is actually maintained on the application side.
Connection pooling reduces the number of times new connections must be opened. The pooler maintains ownership of the physical connection. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pool instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.

How Does the Connection Pool Work?

Many people are not very familiar with the connection pool and how it works. This is because connection pooling is the default behavior of ADO.NET; we do not have to do anything special to enable it. It can be turned on or off by setting the connection string property Pooling to true or false (;Pooling=true). The connection pool is tightly coupled with the connection string: a slight change in the connection string (a change in case, or a change in the order of connection properties) will force ADO.NET to open a new connection pool. Every pool is associated with a distinct connection string, and ADO.NET maintains a separate connection pool for each distinct application, process and connection string. When the application requests a connection for the first time, ADO.NET looks for an associated connection pool. As it is the first request, there will not be any connection pool, and ADO.NET negotiates with the database server to create a fresh connection. When the application closes/disposes this connection after completing its work, the connection does not get closed; instead it is moved to the connection pool. When the application requests the next connection using the same connection string, ADO.NET returns the already open connection available in the pool. If a second request from the application comes in before the first request has closed/disposed its connection, ADO.NET creates a fresh new connection and assigns it to the second request.
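Because pooled connections stay open on the server even after the application calls Close, you can observe them from the SQL Server side. A minimal sketch (the ConnectionPool_1/ConnectionPool_2 application names match the ones used in the VB.NET sample at the end of this post):

-- Count the physical connections each application name is holding open
SELECT  s.program_name,
        s.host_name,
        COUNT(*) AS open_connections
FROM    sys.dm_exec_sessions    AS s
JOIN    sys.dm_exec_connections AS c
        ON c.session_id = s.session_id
WHERE   s.is_user_process = 1
GROUP BY s.program_name, s.host_name
ORDER BY s.program_name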


The behavior of connection pooling is controlled by connection string parameters. Below is the list of parameters that control the behavior of connection pooling.
  • Connection Timeout: Controls the wait period in seconds when a new connection is requested; if this period expires, an exception is thrown. The default value is 15 seconds.
  • Max Pool Size: Specifies the maximum number of connections in the pool. The default is 100.
  • Min Pool Size: Defines the initial number of connections that will be added to the pool on opening/creating the first connection. The default is 0.
  • Pooling: Turns connection pooling on or off. The default is true.
  • Connection Lifetime: When a connection is returned to the pool, its creation time is compared with the current time, and the connection is destroyed if that time span (in seconds) exceeds the value specified by Connection Lifetime; otherwise it is added back to the pool. This parameter does not control the lifetime of a connection while it sits in the pool; it basically decides whether the connection should be added back to the pool once it is closed by the calling application. A low value such as 1 is almost equivalent to turning pooling off. A value of zero gives pooled connections the maximum lifetime.

Connection Leakage

At times connections are not closed/disposed explicitly, and such connections will not go back to the pool immediately. We can explicitly close the connection by using the Close() or Dispose() methods of the connection object, or by using the using statement in C# to instantiate the connection object. It is highly recommended to close or dispose the connection once it has served its purpose.

Connection leakage happens in scenarios where the application does not close the connection once it has served the request. As they are not closed, these connections cannot be used to serve other requests from the application, and the pooler is forced to open new connections to serve incoming connection requests. Once the total number of connections reaches Max Pool Size, new connection requests wait for the Connection Timeout period and then throw the below error.

"Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached"

Pool Fragmentation

Pool fragmentation is a common problem in many applications where the application can create a large number of pools that are not freed until the process exits. This leaves a large number of connections open and consuming memory, which results in poor performance.

Pool Fragmentation due to Integrated Security

Connections are pooled according to the connection string plus the user identity. Therefore, if the application uses Windows Authentication (integrated security) to log in, you get one pool per user. Although this improves the performance of subsequent database requests for a single user, that user cannot take advantage of connections made by other users. It also results in at least one connection per user to the database server. This is a side effect of a particular application architecture that developers must weigh against security and auditing requirements.

Pool Fragmentation due to Many Databases used by same application

Many applications use a single database to authenticate the application login and then open a connection to a specific database based on the user's role or region. The connection to the authentication database is pooled and used by everyone; however, there is a separate pool of connections to each database, which increases the number of connections to the database server. This is also a side effect of the application design. A simple way to get around this issue without compromising security is to always connect to the same database (for example the master database) and run the USE databasename statement to change the database context to the desired database.

Clearing the Pool

ADO.NET has two methods to clear the pool: ClearAllPools() and ClearPool(). ClearAllPools clears the connection pools for a given provider, and ClearPool clears the connection pool that is associated with a specific connection. If connections are in use at the time of the call, they are marked appropriately; when they are closed, they are discarded instead of being returned to the pool.

How to Monitor the Connection Pool?

The connection pool can be monitored using performance counters on the machine where ADO.NET is initiating the connections. While selecting the counter, make sure to select the right instance based on your application name and the process id shown in brackets. The process id of your application can easily be obtained from Task Manager. Find below a snapshot of the perfmon counters.






The below code snippet will help you to understand connection pooling in a much better way. Do not comment on the standard of the code snippet! I am not an expert in writing VB.NET code.

Imports System
Imports System.Data.SqlClient

Module Module1

    Private myConn As SqlConnection
    Private myCmd As SqlCommand
    Private myReader As SqlDataReader

    Private myConn1 As SqlConnection
    Private myCmd1 As SqlCommand
    Private myReader1 As SqlDataReader

    Private myConn2 As SqlConnection
    Private myCmd2 As SqlCommand
    Private myReader2 As SqlDataReader

    Private StrConnectionString_1 As String
    Private StrConnectionString_2 As String
    Private query As String

    Sub Main()

        'Two connection strings which help us to create two different pools.
        'The Application Name is set to ConnectionPool_1 and ConnectionPool_2 to identify the connections in SQL Server.
        StrConnectionString_1 = "Server=XX.XX.XX.XX;user id=" + "connectionpool" + ";password=" + "connectionpool" + ";database=master;packet size=4096;application name=ConnectionPool_1"
        StrConnectionString_2 = "Server=XX.XX.XX.XX;user id=" + "connectionpoo2" + ";password=" + "connectionpool" + ";database=master;packet size=4096;application name=ConnectionPool_2"

        query = "select * from sys.objects"

        'STEP 1
        'Open a connection with the first connection string, execute the query and close the connection once it has served its purpose.
        myConn = New SqlConnection(StrConnectionString_1)
        myCmd = myConn.CreateCommand()
        myCmd.CommandText = query
        myConn.Open()
        myReader = myCmd.ExecuteReader()
        myReader.Close()
        myConn.Close()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 1 and NumberOfPooledConnections will be 1.
        'In SQL Server the connection is still open even after closing it. You can verify this by querying sys.dm_exec_connections.

        'STEP 2
        'Open a connection using the second connection string. This forces the pooler to open one more connection pool.
        myConn1 = New SqlConnection(StrConnectionString_2)
        myCmd1 = myConn1.CreateCommand()
        myCmd1.CommandText = query
        myConn1.Open()
        myReader1 = myCmd1.ExecuteReader()
        myReader1.Close()
        myConn1.Close()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 2 and NumberOfPooledConnections will be 2.
        'In SQL Server you can see two active connections, one from ConnectionPool_1 and one from ConnectionPool_2.

        'STEP 3
        'Open a connection again using the first connection string. This is served by the pooled connection created in step 1.
        myConn = New SqlConnection(StrConnectionString_1)
        myCmd = myConn.CreateCommand()
        myCmd.CommandText = query
        myConn.Open()
        myReader = myCmd.ExecuteReader()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 2 and NumberOfPooledConnections will be 2.
        'In SQL Server you can still see only two active connections, one from ConnectionPool_1 and one from ConnectionPool_2.
        'Please note that this connection is NOT closed.

        'STEP 4
        'Open a connection again using the first connection string. The pooler is forced to open a new connection
        'because the connection from step 3 has not been closed (connection leakage).
        myConn2 = New SqlConnection(StrConnectionString_1)
        myCmd2 = myConn2.CreateCommand()
        myCmd2.CommandText = query
        myConn2.Open()
        myReader2 = myCmd2.ExecuteReader()
        myConn2.Close()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 2 and NumberOfPooledConnections will be 3.
        'In SQL Server you can see three active connections: two from ConnectionPool_1 and one from ConnectionPool_2.

        'Close the connection created as part of step 3.
        myConn.Close()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 2 and NumberOfPooledConnections will be 3.
        'In SQL Server you can see three active connections: two from ConnectionPool_1 and one from ConnectionPool_2.

        'Clear the pools.
        SqlConnection.ClearAllPools()

        'Now look at the perfmon counters: NumberOfActiveConnectionPools will be 0, NumberOfPooledConnections will be 0
        'and NumberOfInactiveConnectionPools will be 2.
        'In SQL Server you can no longer see any connection from ConnectionPool_1 or ConnectionPool_2.

    End Sub

End Module



Thank you for reading this post.



If you liked this post, do like my page on FaceBook at http://www.facebook.com/practicalSqlDba









Monday, 11 June 2012

SQL Server : Moving System, Resource and User Databases to New SAN Disks

In my earlier post I explained the steps we followed to move the MSDTC and quorum drives to the new SAN. In this post let us go through the steps we followed to move the system, resource and user databases to the new SAN. The environment we are talking about runs on Windows Server 2008 and SQL Server 2008.

When this task was assigned to me, I did not have any idea about the steps that needed to be followed. The first thing that came to mind was copying the entire contents of each drive to the respective new drive and changing the drive letters, but that is a time-consuming process and needs a lot of downtime. After spending a lot of time, we came up with a plan with minimal downtime. Let us go through the steps. We started our process on Wednesday and finished on Sunday. I will explain the steps that we followed on each day.


Wednesday 

  • Give a proper resource name for each disk in the available storage: This can be done by right-clicking the disk, selecting the Properties option and changing the resource name. This will help us to identify the disks in the later steps. I suggest doing it for the old disks as well, for easy identification, because in many places the drive letter is never shown.
  • Move the new disks to the resource group: This can be done by right-clicking on the available disk and selecting More Actions. Do this for each disk and move it to the appropriate application group (SQL instance).










  • Add a dependency on the new disks for the SQL Server instance: This can be done by selecting the right instance under the Services and Applications group available in the left pane. Then right-click on the SQL Server engine resource (which will be listed under Other Resources in the detail pane) and select Properties. On the Dependencies tab add the new disks.
  • Test the failover to make sure that the disks fail over to the other nodes without any issue.

Thursday

  • Change the recovery model of the user databases which are in simple recovery to full recovery. Note down the list of databases which were in simple recovery; this will help us to revert them back to simple recovery after moving to the new SAN.
  • Schedule a full backup of all available user databases in the instance for early Friday morning. This can be changed depending on the environment and backup policy. The output of the below script can be used to schedule the full backup of all user databases.
SELECT 'Backup database ['+name+'] to disk =''R:\DISKmoveFullBackup\'+REPLACE(name,' ','')+'_Diskmovefriday.bak'' with password =''Password@121''' FROM sys.databases WHERE name NOT IN ('master','model','msdb','tempdb','distribution')
  • Take a copy of the current database file locations to an Excel file. It will be helpful if you need to refer to the locations at a later stage. I copied the output of the below query to an Excel sheet.
SELECT database_id,DB_NAME(database_id),FILE_ID,type_desc,name,physical_name,state_desc FROM sys.master_files


Friday

The first task on Friday is to restore all user databases as <name>_New on the new disks. In our environment we had an equal number of new disks. As a first step we need to create the same folder structure on the new disks as on the old disks. As these are data drives, the folder structure should not be complicated to create. You can ignore the folder structure where the system files reside; at a later point we will be copying that entire folder to the new drive. A script to generate the restoration statements from the latest full backups is sketched below.
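This is a minimal sketch only, assuming the Friday full backups follow the naming convention and password used in the backup script above, and assuming for simplicity a single new data drive U: (the MOVE clauses keep each file's original path and just swap the drive letter; adjust them to your own drive layout).

-- Generate one RESTORE ... AS <name>_New statement per user database, left in NORECOVERY
-- so the differential and log backups can be applied later.
SELECT 'RESTORE DATABASE [' + d.name + '_New]'
     + ' FROM DISK = ''R:\DISKmoveFullBackup\' + REPLACE(d.name, ' ', '') + '_Diskmovefriday.bak'''
     + ' WITH PASSWORD = ''Password@121'', NORECOVERY'
     + (SELECT ', MOVE ''' + mf.name + ''' TO ''U' + SUBSTRING(mf.physical_name, 2, LEN(mf.physical_name) - 1) + ''''
        FROM sys.master_files AS mf
        WHERE mf.database_id = d.database_id
        FOR XML PATH(''))
FROM sys.databases AS d
WHERE d.name NOT IN ('master','model','msdb','tempdb','distribution')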

The next step is to change the default data/log path and the default backup path. To do that, right-click on the server node in Object Explorer and select Properties. On the left pane of the Properties window select Database Settings and change the default locations of data and log. Using regedit, change the value of the BackupDirectory key under HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.instname\MSSQLSERVER.

Disable any full backup job that is scheduled to run before the disk movement maintenance window; an out-of-band full backup will break the backup chain. If the environment has mirroring/replication, prepare complete scripts to remove and re-create the replication and mirroring.

Sunday : The Big Day

Take a differential backup of all user databases and apply it on top of the _New databases which we created on Friday. The output of the below script can be used to take the differential backups. Before starting the differential backups, disable all transaction log backup jobs and make sure that none of the transaction log backup jobs are running at that point.

SELECT 'BACKUP DATABASE ['+name+'] TO DISK =''R:\DiskMoveDiffBackup\'+REPLACE(name,' ','')+'_diffsunday.bak'' WITH DIFFERENTIAL' FROM sys.databases WHERE name NOT IN ('tempdb','master','model','msdb','distribution') AND name NOT LIKE '%_new'


To restore the differential backups, a generator like the one sketched below can be used. Once the restoration of the differential backups is completed, the actual maintenance window starts. It will take a minimum of 30 minutes, depending on the number of databases and the time required to recover the databases after the transaction log restore. In our environment (around 60 databases and approximately 1 TB in size) it took around an hour to complete the process. Recovering the databases after the transaction log restore is the most time-consuming part.
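A minimal sketch of that restore-script generator, assuming the differential backups follow the naming convention from the script above and that the matching _New databases were restored with NORECOVERY on Friday:

-- Generate RESTORE statements that apply the Sunday differential on top of each <name>_New database,
-- again leaving it in NORECOVERY so the final transaction log backup can still be applied.
SELECT 'RESTORE DATABASE [' + name + '_New]'
     + ' FROM DISK = ''R:\DiskMoveDiffBackup\' + REPLACE(name, ' ', '') + '_diffsunday.bak'''
     + ' WITH NORECOVERY'
FROM sys.databases
WHERE name NOT IN ('tempdb','master','model','msdb','distribution')
  AND name NOT LIKE '%_new'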

Follow the below steps once the maintenance window starts:
  1. Disable all logins used by the application. This will be helpful to avoid unnecessary connection request from the application.
  2. Stop the SQL server agent service from the cluster admin window.
  3. Kill all existing user sessions, especially those from the application and the agent service.
  4. Remove mirroring and replication using the script which we have prepared on Friday.
  5. Take a transaction log backup of all user databases and rename the existing databases as _OLD.
  6. Restore the transaction log backups on top of the _New databases with recovery.
  7. Rename the _New databases (remove the _New). Each section of the script we prepared helps to perform steps 5, 6 and 7.
  8. Detach and attach the distribution database to the new drive.
  9. The system databases and the resource database will be on one of the disks, and moving them to a new disk is a tedious task. We followed the steps below to do that (a sketch of the file relocation script used for steps 2 and 3 is given after this list):
    1. Identify the drive on which the system databases and resource database reside. In our case it was the M drive and the associated new drive was U.
    2. Alter all database files which are on the M drive (mostly the _OLD databases) to point to the U drive.
    3. Alter all database files which are on the U drive (the newly restored databases) to point to the M drive.
  10. Alter the tempdb database to point its data and log files to the new disk.
  11. Bring the instance offline through the cluster admin tool. Make sure that the disks stay online.
  12. Copy the system database root folder from the M drive to the U drive with all sub-folders. We used the XCOPY command to move all the files along with the folder structure to the new drive.
  13. Swap the drive letters M and U. We can perform this task through the cluster admin.
  14. Bring the instance online.
  15. Set up the replication
  16. Enable the logins disabled in step 1; the instance is now ready to use.
  17. Set up the mirroring.
  18. Change the recovery model back to simple for the databases whose recovery model was changed as part of the Thursday task.
  19. Enable all backup jobs.
  20. Drop the _OLD database and remove the dependency on old disks.
  21. Now the old disks will be available under the 'Available Storage' group. Right-click on each one and delete it.
  22. Inform your SAN team.
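A minimal sketch of the file relocation script referenced in step 9: it generates ALTER DATABASE ... MODIFY FILE statements that swap the drive letter in each file's path (shown here for the M-to-U direction; reverse the two letters for the U-to-M direction). The new path only takes effect the next time the database is brought online, which is why it is safe to run before taking the instance offline. master is excluded because its file locations come from the startup parameters, and it (like the resource database) is handled by the drive-letter swap itself.

-- Generate ALTER DATABASE ... MODIFY FILE statements for every file currently on the M: drive,
-- pointing it at the same path on the U: drive.
SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + ']'
     + ' MODIFY FILE (NAME = ''' + name + ''','
     + ' FILENAME = ''U' + SUBSTRING(physical_name, 2, LEN(physical_name) - 1) + ''')'
FROM sys.master_files
WHERE physical_name LIKE 'M:\%'
  AND database_id <> DB_ID('master')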
Thank you for reading such a long post!

If you liked this post, do like my page on FaceBook at http://www.facebook.com/practicalSqlDba