Thursday, February 27, 2014

Fix : SQLServer Error: 599, WRITE: The length of the result exceeds the length limit (2GB) of the target large type. [SQLSTATE 42000] (LogToTableWrite)



SQLServer Error: 599, WRITE: The length of the result exceeds the length limit (2GB) of the target large type. [SQLSTATE 42000] (LogToTableWrite)

I found this error message being logged to the SQLAgent.out file every second. To troubleshoot it:

1. First, check whether any jobs are running very frequently through SQL Agent.

2. Open the Job Step properties, go to the Advanced page, and check whether the "Log to table" check box is selected.

3. If it is selected, click the View button and check whether the same message appears in that table.

4. If you see the same message there, clear the check box and save the job.
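
Instead of clicking through every job, you can also look for oversized job step logs directly in msdb. A minimal sketch using the standard msdb catalog tables (log_size is reported in bytes):

-- Find job steps whose "Log to table" output has grown large
SELECT  j.name AS job_name,
        s.step_id,
        s.step_name,
        l.log_size,
        l.date_modified
FROM    msdb.dbo.sysjobstepslogs AS l
JOIN    msdb.dbo.sysjobsteps     AS s ON s.step_uid = l.step_uid
JOIN    msdb.dbo.sysjobs         AS j ON j.job_id   = s.job_id
ORDER BY l.log_size DESC;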
 

Friday, February 21, 2014

What is Virtual Log File, Reduce VLFs, Too many VLFs

Virtual Log Files in SQL Server

The transaction log file is used to guarantee the data integrity of the database and to enable recovery. It is a reusable, circular file that captures transaction details so the database can be recovered in case of disaster. Each transaction log file is logically divided into segments called virtual log files (VLFs). SQL Server creates VLFs internally whenever a transaction log file is created or grown, so that transaction log space can be managed efficiently (i.e., VLF space can be reused).

SQL Server writes to the transaction log sequentially, filling one VLF after another. When it reaches the end of the file, it wraps around to the beginning, finds an inactive VLF, and starts writing there.

Active VLF – A VLF that SQL Server is currently using; it holds log records for transactions whose changes have not yet been written to the data files.

Inactive VLF – A VLF that SQL Server is no longer using. Once its log records are no longer needed (the changes have been written to the data files and, in the full recovery model, a log backup has been taken), the VLF becomes inactive and returns to a reusable state.

The number of VLFs created depends on the size of the chunk being added: if the size is 64 MB or less, 4 VLFs are created; if it is more than 64 MB and up to 1 GB, 8 VLFs are created; if it is more than 1 GB, 16 VLFs are created.

How many VLFs currently exist in the log file?
DBCC LOGINFO -- Returns one row per VLF.
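
If you want a count rather than the raw rows, you can capture the output into a temp table. A minimal sketch, assuming the SQL Server 2012/2014 column list (on 2008 R2 and earlier, remove the RecoveryUnitId column); Status = 2 marks an active VLF:

IF OBJECT_ID('tempdb..#vlf') IS NOT NULL DROP TABLE #vlf;
CREATE TABLE #vlf (
    RecoveryUnitId INT,   -- remove on SQL Server 2008 R2 and earlier
    FileId INT, FileSize BIGINT, StartOffset BIGINT,
    FSeqNo BIGINT, [Status] INT, Parity TINYINT, CreateLSN NUMERIC(38,0));
INSERT INTO #vlf EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS total_vlfs,
       SUM(CASE WHEN [Status] = 2 THEN 1 ELSE 0 END) AS active_vlfs
FROM #vlf;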

Let's see an example of how many VLFs are created at the time of database creation.
CREATE DATABASE [VLFTest] ON PRIMARY
( NAME = N'VLFTest', FILENAME = N'C:\VLFTest.mdf' , SIZE = 100MB , FILEGROWTH = 100MB )
LOG ON ( NAME = N'VLFTest_log', FILENAME = N'C:\VLFTest_log.ldf' , SIZE = 50MB , FILEGROWTH = 10%)

USE VLFTest
DBCC LOGINFO -- Returns 4 rows (4 VLFs)

USE master
DROP DATABASE [VLFTest]

CREATE DATABASE [VLFTest] ON PRIMARY
( NAME = N'VLFTest', FILENAME = N'C:\VLFTest.mdf' , SIZE = 100MB , FILEGROWTH = 100MB )
LOG ON ( NAME = N'VLFTest_log', FILENAME = N'C:\VLFTest_log.ldf' , SIZE = 65MB , FILEGROWTH = 10%)

USE VLFTest
DBCC LOGINFO -- Returns 8 rows (8 VLFs)

USE master
DROP DATABASE [VLFTest]
CREATE DATABASE [VLFTest] ON PRIMARY
( NAME = N'VLFTest', FILENAME = N'C:\VLFTest.mdf' , SIZE = 100MB , FILEGROWTH = 100MB )
 LOG ON ( NAME = N'VLFTest_log', FILENAME = N'C:\VLFTest_log.ldf' , SIZE = 2GB , FILEGROWTH = 10%)

USE VLFTest
DBCC LOGINFO -- Returns 16 rows (16 VLFs)

USE master
DROP DATABASE [VLFTest]

Let's see an example of how too many VLFs are created when the log file is altered or grows.
Points to remember:
1. Too many VLFs are created when the autogrowth increment is too small.
2. Perform capacity planning up front to avoid creating too many VLFs.
3. Too many VLFs slow down crash recovery: SQL Server takes longer to discover the VLFs, and therefore longer to recover the database or attach it from disk.

Demo:
If the autogrow property is set badly, it has a major impact on the VLF count.
CREATE DATABASE [VLFTest] ON PRIMARY
( NAME = N'VLFTest', FILENAME = N'C:\VLFTest.mdf' , SIZE = 100MB , FILEGROWTH = 100MB ) LOG ON
( NAME = N'VLFTest_log', FILENAME = N'C:\VLFTest_log.ldf' , SIZE = 64MB , FILEGROWTH = 10%)

USE VLFTest
DBCC LOGINFO -- Initially 4 VLFs exist: a 64 MB log file gets 4 VLFs.

USE VLFTest
go
sp_helpfile

Autogrowth is set to 10%, so the first growth takes the log file to about 71 MB (64 MB + 10%); we simulate that growth manually:
ALTER DATABASE VLFTest MODIFY FILE ( NAME = N'VLFTest_log', SIZE = 71MB )

USE VLFTest
DBCC LOGINFO -- Now shows 8 VLFs: the 7 MB growth chunk is under 64 MB, so 4 more VLFs were added. So far everything seems fine.
We will run the ALTER command one more time to grow the log to 77 MB:
ALTER DATABASE VLFTest MODIFY FILE ( NAME = N'VLFTest_log', SIZE = 77MB )

USE VLFTest
DBCC LOGINFO -- The VLF count has increased to 12. Every 10% autogrowth adds 4 more VLFs, so repeated autogrowths inflate the count abnormally. Be careful when choosing the autogrowth setting.
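
One common mitigation is to replace the percentage growth with a fixed increment, so each growth event adds a predictable number of VLFs. A sketch, where 512 MB is only an example value (size it for your workload):

ALTER DATABASE VLFTest
MODIFY FILE ( NAME = N'VLFTest_log', FILEGROWTH = 512MB )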

How to reduce VLFs

Check the log space used in the VLFTest database with:
DBCC SQLPERF(LOGSPACE) -- If Log Space Used (%) is very low, most VLFs are inactive, and we can reduce the VLF count with the command below.

NOTE – Shrinking is a blocking operation; take downtime if you do this in production.
USE VLFTest
DBCC SHRINKFILE(VLFTest_log, TRUNCATEONLY) -- Removes the inactive VLFs at the end of the log file

USE VLFTest
DBCC LOGINFO

TRUNCATEONLY – Releases the space held by the inactive virtual log files at the end of the log file without moving any data inside the file, and it does not break the LSN chain.
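
Putting the pieces together, a commonly used sequence to rebuild a log with fewer VLFs looks like this. This is a sketch: the backup path, the intermediate 1 MB target, and the final 1024 MB size are example values, and step 1 assumes the full recovery model with a full backup already taken.

USE VLFTest
-- 1. Back up the log so the VLFs at the end become inactive
BACKUP LOG VLFTest TO DISK = N'C:\VLFTest_log.trn'
-- 2. Shrink the log as small as possible, removing inactive VLFs
DBCC SHRINKFILE (VLFTest_log, 1)
-- 3. Grow it back to the planned size in one step, creating few, large VLFs
ALTER DATABASE VLFTest MODIFY FILE ( NAME = N'VLFTest_log', SIZE = 1024MB )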

Monday, February 17, 2014

SQL Server Profiler : Create a Trace Template, Import, Export Template and Delete a Template


SQL Server Profiler

How do you save a template, import a template, export a trace template, edit a template, and delete a template file?

Using SQL Server Profiler we can create our own templates to troubleshoot issues such as identifying slow queries or viewing locks, among other purposes.

Steps to save a user-defined template file.
From SQL Server Profiler

File – New Trace – Server Name – specify a server on which you have permission to run Profiler – Connect
Trace Name – specify a name (optional when creating a template file)
Click on the Events Selection tab
Remove all events.

Select RPC:Completed, RPC:Starting ( in Stored Procedures Event Class)
Select SQL:BatchCompleted, SQL:BatchStarting (in TSQL Event Class)
Uncheck Show all events and Show all columns to hide unwanted information.

Click Organize Columns, where you can set the order of the columns and also group the columns by which you want to view details.
Here I am not moving any column into the Groups section, but I make SPID the first column in the Columns section, then hit OK.
Hit Run and then Stop.
File – Save As – Trace Template – specify the name of the template – "Template to View Stored Procs"
Once you save the template definition, it will appear in the Use the template list on the General tab of the New Trace window.


To Export the Template
Open SQL Profiler
File – Templates – Export Template
Server Type - Microsoft SQL Server 2008
Template Name – select the template we created in the earlier step – OK
File Name – specify the location and name of the template, which is stored with a .tdf extension
The same template file can be imported on another server using File – Templates – Import Template
When specifying a template in Profiler you can then select the user-defined template and move on.
Using the same File – Templates menu we can also edit or delete the template.
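
The same four events can also be captured without the GUI as a server-side trace. A minimal sketch using the sp_trace_* procedures; the file path is an example (no extension, .trc is appended), event IDs 10-13 are RPC:Completed, RPC:Starting, SQL:BatchCompleted, and SQL:BatchStarting, and columns 1 and 12 are TextData and SPID:

DECLARE @TraceID INT, @maxfilesize BIGINT, @on BIT
SET @maxfilesize = 50  -- MB per rollover file
SET @on = 1
-- 2 = TRACE_FILE_ROLLOVER
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\StoredProcs', @maxfilesize
-- Capture TextData (1) and SPID (12) for each of the four events
EXEC sp_trace_setevent @TraceID, 10, 1, @on   -- RPC:Completed
EXEC sp_trace_setevent @TraceID, 10, 12, @on
EXEC sp_trace_setevent @TraceID, 11, 1, @on   -- RPC:Starting
EXEC sp_trace_setevent @TraceID, 11, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 1, @on   -- SQL:BatchCompleted
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setevent @TraceID, 13, 1, @on   -- SQL:BatchStarting
EXEC sp_trace_setevent @TraceID, 13, 12, @on
EXEC sp_trace_setstatus @TraceID, 1           -- start the trace
SELECT @TraceID AS TraceID                    -- keep this ID to stop/close the trace later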

Tuesday, February 11, 2014

Type of Locks in SQL Server

Type of Locks

Shared Locks - SQL Server takes shared (S) locks for all read operations. If a shared lock is placed on a data page, other transactions can also acquire shared locks on the same page even before the first transaction completes; any number of concurrent sessions can hold shared locks on the same resource. No other transaction can modify the data (or place an exclusive lock) until all shared locks are released. A shared lock is released as soon as the read operation completes. For example, SELECT * FROM Persons places a shared lock on the first page of the Persons table when the query starts; after the data on that page is read, the lock on the page is released. It does not hold the lock until the statement completes or until the end of the transaction.

Update Locks - SQL Server places an update (U) lock on a resource (page / row) that a transaction intends to modify. To update a row, the transaction must first read it, which would normally take a shared lock on that page / row. If two transactions each held a shared lock on the same record and then both tried to convert to an exclusive (X) lock to modify it, neither could proceed until the other released its shared lock, and a deadlock would result in an environment where multiple transactions try to update the same record at the same time. Update locks prevent this situation: only one transaction at a time can hold an update lock on a resource. An update lock still allows shared locks from other sessions on the same page / row, but it does not allow exclusive locks or other update locks. When the transaction is ready to modify the record and no other shared locks remain on the resource, the update lock is promoted to an exclusive lock.
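
Update locks are easy to observe by requesting them explicitly with the UPDLOCK hint. A minimal sketch, where the Persons table and PersonID column are assumptions:

BEGIN TRANSACTION
-- UPDLOCK takes U locks (instead of S locks) on the rows read
SELECT * FROM Persons WITH (UPDLOCK) WHERE PersonID = 1
-- While the transaction stays open, sys.dm_tran_locks shows request_mode = U
-- for this session on the key/row; other sessions can still read the row.
ROLLBACK TRANSACTION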

In general, read operations acquire shared locks and write operations acquire exclusive locks.

Exclusive Lock – SQL Server uses exclusive (X) locks to prevent access to a resource by other transactions; while an exclusive lock is held on a resource, no other transaction can read or modify that data.

Intent Locks (I) - SQL Server uses intent locks to establish a locking hierarchy. For example, if a transaction places an exclusive (X) lock on a row, SQL Server places an intent lock on the page and table above it. The intent lock prevents other transactions from modifying or dropping the table / page while the session is working with that row. Intent locks also improve locking performance by allowing SQL Server to examine locks at the table level to determine whether any locks are held lower down, rather than scanning every page- or row-level lock within the table. There are three types of intent locks: Intent Shared (IS), Intent Exclusive (IX), and Shared with Intent Exclusive (SIX). An IS lock indicates the process holds shared locks on lower-level resources (page / row); an IX lock indicates the process holds exclusive locks on lower-level resources. The SIX lock occurs in a special circumstance: a transaction holds a shared (S) lock on a resource and later in the transaction needs an IX lock; in that case the S lock is converted to SIX.

Schema Locks - SQL Server uses schema locks to maintain the structural integrity of tables. There are two types of schema locks: Schema Modification (Sch-M) and Schema Stability (Sch-S). SQL Server places a Schema Modification lock while a DDL operation such as ALTER TABLE or CREATE TABLE is being performed, and during that time it prevents concurrent access to the table until the transaction completes or commits.
Schema Stability locks are placed while read operations run; they are similar to shared locks on the object. They do not block normal processing, including write operations, but they do block DDL operations on that resource. This means no other process can modify the schema of the object, such as dropping an index or altering a stored procedure or table, while another process is referencing the object.

Bulk Update Lock (BU) - This is a special type of lock used only when bulk loading data with the bcp utility or the BULK INSERT command (with the TABLOCK hint specified). A bulk update lock allows multiple threads to bulk load data concurrently into the same table while blocking other processes that are not bulk loading.
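
To see several of these lock modes at once, start a modification in one session and leave the transaction open, then run the DMV query below from another session. A sketch, where the Persons table is an assumption:

BEGIN TRANSACTION
UPDATE Persons SET LastName = 'Smith' WHERE PersonID = 1
-- While this transaction stays open, the query below shows an X lock on the
-- row (KEY or RID), an IX lock on its page, and an IX lock on the table.
-- ROLLBACK TRANSACTION  -- run this afterwards to release the locks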

-- View locks currently held in user databases
SELECT  resource_type,
        request_session_id,
        DB_NAME(resource_database_id) AS DatabaseName,
        request_mode,
        resource_description
FROM    sys.dm_tran_locks
WHERE   resource_database_id > 4

Monday, February 10, 2014

Locking In SQL Server

Locking

Locking is a normal part of SQL Server operations. When a row is read by a query, a lock is placed on that row to prevent modifications while it is being read. If we issue an UPDATE statement on a single row, a lock is placed on that row to prevent other sessions from reading data that is being updated. The database engine issues different types of locks to determine how concurrent transactions can access resources.

The main resources SQL Server can lock are a row, a page, and a table. There are other locks as well, such as database and extent locks, but these have little effect on the user experience with the server. Extent locks are used when allocating new extents to obtain more pages for an index or table, and SQL Server uses a database lock to track when a database is in use by one or more users.

SQL Server locks resources using different lock modes that determine how the resources can be accessed by concurrent transactions; the full list of modes appears at the end of this post.

SQL Server has two levels of locking,

Page Level Locks & Table Level Locks

Page-level locks are less restrictive than table-level locks. A page-level lock covers all rows on one page, whereas a table-level lock covers every row in the table. SQL Server uses page-level locks where possible so that concurrent sessions can still access the rest of the table, which improves concurrency.

Table-level locks are placed by SQL Server when a large portion of the table is being used, or when most of the table's pages will be accessed by the transaction. If an UPDATE or DELETE has no useful index, it does a table scan and takes a table lock. If an UPDATE or DELETE statement uses an index but tries to acquire so many page locks that it reaches the lock-escalation threshold, the locks escalate to a table lock.
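
On SQL Server 2008 and later, lock escalation behavior can be influenced per table. A sketch, with dbo.Persons as a placeholder table name:

-- TABLE (default): escalate to the table level
-- AUTO: escalate to the partition level on partitioned tables
-- DISABLE: mostly prevent escalation for this table
ALTER TABLE dbo.Persons SET (LOCK_ESCALATION = AUTO)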

The lock modes are: Shared (S), Update (U), Exclusive (X), the Intent (I) locks Intent Shared (IS), Intent Exclusive (IX), and Shared with Intent Exclusive (SIX), the Schema locks (Sch-M and Sch-S), and Bulk Update (BU).