SQL Server–Database Configuration Report


When coming into a new environment, it’s a good idea to get a baseline of all of the databases on each of the systems in the environment.  The script below generates a report of all of the databases on a given SQL Server instance.  Looking through the results, you can quickly identify differences and anomalies between databases.

use master
GO
 
SELECT
    D.name AS DatabaseName,
    CONVERT(BIT,
    CASE WHEN source_database_id IS NULL THEN 0 ELSE 1 END) AS IsSnapshot,
    SP.name AS DatabaseOwner,
    D.create_date AS DatabaseCreateDate,
    D.compatibility_level AS DatabaseCompatibilityLevel,
    D.collation_name AS DatabaseCollation,
    D.user_access_desc AS UserAccess,
    CASE WHEN is_read_only = 1 THEN 'READ_ONLY' ELSE 'READ_WRITE' END AS ReadWriteAccess,
    D.is_auto_close_on AS AutoCloseEnabled,
    D.is_auto_shrink_on AS AutoShrinkEnabled,
    D.state_desc AS State,
    D.is_in_standby AS InStandby,
    D.is_cleanly_shutdown AS IsCleanlyShutdown,
    D.snapshot_isolation_state_desc AS SnapshotIsolation,
    D.is_read_committed_snapshot_on AS IsRCSIEnabled,
    D.recovery_model_desc AS RecoveryModel,
    D.page_verify_option_desc AS PageVerificationModel,
    D.is_auto_create_stats_on AS AutoCreateStats,
    D.is_auto_update_stats_on AS AutoUpdateStats,
    D.is_auto_update_stats_async_on AS AutoUpdateStatsAsync,
    D.is_ansi_null_default_on AS AnsiNullDefault,
    D.is_ansi_nulls_on AS AnsiNulls,
    D.is_ansi_padding_on AS AnsiPadding,
    D.is_ansi_warnings_on AS AnsiWarnings,
    D.is_arithabort_on AS ArithAbort,
    D.is_concat_null_yields_null_on AS ConcatNullYieldsNull,
    D.is_numeric_roundabort_on AS NumericRoundAbort,
    D.is_quoted_identifier_on AS QuotedIdentifier,
    D.is_recursive_triggers_on AS RecursiveTriggers,
    D.is_cursor_close_on_commit_on AS CursorCloseOnCommit,
    D.is_local_cursor_default AS LocalCursorDefault,
    D.is_fulltext_enabled AS FullTextEnabled,
    D.is_trustworthy_on AS Trustworthy,
    D.is_db_chaining_on AS DbChaining,
    D.is_parameterization_forced AS ForcedParameterization,
    D.is_master_key_encrypted_by_server AS MasterKeyEncryptedByServer,
    D.is_published AS Published,
    D.is_subscribed AS Subscribed,
    D.is_merge_published AS MergePublished,
    D.is_distributor AS Distributor,
    D.is_sync_with_backup AS ReplicationSyncWithBackup,
    D.is_broker_enabled AS BrokerEnabled,
    D.is_date_correlation_on AS DateCorrelationEnabled,
    D.is_cdc_enabled AS CdcEnabled,
    D.is_encrypted AS Encrypted,
    D.is_honor_broker_priority_on AS HonorBrokerPriority
FROM
    sys.databases AS D
    INNER JOIN
    sys.server_principals AS SP
    ON
    D.owner_sid = SP.sid
ORDER BY
    DatabaseName
GO

SQL Server–List data file sizes


Here is a quick script to identify the size (in MB) of just the data files for all of the databases on a given instance of SQL Server.

use master
GO
 
SELECT
    sys.databases.name AS DatabaseName,
    SUM(sys.master_files.size) * 8 / 1024 AS DatabaseSizeMB
FROM
    sys.master_files
    INNER JOIN
    sys.databases
    ON
    sys.databases.database_id = sys.master_files.database_id
WHERE
    sys.databases.database_id > 4
    AND
    sys.master_files.type_desc = 'ROWS'
GROUP BY
    sys.databases.name
ORDER BY
    DatabaseSizeMB
GO
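If you also want to see the log files alongside the data files, a small variation on the script above (a sketch, not part of the original report) breaks the size out by file type using the same 8 KB-per-page math:

use master
GO
 
SELECT
    sys.databases.name AS DatabaseName,
    SUM(CASE WHEN sys.master_files.type_desc = 'ROWS' THEN sys.master_files.size ELSE 0 END) * 8 / 1024 AS DataSizeMB,
    SUM(CASE WHEN sys.master_files.type_desc = 'LOG' THEN sys.master_files.size ELSE 0 END) * 8 / 1024 AS LogSizeMB
FROM
    sys.master_files
    INNER JOIN
    sys.databases
    ON
    sys.databases.database_id = sys.master_files.database_id
WHERE
    sys.databases.database_id > 4
GROUP BY
    sys.databases.name
ORDER BY
    DataSizeMB DESC
GO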

How to: Identify SQL Server Transaction Log Usage by Table


The transaction log of a SQL Server database is often misunderstood.  In short, the transaction log records every change made to the database.  Yes, that’s right – every change.  As such, depending on the changes being made to the database, the recovery model, and other factors that can prevent log truncation, the transaction log can grow – sometimes out of control.

A SQL Server transaction log that has runaway growth can have several negative side effects, including:

  1. Consuming all of the free space on the disk
  2. Overloading the I/O subsystem
  3. Causing Transaction Log Shipping to fall out of sync

So, how can one determine the cause of the rapid growth of the transaction log?  Well, fortunately, there is a function named sys.fn_dblog.  This function is officially undocumented; however, it is very straightforward to use.  It takes 2 input parameters, the first being the starting LSN (Log Sequence Number) and the second being the ending LSN.  If NULL is provided for either of these parameters, that parameter is effectively ignored.  What sys.fn_dblog returns is a table with 129 columns (in SQL Server 2012).  While there is enough information to write a book about all of these columns, we are going to focus our attention on only 3 columns:

  1. AllocUnitName
  2. Operation
  3. Log Record Length

The combination of the above columns will give us the information we need to identify the cause of transaction log growth.
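If you just want to eyeball the raw output before aggregating anything, a quick peek like the one below works (a minimal sketch; run it in the context of the database you are investigating):

--Peek at the most recent raw log records for the current database
SELECT TOP (10)
    [Current LSN],
    AllocUnitName,
    Operation,
    [Log Record Length]
FROM
    sys.fn_dblog(NULL, NULL) --NULL, NULL = no LSN filtering
GO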

Environment Setup

use master
GO
 
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'DeleteMe')
BEGIN
    ALTER DATABASE DeleteMe SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE DeleteMe
END
GO
 
CREATE DATABASE DeleteMe
GO
 
ALTER DATABASE DeleteMe SET RECOVERY SIMPLE
ALTER DATABASE DeleteMe MODIFY FILE (NAME = N'DeleteMe_log', SIZE = 10240KB)
GO
 
USE DeleteMe
GO
 
CREATE TABLE dbo.TestTable
(
    TestTableId INT NOT NULL IDENTITY(1, 1),
    Column01 INT NOT NULL,
    Column02 NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED
    (
        TestTableId ASC
    )
)
GO

 

The code above simply:

  1. Creates the database
  2. Sets the recovery model to simple
  3. Sets the transaction log size to 10 MB
  4. Creates a table named TestTable

Now that we have an environment to work in, let’s clear out the transaction log.  Since the recovery model is SIMPLE (the behavior formerly known as “truncate log on checkpoint”), issuing a simple CHECKPOINT command will clear the entries out of the transaction log.

use DeleteMe
GO
 
CHECKPOINT
GO

INSERT Operations

Next, let’s insert a few rows of data to work with.

use DeleteMe
GO
 
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO

 

Now that we’ve just performed an insert, there should be records written to the transaction log.  We can check with the following query:

use DeleteMe
GO
 
WITH CTE AS
(
    SELECT
        AllocUnitName,
        Operation,
        SUM(CONVERT(BIGINT, [Log Record Length])) AS TotalTranLogBytes,
        SUM(CONVERT(BIGINT, [Log Record Length])) * 100 /
            SUM(CONVERT(MONEY, SUM(CONVERT(BIGINT, [Log Record Length]))))
            OVER() AS PercentOfLog
    FROM
        sys.fn_dblog(NULL,NULL)
    GROUP BY
        AllocUnitName,
        Operation
)
 
SELECT
    AllocUnitName,
    Operation,
    TotalTranLogBytes,
    PercentOfLog
FROM
    CTE
WHERE
    PercentOfLog >= 0
ORDER BY
    TotalTranLogBytes DESC
GO

 

On my system, when I execute the above query, I get the following:

[Image: sys.fn_dblog results after the INSERT]

You can see right at the top of the list that the LOP_INSERT_ROWS operation on the clustered index of dbo.TestTable accounts for 21.524% of my total transaction log consumption.

UPDATE Operations

Let’s try an update next:

use DeleteMe
GO
 
UPDATE
    dbo.TestTable
SET
    Column01 = 1,
    Column02 = 'One'
GO

 

Since I omitted the WHERE clause, we updated all 5 rows in the table.  Checking with sys.fn_dblog again, we get the following result:

[Image: sys.fn_dblog results after the UPDATE]

Again, at the top, we can see the LOP_MODIFY_ROW operation which represents the UPDATE operation we just performed.

TRUNCATE vs. DELETE

How about TRUNCATE vs. DELETE?  Let’s check out TRUNCATE first:

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
 
--Insert 5 rows
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5 rows in the table
and the transaction log is clear.
*/
 
--Truncate the table
TRUNCATE TABLE dbo.TestTable
GO

 

We have cleared the table, inserted a fresh set of rows, cleared the transaction log using a CHECKPOINT and then issued the TRUNCATE TABLE command.  Here is our result:

[Image: sys.fn_dblog results after the TRUNCATE]

The sum of the “TotalTranLogBytes” comes out to 2,402.  As you can see, some system tables are updated to take note of the now missing data, but only 196 “TotalTranLogBytes” were used for the actual clustered index of dbo.TestTable.

Let’s check DELETE next.

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
 
--Insert 5 rows
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5 rows in the table
and the transaction log is clear.
*/
 
--DELETE from the table
DELETE dbo.TestTable
GO

 

We’ve just performed the same set of steps, only replacing the last TRUNCATE with a DELETE.  Checking the transaction log we see the following:

[Image: sys.fn_dblog results after the DELETE]

The “TotalTranLogBytes” here totals only 1,224!  How can that be?  DELETE operations are supposed to consume more transaction log space than TRUNCATE operations.

Let’s check it again:

[Image: sys.fn_dblog results re-checked a few moments later]

Whoa!  We didn’t execute any further commands but our “TotalTranLogBytes” just jumped to 1,612.  How can that be?  Well, the reason for this is how SQL Server handles DELETE operations.  When a record is deleted, it is simply marked for deletion and becomes a “ghost record”.  A separate process comes along (think Garbage Collection) and actually deletes the records at a later time.
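If you want to see those ghost records for yourself before the cleanup task removes them, sys.dm_db_index_physical_stats exposes a ghost record count.  A quick sketch (DETAILED mode is used here because the count is not populated in LIMITED mode):

use DeleteMe
GO
 
--Ghost records left behind by the DELETE (removed asynchronously)
SELECT
    index_id,
    record_count,
    ghost_record_count
FROM
    sys.dm_db_index_physical_stats
    (DB_ID(N'DeleteMe'), OBJECT_ID(N'dbo.TestTable'), NULL, NULL, 'DETAILED')
GO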

So, with only five rows, the TRUNCATE statement still consumed slightly more transaction log space than the DELETE.  Let’s check this again with larger data sets.

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
GO
 
--Insert 5 rows, 1,000 times (5,000 rows total)
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO 1000
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5,000 rows in the table
and the transaction log is clear.
*/
 
--TRUNCATE the table
TRUNCATE TABLE dbo.TestTable
GO

 

I’ve modified the script slightly to insert 5,000 rows instead of just 5 (note the GO 1000 batch repeat).  When we check the transaction log again, this is what we find:

[Image: sys.fn_dblog results after truncating 5,000 rows]

A total of 4,438 bytes have been consumed truncating this table of 5,000 records.

Let’s try the same thing with DELETE:

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
GO
 
--Insert 5 rows, 1,000 times (5,000 rows total)
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO 1000
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5,000 rows in the table
and the transaction log is clear.
*/
 
--DELETE from the table
DELETE dbo.TestTable
GO

 

This is what we get:

[Image: sys.fn_dblog results after deleting 5,000 rows]

The DELETE operation took a total of 968,604 bytes – more than 218 times the space taken by the TRUNCATE operation.  This, of course, also translates into additional disk I/O and time.

Production Usage

I would like to point out that sys.fn_dblog is an undocumented function and, as such, is subject to change without notice from Microsoft.  Additionally, this function can be resource intensive, so if you use this code in production, please do so with care.

Until next time.

How to: Identify the object stored at a given page location–SQL Server


Oftentimes when troubleshooting performance issues with SQL Server, you will identify a file id and a page id that is the source of your troubles.  The question is: how do I translate this File Id / Page Id combination into an actual object within SQL Server?  Well, with the following script, of course:

How to match a File Id / Page Id to a SQL Server object

use tempdb
GO
 
--Set the parameters
DECLARE @FileId INT = 1
DECLARE @PageId INT = 122
 
--Create a table variable to hold the DBCC PAGE output
DECLARE @PageTable TABLE
(
    ParentObject NVARCHAR(500),
    [Object]     NVARCHAR(500),
    Field        NVARCHAR(500),
    VALUE        NVARCHAR(500)
)
 
--Create the DBCC PAGE command
DECLARE @CommandString NVARCHAR(1000)
SET @CommandString = 'DBCC PAGE(''' + DB_NAME() + ''', ' +
                      CONVERT(VARCHAR, @FileId) + ', ' +
                      CONVERT(VARCHAR, @PageId) +
                      ', 0) WITH TABLERESULTS, NO_INFOMSGS'
 
--Execute the DBCC PAGE command into the @PageTable table variable
INSERT @PageTable EXEC (@CommandString)
 
--Extract the schema and object name based on the object_id
SELECT
    sys.schemas.name + '.' + sys.objects.name AS ObjectName
FROM
    @PageTable
    INNER JOIN
    sys.objects
    ON
    CONVERT(INT, VALUE) = sys.objects.object_id
    INNER JOIN
    sys.schemas
    ON
    sys.objects.schema_id = sys.schemas.schema_id
WHERE
    ParentObject = 'PAGE HEADER:'
    AND
    Field = 'Metadata: ObjectId'
GO

In short, the script does the following:

  1. Set the database context (I am using tempdb in this example).
  2. Specify the file id and the page id.
  3. Create a temporary table to hold the contents of the specified page.
  4. Execute DBCC PAGE to get the contents of the page.
  5. Extract the ObjectId from the page contents and join to sys.objects.

That’s it!  So next time you track your performance issue down to a specific file id and page id, you won’t be scratching your head wondering what it all means.
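As an aside, if you happen to be on SQL Server 2019 or later, the documented sys.dm_db_page_info function can answer the same question without resorting to DBCC PAGE.  A sketch under that version assumption, using the same file id and page id as above:

use tempdb
GO
 
--SQL Server 2019+ only: documented alternative to the DBCC PAGE approach
SELECT
    OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS ObjectName
FROM
    sys.dm_db_page_info(DB_ID(), 1, 122, 'LIMITED')
GO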

How to: Understand the SQL Server FILLFACTOR setting


What is Fill Factor?

When dealing with index design in SQL Server, one of the most important things to configure is the fill factor.  The fill factor is a number between 0 and 100 which indicates how full (percentage-wise) each leaf-level page of the index should be when the index is created.  This setting also comes into play when an index is rebuilt or reorganized (defragmented).  A fill factor of 0 is functionally the same as a fill factor of 100.
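For reference, the fill factor is set per index, either when the index is created or when it is rebuilt.  A minimal sketch (the table and index names here are made up purely for illustration):

--Set a fill factor of 90 when creating an index
CREATE NONCLUSTERED INDEX IX_Example_Column01
    ON dbo.Example (Column01)
    WITH (FILLFACTOR = 90)
GO
 
--Apply (or change) the fill factor during a rebuild
ALTER INDEX IX_Example_Column01
    ON dbo.Example
    REBUILD WITH (FILLFACTOR = 90)
GO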

Impact on Disk Space

The higher the fill factor, the fuller each page of the index is once it’s created.  That means fewer pages are required to store the same amount of information.  As the fill factor drops, the number of pages required increases because each page is not as full.  The chart below shows the impact of different fill factors on a sample index.

[Chart: index size versus fill factor for a sample index]

As you can see, the index size grows slowly at first and then balloons as the fill factor drops.  So, generally speaking, the difference in space consumption between, say, a fill factor of 90 and 100 is negligible.  The difference between 80 and 90 is possibly tolerable, but fill factors lower than 80 quickly balloon the index, wasting considerable space on disk.

Impact on Memory Space

When SQL Server needs data from an index or a heap, it reads that data from disk and stores it in memory.  SQL Server will then try to keep that data in memory as long as possible to avoid having to read it from disk again.  The catch is that SQL Server reads data into memory in whole pages, which means the pages maintain the same density in memory that they have on disk.  So, wasted space on disk equals wasted space in memory.
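If you want to see how that plays out on your own instance, sys.dm_os_buffer_descriptors shows how much of the buffer pool each database currently occupies.  A quick sketch (each buffered page is 8 KB, the same math used for the data file sizes earlier):

use master
GO
 
--Buffer pool consumption per database
SELECT
    DB_NAME(database_id) AS DatabaseName,
    COUNT(*) * 8 / 1024 AS BufferPoolMB
FROM
    sys.dm_os_buffer_descriptors
GROUP BY
    database_id
ORDER BY
    BufferPoolMB DESC
GO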

Data Warehouses and Fill Factor

The lower the fill factor, the more reads will be required to access the same information.  For data that is very “read-centric”, this can be a problem.  For example, in data warehouses, data is written once and read many times.  Individual records rarely change in a data warehouse, so assuming there is a solid partitioning plan in place, setting a fill factor of 100 is *generally* a good idea.

OLTP Databases and Fill Factor

OLTP databases usually have a high number of both reads and writes, so balance is key.  Obviously, the fill factor should be high enough to minimize the number of reads required to access any specific record; however, if it is too high, write performance can be negatively impacted by page splits.  Index data in SQL Server is stored in B-trees.  When a record is inserted into an index, SQL Server will start at the root, traverse the tree to the page where the record belongs and, if there is adequate free space, insert the record into that page.  If SQL Server reaches that page and there is not enough room, SQL Server will perform the following:

  1. Create a new page
  2. Make an entry into the parent node pointing to the newly created page
  3. Split the original target page in half (hence the “page split”)
  4. Move 1/2 of the rows to the new page

As you can see, this can be quite an expensive operation – especially if the parent node is also full, in which case the same operation takes place at the parent level, and so on up the tree.

So, if the fill factor is set to 100 on an index that receives frequent inserts, there will be a negative impact on write performance.  Generally speaking, start by setting the fill factor to 90 and then monitor page splits, either directly using Extended Events or indirectly by analyzing page density; from there, the fill factor can be adjusted up or down.  If the index is receiving a lot of page splits or page density is dropping dramatically, consider lowering the fill factor.  If there are few to no page splits, consider raising the fill factor.  Just keep in mind that changing between 90 and 95 will have little impact, but changing from 80 to 90 may have a noticeable impact (see the chart above).
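One way to do that indirect monitoring is to look at page density (and fragmentation) with sys.dm_db_index_physical_stats.  A sketch, again using a made-up dbo.Example table (SAMPLED or DETAILED mode is required to populate the density column):

--Page density and fragmentation for one table's indexes
SELECT
    OBJECT_NAME(object_id) AS TableName,
    index_id,
    avg_page_space_used_in_percent AS PageDensity,
    avg_fragmentation_in_percent AS Fragmentation
FROM
    sys.dm_db_index_physical_stats
    (DB_ID(), OBJECT_ID(N'dbo.Example'), NULL, NULL, 'SAMPLED')
GO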

PAD_INDEX option

One more thing to point out about fill factor is the PAD_INDEX setting.  When specifying a fill factor, the PAD_INDEX option can be turned on, which forces SQL Server to apply the same fill factor to the intermediate-level pages of the B-tree.  By default, this option is turned off.
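Syntactically, PAD_INDEX rides along with FILLFACTOR in the index options.  A minimal sketch, continuing with the illustrative index name from above:

--Apply the fill factor to the intermediate-level pages as well
ALTER INDEX IX_Example_Column01
    ON dbo.Example
    REBUILD WITH (FILLFACTOR = 90, PAD_INDEX = ON)
GO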

How to: Check the time of the last clean DBCC CHECKDB in SQL Server


DBCC CHECKDB is a common maintenance task executed to verify that the data stored in a SQL Server database is clean (i.e. not corrupt).  Generally speaking, it is good practice to run DBCC CHECKDB against any database that contains valuable data.  The one database where DBCC CHECKDB is not necessary is tempdb, as tempdb is recreated from scratch every time the SQL Server service is started.  All of the other system and user databases should be checked on a regular basis – if possible.

So, how do you know the last time DBCC CHECKDB was executed successfully against a given database?  Why with the following script of course!

use master
GO
 
DBCC TRACEON (3604)
GO
 
DECLARE @DBCCPage AS TABLE
(
    DatabaseName    SYSNAME NULL,
    ParentObject    SYSNAME,
    [Object]        SYSNAME,
    Field           SYSNAME,
    [Value]         SQL_VARIANT
)
 
DECLARE DatabaseCursor CURSOR LOCAL FAST_FORWARD
FOR SELECT name FROM sys.databases WHERE state_desc = 'ONLINE'
 
OPEN DatabaseCursor
 
DECLARE @DatabaseName SYSNAME
 
FETCH NEXT FROM DatabaseCursor INTO @DatabaseName
 
WHILE @@FETCH_STATUS = 0
BEGIN
    --Insert the DBCC PAGE output into the @DBCCPage table
    INSERT INTO @DBCCPage (ParentObject, [Object], Field, Value)
    EXEC('DBCC PAGE ([' + @DatabaseName + '], 1, 9, 3) WITH TABLERESULTS')
   
    --Set the database name
    UPDATE @DBCCPage SET DatabaseName = @DatabaseName WHERE DatabaseName IS NULL
   
    FETCH NEXT FROM DatabaseCursor INTO @DatabaseName
END
 
SELECT DISTINCT
    DatabaseName,
    CONVERT(DATETIME, Value) AS LastGoodDBCC,
    DATEDIFF(day, CONVERT(DATETIME, Value), GETDATE()) AS DaysSinceLastGoodDBCC
FROM
    @DBCCPage
WHERE
    Field IN ('dbi_dbccLastKnownGood')
ORDER BY
    DatabaseName
GO
 
DBCC TRACEOFF (3604)
GO

Some time ago, Paul Randal published a post entitled “CHECKDB From Every Angle: When did DBCC CHECKDB last run successfully?”  This gave us the raw internals needed to identify the last good DBCC CHECKDB run.  The script above simply wraps that information into an easy-to-consume report.
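As another aside, newer builds (SQL Server 2016 SP2 onwards, as far as I can tell; treat the exact version as an assumption) expose the same timestamp through a documented property, which avoids DBCC PAGE entirely:

--Documented alternative on newer builds
SELECT
    DATABASEPROPERTYEX(N'master', 'LastGoodCheckDbTime') AS LastGoodDBCC
GO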

This is a great script to have in your toolbox – especially if you are new to an environment and you are trying to get a feel for the condition of the databases on a particular system.

That’s it for this post.  Please check back tomorrow for some more SQL Server / .NET goodness!

How to: Enable CLR (Common Language Runtime) in SQL Server


With the release of SQL Server 2005, it became possible for developers to write C# or VB.NET code that can execute within SQL Server.  One thing that is oftentimes overlooked when installing a .NET assembly into a SQL Server database is enabling the instance to execute CLR code.  In order to do this, simply execute the following code:

use master
GO
 
exec sp_configure 'show advanced options', '1'
GO
 
RECONFIGURE WITH OVERRIDE
GO
 
exec sp_configure 'clr enabled', '1'
GO
 
RECONFIGURE WITH OVERRIDE
GO

That’s it.  Short and sweet.  Once this code executes, all assemblies installed within the database are available for use.
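If you want to confirm the change took effect, sys.configurations shows both the configured value and the value currently in use:

use master
GO
 
--Verify the CLR setting is active
SELECT
    name,
    value,
    value_in_use
FROM
    sys.configurations
WHERE
    name = 'clr enabled'
GO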

That’s it for today.  Please check back tomorrow for more Tips & Tricks.