.NET Framework execution was aborted Error in SQL Errorlog


SQL Server defines many error messages that are returned when an invalid operation is performed. Based on the failed task, an appropriate error number, severity, and brief message are sent to the application that ran the query, such as SSMS or SQLCMD. Unfortunately, not all error messages are easy to understand, and it can be difficult to identify the cause and find a solution. Errors related to performance or memory can be especially hard to diagnose and troubleshoot. One error I worked on recently was ".NET Framework execution was aborted by escalation policy because of out of memory." At first glance, the message suggests a memory problem, since "out of memory" appears right in the text.

Error: 6532, Severity: 16, State: 49.
–> .NET Framework execution was aborted by escalation policy because of out of memory.
[WARNING] System.Threading.ThreadAbortException: Thread was being aborted.
[WARNING] System.Threading.ThreadAbortException:
[WARNING] at System.Data.SqlTypes.SqlChars.get_Value()
[WARNING] at System.Data.SqlServer.Internal.XvarWlobStream.GetCharArray()
[WARNING] at System.Data.SqlServer.Internal.XvarWlobStream.GetString()
[WARNING] at System.Data.SqlServer.Internal.XvarWlobStream.GetSqlString()

In addition to the above error, you may also see error messages like the ones below.
– AppDomain 2 (xxx) is marked for unload due to memory pressure.
– AppDomain Trio.dbo[runtime].xx was unloaded by escalation policy to ensure the consistency of your application. Out of memory happened while accessing a critical resource.
– Failed to initialize the Common Language Runtime (CLR) v2.0.50727 due to memory pressure.
– Additionally, you may see the above error messages with SQL Server 2008 features which use SQL CLR internally, like DMF (Declarative Management Framework) and spatial data types.

These errors can occur when SQL CLR stored procedures, user-defined functions, user-defined data types, or user-defined aggregates are used, especially on a 32-bit SQL Server instance.
– AppDomain 2 (xxx) is marked for unload due to memory pressure – This message is mostly informational, indicating that SQL CLR is responding to memory pressure on the system. If it is intermittent and is not affecting the execution of SQL CLR objects, it can be ignored.
– For the other errors, ensure that .NET Framework 2.0 is up to date with the latest available patch, and ensure the SQL CLR objects do not use any assemblies other than those documented as supported. A 32-bit SQL Server instance is limited to 4 GB of virtual memory, so if you use SQL CLR extensively on a 32-bit instance and experience one or more of the above errors, the best option is to migrate to a 64-bit SQL Server. This allows SQL CLR to access more virtual memory and may prevent these errors in your environment. When developing custom SQL CLR applications, also use SQL CLR memory carefully: avoid caching large amounts of data in objects such as DataTables (use T-SQL to retrieve the data when required), avoid static variables that store large objects, and create objects as late, and release them as early, as possible. If the issue still persists, check for known issues and patch SQL Server to the latest version.
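To see how much memory SQL CLR is actually consuming before deciding on a migration, you can query the memory clerk DMV. The sketch below assumes SQL Server 2012 or later column names; on SQL Server 2008 R2 and earlier, use single_pages_kb and multi_pages_kb instead of pages_kb.

```sql
-- Sketch: total memory used by the SQL CLR memory clerks
SELECT type,
       SUM(pages_kb) AS memory_kb
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%CLR%'
GROUP BY type;
```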

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings


Basics of SQL Server Policy Based Management


Policy-Based Management is a framework for managing one or more instances of SQL Server. When SQL Server administrators use Policy-Based Management, they use SQL Server Management Studio to create policies that manage entities on the server, such as the instance of SQL Server, databases, or other SQL Server objects. Policy-Based Management helps resolve issues in scenarios like the following: a company policy prohibits enabling Database Mail or SQL Mail. A policy is created to check the server state of those two features. An administrator compares the server state with the policy. If the server state is out of compliance, the administrator chooses the Configure mode, and the policy brings the server state into compliance.

The AdventureWorks2012 database has a naming convention that requires every stored procedure to begin with the letters AW_. A policy is created to enforce this convention. An administrator tests the policy and gets a list of stored procedures that are out of compliance. If future stored procedures do not comply with this naming convention, the creation statements for those stored procedures fail.

Important Policy-Based Management Terms:
Policy-Based Management managed target – Entities that are managed by Policy-Based Management, such as an instance of the SQL Server Database Engine, a database, a table, or an index. All targets in a server instance form a target hierarchy. A target set is the set of targets that results from applying a set of target filters to the target hierarchy, for example, all the tables in the database owned by the HumanResources schema.
Policy-Based Management facet – A set of logical properties that model the behavior or characteristics for certain types of managed targets. The number and characteristics of the properties are built into the facet and can be added or removed only by the maker of the facet. A target type can implement one or more management facets, and a management facet can be implemented by one or more target types. Some properties of a facet may apply only to a particular version.
Policy-Based Management condition – A Boolean expression that specifies a set of allowed states of a Policy-Based Management managed target with regard to a management facet. SQL Server tries to observe collations when evaluating a condition. When SQL Server collations do not exactly match Windows collations, test your condition to determine how the algorithm resolves conflicts.
Policy-Based Management policy – A Policy-Based Management condition plus the expected behavior, for example, the evaluation mode, target filters, and schedule. A policy can contain only one condition. Policies can be enabled or disabled. Policies are stored in the msdb database.
Policy-Based Management policy category – A user-defined category to help manage policies. Users can classify policies into different policy categories. A policy belongs to one and only one policy category. Policy categories apply to databases and servers.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings


Basics of SQL Server Database Instant File Initialization


When you create a database, add data or log files to an existing database, increase the size of an existing file (including autogrow operations), or restore a database or filegroup, the data and log files are first initialized by filling them with zeros, which overwrites any existing data left on the disk from previously deleted files. File initialization causes these operations to take longer. However, when data is written to the files for the first time, the operating system does not have to fill the files with zeros.

In SQL Server, data files can be initialized instantaneously, which allows fast execution of the file operations mentioned above. Instant file initialization claims disk space without filling that space with zeros; instead, the old disk content is simply overwritten as new data is written to the files. Log files cannot be initialized instantaneously.

How to Enable Database Instant File Initialization in SQL Server?
Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted the SE_MANAGE_VOLUME_NAME privilege. Members of the Windows Administrators group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy. An important point to note is that instant file initialization is not available when Transparent Data Encryption (TDE) is enabled.

To grant an account the Perform volume maintenance tasks permission:
On the computer where the backup file will be created, open the Local Security Policy application (secpol.msc).
In the left pane, expand Local Policies, and then click User Rights Assignment.
In the right pane, double-click Perform volume maintenance tasks.
Click Add User or Group and add any user accounts that are used for backups.
Click Apply, and then close all Local Security Policy dialog boxes.
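To verify whether instant file initialization is actually in effect, one common technique is to temporarily enable trace flags 3004 and 3605, which log file-zeroing activity to the errorlog. The database name below is a placeholder; if creating its data file produces no "Zeroing" message, instant file initialization is working (the log file is always zeroed regardless).

```sql
-- Temporarily log file-zeroing activity to the SQL Server errorlog
DBCC TRACEON (3004, 3605, -1);

-- Create a throwaway test database ("IFI_Test" is a placeholder name)
CREATE DATABASE IFI_Test;

-- Search the current errorlog: a "Zeroing ... IFI_Test.mdf" entry means
-- instant file initialization is NOT enabled for data files
EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';

-- Clean up
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);
```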

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings


Suspect Database Issue Due to MSDTC Orphan Transaction


It is not unusual to see various kinds of issues after a restart of a SQL Server instance, a server reboot, or a failover of clustered SQL Server instances. Below is one such issue you may encounter, where an MSDTC orphan transaction causes database recovery to fail after a server restart. These are the errors you may see in the SQL Server errorlog.

Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
The Microsoft Distributed Transaction Coordinator (MS DTC) service could not be contacted. If you would like distributed transaction functionality, please start this service.
Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
The Microsoft Distributed Transaction Coordinator (MS DTC) service could not be contacted. If you would like distributed transaction functionality, please start this service.
Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
The Microsoft Distributed Transaction Coordinator (MS DTC) service could not be contacted. If you would like distributed transaction functionality, please start this service.
SQL Server detected a DTC/KTM in-doubt transaction with UOW {995D82F2-3C9B-4249-C827-B33FC42BAD82}.Please resolve it following the guideline for Troubleshooting DTC Transactions.
Error: 3437, Severity: 21, State: 3.
An error occurred while recovering database ‘TestDB’. Unable to connect to Microsoft Distributed Transaction Coordinator (MS DTC) to check the completion status of transaction (0:-1647710395). Fix MS DTC, and run recovery again.
Error: 3414, Severity: 21, State: 2.
An error occurred during recovery, preventing the database ‘TestDB’ (database ID 6) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

– If you query sys.sysdatabases, you will see that the status of the database is shown as suspect. The suspect status is also shown for the database in the SSMS GUI.
– This issue happens after a restart of the server, when SQL Server tries to recover a database that has an incomplete transaction from a linked server or remote query that used MSDTC.
– Once the server rebooted, SQL Server started and tried to recover all databases as usual, and it found an open transaction in TestDB from a linked server query using MSDTC, but the MSDTC service had not yet started, resulting in a failure to recover the transaction.
– To resolve the issue, the DBA has to start the MSDTC service if it is not yet started.
– If the MSDTC service started later, the DBA has to recover and bring the database online by running the query below.
ALTER DATABASE DBName SET ONLINE
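If MSDTC itself cannot be repaired quickly, SQL Server also supports manually resolving an in-doubt distributed transaction with the KILL command and the UOW GUID reported in the errorlog (the GUID below is the one from the example log above). Choose WITH COMMIT or WITH ROLLBACK based on the transaction's outcome on the other resource managers; this sketch assumes a rollback is appropriate.

```sql
-- Resolve the in-doubt transaction by its Unit of Work (UOW) GUID,
-- then bring the database back online so recovery runs again
KILL '995D82F2-3C9B-4249-C827-B33FC42BAD82' WITH ROLLBACK;
ALTER DATABASE TestDB SET ONLINE;
```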

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014


SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings


SQL Server Indexes Frequently Asked Questions and Answers (FAQ) in Interviews


Index fundamentals are among the most important concepts for any DBA to understand and carry a lot of weight in interviews. Below are some of the common fundamental questions asked about SQL Server indexes.

What types of indexes can a table have in SQL Server 2012?
A table in SQL Server can have the below types of indexes.

One Clustered Index
Non-Clustered Indexes
Unique Indexes
Index with included columns
Columnstore index
Filtered Indexes
Spatial Indexes
XML Indexes

What is a Clustered index in SQL Server 2008 R2? How many clustered indexes can we create on a table?
A clustered index is an on-disk structure that contains all the columns and rows of the table, sorted by the index key. We can create only one clustered index per table, because the clustered index defines the storage order of the table data itself, and the same table cannot be stored in more than one order.

What is a Non-Clustered index in SQL Server 2008 R2? How many non-clustered indexes can we create on a table?
A non-clustered index is an on-disk structure that contains one or more key columns defined while creating the index, along with the clustered index key columns (or a row identifier, for a heap) used to locate the full row. Starting with SQL Server 2008, a table can have up to 999 non-clustered indexes.

Can we create an index on a view?
Yes, we can create indexes on a view, and the index will be persisted in the database, just like an index created on a table.

Do clustered and non-clustered indexes require unique data in the key columns?
Not necessarily; we can create clustered and non-clustered indexes as either unique or non-unique, where multiple rows can share the same key value.

Does a table having a primary key mean it also has a clustered index?
Not necessarily. By default, creating a primary key also creates a clustered index, but we can change that behavior and have the primary key create a non-clustered index instead. In either case, a unique index enforces the uniqueness of the primary key columns.

Can we create Clustered Index without a primary key in the table?
Yes, we can have a table without a primary key, but with a clustered index.

What are Heap tables in SQL Server 2014?
Any table that does not have a clustered index is called a heap table. It is generally recommended to have a clustered index on every table, though of course there can be special cases where you do not require one.

Can non-clustered indexes be created on heap tables?
Yes, we can create non-clustered indexes on heap tables; it is fully supported, though heaps themselves are generally not recommended.

Do we need to perform any regular maintenance on the indexes?
Yes, we need to regularly perform smart index optimization by rebuilding or reorganizing indexes based on their fragmentation percentage. Statistics also need to be updated on all the indexes, preferably with a full scan, to improve performance.

Does the shrink operation have any impact on indexes?
Yes, shrinking an entire database or its data files causes high fragmentation and slows down performance significantly, so shrinking data files is not recommended. Even in special cases where we have to shrink a data file, we need to rebuild the indexes immediately afterwards to reduce the fragmentation.

Does creating multiple indexes always improve performance?
Not necessarily; having too many or improper indexes also negatively affects performance, so indexes should be created only after careful evaluation and testing.

Can we create Indexes on Views in SQL Server 2012?
Yes, definitely we can create indexes on views.

What kinds of indexes can we create on views?
The first index we create on a view must be a unique clustered index; after that, we can create multiple non-clustered indexes.
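As a sketch (the view, table, and column names below are hypothetical), the view must be created WITH SCHEMABINDING before the unique clustered index can be added:

```sql
CREATE VIEW dbo.vSalesSummary
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*)   AS OrderCount,   -- COUNT_BIG(*) is required for indexed views with GROUP BY
       SUM(LineTotal) AS TotalSales
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
GO

-- The first index on a view must be a unique clustered index
CREATE UNIQUE CLUSTERED INDEX IX_vSalesSummary
    ON dbo.vSalesSummary (ProductID);
```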

Do we have to perform any maintenance on indexed views to control fragmentation?
Yes, we need to perform regular smart re-indexing and statistics updates on the indexed views as well.

Does creating indexes on views help the performance of queries?
Yes, it can improve performance, but it can also negatively impact performance in some cases, so test thoroughly before implementing on production servers.

Do we need to reference the indexed Views in the query for it to be used by the SQL optimizer?
No, even if we do not reference the indexed views, SQL Server may still use the Indexed Views.

If my table has a high number of DML (insert/update/delete) operations performed frequently, does creating indexes improve performance?
Not always. You need to test very carefully, create only the indexes that are required and actually improve performance, and remove indexes that are not used much. DML operations incur additional overhead when there are a lot of indexes, which can slow down performance.

Are there any tools that help identify which indexes to create in SQL Server 2008 R2?
– Yes, with SQL Server Management Studio (SSMS) 2008 R2 or higher, we can generate an execution plan, which also suggests any missing indexes it finds.
– Also, a tool named Database Engine Tuning Advisor (DTA) is installed with SQL Server, which helps analyze a query or trace file and suggests any missing indexes or statistics.
– We can also use SQL Server's built-in DMVs, which provide a list of missing indexes based on index usage data. Note that this index usage data is cleared when SQL Server restarts, so be careful.
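The missing-index DMVs mentioned above can be combined as in the following sketch, ordered by a rough estimate of impact:

```sql
-- Sketch: top missing-index suggestions, weighted by seeks and estimated impact
SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
     ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
     ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```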

The suggestions returned by these tools and DMVs are not always accurate; as a DBA, you need to carefully evaluate and test the suggested indexes on a test server before creating them on production servers.

How do indexes help with SQL Server performance?
With a table that has no indexes, there is only one way to access the table data: reading the entire table into memory. That data is not stored in any particular order, so a request for specific data requires reading the whole table, and sessions scanning the same data can block one another. By creating a clustered index, the data is stored in a particular order based on the columns chosen as index keys, which allows SQL Server to navigate directly to the required rows using the index keys, improving performance. Creating additional non-clustered indexes on selected columns improves performance further, because multiple queries can access the same data without blocking each other.

What are Filtered Indexes?
Filtered indexes were introduced in SQL Server 2008. They let us create an index on a set of columns covering only a subset of rows by using a WHERE condition, so the index contains only the rows that satisfy that condition.
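A minimal sketch, with hypothetical table and column names: an index covering only the rows still awaiting shipment, so it stays small and cheap to maintain.

```sql
-- Only rows with Status = 'Open' are stored in this index, so queries
-- filtering on open orders can use a much smaller index
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (OrderDate, CustomerID)
    WHERE Status = 'Open';
```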

What are Included Columns in Indexes?
An index contains key columns, which act as pointers to the full row data; these are normally the few columns used in WHERE conditions. However, if the index does not contain the other columns the query needs to return, SQL Server has to perform an additional lookup against the clustered index for the columns that are not present in the non-clustered index. We can eliminate that extra lookup by adding those columns to the non-clustered index, not as key columns but as "included columns".
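For illustration (the table and columns here are hypothetical): LastName is the key column used for seeking, while FirstName and Email are only returned by the query, so including them lets the index cover the query without the extra clustered-index lookup.

```sql
CREATE NONCLUSTERED INDEX IX_Person_LastName
    ON dbo.Person (LastName)          -- key column, used in the WHERE clause
    INCLUDE (FirstName, Email);       -- non-key columns stored at the leaf level
```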

This is applicable on below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Installing Patches to Servers Involved in Database Mirroring with Witness


Prior to SQL Server 2012, Database Mirroring was heavily used for high availability of critical databases in most organizations. It is common to install OS and SQL Server patches regularly, so patching has to be planned so that the downtime during the patching process is minimal and the application can safely run on one of the servers throughout. Below is the environment setup.

Server01 – Principal
Server02 – Mirror
Server03 – Witness
DB1 – Database Mirroring is setup with High-Safety(Synchronous) with Automatic Failover.

Below are steps to apply SQL Server Service Pack or Cumulative Updates or Operating System patches to all the servers(Principal, Mirror, Witness) involved in Database Mirroring.

– Document the current Database Mirroring Configuration settings of all the databases having Mirroring setup.
– Ensure the same SQL Server Logins exist on both the servers.
– Ensure the same SQL Server jobs exist on both servers and are disabled on the Mirror Server Server02 (just in case there are problems with patching and the databases have to be run from the Mirror Server Server02).
– Perform Full backup and Log backup of all the System and User databases on Principal Server and keep it safe. Perform backup of all system databases on Mirror and Witness servers.
– Remove the witness server from the Database Mirroring configuration. Connect to the Principal server and run the command below, changing DBName as appropriate. This step needs to be performed on all the databases where database mirroring is set up with a witness.
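The command itself is omitted in the original post; the documented syntax for removing a witness is:

```sql
-- Run on the Principal (Server01) for each mirrored database
ALTER DATABASE DBName SET WITNESS OFF;
```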


Optional: in case the database mirroring session is running in high-performance mode, change it to high-safety mode by running the command below on the Principal Server Server01. This needs to be run on all the databases where database mirroring is set up.
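The command is missing from the original post; the documented syntax for switching to high-safety (synchronous) mode is:

```sql
-- Run on the Principal (Server01) for each mirrored database
ALTER DATABASE DBName SET PARTNER SAFETY FULL;
```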


– Make sure there are no databases acting as Principal on Mirror Server Server02.
– Apply the Service Pack or Cumulative Update to the Mirror Server Server02.
– Reboot the Mirror Server Server02.
– Once the server is back online, monitor and make sure database mirroring started catching up and should get in to Synchronized state.
– Perform a manual failover of Database Mirroring from Server01 to Server02. Run the below command on Principal server Server01. This needs to be run on all the databases where database mirroring is configured.
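The failover command is omitted in the original post; the documented syntax is:

```sql
-- Run on the Principal (Server01); the mirror becomes the new principal
ALTER DATABASE DBName SET PARTNER FAILOVER;
```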


– Make sure you are able to connect to the databases on new Principal server Server02.
– Check SQL Server Errorlogs and Eventlogs to make sure there are no errors.
– If everything looks good, then proceed further.
– Apply the Service Pack or Cumulative Update to the Witness Server Server03.
– Reboot the Witness server Server03.
– Apply the Service Pack or Cumulative Update to the Mirror Server Server01.
– Reboot the new Mirror server Server01.
– Once the server is back online, monitor and make sure database mirroring started catching up and should get in to Synchronized state.
Optionally, change back to high-performance mode by running below query. Run this command on new Principal Server Server02. This needs to be run on all databases that have database mirroring setup.
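The command is omitted in the original post; the documented syntax for returning to high-performance (asynchronous) mode is:

```sql
-- Run on the new Principal (Server02) for each mirrored database
ALTER DATABASE DBName SET PARTNER SAFETY OFF;
```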


– Verify there are no errors in Witness server SQL errorlog
– Add back the Witness Server Server03. Run the below command from new Principal Server Server02. This needs to be run on all database that have database mirroring setup.
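The command is omitted in the original post; the documented syntax follows, where the endpoint address and port are placeholders that must match the actual witness endpoint:

```sql
-- Run on the new Principal (Server02); the TCP address below is a placeholder
ALTER DATABASE DBName SET WITNESS = 'TCP://Server03.yourdomain.com:5022';
```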


In case of any issues adding the witness back, check whether the witness endpoint is present and verify the permissions on the endpoint.

Optionally, fail database mirroring back to the original Principal Server Server01 by performing a manual failover from Server02 to Server01: run the command below on the Principal server Server02. This needs to be run on all the databases where database mirroring is configured.
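The command is omitted in the original post; the failback uses the same documented failover syntax:

```sql
-- Run on the current Principal (Server02); Server01 becomes the principal again
ALTER DATABASE DBName SET PARTNER FAILOVER;
```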


– Verify SQL errorlog and eventlogs on all three servers to make sure there are no issues.
– Test the Application to make sure there are no issues.
– This completes the successful patching of the servers part of database mirroring.

This is applicable on below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


SQL Server Performance Tuning Frequently Asked Questions and Answers (FAQ) in Interviews PART 1

Troubleshooting and tuning SQL Server performance are among the most important skills for a DBA; they provide a lot of value and differentiate an average DBA from a very good, skilled one. It is very common for interviewers to ask questions related to SQL Server performance troubleshooting and tuning. Demonstrating strong fundamentals and understanding of SQL Server performance tuning makes a positive impression on the interviewer and improves your chances of clearing the interview. SQL Server performance troubleshooting and tuning is a very extensive topic that covers almost all of SQL Server, as almost every configuration change can affect performance.

Below are some of the commonly asked SQL Server performance tuning questions (FAQ) and their answers provided in-line.

How do you approach or troubleshoot performance problems in SQL Server?
There are different ways of troubleshooting performance issues, and there is no single right or wrong approach; the following is simply my way of identifying the problem and then taking corrective action.

– First, try to understand more details about the performance issue. Normally performance problems are reported by developers or the client based on feedback from end users, or by monitoring tools reporting a specific performance issue on the server.
– Ask the developers or client questions to understand what they mean by performance problems, and gather more details to pinpoint the issue to a specific application or a specific piece of functionality.
– The troubleshooting approach differs for general performance problems affecting the whole application versus a more specific problem like slowness of a particular stored procedure or query. The next steps are specific to general performance troubleshooting.
– Check the overall hardware resource usage, such as CPU, memory, I/O, and network usage on the server, to see if there are spikes in any of them. If yes, drill further down in that direction; if everything looks normal, proceed with checking at the SQL Server level.
– Check the SQL Server errorlogs and event logs for any errors.
– Check for any blocking, heavy locking, or a high number of suspended sessions.
– Check wait stats to see the top waits.
– Check whether there is regular maintenance on the SQL Server, like rebuilding indexes and updating statistics. If not, implement it, which can significantly improve performance.
– Run DMVs to identify the top queries by duration, CPU, reads, or writes, and try to tune them by creating appropriate indexes, or report them to the developers with a suggestion to rewrite them.
– Check SQL Server configuration settings like MAXDOP, max server memory, Lock Pages in Memory, instant file initialization, autogrowth settings, etc.

Above steps should help in understanding the performance problem and in fixing the same.
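The wait-stats check above can be sketched with the following query; the excluded wait types are a small, non-exhaustive sample of common benign waits.

```sql
-- Sketch: top waits since the last SQL Server restart
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'CHECKPOINT_QUEUE',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```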

What questions do you ask developers or the client to understand more about the performance issue?
– What kind of performance issue are you seeing? Can you be more specific? (It is often just said that the application or the website is slow.)
– Is a specific feature or webpage of the application slow, or is the entire application slow?
– When did you start seeing the performance problems?
– Did the problems start after recent upgrades to the application, SQL Server, or the OS, or after any other changes? Was anything changed recently on the system, such as patches, hardware, or application changes?
– Are you aware of any recent changes to the data or an increase in the number of users on the SQL Server?
– Have you observed anything so far that points in a direction where the problem could be?
– Have you checked the application and web servers to make sure the problem does not lie there? How did you conclude that the problem is with SQL Server?
– Do we have any baseline for the application or query performance, such as how much time it used to take before versus how much time it takes now?
– Have you performed any troubleshooting so far, and what are your findings, if any?

It is common that developers or the client, as expected, do not have answers to most of these questions, and the general answer is that the entire application is slow.

How do you troubleshoot slowness of a specific stored procedure or query?
– First, get more details: how much time on average did this query take previously (baseline)? Were there any changes to the stored procedure or query recently? How often does the query run? Does the problem only happen on this server, or can it be reproduced on Test or Dev servers too?
– Check if the query is being blocked by other sessions.
– Check if the query is waiting on any resource, using the wait stats DMVs.
– Check if statistics are up to date for the tables and indexes used in the stored procedure or query.
– Check the fragmentation of the objects used in the stored procedure or query.
– Collect the execution plan of the stored procedure and of the statements inside it, or of the query. Also collect read/write and time statistics for the query.
– Check for any missing indexes based on the execution plan, looking for table or clustered index scans.
– Suggest to the developers whether the query should be rewritten in a better way, for example replacing table variables with temp tables or removing functions from the WHERE clause.

What SQL Server tools are available for troubleshooting performance problems?
– SQL Server Management Studio (SSMS) lets us run queries and DMVs, and generate execution plans, read/write statistics, and time statistics for a query.
– SQL Server Profiler traces can be used to identify all the queries running during the problem window and which queries take the longest to complete.
– The Database Engine Tuning Advisor (DTA) is available to check for missing index and statistics recommendations.
– SQL Server DMVs are available to get in-depth details about query statistics and execution plans.
– The SQL Server Performance Dashboard can generate performance-related reports (this needs to be installed as an additional plugin).
– Starting with SQL Server 2008, Extended Events can be used for more granular details with less performance overhead than Profiler.
– Starting with SQL Server 2008 R2, we can use the Management Data Warehouse (MDW) for performance troubleshooting.

How can we proactively try and make sure that performance issues do not occur?
– Follow best practices for the Operating System, SAN, and network.
– Follow SQL Server best practices and configuration settings for best performance.
– Make sure there are regular maintenance jobs, such as smart index maintenance (rebuilding or reorganizing indexes based on fragmentation) and updating statistics with a full scan.
– Use DMVs or other monitoring tools to monitor resource usage on the server, such as CPU, memory, and disk I/O.
– Monitor for occurrences of locking, blocking, and suspended sessions.
– Monitor for waits in SQL Server using the wait stats DMVs.
– Baseline SQL Server and server resource usage.
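A simple starting point for the wait monitoring mentioned above is `sys.dm_os_wait_stats`. This is a sketch; the excluded list below is a small illustrative sample of benign waits, not a complete one:

```sql
-- Top accumulated waits since the last instance restart (or stats clear)
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP',
                        'WAITFOR')
ORDER BY wait_time_ms DESC;
```

Because the counters accumulate since startup, comparing snapshots taken some minutes apart gives a better picture of current waits than a single reading.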

Hope this was helpful.

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Reviewing Servers for Implementing SQL Server Best Practices

DBAs need to be proactive and perform health checks and best-practice reviews periodically on all servers to make sure that all the best practices for SQL Server are being implemented. This approach will avoid issues which may arise in the future. So rather than troubleshooting and fixing problems after they occur, it is always good to proactively follow best practices and avoid the problems completely.

It is common in many companies that there are server outages or SQL Server performance problems causing unplanned application downtime. To resolve such an issue, the DBA does some research and makes some configuration changes. If the DBA had performed a periodic review of the server and proposed those changes proactively, the issue would not have surfaced at all.

Best practices are not just applicable to SQL Server; they also need to be applied at the Operating System, disk/SAN, and network levels. So, we can divide the best practice or health checks into the categories below.

Operating System/SAN/Network Best Practices – SQL Server runs as an Operating System process and depends heavily on the Operating System's functioning and management. So, we need to verify and enable any settings which can help SQL Server performance or avoid issues.

Below are some of the Operating System checks that can be performed on a server with SQL Server installed.

– Make sure the latest Operating System patches and security updates are installed on the system.

– Add the SQL Server service account to the “Lock Pages in Memory” local security policy, which basically reduces the chances of the Operating System paging out or trimming the SQL Server memory working set. On a VMware server this is even more important, so that memory is not taken back from SQL Server by the VMware balloon driver.
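Whether the privilege is actually in effect can be checked from inside SQL Server (2008 and later). As a sketch:

```sql
-- A non-zero value indicates SQL Server is using locked pages
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```

The SQL Server errorlog also records a “Using locked pages for buffer pool” message at startup when the policy is applied.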

– Configure the paging file to 1.5 times the physical memory for servers with a small amount of RAM, as these servers will use the paging file at some point in time; the better approach, though, is to increase the memory on the server. On servers with a large amount of RAM, we do not need a huge paging file; instead, set the paging file to the same size as physical memory, just so a kernel dump can be captured in case of a server crash.

– Starting with Windows Server 2008 R2, a power plan is available to configure. There are normally three power plans available: Balanced (the default setting), High Performance, and Power Saver. For servers running SQL Server, it is recommended to use the High Performance power plan. This change may look simple, but it is very powerful and can show significant performance gains. This option needs to be enabled from Control Panel and also at the BIOS level for it to show real effect. On VMware systems, this has to be implemented on the ESX host server as well.

– Add the SQL Server service account to the “Perform volume maintenance tasks” local security policy, which basically enables “Database Instant File Initialization”. Without this privilege, SQL Server first zeroes out a data file before using it, which can affect performance on highly transactional systems. Log files cannot benefit from this, as it only works for data files. This helps a lot on servers where auto-growth is enabled for databases and auto-growth happens during business hours while the system is under load.

– Review the installed software, applications, and services on the server, and uninstall or disable those which are not required, as they can consume resources too.

– It is an important recommendation from Microsoft not to install Anti-Virus software on a dedicated SQL Server box; even if it is installed, make sure that SQL Server executables, DLLs, database files (.mdf, .ndf, .ldf), FILESTREAM files, and Full-Text related files are excluded from the Anti-Virus scan. This is very important, as Anti-Virus interference can cause SQL Server to crash, generate Access Violation memory dumps, and also slow down performance.

– It is recommended to format the disk drives holding SQL Server database files with a 64 KB block size. If database files are already on disks with a smaller block size, plan to move them onto new disk drives with a 64 KB block size. This helps improve I/O performance greatly.

– It is recommended to use RAID disks for SQL Server, which improve performance and also provide redundancy and support for recovering data in case of disk corruption. It would be great to use RAID 10 for all data and log files, but that can be very expensive. Alternatively, you can use RAID 5 for data files and RAID 1 for transaction log files.

– There can be other settings at the OS, SAN, or network level, so it is better to work with the System Administrators, SAN Administrators, and Network Administrators to discuss settings for HBA drivers, HBA queue depth, VMware, the network, etc., and see if they are configured optimally and whether any improvements can be made in those areas.

SQL Server Instance Best Practices – As a DBA, we need to make sure that all SQL Server configurations are tuned specifically to the environment. There are no specific SQL Server configuration recommendations that are applicable to all servers, so we need to monitor, test, and implement the best configurations for each server so that it performs well.

Below are some of the Configuration settings that need to be adjusted for better performance and stability.

– Make sure SQL Server has the latest Service Pack (SP), Cumulative Updates (CU), and Security Updates installed.
– Make sure SQL Server instances hosting critical databases run on dedicated servers, so that there is no impact from other applications running on the system.

– SQL Server default instances run on port 1433, and named instances use a random dynamic port by default. It is good to change named instances to use a static port so that there will not be any problems during startup. Also, make sure to reserve this static port so that no other application uses it; otherwise, a port conflict can cause SQL Server to fail during startup.

– There are different network protocols that may be enabled on a server. Make sure only the required protocols like Shared Memory and TCP/IP are enabled; enable Named Pipes only if it is used by applications.

– Configure SQL Server Max Server Memory; this is considered one of the most important settings. By default, this value is set to a very large number, meaning the SQL Server instance is allowed to use all of the available memory on the server, which can lead to a server crash, a server hang, or serious memory pressure on the system. You need to monitor memory usage on the server and set an appropriate Max Server Memory, leaving enough memory available for the OS and other processes running on the server. Additional care needs to be taken when there are multiple SQL Server instances installed on the server: the Max Server Memory of each instance should be set in such a way that the sum across all instances is less than the total physical memory, leaving enough memory for the OS and other processes.
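The setting itself is a one-line change via sp_configure. The 28 GB figure below is purely illustrative (e.g., for a 32 GB server); the right value depends on what else runs on the box:

```sql
-- Cap SQL Server's buffer pool; value is an illustrative example, not a recommendation
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```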

– Affinity Mask is mostly best left unchanged, but it can be changed on servers with multiple SQL instances after careful evaluation and testing. Improper changes to this setting can cause serious performance problems.

– Max Degree of Parallelism (MaxDOP) determines the level of parallelism, which basically controls the number of processors that can be used to run a particular query. By default this is set to zero, meaning a SQL query that goes for parallel execution can use all the available processors.

General recommendation from Microsoft about this setting for OLTP systems is

• For servers that use more than eight processors, set MAXDOP=8.
• For servers that have eight or fewer processors, set MAXDOP from 0 to N, where N equals the number of processors on the server.
• For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs assigned to each NUMA node.
• For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.

Example: On a server with 2 processor sockets, each having 4 cores and 8 logical processors, and NUMA enabled, set MaxDOP to 4.
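Applying that example value is again an sp_configure change:

```sql
-- Set instance-level MaxDOP to 4 (per the NUMA example above)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```

The change takes effect immediately for new query executions, without an instance restart.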

Also, it is important to note that some applications have their own recommendations. For example, SharePoint recommends setting MaxDOP to 1, because the product is designed so that any SQL statement or stored procedure expected to benefit from parallelism is explicitly defined with the required MaxDOP level in the code itself, and the remaining queries which do not benefit from parallelism run serially as per the instance-level MaxDOP setting. So, for any third-party applications, contact the vendor for MaxDOP recommendations.

– Make sure SQL Server Priority Boost is disabled; it does not do any good and rather causes serious problems on the system if enabled. This option was provided to be used only by Microsoft CSS to troubleshoot specific problems, and it will be removed in a future version of SQL Server.

– Login Auditing: by default only failed logins are audited, but someone may change it to audit both failed and successful logins. It is good to leave the default of auditing only failed logins; if there is a requirement to audit successful logins, follow some other method like enabling the Audit feature, running a SQL trace, Extended Events, etc.

– C2 Audit, is better left disabled.

– Create a job to periodically recycle the SQL errorlogs (e.g., monthly) to avoid large errorlog files. You may also consider creating an automated process to archive old recycled errorlog files for future reference.
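The recycle step itself is a single documented system procedure, which the SQL Server Agent job can call on a schedule:

```sql
-- Closes the current errorlog and starts a new one
EXEC sp_cycle_errorlog;
```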

– Change the default data and log file paths, so that any new databases will have their data and log files created on different disk drives on an appropriate RAID level, which will significantly improve performance.

– Backup compression, introduced in SQL Server 2008, is a good option to enable at the instance level; it saves a lot of disk space and network bandwidth when backups are stored on a backup server or tape.
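Enabling it instance-wide is another sp_configure setting, after which all backups are compressed unless a backup explicitly opts out:

```sql
-- Make compression the default for all backups on this instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```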

SQL Server Database Best Practices – Database-level settings are very important and provide a lot of benefits related to performance, security, and stability, but they are mostly ignored, as these more in-depth settings are often forgotten by many DBAs.

Below are some of the database level settings which provide a lot of value.

– Each database by default has two files, one data file and one transaction log file. Additional data and log files can be created and pointed to other disks to offload I/O, which provides significant performance benefits. It is very important that data and transaction log files are placed on different disk drives, as placing them on the same drive causes a lot of performance problems. Make sure that data and log files are placed on different disk drives on an appropriate RAID level.

– System databases are created during the installation of SQL Server and are most often created on the C:\ drive or other drives with a small amount of disk space. Tempdb is also a system database and can grow large depending on the activity on the system, so the system database files need to be placed on different drives. Placing tempdb on a dedicated disk provides significant performance gains on systems where tempdb is heavily utilized. Also, it is good practice to create multiple tempdb data files, which helps by reducing latch contention. There are different theories about how many tempdb files to create; in my opinion, the best approach is to create 1 tempdb file per 2 logical processors and limit the total to a maximum of 12 tempdb files. In special cases, more than 12 tempdb files can be created after thorough testing.
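Adding a tempdb data file is a straightforward ALTER DATABASE; the file name, path, and sizes below are placeholders, and the change to the file layout takes effect after a service restart:

```sql
-- Add a second tempdb data file (logical name, path, and sizes are placeholders)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);
```

Keeping all tempdb data files the same size and growth increment matters, because SQL Server's proportional-fill algorithm only spreads allocations evenly across equally sized files.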

– Set auto-growth in terms of a fixed size. Each database by default has one data file and one transaction log file; additional data and log files can be created if required for performance benefits, and the files can be placed on different drives. Each data and log file is created with an initial file size and can be configured to auto-grow once the initial size is used up and additional space is required. It is recommended to set a good initial size based on capacity planning and then enable auto-growth in terms of a fixed size like 256 MB or 512 MB, instead of growth by percentage, which can cause problems when database files have to grow during business hours, leading to timeouts and performance problems.
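Switching an existing file from percentage growth to a fixed increment looks like this; the database and logical file names are placeholders:

```sql
-- Change growth from a percentage to a fixed 256 MB increment
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 256MB);
```

The current logical file names for a database can be found in `sys.database_files`.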

– For very large databases, it is good practice to create additional filegroups and store different types of data, based on their I/O patterns, in different filegroups. This will help significantly improve performance. You can create additional filegroups like an index filegroup, a BLOB filegroup, etc., and place their physical files on different disks.

– There are a few database options which are mostly forgotten or unknown to many DBAs, but which provide a lot of benefits for the performance, security, and stability of SQL Server databases. Below are a few of those options.

Database Recovery Model – The recovery model determines the amount of data that the transaction log file needs to keep active. This determines whether point-in-time recovery of all the data is possible, and to what extent. Set the recovery model appropriately based on the Recovery Point Objective (RPO) SLA.

Auto Close – This should be disabled as this can cause performance problems.

Auto Shrink – This option should be disabled, as shrinking a database is not recommended: it causes fragmentation and leads to notable performance degradation.

Auto Update Statistics – This option should be kept enabled. Statistics are used by SQL Server during compilation and execution of a SQL query, and having accurate statistics helps in building accurate and optimal query plans, thus resulting in improved and stable performance. SQL Server determines how often it needs to auto-update the statistics.

Page Verify – This option should be set to CHECKSUM, which basically detects database corruption. When a data page in memory is to be saved back to disk, SQL Server calculates a checksum, adds it to the page header, and then stores the page on disk. Later, when the page is read back into memory, SQL Server recalculates the page checksum and compares it with the checksum previously stored in the page header. If it matches, the page is successfully read into memory and returned to the user; if it does not match, SQL Server reports that the database is corrupt.
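All of the options above are set per database with ALTER DATABASE; `MyDatabase` is a placeholder, and the recovery model shown is just an example:

```sql
ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF;
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;
ALTER DATABASE MyDatabase SET RECOVERY FULL;  -- example; match your RPO SLA
```

Note that switching an existing database to PAGE_VERIFY CHECKSUM only adds checksums to pages as they are subsequently written; existing pages are not retroactively checksummed.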

– We need to make sure that the number of Virtual Log Files (VLFs) is kept to a minimum for each database transaction log file, as a high VLF count can cause database recovery to take a long time, during which users will not be able to use the database properly. VLFs can be controlled by setting a proper initial size for the log file after doing capacity planning, and by setting auto-growth in terms of a fixed size of 512 MB or higher but less than 2 GB, so that the number of VLFs stays low.
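One common way to check the VLF count uses DBCC LOGINFO, which is undocumented but widely used for this purpose:

```sql
-- Returns one row per VLF in the current database's log;
-- the number of rows returned is the VLF count
DBCC LOGINFO;
```

As a loose rule of thumb, counts in the low hundreds are usually fine, while thousands of VLFs are worth fixing by shrinking the log and regrowing it in large fixed increments.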

– From a performance perspective, it is important to have regular smart index maintenance on the server for all databases, like rebuilding and reorganizing the indexes based on fragmentation percentage.

– Updating statistics is another very important task which has proved, most of the time, to significantly improve performance. Schedule a job or use a maintenance plan to update the statistics with a full scan regularly, like once a week, or more often if there are a lot of DML operations performed on the databases.
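As a sketch, a full-scan pass over every table can be done with the undocumented (but long-standing) `sp_MSforeachtable` helper; `sp_updatestats` is the documented, lighter sampled alternative:

```sql
-- Full-scan statistics update for every table in the current database
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';

-- Or a lighter, sampled update of stats that need refreshing
EXEC sp_updatestats;
```

On very large databases, a full scan of every table may be too heavy for a single maintenance window, so many DBAs target only the stats of heavily modified tables.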

– Another important task to be performed regularly is an integrity check on all databases, which basically means running DBCC CHECKDB on all user and system databases to detect any corruption. Integrity checks can normally be scheduled to run once a week.
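A typical invocation, with `MyDatabase` as a placeholder:

```sql
-- Full integrity check; suppress informational messages, report every error found
DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

No output from this command means no corruption was detected.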

– Backups are another important regular task, as we do not know when a disaster can strike, so we should be in a position to recover the data to the maximum extent possible based on the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) SLAs. Also, it is important to periodically test restoring these backups on a test server, to make sure we can restore them successfully and to get time estimates of how long it can take to recover the data after a disaster.
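A minimal backup-and-verify sketch follows; the database name and path are placeholders:

```sql
-- Full backup with page checksums validated and compression enabled
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH CHECKSUM, COMPRESSION;

-- Confirm the backup file is readable and its checksums are valid
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH CHECKSUM;
```

Note that RESTORE VERIFYONLY checks the backup's readability and checksums but does not prove it restores successfully, which is why the periodic test restore on another server remains important.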

There can be other best practices not covered here, so it is important to keep ourselves updated and add new findings to our best practice or health check list.

Hope this was helpful.

SQLServerF1 Team
