Auditing or Monitoring SQL Server Instance for Any Incidents


Auditing has become essential for controlling and detecting changes to IT systems, whether planned or unplanned. SQL Server databases store important end-user data and are usually secured against unauthorized access, but there can be unexpected changes made to the system configuration to reach secured data, or changes where it is difficult to identify who made them. Proactive auditing helps identify changes made to the system and take corrective steps, thus providing more control over the systems.

Auditing SQL Server Instance:

SQL Server 2005 is now quite old and has limited built-in audit options. SQL Server 2008 and later versions ship with better built-in audit solutions. We can enable auditing in SQL Server in various ways: using the built-in audit options, customizing existing features to build an audit solution, or using third-party software (most of which builds on these same features internally).

Below are some of the methods which we can use to Audit changes to SQL Server instances.

DDL Triggers:

Triggers in SQL Server are fired when a certain event occurs and we can define a certain action to be taken like logging the change made into a table. DDL triggers are fired when a change is made using commands like ALTER, CREATE or DROP.

Advantages: DDL triggers allow us to specify an action to be performed when a particular event occurs. This is suitable for scenarios where we need to audit specific events, but not all DDL changes.

Dis-Advantages: DDL triggers cannot be used to track all changes to a SQL Server instance. Implementing them requires a lot of customization, which is a large task and can cause issues when done without proper planning and testing.
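As an illustration, the DDL-trigger approach above can be sketched as follows. This is a minimal sketch only: the AuditDB database, table, and trigger names are illustrative, and the event list should be adapted to the events you actually need to audit.

```sql
-- Assumed: an AuditDB database exists to hold the audit log (names are illustrative).
CREATE TABLE AuditDB.dbo.DDLAuditLog (
    EventTime DATETIME      NOT NULL DEFAULT (GETDATE()),
    LoginName SYSNAME       NOT NULL,
    EventType NVARCHAR(128) NULL,
    EventData XML           NULL
);
GO
-- Server-scoped DDL trigger: logs login DDL and database create/alter/drop events.
CREATE TRIGGER trg_AuditDDL
ON ALL SERVER
FOR DDL_LOGIN_EVENTS, CREATE_DATABASE, ALTER_DATABASE, DROP_DATABASE
AS
BEGIN
    DECLARE @ed XML;
    SET @ed = EVENTDATA();  -- XML describing the event that fired the trigger
    INSERT INTO AuditDB.dbo.DDLAuditLog (LoginName, EventType, EventData)
    VALUES (ORIGINAL_LOGIN(),
            @ed.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
            @ed);
END;
GO
```

Dropping the trigger (`DROP TRIGGER trg_AuditDDL ON ALL SERVER`) removes the auditing again.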

SQL Server Server-Side Trace:

Using the SQL Server Traces, we can audit or track almost all activity that takes place on SQL Server instance. Trace data will be stored in a trace file and we can import the trace data into tables periodically and create Scripts to read the trace data and extract the required information.

Advantages: We can track all the activity that happens on the SQL Server instance. We can filter the captured events to trace only required data.


Dis-Advantages:

-Running a trace adds overhead to the system; how much depends on the number of events traced and the activity on the server. Often the server-side trace overhead is not significant, as most third-party solutions internally rely on traces.

-A lot of customization is required: we need to configure the trace to capture the required data, periodically import that data into tables, and run queries against those tables manually, or through scheduled jobs that send a report via e-mail.
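The import step described above can be done with `sys.fn_trace_gettable`. A sketch, where the trace file path and table name are illustrative:

```sql
-- Load the trace file(s) into a table for querying; DEFAULT also reads rollover files.
SELECT *
INTO   dbo.AuditTraceData
FROM   sys.fn_trace_gettable(N'D:\Traces\AuditTrace.trc', DEFAULT);

-- Example report: who issued DDL-looking statements, and when.
SELECT StartTime, LoginName, HostName, TextData
FROM   dbo.AuditTraceData
WHERE  TextData LIKE N'%ALTER%' OR TextData LIKE N'%DROP%'
ORDER BY StartTime;
```

A scheduled job can run the import and report queries periodically, as described above.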

SQL Server Audits:

SQL Server Audit feature was introduced with SQL Server 2008. SQL Server audit lets one create server audits, which can contain server audit specifications for server level events, and database audit specifications for database level events. Audited events can be written to the event logs or to audit files.

Advantages: This is a built-in feature, it is simple to enable and manage and no additional customization is required. Audit results can be saved to event logs or audit files, which can be referenced later. Audit can be enabled at different levels and for different issues.

Dis-Advantages: Database-level auditing is limited to the Enterprise, Developer, and Evaluation editions. There is no built-in option to send an e-mail when a certain condition is met, so that piece needs to be customized.
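A minimal sketch of the SQL Server Audit feature, assuming a file target under D:\Audits (the audit names and path are illustrative):

```sql
USE master;
GO
-- Create a server audit that writes to audit files on disk.
CREATE SERVER AUDIT Audit_Logins
TO FILE (FILEPATH = N'D:\Audits\');
GO
-- Server audit specification: capture failed login attempts.
CREATE SERVER AUDIT SPECIFICATION AuditSpec_FailedLogins
FOR SERVER AUDIT Audit_Logins
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);
GO
-- Enable the audit itself.
ALTER SERVER AUDIT Audit_Logins WITH (STATE = ON);
```

Database audit specifications (Enterprise edition, as noted above) are created similarly with `CREATE DATABASE AUDIT SPECIFICATION` inside the audited database.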

SQL Server Extended Events:

Starting with SQL Server 2008, Extended Events can be used to track the changes that happen in SQL Server at a granular level without much overhead. Extended Events collect information similar to SQL Server traces, plus additional details at a more granular level.

Advantages: We can track all the activity on a SQL Server instance at more granular level. We can filter events based on our interest. Performance overhead is very minimal.

Dis-Advantages: Extended Events sessions require a lot of customization to create and manage.
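For example, an Extended Events session capturing statements that run longer than one second might be sketched like this (the session name and file path are illustrative; note the `duration` predicate is in microseconds in recent versions):

```sql
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.client_hostname)
    WHERE duration > 1000000  -- 1 second, expressed in microseconds
)
ADD TARGET package0.event_file (SET filename = N'D:\XEvents\LongQueries.xel');
GO
-- Start collecting.
ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```

The .xel files can later be read with `sys.fn_xe_file_target_read_file` or opened in SSMS.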

Other In-built Options:

There are a few other, rarely used built-in options such as C2 Audit, Common Criteria Compliance, and the Default Trace. They target specific requirements, offer limited options, and cannot be customized to our requirements.

Third-Party Audit Tools:

Most of the organizations rely on third-party audit tools as they do not need much customization and can be configured to generate and send reports by e-mails. Advantage of the third party tools is that they are tested and used by many organizations already and documentation is available to use the tools. Disadvantages of third-party audit tools include additional cost and additional overhead of configuring and managing these tools.

Some of the popular third party audit products for SQL Server instance are Idera SQL Compliance Manager, ApexSQL Comply, Change Auditor, etc.

Idera SQL Compliance Manager is one of the most used products for SQL Server auditing and compliance. It provides comprehensive SQL Server auditing that helps us monitor, audit, and alert on SQL Server user activities and data changes. We can get detailed information about who performed what activity, when it was performed, and how the changes were implemented. We can use it to track changes, monitor and audit data access, schema changes, and login failures. Many built-in reports are available to validate SQL Server audit trails, and a DBA can configure alerts to get notified of suspicious activity. Idera SQL Compliance Manager can help you ensure compliance with regulatory and data security requirements.

More information about Idera SQL Compliance Manager can be found at

ApexSQL Comply is another popular tool for SQL Server audits and compliance. It supports tracking data changes in SQL Server databases: it can capture and store data changes in a central database, tracking who performed what activity, when it was performed, and how the changes were implemented. The tool ships with many audit reports that run from the central database. It can also inform a user about how and when an incident occurred, capturing relevant information such as which SQL or Windows host and application name were involved. Finally, it can track table definition changes and unauthorized or malicious changes.

More information about ApexSQL Comply can be found at

Quest Change Auditor for SQL Server tracks, audits, reports, and alerts you to changes in your environment in real time. It can track all user and administrator activity, such as who made what kind of changes, when the changes were implemented, and from which workstation the changes were carried out. It monitors for critical changes within the environment, and we can configure real-time alerts whenever critical changes occur. Since this tool maintains a centralized repository, it makes it easier to monitor multiple servers from a single location. It also supports role-based access, allowing auditors to run searches and reports without the ability to make any configuration changes to the application.

More information about Change Auditor can be found at

There are many other third-party products available for auditing SQL Server; evaluate them carefully and choose the best one based on the features provided, manageability, and cost.

This is applicable on below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


SQL Server Indexes Frequently Asked Questions and Answers (FAQ) in Interviews


Index fundamentals are among the most important concepts for any DBA to understand and carry a lot of weight in interviews. Below are some of the common fundamental questions asked about SQL Server indexes.

What type of Indexes can a Table consist of in SQL Server 2012?
A table in SQL Server can have the below types of indexes.

One Clustered Index
Non-Clustered Indexes
Unique Indexes
Index with included columns
Columnstore index
Filtered Indexes
Spatial Indexes
XML Indexes

What is a Clustered index in SQL Server 2008 R2? How many clustered indexes can we create on a table?
A clustered index is an on-disk structure that contains all the columns and rows of the table, sorted by the index key. We can create only one clustered index per table, as the clustered index represents the table itself: the same table cannot be stored in two different orders.

What is a Non-Clustered index in SQL Server 2008 R2? How many non-clustered indexes can we create on a table?
A non-clustered index is an on-disk structure that contains one or more key columns defined while creating the index, plus the clustered index key columns (or a row locator, for a heap). A table can have up to 999 non-clustered indexes in SQL Server 2008 and later (249 in SQL Server 2005).

Can we create Index on a View?
Yes, we can create indexes on a view, and they will be persisted in the database, just like an index created on a table.

Do Clustered and Non-Clustered indexes require unique data in the key columns?
Not necessarily; we can create clustered and non-clustered indexes as unique, or as non-unique, where multiple rows can share the same key value.

Does a table having a Primary Key always have a Clustered Index?
Not necessarily. By default, creating a primary key also creates a clustered index, but we can change that behavior and have it create a non-clustered index instead; in either case the primary key is enforced by a unique index that maintains uniqueness in the primary key columns.

Can we create Clustered Index without a primary key in the table?
Yes, we can have a table without a primary key, but with a clustered index.

What are Heap tables in SQL Server 2014?
Any table which does not have a clustered index is called a heap table. It is generally recommended to have a clustered index on every table, though there can be special cases where one is not required.

Can Non-clustered indexes be created on Heap tables?
Yes, we can create non-clustered indexes on Heap tables, it is fully supported, but not recommended.

Do we need to perform any regular maintenance on indexes?
Yes, we need to regularly perform smart index optimization: rebuilding or reorganizing indexes based on their fragmentation percentage. Statistics also need to be updated on all indexes, preferably with a full scan, to improve performance.
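A sketch of the "smart" maintenance check described above, using `sys.dm_db_index_physical_stats`. The 5%/30% thresholds are common guidance, not hard rules:

```sql
-- List indexes in the current database with a suggested action based on fragmentation.
SELECT OBJECT_NAME(ips.object_id)           AS TableName,
       i.name                               AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'NONE' END                 AS RecommendedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0;  -- skip heaps

-- Apply per index, e.g.:
-- ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
-- UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```

In practice this logic is usually wrapped in a stored procedure and run from a scheduled job.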

Does the Shrink operation have any impact on indexes?
Yes. A shrink operation on an entire database or its data files causes high fragmentation and can slow performance significantly, so shrinking data files is not recommended. Even in special cases where we must shrink a data file, we need to rebuild the indexes immediately afterwards to reduce the fragmentation.

Does creating multiple indexes always improve performance?
Not necessarily; having too many or improper indexes can negatively affect performance, so indexes should be created only after careful evaluation and testing.

Can we create Indexes on Views in SQL Server 2012?
Yes, definitely we can create indexes on views.

What kind of indexes can we create on Views?
First index we create on a View must be Unique Clustered Index and then we can create multiple Non-Clustered Indexes.

Do we have to perform any maintenance on indexed views to control fragmentation?
Yes, we need to perform regular smart re-indexing and statistics updates on indexed views.

Does creating Indexes on View help the performance of queries?
Yes, it will improve the performance, but also can negatively impact the performance in some cases, so test it thoroughly before implementing on Production servers.

Do we need to reference the indexed Views in the query for it to be used by the SQL optimizer?
No, even if we do not reference the indexed views, SQL Server may still use the Indexed Views.

If my table has a high number of DML (Insert/Update/Delete) operations performed frequently, does creating indexes improve performance?
Not always. We need to test very carefully, create only the indexes that improve performance, and remove indexes which are not used much. DML operations incur additional overhead when there are a lot of indexes, which can slow performance.

Are there any tools which help me identify which indexes to create in SQL Server 2008 R2?
– Yes, with SQL Server Management Studio (SSMS) 2008 R2 or higher, we can generate an execution plan, which also suggests any missing indexes it finds.
– Also, a regular installation of SQL Server includes a tool named Database Engine Tuning Advisor (DTA), which helps analyze a query or trace file data and suggests any missing indexes or statistics.
– We can also use SQL Server's built-in DMVs, which provide a list of missing indexes based on index usage data. Note that this usage data is cleared when SQL Server restarts, so be careful.

The suggestions returned by these tools and DMVs are not always accurate; as DBAs, we need to carefully evaluate and test the indexes on a test server before creating them on production servers.
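For reference, the missing-index DMVs mentioned above can be queried roughly like this (the impact formula is a common heuristic, not an official metric):

```sql
-- Top missing-index suggestions since the last instance restart; treat as hints only.
SELECT TOP (10)
       mid.statement                            AS TableName,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks * migs.avg_user_impact   AS rough_impact
FROM sys.dm_db_missing_index_details     AS mid
JOIN sys.dm_db_missing_index_groups      AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY rough_impact DESC;
```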

How do indexes help with SQL Server performance?
With a table that has no indexes, the only access path is to read the entire table, and the data is not stored in any particular order, so a request for specific data requires scanning the whole table. By creating a clustered index, the data is stored in order based on the columns chosen as index keys, which allows SQL Server to navigate directly to the required data via the index keys, improving performance. Creating additional non-clustered indexes on selected columns improves performance further, since multiple queries can use these narrower indexes to reach the same data with less I/O and less contention.

What are Filtered Indexes?
Filtered indexes were introduced in SQL Server 2008. We can create an index on a set of columns covering only a subset of rows by using a WHERE condition, so the index contains only the rows that satisfy that condition.
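A minimal sketch of a filtered index; the table and column names are illustrative:

```sql
-- Index only the active orders, keeping the index small and cheap to maintain.
CREATE NONCLUSTERED INDEX IX_Orders_Active
ON dbo.Orders (OrderDate)
WHERE Status = 'Active';
```

Queries whose predicate matches the filter (e.g. `WHERE Status = 'Active'`) can use this much smaller index.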

What are Included Columns in Indexes?
An index contains key columns, which act as pointers to the entire row; these are normally the few columns used in WHERE conditions. If the index does not contain the other columns a query must return, SQL Server has to reach out to the clustered index (a key lookup) for the columns not present in the non-clustered index. We can eliminate that additional lookup by including those columns in the non-clustered index, but not as key columns. These additional columns are added as "Included Columns".
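A sketch of the included-columns idea; names are illustrative:

```sql
-- Key column on the predicate column; extra output columns are INCLUDEd so the
-- query below is satisfied from this index alone, with no key lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount);

-- Covered query: everything it needs is in the non-clustered index.
SELECT OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerId = 42;
```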



Common SQL Server Backup Failure Errors and Issues

You can find here some of the frequent or common errors faced while manually performing a database backup, or when an automated maintenance plan or custom backup job fails.

Backup Error: 3041, Severity: 16, State: 1
Backup Error: 3041, Severity: 16, State: 1.
Backup BACKUP failed to complete the command BACKUP DATABASE database_name. Check the backup application log for detailed messages.

Backup error 3041 is a generic error returned when a backup fails. Along with this error, additional errors are logged in the SQL Server errorlog. Look for those more specific backup errors, which will point to the cause of the backup failure.

Below are more specific errors which cause the backup failures.

Error: 18204, Severity: 16, State: 1
Error: 18204, Severity: 16, State: 1
BackupDiskFile::CreateMedia: Backup device '\\BackupServer\Backups\Test\Test_Backup.bak' failed to create. Operating system error 53(the network path was not found.)

If you carefully read the complete error message, it gives the cause of the failure: "Operating system error 53 (the network path was not found)".

– Test whether you can browse to the backup share \\BackupServer\Backups\Test. Test using the same account which performs the backup; if backups run from a SQL Server job, use the SQL Server and Agent service accounts to test access to the backup share.
– If this happens intermittently, it was possibly a network glitch, so work with your network administrator to run a network trace to identify network issues.

Msg 3201, Level 16, State 1, Line 1
Msg 3201, Level 16, State 1, Line 1
Cannot open backup device 'D:\Backups\Test_backup.bak’. Operating system error 3(The system cannot find the path specified.).

– Make sure that the path D:\Backups exists. Sometimes, through user oversight, the path mentioned could be wrong; for example, the folder name could be "backup" while we use "backups", which results in this error.
– Another case where this error can occur is when there are spaces at the end of the folder name.

Msg 3201, Level 16, State 1, Line 1
Msg 3201, Level 16, State 1, Line 1
Cannot open backup device 'D:\Backups\Adv.bak'. Operating system error 5(Access is denied.).

– The above error occurs because of permission issues on the backup folder. Grant Full Control on the backup folder to the SQL Server service account and to the account with which you are trying to perform the backups.
– Also, make sure that the account used has sufficient permissions within SQL Server to perform backups.

Error: 3043, Severity: 16, State: 1
BACKUP 'Test_DB' detected an error on page (1:12534) in file 'Test_DB.mdf'.
Error: 3043, Severity: 16, State: 1.

– Error 3043 occurs if there is some sort of corruption in the database, which in turn is mostly due to underlying hardware issues.
– Troubleshoot in that direction by first troubleshooting and fixing the database corruption issue.

Error: 18210, Severity: 16, State: 1
Error: 18210, Severity: 16, State: 1.
BackupMedium::ReportIoError: write failure on backup device '0a158c7d-a7a3-4d5a-8b58-124602e40a14'.
Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).

– Error 18210 occurs mostly when a third-party backup tool is performing backups of SQL databases using native T-SQL commands or by backing up the .mdf and .ldf files. Third-party tools generally use VDI/VSS and the VSSWriter or SQLWriter services for performing the backups; check for any problems with these writers.
– If you are not using any third party tool for performing SQL database backups, then you can try disabling the OS and SQL VSSwriter to see if backup completes successfully after disabling them.
– Check for any known issues with SQL Server or OS.
– Check for any known issues with the third-party backup software.
– Refer below for more information on VSS and VDI backups in SQL Server.
SHEDDING LIGHT on VSS & VDI Backups in SQL Server

Msg 3009, Level 16, State 1, Line 1
Msg 3009, Level 16, State 1, Line 1
Could not insert a backup or restore history/detail record in the msdb database. This may indicate a problem with the msdb database. The backup/restore operation was still successful.

– Whenever a SQL Server database backup is performed, an entry for that backup is made into the msdb..backupset table. Check whether the data and log files of msdb are set for autogrowth, and verify that there is sufficient free space on the disk drive where the msdb files are located.

Msg 3033, Level 16, State 0, Line 1
Msg 3033, Level 16, State 0, Line 1
BACKUP DATABASE cannot be used on a database opened in emergency mode.

– This error can occur if you try to perform backup of a database which is in emergency mode, which is not supported. Bring your database online and then perform the backups.

Msg 4208, Level 16, State 1, Line 1
Msg 4208, Level 16, State 1, Line 1
The statement BACKUP LOG is not allowed while the recovery model is SIMPLE. Use BACKUP DATABASE or change the recovery model using ALTER DATABASE.

– This error occurs if we try to perform a transaction log backup of a database whose recovery model is set to SIMPLE; log backups are not supported in the Simple recovery model. If the database is critical and point-in-time recovery is important, change the recovery model of the database to FULL and then schedule a job to perform regular transaction log backups.
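The fix described above can be sketched as follows (database name and backup paths are illustrative; note that after switching to FULL, a full backup is needed to start the log chain):

```sql
-- Switch to the FULL recovery model.
ALTER DATABASE MyDB SET RECOVERY FULL;

-- Take a full backup to start the log chain.
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak';

-- Log backups are now valid; schedule this to run regularly.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';
```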

SQL Server Backups causing operating system returned error 1450
The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x00000000003200 in file with handle 0x0000101C. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists then immediate action must be taken to correct it.

– Error 1450 is not a SQL Server error; it is related to the hardware/operating system, so check for any configuration issues and make changes accordingly.
– Remove the /3GB boot switch if enabled, and try performing the backups.
– Make sure there is no corruption in the database.
– Check for any issues with the disk subsystem.

Msg 3241, Level 16, State 7, Line 1 and Msg 3013, Level 16, State 1, Line 1
Msg 3241, Level 16, State 7, Line 1
The media family on device 'PathToBackupFile\BackupFile.bak' is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

This issue can occur for various reasons, some of them are as mentioned below.

– Make sure the backup file is accessible from the server where you are trying to restore, and check that the backup file is not corrupted. Try moving the backup file to another drive or to another server and see if that works. Run the below command to verify that the backup file is not corrupted.

RESTORE VERIFYONLY FROM DISK = 'PathToBackupFile\BackupFile.bak'

– This error can also occur if you are trying to restore a database backup taken on a higher version onto a lower version, for example restoring a SQL Server 2012 database backup on a SQL Server 2008 R2 instance. This scenario is not supported; instead, you have to script out all tables/SPs/views/other objects from SQL Server 2012, create them on SQL Server 2008 R2, and then export the data from SQL Server 2012 to SQL Server 2008 R2.

Share any backup errors not covered above and we will add them, so that this is helpful for others.



SQL Server performance Tuning Frequently Asked Question and Answers (FAQ) in Interviews PART 1

Troubleshooting and tuning SQL Server performance are among the most important skills for a DBA; they provide a lot of value and differentiate an average DBA from a highly skilled one. It is very common in interviews to be asked questions related to SQL Server performance troubleshooting and tuning. Demonstrating strong fundamentals and understanding of SQL Server performance tuning makes a positive impression on the interviewer and improves the chance of clearing the interview. SQL Server performance troubleshooting and tuning is a very extensive topic that covers almost all of SQL Server, as every configuration change can affect performance.

Below are some of the commonly asked SQL Server performance tuning questions (FAQ) and their answers provided in-line.

How do you approach or troubleshoot performance problems in SQL Server?
There are different ways of troubleshooting performance issues and there is no single right or wrong way; below is my approach to identifying the problem and then taking corrective actions.

– First, try to understand more details about the performance issue. Normally performance problems are reported by developers or the client based on feedback from end users, or by monitoring tools reporting a specific performance issue on the server.
– Ask the developers or client questions to understand what they mean by performance problems, and gather more details to pinpoint the issue to a specific application or a specific piece of application functionality.
– The troubleshooting approach differs between general performance problems affecting the whole application and more specific problems like slowness of a particular stored procedure or query. The next steps are specific to general performance troubleshooting.
– Check overall hardware resource usage (CPU, memory, I/O, and network usage on the server) to see if there are spikes in any of them. If yes, drill further down in that direction; if everything looks normal, proceed with checking at the SQL Server level.
– Check SQL Server errorlogs and eventlogs for any errors.
– Check for any blocking or heavy locking or high number of suspended sessions.
– Check wait stats to see the top waits.
– Check whether regular maintenance, like rebuilding indexes and updating statistics, runs on the SQL Server. If not, implement it; this can significantly improve performance.
– Run DMV queries to identify the top queries by duration, CPU, reads, or writes, and try to tune them by creating appropriate indexes, or report them to the developers, suggesting they rewrite those queries.
– Check SQL Server configuration settings like MaxDOP, max server memory, Lock Pages in Memory, Instant File Initialization, auto-growth settings, etc.

The above steps should help in understanding the performance problem and fixing it.
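As an example of the wait-stats check mentioned in the steps above, a quick look at the top waits since instance start might be sketched like this (the excluded wait types are a small, illustrative subset of the usual benign waits):

```sql
-- Top waits since the instance last started; cumulative, so compare snapshots over time.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TO_FLUSH',
                        N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```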

What question do you ask Developers or Client to understand more about the performance issue?
– What kind of performance issue are you seeing, can you be more specific? It is often said that the Application is slow or website is slow.
– Are there any specific feature or a webpage of the application that is slow or is it that entire application is slow?
– When did you start seeing the performance problems?
– Did the problems start after recent upgrades or changes to the application, SQL Server, or the OS? Was anything changed recently on the system, such as patches, hardware, or application changes?
– Are you aware of any changes to the data or increase in number of users on the SQL Server recently?
– Have you observed anything so far that points to where the problem could be?
– Have you checked the application and web servers to make sure the problem does not lie there? How did you conclude that the problem is with SQL Server?
– Do we have any baseline for application or query performance, i.e., how much time it used to take before versus how much time it is taking now?
– Have you performed any troubleshooting thus far and what are your findings, if any, so far?

It is common that developers or the client do not have answers to most of these questions, and the general answer is that the entire application is slow.

How do you troubleshoot slowness with a specific Stored Procedure or a Query?
– First, get more details: how much time did this query take on average previously (the baseline)? Were there any changes to the stored procedure or query recently? How often does this query run? Does the problem happen only on this server, i.e., can it be reproduced on test or dev servers?
– Check if this query is being blocked by other sessions.
– Check if this query is waiting on any resource, using the wait stats DMVs.
– Check if statistics are up to date for the tables and indexes used in the Stored procedure or the query.
– Check fragmentation of the objects in the Stored procedure or the query.
– Collect execution plan of the Stored Procedure and the statements inside the SP or the query. Also collect Read/Write and Time execution statistics of the query.
– Check for any missing indexes based on the execution plan, based on table or clustered index scans.
– Suggest to the developers whether the query should be rewritten in a better way, for example replacing table variables with tempdb (temporary) tables or removing functions from the WHERE clause.
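The execution-statistics step above can be done from SSMS roughly like this (the procedure name and parameter are illustrative):

```sql
-- Capture read/write and time statistics for a suspect query; in SSMS, also
-- enable "Include Actual Execution Plan" before running.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
GO
EXEC dbo.usp_SuspectProcedure @Param = 1;  -- illustrative procedure name
GO
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The Messages tab then shows logical/physical reads per table and CPU/elapsed times, which can be compared against the baseline.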

What SQL Server tools are available to troubleshooting performance problems?
– SQL Server Management Studio (SSMS), helps us to run the queries or DMV’s, We can generate execution query plans, Read/Write statistics, Time Statistics by running the query from SSMS.
– SQL Server Profiler trace can be used to identify all the queries running during the problem time and which queries are taking longer times to complete.
– SQL Server Database Engine Tuning Advisor (DTA) is available to check for missing index and statistics recommendations.
– SQL Server DMV’s available to get lot of in-depth details about query statistics and execution plans.
– SQL Server Performance Dashboard can generate performance-related reports (this needs to be installed as an additional plugin).
– Starting with SQL Server 2008 R2, SQL Server Extended Events can be used for more granular details with less performance overhead than Profiler.
– Starting with SQL Server 2008 R2, we can use the Management Data Warehouse (MDW) for performance troubleshooting.

How can we proactively try and make sure that performance issues do not occur?
– Follow best practices related to the Operating System, SAN, and Network.
– Follow SQL Server best practices and SQL Server configuration settings for best performance.
– Make sure there are regular maintenance jobs: smart index maintenance tasks that rebuild or reorganize indexes based on fragmentation, and smart statistics updates with full scan.
– Using DMV’s or other monitoring tools to monitor the resource usage on server like CPU, Memory and Disk I/O.
– Monitor for occurrences of locking, blocking and suspended sessions.
– Monitor for occurrences of Waits in SQL Server using waitstats DMV’s
– Baseline SQL Server and server resource usage.



Top New SQL Server 2012 Features

SQL Server 2012 was released by Microsoft on April 1st, 2012. Many new features were introduced in SQL Server 2012 and some existing features were enhanced as well. One of the most popular and most talked-about features in SQL Server 2012 is AlwaysOn Availability Groups.

Below are some of the top new features in SQL Server 2012.

AlwaysOn Availability Groups – SQL Server 2012 AlwaysOn Availability Groups is a major feature which can replace other features such as Database Mirroring, Log Shipping, and, to an extent, Replication. AlwaysOn, also known as HADRON, provides a High Availability and Disaster Recovery solution for critical databases. An availability group is a container for one or more databases which fail over together as a unit, and a set of replicas (servers) hosts the primary and secondary copies of the availability group databases. In SQL Server 2012, an availability group has one primary replica and up to four secondary replicas, of which up to two can use synchronous commit. We can perform read/write operations on the primary replica and use secondary replicas for read-only workloads, thus offloading the primary server. We can create multiple availability groups between the same servers, and failover happens at the availability-group level: failing over an availability group brings all databases in that group online on the secondary replica. Backups can also be offloaded to the secondary replicas, and CHECKDB can be run against them as well. AlwaysOn Availability Groups are also relatively simple to implement.

AlwaysOn SQL Server Failover Clusters – SQL Server AlwaysOn Failover Clusters allow multi-subnet failover clustering, where cluster nodes can be spread geographically across different locations and data is moved to the other node through SAN-level replication. This provides both High Availability and Disaster Recovery. Another enhancement is the flexible failover policy, where we can choose the conditions under which a failover should occur.

Contained Databases – SQL Server 2012 provides partial containment. In previous versions of SQL Server, moving a database from one server to another also involved moving associated logins and other objects. Starting with SQL Server 2012 Contained Databases, users are associated with the database itself and are no longer dependent on the SQL Server instance. Moving a contained database to a new server does not require any additional steps of creating logins or fixing orphan users. Also, there were previously collation issues when the SQL Server instance and the database used different collations; SQL Server 2012 contained databases solve this problem, as tempdb now creates objects related to a contained database using the collation of that database.
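A short sketch of setting this up, with a hypothetical database and user name:

```sql
-- Enable contained database authentication at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Create a partially contained database.
CREATE DATABASE [ContainedDemo] CONTAINMENT = PARTIAL;
GO
USE [ContainedDemo];
-- Contained user with a password: no server-level login required,
-- so the database can move between instances without orphan users.
CREATE USER [AppUser] WITH PASSWORD = 'Str0ng!Passw0rd';
```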

ColumnStore Indexes – The Microsoft SQL Server team brought the concept of column-based indexes from VertiPaq. Columnstore indexes in the SQL Server 2012 Database Engine are used to significantly speed up the processing time of some common data warehousing queries. In traditional clustered and non-clustered indexes, data is stored row-wise in pages, whereas in SQL Server 2012 ColumnStore Indexes, data is grouped and stored column-wise, so each column can be accessed independently of the other columns.
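A minimal sketch, on a hypothetical fact table; note that in SQL Server 2012 the table becomes read-only while a nonclustered columnstore index exists on it:

```sql
-- Column-wise storage of the listed columns; typically built on
-- large data warehouse fact tables.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
ON dbo.FactSales (SaleDate, ProductID, RegionID, Quantity, Amount);
```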

Enhancements to ONLINE rebuild index operations – Starting with SQL Server 2012, we can rebuild indexes ONLINE even when they contain BLOB columns with data types like nvarchar(max), varchar(max) and varbinary(max). This was not possible in previous SQL Server versions.
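For example, assuming a hypothetical table whose index includes a varchar(max) column, the rebuild can now simply be run online:

```sql
-- Allowed starting with SQL Server 2012 even for indexes
-- containing (n)varchar(max)/varbinary(max) columns.
ALTER INDEX IX_Documents_Content ON dbo.Documents
REBUILD WITH (ONLINE = ON);
```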

Database Recovery Advisor – In previous versions of SQL Server, DBAs had to perform manual restores to recover a database from a failure, planning and restoring a set of backup files in a logically correct order. The new Database Recovery Advisor facilitates preparing restore plans which implement optimal and correct restore sequences, thus reducing the overhead and the chance of manual mistakes. Also, with AlwaysOn Availability Groups, backups can be performed on secondary replicas as well; in such cases the Database Recovery Advisor is helpful in preparing the restore sequence.

Reduced downtime for Application Upgrades – Adding a new column with a default constraint is now a metadata-only operation, thus significantly reducing the amount of time such changes take.

A new FILETABLE feature has been introduced in SQL Server 2012, which is built on top of the FILESTREAM technology. With FileTables, we can store files and documents in special tables in SQL Server but access them from Windows applications as if they were stored in the file system, without making any changes to those applications.

Product Update is another new feature introduced in SQL Server 2012 Setup, which integrates the latest available product updates with the main product during installation, so that the main version and all applicable updates are installed at the same time, reducing additional overhead and downtime later.

New Startup Parameters options have been added to SQL Server 2012 Configuration Manager, making it simple and manageable to change paths for the master database files or to add startup trace flags.

Starting with SQL Server 2012 support for partitions is increased to 15,000 by default, which in previous versions was limited to 1,000 by default.

Starting with SQL Server 2012, FileStream FileGroups can contain multiple files and can be on different drives, which improves I/O performance by spreading the I/O load.

There are some changes in the licensing of SQL Server 2012 Enterprise Edition, which is now sold based on either Server/Client Access License (CAL) or Per Core licensing. Also, the Datacenter Edition has been removed in SQL Server 2012 and a Business Intelligence Edition has been introduced.

Starting with SQL Server 2012, we can install it on Windows Server 2012 Core, which basically does not have any GUI.

SQL Server Business Intelligence Development Studio has been redesigned and renamed as SQL Server Data Tools (SSDT).

There are many other features and enhancements in SQL Server 2012 as well; refer to MSDN for more information.

Hope this was helpful.



Top New Microsoft SQL Server 2014 Features

Microsoft released SQL Server 2014 on April 1st, 2014, two years after the release of SQL Server 2012. The most famous and talked-about feature of SQL Server 2014 is In-Memory OLTP (In-Memory Optimization), just as AlwaysOn Availability Groups was the most famous feature of SQL Server 2012. There are many other new features introduced in SQL Server 2014 along with In-Memory OLTP.

Below is list of some of the top features released with SQL Server 2014.

In-Memory OLTP (In-Memory Optimization) – As the name says, it is designed to greatly improve the performance of Online Transactional Processing (OLTP) applications by keeping all the data in memory, thus reducing the physical IO which is the cause of slowness for most applications. The In-Memory OLTP engine is integrated into the SQL Server Database Engine as one component, rather than having to be installed separately.

In order to use SQL Server 2014 In-Memory OLTP, we need to identify highly transactional tables and convert them into memory-optimized tables. Memory-optimized tables support all the ACID properties: Atomicity, Consistency, Isolation, and Durability. In-Memory tables can be accessed using regular Transact-SQL in a similar way to disk-based tables, and a single query can perform DML operations on both memory-optimized tables and disk-based tables. Regular stored procedures can access both In-Memory tables and regular tables. Natively compiled stored procedures, introduced in SQL Server 2014, work with memory-optimized tables only and provide additional performance benefits. The In-Memory OLTP engine is designed to handle extremely high session concurrency for OLTP-type transaction-driven systems. It achieves this by using latch-free data structures and optimistic, multi-version concurrency control, which results in predictable, very fast, low-latency, high-throughput OLTP transactions. There are other factors too which affect the performance gains. Microsoft's official documentation mentions that 5-to-20 times performance improvements are commonly seen, and the gains can be increased further by following the recommended In-Memory query optimization techniques. More information can be found on Microsoft's official website.
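A minimal sketch of converting a table to In-Memory OLTP; database name, file path and table are hypothetical:

```sql
-- A memory-optimized filegroup (built on FILESTREAM) must exist first.
ALTER DATABASE SalesDB
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
ADD FILE (NAME = 'imoltp_dir', FILENAME = 'D:\Data\imoltp_dir')
TO FILEGROUP imoltp_fg;
GO
-- A durable memory-optimized table with a hash index on the key.
CREATE TABLE dbo.ShoppingCart (
    CartId  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId  INT NOT NULL,
    Created DATETIME2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```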

Enhancements to the AlwaysOn Feature – SQL Server 2014 contains many new enhancements for AlwaysOn Failover Cluster Instances and AlwaysOn Availability Groups compared to SQL Server 2012.

– Maximum number of secondary replicas has been increased from 4 to 8.
– When secondary replicas are disconnected from the primary replica, or when a secondary replica is not part of the quorum, readable secondary replicas still remain available for read-only workloads.
– Cluster Shared Volumes (CSVs), support for which was introduced with SQL Server 2014, can now be used as cluster disks in Failover Cluster Instances (FCIs).
– Existing AlwaysON Availability Group DMV’s are enhanced and new DMV’s and functions are introduced as well for better monitoring and troubleshooting.
– Using “Add Azure Replica”, we can use an Azure server as a secondary replica, hosting Azure databases as secondary AlwaysOn Availability Group databases for read-only purposes in a hybrid IT environment.

Enhancements to Resource Governor – Previous versions of Resource Governor allowed us to specify limits on the amount of CPU and memory for a particular application request. With SQL Server 2014, we can now also control the I/O limits for a particular application request. The MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME settings are provided to control the physical I/Os for particular user threads.
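For example, a hypothetical resource pool for reporting sessions could be capped like this:

```sql
-- Limit physical I/O per disk volume for sessions classified
-- into this pool (SQL Server 2014 and later).
CREATE RESOURCE POOL ReportPool
WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 500);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```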

Enhancements to Statistics – Statistics are the main resource used to calculate the cost of a query and to create the best execution plan. From SQL Server 2014, CREATE STATISTICS and related T-SQL commands allow statistics to be created per partition by using the INCREMENTAL option.
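A short sketch on a hypothetical partitioned table:

```sql
-- Per-partition statistics: partitions can be updated
-- incrementally instead of rescanning the whole table.
CREATE STATISTICS Stat_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH INCREMENTAL = ON;
```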

Buffer Pool Extension – With the SQL Server 2014 Buffer Pool Extension we can use solid-state drives (SSDs) to extend the buffer pool, allowing clean pages to be paged out to the SSD. As I/O operations between RAM and SSDs are fast, the SSD serves as a second-level memory cache. This enhancement significantly improves I/O throughput for read-heavy workloads.
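Enabling it is a one-line configuration change; the file path and size below are hypothetical and should point at a fast SSD volume:

```sql
-- Extend the buffer pool onto an SSD-backed file.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\BufferPoolExt.bpe', SIZE = 32 GB);
```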

Enhancements to Columnstore Indexes – DML operations are now allowed on columnstore indexes, which was not possible before. This is because we can now create a clustered columnstore index, which improves data compression and query performance for data warehousing workloads. Since the clustered columnstore index is updateable, the workload can perform DML operations like insert, update, and delete. Another enhancement allows SHOWPLAN to display information related to columnstore indexes in the query plan.

Enhancements to ONLINE Rebuild Index Operations – Starting with SQL Server 2014, the session performing an ONLINE index rebuild can run at a lower priority than application sessions, thus reducing blocking of application sessions. We can also configure the online rebuild operation to kill itself if it has been blocked for more than a certain duration. The WAIT_AT_LOW_PRIORITY option allows us to specify how long the rebuild should wait for the necessary locks and, if it does not get the locks in that time, whether to terminate the rebuild session or the blocking sessions.
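A sketch of the syntax, matching the "kill itself" behavior described above; table and index names are hypothetical:

```sql
-- Wait up to 5 minutes at low priority for the required locks;
-- if still blocked, terminate the rebuild itself (SELF).
-- ABORT_AFTER_WAIT = BLOCKERS would instead kill the blockers.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (
        MAX_DURATION = 5 MINUTES,
        ABORT_AFTER_WAIT = SELF)));
```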

Delayed Durability – Delayed Durability is also known as lazy commit. By default, SQL Server uses fully durable transaction commits, which are synchronous: a commit is reported as successful and control returns to the client only when the associated log records for the transaction have been written to disk. With delayed durable transactions, commits are asynchronous: a commit is reported as successful even before the log records for the transaction are written to disk, thus removing the latency of waiting for the log flush. Writing the transaction log entries to disk is mandatory for a transaction to be durable, so even with delayed durability the transactions only become durable later, when their log records are flushed to the physical disk. This should only be used for applications where the loss of some transactions is acceptable.
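A minimal sketch, with a hypothetical database and table:

```sql
-- Allow delayed durability at the database level,
-- then opt in per transaction at commit time.
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditTrail (EventText) VALUES ('low-value event');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```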

Enhancements to Cardinality Estimation – Cardinality Estimation logic has been redesigned in SQL Server 2014 which improved in generating quality query plans, thus improving query performance.

Enhancements to Backup and Restore – There are a few enhancements to the backup and restore process in SQL Server 2014, including a GUI option for performing a backup to a URL, allowing us to back up to or restore from the Windows Azure Blob storage service. Another enhancement is the ability to encrypt the data while creating a backup, resulting in an encrypted backup file.
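Both enhancements can be combined in one statement; the storage URL, credential and certificate names below are hypothetical and must be created beforehand:

```sql
-- Encrypted, compressed backup written to Azure Blob storage.
BACKUP DATABASE SalesDB
TO URL = 'https://myaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256,
                 SERVER CERTIFICATE = BackupEncryptCert);
```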

Hope this was helpful.



In-Memory OLTP (In-Memory Optimization) Frequently Asked Question and Answers (FAQ) in Interviews Part 1

In-Memory OLTP was introduced in SQL Server 2014 with the goal of significantly improving the performance of OLTP database applications. In-Memory OLTP is a memory-optimized database engine that has been integrated into the SQL Server engine code, making SQL Server capable of improving OLTP application performance. It is very common in interviews to be asked questions about new and popular features of SQL Server like AlwaysOn Availability Groups, In-Memory OLTP, etc. Demonstrating strong knowledge of new features shows your interest in updating your skills and adapting quickly, and thus improves your chances of clearing the interview. Not only from an interview perspective, it is important as a DBA to update our skills quickly, otherwise there won't be much market value for old skills, which become outdated very quickly. If a client asks a question, we should at least know the basics, rather than proving ourselves outdated.

Below are some of the common questions which may be asked in interviews regarding In-Memory OLTP.

In which version of SQL Server was In-Memory OLTP introduced? or What is the top feature in SQL Server 2014?
In-Memory OLTP was introduced with SQL Server 2014.

Which SQL Server 2014 editions include In-Memory OLTP?
In-Memory OLTP is available in the SQL Server 2014 Enterprise, Enterprise Evaluation and Developer editions.

Does In-Memory OLTP require applications to be modified?
Yes, the table schema has to be modified and changes to the application are required as well.

Can we create the In-Memory Tables in default FileGroup?
No, In-Memory tables cannot be stored in default or regular filegroups; we need to create a separate memory-optimized filegroup and files to store In-Memory tables.

Can we create multiple in-memory filegroups and files?
We can create only one filegroup for storing In-Memory tables, but that filegroup can have multiple files, which can be placed on different drives.

Can my database store regular tables as well as In-Memory tables?
Yes, we can have both regular and in-memory tables in same database.

Will my data persist even after restarting my SQL Server?
If we create an In-Memory table with the option DURABILITY = SCHEMA_AND_DATA, then the data is retained even after a restart of SQL Server. If the SCHEMA_ONLY option is specified, then we will lose all the data.

Can we create a clustered index on an In-Memory table?
No, Clustered indexes are not supported on In-Memory Tables.

Can we create Non-Clustered Indexes on In-Memory Tables?
Yes, but you need to create the non-clustered indexes at the time of creation of the In-Memory table itself. We cannot add new indexes, alter existing indexes or drop indexes on In-Memory tables, so the indexes need to be planned well beforehand.

Does Non-Clustered Indexes on In-Memory Tables take storage on the disk?
No, Non-Clustered Indexes on In-Memory Tables are only in memory, and they are not persisted on the disk. When SQL Server is stopped or Database is taken offline, the indexes are lost. Also, any operations on these indexes are not logged to transaction log file, as no recovery is required for the indexes. During startup of database, indexes are populated into the memory as part of recovery process.

Can we add covering indexes for In-Memory tables?
No, Memory-optimized indexes are inherently covering, meaning all columns are virtually included in the index itself, thus bookmark lookups are not required for memory-optimized tables.

If we cannot alter the indexes, then how do we reduce the fragmentation for In-Memory indexes?
Indexes do not persist on physical storage; they are recreated every time SQL Server restarts or the database is brought online, so there won't be any index fragmentation in in-memory indexes.

What are different kinds of Indexes supported for In-Memory tables?
We can create two kinds of in-memory indexes for In-Memory tables, which are Nonclustered Hash Index and Nonclustered Range Index.

Nonclustered Hash Index is used for queries which are going to perform equi-joins

Nonclustered Range Index is used for queries which performs range scans and ordered scans

What happens when we create an in-memory table?
When we run the DDL to create an In-Memory table, it is compiled into a DLL and loaded into SQL Server memory. We can find which In-Memory tables are created and loaded in memory by running the below query.
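The original post omitted the query; a query along these lines, taken from the loaded-modules DMV, lists the natively compiled DLLs the In-Memory OLTP engine has loaded:

```sql
-- Each memory-optimized table and natively compiled procedure
-- corresponds to a compiled "XTP Native DLL" loaded in-process.
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';
```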

Can I use regular T-SQL queries and Stored Procedures to access In-Memory tables?
Yes, we can use regular T-SQL queries and stored procedures to access In-Memory tables. Also, the In-Memory OLTP engine introduces a new type of stored procedure called a natively compiled stored procedure, which can only access In-Memory tables. When we create natively compiled stored procedures, the query plan is created at compile time itself. These stored procedures are compiled by Microsoft's Visual C/C++ compiler to native code and provide maximum performance benefits, and are the preferred way of accessing In-Memory tables. At this time, there is limited T-SQL support for these stored procedures. When such a stored procedure is compiled, a DLL is created and loaded into SQL Server memory; we can verify this by running the above-mentioned SQL query.

How does In-Memory tables provide high performance?
In-Memory tables reside completely in memory, so access to the in-memory data is always very fast, as there is no need for physical IO from disk to memory. Using natively compiled stored procedures provides an additional performance boost, as they are pre-compiled and permission checks are done at compile time rather than at every execution.

Can we create database to hold In-Memory tables using SSMS GUI?
Yes, SQL Server 2014 GUI supports creation of database with In-Memory filegroup and filestream data file for In-Memory usage. But we cannot create In-Memory table using GUI, we need to use T-SQL commands for that.

How do I know if a Table or Stored Procedure are candidates to be migrated as In-Memory OLTP?
The transaction performance reports tool tells us which tables in our database will benefit if migrated to In-Memory OLTP. The transaction performance collector is available in SQL Server 2014 Management Studio and helps us evaluate whether In-Memory OLTP will improve our database application's performance. The transaction performance analysis report also tells us how much work we must do in order to enable In-Memory OLTP for the application.

How do you migrate regular table into In-Memory table?
We can use the memory optimization advisor tool, which helps us migrate a disk-based database table to In-Memory OLTP.

Hope this was helpful.



Reviewing of Servers for Implementing SQL Server Best Practices

DBAs need to be proactive and periodically perform health checks and best-practice reviews on all servers, to make sure that all SQL Server best practices are being implemented. This approach avoids issues which may otherwise arise in the future. Rather than troubleshooting and fixing problems after they occur, it is always better to proactively follow best practices and avoid the problems completely.

It is common in many companies for server outages or SQL Server performance problems to cause unplanned application downtime. To resolve such issues, DBAs do some research and make configuration changes. If the DBA had performed a periodic review of the server and proposed those changes proactively, the issue would not have surfaced at all.

Best practices are not just applicable to SQL Server; they also apply at the Operating System, disk/SAN and network levels. So, we can divide the best-practice or health checks into the below categories.

Operating System/SAN/Network Best Practices – SQL Server runs as an Operating System process and heavily depends on the OS functioning and being managed well. So, we need to verify and enable any settings which can help SQL Server performance or avoid issues.

Below are the some of the Operating System checks that can be performed on server with SQL Server installed.

– Make sure latest Operating System patches and security updates are installed on the system.

– Add the SQL Server service account to the “Lock Pages in Memory” local security policy, which reduces the chances of the Operating System paging out or trimming SQL Server's memory working set. On a VMware server this is even more important, so that memory is not taken back from SQL Server by the VMware balloon driver.

– Configure the paging file to 1.5 times physical memory for servers with a small amount of RAM, as these servers will use the paging file at some point (though the best approach would be to increase the memory on the server). On servers with a large amount of RAM, we do not need a huge paging file; instead set the paging file to the same size as physical memory, just enough to capture a kernel dump in case of a server crash.

– Starting with Windows Server 2008 R2, a power plan is available to configure. There are normally three power plans: Balanced (the default), High Performance, and Power Saver. For servers running SQL Server, it is recommended to use the High Performance power plan. This change may look simple, but it is very powerful and can show significant performance gains. This option needs to be enabled from Control Panel and also at the BIOS level for it to take real effect. On VMware systems, it has to be implemented on the ESX host server as well.

– Add the SQL Server service account to the “Perform volume maintenance tasks” local security policy, which enables Database Instant File Initialization. Without this privilege, SQL Server must zero out every data file before using it, which can affect performance on highly transactional systems. Log files cannot benefit from this, as it only works for data files. This helps greatly on servers where auto-growth is enabled for databases and growth happens during business hours while the system is under load.

– Review installed software, applications, and services on the server and uninstall or disable those which are not required, as they consume resources too.

– It is an important recommendation from Microsoft not to install anti-virus software on a dedicated SQL Server box; if it is installed, make sure that SQL Server executables, DLLs, database files (.mdf, .ndf, .ldf), FILESTREAM files and Full-Text related files are excluded from anti-virus scans. This is very important, as anti-virus software can cause SQL Server to crash with Access Violation memory dumps and can also slow down performance.

– It is recommended to format the disk drives holding SQL Server database files with a 64 KB block size. If database files are already on disks with a smaller block size, plan to move them to new disk drives formatted with a 64 KB block size. This greatly improves IO performance.

– It is recommended to use RAID disks for SQL Server, which improve performance and also provide redundancy to recover data in case of disk corruption. It would be ideal to use RAID 10 for all data and log files, but that can be very expensive; alternatively, use RAID 5 for data files and RAID 1 for transaction log files.

– There can be other settings at the OS, SAN or network level, so it is best to work with the system administrators, SAN admins and network administrators to discuss settings such as HBA drivers, HBA queue depth, VMware settings, network settings, etc., and see if they are configured optimally and whether any improvements can be made in those areas.

SQL Server Instance Best Practices – As DBAs, we need to make sure that all SQL Server configurations are tuned to the specific environment. There are no SQL Server configuration recommendations applicable to every server, so we need to monitor, test and implement the best configuration for each particular server to perform well.

Below are some of the Configuration settings that need to be adjusted for better performance and stability.

– Make sure SQL Server has latest Service Pack (SP), Cumulative Updates (CU) and Security Updates installed.
– Make sure SQL Server instances hosting critical databases run on dedicated servers, so that there won't be any impact from other applications running on the system.

– SQL Server default instances run on port 1433, and named instances use a random dynamic port. It is good to change named instances to use a static port, so that there will not be any problems during startup. Also, make sure to reserve this static port so that no other application uses it, else it can cause SQL Server failures during startup.

– There are different network protocols that may be enabled on a server. Make sure only the required protocols like Shared Memory and TCP/IP are enabled; enable Named Pipes only if it is used by applications.

– Configure SQL Server Max Server Memory; this is considered one of the most important settings. By default this value is set to a very large number, which allows the SQL Server instance to use all available memory on the server, potentially leading to a server hang, crash or serious memory pressure on the system. You need to monitor memory usage on the server and set an appropriate Max Server Memory, leaving enough memory for the OS and other processes running on the server. Additional care is needed when multiple SQL Server instances are installed on the server: in that case, Max Server Memory should be set for each instance such that the sum across all instances is less than total physical memory, with enough memory still left for the OS and other processes.
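As a sketch, capping an instance on a hypothetical 32 GB server while leaving roughly 6 GB for the OS might look like this (the value is in MB and must be derived from monitoring your own server):

```sql
-- 26624 MB = 26 GB cap for this instance; illustrative value only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 26624;
RECONFIGURE;
```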

– Affinity Mask is mostly best left unchanged, but can be changed on servers with multiple SQL instances after careful evaluation and testing. Improper changes to this setting can cause serious performance problems.

– Max Degree Of Parallelism (MaxDOP) determines the level of parallelism, controlling the number of processors that can be used to run a particular query. By default this is set to zero, meaning a SQL query which goes parallel can use all available processors.

General recommendation from Microsoft about this setting for OLTP systems is

• For servers that use more than eight processors, set the MAXDOP=8.
• For servers that have eight or fewer processors, set MAXDOP=0 to N, where N equals the number of processors on the server.
• For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs that are assigned to each NUMA node.
• For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.

Example: On server with 2 processor sockets, with each one having 4 cores and 8 logical processors and NUMA enabled, set the MaxDoP to 4.

Also, it is important to note that some applications have their own recommendations. For example, SharePoint recommends setting MaxDoP to 1, because the product is designed so that any SQL statement or stored procedure expected to benefit from parallelism is explicitly defined with the required MaxDoP level in the code itself, while the remaining queries, which do not benefit from parallelism, run serially per the instance-level MaxDoP setting. So, for any third-party application, contact the vendor for MaxDoP recommendations.
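Applying the NUMA-based example above (4 cores per NUMA node) is a simple sp_configure change:

```sql
-- Set instance-level MaxDOP to 4, per the NUMA example above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```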

– Make sure SQL Server Priority Boost is disabled; it does no good and can cause serious problems on the system if enabled. It was provided only for Microsoft CSS to troubleshoot certain problems and will be removed in a future version of SQL Server.

– Login Auditing: by default only failed logins are audited, but someone may change it to audit both failed and successful logins. It is best to leave the default of auditing only failed logins; if there is a requirement to audit successful logins, use another method such as the Audit feature, a SQL trace, or Extended Events.

– C2 Audit, is better left disabled.

– Create a job to periodically recycle SQL errorlogs like Monthly, to avoid large errorlog files. You may also consider creating some automated process to archive old recycled errorlog files for future reference.

– Change default data and log file path, so that any new databases created will have their data and log files created on different disk drives which are on appropriate RAID level, which will significantly improve the performance.

– Backup Compression, introduced in SQL Server 2008, is a good option to enable at the instance level; it saves a lot of disk space and network bandwidth when backups are stored on a backup server or tape.

SQL Server Database Best Practices – Database-level settings are very important and provide a lot of performance, security and stability benefits, but they are often ignored, as these more in-depth settings are frequently forgotten by DBAs.

Below are some of the database level settings which provides lot of value.

– Each database by default has two files, one data file and one transaction log file; additional data and log files can be created and pointed at other disks to spread the IO load, which provides significant performance benefits. It is very important that data and transaction log files are placed on different disk drives, as placing them on the same drive causes serious performance problems. Make sure data and log files are on different disk drives at an appropriate RAID level.

– System databases are created during SQL Server installation, most often on the C:\ drive or other drives with little disk space. Tempdb is also a system database and can grow large depending on the activity on the system, so the system database files need to be placed on different drives. Placing tempdb on a dedicated disk provides significant performance gains on systems where tempdb is heavily utilized. It is also good practice to create multiple tempdb files, which helps by reducing latch contention. There are different theories about how many tempdb files to create; in my opinion, the best approach is to create 1 tempdb file per 2 logical processors, up to a maximum of 12 tempdb files. In special cases more than 12 tempdb files can be created after thorough testing.

– Set auto-growth in terms of a fixed size. Each database by default has one data file and one transaction log file; additional data and log files can be created for performance benefits and placed on different drives. Each data and log file is created with an initial size and can be configured to auto-grow when the initial size is used up and additional space is required. It is recommended to set a good initial size based on capacity planning and then enable auto-growth with a fixed size like 256 MB or 512 MB, instead of growth by percentage, which can cause problems when database files have to grow during business hours, leading to timeouts and performance problems.
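As a sketch, with hypothetical database and logical file names:

```sql
-- Fixed-size growth of 512 MB instead of percentage growth.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = 'SalesDB_Data', FILEGROWTH = 512MB);
ALTER DATABASE SalesDB
MODIFY FILE (NAME = 'SalesDB_Log', FILEGROWTH = 512MB);
```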

– For very large databases, it is good practice to create additional filegroups and store different types of data, based on their IO patterns, in different filegroups. This helps significantly improve performance. You can create additional filegroups such as an index filegroup or a BLOB filegroup and place their physical files on different disks.

– There are a few database options which are often forgotten or unknown to many DBAs, but which provide significant benefits for the performance, security, and stability of SQL Server databases. Below are a few of those options.

Database Recovery Model – The recovery model determines how much data the transaction log file needs to keep active. It determines whether point-in-time recovery of all the data is possible and to what extent. Set the recovery model appropriately based on the Recovery Point Objective (RPO) SLA.

Auto Close – This option should be disabled, as it can cause performance problems.

Auto Shrink – This option should be disabled; shrinking a database is not recommended because it causes fragmentation and leads to notable performance degradation.

Auto Update Statistics – This option should be kept enabled. Statistics are used by SQL Server during compilation and execution of SQL queries, and having accurate statistics helps in building optimal query plans, resulting in improved and stable performance. SQL Server determines how often it needs to auto-update the statistics.

Page Verify – This option should be set to CHECKSUM, which detects database corruption. When a data page in memory is to be saved back to disk, SQL Server calculates a checksum, adds it to the page header, and then writes the page to disk. Later, when the page is read back into memory, SQL Server recalculates the checksum and compares it with the checksum previously stored in the page header. If they match, the page is read into memory and returned to the user; if they do not match, SQL Server reports that the database is corrupt.
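The options above can all be set with ALTER DATABASE; as a sketch, with a hypothetical database name:

```sql
-- Recommended database option settings (SalesDB is illustrative)
ALTER DATABASE SalesDB SET RECOVERY FULL;              -- per RPO SLA
ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;
```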

– We need to make sure that the number of Virtual Log Files (VLFs) is kept to a minimum for each database transaction log file, as a high VLF count can cause database recovery to take a long time, during which users will not be able to use the database properly. VLFs can be controlled by setting a proper initial size for the log file after doing capacity planning, and by setting auto-growth to a fixed size of 512 MB or higher but less than 2 GB, so that the number of VLFs stays small.
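The current VLF count can be checked with either of the following, run in the database in question:

```sql
-- Returns one row per VLF; a few dozen is healthy,
-- hundreds or more usually indicate a problem
DBCC LOGINFO;

-- On SQL Server 2016 SP2 and later, a DMV is available instead:
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```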

– From a performance perspective, it is important to have regular, smart index maintenance on the server for all databases, such as rebuilding or reorganizing indexes based on their fragmentation percentage.
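Fragmentation can be checked with sys.dm_db_index_physical_stats; the thresholds in the comments are common rules of thumb, not hard limits:

```sql
-- List fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.index_id > 0;  -- skip heaps

-- Rule of thumb: 5-30% fragmentation -> ALTER INDEX ... REORGANIZE;
--                over 30%            -> ALTER INDEX ... REBUILD;
```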

– Updating statistics is another important task which has proven, most of the time, to significantly improve performance. Schedule a job or use a maintenance plan to update statistics with a full scan regularly, such as once a week, or more often if there are a lot of DML operations performed on the databases.
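A sketch of both approaches; the table name is hypothetical:

```sql
-- Full-scan statistics update for a single table
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or update statistics for every object in the database;
-- note that sp_updatestats uses default sampling, not FULLSCAN
EXEC sp_updatestats;
```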

– Another important task to perform regularly is an integrity check on all databases, which basically means running CHECKDB on all user and system databases to detect any corruption. Integrity checks can normally be scheduled to run once a week.
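A typical CHECKDB call looks like this; the database name is illustrative:

```sql
-- Weekly integrity check; run against every user and system database
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;
```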

– Backups are another important regular task, as we do not know when disaster can strike, so we should be in a position to recover the data to the maximum extent possible based on the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) SLAs. It is also important to periodically test restoring these backups on a test server, both to make sure that we can restore them successfully and to get an estimate of how much time it will take to recover the data after a disaster.
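As a sketch, a full backup with checksums plus a basic media verification; the database name and paths are illustrative, and note that RESTORE VERIFYONLY checks the backup media but is not a substitute for a real test restore:

```sql
-- Full backup with page checksums
BACKUP DATABASE SalesDB
TO DISK = 'B:\Backups\SalesDB_Full.bak'
WITH CHECKSUM, INIT;

-- Verify the backup media is readable and checksums are valid
RESTORE VERIFYONLY
FROM DISK = 'B:\Backups\SalesDB_Full.bak'
WITH CHECKSUM;
```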

There may be other best practices that are not covered here, so it is important to keep ourselves updated and add new findings to our best-practice or health-check list.

Hope this was helpful.

SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


SQL Server Memory Frequently Asked Questions and Answers (FAQ) in Interviews Part 1

Memory in SQL Server is one of the most interesting concepts. SQL Server implements its own memory architecture and management. If you are attending an interview for a senior DBA role, or with a very reputed organization expecting a very talented resource, then more often than not you will be asked a lot of questions about SQL Server internals, and that definitely includes memory-related questions. Understanding SQL Server memory architecture is simple, yet very interesting and challenging as we dig deeper.

Below are some of the frequently asked questions and answers (FAQ) in interviews on SQL Server memory.

Is SQL Server Memory architecture different in x86, x64 and WoW modes?
Yes, SQL Server has a different architecture on servers with x86, x64, and WoW modes of SQL Server instances, primarily because of how Virtual Address Space (VAS) is allocated by the Operating System. On an x86 system, VAS is limited to 4 GB, and out of this 4 GB, SQL Server can only use up to 2 GB of VAS by default, or 3 GB if the /3GB switch is enabled, whereas on x64 systems VAS is as high as 16 TB and SQL Server can use up to 8 TB of VAS.

What is Virtual Address Space (VAS)?
Virtual Address Space (VAS) is an Operating System concept: a set of addresses allocated to each process running on the system. The idea behind VAS is that if processes were only allowed to access physical RAM directly, the system would very quickly run out of memory. So the VAS concept was employed, where each process is allocated a range of virtual addresses and, when required, can request the OS to map those virtual addresses to the physical addresses where the data is loaded. This way, each process gets its own private virtual addresses and can have physical memory allocated as and when required, and the Operating System can manage the available physical memory efficiently across all processes.

What is the available Virtual Address Space (VAS) on x86 and x64 systems for SQL Server?
SQL Server is also a user process, so it gets its own private Virtual Address Space (VAS) from the Operating System. On an x86 system, 4 GB of VAS is available, which is divided into two regions: user-mode VAS and kernel-mode VAS. Each process running on the server gets its own 2 GB of VAS, and the remaining 2 GB is available to the OS for its own use. We can increase the user-mode VAS of a process from 2 GB to 3 GB using the /3GB or /USERVA switch. So SQL Server on an x86 system, being a user process, gets 2 GB or 3 GB of VAS depending on the /3GB switch. On an x64 system, the available VAS is 16 TB, of which 8 TB is available to each process and 8 TB is available to the OS. SQL Server on an x64 system can use up to 8 TB of VAS.

What is SQL Server Memory Architecture or How is SQL Server Memory Divided?
SQL Server user VAS is divided into two regions: the BufferPool and Memory-To-Leave (MTL).

BufferPool – The BufferPool is the region of memory used for storing in-memory data and index pages, which are <= 8 KB in size.

MemToLeave (MTL) – MTL is contiguous memory which is allocated during the startup of SQL Server and is used for SQL Server thread stacks, third-party DLLs, extended stored procedures, COM objects, memory used by linked servers, CLR, and any memory allocation requiring more than 8 KB of contiguous memory.

How are the memory sizes for BufferPool and MemToLeave (MTL) calculated?
The sizes of the BufferPool and MTL differ between x86 and x64 systems. On an x86 system, MTL is reserved during the startup of SQL Server as contiguous memory, and by default is 384 MB on systems with <= 4 processors. A more accurate way of calculating the size of MTL is as below.

MTL = (Thread Stack Size * Number of SQL Server Worker Threads) + additional reserved space (256 MB by default, or the value specified in the -g startup parameter)

– The thread stack size on an x86 system is 512 KB.
– The number of worker threads on a server differs based on the available processors. On an x86 system with <= 4 processors, there will be 256 worker threads by default, but this can be changed using sp_configure. Refer here for more information about worker threads.
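The current worker thread setting can be inspected (and changed) with sp_configure:

```sql
-- Show the configured maximum number of worker threads;
-- a run_value of 0 means SQL Server sizes the pool automatically
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';
```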

So, on an x86 system with <= 4 processors and the -g option not used, MTL = ((256 * 512 KB)/1024) MB + 256 MB = 384 MB. Once this 384 MB of contiguous memory is allocated out of the available 2 GB of VAS, there will be about 1.6 GB of memory remaining, which will be used as BufferPool memory.

On an x64 system, the SQL Server thread stack size is 2048 KB, and with <= 4 processors there are 512 worker threads by default, so MTL = ((512 * 2048 KB)/1024) MB + 256 MB = 1280 MB. The BufferPool is then allocated, and the remaining VAS is left to be used for MTL.

The BufferPool size is calculated as: BufferPool = Minimum(Physical memory, User address space – MTL) – BUF structures

On an x86 system with <= 4 processors, 4 GB RAM, and the -g option not used: BPool = Min(4 GB, 2 GB – 384 MB) – 32 MB = 1632 MB ≈ 1.59 GB

On an x64 system with <= 4 processors, 4 GB RAM, and the -g option not used: BPool = Min(4 GB, 8 TB – 1280 MB) – 32 MB = 4064 MB ≈ 3.97 GB
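The x86 worked example above can be reproduced as a simple T-SQL calculation; all values are in MB and the inputs are the assumptions stated above:

```sql
-- BufferPool = Min(physical memory, user VAS - MTL) - BUF structures
DECLARE @PhysicalMB int = 4096,  -- 4 GB RAM
        @UserVasMB  int = 2048,  -- 2 GB user-mode VAS on x86
        @MtlMB      int = 384,   -- (256 * 512 KB)/1024 + 256 MB
        @BufMB      int = 32;    -- BUF structures

SELECT CASE WHEN @PhysicalMB < @UserVasMB - @MtlMB
            THEN @PhysicalMB ELSE @UserVasMB - @MtlMB END
       - @BufMB AS BufferPoolMB;  -- 1632 MB, i.e. ~1.59 GB
```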

What can the SQL Server MTL memory region contain?
The MTL region is used for thread stacks, third-party DLLs, COM components, extended stored procedures, CLR objects loaded in SQL Server, linked server memory, and any contiguous memory allocation greater than 8 KB.

What does the SQL Server BufferPool memory region contain?
The SQL Server BufferPool memory region contains data and index pages, which are less than or equal to 8 KB in size.

What is Address Windowing Extensions (AWE), how is it useful, and how can it be used with SQL Server?
Address Windowing Extensions (AWE) is a set of APIs used for addressing more than 4 GB of physical memory. On an x86 system, SQL Server by default can only use up to 4 GB of physical memory, so to allow SQL Server to use more than 4 GB of memory, AWE needs to be enabled. AWE for a 32-bit SQL Server instance can be enabled either via sp_configure or from the SQL instance properties in SSMS.
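Enabling AWE via sp_configure on a 32-bit instance looks like this (it also requires the service account to have the Lock Pages in Memory privilege):

```sql
-- Enable AWE on a 32-bit SQL Server instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```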

What happens on an x86 system with 64 GB RAM and the /3GB switch enabled?
On a 32-bit system with the /3GB switch enabled, a SQL Server instance can only use a maximum of 16 GB of physical memory. This is because when the /3GB switch is enabled, the SQL Server process gets 3 GB of VAS, but the kernel-mode VAS is reduced to 1 GB. With only 1 GB of kernel VAS, the OS can only manage up to 16 GB of physical memory, so SQL Server also cannot see beyond 16 GB. Be mindful when enabling the /3GB switch.

Should we enable AWE on x64 systems?
No. Although we have the option to enable AWE on x64 systems, it has no effect. By default, x64 systems can use large amounts of physical memory as they have 8 TB of user VAS, so there is no need to use AWE.

What is Lock Pages in Memory?
“Lock Pages in Memory” is an Operating System policy which allows a process to lock its data in memory so that its working set is not trimmed/released under memory pressure. This is not absolute, as the Operating System can still trim the working set of any process if required, but it will trim the working set of a process with Lock Pages in Memory enabled only as a last resort. Enabling this for the SQL Server service account will lock BufferPool pages in memory, and the BPool will not be trimmed unless the OS is completely out of memory and the memory of all other processes has already been trimmed.

Should we enable Lock Pages in Memory for a SQL Server instance?
On an x86 system, Lock Pages in Memory must be enabled in order to use AWE.
On an x64 system, if the server is a dedicated SQL Server system, it is good to enable Lock Pages in Memory.

Hope this was helpful.
