DATABASE SCOPED CONFIGURATION in SQL Server 2016

SQLServerF1

Microsoft SQL Server 2016 has many new features and enhancements to existing features. Microsoft also claims breakthrough improvements in query performance in SQL Server 2016 compared to previous versions. Database Scoped Configurations is one of the interesting enhancements made in SQL Server 2016. In previous versions of SQL Server, we could open the database properties window to view various details about a database and change some of its properties to alter its behavior. Starting with Microsoft SQL Server 2016, the Options tab of the database properties window has a new section named “Database Scoped Configurations”, which includes several options that can be configured at the database level.

Legacy Cardinality Estimation – This option allows us to set the query optimizer cardinality estimation model at the individual database level, independently of the database compatibility level. Setting Compatibility Level 130 for a database in SQL Server 2016 brings a lot of enhancements, such as performance improvements with Columnstore and In-Memory OLTP and all the Query Optimizer fixes accumulated over the years under Trace Flag 4199. But there can be situations where we want those benefits while the application still works better under the old cardinality estimation model; in that case we can enable this option. With the Compatibility Level of the database set to 130, we then get all the performance benefits for query execution that come with SQL Server 2016, while still using the old CE.
Legacy Cardinality Estimation For Secondary – This option can be set to PRIMARY or SECONDARY, and can be set only on the secondaries. PRIMARY, the default, indicates that the secondaries use the same configuration as the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While secondary databases can be configured with a different value than their primary, all secondary databases use the same value; different settings cannot be configured for individual secondaries.
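As a sketch, the two options above map to the new ALTER DATABASE SCOPED CONFIGURATION statement (run in the context of the database being configured):

```sql
-- Keep compatibility level 130 but use the legacy cardinality estimator
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Have secondaries inherit whatever the primary uses (the default)
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY
SET LEGACY_CARDINALITY_ESTIMATION = PRIMARY;
```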

MAXDOP – In previous versions of SQL Server, we could set MAXDOP at the instance level or at the batch/stored procedure/query level, but not at the database level. Starting with SQL Server 2016, we can set the maximum degree of parallelism (MAXDOP) for an individual database. For queries run against that database, this database-level setting overrides the instance-wide sp_configure MAXDOP setting.
MAXDOP For Secondary – This option can be set to PRIMARY or SECONDARY, and can be set only on the secondaries. PRIMARY, the default, indicates that the secondaries use the same configuration as the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While secondary databases can be configured with a different value than their primary, all secondary databases use the same value; different settings cannot be configured for individual secondaries.
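A minimal sketch of the database-level MAXDOP setting (the value 4 is illustrative; 0 means fall back to the instance-level setting):

```sql
-- Limit parallelism to 4 schedulers for queries in this database only
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Inspect all current database scoped configuration values
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;
```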

Parameter Sniffing – Using this option, we can enable or disable parameter sniffing at the database level. In previous versions of SQL Server, we could only turn it off instance-wide using a trace flag, but now it is as simple as changing an option in the database properties.
Parameter Sniffing For Secondary – This option can be set to PRIMARY or SECONDARY, and can be set only on the secondaries. PRIMARY, the default, indicates that the secondaries use the same configuration as the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While secondary databases can be configured with a different value than their primary, all secondary databases use the same value; different settings cannot be configured for individual secondaries.

Query Optimizer Fixes – In previous versions of SQL Server, many query optimizer hotfixes were enabled collectively by a trace flag. In SQL Server 2016, we can choose to enable or disable these query optimizer hotfixes for an individual database.
Query Optimizer Fixes For Secondary – This option can be set to PRIMARY or SECONDARY, and can be set only on the secondaries. PRIMARY, the default, indicates that the secondaries use the same configuration as the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While secondary databases can be configured with a different value than their primary, all secondary databases use the same value; different settings cannot be configured for individual secondaries.
– Starting with SQL Server 2016, we also have the ability to clear the plan cache of an individual database, using the command:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Basics of Temporal Tables in SQL Server 2016


Microsoft has introduced many new features with the release of SQL Server 2016. One such interesting feature is Temporal Tables, also known as system-versioned tables. In one line, Temporal Tables allow SQL Server to automatically keep the history of the data in a table, instead of only the current data that we see in regular tables, temporary tables, or table variables. Temporal Tables is a database feature newly introduced with SQL Server 2016. Although called a system-versioned temporal table, this is a new type of user table, designed to keep a full history of data changes in the table and to allow easy point-in-time analysis of the data. You might wonder why this type of table is called a system-versioned temporal table: it is because the period of validity for each row is managed by the system, that is, by the database engine.

Temporal tables were introduced in the ANSI SQL:2011 standard, which Microsoft has adopted from SQL Server 2016 onwards. To understand temporal tables better, we can compare their behavior with that of a regular user table. In a normal user table, we insert data and can see it with a SELECT statement, but once we issue a DELETE or UPDATE statement, subsequent SELECT statements see only the latest data. With a temporal table, we can still query the data that was deleted, or the data that was modified and its old value. As a concrete example, suppose we have a table with one column containing a single row with the value ABCD, and we update it from ABCD to ABC. In a normal user table we can now see only ABC, but with a temporal table we can actually see both the old value ABCD and the current value ABC. This is possible because the history of data changes and values is stored in a history table. The history table holds the old data along with start and end times that identify when each row was current in the temporal table, thus allowing us to see the data as of an earlier point in time.

Now that we have a basic idea of temporal tables, let us try to understand how to create and query them. Before we proceed, as with any other feature or object, there are some prerequisites for creating temporal tables.
– It is mandatory for a temporal table to have a primary key defined.
– Two additional columns are required while creating a temporal table, used to record the start and end date and time for each row. These two columns must be of the datetime2 data type and are referred to as period columns. They are used exclusively by the system to record the period of validity for each row whenever the row is modified.
– These tables cannot be part of the In-Memory OLTP feature.
– There are some restrictions on the types of triggers that can be created on these tables.

CREATE TABLE dbo.TestTemporalTable
	(ID int PRIMARY KEY,
	Col1 int,
	Col2 int,
	SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
	SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
	PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime))
	WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TestTemporalTableHistory));

If we don’t specify a HISTORY_TABLE, SQL Server creates a history table with a generated name like dbo.MSSQL_TemporalHistoryFor_xxx, where xxx is the object id of the main table. In the SSMS GUI, TestTemporalTable is shown in the tables list, with TestTemporalTableHistory shown nested under it. TestTemporalTableHistory has the same columns as the temporal table itself, but any constraints are removed. We can also create different indexes on TestTemporalTableHistory and TestTemporalTable based on how the application uses each table, and having appropriate indexes on each greatly helps performance. Note that this feature is not comparable to CDC (Change Data Capture), which tracks changes mainly by reading the transaction log file.
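Building on the table above, a minimal sketch of how history is populated and queried (the literal values and the timestamp are illustrative):

```sql
-- Modify a row; the old version moves to dbo.TestTemporalTableHistory
UPDATE dbo.TestTemporalTable SET Col1 = 2 WHERE ID = 1;

-- A plain SELECT sees only the current data
SELECT ID, Col1, Col2 FROM dbo.TestTemporalTable;

-- Data as it existed at an earlier point in time (times are stored in UTC)
SELECT ID, Col1, Col2
FROM dbo.TestTemporalTable
    FOR SYSTEM_TIME AS OF '2016-06-01T10:00:00'
WHERE ID = 1;
```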

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2016

Thanks,
SQLServerF1 Team

 

SQL Server 2014 Developer Edition is Free From Now

As a DBA or developer, it is important to have a SQL Server instance available to install and use, to understand how it works and how applications use it. So far, DBAs have mostly relied on the free Evaluation edition of SQL Server provided by Microsoft. The Evaluation edition expires after 180 days, after which we need to purchase a license and upgrade the Evaluation edition to a licensed edition depending on our requirements. Microsoft offers different editions for different purposes: Enterprise edition for important and critical production applications, Standard edition for less critical applications to reduce licensing cost, and Developer edition specifically for developers to develop and unit test applications before moving them to a production environment. Previously, most developers preferred buying Developer edition, as it costs less, is released specifically for this purpose, contains all the features available in Enterprise edition, and can easily be upgraded to Enterprise edition if required.

Starting with SQL Server 2014, Microsoft has decided to make SQL Server Developer edition free of cost for developers through the Visual Studio Dev Essentials program. Once you log in as a Visual Studio Dev Essentials member, you can download SQL Server Developer edition and use it free of cost. This decision was taken in order to let developers leverage the capabilities that SQL Server 2014 and higher have to offer for their applications; it is also another step toward making SQL Server more accessible to users, which helps spread SQL Server into more companies. However, it is important to keep in mind that SQL Server Developer Edition is still for development and testing purposes only, and is not to be used in production environments or with any production data.

A few important things to keep in mind:
– This offer only includes SQL Server Developer Edition; it does not include an Operating System license or a license for any other product.
– SQL Server 2016 Developer edition will also be made available free under the same program, and this will most likely continue with future releases of SQL Server.

If you are not a Visual Studio Dev Essentials member, you may either buy SQL Server Developer edition or try other options available, like SQL Server Evaluation edition. If the application is small and not critical, you may even try SQL Server Express edition. Hopefully you will get your free Developer edition and explore the new features available, like In-Memory OLTP.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

SQL Server Cumulative Updates are More Stable to Install

Microsoft releases various service packs, cumulative updates, and hotfixes for SQL Server to fix bugs and known issues, and each of these serves a different purpose. A hotfix addresses a specific bug or issue: it is released when a customer opens a case with Microsoft and the issue is identified as a bug, so DBAs need to apply a hotfix only if they are facing that specific issue. Cumulative updates contain a larger number of fixes, including multiple hotfixes and additional bug fixes reported by customers. Service packs are higher level still and contain all previous service packs and cumulative updates released so far for that version of SQL Server.

With each release of SQL Server, Microsoft has been making various improvements to both the product and the process based on feedback received from customers. For example, since SQL Server 2008, Microsoft has allowed service packs to be uninstalled, thus providing a better and easier way to roll back in case something goes wrong after applying a patch. In 2016, Microsoft announced that it recommends installing the latest available service pack or cumulative update, and stated that both will be equally stable, as the level of testing done is now the same. Previously, there was an assumption in the DBA community that service packs are more stable than cumulative updates, since service packs contain more fixes and were thought to be tested more thoroughly than CUs.

With this announcement, Microsoft has made it clear that cumulative updates going forward from 2016 are stable, and it recommends installing them as soon as they are available, similar to how DBAs already react to the release of service packs. This is due to a change in the way service packs and cumulative updates are created and tested. Microsoft has also made the effort to update the KB articles to reflect this. Previously, the KB articles carried the message “This cumulative package is intended to correct only the problems that are described in this article. Apply it only to systems that are experiencing these specific problems.”, which made many customers back away from applying cumulative updates; now they are updating the message to “we now recommend ongoing, proactive installation of CU’s as they become available.”

This change was welcomed by the DBA community, as it makes it easier to convince management to go ahead with applying the latest updates available from Microsoft, whether a service pack or a cumulative update. Hopefully the coming years will bring many more changes that improve not only the product, but also the processes around it.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

Pros and Cons of Running SQL Server On-premise vs Azure Cloud


Cloud solutions have been growing in popularity recently, with new offerings and more features being added that reduce or eliminate many traditional operational overheads of maintaining hardware, servers, OS, network, SQL Server, patching, backups, high availability, disaster recovery, and so on. The cost and the features available depend on the type of offering chosen. At present, SQL Server instances or databases can be hosted in different environments: SQL Server on physical hardware on-premise in the client's (our own) data center, SQL Server on a virtual machine guest running on-premise, SQL Server on Azure VMs (IaaS), or SQL Azure database as a service (PaaS).

There are many advantages as well as disadvantages to running SQL Server in each of the above environments; some of the pros and cons are listed below.
Pros of running SQL Server on Azure
– Running SQL Server on Azure VMs reduces the cost of hardware maintenance, as it is taken care of by Microsoft, thus reducing the load on our data centers and the overall maintenance costs.
– Depending on the type of Azure offering chosen, different administrative or operational costs are reduced. For example, with SQL Azure database as a service (PaaS), administrative tasks like setting up and maintaining hardware and installing SQL Server, as well as maintenance tasks like patching, backups, and high availability, are taken care of by Microsoft. With the Infrastructure as a Service (IaaS) offering, hardware costs are reduced.
– One of the important advantages of Azure VMs and SQL Azure database as a service is the flexibility to scale up or down based on requirements. For example, it is a common requirement to have a test server with a configuration similar to the production specifications; this can easily be set up on Azure, and once testing is complete we can release or shut down the Azure database, avoiding the cost of maintaining the server outside of testing time, which can be significant. As another example, when we need to scale up the performance of a server, such as a shopping portal during Thanksgiving or festival seasons when demand for the portal is high, Azure allows us to quickly scale up.

Cons of running SQL Server in Azure
– Although Azure provides a cost-effective cloud solution, it has its own disadvantages too. With SQL Azure database as a service, there is not much control over features or operations like patching, backups, etc.
– Another major concern about running SQL Server on Azure is security. For applications that store critical client data, such as PII or finance-related data, many organizations are not comfortable storing it in third-party data centers where they do not have much control.
– The main purpose of a SQL Server database is to store data and allow it to be retrieved when required, so there is a lot of dependency between it, the application, and other services. It is therefore important to consider where the dependent or related applications are running. For example, if applications and other services run on-premise, a SQL database on Azure can introduce additional latency, and any network issue can frustrate users; critical applications running on-premise may therefore benefit from SQL Server running on-premise too.
– It is also possible that at times the performance of SQL Server on Azure is not predictable or consistent, because SQL Server shares the underlying hardware resources with other SQL Server instances or applications; heavy load from all servers or applications can saturate resources and cause bottlenecks, so performance on Azure is not always predictable.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

Memory Counters to Check on Virtual Machine Running SQL Server


These days it is common to see SQL Server instances running on virtual machines, with multiple virtual machine guest systems running on one host server. For clarity, let us look at some terms.
Host – The server that runs on top of the hardware and manages and allocates the server's hardware to all virtual machines.
Guest – A virtual machine that has been allocated some hardware resources (CPU, memory, storage) from the pool of resources available on the host.

As one host system can create and run multiple guest virtual machines, all these guests share the CPU, memory, and storage available on the server. One of a DBA's responsibilities is to ensure that the SQL Server instances running in a virtual machine guest do not face performance problems. However, due to the many configuration settings at the host and guest level, it is possible that the hardware resources allocated to the guest cannot be fully utilized by SQL Server; the host may apply further limits and not allow SQL Server to use the resources allocated to it. Sometimes, when applications on another guest consume more resources, the remaining guests on the same host may be impacted.

Memory is one of the important resources for ensuring high performance, as reading data from memory is much faster than reading from disk. Our guest system may be allocated a certain amount of memory, and DBAs may have set a max server memory cap for the SQL Server instance, but it is possible that SQL Server is never allowed to use the allocated memory, or is forced to release memory allocated to it, so it is important to monitor whether SQL Server is being asked to release memory. In general, DBAs monitor memory counters like Available Memory, Buffer Manager: Buffer cache hit ratio, Page Life Expectancy, Memory Grants Pending, Memory Grants Outstanding, Free Pages, Total Server Memory, and Target Server Memory. On a virtual machine, however, there are additional memory counters worth monitoring, including those below.

Memory Limit (MB) – The amount of memory allocated to the guest.
Memory Reservation (MB) – The lowest amount of memory the guest is guaranteed to have. Ideally this would be 100% of the memory allocated to the guest, but that is not always the case; for SQL Server, the reservation is generally expected to be at least 75% of the guest's allocated memory.
Memory Ballooned (MB) – Memory reclaimed from the guest by the balloon driver.
Memory Swapped (MB) – The current amount of guest physical memory swapped out to the virtual machine's swap file by the VMkernel. Swapped memory stays on disk until the virtual machine needs it.
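From inside the guest, a quick sketch of checking whether the OS is signaling memory pressure to SQL Server, using standard DMVs (exact columns can vary slightly by version):

```sql
-- Physical memory state as seen by the guest OS
SELECT total_physical_memory_kb / 1024 AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- Page Life Expectancy from the buffer manager counters
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```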

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

Tips to Improving SQL Server Backup Performance


Backups are one of the most important core DBA tasks, one that every DBA needs to master. Any shortcomings or issues in backup planning, implementation, monitoring, or testing can result in data loss, or can increase the time taken to recover the data. A SQL Server instance supports multiple databases, with sizes ranging from a few MBs to very large databases of hundreds of GBs or, of late, terabytes (TBs). Additional care must be taken to plan and implement a backup strategy for large databases, as the time taken to perform the backup increases significantly; we need to use the different backup methods available diligently, for example performing weekly full backups, daily differential backups, and log backups every 10-15 minutes. Backups remain important even when high availability and disaster recovery solutions are in place.

As DBAs, we sometimes need to look into ways of speeding up the backup process for critical databases that are used 24×7. Performing a backup can slow down a database to some extent, due to the IO load of reading from the database files and writing to the backup file. There are different ways to speed up SQL Server database backups and avoid problems on critical servers; some of them are listed below. Note that some of these options are available only in newer versions of SQL Server; for example, backup compression is available only starting with SQL Server 2008.

– Starting with SQL Server 2008, Microsoft introduced backup compression. Using backup compression not only reduces the size of the backup files, but also reduces the amount of IO performed to write them, thus reducing the time taken to perform the backup.

– SQL Server allows us to back up one database to multiple backup files, which increases parallelism and speeds up the backup process. If we point the backup files at different disk drives, the IO load is spread out further, making the backup even faster. We can split a database backup across many files, but we need to find the best number of files, as using too many can have a negative impact.

– In some environments it is common to back up directly to a network share or tape drive, which causes the backup to take more time because the data has to move across the network. If the backup of a large database is taking a long time, consider backing up locally first to a dedicated backup drive, and then moving the backup file from the local drive to the backup share or tape drive.

– We need to ensure that file-system-level backups and SQL Server backups do not overlap in time, as running both together slows down the system and causes the backups to take longer.

– For large databases we can choose alternative backup plans, like weekly full backups, daily differential backups, and log backups every 10-15 minutes; this way we can restrict the long-running full backups to once a week. Differential backups are usually fast compared to full backups. It is important to move all full, differential, and log backups to a backup share or tape so that they can be used in case the server or the drive holding the backups is lost or corrupted.

– We can also use backup parameters like BUFFERCOUNT, BLOCKSIZE, and MAXTRANSFERSIZE to improve backup performance. There are no universally good or bad values for these parameters; it is a trial-and-error approach, where we test different values and conclude what works best for our environment.
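A sketch combining several of the tips above (the database name, file paths, and parameter values are illustrative; test values in your own environment):

```sql
-- Compressed backup striped across two files on different drives,
-- with tuned IO parameters (a starting point, not a recommendation)
BACKUP DATABASE SalesDB
TO  DISK = 'E:\Backup\SalesDB_1.bak',
    DISK = 'F:\Backup\SalesDB_2.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB per transfer
     STATS = 10;                 -- report progress every 10 percent
```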

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

The query processor ran out of internal resources Error in SQL Server errorlog


Many errors are documented for SQL Server; they are raised when certain error conditions occur, ranging from syntax errors to problems with the instance, databases, jobs, performance, features, etc. When a particular error is raised, the error message carries some information about the kind of error and what might have caused it. Sometimes, however, the errors are not detailed, or are misleading, which can send troubleshooting in the wrong direction or slow down the resolution.

Some errors are related to performance, and the error message alone may not be enough to understand the reason for the failure. Below is one such error, which occurs due to performance issues with the server, or with a specific stored procedure, batch, or individual SQL query. Whenever a query hits a SQL Server instance, a query plan is created if it is the first time; otherwise the optimizer checks for existing plans and reuses one if it is already present. For various reasons, the query plan generated may not be optimal and may take more time to complete, and sometimes the query can fail altogether with various errors. Below is one such error.

Error: 8623, Severity: 16, State: 1.
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

This particular error is a rare event and is only expected to occur when SQL Server receives an extremely complex query, or a query that references a very large number of tables or partitions. Different things can make a query count as complex; for example, it can happen when selecting records through a query with a very large number of entries in the IN clause, like more than 10,000. There are also some bugs and known issues that can cause this error, and some Connect items have been raised for them. You can use SQL Server Profiler or Extended Events to track down the query causing this error. If the error recurs on a certain day or at a certain time, check whether any jobs running at that time are leading to it.

The resolution for this error is to simplify and rewrite the complex query. We can try dividing the query, getting part of it working, and then adding the extra joins and conditions back. We can also materialize intermediate results into temp tables and use the temp tables in joins to avoid the error. Other alternatives include running the query with hints such as OPTION (FORCE ORDER), OPTION (HASH JOIN), or OPTION (MERGE JOIN), or using trace flag 4102 with a plan guide. Enabling trace flag 4102 reverts the query optimizer to the older behavior for handling semi-joins.
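As a sketch of the temp-table rewrite for a huge IN list (table and column names are hypothetical):

```sql
-- Instead of: SELECT ... FROM dbo.Orders WHERE CustomerID IN (1, 2, ..., 50000)
-- load the values into a temp table and join against it
CREATE TABLE #CustomerFilter (CustomerID int PRIMARY KEY);

INSERT INTO #CustomerFilter (CustomerID)
VALUES (1), (2), (3);   -- in practice, bulk insert the full list

SELECT o.*
FROM dbo.Orders AS o
INNER JOIN #CustomerFilter AS f
        ON f.CustomerID = o.CustomerID;

DROP TABLE #CustomerFilter;
```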

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team

 

CPU Affinity Settings on Server with Multiple SQL Server Instances


It is common to see multiple SQL Server instances installed on one server. The intention is to reduce the Operating System licensing cost and to use the hardware resources effectively. Having multiple instances also separates maintenance and provides more security, by not allowing users of one application to connect to another application's SQL Server instance databases. With the introduction of virtualization, the case for multiple SQL Server instances on one server has weakened a bit, as we can allocate one VM per SQL Server instance, thus avoiding shared OS or hardware maintenance issues. However, there are still older systems, and some newer systems too, that run multiple SQL Server instances on one server.

There are some disadvantages to running multiple SQL Server instances on one server. For example, if the built-in administrators group is added to an instance as sysadmin, then all local administrators get access to that instance by default, which can be a security issue; even without that, a local administrator can gain backdoor access to an instance without having permissions on it, by starting the instance in single-user mode. There can also be performance issues affecting all the instances if one of them starts to behave badly due to bad queries, improper index and statistics maintenance, or bad application design.

One of the performance issues you notice on such servers is high CPU usage. One or more instances may consume more CPU and starve the other SQL Server instances, so overall server performance degrades and application users start to complain. One way to avoid each SQL Server instance causing CPU contention is to allocate specific CPUs on the server to each instance. For example, if a server has 16 processors and 4 SQL Server instances are running on the box, we could analyze the CPU requirements of each instance based on the number of databases, size of databases, number of concurrent users, and maximum CPU required during high load, and then assign a specific number of CPUs to each instance. If one instance requires more CPU, allocate 8 processors to it, and assign the remaining available processors to the other 3 instances based on their usage. If the combined requirement of all SQL Server instances is greater than the available processors, then we will need to plan to move one or more instances to another server.

We can use sp_configure, or the instance properties window in SSMS, to choose which CPUs are allocated to each SQL Server instance. There is also an option for the I/O affinity mask; do not change it, and instead change only the affinity mask (the CPU affinity mask). There can also be an affinity64 mask setting (used for CPUs beyond the first 32), which you can likewise ignore here and just use the affinity mask setting. Again, do not change the I/O affinity setting. For each SQL Server instance choose different CPUs to avoid overlapping.
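On SQL Server 2008 R2 and later, a simpler alternative to editing the affinity mask bits directly is ALTER SERVER CONFIGURATION; a minimal sketch, assuming this instance should be bound to the first four CPUs, would be:

```sql
-- Bind this instance's scheduler threads to CPUs 0 through 3.
-- Run a similar statement on each instance with a
-- non-overlapping CPU range (e.g. CPU = 4 TO 7 on the next one).
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

-- To revert to the default (use all CPUs):
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;
```

This changes only the CPU (process) affinity and leaves the I/O affinity setting untouched, which matches the advice above.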

Also, it is good to review and adjust the Max Degree of Parallelism (MAXDOP) setting as per best practices, taking into account the number of CPUs allocated to each SQL Server instance.
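For instance, if an instance has been affinitized to 4 CPUs, capping MAXDOP at or below that number keeps a single parallel query from occupying every scheduler the instance owns (the value 4 here is just an example tied to that assumption):

```sql
-- 'max degree of parallelism' is an advanced option,
-- so it must be made visible first:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap parallelism to match the 4 CPUs allocated to this instance:
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```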

Hope this was helpful.

This is applicable to the following versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

SQL Server High CPU Issues Due to Edition Limitations

SQLServerF1

There are a variety of issues that can cause SQL Server performance problems. Causes can range from low server hardware configuration, improper OS or SQL Server settings, fragmentation, and blocking, to the lack of proper maintenance like index rebuilds or statistics updates. It is not easy to identify or pinpoint what is causing the slowness or performance problems with SQL Server. As DBAs we need to review everything related to the hardware, OS and SQL Server to be able to find the root cause of the issue.

There are many cases where performance issues are caused by low server hardware configuration, like lack of CPU power, insufficient memory, or under-performing disk I/O. We can look at various sources like perfmon counters, SQL Server DMV statistics, query waits, etc. to understand whether there is a real hardware bottleneck and whether we need to upgrade the hardware for better performance. However, there can be situations where, although the server has a good hardware configuration in terms of CPU, memory and disks, improper settings or licensing limitations cause SQL Server not to use all of the available hardware on the server.

One such issue is slowness caused by high CPU usage due to limitations of the SQL Server edition being used. On one of our client servers, there were multiple SQL Server instances installed. A monitoring tool watching the server raised intermittent alerts for high CPU usage, yet upon connecting to the server and verifying, the CPU usage was reported as only 50%. Users also reported performance issues at times, but not frequently.

Upon checking the server, Task Manager reported the CPU usage at around 50%. This server had 8 sockets with one core each, and Windows Server 2012 as the Operating System. By default, Task Manager combines all the processors available on the server and shows their average usage. There is also an option to see a detailed view of each individual processor's usage; checking that, we found that only four processors were pegged at 100%, whereas the other 4 CPUs had negligible usage. So it appeared that the SQL Server instances were using only 4 CPUs, which explained the high usage concentrated on those four CPUs.

Upon checking further, we noticed that all the SQL Server instances installed on the server were Standard Edition. Checking the documentation showed that SQL Server 2012 Standard Edition can use a maximum of 4 sockets or 16 cores, whichever is less. SQL Server 2008 R2 and SQL Server 2005 Standard Editions have a similar limitation of 4 sockets.
https://msdn.microsoft.com/en-us/library/ms143760(v=sql.110).aspx

We can also check and confirm this on SQL Server 2012 instances by checking the SQL Server errorlog, which will have a message like “SQL Server detected 8 sockets with 1 cores per socket and 1 logical processors per socket, 8 total logical processors; using 4 logical processors based on SQL Server licensing”.
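Rather than opening the errorlog file manually, the message can be searched for from a query window; a quick sketch using the xp_readerrorlog search parameters:

```sql
-- Search the current SQL Server errorlog (log 0, type 1 = SQL Server)
-- for the licensing message about logical processors:
EXEC xp_readerrorlog 0, 1, N'logical processors';
```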

For SQL Server 2008 R2 or SQL Server 2005, we can query sys.dm_os_schedulers to see how many processors are being used by the SQL Server instance.
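A simple way to count the schedulers SQL Server is actually using is to look at the visible online schedulers in that DMV:

```sql
-- Each VISIBLE ONLINE scheduler corresponds to a CPU
-- the instance is allowed to use for query processing:
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

On the server described above, this would return 4 rather than 8, confirming the edition limitation.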

The server where we had issues had multiple SQL Server instances, so we used the processor affinity mask settings to map each instance to either the first 4 or the last 4 processors. This allowed us to use all the available processors on the server, thus reducing the CPU contention.
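On an 8-CPU server like this one, the mapping can be expressed as affinity mask bitmasks; a sketch of what we ran per instance (each bit in the mask corresponds to one CPU):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Instance A: first 4 CPUs, binary 00001111 = 15 (0x0F)
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;

-- Instance B (run on that instance): last 4 CPUs,
-- binary 11110000 = 240 (0xF0)
-- EXEC sp_configure 'affinity mask', 240;
-- RECONFIGURE;
```

Because Standard Edition counts sockets, each instance still sees only 4 CPUs, but with non-overlapping masks the two instances together cover all 8.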

Hope this was helpful.

This is applicable to the following versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 