How to Decide If Stretch Database Can Be Used in Your Environment

SQLServerF1

Stretch Database is an interesting and popular new feature introduced with SQL Server 2016. Stretch Database helps in migrating our historical, less frequently used, archived or cold data transparently and securely to the Microsoft Azure cloud. Stretch Database provides many benefits to organizations, like reducing storage costs and automatically archiving old data to Azure. However, Stretch Database also has its own limitations, which make it less likely to be used as of now with the SQL Server 2016 RTM release. Hopefully Microsoft comes up with significant improvements soon in service packs, or with another release in a couple of years, as it has been doing since 2008. Some of the benefits to weigh when deciding on the SQL Server 2016 Stretch Database feature are:
– Provides cost-effective availability for cold data (historical data which is not accessed much, but is still available to support user queries from the Azure SQL database).
– Using this feature does not require any changes to the applications; SQL Server takes care of it internally and transparently.
– Moving cold or infrequently used data to the Azure SQL database reduces the maintenance effort on the production data, such as less time required for backups, indexing and statistics updates, etc.

To use Stretch Database, we enable it at the instance level, at the database level and on at least one table. Once it is enabled at all three levels, it silently begins to migrate the historical data to Azure SQL Database in the Microsoft cloud. If we are storing historical data in a separate table in the on-premise database, then we can migrate the entire table to the Azure cloud. If our table contains both historical and current data, then we can specify a filter predicate to select the rows which need to be moved to the Azure SQL database (a T-SQL sketch of these enablement steps is shown after the points below). Also, importantly, Stretch Database ensures that no data is lost if a failure occurs during migration. There is also retry logic to handle intermittent connection issues that may occur during migration. However, the most important question from many organizations' management, DBAs and developers is how to decide whether Stretch Database suits their environment or requirements. Below are some points to help decide if Stretch Database can help in meeting your requirements and solving the existing problems.

– Are you looking to store historical data and do not want to get rid of very old data because it may still need to be accessed occasionally? Then Stretch Database will be of great help in reducing the storage and maintenance costs of the old data while still allowing you to access it from the Azure cloud.
– Are you looking for an archival solution to move old data away from the frequently accessed production data? Then the Stretch Database feature is a very good solution to your problem, as you can archive old data to the Azure SQL database in the cloud, and the best thing is you do not need to change anything in your application, as the data storage and access are handled purely by SQL Server itself.
– If you are looking to reduce storage, compute and memory costs, then this is an ideal solution.
– If some tables are very large and are causing issues with maintenance tasks like backups, reindexing, stats updates, etc., then archiving is the best approach and Stretch Database will be a very good solution.
– If your backups/restores are taking too long and missing SLAs, then moving old data to the Azure cloud using Stretch Database reduces the size of the on-premise tables and thus the size of the on-premise database, so backup/restore times will be shorter and should meet the SLA requirements.
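As a reference, below is a minimal T-SQL sketch of what enabling Stretch Database at each of these levels typically looks like; the database name, Azure server name, credential, table and filter function are hypothetical placeholders, so adjust them to your environment.

-- Enable Stretch Database at the instance level
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO

-- Enable Stretch at the database level (server name and credential are placeholders)
ALTER DATABASE SalesDB
SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = 'myazureserver.database.windows.net',
     CREDENTIAL = [AzureStretchCredential]);
GO

-- Inline table-valued function used as the filter predicate
-- (only rows older than 1 Jan 2014 are eligible for migration)
CREATE FUNCTION dbo.fn_stretchpredicate (@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < CONVERT(datetime, '20140101', 112);
GO

-- Enable Stretch on a table that mixes current and historical rows
ALTER TABLE dbo.OrderHistory
SET (REMOTE_DATA_ARCHIVE = ON
        (MIGRATION_STATE = OUTBOUND,
         FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate)));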

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

DATABASE SCOPED CONFIGURATION in SQL Server 2016

SQLServerF1

Microsoft SQL Server 2016 has many new features and enhancements to existing features. Also, Microsoft claims breakthrough improvements in query performance in SQL Server 2016 compared to its previous versions. Database Scoped Configurations is one of the interesting enhancements made in SQL Server 2016. In previous versions of SQL Server, we could open the database properties, view various details about a database and also change some of the database properties to alter its behavior. Starting with Microsoft SQL Server 2016, in the database properties window, on the Options tab, a new section has been added with the name “Database Scoped Configurations”, which includes different options which can be configured.

Legacy Cardinality Estimation – This option allows us to set the query optimizer cardinality estimation model at the individual database level, independently of the database compatibility level. In SQL Server 2016 there are a lot of enhancements that come when we set compatibility level 130 for a database, such as performance improvements with Columnstore and In-Memory OLTP and all the query optimizer enhancements released over the years under trace flag 4199. However, there can be situations where you want to benefit from the new compatibility level but still want the application to work well and behave as it did under the old cardinality estimator; in that case we can enable this option. Once we set the compatibility level of the database to 130, we get all the query execution benefits that come with SQL Server 2016, while still using the old CE.
Legacy Cardinality Estimation For Secondary – This option can be set to PRIMARY or SECONDARY. It can be set only on the secondaries, and the default value of PRIMARY indicates that the secondaries use the same configuration that is set on the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While you can configure the secondary databases to have a different scoped configuration setting from their primary, all secondary databases will use the same configuration; different settings cannot be configured for individual secondaries.

MAXDOP – In previous versions of SQL Server, we could set MAXDOP at the instance level or at the batch/stored procedure/query level, but not at the database level. Starting with SQL Server 2016, we can set the maximum degree of parallelism (MAXDOP) for an individual database. The instance-wide sp_configure MAXDOP setting is overridden by this database setting for queries run against this database.
MAXDOP For Secondary – This option can be set to PRIMARY or SECONDARY. It can be set only on the secondaries, and the default value of PRIMARY indicates that the secondaries use the same configuration that is set on the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While you can configure the secondary databases to have a different scoped configuration setting from their primary, all secondary databases will use the same configuration; different settings cannot be configured for individual secondaries.

Parameter Sniffing – Using this option, we can enable or disable parameter sniffing at the database level. In previous versions of SQL Server, we could turn it on or off only with a trace flag, but now it is made simple by just changing the option in the database properties.
Parameter Sniffing For Secondary – This option can be set to PRIMARY or SECONDARY. It can be set only on the secondaries, and the default value of PRIMARY indicates that the secondaries use the same configuration that is set on the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While you can configure the secondary databases to have a different scoped configuration setting from their primary, all secondary databases will use the same configuration; different settings cannot be configured for individual secondaries.

Query Optimizer Fixes – In previous versions of SQL Server, many query optimizer hotfixes were enabled through trace flags (such as 4199). In SQL Server 2016, we can choose to either enable or disable query optimizer hotfixes for an individual database.
Query Optimizer Fixes For Secondary – This option can be set to PRIMARY or SECONDARY. It can be set only on the secondaries, and the default value of PRIMARY indicates that the secondaries use the same configuration that is set on the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While you can configure the secondary databases to have a different scoped configuration setting from their primary, all secondary databases will use the same configuration; different settings cannot be configured for individual secondaries.
– Starting with SQL Server 2016, we also have the ability to clear the plan cache of an individual database using the command:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
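These options can also be set with T-SQL rather than through the database properties window. The statements below are a brief sketch; the values shown (MAXDOP 4 and the CE, sniffing and hotfix settings) are illustrative only and should be chosen per database.

-- Run in the context of the database you want to configure
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;

-- Secondaries default to PRIMARY, which means they inherit the primary's setting
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = PRIMARY;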

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Memory Counters to Check on Virtual Machine Running SQL Server

SQLServerF1

These days it is common to see SQL Server instances running on virtual machines. There can be multiple virtual machine guest systems running on one host server. To have more clarity, let's look at some terms.
Host – This is the server which runs on top of the hardware and manages and allocates all server hardware to all virtual machines.
Guest – This is the virtual machine which has been allocated a share of the hardware resources (CPU, memory, storage) from the available pool of resources on the server.

As one virtual machine host can create and run multiple guest virtual machines, all these guests share the available CPU, memory and storage on the server. As a DBA, one of our responsibilities is to ensure that the SQL Server instances running on the virtual machine guests do not face any performance problems. Due to the many configuration settings at the host and guest level, it is possible that the hardware resources allocated to the guest cannot be fully utilized by SQL Server; instead the host may apply further limits and not allow SQL Server to use the resources allocated to it. Sometimes, when applications on another guest system consume more resources, the remaining guest systems on the same host may be impacted.

Memory is one of the most important resources for high performance, as reading data from memory is much faster than reading from disk. Our guest system may be allocated a certain amount of memory and DBAs may have set a max server memory cap for the SQL Server instance, but it is possible that SQL Server is never allowed to use the allocated memory, or is forced to release memory allocated to it, so it is important to monitor whether SQL Server is being asked to release memory. In general, DBAs monitor common memory counters like Available Memory, Buffer Manager: Buffer cache hit ratio, Page Life Expectancy, Memory Grants Pending, Memory Grants Outstanding, Free Pages, Total Server Memory, Target Server Memory, etc. However, there are also some memory counters to be monitored on a virtual machine, which include the ones below.

Memory Limit(MB) – Amount of memory allocated to the guest.
Memory Reservation(MB) – This is the lowest amount of memory guaranteed to the guest. Ideally this would be 100% of the guest's allocated memory, but that is not always the case. For SQL Server, the reservation is generally expected to be at least 75% of the memory allocated to the guest.
Memory Ballooned(MB) – Memory released from the guest by Balloon driver.
Memory Swapped(MB) – Current amount of guest physical memory swapped out to the virtual machine’s swap file by the VMkernel. Swapped memory stays on disk until the virtual machine needs it.
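In addition to the VM-level counters above, a quick complementary check from inside the guest (SQL Server 2008 and later) is to query the memory DMVs to see whether SQL Server itself is sensing memory pressure; a minimal sketch is below.

-- How much memory the SQL Server process is using and whether it senses pressure
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;

-- What the guest operating system reports about overall memory state
SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;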

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Tips for Improving SQL Server Backup Performance

SQLServerF1

Backups are one of the most important and core DBA tasks, and one in which every DBA needs expertise. If there are any shortcomings or issues with backup planning, implementation, monitoring and testing, it can result in data loss or can increase the time taken to recover the data. SQL Server supports multiple databases per instance, with database sizes ranging from a few MBs to hundreds of GBs or even terabytes (TBs) of late. Additional care has to be taken to plan and implement the backup strategy for large databases, as the time taken to perform the backups increases significantly, so we need to diligently use the different backup methods available for large databases, like performing weekly full backups, daily differential backups and log backups every 10-15 minutes. Backups are important even when high availability and disaster recovery solutions are implemented.

As DBAs, we may sometimes need to look into possibilities of speeding up the backup process to save time on backups of critical databases which are used 24×7. Performing a backup can slow down the databases to an extent, due to the IO load the backup places on reading from the database files and writing to the backup files. There are different ways we can consider to speed up the SQL Server database backup process and avoid problems on critical servers. Some of the methods or things we can consider are below. It is important to note that some of the options mentioned below may be available only in later versions of SQL Server, like backup compression, which is available only starting with SQL Server 2008.

– Starting with SQL Server 2008, Microsoft introduced backup compression. Using backup compression not only reduces the size of the backup files, but also reduces the amount of IO performed to write to the backup files, thus reducing the time taken to perform the backups.

– SQL Server allows us to perform a backup of one database into multiple backup files, which increases parallelism and speeds up the backup process. If we point the multiple backup files to different disk drives, this further spreads out the IO load and makes the backup even faster. We can split a database backup across many backup files, but we need to find the best number of files, as using too many files can have a negative impact too.

– In some environments it is common to perform backups directly to a network share or tape drive, which causes the backup to take more time because the data has to be moved across the network. If the backup of a large database is taking a long time, we can consider performing the backup locally first, to a dedicated backup drive, and then moving the backup file from the local drive to the backup share or tape drive.

– We need to ensure that file-system-level backups and SQL Server backup schedules do not overlap, as both running together will slow down the system and cause the backups to take more time.

– For large databases we can choose alternative backup plans, like performing weekly full backups, daily differential backups and log backups every 10-15 minutes; this way we can restrict the long-running full backups to once a week. Differential backups are generally fast compared to full backups. It is important to move all full, differential and log backups to a backup share or tape so that they can be used in case the server or drive where the backups are taken is lost or corrupted.

– We can also use backup parameters like BUFFERCOUNT, BLOCKSIZE and MAXTRANSFERSIZE to improve backup performance, as shown in the example below. There are no universally good or bad values for these parameters; it is a trial and error approach, where we need to test different values and conclude what works best for our environment.
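Below is a hedged example that combines several of the points above (compression, striping across multiple files and the tuning parameters); the database name, file paths and parameter values are placeholders that should be tested in your own environment.

-- Striped, compressed backup with explicit buffer and transfer-size tuning
BACKUP DATABASE SalesDB
TO  DISK = N'D:\Backups\SalesDB_1.bak',
    DISK = N'E:\Backups\SalesDB_2.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,          -- number of IO buffers; test different values
     MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer, the maximum allowed
     BLOCKSIZE = 65536,         -- physical block size in bytes
     STATS = 10;                -- progress message every 10 percent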

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

The query processor ran out of internal resources Error in SQL Server errorlog

SQLServerF1

There are a large number of errors documented for SQL Server, raised when certain error conditions occur, ranging from syntax errors to problems with the instance, databases, jobs, performance, features, etc. When a particular error is received, the error message will have some information regarding the kind of error and what might have caused it. Sometimes the errors are not detailed, or are misleading, and make troubleshooting move in the wrong direction or slow down the resolution of the errors.

Of the many errors, some are related to performance, and the error message may not be enough to understand the reason for the failure. Below is one such error message, which occurs due to performance issues with the server, or with a specific stored procedure, batch or individual SQL query. Whenever a query hits a SQL Server instance, a query plan is created if it is the first time; if not, the SQL Server optimizer checks for existing plans and reuses one if it is already present. There are various reasons why the query plan generated may not be optimal and takes more time to complete, and sometimes the query can fail altogether with different errors. Below is one such error.

Error: 8623, Severity: 16, State: 1.
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

This particular error is a rare event and is only expected to occur when an extremely complex query, or a query that references a very large number of tables or partitions, is received by SQL Server. There can be different definitions of what counts as a complex query; for example, it can happen when attempting to select records through a query with a very large number of entries in the IN clause, such as more than 10,000. There are also some bugs or known issues that can cause this error, and some Connect bugs have been raised too. You can use SQL Server Profiler or Extended Events to track and identify the query causing this error. If this error is recurrent and happens on a certain day or at a certain time, then we can check if there are any jobs running at that time leading to this error.

The resolution for this error is to simplify and rewrite the complex query. We can also try to divide the query, get part of it working, and then add the extra joins and conditions. We can also load intermediate results into temp tables and use the temp tables in joins to avoid this error. Other alternatives include running the query with hints like OPTION (FORCE ORDER), OPTION (HASH JOIN) or OPTION (MERGE JOIN), or using trace flag 4102 with a plan guide. Enabling trace flag 4102 reverts the behavior of the query optimizer to the older version of the optimizer for handling semi-joins.
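As an illustration of the temp table approach mentioned above, the sketch below replaces a very large IN list with a join to a temp table; the table and column names are hypothetical.

-- Instead of: SELECT * FROM dbo.Orders WHERE OrderID IN (1, 2, 3, ... thousands of values)
-- load the values into a temp table and join to it
CREATE TABLE #OrderIDs (OrderID int PRIMARY KEY);

INSERT INTO #OrderIDs (OrderID)
SELECT OrderID
FROM dbo.StagingOrderIDs;   -- wherever the large list of IDs comes from

SELECT o.*
FROM dbo.Orders AS o
INNER JOIN #OrderIDs AS i
    ON i.OrderID = o.OrderID
OPTION (FORCE ORDER);       -- optional hint, one of the alternatives mentioned above

DROP TABLE #OrderIDs;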

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

CPU Affinity Settings on Server with Multiple SQL Server Instances

SQLServerF1

It is common to see multiple SQL Server instances installed on one server. The intention behind this is to reduce the licensing cost related to the operating system and also to use the hardware resources effectively. Also, having multiple instances separates maintenance and provides more security by not allowing users of other applications to connect to a different SQL Server instance's databases. With the introduction of virtualization, the reasons for having multiple SQL Server instances have reduced a bit, since we can allocate one VM for each SQL Server instance, thus avoiding issues with maintenance of the OS or hardware. However, there are still older systems, and some newer systems too, which use multiple SQL Server instances on one server.

There are some disadvantages to running multiple SQL Server instances on one server. For example, if the built-in administrators group is added to a SQL Server instance as sysadmin, then all local administrators get access to the SQL Server instance by default, which can be a security issue; even if it is not added, a local administrator can gain access to the SQL Server instance without having permissions on the instance, through the back door, by starting the SQL Server instance in single-user mode. Also, there can be performance issues for all the SQL Server instances if one of them starts to behave badly due to bad queries, improper index and statistics maintenance, or bad application design.

One of the performance issues you notice on such servers is high CPU usage. There can be one or more instances consuming more CPU and thus starving the other SQL Server instances, so overall server performance degrades and application users start to complain. One of the ways to avoid each SQL Server instance consuming too much CPU and causing contention is to allocate each SQL Server instance specific CPUs available on the server. For example, if a server has 16 processors and there are 4 SQL Server instances running on the box, we could analyse the CPU requirements of each SQL Server instance based on the number of databases, size of the databases, number of concurrent users and maximum CPU required during high load, and then assign a specific number of CPUs to each instance. So, if one instance requires more CPU, allocate 8 processors to it, and for the remaining 3 instances assign the remaining available processors based on their usage. If the combined requirement of all SQL Server instances is greater than the available processors, then we will need to plan to move one or more SQL Server instances to another server.

We can use sp_configure, or the instance properties in Management Studio, to choose which CPUs are allocated to which SQL Server instance. There is also an option to set the IO affinity mask; do not change it, instead change only the affinity mask (CPU affinity) setting. There can also be an affinity64 mask setting for processors beyond the first 32, which you can ignore unless needed and just use the affinity (CPU affinity) setting; again, do not change the IO affinity setting. For each SQL Server instance choose different CPUs to avoid overlapping.
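A brief sketch of the affinity configuration for one instance is shown below; the CPU numbers are illustrative only, and each instance on the server would be configured separately with a non-overlapping range.

-- Preferred approach on SQL Server 2008 R2 and later: bind this instance to CPUs 0-3
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

-- Older sp_configure alternative (bitmask 15 = CPUs 0,1,2,3); leave 'affinity I/O mask' untouched
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;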

Also, it is good to consider reviewing and adjusting the Max Degree of Parallelism (MaxDoP) setting as per best practices and the allocated number of CPUs for each SQL Server instance.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

SQL Server High CPU Issues Due to Edition Limitations

SQLServerF1

There is a variety of issues that can cause SQL Server performance problems. Causes can range from low server hardware configuration, improper OS or SQL Server settings, fragmentation and blocking, to lack of proper maintenance like index rebuilds or stats updates, etc. It is not easy to identify or pinpoint what is causing the slowness or performance problems with SQL Server. As DBAs, we need to review everything related to the hardware, the OS and SQL Server to be able to find the root cause of the issue.

There are many cases where performance issues are caused by a low server hardware configuration, such as lack of CPU power, insufficient memory or under-performing disk IO. We can look at various sources like perfmon, SQL Server DMV statistics, query waits, etc. to be able to understand whether there is a real bottleneck with the hardware and whether we need to upgrade the hardware for better performance. However, there can be situations where, although the server has a good hardware configuration in terms of CPU, memory and disks, improper settings or product limitations cause SQL Server not to use all of the available hardware on the server.

One such issue is slowness caused by high CPU usage due to limitations of the SQL Server edition being used. On one of our client servers, there were multiple SQL Server instances installed. A monitoring tool which monitored the server used to raise intermittent alerts for high CPU usage. Upon connecting to the server and verifying, the CPU usage was reported as only around 50%. Users also used to report performance issues sometimes, but not frequently.

Upon checking the server from Task Manager, the CPU usage was reported as around 50%, and this server had 8 sockets with one core each. The server had Windows Server 2012 as the operating system, and Task Manager by default combines all the processors available on the server and shows the average usage across all of them. There is also an option to see a detailed view of the usage of each individual processor; checking this, we found that only four processors were pegged at 100%, whereas the other 4 CPUs had negligible usage. So, it appeared that the SQL Server instances were using only 4 CPUs, hence the high CPU usage on those four CPUs only.

Upon checking further, we noticed that all the SQL Server instances installed on the server were Standard Edition. Checking the documentation showed that SQL Server 2012 Standard Edition can use a maximum of 4 sockets or 16 cores, whichever is less. There are similar 4-socket limitations for Standard Edition in SQL Server 2008 R2 and SQL Server 2005.
https://msdn.microsoft.com/en-us/library/ms143760(v=sql.110).aspx

We can also check and confirm this on SQL Server 2012 instances by checking the SQL errorlog which will have a message like “SQL Server detected 8 sockets with 1 cores per socket and 1 logical processors per socket, 8 total logical processors; using 4 logical processors based on SQL Server licensing”.

For SQL Server 2008 R2 or SQL Server 2005, we can query sys.dm_os_schedulers to see how many processors are being used by the SQL Server instance.
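A simple sketch of that check is below; counting the visible online schedulers shows how many CPUs the instance is actually able to use for query work.

-- Schedulers that SQL Server is actually using for user queries
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- Detailed view per scheduler/CPU
SELECT scheduler_id, cpu_id, status, is_online
FROM sys.dm_os_schedulers;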

The server where we had the issue had multiple SQL Server instances, so we used the processor affinity mask settings to map each instance to either the first 4 or the last 4 processors, which allowed us to use all of the available processors on the server, thus reducing the CPU contention.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Blocking Due to SPID in Sleeping State in SQL Server

SQLServerF1

Blocking is a common issue noticed in any relational database management system (RDBMS) which uses lock-based concurrency. In SQL Server, blocking generally occurs when one SPID holds a lock on a specific resource and another SPID attempts to acquire a conflicting lock type on the same resource. Generally, the time frame for which the first SPID holds the lock on the resource is very small, so when it releases the lock, the second connection is free to acquire its own lock on the resource and continue processing. This is normal behavior and happens many times throughout the day with no noticeable effect on system performance.

It is common for DBAs to notice blocking on various servers, and typically locks held for a short time do not cause any issues, nor do they get flagged by any monitoring tool. Sometimes, however, blocking can get severe and cause performance issues ranging from timeouts to slowness. DBAs generally deal with blocking issues by checking the head (lead) blocker causing the blocking and trying to understand why it is taking a long time to complete; if it is not very important, the DBAs kill the SPID. If the blocking issue is persistent, DBAs identify the culprit and tune it by adding indexes or rewriting the query.

However, sometimes there can be strange blocking issues; one of them is blocking caused by a SPID which is in a sleeping state. In general, a SPID which finishes its task releases any locks held, so sleeping sessions are not expected to cause any blocking issues. However, in some environments persistent blocking issues are noticed because a SPID in a sleeping state still holds locks and does not release them until the connection is closed from the application or until the SPID is killed by the DBAs. If we try to understand why blocking occurs with a session in a sleeping state, there can be different reasons, and some of them are mentioned below.

– Mostly this issue happens when someone uses a SQL Server Management Studio (SSMS) query window, runs some bad queries and leaves the session open. The queries which were run either have an explicit open transaction or an implicit transaction which holds locks and does not release them because no commit or rollback was issued. We need to identify such sessions, the user who is running queries from that session, what queries were run, what locks are held, which application the session is from, the hostname, etc. (a query such as the one shown after this list can help), and then DBAs can go ahead and kill the session. DBAs then need to contact the respective user or management and report the issue to avoid it happening in the future.

– This can also happen when an application opens a connection to SQL Server, runs some query, and then the application connection is lost due to an application crash, hang or some other application problem. No signal is sent to SQL Server because of the sudden crash, so SQL Server still assumes that the application may send additional queries and keeps the session open. This is due to bad application design, and the application code needs to be fixed.

– This can also happen if the application has bad code which opens transactions and fails to commit or roll back in case of query failures or other issues.

– Applications also use connection pooling, where sessions are reused when required. Periodically, applications need to close sessions which have not been reused for some time.

– We can also suggest that developers use SET XACT_ABORT ON for the connection, or in any stored procedures which begin transactions and are not cleaning up following an error. In the event of a run-time error, this setting will abort any open transactions and return control to the client. It is important to note that T-SQL statements following the statement which caused the error will not be executed.

– Some older or incompatible client components may cause these kinds of issues too, so we need to keep them updated and use the latest compatible connectivity components.
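The query below is a minimal sketch for the identification step mentioned in the first point above: it lists sleeping sessions that still have an open transaction, so they can be investigated and, if confirmed abandoned, killed.

-- Sleeping sessions that still have an open transaction (and may be holding locks)
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       s.last_request_end_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_tran_session_transactions AS t
    ON t.session_id = s.session_id
WHERE s.status = 'sleeping';

-- If a session is confirmed abandoned, it can be killed (replace 53 with the real SPID)
-- KILL 53;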

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Tips for DBAs on Troubleshooting SQL Server Performance Issues

SQLServerF1

Troubleshooting SQL Server performance issues is one of the common and important tasks for DBAs. It is not very easy to troubleshoot performance issues without prior experience or good knowledge of how SQL Server query execution works and what factors can contribute to performance issues with SQL Server instances. It is very common to see entry-level DBAs, and even DBAs with more than five years of experience, still struggle with performance issues, as there are simply many things that can cause performance problems, and many symptoms are interrelated and not easy to diagnose; only after reaching a conclusion can the solution be found and implemented. In many organizations, troubleshooting performance problems takes many hours, days or even weeks.

This post aims to give DBAs a quick list of things they can check or look into when SQL Server performance issues are reported. It is important to note that troubleshooting performance issues with SQL Server is a very broad topic, ranging from SQL Server configuration, OS configuration, network or other hardware configuration, blocking, waits, fragmentation, outdated statistics, etc. Below are some of the things which DBAs can look into when performance issues are reported with SQL Server instances.

– Understand the problem. Generally performance issues are reported by developers, managers, application users or monitoring tools. Depending on the source of the reported issue, gather data by asking questions like: what does their interpretation of the performance issue mean, why do they think SQL Server is the cause, what symptoms do they see with the application, and what data have they collected so far? Collecting as much data as possible helps move in the right direction.

– If the performance issues are reported by all users, that means it is general slowness and can be anything from hardware configuration issues to SQL Server query problems. We can start looking into basic SQL Server configuration settings like MAXDOP, max server memory, SQL Server version and patch details, platform, etc. Gather OS configuration details like CPU, memory and disk details, and check whether they follow best practices, like data and log files on different disks, tempdb on its own dedicated disk, etc. Also, OS settings related to instant file initialization, lock pages in memory, and the CPU power plan being set to Balanced can have an impact too.

– Quickly check perfmon to see if there is any resource bottleneck. Look into counters for CPU, memory, disk IO and network to see any issues or any values which are outside of the expected ranges.

– From SQL Server, we can check whether there is regular maintenance like index rebuilds and statistics updates. Check for fragmentation and when the stats were last updated. Check if there are any jobs currently running; if jobs related to reindexing, stats updates, CHECKDB or backups are running, then stop them. Check if there is any blocking happening on the SQL Server instance; if found, check the head blocker and take appropriate action.

– Check the wait stats on the server (see the query after this list), look at the top contributing waits, and see if they can be related to any physical resource bottleneck.

– Check the currently running sessions which are taking more time and see why they are taking more time. Try to tune them by creating any missing indexes, rewriting queries, etc.

– If the application is from a vendor, then check for any best practices from the vendor for improving the performance.
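For the wait stats check mentioned above, a commonly used starting query is sketched below; the list of excluded benign wait types is illustrative rather than exhaustive.

-- Top waits on the instance, excluding some common benign/idle wait types
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK', N'WAITFOR',
                        N'CHECKPOINT_QUEUE', N'LOGMGR_QUEUE', N'DIRTY_PAGE_POLL',
                        N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH',
                        N'BROKER_TO_FLUSH', N'BROKER_EVENTHANDLER',
                        N'SQLTRACE_BUFFER_FLUSH', N'SP_SERVER_DIAGNOSTICS_SLEEP')
ORDER BY wait_time_ms DESC;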

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Introduction to SQL Azure for SQL Server DBAs

SQLServerF1

Microsoft has been aggressively focusing on promoting its cloud technology. As a SQL Server DBA, one might wonder what this cloud is all about, how it works and how it impacts the DBA role. Although the terminology may be different, a lot about the cloud may already be known to you, or you may have worked with it without being able to correlate it to what you already know. For SQL Server, Microsoft is offering two cloud platforms: one is infrastructure as a service and the other is database as a service.

Infrastructure as a service – This is similar to a Hyper-V or VMware VM, where you have a virtual machine on which you install and manage SQL Server instances, databases, jobs, etc. The only difference is that, on-premise, the VMware or Hyper-V management is taken care of by your company or your client's company, whereas in the case of the Microsoft Azure cloud the infrastructure is taken care of by Microsoft; as a DBA, we still take care of installing SQL Server, patching, maintenance and administration just as we used to, but now in a Microsoft data centre.

Database as a service – This might be something different, which many DBAs may not be familiar with. In this model the OS, network, storage and also SQL Server installation, patching, backups, etc. are taken care of by Microsoft, and we as DBAs are not responsible for such tasks; instead we can focus on other tasks such as migrating data from on-premise SQL Server to the SQL Azure cloud and tuning the database for performance. Database as a service is not a regular SQL Server instance like the ones we manage on-premise; instead there are a lot of limitations on what we can do and what changes we can make. For example, we cannot change many instance settings directly, and we cannot alter the storage configuration or tempdb configuration; instead we focus more at the database level and at the data level.

If you are wondering about the advantages of choosing the SQL Azure cloud solution, it is mainly the reduction in the cost of maintaining hardware and of SQL Server maintenance such as patching, backups, etc. Depending on the criticality of our database server and application, we can choose different service tiers based on our requirements for performance, data recovery and disaster recovery features. So, for small test/dev servers you can choose a basic or minimal-cost configuration, and for production servers, depending on the size and usage of the database, we can choose the appropriate tier. The different service tiers include Basic, Standard and Premium, and these are subdivided into S0-S1-S2-S3 for Standard and P1-P2-P3 for Premium.

As our data is stored in the cloud at a third-party vendor's location, it is important to understand how quickly we can recover our data and how much data loss can occur in case there is a disaster. For this there are different types of disaster recovery strategies to choose from, which include Geo-Restore, Standard Geo-Replication and Active Geo-Replication. With Geo-Restore, a copy of the database can be restored in another region, but this copy can be up to 24 hours old, so it may not be a good option for production data and is better suited for dev. With Standard Geo-Replication, we can recover data up to 30 minutes before the crash, and it can take up to 2 hours for the restore to complete and the database to become available, but this increases the cost. Finally, with Active Geo-Replication, the data loss can be reduced to about 5 minutes and the time to restore would be about an hour; this further increases the cost. Depending on our requirements, we can choose the solution that suits best.

In SQL Azure, there is something known as a DTU (Database Throughput Unit), which is used to measure the performance we get out of our SQL Azure database. A DTU is a combined measure of the CPU, memory and IO available to a database on a server, and we can use it to compare the performance of a database between different servers or tiers.

Hope this was helpful.

This is applicable for below versions of SQL Server

SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 