Top New SQL Server 2012 Features

SQL Server 2012 was released by Microsoft on April 1, 2012. It introduces many new features and enhances several existing ones. One of the most popular and most talked-about features in SQL Server 2012 is AlwaysOn Availability Groups.

Below are some of the top new features in SQL Server 2012.

AlwaysOn Availability Groups – SQL Server 2012 AlwaysOn Availability Groups is a major new feature which replaces many other features such as Database Mirroring, Log Shipping, and to an extent Replication. AlwaysOn, also known as HADRON, provides a High Availability and Disaster Recovery solution for critical databases. An Availability Group is a container consisting of one or more databases which fail over together as a unit. A set of replicas (servers) hosts the primary and secondary copies of the Availability Group databases. An Availability Group has one primary replica and up to four secondary replicas, of which up to two can use synchronous-commit mode. We can perform read/write operations on the primary replica and use the secondary replicas for read-only workloads, thus offloading the primary server. Multiple Availability Groups can be created between the same servers, and failover happens at the Availability Group level: failing over an Availability Group brings all databases that are part of that group online on the secondary replica. Backups and DBCC CHECKDB can also be offloaded to the secondary replicas. AlwaysOn Availability Groups are also simple to implement.
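
As a rough illustration of how an Availability Group ties databases and replicas together, the sketch below creates a group with one synchronous automatic-failover partner and one asynchronous, read-only secondary. All server, domain, and database names here are hypothetical placeholders, not values from this article.

```sql
-- Illustrative sketch: an Availability Group spanning three replicas.
-- SQLNODE1/2/3, contoso.com, SalesDB, and OrdersDB are placeholder names.
CREATE AVAILABILITY GROUP AG_Prod
FOR DATABASE SalesDB, OrdersDB
REPLICA ON
    'SQLNODE1' WITH (
        ENDPOINT_URL = 'TCP://SQLNODE1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   -- no data loss on failover
        FAILOVER_MODE = AUTOMATIC),
    'SQLNODE2' WITH (
        ENDPOINT_URL = 'TCP://SQLNODE2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    'SQLNODE3' WITH (
        ENDPOINT_URL = 'TCP://SQLNODE3.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- DR / reporting replica
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

The databases must already be in FULL recovery and have a current full backup restored on each secondary before they can be joined to the group.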

AlwaysOn SQL Server Failover Clusters – SQL Server AlwaysOn Failover Cluster Instances allow multi-subnet failover clustering, where cluster nodes can be spread geographically across different locations and data is moved to the other node through SAN-level replication. This provides both High Availability and Disaster Recovery. Another enhancement is the flexible failover policy, wherein we can choose the conditions under which a failover should occur.

Contained Databases – SQL Server 2012 provides partial containment. In previous versions of SQL Server, moving a database from one server to another also involved moving the associated logins and other objects. Starting with SQL Server 2012 Contained Databases, users are associated with the database itself and are no longer dependent on the SQL Server instance, so moving a contained database to a new server does not require the additional steps of creating logins or fixing orphaned users. Contained databases also solve a long-standing collation problem: previously, when the SQL Server instance and a database used different collations, tempdb objects were created with the instance collation; for a contained database, tempdb now creates objects related to that database using the collation of the database itself.
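
The steps to try this out are short; the database and user names below are illustrative:

```sql
-- Allow contained databases on the instance
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
-- Switch an existing database (placeholder name) to partial containment
ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;
GO
-- Create a contained user whose password lives inside the database,
-- with no dependency on an instance-level login
USE SalesDB;
CREATE USER AppUser WITH PASSWORD = 'Str0ngP@ssw0rd!';
```

Such a user authenticates directly against the database, which is why restoring the database elsewhere needs no login scripting.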

ColumnStore Indexes – The Microsoft SQL Server team brought the concept of column-based indexes from VertiPaq. Columnstore indexes in the SQL Server 2012 Database Engine can significantly speed up the processing time of common data warehousing queries. In traditional clustered and nonclustered indexes, data is stored row-wise in pages, whereas in SQL Server 2012 columnstore indexes data is grouped and stored column-wise, so each column can be accessed independently of the other columns.
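
Creating one is a single statement; the fact table and column names below are hypothetical:

```sql
-- Nonclustered columnstore index over the columns a typical DW query scans.
-- Note: in SQL Server 2012 the table becomes read-only while the
-- columnstore index exists (it must be dropped or disabled to load data).
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount, Quantity);
```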

Enhancements to ONLINE index rebuild operations – Starting with SQL Server 2012, we can rebuild indexes online even when they contain LOB columns with data types like nvarchar(max), varchar(max), and varbinary(max). This was not possible in previous SQL Server versions.
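
The syntax is the familiar online rebuild; only the restriction on LOB columns was lifted. Table and index names here are placeholders:

```sql
-- Online rebuild of an index on a table containing varchar(max)/
-- varbinary(max) columns; allowed from SQL Server 2012 onwards.
ALTER INDEX IX_Documents_Content ON dbo.Documents
REBUILD WITH (ONLINE = ON);
```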

Database Recovery Advisor – In previous versions of SQL Server, DBAs had to perform manual restores to recover a database from a failure, planning and restoring a set of backup files in a logically correct order. The new Database Recovery Advisor helps prepare restore plans that implement optimal and correct restore sequences, reducing the overhead and the chance of manual mistakes. Also, with AlwaysOn Availability Groups, backups can be performed on secondary replicas as well; in such cases the Database Recovery Advisor is helpful in preparing the restore sequence.

Reduced downtime for application upgrades – Adding a new column with a default constraint is now a metadata-only operation, significantly reducing the time such changes take.
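
For example, a statement like the following (table and column names are illustrative) completes near-instantly on SQL Server 2012 Enterprise Edition, since existing rows are not physically rewritten:

```sql
-- Metadata-only in SQL Server 2012 (Enterprise Edition): the default
-- value is recorded in metadata instead of being written to every row.
ALTER TABLE dbo.Orders
ADD OrderSource varchar(20) NOT NULL
CONSTRAINT DF_Orders_OrderSource DEFAULT ('Web');
```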

A new FILETABLE feature has been introduced in SQL Server 2012, built on top of the FILESTREAM technology. With FileTables, we can store files and documents in special tables in SQL Server, yet access them from Windows applications as if they were stored in the file system, without making any changes to the Windows applications.
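
A FileTable is created with a fixed, system-defined schema; the sketch below assumes the database already has FILESTREAM enabled and a non-transacted FILESTREAM directory configured, and the table and directory names are placeholders:

```sql
-- Creates a FileTable whose contents appear as the Windows share
-- directory 'DocumentStore' (prerequisite: FILESTREAM configured
-- at instance and database level).
CREATE TABLE dbo.DocumentStore AS FILETABLE
WITH (
    FILETABLE_DIRECTORY = 'DocumentStore',
    FILETABLE_COLLATE_FILENAME = database_default);
```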

Product Update is another new feature introduced in SQL Server 2012 Setup, where setup integrates the latest available product updates with the main product during installation, so that the main version and all applicable updates are installed at the same time, reducing additional overhead and downtime later.

New startup parameters have been added to SQL Server 2012 Configuration Manager, making it simpler to change the paths of the master database files or to add startup trace flags.

Starting with SQL Server 2012, support for partitions is increased to 15,000 per table by default; in previous versions this was limited to 1,000 by default.

Starting with SQL Server 2012, FILESTREAM filegroups can contain multiple files, which can be placed on different drives, improving I/O performance by spreading the I/O load.

There are changes in the licensing of SQL Server 2012 Enterprise Edition: it is now sold either under Server/Client Access License (CAL) licensing or under per-core licensing. Also, the Datacenter Edition has been removed in SQL Server 2012, and a Business Intelligence Edition has been introduced.

Starting with SQL Server 2012, we can install it on Windows Server 2012 Server Core, which does not have a GUI.

SQL Server Business Intelligence Development Studio has been redesigned and replaced by SQL Server Data Tools (SSDT).

There are many other features and enhancements in SQL Server 2012 as well; refer to MSDN for more information.

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.

 

Update Statistics Job Taking Long Time to Complete

Recently, I got a question from a client who had a SQL Server Agent job performing update statistics with full scan. The job had been running successfully for a long time, but over the past few days they noticed it was taking longer than it used to. The run time of the job doubled over the last few runs and continued to grow.

I verified and found that this job was created from a maintenance plan. The maintenance plan has an Update Statistics task which updates statistics for all objects (all table statistics, all column statistics, and all index statistics) with FULL SCAN.

Some of the reasons why the duration of this job can increase are:

– The number of databases on the SQL Server instance has increased. As this job covers all databases, an increase in the number of databases increases the duration of the job.

– The data in some or all of the databases has grown. It is common for databases to grow over time, which increases the time update statistics takes to complete, and thus the duration of the job run.

– Changes in CPU, memory, or I/O (read/write) usage patterns on some or all databases, which can increase the time taken to complete the job.

– Changes in the data access patterns in some or all databases, which may cause locking/blocking against the update statistics job.

– Other jobs running at the same time, so that update statistics and the new jobs have to share the available resources on the system, which can increase the job duration.

To understand which of the causes mentioned above is responsible for the increase in the run time of the job, we need historical data from when the job was running fast. The historical data we need includes performance monitor counters related to CPU, memory, disk, and network; SQL trace files to understand what other queries run while this job runs; and wait statistics and locking/blocking details to see whether the update statistics job is waiting on anything. This data is most often not available unless a third-party monitoring tool is installed or custom monitoring is configured.

We can also look into other options to improve the speed of the update statistics job by making changes to it. As this job uses the maintenance plan Update Statistics task with FULL SCAN, we can look into replacing the maintenance plan with a customized solution that updates statistics only when the row data in a table or column has actually changed. Also, if the number of changes is small, we may perform the update statistics with sampling instead. There are already some automated tools/scripts available to achieve this; one popular solution is Ola Hallengren's Index and Statistics Maintenance script. Test such a solution on a test server, review the results, and then implement it on production if you are satisfied with the results.
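
A minimal sketch of the "update only what changed" idea is below. It assumes SQL Server 2012 SP1 or later (sys.dm_db_stats_properties is not available in earlier builds); the table name in the sampled update is a placeholder:

```sql
-- List statistics that have had row modifications since their last update;
-- a custom job could loop over this result and update only these.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0;

-- For a table with relatively few changes, a sampled update instead of
-- FULL SCAN can finish much faster (table name is illustrative):
UPDATE STATISTICS dbo.Orders WITH SAMPLE 25 PERCENT;
```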

I migrated the update statistics job from the maintenance plan solution to Ola Hallengren's statistics maintenance script and noticed that the run time of the job was reduced by 70%.

The same issue can also happen with other maintenance jobs, like index rebuilds, or with any application-related jobs, and the cause of the slowness could be one of the causes mentioned above. A similar troubleshooting approach should help in identifying the cause and resolving the issue, either by fixing the identified problem or by improving the maintenance job itself.


 

Reviewing of Servers for Implementing SQL Server Best Practices

DBAs need to be proactive, performing health checks and best-practice reviews periodically on all servers to make sure that all SQL Server best practices are implemented. This approach helps avoid issues before they arise: rather than troubleshooting and fixing problems after they occur, it is always better to proactively follow best practices and avoid the problems completely.

It is common in most companies that there are server outages or performance problems with SQL Server causing unplanned application downtime. To resolve such an issue, DBAs do some research and make some configuration changes. Had the DBA performed a periodic review of the server and proposed those changes proactively, the issue would not have surfaced at all.

Best practices are not just applicable to SQL Server; they also need to be applied at the Operating System, disk/SAN, and network levels. So we can divide the best-practice or health checks into the categories below.

Operating System/SAN/Network Best Practices – SQL Server is just another Operating System process and depends heavily on the OS functioning and being managed well. So we need to verify and enable any settings which can help SQL Server performance or avoid issues.

Below are some of the Operating System checks that can be performed on a server with SQL Server installed.

– Make sure the latest Operating System patches and security updates are installed on the system.

– Add the SQL Server service account to the “Lock Pages in Memory” local security policy, which reduces the chances of the Operating System paging out or trimming the SQL Server memory working set. On a VMware server this is even more important, so that memory is not taken back from SQL Server by the VMware balloon driver.

– Configure the paging file to 1.5 times the physical memory on servers with a small amount of RAM, as these servers will use the paging file at some point (though the better approach is to add memory to the server). On servers with a large amount of RAM, we do not need a huge paging file; instead set the paging file to the same size as physical memory, so that a kernel dump can be captured in case of a server crash.

– Starting with Windows Server 2008 R2, a power plan can be configured. There are normally three power plans available: Balanced (the default), High Performance, and Power Saver. For servers running SQL Server, the High Performance power plan is recommended. This change may look simple, but it can show significant performance gains. This option needs to be enabled from Control Panel and also at the BIOS level for it to take full effect. On VMware systems, it has to be implemented on the ESX host server as well.

– Add the SQL Server service account to the “Perform volume maintenance tasks” local security policy, which enables Database Instant File Initialization. Without this privilege, SQL Server zeroes out a data file before using it, which can affect performance on highly transactional systems. Log files cannot benefit from this, as it only works for data files. This helps greatly on servers where auto-growth is enabled for databases and auto-growth occurs during business hours while the system is under load.

– Review the installed software, applications, and services on the server and uninstall or disable those which are not required, as they consume resources too.

– One important recommendation from Microsoft is not to install anti-virus software on a dedicated SQL Server box; if it is installed, make sure that SQL Server executables, DLLs, database files (.mdf, .ndf, .ldf), FILESTREAM files, and Full-Text related files are excluded from the anti-virus scan. This is very important, as anti-virus interference can cause SQL Server to crash and generate Access Violation memory dumps, and can also slow down performance.

– It is recommended to format the disk drives holding SQL Server database files with a 64 KB allocation unit size. If database files are already on disks with a smaller allocation unit size, plan to move them onto new disk drives formatted with 64 KB. This greatly helps I/O performance.

– It is recommended to use RAID disks for SQL Server, which improves performance and also provides redundancy to recover data in case of disk corruption. It would be ideal to use RAID 10 for all data and log files, but that can be very expensive; alternatively, you can use RAID 5 for data files and RAID 1 for transaction log files.

– There can be other settings at the OS, SAN, or network level, so it is best to work with the system, SAN, and network administrators and discuss settings such as HBA drivers, HBA queue depth, VMware settings, and network settings, to see if they are configured optimally and whether any improvements can be made in those areas.

SQL Server Instance Best Practices – As DBAs, we need to make sure that all SQL Server configurations are tuned for the specific environment. There is no single set of SQL Server configuration recommendations applicable to all servers, so we need to monitor, test, and implement the configurations that make that particular server perform well.

Below are some of the configuration settings that may need to be adjusted for better performance and stability.

– Make sure SQL Server has the latest Service Pack (SP), Cumulative Updates (CU), and security updates installed.
– Make sure SQL Server instances hosting critical databases run on dedicated servers, so that there is no impact from other applications running on the system.

– SQL Server default instances listen on port 1433, while named instances use a random dynamic port by default. It is good to change named instances to use a static port so that there are no problems during startup. Also, make sure to reserve this static port so that no other application uses it, as a port conflict can cause SQL Server to fail during startup.

– Different network protocols may be enabled on a server. Make sure only the required protocols, such as Shared Memory and TCP/IP, are enabled; enable Named Pipes only if it is used by applications.

– Configure SQL Server Max Server Memory; this is considered one of the most important settings. By default, this value is set to a very large number, meaning the SQL Server instance is allowed to use all of the available memory on the server, which can lead to a server hang or serious memory pressure on the system. Monitor the memory usage on the server and set an appropriate Max Server Memory, leaving enough memory available for the OS and other processes running on the server. Additional care is needed when multiple SQL Server instances are installed on the same server: set Max Server Memory for each instance such that the sum across all instances is less than the total physical memory, with enough memory still left for the OS and other processes.
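
As a sketch, capping an instance at 26 GB on a hypothetical 32 GB server would look like this; the right value for your server must come from monitoring your own workload:

```sql
-- Max Server Memory is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 26624 MB = 26 GB, leaving ~6 GB for the OS and other processes
-- (illustrative figure, not a universal recommendation)
EXEC sp_configure 'max server memory (MB)', 26624;
RECONFIGURE;
```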

– Affinity Mask is mostly best left unchanged, but can be changed on servers with multiple SQL Server instances after careful evaluation and testing. Improper changes to this setting can cause serious performance problems.

– Max Degree of Parallelism (MaxDOP) determines the level of parallelism, controlling the number of processors that can be used to run a single query. By default this is set to zero, meaning a SQL query which goes for parallel execution can use all the available processors.

Microsoft's general recommendation for this setting on OLTP systems is:

• For servers that use more than eight processors, set MAXDOP=8.
• For servers that have eight or fewer processors, set MAXDOP=0 to N, where N equals the number of processors on the server.
• For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs assigned to each NUMA node.
• For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.

Example: On a server with 2 processor sockets, each having 4 cores and 8 logical processors, and NUMA enabled, set MaxDOP to 4.
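
Applying the value from the example above is a two-step sp_configure change:

```sql
-- MaxDOP is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Limit any single parallel query to 4 schedulers, matching the
-- 4 physical cores per NUMA node in the example above
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```

The change takes effect immediately, without a service restart.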

Also, it is important to note that some applications have their own recommendations. For example, SharePoint requires the MaxDOP setting to be 1: the product is designed so that any SQL statement or stored procedure expected to benefit from parallelism explicitly specifies the required MAXDOP level in the code itself, and the remaining queries that do not benefit from parallelism run serially per the instance-level MaxDOP setting. So for any third-party application, contact the vendor for MaxDOP recommendations.

– Make sure SQL Server Priority Boost is disabled; it does no good and can cause serious problems on the system if enabled. It was provided only for Microsoft CSS to troubleshoot specific problems, and it will be removed in a future version of SQL Server.

– Login auditing: by default only failed logins are audited, but someone may change this to audit both failed and successful logins. It is good to leave the default of auditing only failed logins; if there is a requirement to audit successful logins, use another method such as the Audit feature, a SQL trace, or Extended Events.

– C2 auditing is better left disabled.

– Create a job to periodically recycle the SQL Server error logs (for example, monthly) to avoid very large error log files. You may also consider creating an automated process to archive old recycled error log files for future reference.

– Change the default data and log file paths, so that any new databases will have their data and log files created on separate disk drives at appropriate RAID levels, which significantly improves performance.

– Backup compression, introduced in SQL Server 2008, is a good option to enable at the instance level; it saves a lot of disk space and network bandwidth when backups are stored on a backup server or tape.
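
Both of the last two items are one-liners. The error log recycle is done by scheduling sp_cycle_errorlog in an Agent job, and the compression default is an sp_configure option:

```sql
-- Close the current error log and start a new one; schedule this
-- in a monthly SQL Server Agent job
EXEC sp_cycle_errorlog;

-- Make compression the default for all backups on this instance;
-- an individual BACKUP statement can still override it
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```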

SQL Server Database Best Practices – Database-level settings are very important and provide many benefits related to performance, security, and stability, but they are mostly ignored, as these more in-depth settings are often overlooked by DBAs.

Below are some of the database-level settings which provide a lot of value.

– Each database by default has two files, one data file and one transaction log file; additional data and log files can be created and placed on other disks to spread the I/O load, which provides significant performance benefits. It is very important that the data and transaction log files are placed on different disk drives, each at an appropriate RAID level, as placing them on the same drive causes a lot of performance problems.

– The system databases are created during SQL Server installation and are most often placed on the C:\ drive or other drives with little disk space. Tempdb is also a system database and can grow large depending on the activity on the system, so the system database files need to be placed on different drives. Placing tempdb on a dedicated disk provides significant performance gains on systems where tempdb is heavily utilized. It is also good practice to create multiple tempdb data files, which helps reduce allocation latch contention. There are different theories about how many tempdb files to create; in my opinion, a good starting point is one tempdb file per two logical processors, up to a maximum of 12 files. In special cases, more than 12 tempdb files can be created after thorough testing.
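
A sketch of adding equally sized tempdb data files on a dedicated drive follows; the drive letter, file names, and sizes are illustrative and should match your existing tempdb file sizes:

```sql
-- Add two more tempdb data files, same size and growth as the
-- existing file, on a dedicated drive (T:\ is a placeholder path)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```

Keeping all tempdb files the same size matters, because SQL Server's proportional-fill algorithm only spreads allocations evenly across equally sized files.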

– Set auto-growth in terms of a fixed size. Each data and log file is created with an initial size and can be configured to auto-grow when the initial size is used up and additional space is required. It is recommended to set a good initial size based on capacity planning and then enable auto-growth with a fixed increment such as 256 MB or 512 MB, instead of growth by percentage, which can cause problems when database files have to grow during business hours and lead to timeouts and performance problems.
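
Changing an existing database from percentage growth to a fixed increment is a MODIFY FILE operation; the database and logical file names below are placeholders:

```sql
-- Switch data and log files to a fixed 512 MB growth increment
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 512MB);
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, FILEGROWTH = 512MB);
```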

– For very large databases, it is good practice to create additional filegroups and store different types of data, based on their I/O patterns, in different filegroups. This can significantly improve performance. You can create filegroups such as an index filegroup or a BLOB filegroup and place their physical files on different disks.

– There are a few database options which are often forgotten or unknown to many DBAs but provide a lot of benefit for the performance, security, and stability of SQL Server databases. Below are some of those options.

Database Recovery Model – The recovery model determines how much data the transaction log file needs to keep active. It controls whether point-in-time recovery of the data is possible, and to what extent. Set the recovery model appropriately based on the Recovery Point Objective (RPO) SLA.

Auto Close – This should be disabled as this can cause performance problems.

Auto Shrink – This option should be disabled; shrinking a database causes fragmentation and leads to notable performance degradation.

Auto Update Statistics – This option should be kept enabled. Statistics are used by SQL Server during compilation and execution of a SQL query, and accurate statistics help build optimal query plans, resulting in improved and stable performance. SQL Server determines how often the statistics need to be auto-updated.

Page Verify – This option should be set to CHECKSUM, which helps detect database corruption. When a data page is written from memory back to disk, SQL Server calculates a checksum, adds it to the page header, and then stores the page on disk. Later, when the page is read back into memory, SQL Server recalculates the checksum and compares it with the checksum stored in the page header; if it matches, the page is read into memory and returned to the user; if it does not match, SQL Server reports that the database is corrupt.
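
All of the options above can be set with ALTER DATABASE; the database name here is a placeholder:

```sql
-- Apply the recommended database options in one batch
ALTER DATABASE SalesDB SET RECOVERY FULL;             -- per RPO SLA
ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;
```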

– We need to make sure the number of Virtual Log Files (VLFs) in each database transaction log is kept to a minimum, as a high VLF count can cause database recovery to take a long time, during which users cannot use the database properly. The VLF count can be controlled by setting a proper initial size for the log file after capacity planning, and by setting auto-growth to a fixed size of 512 MB or higher (but less than 2 GB), so that the number of VLFs stays small.
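
To check the current VLF count, the (undocumented but widely used) DBCC LOGINFO command returns one row per VLF in the current database's log, so the number of rows returned is the VLF count:

```sql
-- Run in the database you want to inspect; count the rows returned
DBCC LOGINFO;
```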

– From a performance perspective, it is important to have regular smart index maintenance on the server for all databases, such as rebuilding and reorganizing indexes based on their fragmentation percentage.

– Updating statistics is another important task which has proven, most of the time, to significantly improve performance. Schedule a job or use a maintenance plan to update statistics with full scan regularly, such as once a week, or more often if many DML operations are performed on the databases.

– Another important task to perform regularly is an integrity check on all databases, which basically means running DBCC CHECKDB on all user and system databases to detect any corruption. Integrity checks can normally be scheduled to run once a week.
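
A typical scheduled integrity check looks like the following (database name is illustrative); suppressing informational messages keeps the job output readable while still surfacing every error:

```sql
-- Weekly integrity check: report all corruption errors, no info chatter
DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```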

– Backups are another important regular task, as we do not know when disaster may strike, so we should be in a position to recover the data to the maximum extent possible based on the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) SLAs. It is also important to periodically test restoring these backups on a test server, to make sure they restore successfully and to get estimates of how long it would take to recover the data after a disaster.

There can be other best practices not covered here, so it is important to keep ourselves updated and add new findings to our best-practice or health-check list.


 

Migration of SQL Server Instance from One Server to Another Server with Minimum Downtime

As newer versions of SQL Server are released, we plan and implement upgrades to higher versions, and during this process we may also move to new, more powerful servers with better hardware and the latest Operating System. Also, as part of a data-center relocation, we may migrate a SQL Server instance from one server to another in a new data center. The migration process involves many steps, which need to be planned and implemented so that, at the end of the migration, the SQL Server instance runs without any problems.

This article covers a scenario where we want to migrate a SQL Server instance from one server to another. It applies both when the servers have the same version of SQL Server installed and when the new server has a higher version. The goal of the migration is minimal downtime when we switch the applications from the old server to the new one.

Below are the high level steps involved in migration.

– First, perform these steps on a test server, so that we can find the issues that can arise during the migration beforehand and also test whether the applications work without any problems after migration.

– If migrating to a higher version of SQL Server, document the issues that can arise with the migration by running Upgrade Advisor, and prepare documentation with remediation steps to fix those issues as part of the migration. If migrating to a new server with the same version of SQL Server, Upgrade Advisor is not required.

– Install the required version of SQL Server on the new server.
– Apply the latest Service Pack or Cumulative Updates.
– Script the logins from the existing server and create them on the new server.
– Script all the jobs on the existing server and create them on the new server, keeping them disabled. Create maintenance plans on the new server similar to those on the existing server.
– Configure Database Mail if it is used.
– Create any linked servers if required.
– Configure SQL Server settings such as Max Server Memory, MaxDOP, and backup compression on the new server.
– Configure OS and network settings on the new server, such as Lock Pages in Memory and Perform Volume Maintenance Tasks.
– Back up all user databases on the existing server, copy the backups to the new server, and restore them. Document the time taken to perform the backups, copy them to the new server, and restore the databases.
– Fix orphaned users for all user databases.
– Enable the backup and maintenance jobs (rebuild index, update statistics, etc.) on the new server.
– Change the compatibility level of the user databases to the latest version (if migrating to a higher version).
– Perform application testing to make sure the application works without any problems and performance is acceptable.
– Once everything looks fine and we have approval to move ahead, plan the Go-Live of the new server.
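
Two of the steps above, fixing orphaned users and raising the compatibility level, can be sketched as follows; the database and user names are placeholders:

```sql
USE SalesDB;
-- List database users whose SIDs do not match any server login
EXEC sp_change_users_login 'Report';
-- Re-map an orphaned user to the login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'AppUser';
-- Raise the compatibility level (110 = SQL Server 2012),
-- only when migrating to a higher version
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 110;
```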

Go-Live Planning
– Our goal here is minimum downtime during the Go-Live process, so we need to set up either log shipping or database mirroring between the existing and new servers.

– Configure log shipping from the existing server to the new server for all user databases and make sure the backup, copy, and restore jobs are running without any issues.
– Before Go-Live, make sure the logins and jobs are the same on both the existing and new servers.
Start of Downtime
– Disable all jobs on the existing server.
– Run the log shipping backup jobs on the existing server; once they complete successfully, disable them.
– Run the log shipping copy jobs on the new server; once they complete successfully, disable them.
– Run the log shipping restore jobs on the new server; once they complete successfully, disable them.
– Remove the log shipping configuration between the existing and new servers.
– Bring all user databases online on the new server by running RESTORE DATABASE <databasename> WITH RECOVERY.
– Enable all required SQL Server Agent jobs on the new server and disable the same jobs on the old server.
– Fix any orphaned users.
– Change the compatibility level of all user databases to the latest version (if migrating to a higher version).
– Point the applications to the new server.
End of Downtime
– Test the applications to make sure they work as expected.
– Later, run index rebuilds and update statistics on all user databases.
– If there are any issues, try to fix them; if that is taking too long and a rollback is decided on, refer to the rollback plan below.
– If no rollback is required, take the SQL Server services offline on the old server and decommission it.
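
The key downtime step, bringing a log-shipped database online after its final log restore, is a single statement per database (the database name below is illustrative):

```sql
-- The database is in RESTORING/standby state from log shipping;
-- WITH RECOVERY rolls back uncommitted transactions and brings it online
RESTORE DATABASE SalesDB WITH RECOVERY;
```

Once recovered, the database can no longer accept further log restores, which is why this is done only after the last log backup from the old server has been applied.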

RollBack Plan
– Verify that all user databases are online on the old server.
– Point the applications back to the old server.
– Enable all required jobs on the old server and disable the same jobs on the new server.
– Test the applications.

You can also use Database Mirroring instead of Logshipping; just replace the logshipping-related steps with the equivalent database mirroring steps.

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Planning and Implementing Server Maintenance with the Least Amount of Downtime Using SQL Server 2012 AlwaysON Availability Groups

We received a request to perform maintenance on servers hosting Primary and Secondary AlwaysON Availability Group databases. When the servers were initially set up, the hardware was not sized for best performance, so it was decided to replace them with brand-new servers with better hardware. Below is the environment setup.

Servers Server01 and Server02 run Windows Server 2012, each with one default standalone instance of SQL Server 2012 Enterprise Edition. An AlwaysON Availability Group with the listener name “PROD_LSTNR” has been set up with Server01 as the primary replica and Server02 as the secondary replica; this AG contains a couple of user databases. Another AlwaysON Availability Group with the listener name “PROD_LSTNR_RPT” has been set up with Server02 as the primary replica and Server01 as the secondary replica; this AG contains the ReportServer and ReportServerTempDB databases.

Now we had to replace both servers with brand-new servers with better hardware. This needed to be accomplished with minimum downtime for end users and the least amount of manual work by the DBAs or SysAdmins.

Below are the High Level steps which we identified and later successfully implemented with minimum downtime and least amount of manual work.

Prepare for Maintenance of AlwaysON Availability Group Secondary Replica
– Let the AlwaysON Availability Group “PROD_LSTNR” continue running on the primary replica Server01.
– Make a list of the Report Server subscription jobs which are currently enabled on Server02 and save it for future reference. If there are any other user-created jobs related to Report Server, script them out, create them on Server01 and keep them disabled for now.
– Back up the ReportServer and ReportServerTempDB databases and the system databases on the secondary replica Server02.
– Back up the Report Server encryption keys on Server02.
– Back up the Report Server encryption keys on Server01.
– Synchronize all logins and resolve any orphan users on the primary server Server01.
– Fail over the AlwaysON Availability Group “PROD_LSTNR_RPT” to the replica Server01. This causes a very short downtime for Reporting Services.
– Restore the encryption keys on Server01 from the backup taken on Server02.
– Remove Server02 from the Scale-Out Deployment tab using Reporting Services Configuration Manager on Server01.
– Stop Reporting Services on the new secondary server Server02.
– Restart the SQL Server Agent on Server01. This will create or enable the Report Server subscription jobs on Server01 automatically. Enable any user-created Report Server related jobs on Server01.
– Test the reports to make sure they are working. If you have configured report data sources to use read-only AlwaysON secondary databases, make sure the Availability Group settings allow the primary replica to accept both read/write connections.
– Turn off the SQL Server services on the secondary replica Server02.
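The failover step above can be issued from T-SQL as well as from SSMS. A sketch, assuming the target secondary replica uses synchronous commit and is in the SYNCHRONIZED state (a requirement for a planned manual failover without data loss); the AG name is the one from this post:

```sql
-- Run this connected to the secondary replica that should become
-- the new primary (Server01 in this scenario)
ALTER AVAILABILITY GROUP [PROD_LSTNR_RPT] FAILOVER;
```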

Steps for System Administrators:
– This is a critical step to be done by the System Administrators: take down the secondary replica Server02 and copy a consistent server snapshot of it onto the new server with better hardware.
– Rename Server02 to Server02_Old, rename the new server to Server02 and swap the IP addresses the same way.
– Add the new secondary server Server02, with the more powerful hardware, back to the network and domain.
– You should now see the AlwaysON Availability Groups start synchronizing; after some time they will become fully synchronized.
– Now that the secondary replica maintenance is complete, we next need to prepare the primary server for the same maintenance, replacing it with a new server with more powerful hardware.
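The synchronization check above can be confirmed with a standard DMV query rather than just the AlwaysON dashboard. A sketch; run it on the primary replica:

```sql
-- Check the synchronization state and health of each AG database
-- hosted locally on this replica
SELECT DB_NAME(drs.database_id)       AS database_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
```

For a synchronous-commit secondary, you want to see SYNCHRONIZED and HEALTHY before proceeding with any failover.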

Prepare for Maintenance of AlwaysON Availability Group Primary Replica
– Back up all user and system databases on the primary replica Server01.
– Back up the encryption keys on the primary and secondary servers Server01 and Server02.
– Make a list of all application or user-created jobs which are enabled on the primary replica Server01, then disable those jobs on Server01.
– Fail over the AlwaysON Availability Group “PROD_LSTNR” to the secondary server Server02, which now becomes the new primary server.
– Enable all the application or user-created jobs noted earlier.
– Test the application.
– Fail over the AlwaysON Availability Group “PROD_LSTNR_RPT” to the secondary server Server02, which now becomes the new primary server.
– Stop Reporting Services on the new secondary server Server01.
– Restart the SQL Server Agent service on the new primary server Server02. This will create or enable the Report Server subscription jobs on the new primary replica Server02.
– Restore the encryption keys on Server02 using the backup taken from Server01.
– From Reporting Services Configuration Manager, remove Server01 from the Scale-Out Deployment.
– Test the reports to make sure they are working. If you have configured report data sources to use read-only AlwaysON secondary databases, make sure the Availability Group settings allow the primary replica to accept both read/write connections.
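The "make a list of jobs and disable them" steps above can be scripted with the msdb system procedures instead of clicking through SSMS. A sketch; the job name below is a placeholder for your own jobs:

```sql
-- List all enabled SQL Server Agent jobs, for the "make a list" step
SELECT name
FROM msdb.dbo.sysjobs
WHERE enabled = 1
ORDER BY name;

-- Disable a job on the old primary
-- (re-enable later on the new primary with @enabled = 1)
EXEC msdb.dbo.sp_update_job @job_name = N'Nightly_ETL_Load', @enabled = 0;
```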

Steps for System Administrators:
– This is a critical step to be done by the System Administrators: take down the new secondary replica Server01 and copy a consistent server snapshot of it onto the new server with better hardware.
– Rename Server01 to Server01_Old, rename the new server to Server01 and swap the IP addresses the same way.
– Add the new secondary server Server01, with the more powerful hardware, back to the network and domain.
– You should now see the AlwaysON Availability Groups start synchronizing; after some time they will become fully synchronized.

– Now that the new secondary replica Server01 maintenance is complete, we can fail the AlwaysON Availability Group “PROD_LSTNR” back over to Server01, where it was running as primary before, enable all the user-created jobs on Server01 and disable them on Server02. Let the AlwaysON Availability Group “PROD_LSTNR_RPT” continue running on Server02 and use the read-only copy to offload reporting from the primary server.
– Test the application.

We have successfully followed these steps in our environment, but you should test this solution thoroughly on your lab or test environment before proceeding with the production changes. This post is provided “AS IS” with no warranties or guarantees.

Hope this was helpful.

Thanks,
SQLServerF1 Team


Configuring SQL Server 2012 Logshipping on AlwaysON Availability Group Databases with Successful Failover of AG and Logshipping

Configuring SQL Server Logshipping on databases which are part of AlwaysON Availability Groups (AG) is supported in SQL Server 2012 and higher. We can fail over the AlwaysON Availability Group between the primary and secondary servers and also move or enable logshipping from the new primary server to the DR server. Below is the environment setup.

Three servers, Server01, Server02 and Server03, run Windows Server 2008 R2. Each server has one standalone default instance of SQL Server 2012. An AlwaysON Availability Group (AG) with the listener “Prod_LSTNR” has been set up between Server01 and Server02.

Now we can set up logshipping from Server01 to Server03 just the same way we would set up logshipping between the databases of any two servers, as described below.

How to Setup Logshipping between two servers?
– Connect to the SQL Server instance from SSMS using the instance name Server01.
– On Server01, right-click the database you want to use as the primary database in the log shipping configuration, go to its Properties, and open the last tab, “Transaction Log Shipping”.
– Select the “Enable this as a primary database in a log shipping configuration” check box, which enables the logshipping configuration options for the backup, copy and restore settings.
– Under “Transaction log backups”, click on Backup Settings.
– In the “Network path to the backup folder” box, type the network path of the backup share where the transaction log backups will be placed.
– If this backup share is located on the primary server itself, type the local path to the backup folder in the “If the backup folder is located on the primary server” text box. If the backup folder is not on the primary server, you can leave this box empty.
– Make sure that the SQL Server service account on the primary server has full permissions on the backup share.
– Configure the “Delete files older than” and “Alert if no backup occurs within” parameters.
– You can choose to customize the schedules for the log backups and alerts.
– Starting with SQL Server 2008, backup compression is supported, so you may choose to enable it or leave it set to the instance-wide backup compression configuration.
– Now, under Secondary server instances and databases, click Add.
– Click Connect, then connect to the SQL Server instance Server03 that we want to use as the secondary server for logshipping.
– In the Secondary Database box, either choose a database from the list or type the name of the database you want to create.
– On the “Initialize Secondary Database” tab, choose the option that you want to use to initialize the secondary database.
– Make sure that you choose the correct option above and prepare the database on the secondary server appropriately.
– Now go to the Copy Files tab and, in the “Destination folder for copied files” text box, enter the path of the folder to which the transaction log backups should be copied. This folder is usually located on the secondary server or on a common share. Make sure the SQL Server service account on the secondary server has full permissions on this path.
– You may optionally change the copy and alert thresholds as per your SLAs.
– Now move to the Restore tab and, under “Database state when restoring backups”, choose the No recovery mode or Standby mode option. If you choose the Standby mode option, also select whether you want to disconnect users from the secondary database while the restore operation is performed.
– If you want to delay the restore process on the secondary server, choose a delay time under “Delay restoring backups at least”.
– You may optionally alter the restore and alert thresholds.
– If you want, you may also add a third server as a monitor server instance to monitor this log shipping configuration. You need to decide up front whether you want a monitor server, because if you want to add it later, you have to remove the logshipping configuration entirely and reconfigure it with the monitor server.
– If you choose to configure a monitor server as well, click Connect and connect to the SQL Server instance to use as the logshipping monitor server.
– Under Monitor connections, choose the connection method to be used by the backup, copy, and restore jobs to connect to the monitor server. Under History retention, choose the length of time you want to retain the log shipping history.
– On the Database Properties dialog box, on the Transaction Log Shipping tab, click OK to begin the configuration process.
– Verify that the logshipping backup job runs successfully on the primary server Server01 and creates the backup files on the backup share we provided.
– Verify that the logshipping copy job runs successfully on the secondary server Server03 and copies the backup files from the backup share to the path specified in the copy settings for the secondary server.
– Verify that the logshipping restore job runs successfully on the secondary server Server03 and restores the backup files.
– Check the logshipping report in SSMS on Server01 and Server03 by right-clicking the SQL Server instance -> Reports -> Standard Reports -> Transaction Log Shipping Status.
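Besides the SSMS report in the last step, the same health information can be queried directly from the msdb logshipping monitor tables. A sketch; run the first query on Server01 and the second on Server03:

```sql
-- On the primary: last log backup file and time per log-shipped database
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;

-- On the secondary: last copied and last restored files and times
SELECT secondary_database, last_copied_date, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```

If the last backup, copy or restore timestamps stop advancing, the corresponding job needs attention.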

Now that we have successfully set up logshipping from Server01 to Server03, the next step is to fail over the AlwaysON Availability Group “PROD_LSTNR” from Server01 to Server02, after which we also want to move logshipping so it runs from Server02 to Server03 and disable logshipping from Server01 to Server03.

Disable the logshipping from Server01 to Server03 by following the steps below:
– Connect from SSMS to the SQL Server instances Server01 and Server03.
– Disable the logshipping backup job on Server01.
– Disable the logshipping copy job on Server03 related to Server01 as primary.
– Disable the logshipping restore job on Server03 related to Server01 as primary.
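The disable steps above can be scripted with sp_update_job. The job names below follow the default naming convention SQL Server uses when it creates the logshipping jobs (LSBackup_/LSCopy_/LSRestore_ plus the primary server and database names); the database name is a placeholder, so verify the exact job names in msdb.dbo.sysjobs on your servers:

```sql
-- On Server01: disable the logshipping backup job for the primary database
EXEC msdb.dbo.sp_update_job @job_name = N'LSBackup_SalesDB', @enabled = 0;

-- On Server03: disable the copy and restore jobs tied to Server01 as primary
EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_Server01_SalesDB',    @enabled = 0;
EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_Server01_SalesDB', @enabled = 0;
```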

Perform the AlwaysON Availability Group “PROD_LSTNR” failover from Server01 to Server02. Verify that the databases are synchronized and there are no AlwaysON-related errors.

Now configure logshipping from Server02 to Server03:
– Connect from SSMS using the SQL Server instance name Server02.
– Configure logshipping from Server02 to Server03 using the steps mentioned above under “How to Setup Logshipping between two servers?”.
– Make sure logshipping is working by checking the new backup job on Server02 and the new copy and restore jobs on Server03 which are related to Server02 as primary.

From now on, whenever you fail over the AlwaysON Availability Group from Server01 to Server02 or vice versa, you can at the same time switch logshipping so that it runs from the new primary AlwaysON AG database to Server03, by disabling the logshipping jobs related to the AlwaysON AG secondary server.

Hope this was helpful.

Thanks,
SQLServerF1 Team

 