Steps for a Side-by-Side Migration or Upgrade Plan for SQL Server 2005


SQL Server 2000 is a legacy version that is no longer supported by Microsoft. Most organizations migrated to later versions of SQL Server long ago; however, a few servers or applications still run SQL Server 2000 due to compatibility issues with the newer versions. There may be significant cost involved in upgrading legacy applications to work with the latest versions of SQL Server, but it is high time to either move to another application or upgrade the existing application to work with the latest SQL Server versions. There are two different methods available to migrate or upgrade a SQL Server 2000 instance to SQL Server 2005: the In-Place upgrade and the Side-by-Side upgrade. It is generally better to prefer the Side-by-Side upgrade, which is the safer method and also lets us move to new hardware instead of running the new SQL Server instance on the old hardware. Below are the important steps to keep in mind while upgrading from SQL Server 2000 to SQL Server 2005 using the side-by-side method.

– As a first step, it is important to understand which objects would be affected after upgrading to SQL Server 2005. We can get this information by running the SQL Server 2005 Upgrade Advisor against the SQL Server 2000 instance, which will generate a report listing any warnings or issues.
– As part of the upgrade process, stop all write activity to the SQL Server 2000 instance. This may involve disconnecting all users or forcing applications into read-only activity.
– Transfer the data from the legacy instance to the SQL Server 2005 instance. This can be done by backup/restore of the databases from SQL Server 2000 to SQL Server 2005 (see the sketch below this list). To reduce the amount of downtime, one can look into options like Log Shipping.
– Create supporting objects in the system databases, such as SQL Server Agent jobs, security settings (logins, server role permissions), instance-level configuration settings, database options, and DTS packages (legacy mode), on the new SQL Server 2005 instance.
– Create new maintenance plans to mimic the old maintenance plans on the server.
– Run validation scripts to verify the integrity of the new data (rebuild indexes, run CHECKDB, etc.).
– Test the new instance to verify that applications work without any problems.
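
As a minimal sketch of the transfer and validation steps (the database name, backup path, and logical file names below are placeholders, not from the original post), the backup/restore and integrity check could look like this:

-- On the SQL Server 2000 instance: take a full backup
BACKUP DATABASE [AppDB] TO DISK = N'C:\Backups\AppDB.bak'
GO
-- On the SQL Server 2005 instance: restore, relocating files to the new drives
RESTORE DATABASE [AppDB] FROM DISK = N'C:\Backups\AppDB.bak'
WITH MOVE N'AppDB_Data' TO N'D:\Data\AppDB.mdf',
     MOVE N'AppDB_Log'  TO N'E:\Log\AppDB_log.ldf'
GO
-- Validate the restored database
DBCC CHECKDB (N'AppDB') WITH NO_INFOMSGS
GO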

Also, there are some new features available in SQL Server 2005, like database mirroring, so check whether any of the new features are worth implementing for High Availability, Security, Stability, etc.

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Why Upgrade or Migrate to Higher Versions of SQL Server 2008 R2


Why upgrade or migrate to higher versions such as SQL Server 2008 R2, SQL Server 2012, or 2014?
SQL Server 2008 R2 and higher versions provide an enterprise data platform with great benefits, which include:

– Minimizing business disruptions by increasing availability and scalability while significantly lowering IT costs.
– Gaining deeper business insight through richer end-user analytics and reporting tools that result in a faster return on investment.
– Accelerating the development time of line-of-business applications by up to 40 percent with Visual Studio and Microsoft .NET integration, including 50 to 70 percent code reduction for most scenarios.
– Reducing data management complexity and easing manageability.

There are many new features available with each newer version of SQL Server, including some cool features like Multi-Server Management, Resource Governor, Master Data Services, and CDC in SQL Server 2008 R2; AlwaysON Availability Groups, enhanced failover clustering, and Extended Events from SQL Server 2012 onwards; and In-Memory OLTP from SQL Server 2014 onwards. Also, there have been huge improvements related to Business Intelligence, which include data warehouse support, SQL Server Reporting Services, SQL Server Analysis Services and Integration Services, PowerPivot and Self-Service BI, etc.

The upgrade or migration to higher versions of SQL Server has been made easier with the new setup and installation interface, which provides many options like repair, add cluster instance, and add node (to add additional nodes to a SQL Server failover clustering instance), etc. Also, from SQL Server 2008 R2 onwards, Service Packs and Cumulative Updates can be uninstalled too. The logging has also been improved and provides more details about failures of in-place upgrades or failures of Service Packs or Cumulative Updates. Also, the slipstreaming feature allows DBAs to merge the base RTM SQL Server media with a Service Pack and apply them together, which avoids known issues and keeps the installation completing smoothly, as long as all the prerequisites are fulfilled.

With each newer version release of SQL Server, older versions move closer to the end of their support life cycle, and new Service Packs or Cumulative Updates will no longer be released for them. When there is a major problem, it may turn out to be difficult to get support from Microsoft for older versions of SQL Server. Running older versions of SQL Server instances poses a great risk and threat to the security and availability of the data during a crisis, so it is always advisable to run the latest versions of SQL Server, which are stable and fully supported.
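
Before planning an upgrade, it helps to confirm exactly which version, patch level, and edition an instance is currently running; the query below is a quick sketch of that check:

-- Check the current version, patch level, and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- RTM, SP1, SP2, ...
       SERVERPROPERTY('Edition')        AS Edition
GO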

This is applicable to the below versions of SQL Server

SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Performance Considerations for ASPSTATE Database in SQL Server for DBAs


One of the databases which may appear on multiple SQL Server instances in multiple environments is the database named ASPSTATE, and more often than not it is very small in size but very busy with user activity. Most of the time, DBAs are not really concerned with this ASPSTATE database and are not much aware of its purpose or of any considerations a DBA should keep in mind for it. However, at times the performance and management of this database becomes critical for the application to function smoothly. Below are some of the considerations for ASPSTATE databases that DBAs should be aware of.

The ASPSTATE database is generally very busy with a lot of user activity involving multiple DML or SELECT operations. DML operations like INSERT, UPDATE, and DELETE are performed very frequently on this database, which causes a lot of write operations to the transaction log file, so it is better to keep the ASPSTATE database in the SIMPLE recovery model, which will simplify the management of the transaction log file.
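
A minimal sketch of that recovery model change (assuming the database is named ASPState, the utility's default):

-- Switch the session state database to the SIMPLE recovery model
ALTER DATABASE [ASPState] SET RECOVERY SIMPLE
GO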

By default, all the user data is stored in tempdb, so it is important to configure the ASPSTATE database in such a way that the user data is created and stored in the ASPSTATE database itself. As mentioned before, this database can have a lot of DML and SELECT operations, so it is advisable to make sure that the data and transaction log files are on different RAID disks, and it is good to monitor the disk response times where the data and log files reside. Any disk latency can cause significant slowness.
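
One way to keep an eye on that disk latency is the sys.dm_io_virtual_file_stats DMV; the query below is a sketch, again assuming the database name is ASPState:

-- Per-file average read/write latency for the session state database
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('ASPState'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
GO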

Generally, it is possible to have multiple applications use the same ASPSTATE database, but as a best practice this is not a good idea; instead, always prefer a dedicated database for each application, which makes monitoring and management of the ASPSTATE database efficient.

The database tables and indexes may show high fragmentation and blocking due to multiple requests accessing the tables. For fragmentation, performing index rebuilds and updating statistics once a day should be good, although it may not show much benefit, as the data keeps changing very frequently. Another idea is to make sure that auto-update statistics is turned off. Blocking is generally common against the ASPSTATE database, but mostly as very short blocks; however, if the blocking happens more frequently, then action needs to be taken based on what the head blocker is doing.
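
A sketch of those two maintenance settings (the ASPState database name and the dbo.ASPStateTempSessions table name are the utility's defaults; verify them on your instance first):

-- Turn off auto-update statistics for the session state database
ALTER DATABASE [ASPState] SET AUTO_UPDATE_STATISTICS OFF
GO
-- Nightly maintenance: rebuild indexes on the busiest session table
USE [ASPState]
GO
ALTER INDEX ALL ON dbo.ASPStateTempSessions REBUILD
GO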

Starting with SQL Server 2014, the In-Memory OLTP technology can be used to make the database access much faster. There are other optimizations too, related to deletes and blocking, but those are more for the developers to deal with than DBAs. If requested, as a DBA you could propose any missing index which may be beneficial, but keep in mind that this needs to be tested thoroughly before implementing, as it can cause a negative impact as well.

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Basics of ASPSTATE Database in SQL Server for DBAs


It is common for DBAs to notice databases named ASPSTATE on multiple SQL Server instances, and more often than not these are very small in size but very busy with user activity. Most of the time, DBAs are not really concerned with this ASPSTATE database and are not much aware of its purpose or of any considerations a DBA should keep in mind for it. Below are some basics about the purpose of this ASPSTATE database and some points for SQL Server Database Administrators to know about it, which may turn out helpful in the future.

What is ASP.Net Session State?
ASP.NET session state allows you to store and retrieve values for a user as the user navigates the ASP.NET pages in a Web application. HTTP is a stateless protocol, which means that a Web server treats each HTTP request for a page as an independent request; the server retains no knowledge of variable values that were used during previous requests. ASP.NET session state identifies requests from the same browser during a limited time window as a session, and provides a way to persist variable values for the duration of that session. By default, ASP.NET session state is enabled for all ASP.NET applications.

In order to configure an ASP.NET application for session state management, we need to configure the web.config file of the web application. There are different modes available to store the session state information, as mentioned below.

Off – In this mode, ASP.NET session state is turned off for the web application.
InProc – In this mode, sessions are kept in memory on the web server. This is the default behavior.
StateServer – In this mode, the session is stored in a separate process called the ASP.NET state service.
SQLServer – In this mode, the session state is stored in a SQL Server database, generally named ASPSTATE.
Custom – Developers can write their own custom mechanism, or use any other readily available session storage mechanism.

The ASPState database is created by developers using the aspnet_regsql.exe utility that comes with the .NET Framework. There are various parameters available for this utility to specify the instance name and other settings, like the authentication mode used to connect to the SQL instance, etc. The ASPState database stores the stored procedures and functions required for session state management, and by default all the tables which store the session state information are created in tempdb, which means that data will be lost after a restart of the SQL Server instance. To avoid losing session state information after a restart of the SQL Server instance, there are parameters for the aspnet_regsql.exe utility which will create the tables that hold the session state information in the ASPSTATE database itself.
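
To see where the session state tables actually ended up on a given instance, a quick check like the sketch below can help (the ASPState name and the ASPStateTemp% table-name pattern are the utility's defaults and may differ in your setup):

-- Session state tables persisted in the ASPState database itself
SELECT name FROM ASPState.sys.tables WHERE name LIKE 'ASPStateTemp%'
GO
-- Session state tables created in tempdb (lost on instance restart)
SELECT name FROM tempdb.sys.tables WHERE name LIKE 'ASPStateTemp%'
GO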

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


WSFC Cluster Quorum Vote Configuration Not Recommended for AG


When you create an AlwaysOn Availability Group by using the New Availability Group Wizard in Microsoft SQL Server 2012, or when you are performing a failover of the AlwaysON Availability Group (AG), you may receive a warning message as shown below:

The current WSFC cluster quorum vote configuration is not recommended for this availability group.

The above warning occurs due to various reasons, one of which is the Windows OS patch KB 2494036 missing on the cluster nodes that are part of the AlwaysON Availability Group. Verify and apply the mentioned patch and reboot the nodes, and the issue should be resolved. However, if the issue still persists even after this patch is applied on all the cluster nodes, then apply the latest Service Pack or Cumulative Update available for SQL Server 2012 or SQL Server 2014.

Starting with Windows Server 2008, failover clustering introduced node-based voting, where Windows Server Failover Clustering (WSFC) uses a majority of votes to establish a quorum for determining cluster membership. Votes are assigned to nodes in the cluster or to a witness, which is either a disk or a file share witness. We can use the Configure Cluster Quorum Wizard to configure the cluster's quorum model. When you configure a Node Majority, Node and Disk Majority, or Node and File Share Majority quorum model, all nodes in the cluster are each assigned one vote. However, there are limitations to the votes for AlwaysON Availability Groups which involve a SQL Server clustered instance. If a primary or a secondary replica is hosted by a Windows cluster, then this replica is configured for automatic failover without a vote. If this is by design, then you may ignore the warning, as it still allows the initial setup of the AlwaysON Availability Group and subsequent manual failovers, but the warning will appear each time you try to perform a manual failover.

Also, it is possible to configure the votes for each node, customized as per the business requirement. This functionality helps with multi-site clusters, where you may want one site to have more votes than the other sites, such as a disaster recovery data center. By default, all nodes in the cluster have one vote, but administrators can change the voting in the quorum model by configuring a particular node to have 0 votes.
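
From the SQL Server side, the current vote assignment of each cluster member can be checked with the sys.dm_hadr_cluster_members DMV (available from SQL Server 2012 onwards); the query below is a sketch:

-- Show the quorum votes currently assigned to each WSFC member
SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members
GO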

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Microsoft SQL Server Default Trace and its Benefits


The Default Trace was introduced starting with SQL Server 2005 and is a very useful feature in some situations. It was not a very popular or much talked about feature initially, but as the years progressed it came to light during various situations where DBAs and management were desperate to find information on who could have performed a certain critical operation against a SQL Server database.

The Default Trace is very similar to a server side trace, but it comes by default, meaning it is enabled with the installation of the SQL Server instance and from then on starts collecting some important pieces of information. As per BOL, “The default trace provides troubleshooting assistance to database administrators by ensuring that they have the log data necessary to diagnose problems the first time they occur.”

The default trace becomes very helpful in various troubleshooting cases because it collects various events which are worth looking into when some unexpected event occurs, like a particular object being dropped which should not have been, or cases where an auto-growth or shrink operation caused performance problems during business hours and we need to find who performed that task. It is interesting and important to know which events are captured by the default trace, and we can find this information easily by browsing to the log folder where all the SQL error logs are present; the default trace files are located in the same place. Open one of the default trace files, go to its properties, and we can find all the events captured by the default trace.

If you are looking for options to add additional events to be captured by the default trace, that is not possible, as it is not supported, so you will need to set up a separate server side trace or look for other alternatives. If you have implemented other alternatives and the default trace is not required any more, it can be disabled using sp_configure. The default trace is very lightweight, though, so you can just leave it running, as you never know when it may become important and be the only source of the information you are looking for.
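
A sketch of disabling it with sp_configure ('default trace enabled' is an advanced option, so 'show advanced options' has to be enabled first):

-- Disable the default trace (only if you truly have an alternative in place)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'default trace enabled', 0
RECONFIGURE
GO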

Also, from SQL Server Management Studio (SSMS) we can see various reports based on this data, like the Schema Change History report. We can open the default trace files using the SQL Server Profiler utility, or we can read the contents of the default trace using T-SQL queries as well, which is helpful when searching for a particular event or a particular date and time. SQL Server maintains 5 default trace files in the log folder and overwrites them once all 5 files are used. Each file can be a maximum of 20 MB, after which a new trace file is created; once there are 5 trace files, the oldest trace file is removed, thus ensuring there are 5 trace files at any given point in time. If the server has a lot of activity, then it is possible that the old data you are looking for has already been lost.
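
Reading the default trace with T-SQL could look like the sketch below, using sys.traces to locate the current file and fn_trace_gettable to read it:

-- Locate the default trace file and read its contents
DECLARE @path NVARCHAR(260)
SELECT @path = path FROM sys.traces WHERE is_default = 1

SELECT StartTime, HostName, ApplicationName, LoginName, ObjectName, TextData
FROM sys.fn_trace_gettable(@path, DEFAULT)
ORDER BY StartTime DESC
GO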

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Reducing Number of VLFs in SQL Server Database Transaction Log File


Every SQL Server database consists of two or more files, of which two types are mandatory: a data file and a transaction log file. The data file contains the actual data in a specific format which SQL Server can understand, and the transaction log file contains all the changes that are made to the data in the database using DDL (Create/Alter/Drop) or DML (Insert/Update/Delete) queries. The transaction log is used to guarantee the data integrity of the database and for data recovery. The data file and transaction log file architectures are different, and it is important as a DBA to understand the architecture of both. In this article we will discuss the VLFs in the transaction log file and how to reduce the number of VLFs.

The transaction log in a database maps over one or more physical files. Conceptually, the log file is a string of log records. Physically, the sequence of log records is stored efficiently in the set of physical files that implement the transaction log. The SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have no fixed size, and there is no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of the new file increment. The size or number of virtual log files cannot be configured or set by administrators.

At times, the number of Virtual Log Files (VLFs) grows to a very high number due to improper configuration of the transaction log file. SQL Server follows an algorithm which creates these virtual log files based on the initial size and the auto-growth that occurs. For example, if the initial size of the log file is set to any value less than 64 MB, then 4 VLFs are created by SQL Server; if the initial size is between 64 MB and 1 GB, then 8 VLFs get created; and if the initial size is greater than 1 GB, then 16 VLFs are created. The same formula applies to auto-growth events of the log file too. An auto-growth event occurs when the log file is full and all the space used is active; the physical log file size is then increased based on the auto-growth settings of the database.
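
To see how many VLFs a particular log file currently has, the undocumented but widely used DBCC LOGINFO command can help; it returns one row per VLF, so the row count is the VLF count (a sketch, using the same [DBName] placeholder as the example further below):

-- One row per VLF; the number of rows returned is the number of VLFs
DBCC LOGINFO (N'DBName')
GO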

If the initial size of the log file is set to a very low value, it will cause auto-growth, and if the auto-growth values are also set low, there will be frequent small increases to the log file, which can create a lot of VLFs. If there are too many VLFs in a database log file, it will cause performance issues and also affect certain operations which depend on the log file. Some problems which occur due to a large number of VLFs include long recovery of the database during startup, which keeps users from using the database, and slow performance with replication, database mirroring, AlwaysON, etc. So, as a DBA it is important to keep the number of VLFs small, thus avoiding unnecessary problems.

When you notice that the number of VLFs is very high, like more than 300, you can follow the below steps to reduce the number of VLFs.

– First, monitor for a few days or a month to see the maximum size the transaction log file of the particular database grows to when regular log backups are performed. That will likely be the size the log file may grow to in the future too.

– Next, shrink the transaction log file to the smallest size possible (better to perform this during off-business hours, as it can cause some performance problems). You may need to perform multiple log backups and run the shrink operation on the log file multiple times to reduce the log file size to the minimum, as shown in the sketch below.
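
A minimal sketch of that backup-then-shrink loop (the backup path is a placeholder; [DBName] and LogicalLogFileName follow the placeholders used in the example below):

-- Back up the log to free inactive VLFs, then shrink the physical file
BACKUP LOG [DBName] TO DISK = N'C:\Backups\DBName_log.trn'
GO
DBCC SHRINKFILE (N'LogicalLogFileName', 1)  -- target size in MB
GO
-- Repeat the two steps above until the log file is near its minimum size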

– Once the log file is very small, change the size of the log file to a higher value, like 2000 MB, 4000 MB, or 8000 MB, based on the estimated size the log file can grow to in the future, growing it in multiple steps. For example, if you estimate that the log file may grow up to 20 GB, then set the size of the log file to 4000 MB the first time, then to 8000 MB, then 12000 MB, then 16000 MB, and finally to 20000 MB.

Example:

ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', SIZE = 4000MB )
GO
ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', SIZE = 8000MB )
GO
ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', SIZE = 12000MB )
GO
ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', SIZE = 16000MB )
GO
ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', SIZE = 20000MB )

– Now set the auto-growth setting of the database log file to grow in increments of 1024 MB, as in the sketch below.
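
A sketch of that auto-growth change, using the same placeholders as the example above:

ALTER DATABASE [DBName] MODIFY FILE ( NAME = N'LogicalLogFileName', FILEGROWTH = 1024MB )
GO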

Now the number of VLFs in the database will be low and will not increase by too much in the future either.

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Introduction to Remote Database Administration (DBA) Services


Almost every organization produces data and has a need to store and retrieve it in an efficient manner. There are various database management products like SQL Server, Oracle, MySQL, DB2, Sybase, Informatica, Teradata, etc. available in the market, and every organization uses one or more of them. Most large and medium organizations have a dedicated DBA team, or at least one or two DBAs who manage the databases, and many organizations outsource their DBA operations to reputed companies which provide Database Administration and related services.

The companies which employ DBAs and support Database Administration and Management operations may have dedicated offices at different locations around the globe, and the DBAs visit the office daily, connect to the servers or databases, and provide day to day support. However, with recent developments in the hardware and software industry, access to servers and databases has become far easier and more flexible, which allows DBAs to connect to and manage servers or databases from anywhere with just an Internet connection. There are various security measures taken to safeguard the data and the servers.

With these recent developments, there is now an increase in remote DBA services, wherein a company employs DBAs across the world and allows them to work from their houses, connecting to the client environment through secure methods and performing day to day activities from home itself, without having to visit the client's office or the vendor's office. In the future this will only increase further, given the current success of remote DBA services. Remote DBA service companies specialize in multiple Database Management and related software products.

Management of database systems is not plain simple, as there are a lot of things involved; thus many companies outsource their database operations to remote DBA service companies so that they can concentrate on their core business instead of worrying about the operations and management of non-core work, which is an overhead. Also, many organizations may not be able to afford DBAs available 24×7, as it may get very costly; instead, they can avail themselves of reliable 24×7 DBA services while spending a lesser amount of money. Currently, remote DBA services are popularly used by many organizations for SQL Server, Oracle, Oracle E-Business Suite, MySQL, DB2, Sybase, etc.

Remote DBA services generally operate on an hourly price model, where a client buys a certain number of hours per month, and the DBA team installs a monitoring tool, or uses existing monitoring tools already installed, and works on the incidents raised by the monitoring tools. In addition to the regular monitoring, remote DBA service teams also provide many proactive tasks like health checks, performance reviews, security reviews, backup reviews, etc. The contracted hours can also be flexible; they can be increased if there are any new projects and reduced once a project completes.

If you are looking for low cost remote DBA services for less than 5 critical SQL servers during US non-business hours for SQL Server technology, please contact us at SQLServerF1@GMail.Com

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Displaying SQL Server Query Execution Plans and TIME and IO Statistics


DBAs mostly operate, manage, or administer a SQL Server instance using SQL Server Management Studio (SSMS). Any operation performed against a SQL Server instance through the SSMS GUI is internally translated into a set of T-SQL commands, which can be individual queries or stored procedures, functions, etc. Also, anyone can run T-SQL queries directly from SSMS, which get executed, the operation is performed, and any results are returned.

From a user point of view, we understand T-SQL commands, but the SQL Server database engine further translates these queries into query execution plans so that it can perform the requested operations in the best way possible. So, if there are complaints or requests that SQL queries are performing slowly, we can try running those queries from SSMS and use the various options available to understand what the query might be doing internally, with the help of SQL Server query execution plans and the execution IO and TIME statistics.

Below are some of the options provided by SQL Server to see various details about the query execution. These Transact-SQL SET statements are the options for displaying execution plan information, producing output in XML or text.

SET SHOWPLAN_XML ON – This causes SQL Server not to execute the Transact-SQL statements; instead, SQL Server returns execution plan information describing how the statements would be executed, in the form of an XML document.

SET SHOWPLAN_TEXT ON – This causes SQL Server to return the execution plan information for each query in text. The Transact-SQL statements or batches are not executed.

SET SHOWPLAN_ALL ON – This is similar to SET SHOWPLAN_TEXT, except that the output is more verbose than that of SHOWPLAN_TEXT.

SET STATISTICS XML ON – This returns execution information for each statement after this statement executes in addition to the regular result set the statement returns. The output is in an XML format. SET STATISTICS XML ON produces an XML output for each statement that executes. The difference between SET SHOWPLAN_XML ON and SET STATISTICS XML ON is that the second SET option executes the Transact-SQL statement or batch. SET STATISTICS XML ON output also includes information about the actual number of rows processed by various operators and the actual number of executes of the operators.

SET STATISTICS PROFILE ON – This returns the execution information for each statement after the statement executes in addition to the regular result set the statement returns. Both SET statement options provide output in text. The difference between SET SHOWPLAN_ALL ON and SET STATISTICS PROFILE ON is that the second SET option executes the Transact-SQL statement or batch. SET STATISTICS PROFILE ON output also includes information about the actual number of rows processed by various operators and the actual number of executes of the operators.

SET STATISTICS IO ON – This displays information about the amount of disk activity that is generated by Transact-SQL statements after the statements execute. This SET option produces text output.

SET STATISTICS TIME ON – Displays the number of milliseconds required to parse, compile, and execute each Transact-SQL statement after statements execute. This SET option produces text output.

Some of these options can also be turned on from the SSMS GUI, and a typical usage pattern is shown below.
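
A minimal usage sketch for the runtime statistics options (the query here is just a placeholder):

-- Measure IO and elapsed/CPU time for a single query
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT TOP (100) * FROM sys.objects   -- placeholder query
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO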

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.


Introduction to SQL Server Execution Plans


Any query run against a SQL Server instance will be internally compiled, and an execution plan will be generated, which gets executed to satisfy the query and generate the required output. When a query is submitted to the SQL Server instance, it determines different ways of completing the query and then chooses the best way of executing that particular query. To further improve performance, the generated execution plan is saved and is reused if the same query comes again, thus saving the time taken to generate an execution plan.

If a particular query is reported to be running slowly or taking a long time to complete, then execution plans are the best place to look to understand which operations the query is performing. The SQL Server database engine uses various inputs like statistics and indexes to create all possible execution plans to satisfy the query, and then chooses the best plan, which would be the fastest. SQL Server uses a cost-based method to determine which execution plan is better compared to the other plans possible for the same query. It estimates a certain cost for each operation, like performing logical or physical IO, memory, CPU, sort, scan, etc.

There are different types of execution plans which SQL Server can generate, which include Estimated and Actual execution plans. SQL Server has a pool of memory that is used to store both execution plans and data buffers. The percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically, depending on the state of the system. The part of the memory pool that is used to store execution plans is referred to as the procedure cache.

SQL Server execution plans have the following two main components: the query plan and the execution context.

Query Plan – The bulk of the execution plan is a re-entrant, read-only data structure used by any number of users. This is referred to as the query plan. No user context is stored in the query plan. There are never more than one or two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.

Execution Context – Each user that is currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is referred to as the execution context. The execution context data structures are reused. If a user executes a query and one of the structures is not being used, it is reinitialized with the context for the new user.
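
The plans currently in the procedure cache can be inspected through DMVs; the sketch below lists the most frequently reused cached plans:

-- Most frequently reused plans in the procedure cache
SELECT TOP (10) cp.usecounts, cp.objtype, st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC
GO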

This is applicable to the below versions of SQL Server

SQL Server 2005
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014

Hope this was helpful.

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, SQL Server Frequently asked questions, SQL Server Trainings.
