Database snapshots are not accessible after an in-place upgrade

I worked on an issue where database snapshots were inaccessible after performing an in-place upgrade of a SQL Server 2005 instance (which had database snapshots) to SQL Server 2008 R2. The upgrade completed successfully and all the SQL Server databases were accessible without any issues; however, I got the following error when I tried to access the database snapshots.

************************************************************************************************************************************************

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
——————————
Cannot open database ‘Test_db_snapshot’ version 611. Upgrade the database to the latest version. (Microsoft SQL Server, Error: 946)

************************************************************************************************************************************************

Database Snapshots (http://msdn2.microsoft.com/en-us/library/ms175158.aspx) in SQL Server 2005 and later Enterprise Edition provide a read-only, point-in-time view of a database.

To understand this behavior, I installed SQL Server 2005 and created a database snapshot. I then ran the upgrade to SQL Server 2008 R2, which completed successfully, after which I got the same error when I tried to access the snapshot database.

As the issue occurs after upgrading a SQL Server 2005 instance to SQL Server 2008 R2, I planned to run the SQL Server 2008 R2 Upgrade Advisor to test whether it detects the issue.

I installed a new SQL Server 2005 instance and created a database snapshot. I then installed and ran the SQL Server 2008 R2 Upgrade Advisor, which generated an Upgrade Advisor report. In the report I noticed an error saying “Read-only databases cannot be upgraded”, and when I clicked “Show affected objects” under it, a pop-up listed “Test_db_snapshot”.

Screenshot of the error generated by SQL Server 2008 R2 upgrade advisor.

The SQL Server 2008 R2 upgrade needs to be able to modify existing objects on the instance while it is upgrading, but a database snapshot is a read-only, static view of its source database, so the upgrade cannot make any changes to the snapshot.
Before upgrading a SQL Server instance, it is recommended to run the Upgrade Advisor to find the objects that will be affected by the upgrade and that need to be fixed before running it.

The Upgrade Advisor report lists the issues and objects that will be affected by the upgrade; there we can find the “Read-only databases cannot be upgraded” warning, and clicking “Show affected objects” lists the database snapshots.
Delete the database snapshots before upgrading SQL Server 2005 to a higher version.
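
Before the upgrade, the existing snapshots can be identified and dropped with T-SQL. A minimal sketch, using the snapshot name from this example:

-- List database snapshots on the instance; snapshots have a
-- non-NULL source_database_id in sys.databases.
SELECT name AS SnapshotName,
       DB_NAME(source_database_id) AS SourceDatabase
FROM sys.databases
WHERE source_database_id IS NOT NULL;

-- Drop a snapshot before running the upgrade.
DROP DATABASE [Test_db_snapshot];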

Happy Reading,
SQLServerF1 Team

 

What is SQL Server Consolidation and the Benefits of it

What is SQL Server Consolidation?
Consolidation is the process of combining various units into larger, more efficient, and more stable units. Here we are talking about database (SQL Server) consolidation; databases are one of the important components of an IT environment. They are used by many business systems and across different departments, so it is easy to lose track of the number of databases that need to be maintained, because each group may simply create its own database for its own requirements. This leads to many servers, many database instances, and many databases, which makes databases one of the prime candidates for consolidation.

Benefits of SQL Server Consolidation
Reduce cost – Consolidation greatly reduces the cost of hardware, software, labor time, licenses, hosting, etc.

Reduce management overhead – It reduces the overhead of maintenance and management activities like server and SQL Server patching, taking care of security, and performing and maintaining backups.

Increase resource utilization – Some machines with higher configurations may be underutilized and almost idle, so it is logical to move instances/databases onto them. Consolidating onto fewer, newer machines reduces rack space, power, and cooling needs and uses the resources more efficiently.

Upgrade – In the IT industry, old hardware is periodically retired and replaced with newer hardware. Consolidation should come into the picture at that time to bring efficiency.

Compliance – During consolidation, documenting the process and reviewing security helps bring the systems into compliance. Using new software features also brings more uniformity to the environment.

Bring things under control – Many clients are unaware of where their data is stored, who uses it, and who manages it. Consolidation brings things under control with fewer machines and better documentation.

What are the levels of SQL Server Consolidation?
SQL Server consolidation can be done at various levels, as described below.

Database – At the database level, multiple databases used by different applications are moved to and hosted on a single SQL Server instance.

The disadvantage of this approach is managing security at the instance level, where users of one application should not be able to access the data of another application, and ensuring that all the applications use shared resources such as tempdb and the hardware and software resources efficiently.

Instance – At the instance level, databases used by different applications that live in separate instances on dedicated servers are moved to and hosted on a single server running multiple SQL Server instances.

The disadvantage of this approach is managing security at the OS and network level, and ensuring that no single instance consumes all the available hardware resources and causes problems for the other instances.
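
For example, one common way to keep instances from starving each other of memory is to cap each instance's buffer pool. A minimal sketch, where the 8 GB value is only an illustration:

-- Run on each instance; size the value per instance so the totals
-- fit within the physical memory of the consolidated server.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;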

Virtualization – At the virtualization level, multiple physical servers are retired and hosted as virtual machines on a single physical machine, sharing its hardware resources.

The disadvantage of this approach is managing the virtualization itself, such as ensuring that all the VMs are granted appropriate hardware resources.

All three levels can be combined to achieve the optimal SQL Server consolidation for your environment.

Happy Reading,
SQLServerF1 Team

 

Missing SQL Server Management Studio

I installed SQL Server 2012 Evaluation Edition on my laptop, which runs Windows 7 Professional. Due to some issues, I had to uninstall SQL Server for some time. A few days later, I reinstalled SQL Server 2012 Evaluation Edition on the same laptop using the same setup media I had used before. The installation completed successfully and I could see the SQL Server services in the Services console, but the problem I was facing was that I could not see SQL Server Management Studio.

– Looked at Start -> All Programs -> SQL Server 2012, but could not see SQL Server Management Studio.

SQL Server Management Studio Missing

– Tried to install SQL Server Management Studio again using the setup, but the setup wizard showed that SSMS Basic and Advanced were already installed.

– Searched for SQL Server Management Studio on the file system, but could not find it.
– Verified the configuration state in the registry (Start -> Run -> regedit) under
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\110\ConfigurationState
and updated the “SQL_SSMS_Adv” value from 3 to 1.

– Started a repair of the SQL Server instance and shared features, which finished successfully, but still could not find SSMS.
– Ran the SQL Server 2012 Setup Discovery Report, which showed that Management Tools – Basic and Management Tools – Complete were present.
– Restarted my laptop.
– Started the installation of the shared features again, which completed successfully.

Now I was able to see SQL Server Management Studio and I could open and use it without any issues.

Happy Reading,
SQLServerF1 Team

 

Extensible Key Management (EKM) and Hardware Security Module (HSM) in SQL Server

Extensible Key Management (EKM):
SQL Server 2008 introduced Extensible Key Management (EKM) for managing keys outside of SQL Server. Traditionally, all symmetric and asymmetric keys used by SQL Server reside in the database itself; however, EKM allows key creation, storage, encryption, and decryption to be done outside the database using an HSM.

With the growing demand for regulatory compliance and concern for data privacy, organizations need to come up with strong security solutions. One of the approaches followed is encryption, but building a strong security solution using only the database's own encryption management tools is often impractical.

Hardware Security Module (HSM) – Hardware vendors provide products that address enterprise key management by using Hardware Security Modules (HSMs). HSM devices store encryption keys on hardware or software modules. This is a more secure solution because the encryption keys do not reside with the encrypted data.

A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. HSMs may possess controls that provide tamper evidence, such as logging and alerting, and tamper resistance, such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing.

An HSM should adhere to one or more recognized security and operational standards defined by various industry groups, such as the Federal Information Processing Standard (FIPS), Common Criteria, etc. An HSM deployment, and supporting operational practices, should also address the requirements of reputable business processes and security auditors to provide the highest degree of protection for the CA and its root keys.

A number of vendors (CREN, Safenet, Thales, etc.) offer HSMs for both key management and encryption acceleration. HSM devices use hardware interfaces, with a server process acting as an intermediary between an application and the HSM. Vendors also implement MSCAPI providers over their modules, which might be hardware or software. Vendors can also provide management software for the HSM, key configuration, and key access.

HSM implementations vary from vendor to vendor, and to use them with SQL Server requires a common interface. Although the MSCAPI provides this interface, it supports only a subset of the HSM features. It also has other limitations, such as the inability to natively persist symmetric keys, and a lack of session-oriented support.

Extensible Key Management – SQL Server Extensible Key Management enables third-party EKM/HSM vendors to register their modules in SQL Server. Once registered, SQL Server users can use the encryption keys stored on the EKM modules. This enables SQL Server to access the advanced encryption features these modules support, such as bulk encryption and decryption, and key management functions such as key aging and key rotation.

EKM Configuration – Extensible Key Management is an Enterprise Edition feature. By default, Extensible Key Management is off. To enable this feature in SQL Server, use the sp_configure command with the following options and values, as in the following example:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'EKM provider enabled', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO

To disable the feature, set the value to 0.
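
Once EKM is enabled, the vendor's module is registered as a cryptographic provider in SQL Server. A minimal sketch, where the provider name and DLL path are hypothetical and come from the EKM/HSM vendor:

-- Register the vendor-supplied EKM module (path is illustrative).
CREATE CRYPTOGRAPHIC PROVIDER EKM_Provider1
FROM FILE = 'C:\EKM\VendorEkmProvider.dll';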

SQL Server Extensible Key Management enables the encryption keys that protect the database files to be stored in an off-box device such as a smartcard, USB device, or EKM/HSM module. This also enables data protection from database administrators (except members of the sysadmin group). Data can be encrypted by using encryption keys that only the database user has access to on the external EKM/HSM module.

You can use Extensible Key Management with a username and password combination or with other methods defined by the EKM driver. An EKM module can support more than one type of authentication, but each provider exposes only one type of authentication to SQL Server; that is, if the module supports Basic and other authentication types, it exposes one or the other, but not both.

EKM Device-Specific Basic Authentication Using username/password – For those EKM modules that support Basic authentication using a username/password pair, SQL Server provides transparent authentication using credentials.

A credential can be created for an EKM provider and mapped to a login (both Windows and SQL Server accounts) to access an EKM module on a per-login basis. The Identity field of the credential contains the username; the Secret field contains the password used to connect to the EKM module.
If there is no login-mapped credential for the EKM provider, the credential mapped to the SQL Server service account is used.

A login can have multiple credentials mapped to it, as long as they are used for distinct EKM providers. There can be only one mapped credential per EKM provider per login. The same credential can be mapped to other logins.
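
A minimal sketch of this mapping, where the provider, credential, login, and secret names are all hypothetical:

-- Credential holding the username/password used to authenticate
-- to the EKM module (Identity = username, Secret = password).
CREATE CREDENTIAL EKM_Cred1
WITH IDENTITY = 'EkmUser1', SECRET = 'EkmStr0ngP@ssw0rd'
FOR CRYPTOGRAPHIC PROVIDER EKM_Provider1;
GO
-- Map the credential to a login so that login authenticates to the
-- EKM module transparently.
ALTER LOGIN [DOMAIN\SqlUser1] ADD CREDENTIAL EKM_Cred1;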

Other Types of EKM Device-Specific Authentication – For EKM modules that use authentication other than Windows or username/password combinations, authentication must be performed independently of SQL Server.

SQL Server can use EKM keys to encrypt other keys in a database. You can create and use both symmetric and asymmetric keys on an EKM device. You can encrypt native (non-EKM) symmetric keys with EKM asymmetric keys.

The following example creates a database symmetric key and encrypts it using a key on an EKM module.

CREATE SYMMETRIC KEY Key1
WITH ALGORITHM = AES_256
ENCRYPTION BY ASYMMETRIC KEY EKM_AKey1;
GO
-- Open the database symmetric key for use
OPEN SYMMETRIC KEY Key1
DECRYPTION BY ASYMMETRIC KEY EKM_AKey1;

The main idea here is to get an HSM solution from a vendor, integrate it with SQL Server, and store the encryption keys on the HSM. The data can then be encrypted/decrypted with the keys (symmetric keys, asymmetric keys, certificates) stored in the HSM, and only authorized users can gain access to these keys based on the process defined by the HSM provider.

Happy Reading,
SQLServerF1 Team

 

Introduction to basics of Security and Encryption in SQL Server

What is Encryption?
Encryption is the process of obfuscating data by the use of a key or password. This can make the data useless without the corresponding decryption key or password. Encryption does not solve access control problems. However, it enhances security by limiting data loss even if access controls are bypassed. For example, if the database host computer is misconfigured and a hacker obtains sensitive data, that stolen information might be useless if it is encrypted.

In SQL Server, encryption can be used for connections, data, and stored procedures. Below we will look further at security, encryption, and the types of encryption.

Security:
Securing SQL Server can be viewed as a series of steps, involving four areas: the platform, authentication, objects (including data), and applications that access the system.

Platform and Network Security – The platform for SQL Server includes the physical hardware and networking systems connecting clients to the database servers, and the binary files that are used to process database requests. Best practices for physical security strictly limit access to the physical server and hardware components. For example, use locked rooms with restricted access for the database server hardware and networking devices. In addition, limit access to backup media by storing it at a secure offsite location.

Operating system service packs and upgrades include important security enhancements. Apply all updates and upgrades to the operating system after you test them with the database applications. Firewalls also provide effective ways to implement security. Logically, a firewall is a separator or restrictor of network traffic, which can be configured to enforce your organization’s data security policy. If you use a firewall, you will increase security at the operating system level by providing a chokepoint where your security measures can be focused. SQL Server uses operating system files for operation and data storage. A best practice for file security requires that you restrict access to these files.

SQL Server Objects Security – Access to SQL Server objects such as the SQL Server instance, databases, tables, and other objects can be restricted with SQL Server logins and database users. Granting only the appropriate permissions prevents users from accessing unauthorized information.

Data in SQL Server can be encrypted. Encryption enhances security by limiting data loss even in the rare occurrence that access controls are bypassed. For example, if the database host computer is misconfigured and a malicious user obtains sensitive data, such as credit card numbers, that stolen information might be useless if it is encrypted.

Application Security – SQL Server security best practices include writing secure client applications, which involves the application connecting to SQL Server securely, using appropriate protocols, network security, and network encryption.

Besides applications, there are tools used to access SQL Server, such as SQL Server Management Studio, the SQLCMD utility, SQL Server Configuration Manager, Profiler, BIDS, etc. We need to ensure these tools also use secure methods to connect to the databases.

Encryption to Secure SQL Server Data:
Encryption is the process of obfuscating data by the use of a key or password. This can make the data useless without the corresponding decryption key or password. In SQL Server encryption can be used for connections, data, and stored procedures. Data encryption can be performed by the OS, by SQL Server, or by the application.

There are different types of encryption available in SQL Server, including cell-level encryption, Transparent Data Encryption, and transport-level encryption.

Cell-level Encryption – Cell-level encryption allows sensitive data to be encrypted at a finer level of detail than the entire database. In a table, you might want to encrypt only certain columns, such as salary or credit card number, which should not be visible.

Benefits of using Cell-Level Encryption:
(1) Granular, user specific control on encrypting individual cells or column values rather than entire databases (compared to using Transparent Data Encryption – TDE).
(2) Data retains its encrypted state in memory unless it is actively decrypted.

Drawbacks of Cell-Level Encryption:
(1) Requires application changes and analysis of tables to locate sensitive data that needs to be encrypted.
(2) Encryption of data introduces randomization. This makes it impossible to index data and causes a performance impact since indexes on encrypted columns cannot be used while searching for a value.
(3) Cell-level encryption built-in functions only return varbinary type data and the output is limited to up to 8000 bytes.
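
A minimal sketch of cell-level encryption using a symmetric key protected by a certificate; the key, certificate, table, and column names are illustrative, and a database master key is assumed to already exist:

-- Create a certificate and a symmetric key protected by it.
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Credit card encryption';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;
GO
OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;

-- Encrypt a single column value (CardNumberEnc is a varbinary column).
UPDATE dbo.Customers
SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);

-- Decrypt it when reading it back.
SELECT CONVERT(varchar(25), DecryptByKey(CardNumberEnc)) AS CardNumber
FROM dbo.Customers;

CLOSE SYMMETRIC KEY CardKey;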

Transparent Data Encryption – Transparent Data Encryption (also called TDE) is a technology in SQL Server that offers encryption of data at rest. This feature automatically encrypts the entire database (data and log files), as well as database backups, without requiring any programming or code changes to your application. The process is entirely transparent, hence the name Transparent Data Encryption.

When TDE is first enabled for a specific database, SQL Server encrypts the database in the background. During this process, the database remains online and responsive to client requests (similarly, when encryption is disabled, SQL Server decrypts the database in the background). Encryption is performed at the page level, and does not increase the size of the database in any way. Once the entire database is encrypted, new data gets encrypted on the fly as it is written to disk, and all data gets decrypted when read back.
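
A minimal sketch of enabling TDE on a database; the certificate and database names are illustrative, and the password is a placeholder:

-- Create a database master key and a certificate in master.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Use$tr0ngP@ssw0rdHere';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
-- Create the database encryption key and turn encryption on.
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;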

Benefits of using TDE:
(1) Ease of implementation and transparency: TDE is essentially a “flip-the-switch” solution that allows you to encrypt your entire database and log files without application modifications.
(2) Additional security compared to cell-level encryption: TDE automatically encrypts tempdb and the database log files to prevent data leakage.
(3) Any data-type supported: Unlike cell-level encryption that returns only varbinary data, TDE allows you to store data using any native data type.
(4) Support for indexes: Because data is decrypted in the buffer pool, TDE allows the SQL Server query processor to use existing indexes on the data.

Drawbacks of TDE:
(1) Lowest support encryption granularity is the database.
(2) The data is not protected from authenticated, authorized database users, including the DBA.

Transport-Level Encryption – Secure Sockets Layer (SSL) encryption enables transmitting encrypted data across the network between an instance of SQL Server and a client application. SSL is used in SQL Server to provide security at transport level. It is most commonly used to support web clients, but it can also be used to support SQL Server native clients.

Secure Sockets Layer (SSL) is a protocol for establishing a secure communication channel to prevent the interception of critical or sensitive information across the network and other Internet communications. SSL allows the client and the server to authenticate the identity of each other. After the participants are authenticated, SSL provides encrypted connections between them for secure message transmission.

Enabling SSL encryption increases the security of data transmitted across networks between instances of SQL Server and applications. However, enabling encryption does slow performance.

Encryption Mechanisms – SQL Server provides the following mechanisms for encryption:

Transact-SQL functions – Individual items can be encrypted as they are inserted or updated using Transact-SQL functions.

Asymmetric keys – An asymmetric key is made up of a private key and the corresponding public key. Each key can decrypt data encrypted by the other. Asymmetric encryption and decryption are relatively resource-intensive, but they provide a higher level of security than symmetric encryption. An asymmetric key can be used to encrypt a symmetric key for storage in a database.

Symmetric keys – A symmetric key is one key that is used for both encryption and decryption. Encryption and decryption by using a symmetric key is fast, and suitable for routine use with sensitive data in the database.

Certificates – A public key certificate, usually just called a certificate, is a digitally-signed statement that binds the value of a public key to the identity of the person, device, or service that holds the corresponding private key. Certificates are issued and signed by a certification authority (CA). The entity that receives a certificate from a CA is the subject of that certificate.

References:
MSDN Reference
Cell Level Encryption
TDE

Happy Reading,
SQLServerF1 Team

 

Methods to Shrink a Data File in SQL Server

Shrinking a data file in SQL Server is generally considered a bad decision, as it leads to heavy fragmentation. However, most DBAs supporting customers often run into situations where a lot of data has been deleted from a data file, leaving a lot of free space inside it, and the customer wants to reclaim this free space because they would rather not add disk space or a new disk for other databases running short of space.

Why is shrinking a data file not good?
Because it causes heavy fragmentation. Shrinking large data files takes a lot of time and causes heavy locking/blocking, and the database is as good as “down” while the shrink operation is running on the data file. The transaction log file also grows large during the shrink operation.

If management wants to go ahead and reclaim the free space from the data file, below are some of the methods to release it.
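
Before choosing a method, it helps to check how much free space is actually inside the data files. A minimal sketch, run in the affected database:

-- Free space inside each data file of the current database (sizes in MB).
SELECT name AS LogicalFileName,
       size / 128 AS FileSizeMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS SpaceUsedMB,
       (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)) / 128 AS FreeSpaceMB
FROM sys.database_files
WHERE type_desc = 'ROWS';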

Method 1:
– Shrink the data file during downtime. (You may choose to shrink the file in one go to release all the free space, or shrink it incrementally, releasing a certain amount of free space each time; see the sketch after this list.)
– Rebuild the indexes to reduce the fragmentation, although this may increase the size of the data file again.
– Update the statistics with a full scan.
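
A minimal sketch of Method 1; the database, file, and table names and the target sizes are illustrative:

-- Shrink the data file in steps (target sizes are in MB).
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_Data, 450000);   -- first step
DBCC SHRINKFILE (MyDatabase_Data, 400000);   -- next step, and so on
GO
-- Rebuild indexes to remove the fragmentation introduced by the shrink
-- (this may grow the data file again).
ALTER INDEX ALL ON dbo.LargeTable REBUILD;
GO
-- Update statistics with a full scan.
UPDATE STATISTICS dbo.LargeTable WITH FULLSCAN;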

Method 2:
The method suggested in the article below changes the physical structure of the database, as we end up with a new filegroup.
Move Indexes to New FileGroup
– Create a new filegroup.
– Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time (see the sketch after this list).
– Drop the old filegroup that we were going to shrink anyway (or shrink it way down if it is the primary filegroup).
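
A minimal sketch of the filegroup move for one table; the filegroup, file, table, index, and column names are illustrative:

-- Add a new filegroup and a file on a disk with enough free space.
ALTER DATABASE MyDatabase ADD FILEGROUP FG_New;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = 'MyDatabase_FGNew1',
              FILENAME = 'E:\Data\MyDatabase_FGNew1.ndf',
              SIZE = 10GB)
    TO FILEGROUP FG_New;
GO
-- Rebuild the clustered index onto the new filegroup; this moves the
-- table and removes its fragmentation at the same time.
CREATE UNIQUE CLUSTERED INDEX PK_LargeTable
    ON dbo.LargeTable (Id)
    WITH (DROP_EXISTING = ON)
    ON FG_New;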

Method 2 involves moving the tables to another filegroup and is complex. Some of the issues that make it complex are:

– LOB data cannot be moved to another filegroup this way. (If LOB data needs to be moved, we need to create a new table in the other filegroup, copy the data, and then rename the tables; special care also needs to be taken when moving dependent objects like foreign key constraints, triggers, indexes, and statistics.)

– There may be many tables in the database, and all of them need to have a clustered index. (If a table does not have a clustered index, we need to create one and then drop it later.)

– This method uses CREATE INDEX … WITH (DROP_EXISTING = ON), which has to be run individually on every table. Each table may take a very long time, and any human error can be costly.

– It requires additional disk space; while the command is being run on the largest table, it will require more than twice the size of that table.

– Any unknown/unexpected issues.

Both of the above methods require a downtime window, as we have to run the shrink to release the free space from the file. These methods need to be tested on a test server before implementing them in production.

In one case I worked on, shrinking a data file from 470 GB to 300 GB using Method 1 took about 7 hours on a fairly fast server. After the shrink operation we had to rebuild indexes and update statistics, which took another 15 hours, so in total we had about 22 hours of significant performance degradation. We had to monitor the progress of the shrink, reindexing, and update statistics jobs during this time. Fortunately, there were no performance issues reported after the activity.

Happy Reading,
SQLServerF1 Team

 

Could Not Open SQL Server Errorlog File Errors in Event Viewer of Passive Node

You may sometimes notice errors like the one below on the passive cluster node.

initerrlog: Could not open error log file 'F:\MSSQL10_50.SQLInst\MSSQL\Log\ERRORLOG'. Operating system error = 3(failed to retrieve text for this error. Reason: 15100).

All SQL Server, IP address, network name, and disk resources are owned by the active node, so if you try to start the SQL services on the passive node, it will result in this error.

At times you may see this error in the Application event log even though no one is trying to start the SQL services on the passive cluster node. This could be the result of some application attempting to start the SQL Server service on the passive node.

By running Process Monitor (procmon) you should be able to see which process is trying to start the SQL instance on the passive node. Some applications which may do this are antivirus, monitoring, or audit software. On one occasion we noticed that the SCCM application, which has a service similar to “SMS_SITE_SQL_BACKUP_SCCMXXX”, was trying to start the SQL instance on the passive node.

Disabling this service on the passive node stopped these errors in our case.

 

SQL Server 2012 SP1 Upgrade Step ‘msdb110_upgrade.sql’ Encountered Error 537, State 3, Severity 16

A SQL Server 2012 instance did not start after applying SP1. Setup finished successfully, but the SQL instance failed to start. I reviewed the SQL Errorlog file and noticed the errors below at the end of the file, where it failed while applying the upgrade scripts.

spid8s Error: 537, Severity: 16, State: 3.
spid8s Invalid length parameter passed to the LEFT or SUBSTRING function.
spid8s Error: 912, Severity: 21, State: 2.
spid8s Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 537, state 3, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.
spid8s Error: 3417, Severity: 21, State: 3.
spid8s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

The error “Invalid length parameter passed to the LEFT or SUBSTRING function” occurs if any user database on the instance has logical file names matching master or mastlog.

Start SQL Server using -T902 Trace Flag:
To run any commands, we first need to connect to the SQL Server instance, but the SQL services are in a stopped state. We can start SQL Server while skipping the script upgrade by using trace flag -T902. Please refer to the post below on how to start a SQL Server instance using -T902.

How to Start SQL Server Using -T902 Trace Flag to Skip the Script Upgrade

Run the command below to find whether any database logical file names are similar to the master database logical file names.

SELECT DB_NAME(database_id) AS DatabaseName, name, physical_name
FROM master.sys.master_files
WHERE name LIKE 'mast%';

If the above query returns any user database whose logical file names are master or mastlog, either detach that database or rename its logical file names to something else.

To Detach a Database
USE master;
GO
EXEC sp_detach_db @dbname = N'SpecifyDatabaseName';

To Rename Logical File Names
USE master;
GO
ALTER DATABASE [SpecifyDatabaseName]
MODIFY FILE (NAME = 'current_logical_name', NEWNAME = 'new_logical_name');

Now start the SQL Server instance after removing trace flag -T902. Check the SQL Errorlog to ensure the upgrade scripts completed successfully.

Cheers,
Keerthi Deep

 

Starting SQL Server using -T902 Trace Flag to skip the Script Upgrade

Whenever we install a patch (Service Pack or Cumulative Update) to SQL Server, or upgrade SQL Server from one version to another (for example, SQL Server 2008 to SQL Server 2012), some scripts are applied to the master and msdb databases at the end of the upgrade process. Sometimes the script upgrade may fail for various reasons, like the ones below.

Script level upgrade for database ‘master’ failed after Applying SQL 2008 R2 SP2 due to Error “The database principal owns a schema in the database, and cannot be dropped”

Script Upgrade Failure to Apply msdb110_upgrade Script

In such cases, we have to start the SQL Server instance using -T902 to stop SQL Server from attempting to apply the scripts, take corrective actions to fix the problem causing the scripts to fail, and finally remove trace flag -T902 and start SQL Server normally, which will then apply the upgrade scripts successfully.

Below are the steps to start SQL Server with the script upgrade skipped using trace flag -T902.

1. Start -> All Programs -> Microsoft SQL Server 2008 R2 or the highest version installed -> Configuration Tools -> SQL Server Configuration Manager

2. In SQL Server Configuration Manager, click SQL Server Services.

3. In the right pane, right-click SQL Server (<instance name>), and then click Properties.

4. On the Startup Parameters tab, in the Specify a startup parameter box, type the parameter (in this case the trace flag -T902), and then click Add. You will now see parameters similar to those shown below.

SQL Server Instance Startup Parameters

5. Click OK.
6. Restart the Database Engine.
7. Connect to SQL Server instance from SQL Server Management Studio and take corrective actions to resolve the errors causing script upgrade to fail.
8. Now finally remove the -T902 trace flag from SQL Server Configuration manager
9. Restart SQL Server Instance
10. Verify the Errorlog to make sure the script upgrade finished successfully.

 

Script level upgrade for database ‘master’ failed after Applying SQL 2008 R2 SP2

I worked on an issue where a SQL Server 2008 R2 instance failed to start after installing Service Pack 2. The SP2 installation was successful, but the SQL services failed to come online. I checked services.msc and found the SQL services in a stopped state and the SQL cluster resources offline.

Checked the SQL Errorlog and found that the error below caused the failure:

spid8s Creating procedure [dbo].[sp_syspolicy_purge_health_state] ...
spid8s Error: 15138, Severity: 16, State: 1.
spid8s The database principal owns a schema in the database, and cannot be dropped.
spid8s Error: 912, Severity: 21, State: 2.
spid8s Script level upgrade for database 'master' failed because upgrade step 'sqlagent100_msdb_upgrade.sql' encountered error 15138, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be tak
spid8s Error: 3417, Severity: 21, State: 3.
spid8s Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.


The error “The database principal owns a schema in the database, and cannot be dropped” occurs if we try to drop a user that owns a schema. In this case, the script upgrade was running scripts from the file sqlagent100_msdb_upgrade.sql, where it was trying to drop a user. I reviewed the script “sqlagent100_msdb_upgrade.sql”, which is found in “C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Install”.

From the script, I found the code below, where it drops the user ##MS_PolicyEventProcessingLogin## and later recreates it.

IF EXISTS (SELECT * from sys.database_principals where name = N'##MS_PolicyEventProcessingLogin##')
DROP USER [##MS_PolicyEventProcessingLogin##]
GO
use master
GO
IF EXISTS (SELECT * from sys.server_principals WHERE name = '##MS_PolicyEventProcessingLogin##')
BEGIN
IF EXISTS (SELECT * from sys.server_triggers WHERE name = N'syspolicy_server_trigger')
DROP TRIGGER [syspolicy_server_trigger] ON ALL SERVER
DROP LOGIN [##MS_PolicyEventProcessingLogin##]
END

SQL Server was unable to drop the user ##MS_PolicyEventProcessingLogin## because this user owned the db_owner schema in the msdb database. To allow dropping this user, we need to transfer ownership of the schema to another user.
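
A quick way to confirm which schemas the principal owns is to query sys.schemas in msdb. A minimal sketch:

USE msdb;
GO
-- List schemas owned by the user the upgrade script is trying to drop.
SELECT s.name AS SchemaName, dp.name AS OwnerName
FROM sys.schemas AS s
JOIN sys.database_principals AS dp
    ON s.principal_id = dp.principal_id
WHERE dp.name = N'##MS_PolicyEventProcessingLogin##';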

First, to run any SQL commands, we need to establish a connection to SQL Server, but in this case the SQL instance is in a stopped state.

Using the steps below, I started SQL Server with trace flag -T902, which skips running the upgrade scripts.
SQL Server Configuration Manager can be used to set the SQL Server startup parameters. You can specify the trace flag there:

Start -> All Programs -> Microsoft SQL Server 2008 R2 -> Configuration Tools -> SQL Server Configuration Manager

In SQL Server Configuration Manager, click SQL Server Services.

In the right pane, right-click SQL Server (<instance name>), and then click Properties.

On the Startup Parameters tab, in the Specify a startup parameter box, type the parameter (in this case the trace flag -T902), and then click Add. You will now see parameters similar to those shown below.

SQL Server Instance Startup Parameters

Click OK.
Restart the Database Engine.

Ran the command below in the msdb database to transfer ownership of the db_owner schema to the dbo user:
USE msdb;
GO
ALTER AUTHORIZATION ON SCHEMA::db_owner TO dbo;

Removed trace flag -T902 and started SQL Server normally; this time SQL Server completed running the upgrade scripts successfully.

Tested the failover and failback and everything worked well.

Cheers,
Keerthi Deep

 