How to View the Data Encrypted Using Always Encrypted in SQL Server 2016

SQLServerF1

Always Encrypted is a new feature introduced with SQL Server 2016 to secure sensitive data in a better way, and it works equally well with Azure SQL Database. Although other encryption features such as TDE, SSL, and cell-level encryption existed prior to SQL Server 2016, the new Always Encrypted feature ensures that critical data is not visible to anyone other than the data owner: not SQL Server DBAs, system administrators, network administrators, or hackers listening on the network between client and server. Always Encrypted achieves this by keeping the data encrypted both at rest and in motion. The important components of the Always Encrypted feature are listed below.
Column Master Key – an encryption key that protects the column encryption keys. We must have at least one column master key before encrypting any columns.
Column Encryption Key – the encryption key that actually protects the data in our encrypted columns.

The most important thing that allows us to view or work with Always Encrypted data is the connection string.
Connection string – for a client driver to understand that column encryption is in use, the connection string must include the attribute Column Encryption Setting=enabled;
We have all seen the different connection strings used to connect applications to SQL Server, with options that vary depending on the type of authentication and on additional parameters, for example when database mirroring is in use. For Always Encrypted, we likewise need to add this extra parameter, Column Encryption Setting=enabled, to the connection string in order to work with the encrypted data.
Another useful aspect of Always Encrypted is that, once this parameter is added to the connection string, the application code does not need to be changed to access the encrypted data.
We can find the Column Master Key and Column Encryption Key metadata under DatabaseName -> Security -> Column Master Key Definitions -> Column Encryption Keys.
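As a hedged sketch of how these two keys are defined in T-SQL (the key names, the certificate thumbprint, and the encrypted value below are placeholders; the ENCRYPTED_VALUE blob is normally generated by a tool such as SSMS rather than typed by hand):

```sql
-- Register a column master key backed by a certificate in the
-- Windows certificate store (thumbprint is a placeholder).
CREATE COLUMN MASTER KEY CMK_Demo
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/<certificate-thumbprint>'
);

-- A column encryption key is stored encrypted under the master key;
-- the binary value here is illustrative only.
CREATE COLUMN ENCRYPTION KEY CEK_Demo
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_Demo,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075
);
```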

At this point, this feature is not supported by all client libraries. The only provider that currently works with Always Encrypted data is ADO.NET 4.6, so we need to ensure that the .NET Framework 4.6 is installed on any machine that will run a client application that interfaces with Always Encrypted data.
– It is important for many developers and DBAs to be able to view this data. We can use SQL Server Management Studio (SSMS): when connecting, go to Options, which brings up additional tabs, then on the Additional Connection Parameters tab specify “column encryption setting=enabled”. Once we make the connection successfully in SSMS, we will be able to view the data.
– But when we try to read the Always Encrypted data, we may receive errors saying the encrypted data could not be decrypted, because the system on which we are launching SSMS may not have the certificate that can decrypt the data.
We need to install the certificate, with both its public and private keys, on the local system; after that we will be able to successfully view the encrypted data.

Hope this was helpful.

This is applicable to the below versions of SQL Server

SQL Server 2016

Thanks,
SQLServerF1 Team
In-Depth Blogs on SQL Server, Information about SQL Server Conferences and Events, Frequently asked questions, SQL Server Trainings

 

Failed to Decrypt A Column Encryption Key Error in SQL Server 2016


There have been many new features introduced with SQL Server 2016, and you may get different errors while using them. One such issue DBAs might hit while trying to use the new SQL Server 2016 Always Encrypted feature is below.
Msg 0, Level 11, State 0, Line 0
Failed to decrypt column ‘test’
Msg 0, Level 11, State 0, Line 0
Failed to decrypt a column encryption key using key store provider: ‘MSSQL_CERTIFICATE_STORE’. The last 10 bytes of the encrypted column encryption key are: ‘F1-S3-K2-8E-69-E7-F7-O8-1W-B4’.
Msg 0, Level 11, State 0, Line 0
Certificate with thumbprint ‘664EBAB58B3A8150AC51E2BEF40BDEC17055167C0′ not found in certificate store ‘My’ in certificate location ‘CurrentUser’.
Verify the certificate path in the column master key definition in the database is correct, and the certificate has been imported correctly into the certificate location/store.
Parameter name: masterKeyPath

This error occurs if we do not have the certificate imported into the local machine’s certificate store. Always Encrypted has two main kinds of keys. One is the column master key, which is stored in a trusted key store; information about column master keys, including their location, is stored in the database in system catalog views. The other is the column encryption key, which actually encrypts the sensitive data. For clients to be able to access the data, a certificate needs to be installed on the client systems, without which we will get certificate-related errors. This ensures that only authorized users holding the certificate can access and decrypt the sensitive data.
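When troubleshooting, it helps to check which key path the database expects the column master key to live at; the metadata is exposed in the documented system catalog views:

```sql
-- Where does the database expect the column master key to live?
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

-- Which column encryption keys exist, and which master key protects each value?
SELECT cek.name AS cek_name, cmk.name AS cmk_name
FROM sys.column_encryption_keys AS cek
JOIN sys.column_encryption_key_values AS v
    ON v.column_encryption_key_id = cek.column_encryption_key_id
JOIN sys.column_master_keys AS cmk
    ON cmk.column_master_key_id = v.column_master_key_id;
```

The key_path returned by the first query must match a certificate actually present in the named store on the client machine.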

Apart from this error caused by not importing the certificate, Always Encrypted can return other errors even when the certificate is installed or imported on the client system, like the one below.
Failed to decrypt a column encryption key using key store provider: ‘MSSQL_CERTIFICATE_STORE’.
The last 10 bytes of the encrypted column encryption key are: ‘F1-S3-K2-8E-69-E7-F7-O8-1W-B4’. Certificate specified in key path ‘CurrentUser/My/664EBAB58B3A8150AC51E2BEF40BDEC17055167C0’ does not have a private key to decrypt a column encryption key. Verify the certificate is imported correctly.
Parameter name: masterKeyPath

The above error occurs even after we import the certificate when the imported certificate only has a public key, which is not enough to decrypt the data. We also need the private key to be able to successfully decrypt the data.


 

New Always Encrypted Feature in SQL Server 2016


It is important for every organization to protect its customers’ and company’s data and ensure that no unauthorized person can view or update it. SQL Server has many security mechanisms to ensure that only authorized users can see the data or make changes to it, such as logins and database users that restrict what each principal can do with the data. However, unauthorized users still find many ways to gain access, for example by tapping the network between SQL Server and the client system or by hijacking someone else’s account. So security at the login/database-user level alone is not enough, and other mechanisms are available, such as SSL certificates to encrypt data transmitted over the network and encryption at the individual cell/column, table, or database level. Prior to SQL Server 2016, database- or cell-level encryption still let database administrators and other users with high permissions access the encrypted data, because the encryption keys are mostly stored in the database and managed by the DBAs. The new feature will also help Microsoft promote Azure SQL Database to customers by showcasing that their data is accessible only to their clients and to no one else.

Starting with SQL Server 2016, the new Always Encrypted feature safeguards sensitive data from high-privilege users as well as unauthorized users. Always Encrypted is designed to protect sensitive data, such as credit card numbers or national identification numbers, stored in Azure SQL Database or on-premises SQL Server databases. It allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data (and can view it) and those who manage the data (DBAs, who should have no access to the sensitive values). This ensures that on-premises database administrators, cloud database operators, and other high-privileged or unauthorized users cannot gain access to the encrypted data. It also enables comfortable delegation of on-premises database administration tasks to third parties and reduces security clearance requirements for DBA staff.

The Always Encrypted feature mainly uses two types of keys: column encryption keys and column master keys.
Column master keys – protect the keys used to encrypt the column encryption keys. Column master keys must be stored in a trusted key store. Information about column master keys, including their location, is stored in the database in system catalog views.
Column encryption keys – encrypt the sensitive data stored in database columns. All the values in a column can be encrypted using a single column encryption key. Encrypted values of column encryption keys are stored in the database in system catalog views. We should also keep a copy of each column encryption key in a secure/trusted location for backup.
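To make this concrete, here is a hedged sketch of a table with an encrypted column, assuming a column encryption key named CEK_Demo already exists (table and column names are hypothetical):

```sql
CREATE TABLE dbo.Patients (
    PatientId int IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic encryption permits equality lookups on the column;
    -- randomized encryption is stronger but supports no operations.
    -- Deterministic encryption of character data requires a BIN2 collation.
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Demo,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL,
    Name nvarchar(100) NOT NULL
);
```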

Another important component that plays a key role in Always Encrypted is the Always Encrypted-enabled driver, which makes the encryption transparent to client applications. The driver makes a round trip to the server for each parameterized query to learn whether each query parameter should be encrypted and how. It then calls a key store provider to decrypt the encrypted column encryption key values, and the resulting plaintext column encryption keys are cached. Query results containing data from encrypted columns are accompanied by encryption metadata to enable transparent decryption: when an Always Encrypted-enabled client driver queries encrypted columns, SQL Server sends the encryption settings for the queried columns, including the encryption type, the encrypted values of the column encryption keys protecting the queried columns, and the location of the corresponding column master keys. The driver uses that information to contact the key store containing each column master key and decrypt the column encryption keys encrypted with that master key. If Always Encrypted is not enabled on the client side, the driver simply returns the encrypted values, which have the varbinary(max) data type.


 

New Enhancements to T-SQL in SQL Server 2016


Transact-SQL is the key to interacting with SQL Server; it acts as the medium of communication between SQL Server and DBAs or developers. Although SQL Server provides great GUI features for administration, underneath, the actions performed in the GUI are translated into T-SQL queries and commands. Over many releases of SQL Server there have been lots of changes: new additions supporting new features, enhancements to T-SQL supporting existing features, and new commands introduced to meet the requirements of developers and DBAs. Just like any other release, SQL Server 2016 brings some great enhancements worth knowing about, which include TRUNCATE TABLE with partitions, NO_PERFORMANCE_SPOOL, DROP IF EXISTS, T-SQL support for new features like JSON and temporal tables, enhancements to FORMATMESSAGE, etc.

TRUNCATE TABLE – everyone is aware of this command, which removes all the data from a table; what happens internally is a different story, but from the user’s perspective none of the data in the table can be found anymore. Starting with SQL Server 2016, this command has been improved so that we can truncate a single partition (or a range of partitions) instead of the entire table. This makes maintenance of large partitioned tables easier, and the operation is faster than deleting all the data from a partition.
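The partition form can be sketched as follows (the table name and partition numbers are illustrative; valid numbers depend on the table’s partition scheme):

```sql
-- Truncate only selected partitions of a partitioned table
TRUNCATE TABLE dbo.SalesHistory
WITH (PARTITIONS (1, 2, 10 TO 12));
```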
– Another enhancement is that ALTER TABLE can now alter many columns while the table remains online, using the WITH (ONLINE = ON | OFF) option.
– A new query hint, NO_PERFORMANCE_SPOOL, prevents spool operators from being added to query plans. This helps improve performance when many concurrent queries are running spool operations.

– Starting with SQL Server 2016, a MAXDOP option can be used with DBCC CHECKTABLE, DBCC CHECKDB, and DBCC CHECKFILEGROUP to limit the degree of parallelism, letting DBAs reduce the load of integrity checks in specific environments without resorting to Resource Governor.
– FORMATMESSAGE has been improved so that we can now supply our own message string. In prior versions of SQL Server, FORMATMESSAGE could only construct a message from an existing message in sys.messages; with this enhancement, we can also supply our own messages that are not part of sys.messages.
– There have been new T-SQL commands added for supporting new features like JSON, Temporal tables, session context, etc.
– Another new and useful enhancement is the introduction of DROP IF EXISTS, which replaces the longer IF-condition existence checks previously needed before dropping an object.
– The maximum index key size for nonclustered indexes has been increased to 1,700 bytes.
– Another enhancement is support for Advanced Analytics Extensions, which allow us to execute scripts written in a supported external language such as R. Starting with SQL Server 2016, Transact-SQL supports R through the sp_execute_external_script stored procedure, once the “external scripts enabled” server configuration option is turned on.
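A few of the enhancements above can be sketched together (database, table, and procedure names are hypothetical):

```sql
-- DROP IF EXISTS: no more IF OBJECT_ID(...) IS NOT NULL guard
DROP TABLE IF EXISTS dbo.StagingOrders;
DROP PROCEDURE IF EXISTS dbo.usp_LoadOrders;

-- Cap the parallelism of an integrity check without Resource Governor
DBCC CHECKDB (N'SalesDB') WITH MAXDOP = 4;

-- FORMATMESSAGE with an ad hoc message string (not in sys.messages)
SELECT FORMATMESSAGE('Loaded %d rows into %s', 42, 'dbo.Orders');
```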


 

New SESSION_CONTEXT() Support in SQL Server 2016


Many applications have a requirement to store and retrieve session context information from SQL Server. Session context information is very useful for storing information specific to each user or to the current state of the application, which some applications need in order to control the logic in their Transact-SQL statements. Prior to SQL Server 2016, application developers relied mostly on the CONTEXT_INFO function or sometimes on T-SQL session variables, but both of these have their own disadvantages. The scope of Transact-SQL variables is limited to the current Transact-SQL batch, stored procedure, trigger, or user-defined function. SQL/application developers used the CONTEXT_INFO function to retrieve the session context for the current session. The context values for all current sessions and batches can also be retrieved from the context_info columns in the sys.dm_exec_requests or sys.dm_exec_sessions dynamic management views, but querying those views requires SELECT and VIEW SERVER STATE permissions on the SQL Server instance; these permissions are not required when using the CONTEXT_INFO function.

But the CONTEXT_INFO function has its own limitations too. It holds a single binary value, which makes it very difficult to work with: the data must be converted from binary to a human-readable format, and storing multiple values in a single binary representation adds complexity. Another limitation is that only 128 bytes are supported per connection, which many developers disliked. CONTEXT_INFO also has a security concern, since it allows users to overwrite the value at any time, and it does not work well with Azure SQL databases. Based on feedback from the developer community, Microsoft introduced a new built-in function called SESSION_CONTEXT() starting with Microsoft SQL Server 2016. This is not a new invention by Microsoft; there are equivalents in other RDBMSs such as Oracle.

SESSION_CONTEXT differs from CONTEXT_INFO in that it uses key-value pairs to store session data: the keys are of type sysname and the values are of type sql_variant. The total size of data that can be stored has been increased to 256 KB, significantly more than CONTEXT_INFO allows. The security concern has also been addressed: a key can be set as read-only, so its value cannot be changed once it has been established. And since this feature was introduced after Azure SQL Database launched, Microsoft has made it work the same on-premises and in Azure SQL databases. We set the key-value pairs using the stored procedure sys.sp_set_session_context, which takes the following parameters.
@key – the key being set, of type sysname. The maximum key size allowed is 128 bytes.
@value – the value for the specified key, of type sql_variant. Setting the value to NULL frees the memory. The maximum size is 8,000 bytes.
@read_only – a flag of type bit, accepting zero or one. If set to 1, the value for the specified key cannot be changed again on this logical connection. If 0 (the default), the value can be changed.
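Putting the parameters together (the key name and value here are illustrative):

```sql
-- Store a value for this session and mark it read-only
EXEC sys.sp_set_session_context
     @key = N'user_id', @value = 12345, @read_only = 1;

-- Read it back anywhere later in the same session
SELECT SESSION_CONTEXT(N'user_id') AS user_id;

-- A second attempt to change a read-only key raises an error:
-- EXEC sys.sp_set_session_context @key = N'user_id', @value = 99;
```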


 

JSON (JavaScript Object Notation) Support with SQL Server 2016


There have been many new features introduced with SQL Server 2016, along with enhancements and additional support for existing ones. One such interesting change in Microsoft SQL Server 2016 is native support for JSON. JSON stands for JavaScript Object Notation; it is an open, text-based data exchange format that provides a standardized format better suited for Ajax-style web applications. In other words, JSON is a data interchange format that has become popular for moving data between systems. There was no JSON support prior to SQL Server 2016, so application developers had to fall back on third-party tools or workarounds to produce output in this format.

One popular language many of us know for exchanging data between heterogeneous systems is XML. SQL Server has supported XML for many versions, with T-SQL queries specifically designed to store, retrieve, and manipulate XML data. JSON is another such data format used to exchange data between heterogeneous systems. JSON was initially developed specifically for JavaScript, but it later became a language-independent format. As Microsoft wants to spread the usage of SQL Server across different platforms, there have been many changes, such as support for running SQL Server on Linux and support for formats like JSON. Although SQL Server 2016 supports JSON, that does not mean XML is no longer supported: newer versions of Microsoft SQL Server will continue to support both XML and JSON, and in the future we may see even more formats added to the list, helping SQL Server scale new heights and work with any application.

The format of JSON looks like the below:
[ 
   { "name": "Test", "skills":["SQL","C#","Azure"] },
   { "name": "Mike", "surname": "Taite" }
]

SQL Server offers many built-in functions and operators that let us do things like:
– Parse JSON text and read or modify the values.
– Convert arrays of JSON objects into table format.
– Run any T-SQL query against the converted JSON objects.
– Format the results of T-SQL queries as JSON.

Some of the key capabilities that SQL Server provides related to JSON include:
– Use the JSON_VALUE function to extract a scalar value from a JSON string
– Use JSON_QUERY to extract an object or an array
– Use the ISJSON function to test whether a string contains valid JSON.
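Using the sample document from above, these functions (together with OPENJSON and FOR JSON) can be exercised like this:

```sql
DECLARE @json nvarchar(max) = N'[
   { "name": "Test", "skills": ["SQL", "C#", "Azure"] },
   { "name": "Mike", "surname": "Taite" }
]';

SELECT ISJSON(@json) AS is_valid;                     -- 1 (valid JSON)
SELECT JSON_VALUE(@json, '$[0].name') AS first_name;  -- Test
SELECT JSON_QUERY(@json, '$[0].skills') AS skills;    -- ["SQL", "C#", "Azure"]

-- Convert the JSON array into rows and columns
SELECT *
FROM OPENJSON(@json)
WITH (name    nvarchar(50) '$.name',
      surname nvarchar(50) '$.surname');

-- And the reverse: format a query result as JSON
SELECT name, database_id FROM sys.databases FOR JSON PATH;
```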


 

DATABASE SCOPED CONFIGURATION in SQL Server 2016


Microsoft SQL Server 2016 has many new features and enhancements to existing features, and Microsoft claims breakthrough improvements in query performance compared to previous versions. Database Scoped Configurations is one of the interesting enhancements made in SQL Server 2016. In previous versions of SQL Server, we could open the database properties to view various details about a database and change some of its properties to alter its behavior. Starting with Microsoft SQL Server 2016, a new section named “Database Scoped Configurations” has been added to the Options tab of the Database Properties window, containing several configurable options.

Legacy Cardinality Estimation – This option allows us to set the query optimizer’s cardinality estimation model at the individual database level, independently of the database compatibility level. Setting compatibility level 130 for a database in SQL Server 2016 brings a lot of enhancements, such as performance improvements with columnstore and In-Memory OLTP and all the query optimizer enhancements made over the years under trace flag 4199. But there can be situations where you want to benefit from the new compatibility level while the application keeps working as it did under the old cardinality estimator; in that case we can enable this option. With the compatibility level set to 130 and this option enabled, we get all the query execution performance benefits that come with SQL Server 2016 while still using the old CE.
Legacy Cardinality Estimation For Secondary – This option can be set to PRIMARY or SECONDARY and can be set only on secondaries. PRIMARY, the default, indicates that the secondaries use whatever is configured on the primary; if the configuration on the primary changes, the secondaries adjust to the same value. While secondary databases can be configured with a setting different from their primary, all secondaries use the same configuration; different settings cannot be configured for individual secondaries.

MAXDOP – In previous versions of SQL Server, we could set MAXDOP at the instance level or at the batch/stored procedure/query level, but not at the database level. Starting with SQL Server 2016, we can set the maximum degree of parallelism (MAXDOP) for an individual database; this database-level setting overrides the sp_configure MAXDOP setting for queries run against the database.
MAXDOP For Secondary – This option can be set to PRIMARY (the default, inherit the primary’s setting) or SECONDARY, and behaves exactly as described for Legacy Cardinality Estimation For Secondary: all secondaries share one configuration, and individual secondaries cannot be configured differently.

Parameter Sniffing – Using this option, we can enable or disable parameter sniffing at the database level. In previous versions of SQL Server we could turn it on or off with a trace flag, but now it is as simple as changing an option in the database properties.
Parameter Sniffing For Secondary – Again set to PRIMARY (the default) or SECONDARY, with the same semantics as the other For Secondary options: secondaries either inherit the primary’s value or share a single override; individual secondaries cannot differ.

Query Optimizer Fixes – In previous versions of SQL Server, many query optimizer hotfixes were enabled through trace flags. In SQL Server 2016, we can choose to enable or disable query optimizer hotfixes for an individual database.
Query Optimizer Fixes For Secondary – Set to PRIMARY (the default) or SECONDARY, with the same semantics as the other For Secondary options described above.
– Starting with SQL Server 2016, we also have the ability to clear an individual database’s plan cache using the command:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
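The same settings exposed in the SSMS Options tab can be changed with T-SQL; a sketch with illustrative values:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;

-- Secondaries either inherit the primary's value or share one override
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = PRIMARY;
```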


 

Changes to SQL Server 2016 Management Studio (SSMS)

With each new release of SQL Server, many new features get introduced, and Microsoft has put a lot of effort into improving SQL Server Management Studio to support them. There have also been many improvements in Azure integration and management within SSMS. Previously DBAs could not do much with Azure databases from SQL Server Management Studio, but with the release of SSMS 2016 there is much closer integration for managing Azure SQL instances and databases. For those who are not aware, SQL Server Management Studio (SSMS) is now available as a separate individual download, so we can install just SSMS on a client machine or on a server. Some of the new features supported in SQL Server Management Studio (SSMS) include:

– Addition of human-readable error messages. Now, when there is an error while performing an operation in SSMS, we get a more meaningful message, which is a great help in troubleshooting instead of having to decode what the error is about, as in some previous versions of SQL Server Management Studio.
– Stretch Database wizard improvements, adding support for predicates.
– SQL Server 2016 Management Studio now allows us to create Entity-Relationship (ER) diagrams for Azure SQL databases without the need for any external tool.
– New options appear when we right-click an Azure SQL database, such as “Open in Management Portal” and new “Reports” including the “Transaction Performance Analysis Overview”.
– New options are supported when we right-click a user table, such as Design (we can now design Azure SQL database tables from SSMS), options to select or edit the top x rows, and a graphical interface to create and administer full-text indexes on Azure SQL database tables.

– Database properties now include additional settings: in the Options tab there is an Azure section with options to change the database edition, the service level objective, and the maximum database size.
– Many other options have been enabled in Azure SQL database and table properties, allowing us to manage or change Azure SQL databases and tables just like regular on-premises SQL Server databases and tables.
– Improvements to the Always Encrypted PowerShell cmdlets to add key encryption APIs.
– Some known issues have been fixed, such as IntelliSense now being turned off in the SSMS toolbar when it has been disabled in the Tools > Options dialog.


 

Basics of Temporal Tables in SQL Server 2016


Microsoft has introduced many new features with the release of SQL Server 2016. One such interesting feature is temporal tables, also known as system-versioned tables. In one line, temporal tables allow SQL Server to automatically keep the history of the data in a table, instead of only the current data that we see in regular tables, temporary tables, or table variables. Temporal tables are a database feature newly introduced with SQL Server 2016. Although called a system-versioned temporal table, this is a new type of user table in SQL Server 2016, designed to keep a full history of data changes and to allow easy point-in-time analysis of the data. It is called system-versioned because the period of validity for each row is managed by the system, i.e. the database engine.

Temporal tables were introduced in the ANSI SQL:2011 standard, which Microsoft has adopted from SQL Server 2016 onwards. To understand temporal tables better, compare them with a regular user table. In a normal user table we insert data and can see it with a SELECT statement, but once we issue a DELETE or UPDATE, subsequent SELECT statements see only the latest data. With temporal tables, we can still query the data that was deleted, or see the old value of data that was modified. As a concrete example, suppose we have a table with one column and one row containing the value ABCD, and we update it from ABCD to ABC. In a normal user table we can now only see ABC, but with a temporal table we can actually see both the old value ABCD and the new value ABC. This is possible because the history of data changes is stored in a history table, which keeps the old data along with start and end times identifying when each row was current in the temporal table, allowing us to see the data as of an older point in time.

Now that we have a basic idea of temporal tables, let us try to understand how to create and query them. As with any other feature or object, there are some prerequisites for creating temporal tables.
– It is mandatory for a temporal table to have a primary key defined.
– Two additional columns must be defined when creating a temporal table, used to record the start and end date and time of each row’s validity. These two columns must be of the datetime2 data type and are referred to as period columns. They are used exclusively by the system to record the period of validity for each row whenever the row is modified.
– These tables cannot be part of In-memory OLTP feature.
– There are some restrictions on types of triggers that can be created on these tables.

CREATE TABLE dbo.TestTemporalTable
	(ID int PRIMARY KEY,
	Col1 int,
	Col2 int,
	SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
	SysEndTime datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
	PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime))
        WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TestTemporalTableHistory));

If we don’t specify a HISTORY_TABLE, SQL Server creates a history table with its own name like dbo.MSSQL_TemporalHistoryFor_xxx, where xxx is the object id of the main table. In the GUI, TestTemporalTable is shown in the tables list, with TestTemporalTableHistory shown beneath it as a subordinate node. TestTemporalTableHistory has the same columns as the actual temporal table, but any constraints are removed. We can also create different indexes on TestTemporalTableHistory and TestTemporalTable based on how the application uses each table; having different indexes greatly helps performance. Note that this is not comparable to CDC, which mainly uses the transaction log file to track changes.
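Once dbo.TestTemporalTable from above exists, the history can be queried with the FOR SYSTEM_TIME clause (the timestamp below is illustrative):

```sql
-- Change a row, then look back in time
UPDATE dbo.TestTemporalTable SET Col1 = 2 WHERE ID = 1;

-- Current data only
SELECT ID, Col1, Col2 FROM dbo.TestTemporalTable;

-- State of the table as of a given instant (times are stored in UTC)
SELECT ID, Col1, Col2 FROM dbo.TestTemporalTable
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';

-- Every version of every row, current and historical
SELECT ID, Col1, SysStartTime, SysEndTime
FROM dbo.TestTemporalTable
FOR SYSTEM_TIME ALL;
```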


 