Shrinking a data file in SQL Server is generally considered a bad practice because it causes heavy index fragmentation. However, most DBAs supporting customers run into situations where a large amount of data has been deleted from a data file, leaving a lot of free space inside it, and the customer wants that space back because they prefer not to add disk space or a new disk for other databases that are running short of space.
Why is shrinking a Data file not good?
Because it causes heavy fragmentation. Shrinking a large data file takes a long time and causes heavy locking/blocking, and the database is as good as “down” while the shrink operation is running. The transaction log file also grows large during the shrink operation.
As management wants to go ahead and reclaim the free space from the data file, below are some methods to release it.
– Shrink the data file during downtime. (You may shrink the file in one pass to release all the free space, or shrink it incrementally, releasing a certain amount of free space each time.)
– Rebuild the indexes to reduce the fragmentation, though this may increase the size of the data file again.
– Update the statistics with a full scan.
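The steps above can be sketched in T-SQL as follows; the database name, logical file name, object names, and target sizes are placeholders to adapt to your environment:

```sql
USE [SalesDB];  -- hypothetical database name
GO
-- Incremental shrink: release free space in steps rather than all at once.
-- Target sizes are in MB; repeat with smaller targets until satisfied.
DBCC SHRINKFILE (N'SalesDB_Data', 450000);  -- first pass
DBCC SHRINKFILE (N'SalesDB_Data', 400000);  -- next pass
GO
-- Afterwards, address the fragmentation the shrink introduced
-- (run per table, or drive it from a cursor over sys.tables).
ALTER INDEX ALL ON dbo.BigTable REBUILD;
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
```

Shrinking in smaller increments keeps each pass shorter and lets you stop between passes if the downtime window runs out.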
The method suggested below changes the physical structure of the database, as we end up with a new filegroup.
Move Indexes to a New Filegroup
– Create a new filegroup
– Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time
– Drop the old filegroup that we were going to shrink anyway (or shrink it way down if it’s the primary filegroup)
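A sketch of the filegroup-move steps; all names (database, filegroup, file path, table, index, key column) are hypothetical:

```sql
-- 1. Create the new filegroup and add a file to it.
ALTER DATABASE [SalesDB] ADD FILEGROUP [FG_New];
ALTER DATABASE [SalesDB]
    ADD FILE (NAME = N'SalesDB_Data2',
              FILENAME = N'E:\Data\SalesDB_Data2.ndf',
              SIZE = 100GB)
    TO FILEGROUP [FG_New];
GO
-- 2. Rebuild the clustered index onto the new filegroup; this moves the
--    table's data and removes its fragmentation in one step.
CREATE UNIQUE CLUSTERED INDEX [PK_BigTable]
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [FG_New];
```

Nonclustered indexes must be rebuilt the same way, one CREATE INDEX … WITH (DROP_EXISTING = ON) … ON [FG_New] per index.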
Method 2, which involves moving the tables to another filegroup, is complex. Some of the issues that make it complex are:
– LOB data cannot be moved to another filegroup this way. (If LOB data needs to be moved, we must create a new table in the other filegroup, copy the data, and then rename the tables; special care is also needed to move dependent objects such as foreign key constraints, triggers, indexes, and statistics.)
– There may be many tables in the database, and this method requires each of them to have a clustered index. (If a table does not have a clustered index, we need to create one and then drop it later.)
– This method uses CREATE INDEX … WITH (DROP_EXISTING = ON), which has to be run individually on every table. Each table may take a very long time, and any human error can be costly.
– It requires additional disk space: while the command is running on the largest table, it can need more than twice the size of that table.
– Any unknown/unexpected issues.
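For the LOB issue above, the usual workaround is copy-and-rename: LOB columns live on the filegroup named in TEXTIMAGE_ON, which can only be set at table creation. A minimal sketch, with all object names hypothetical:

```sql
-- Create the replacement table with its LOB data on the new filegroup.
CREATE TABLE dbo.Documents_New (
    Id   INT IDENTITY(1,1) NOT NULL,
    Body NVARCHAR(MAX) NULL,
    CONSTRAINT PK_Documents_New PRIMARY KEY CLUSTERED (Id)
) ON [FG_New] TEXTIMAGE_ON [FG_New];

-- Copy the data, preserving identity values.
SET IDENTITY_INSERT dbo.Documents_New ON;
INSERT INTO dbo.Documents_New (Id, Body)
    SELECT Id, Body FROM dbo.Documents;
SET IDENTITY_INSERT dbo.Documents_New OFF;

-- Swap the names; foreign keys, triggers, nonclustered indexes, and
-- statistics must be re-created on the new table separately.
EXEC sp_rename 'dbo.Documents', 'Documents_Old';
EXEC sp_rename 'dbo.Documents_New', 'Documents';
```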
Both of the above methods require a downtime window, as we have to run the shrink to release the free space out of the file. They should be tested on a test server before being implemented in production.
In one case where I used Method 1, shrinking a data file from 470 GB to 300 GB took about 7 hours on a fairly fast server. After the shrink operation we had to rebuild indexes and update statistics, which took another 15 hours, so in total we had about 22 hours of significant performance degradation. We had to monitor the progress of the shrink, reindex, and update statistics jobs during this time. Fortunately, no performance issues were reported after the activity.
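To monitor shrink progress during such a window, sys.dm_exec_requests reports percent_complete for DBCC SHRINKFILE (the command shows up as DbccFilesCompact, or DbccSpaceReclaim in some phases), which helps estimate the remaining time:

```sql
SELECT  r.session_id,
        r.command,
        r.percent_complete,
        r.estimated_completion_time / 60000 AS est_minutes_left
FROM    sys.dm_exec_requests AS r
WHERE   r.command IN (N'DbccFilesCompact', N'DbccSpaceReclaim');
```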