In this article we take a close look at how to shrink SQL Server backups through compression.
Backup compression in SQL Server is a great way to make room for new data, but it comes at a price: the compression work consumes extra CPU, and that additional load can slow down operations running at the same time. What you produce is what we term a ‘compressed backup’. A compressed backup is, naturally, much smaller than an uncompressed one, and is therefore faster to write to disk and to copy over the network.
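As a concrete sketch of this (the database name and target path below are hypothetical), SQL Server requests a compressed backup through the WITH COMPRESSION option of the BACKUP statement, available since SQL Server 2008:

```sql
-- Take a compressed full backup; COMPRESSION trades extra CPU
-- during the backup for a much smaller file on disk.
BACKUP DATABASE SalesDB                 -- hypothetical database name
TO DISK = N'D:\Backups\SalesDB.bak'     -- hypothetical target path
WITH COMPRESSION, INIT, STATS = 10;     -- STATS = 10 reports progress in 10% steps
```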
You should note that this is not at all related to database shrinking, which is a completely different operation and is, in fact, not recommended. Compression is basically getting rid of the extra material, and the whole point is that the smaller backup can still be restored in full. SQL Server backups can be compressed in the following manner:
1. Drop indexes that are occupying space
Indexes come in two types: a clustered index does not take up any extra space, because it is the table data itself, while non-clustered indexes occupy space of their own because they are simply extra copies of selected columns. So, for instance, if non-clustered indexes account for, say, 20% of your database, dropping them before the backup shrinks it by roughly that 20%. This is a fairly simple step and does not take much time, but remember to script the index definitions first so you can recreate them after a restore.
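A sketch of this step in T-SQL (the table and index names are made up for illustration): sys.indexes shows which non-clustered indexes exist, and DROP INDEX removes one:

```sql
-- List the non-clustered indexes and the tables they belong to.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name
FROM sys.indexes AS i
WHERE i.type_desc = 'NONCLUSTERED'
  AND i.name IS NOT NULL;

-- Script the index definition somewhere safe, then drop it to free the space.
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;  -- hypothetical index and table
```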
2. Rebuild the Indexes with a Full Fill Factor
SQL Server stores data in pages. If a page is already full, adding a record later forces a lot of rearrangement and page splitting, so in day-to-day use it is advisable to leave some free space on each page for new records. The percentage of each page that is filled with data when an index is built is called the ‘fill factor’. For a backup, however, you want the data packed as tightly as possible, so you should rebuild the clustered indexes with a fill factor of 100 percent.
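This rebuild can be sketched with ALTER INDEX … REBUILD (again, the index and table names are hypothetical):

```sql
-- Rebuild one clustered index so every page is packed 100% full.
ALTER INDEX PK_Orders ON dbo.Orders       -- hypothetical index and table
REBUILD WITH (FILLFACTOR = 100);

-- Or rebuild every index on the table at once with the same setting.
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (FILLFACTOR = 100);
```

After the restore you would rebuild with your normal working fill factor again, since fully packed pages are poor for ongoing inserts.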
3. The Outcome and Advantages of Compressing Backup Data
With these steps, you can cut the backup down to half of its original size or even less. How long the process takes depends mainly on the fill factor and the number of indexes you have. A backup at half the size means the bandwidth cost of copying it is also halved, and in a disaster recovery situation you will need only half the time to move the backup across the wire. Your long-term storage requirement is likewise cut in half. And because compressed backups are cheaper to take, you can run full backups more frequently, which aids a complete SQL Server recovery in case of a contingency.
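To verify the saving on your own server, the backup history in msdb records both the raw and the compressed size of each backup; this query computes the ratio for the most recent full backups:

```sql
-- Compare raw vs. compressed size for recent full backups;
-- backup_size / compressed_backup_size is the compression ratio.
SELECT TOP (10)
    database_name,
    backup_finish_date,
    backup_size / 1048576.0                         AS raw_mb,
    compressed_backup_size / 1048576.0              AS compressed_mb,
    backup_size / NULLIF(compressed_backup_size, 0) AS ratio
FROM msdb.dbo.backupset
WHERE type = 'D'                                    -- 'D' = full database backup
ORDER BY backup_finish_date DESC;
```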
Victor Simon is a data recovery expert at DataNumen, Inc., the world leader in data recovery technologies, including Access fix and sql recovery software products. For more information visit www.datanumen.com