Learn How SQL Server 2016 Addresses Size Limitation Issues in In-Memory OLTP

In this article we take a close look at the enhanced In-Memory OLTP feature in SQL Server 2016.

The In-Memory OLTP feature, introduced in SQL Server 2014, has been upgraded in the SQL Server 2016 edition. The size limitation that the 2014 edition imposed on memory-optimized tables has now been done away with: tables can grow as large as the user wants them to be, as long as there is enough memory available, which was not possible in earlier editions. Windows Server 2016 provides a massive 12 TB of memory, and most of it, if not all, can be leveraged with the help of In-Memory OLTP.
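For readers who have not yet tried the feature, the sketch below shows roughly how a durable memory-optimized table is declared in T-SQL. The database name SalesDB, the filegroup name, the file path, and the table definition are illustrative placeholders, not anything prescribed by this article.

```sql
-- A memory-optimized filegroup must exist before such tables can be created.
ALTER DATABASE SalesDB
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
TO FILEGROUP imoltp_fg;

-- A durable memory-optimized table: rows live in memory but are also
-- persisted, so the data survives a server restart.
CREATE TABLE dbo.OrderEvents
(
    OrderId   INT            NOT NULL,
    EventTime DATETIME2      NOT NULL,
    Payload   NVARCHAR(1000) NULL,
    CONSTRAINT PK_OrderEvents PRIMARY KEY NONCLUSTERED HASH (OrderId)
        WITH (BUCKET_COUNT = 1000000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

With DURABILITY = SCHEMA_AND_DATA, the table's size is bounded only by the memory available to the instance, which is exactly where the limits discussed below come into play.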

The 2014 edition of SQL Server came with a size limitation of 256 GB for user data in memory-optimized tables, mainly because of constraints in the storage subsystem; any database exceeding the prescribed 256 GB risked choking up. This was not a hard limit, however, and in theory durable tables could hold more than 256 GB. Since storage consumption is dynamic by nature, there is always some margin between actual and theoretical capacity. The 256 GB figure was set after internal testing of the expected behavior in multiple scenarios.
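To see how close a database is getting to such a limit, the built-in DMV sys.dm_db_xtp_table_memory_stats reports per-table memory consumption. The query below is a minimal sketch of how it might be used; the column arithmetic simply converts kilobytes to megabytes.

```sql
-- Memory consumed by each memory-optimized table in the current database.
SELECT
    OBJECT_NAME(object_id)                 AS table_name,
    memory_allocated_for_table_kb / 1024.0 AS allocated_mb,
    memory_used_by_table_kb / 1024.0       AS used_mb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_used_by_table_kb DESC;
```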

This limitation has been removed in the upgraded version of SQL Server. The 2016 edition allows users to store much larger volumes of data in memory-optimized tables. Here, too, there is a theoretical ceiling, namely the 12 TB of memory the system can provide. Experts have tested the system with large data sets and concluded that SQL Server works fine with memory-optimized tables of up to 4 TB; that is not to say it cannot handle more, but at 4 TB it has been verified to perform well. Those results came from a system with 5 TB of total memory, of which 1 TB was used for overhead operations. It is difficult to state whether the system could support 12 TB of data without crashing, because, as noted above, such size limits are often dynamic, leaving scope for margins.
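At the instance level, the total memory held by the In-Memory OLTP engine can be observed through its memory clerks in sys.dm_os_memory_clerks. The following is a rough illustration of such a check, not a tuning recommendation.

```sql
-- Total memory held by the In-Memory OLTP (XTP) engine across the instance.
SELECT
    type,
    SUM(pages_kb) / 1024.0 AS memory_mb
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%XTP%'
GROUP BY type;
```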

It is not only system capability that makes storing something close to 12 TB of data difficult; provisioning that much memory is also very expensive. Although there is scope for extending In-Memory OLTP up to 12 TB, the hardware expenditure involved puts it beyond the reach of many SQL Server users. That said, since the feature is mainly used by companies with extremely large databases and customer bases, there is a strong possibility that some of them will actually end up with systems advanced enough to make use of the whole 12 TB.

Even the sophisticated SQL Server 2016 can suffer a debilitating SQL crash

If you have recently upgraded to SQL Server 2016 and believe that this sophisticated product is immune to SQL crashes, you are sadly mistaken. Despite its array of advanced features, the 2016 edition, too, is vulnerable to crashes. It would therefore be prudent to keep a SQL recovery tool such as DataNumen SQL Recovery at hand to avoid data loss. This incisive software is designed to extract every possible data element stored in a SQL file quickly, and it can handle gigantic files with ease.

Author Introduction:

Alan Chen is President & Chairman of DataNumen, Inc., which is the world leader in data recovery technologies, including Access recovery and SQL recovery software products. For more information visit https://www.datanumen.com/
