Learn about the Adaptive Query Processing mechanism in SQL Server and the features it includes.
The query optimizer in SQL Server generally does a good job of deciding which execution plan will work best for a given query. It does not simply select the cheapest plan; it also takes the number of rows in each table into account. However, its choice is not always the best one. The optimizer can misjudge the size and distribution of your data and select a plan that performs badly, and if its initial cardinality estimates turn out to be incorrect, the plan built on those estimates will most likely perform poorly as well.
Therefore, to help the engine arrive at a genuinely good execution plan, SQL Server 2017 introduces the Adaptive Query Processing feature. To make your SQL Server workload eligible for it, you must enable compatibility level 140, which can be activated through T-SQL. Continue reading to understand the complete mechanism of Adaptive Query Processing in detail.
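As a sketch of the T-SQL involved, the following sets a database to compatibility level 140; the database name `YourDatabase` is a placeholder, so substitute your own:

```sql
-- Substitute your own database name for [YourDatabase].
ALTER DATABASE [YourDatabase]
SET COMPATIBILITY_LEVEL = 140;

-- Verify the current compatibility level.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDatabase';
```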
Batch Mode Memory Grant Feedback
SQL Server needs a minimum amount of memory to complete execution, plus an additional memory grant to ensure that all rows can be accommodated in memory. Performance suffers when the memory grant is not sized properly: an excessive grant wastes memory and reduces concurrency, while an insufficient grant causes spills to disk. Batch mode memory grant feedback addresses this by recalculating the memory actually required after a query executes and updating the grant for subsequent executions of the cached plan.
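Memory grant feedback is enabled automatically under compatibility level 140. If it causes problems for a specific workload, SQL Server 2017 lets you turn it off at the database scope, as sketched below:

```sql
-- Disable batch mode memory grant feedback for the current database.
-- (It can also be disabled per query with the
-- DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK query hint.)
ALTER DATABASE SCOPED CONFIGURATION
SET DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;

-- Re-enable the feedback mechanism later.
ALTER DATABASE SCOPED CONFIGURATION
SET DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK = OFF;
```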
Batch Mode Adaptive Joins
When this feature is used, the choice between a hash join and a nested loop join is deferred until after the first input has been scanned. The adaptive join operator defines a row-count threshold that decides when it is best to transition to a nested loop plan. The procedure for adaptive joins is given below.
- Based on the row count of the build join input, the operator decides whether a nested loop join or a hash join is appropriate. If the row count is small enough that a loop join is optimal, your plan automatically switches to the nested loop algorithm.
- If the build join input exceeds the row-count threshold, no switch occurs, and the plan continues to use the hash join.
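The behavior above can be observed with a query like the following; the table and index names are hypothetical, and note that in SQL Server 2017 batch mode adaptive joins require a columnstore index on at least one table in the query:

```sql
-- Hypothetical schema: dbo.Orders and dbo.Customers.
-- A columnstore index makes the query eligible for batch mode.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Orders_CS
ON dbo.Orders (OrderID, CustomerID, OrderDate);

-- Under compatibility level 140, the optimizer may produce an
-- Adaptive Join operator for this query. Inspect the actual execution
-- plan for the operator and its "Adaptive Threshold Rows" property.
SELECT c.CustomerName, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
  ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '2017-01-01';
```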
Interleaved Execution
This type of execution changes the traditionally unidirectional boundary between the optimization and execution phases of a single query. It enables plans that adapt on the basis of revised cardinality estimates. When the engine encounters a candidate for interleaved execution, which as of now means multi-statement table-valued functions (MSTVFs), it pauses optimization midway, executes the applicable subtree, collects accurate cardinality estimates, and then resumes optimization. In SQL Server 2014 and 2016, the cardinality estimate for an MSTVF is fixed at 100 rows; interleaved execution handles the workload performance issues that arise from this fixed guess.
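To make the MSTVF case concrete, here is a minimal sketch with hypothetical table and function names showing the kind of function that benefits from interleaved execution:

```sql
-- A hypothetical multi-statement table-valued function. Before
-- SQL Server 2017, the optimizer assumed a fixed estimate of
-- 100 rows for its result, regardless of the actual row count.
CREATE FUNCTION dbo.ufn_RecentOrders (@Since date)
RETURNS @Result TABLE (OrderID int, CustomerID int)
AS
BEGIN
    INSERT INTO @Result (OrderID, CustomerID)
    SELECT OrderID, CustomerID
    FROM dbo.Orders
    WHERE OrderDate >= @Since;
    RETURN;
END;
GO

-- Under compatibility level 140, interleaved execution pauses
-- optimization, runs the function, and resumes planning with the
-- real row count instead of the fixed guess of 100.
SELECT c.CustomerName
FROM dbo.ufn_RecentOrders('2017-01-01') AS r
JOIN dbo.Customers AS c
  ON c.CustomerID = r.CustomerID;
```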
If you are using a SQL Server instance in your company for any line-of-business application, always make sure you have a tool to fix sql and deal with contingencies.
Victor Simon is a data recovery expert at DataNumen, Inc., the world leader in data recovery technologies, including accdb repair and sql recovery software products. For more information visit https://www.datanumen.com/