How to Keep Your SQL Server CPU Healthy and Avoid Bottlenecks

In this article, we will discuss different methods to detect and resolve CPU bottlenecks, the factors that ultimately lead to them, and ways of handling excessive query compilations and inefficient query plans.

Whenever we talk about maintaining large SQL Server schemas, the issue that arises most often is the throughput of the SQL Server CPU: as the number of schema objects in the database grows, the cost of queries and database operations grows with it. If left unchecked, the resulting load can eventually damage the SQL Server installation. This article covers methods through which we can increase the throughput of our SQL Server CPU and avoid bottlenecking.

Performance Monitor

To check whether your workload is CPU bound, Performance Monitor is the easiest tool to use. Look at the % Processor Time counter: the CPU is generally considered a bottleneck when utilization stays above 80%. DMVs can also be used to check for a CPU bottleneck within SQL Server. If CPU utilization is very high, we can drill through the queries to find the ones that are consuming most of the CPU resources.
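As a quick DMV-based check, the number of runnable tasks per scheduler indicates CPU pressure; a minimal sketch (scheduler IDs below 255 are the visible user schedulers):

```sql
-- A sustained non-zero runnable_tasks_count means tasks are
-- waiting in line for CPU time, a classic sign of CPU pressure
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;
```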

The following query gives us a view of the procedures that are utilizing our CPU the most. A plan handle can cover multiple statements, and we should check them all. Drilling down through the queries will ultimately help us find the one that is utilizing the CPU the most.

Select top 20
--Total number of times the plan has executed since its last compilation
          sum(qs.execution_count) as EXECUTION_COUNT,
          sum(qs.total_worker_time) as CPU_TIME,
          count(*) as STATEMENT_COUNT,
          qs.plan_handle
from
          sys.dm_exec_query_stats qs
group by
--aggregate per plan
          qs.plan_handle
order by sum(qs.total_worker_time) desc
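To drill down from a plan to its individual statements, the statement offsets in sys.dm_exec_query_stats can be applied to the query text; a sketch (the TOP 20 cutoff mirrors the query above and is an illustrative choice):

```sql
-- Break the cached workload down into individual statements,
-- ordered by the CPU time each statement has consumed
SELECT TOP 20
       qs.plan_handle,
       qs.total_worker_time AS CPU_TIME,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

The offsets are byte positions into the Unicode text, hence the division by 2; an end offset of -1 means the statement runs to the end of the batch.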

Excessive QUERY compilations

Excessive query compilation and optimization is another process that puts a heavy load on the CPU. The cost is directly proportional to the size of the underlying schema and the complexity of the queries being used.

Most of the time, a query plan does not depend on the exact literal values used as predicates in the query, because lookups are based on the key. Reusing query plans is crucial in an OLTP workload, where the cost of compilation can be higher than the cost of executing the query itself; a data warehouse workload, by contrast, benefits more from ad hoc queries. OLTP-based applications therefore have a higher chance of reusing cached plans, which helps by reducing the load on the SQL Server CPU.

SQL Server computes a signature of each query and exposes it in the query_hash column. Entries with the same query hash have a high chance of referring to similar query text: queries that vary only in their literal values will have the same hash. In the example below, the first two queries have the same query hash and the third has a different one, because it filters on a different column.

Select * from user.object where id = 1
Select * from user.object where id = 2
Select * from user.object where listname = 'abc'

A structure tree produced during query compilation determines the query hash. If two queries refer to the same object, it doesn't matter that one uses the fully qualified name and the other uses only the table name.

Inefficient QUERY plans

Some applications submit a lot of ad hoc queries, and we need to keep track of them. They can be detected by grouping the cached entries on the query_hash column.

Select
     a.text as QUERY_SAMPLE,
     b.query_hash,
     b.TOTAL_ENTRIES,
     c.query_plan as PLAN_SAMPLE
from
    (Select top 40 query_hash,
    min(sql_handle) as SQL_HANDLE_SAMPLE,
    count(*) as TOTAL_ENTRIES,
    min(plan_handle) as PLAN_HANDLE_SAMPLE
from sys.dm_exec_query_stats
  --grouping with respect to query hash
group by query_hash
--making sure we get enough rows
having count(*) > 1
order by count(*) desc) as b
cross apply sys.dm_exec_sql_text(b.SQL_HANDLE_SAMPLE) as a
cross apply sys.dm_exec_query_plan(b.PLAN_HANDLE_SAMPLE) as c

Eliminating unwanted cached plans brings some major benefits: memory is freed so that other compiled plans can be cached, and the buffer cache gets more memory. We can use sys.dm_exec_query_plan to fetch the various query plans and compare them, to judge whether the plan differences we found are actually necessary for maximum performance.
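A sketch of both steps, fetching the plans for inspection and then evicting a single unwanted one by its handle (the hex handle below is only a placeholder):

```sql
-- Fetch the cached plans, most expensive first, for inspection
SELECT qs.plan_handle,
       qs.total_worker_time,
       qp.query_plan          -- XML showplan for comparison
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

-- A single unwanted plan can then be evicted by its plan handle;
-- substitute a real 0x... handle from the query above
-- DBCC FREEPROCCACHE (0x...);
```

Passing a plan handle to DBCC FREEPROCCACHE removes only that plan; running it with no argument clears the entire plan cache, which forces a compilation storm and should be avoided on a busy server.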

Once we determine which queries should be parameterized, we have to find a place to parameterize them, and the best place is the client application. How to do this depends on which client API we are using, but the query string with a literal predicate is consistent across all APIs: we simply use a question mark as the parameter marker when building the string.

--query submission as ad hoc 
Select * from Stakeholders.Emp where Emp.Empid = 100
--query submission as parameterized
Select * from Stakeholders.Emp where Emp.Empid = ?
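The same effect can also be achieved on the server side with sp_executesql, which promotes plan reuse in the same way; a sketch using the illustrative Stakeholders.Emp table from above:

```sql
-- Parameterized submission via sp_executesql; repeated calls with
-- different @EmpId values reuse one cached plan
EXEC sp_executesql
    N'Select * from Stakeholders.Emp where Emp.Empid = @EmpId',
    N'@EmpId int',
    @EmpId = 100;
```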

PARAMETERIZATION

A proper API should be used to bind the value to the parameter marker. The provider or client driver then submits these queries in parameterized form. Because the query is parameterized, it reuses existing cached plans. We can also force parameterization at the database level.

ALTER DATABASE [database_name] SET PARAMETERIZATION FORCED

SQL Server Profiler is helpful for detecting excess compilations. We need to look at the profiler trace to see which procedures are being recompiled. The trace provides us with this information, along with the reasons why the recompilations are occurring. The following query can be used to see events related to recompilation.

Select
          TextData, SQLHandle, ObjectID, SPID
From
--Location of the trace file
          fn_trace_gettable('c:\compiletrack.trc', 1)
Where
--recompilation-related event classes (37 = SP:Recompile, 166 = SQL:StmtRecompile)
          EventClass in (166, 37, 75)

Excessive Re-compilations

If excessive compilations and recompilations are detected, the following options can be used.

  • Changing temporary tables to table variables, because changes in the cardinality of a table variable do not cause recompilations.
  • Using the KEEP PLAN query hint, because it sets the recompilation threshold of temporary tables to the same value as for permanent tables.
  • Turning off automatic statistics updates on indexes and statistics defined on the tables or views, which can also prevent recompilations.
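The first two options above can be sketched as follows (the @work / #work names are illustrative):

```sql
-- Option 1: a table variable instead of a temporary table;
-- cardinality changes in @work do not trigger recompilation
DECLARE @work TABLE (id int PRIMARY KEY);

-- Option 2: keep the temporary table but add the KEEP PLAN hint,
-- which raises its recompile threshold to that of a permanent table
CREATE TABLE #work (id int PRIMARY KEY);
SELECT id FROM #work OPTION (KEEP PLAN);
```

The trade-off is that table variables carry no statistics, so they suit small row counts; for large intermediate results a temporary table with KEEP PLAN is usually the safer choice.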

Another major factor contributing to excessive SQL Server CPU usage is inefficient query plans. They can be detected by comparing the CPU consumption of cached queries. The following query determines which queries are causing high CPU utilization.

Select
          a.objectid,
          a.number,
          a.text,
          HIGH_CPU_QUERY.total_worker_time,
          HIGH_CPU_QUERY.plan_handle
from
         (Select top 20
         qs.total_worker_time,
         qs.plan_handle
         from
         sys.dm_exec_query_stats qs
         order by qs.total_worker_time desc) as HIGH_CPU_QUERY
         cross apply sys.dm_exec_sql_text(HIGH_CPU_QUERY.plan_handle) as a
order by HIGH_CPU_QUERY.total_worker_time desc

The following options can be used to detect and address inefficient query plans.

  • Checking for issues related to bad cardinality estimates.
  • Running UPDATE STATISTICS on the tables involved in the detected query and then checking whether the problem still exists.
  • Considering rewriting the query.
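The statistics refresh in the second option can be sketched as follows (Stakeholders.Emp is the illustrative table used earlier):

```sql
-- Rebuild all statistics on the table with a full scan, then
-- re-examine the plan to see whether the estimates have improved
UPDATE STATISTICS Stakeholders.Emp WITH FULLSCAN;
```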

Using all of the methods above, we can both detect and resolve the factors contributing to SQL Server CPU bottlenecking, which as a whole prolongs the life of our SQL Server.

Database Crash

If your database crashes, you can easily roll back to the backup of your last stored (stable) data, but data written after that backup cannot be recovered this way. To recover such data, you can use a third-party SQL Server repair tool.

Author Introduction:

Upton Mark is a data recovery expert at DataNumen, Inc., which is the world leader in data recovery technologies, including Access recovery and Word recovery software products. For more information visit www.datanumen.com
