“Catch the mist, catch the myth…Catch the mystery, catch the drift”
So after basking in last week’s accomplishment of passing the AZ-900: Microsoft Azure Fundamentals certification exam, this week we decided that we needed to come down from the clouds. Now grounded, we knew exactly where we needed to go. For those who know me well, I have spent a large part of my career supporting and engineering solutions for Microsoft SQL Server. Microsoft released SQL Server 2019 back in November of 2019.
For the most part, we really haven’t spent too much time on our journey going deep into SQL Server. So it has been long overdue that we give SQL Server 2019 a looksy, and besides, we needed to put the S-Q-L back in SQL Squirrels.
SQL Server 2019 is Microsoft’s most ambitious release yet of a product that has now been around for roughly three decades.
The great folks at MSFT built on previous innovations to further extend the product’s support for development languages, data types, on-premises and cloud environments, and operating systems. Just a few of the big headline features of SQL Server 2019 are:
- Intelligent Query Processing (IQP)
- Memory-optimized TempDB
- Accelerated Database Recovery (ADR)
- Data Virtualization with PolyBase
- Big Data Clusters
So let’s peel the onion and begin diving into SQL Server 2019. So, where to begin? Full disclosure: I am a bit of a SQL Server internals geek, so we started with a tour of some of the offerings in Intelligent Query Processing.
The IQP feature family in SQL Server 2019 has a broad impact, improving the performance of existing workloads with minimal implementation effort. In theory, you simply change the database compatibility level of your existing user databases when you upgrade to 2019, and your queries just run faster with no code changes! Of course, this sounds almost too good to be true.
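For reference, the “just flip the compat level” step looks like this (the database name here is only a placeholder):

```sql
-- Moving an existing user database to the SQL Server 2019
-- compatibility level enables the IQP features discussed below.
ALTER DATABASE [YourUserDatabase]
SET COMPATIBILITY_LEVEL = 150;  -- 150 = SQL Server 2019 (15.x)

-- Verify the current level:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourUserDatabase';
```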
“Here we are now, entertain us”
To take us through an unbiased look at IQP is none other than the man, the myth, the legend, Microsoft Certified Master Brent Ozar. Brent produced two amazing presentations for the SQL Server community: What’s New in SQL Server 2019 and The New Robots in SQL Server 2017, 2019 and Azure.
Brent with his always quirky, galvanic, and sometimes cynical style gives us the skinny on the Good, the Bad, and the Ugly on IQP.
“I never played by the rules, I never really cared…My nasty reputation takes me everywhere“
First, we began by looking at table variables. For those who have been around the SQL Server block, table variables have been around since the good ol’ days of SQL Server 2000 as a performance-improvement alternative to temporary tables.
However, SQL Server doesn’t maintain statistics on table variables, which, as you probably know, the query optimizer uses to determine the best execution plan. Basically, the query optimizer just assumes that a table variable has only one row, whether it does or not. So performance wasn’t so great, to say the least, and table variables don’t have the best reputation.
Well, SQL Server 2019 to the rescue! (Sort of.) In SQL Server 2019, by simply switching the database “compat mode” to SQL Server 2019 (15.x), SQL Server will now use the actual number of rows in the table variable to create the plan. This is just fantastic! However, as Brent diligently points out, this nifty little enhancement isn’t all rainbows and unicorns. It does solve the pesky row-estimate problem the query optimizer has with table variables, but it introduces the enigma that is parameter sniffing.
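As a quick sketch of the classic shape of the problem (all table and column names here are made up): under older compatibility levels, the final SELECT below would be optimized assuming @Recent holds one row no matter how many it actually holds, while under compat level 150 the plan is compiled using the real row count.

```sql
-- Hypothetical example of table variable deferred compilation.
DECLARE @Recent TABLE (OrderID int PRIMARY KEY, CustomerID int);

INSERT INTO @Recent (OrderID, CustomerID)
SELECT OrderID, CustomerID
FROM dbo.Orders              -- assumed table
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

-- Under compat level 150, this plan reflects the actual number of
-- rows in @Recent rather than a fixed estimate of 1.
SELECT c.CustomerName, r.OrderID
FROM @Recent AS r
JOIN dbo.Customers AS c      -- assumed table
  ON c.CustomerID = r.CustomerID;
```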
Parameter sniffing has plagued SQL Server since the beginning of time and is one of the more common performance issues we SQL people encounter. In simplest terms, SQL Server tries to save optimization work by reusing the execution plan from a previous run of the query. The problem arises when different parameter values produce very different result sets, making the reused plan a poor fit. Overall, this feature is a definite improvement, but not a silver bullet.
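One common (if blunt) workaround when sniffing bites is to ask for a fresh plan on every execution. The procedure and table names below are purely illustrative:

```sql
-- OPTION (RECOMPILE) trades extra compile-time CPU for a plan
-- tailored to the parameter value actually supplied.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders          -- assumed table
    WHERE CustomerID = @CustomerID
    OPTION (RECOMPILE);
END;
```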
“Memories may be beautiful and yet…”
Next, we took a look at the Memory Grant Feedback (row mode) feature, which builds on the batch mode version that debuted in SQL Server 2017. A memory grant is how the SQL Server database engine allocates the memory it will use for a given query.
Sometimes it allocates more or less than the memory actually needed. Obviously, if too much memory is allocated, then there will be less memory available for other processes, and if too little is allocated, then the query will spill to disk and performance will suffer.
“Here I come to save the day!”
Once again, by simply switching the database “compat mode” to SQL Server 2019 (15.x), SQL Server will now automatically adjust the amount of memory granted for a query. Awesome! … but what’s the catch? Well, similar to the new table variable enhancement, memory grants are sensitive to the dreaded parameter sniffing, as SQL Server bases its decision-making on the previously run query. In addition, if you rebuild your indexes or create new indexes, the adaptive memory grant feature will forget everything it learned about previously run plans and start all over.
This is an improvement over the past behavior of memory grants, but unfortunately not a panacea.
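For what it’s worth, if the feature misbehaves for a particular workload, it can be switched off per database via a database scoped configuration rather than by rolling back the compatibility level:

```sql
-- Disable row mode memory grant feedback for the current database:
ALTER DATABASE SCOPED CONFIGURATION
SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;

-- ...and re-enable it later:
ALTER DATABASE SCOPED CONFIGURATION
SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;
```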
“I will choose a path that’s clear.. I will choose free will”
Next, we explored another feature that first appeared in SQL Server 2017: Adaptive Joins. Back then, this sleek feature was only available for columnstore indexes, but in SQL Server 2019 it is available for our run-of-the-mill B-tree, rowstore workloads. With Adaptive Joins, the query optimizer determines a row-count threshold and then, at runtime, chooses between a Nested Loops or Hash Match join operator based on the actual number of rows. Again, we simply switch the database “compat mode” to SQL Server 2019 (15.x) and abracadabra!
This is awesome! But… once again parameter sniffing rears its ugly head. As Brent astutely points out during his presentation and documents in his detailed blog post, when dealing with complex queries SQL Server will often produce several plans. So when SQL Server 2019 tries to solve this problem by giving the optimizer more options to choose from, it can backfire in some cases, depending on your workloads.
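When adaptive joins do backfire, they can be turned off at the query level with a hint, or for an entire database; the query below is only illustrative:

```sql
-- Per query: keep everything else in compat level 150, but opt this
-- one statement out of adaptive joins.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o         -- assumed table
WHERE o.CustomerID = 42
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));

-- Per database:
ALTER DATABASE SCOPED CONFIGURATION
SET DISABLE_BATCH_MODE_ADAPTIVE_JOINS = ON;
```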
“Just when I thought our chance had passed..You go and save the best for last”
The last feature we keyed in on as part of IQP was batch mode execution. This is another great innovation that came straight out of columnstore indexes (introduced in SQL Server 2012). Now, in SQL Server 2019, users can take advantage of this enhancement without needing a columnstore index.
Batch mode is a huge performance enhancement, especially for CPU-bound queries and for queries that use aggregations, sorts, and GROUP BY operations. Instead of processing one row at a time, batch mode performs scans and calculations on batches of rows. To enable batch mode processing, all you have to do is… that’s right… you guessed it… switch the database “compat mode” to SQL Server 2019 (15.x).
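Batch mode on rowstore rides along with compatibility level 150, but it also has its own per-database switch, which is handy for isolating its effect when testing:

```sql
-- Toggle batch mode on rowstore independently of the compat level:
ALTER DATABASE SCOPED CONFIGURATION
SET BATCH_MODE_ON_ROWSTORE = OFF;

-- To see whether a given query actually used it, check the operators
-- in the actual execution plan for "Actual Execution Mode: Batch".
```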
“Don’t cry Don’t raise your eye It’s only teenage wasteland”
Continuing our journey with SQL Server 2019, we ventured to take a sneak peek at the improvements SQL Server 2019 made to TempDB. Guiding us through the murky terrain of TempDB was SQL royalty and Microsoft Certified Master Pam Lahoud. Pam presented two short but very thorough videos as part of the Data Exposed series:
- It Just Runs Faster: SQL Server 2019 TempDB Improvements
- How (and When) To: Memory Optimized TempDB Metadata
In these great videos, Pam gives us the lowdown on the improvements to TempDB in SQL Server 2019. TempDB is, of course, one of the famous (or infamous) system databases in SQL Server. As SQL Server has been enhanced over the last several decades, more and more “stuff” has been chucked into TempDB, which has often been referred to as the “Wastelands.” Some of the activities that go on in TempDB are:
- Temp Tables
- Table Variables
- DBCC CHECKDB
- Table-Valued Functions
- Row Versions
- Online Index Operations
- Statistics Updates
- Hash Worktables
As a result of all these happenings, we often experience pain in the form of object allocation contention, metadata contention, and temp table cache contention.
“Look up in the sky! It’s a Bird… It’s a Plane… It’s ..“
..SQL Server 2019. (Of course)
In SQL Server 2019, there are two major improvements that impact TempDB performance. The first helps reduce temp table cache contention: SQL Server 2019 intelligently partitions cache objects and optimizes cache lookups.
To address object allocation contention, SQL Server 2019 now performs concurrent PFS updates. Concurrent PFS updates allow multiple threads to share a latch on the PFS page, so we can have more concurrent threads and less contention on those object allocation pages.
The next major enhancement is memory-optimized metadata tables. This is the big one that made the brochure!
Under the hood, SQL Server 2019 moves the TempDB system objects into memory-optimized tables with latch-free, lock-free structures, which greatly increases the concurrency we can have against those metadata tables and helps alleviate metadata contention.
By default, the Memory-Optimized TempDB Metadata feature is not turned on, but it’s quite easy to enable:
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
For those who see a flurry of TempDB activity, this feature seems like a no-brainer.
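One practical note: the setting only takes effect after a restart of the SQL Server service. Once restarted, a quick way to confirm the feature is active:

```sql
-- Returns 1 if memory-optimized TempDB metadata is enabled, 0 if not.
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized')
    AS TempdbMetadataMemoryOptimized;
```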
“Makes us want to stay, stay, stay… For awhile”
Below are some of the places I am considering for my explorations for next week:
- Continuing with SQL Server 2019
- Google Cloud Certified Associate Cloud Engineer Path
Originally published at https://sqlsquirrels.com on September 18, 2020.