Virtual Indexing Part 8: The Indexed Splitter Function

The fastest way to split a string is to have already done so.

Introduction

Happy 2021! Indexed views and tally tables – a virtually unexplored topic, one I will write about at length this year. This will be a quick read and there are likely to be at least a couple typos; time is short, so today is about quantity, not quality. In Part 3: The Indexed Function I said, “the fastest way to perform a calculation is to have already done so;” in this installment I will demonstrate that the fastest way to split a string is also to have already done so.

When I introduced the indexed function I used prime numbers as my example. 7 is always a prime number; Factorial(5) always equals 120. You can create an indexed function for prime numbers, factorials, or anything else provided the function always returns the same output for a given argument. What if we wanted a more complex indexed function, such as one that splits a string?

I know about database normalization but I am not responsible for designing most of the databases I work with. If you’ve been in this racket for a couple fortnights you have run across that vendor DB written by great application developers with terrible back-end databases. It’s not uncommon to see fields that contain a delimited list – ‘1,4,13,99’ instead of a single atomic value. You can write an ETL package that leverages a good string splitter (such as DelimitedSplit8K or DelimitedSplit8K_LEAD) to regularly split and copy the values into a correctly normalized table. In this article I will show you a much faster, maintenance-free alternative which leverages a numbers table and an indexed view.

Introducing The “Indexed Splitter”

For this you will need a tally table with at least as many rows as the longest string you intend to split; 8,000 is enough unless you need to split strings longer than 8K. My tally table (dbo.tally) has 1,000,000 rows.
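If you don’t have a physical numbers table yet, here’s one quick way to build one (just a sketch; any reliable million-plus-row source will do, and the name and row count match what this article assumes):

CREATE TABLE dbo.tally (N BIGINT NOT NULL CONSTRAINT pk_tally PRIMARY KEY CLUSTERED);

INSERT dbo.tally (N)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM       sys.all_columns AS a -- cross join two system views
CROSS JOIN sys.all_columns AS b; -- for a reliably large row source

With that in place, on to the sample data.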

Sample data

CREATE TABLE dbo.Test (ID INT, Column1 VARCHAR(20));
INSERT INTO  dbo.Test (ID,Column1) 
  VALUES (1,'ORM;PS;SUP'),(2,'ABC;XYZ;999;123');

Next we’re going to borrow Jeff Moden’s DelimitedSplit8K logic to create a set-based splitter that leverages a permanent tally table (a regular ol’ properly indexed numbers table named dbo.tally). There’s a small performance hit compared to DelimitedSplit8K but the upside makes it more than worth it, as you will see in a moment. Here’s our splitter as a view.

The Splitter as a View

CREATE OR ALTER VIEW dbo.TestSplit WITH SCHEMABINDING AS 
SELECT 
  Id   = t.ID, 
  item = 
    SUBSTRING
    (
      t.Column1,
      tt.N+SIGN(tt.N-1), -- start: position 1 for the first item, one past the delimiter otherwise
      ISNULL(NULLIF((CHARINDEX(';',t.Column1,tt.N+1)),0),LEN(t.Column1)+1)-(tt.N)-SIGN(tt.N-1) -- length: up to the next delimiter or the end of the string
    ),
  ItemIndex = tt.N+1
FROM       dbo.Test  AS t
CROSS JOIN dbo.tally AS tt
WHERE      tt.N <= LEN(t.Column1)
AND        (tt.N = 1 OR SUBSTRING(t.Column1,tt.N,1) = ';'); -- keep only positions where an item begins
GO

The values are split and returned with their associated ID. This logic is going to be applied to an indexed view, which means we can’t include parameters; the delimiter and the source of the string need to be embedded in our logic. Above we are splitting the contents of dbo.Test.Column1 using a semicolon as the delimiter.

SELECT f.Id, f.Item FROM dbo.TestSplit AS f;

Below is exactly what we expected.

Function Output

Result Set

Next the execution plan.

Splitter View Execution Plan

That’s a good plan: set-based, linear and fast. There are faster splitters out there but none that work without the APPLY table operator, and APPLY is not allowed in indexed views. Subqueries either. My splitter logic doesn’t use APPLY, subqueries or any other construct that prevents me from adding an index to my view, so let’s see if we can add a unique clustered index and turn this view into an indexed split where the split happens ahead of time and only once (unless the value is modified or deleted).
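For contrast, here’s the kind of view that would not qualify: a hypothetical variant (the names are mine, for illustration only) where the APPLY is the blocker.

CREATE OR ALTER VIEW dbo.TestSplitApply WITH SCHEMABINDING AS
SELECT t.ID, s.N
FROM        dbo.Test AS t
CROSS APPLY (SELECT tt.N FROM dbo.tally AS tt WHERE tt.N <= LEN(t.Column1)) AS s;
GO
-- Fails: APPLY (and the derived table it wraps) is not allowed in an indexed view
CREATE UNIQUE CLUSTERED INDEX uq_cl__testSplitApply ON dbo.TestSplitApply(ID, N);

Our splitter view has no such blockers: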

CREATE UNIQUE CLUSTERED INDEX uq_cl__testSplit ON dbo.TestSplit(Id,Item)

No error. Let’s run the same SELECT query from earlier and check the execution plan.

The Indexed Split Execution plan

It doesn’t get cleaner or faster than that. This is a powerful example of what you can accomplish when using a physical numbers table to build an indexed view. This barely scratches the surface, as I hope to demonstrate in the near future.
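Before moving on, a quick sanity check of that “only once” claim: the split happens at write time, not at read time (the new ID and string below are arbitrary).

INSERT INTO dbo.Test (ID, Column1) VALUES (3,'AAA;BBB');

SELECT f.Id, f.Item           -- no splitting happens here...
FROM   dbo.TestSplit AS f     -- ...the items were persisted by the INSERT
WHERE  f.Id = 3;              -- returns: (3,'AAA'), (3,'BBB')

DELETE dbo.Test WHERE ID = 3; -- tidy up so the examples below match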

Make my index a double

In 2020 “make it a double” became a meme (caption: “2o2o got me like ^^^”).

I digress.

Now that your indexed view is in place you can expand its power by adding more indexes! I’ll have my clustered index with a nonclustered chaser. Here we’ll create an index for Item.

CREATE NONCLUSTERED INDEX nc__testSplit__item ON dbo.TestSplit(Item);

Now a query that filters on item.

SELECT f.* 
FROM   dbo.TestSplit AS f WITH (NOEXPAND)
WHERE  f.item <> '999,123';

To my chagrin, the optimizer often ignores nonclustered indexes on indexed views, which is the case here; this is why I’m using NOEXPAND. With the hint in place the execution plan is even better than before: an index seek against a narrower and lighter nonclustered index.

Execution plan leveraging a second, nonclustered index

The Indexed Splitter Function

Let’s wrap the guy in an iTVF for a nasty fast indexed function that splits a string ahead of time and maintains itself as needed! We’ll include the option to exclude a value via an @exclude parameter.

The Function

CREATE FUNCTION dbo.itvf_TestSplit
(
  @exclude VARCHAR(20)
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT f.ID, f.Item, f.ItemIndex
FROM   dbo.TestSplit AS f WITH (NOEXPAND)
WHERE  f.Item <> @exclude;
GO

Let’s give’r a spin and check the execution plan.
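For reference, the spin itself (the exclude value mirrors the earlier filter):

SELECT f.ID, f.Item, f.ItemIndex
FROM   dbo.itvf_TestSplit('999,123') AS f;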

Execution Plan


Same as before, but now we are accepting a parameter, which means we have an indexed function that can split strings ahead of time. Not even STRING_SPLIT or a CLR splitter can compete here.

What about an “Item Number”?

Sometimes you need the item’s ordinal position in the string; for this we need an item number (labeled “ItemNumber” in the logic below). To return an item number without a sort in the execution plan, some assembly is required. DelimitedSplit8K and DelimitedSplit8K_LEAD both use ROW_NUMBER() to produce the ItemNumber column, but ROW_NUMBER is not allowed in indexed views, which is why I included “ItemIndex”. It represents the item’s position in the string, something I normally don’t need. Here, however, it’s perfect: it allows me to add an ItemNumber while still avoiding a sort in my final plan. Not a big deal with strings 8K or less, but it is for longer strings (something an indexed function can handle at no extra cost). First the new index:

CREATE NONCLUSTERED INDEX nc_poc__testSplit__ItemIndex ON dbo.TestSplit(Id, ItemIndex);

With that index in place let’s run a query which includes the item number.

SELECT 
  Id= split.Id,
  ItemNumber = ROW_NUMBER() OVER (PARTITION BY split.ID ORDER BY split.ItemIndex),
  split.Item
FROM dbo.itvf_TestSplit('999,123') AS split;

Output

Result set that includes ItemNumber

Execution plan

easy peasy lemon-squeezy

Still a fantastic execution plan: the same low cost as before despite the added complexity. Again, the calculations have already happened; this function is leveraging one or more indexes to retrieve, not compute, the required values.

Conclusion

The indexed split is another game changer, yet it barely scratches the surface of the power of the virtual index and the indexed function. When you add a physical numbers table to your indexed views they can do more than pre-join or pre-aggregate values. Here we introduced the world’s first “indexed split” and we’re still just getting started! Tomorrow I’m going to introduce the Indexed Hierarchy. Thanks for reading!

Virtual Indexing Part 6: Combinatorial Index Optimization

In Part 5 we learned how to return a distinct set of numbers from a one-billion-row table in under 1/10th of a second. We can do better!

Disclaimer

I’ve never made it to Part 6 of a technical series. For me, a huge obstacle to getting quality content out in a timely manner is the proof-reading and final editing process. Like a few of my friends in this field, I have many new techniques and programming concepts that I’m dying to share which have been decaying in MS Word Docs and .sql files for years. I started my NTally article in 2014 for example.

In September, while quarantined and after listening to too much David Goggins, I decided to step up my writing (and research) game and simply began publishing the first draft of my work the moment I’m done authoring it… What I’m trying to say is, I’ve decided to err on the side of delivering content. This means there will be typos, now and in the future. Sorry. I feel better getting it done, however, typos and all, than taking my good ideas to the grave. If you are a writer/blogger or are aspiring to do so, I suggest following my lead and just get it done – It feels so much bettr! 😉

Introduction

This series began with the introduction of the virtual index: the suitably sorted stream of numbers returned by T-SQL ROW_NUMBER which acts very much like a physical index with respect to sorting, grouping and filtering. Unlike a physical index, created using DDL, the virtual index is invoked with a SELECT statement against fnTally and goes away when your query is finished. No disk space to consume, no fragmentation to address, nothing to check into your CI/CD system – just a magical, in-memory index (suitably sorted stream) which comes and goes as requested.

In Part 5 we reviewed how to return a distinct set of numbers from a one-billion-row table in under 1/10th of a second using what I have coined a multidimensional virtual index – that is, combining a virtual index with a real one to speed things up. More technically, we’re giving the optimizer a wider range of options than it would have with only one index in place.

So, if adding one virtual index to your query can speed things up, can we speed things up even further by adding more? I mean, they’re basically free and can be pulled out of thin air – why not? That’s what we’re going to review today.

Dimensionality

Seasoned SQL developers, as well as anyone who has written MDX expressions, are familiar with the concept of dimensionality. Using SSAS, for example, we can write an MDX expression to return a matrix with two or more dimensions, as shown below. The Month dimension is on the x-axis, AKA the columns(0) axis in MDX; the Vehicle dimension is on the y-axis, AKA rows(1). For brevity I included two Month members and three Vehicle members. The numbers can represent anything (e.g. Sales, Inventory, etc.); the result looks something like this:

Figure 1. Two-dimensional matrix with Month on the x-axis, Vehicle on the y-axis.

Two-dimensional matrix with a dimension for months and a second dimension for vehicles.

Now let’s say we wanted a deeper level of granularity for each vehicle color. We could add a third dimension for color to our expression to return something like this:

Figure 2. Three-dimensional matrix; a 3rd dimension added for color

Here we’ve added a third dimension for “color” to transform it into a three-dimensional matrix.

What does this have to do with indexing or performance, you ask? Let’s apply this same concept, leveraging RangeAB, to perform what I’ve coined Combinatorial Index Optimization. Think of it as combinatorial optimization for virtual indexes and indexed functions.

Combinatorial Index Optimization

Below is our query from Part 5 of this series, where we reduced the time it took to return a distinct set of numbers from a one-billion-row set from 111 seconds using traditional methods to 0.07 seconds using a two-dimensional index.

  SELECT fnFilter.N1 --fnFilter.RN-(b.Mn-1)
  FROM  (SELECT MIN(t.N), MAX(t.N) FROM #t AS t) AS b(Mn,Mx)
  CROSS APPLY
  (
    SELECT      r.RN, r.N1
    FROM        core.rangeAB(b.Mn,b.Mx,1,1) AS r
    CROSS APPLY (SELECT TOP (1) 1 FROM #t AS t WHERE t.N = r.N1) AS z(x)
  ) AS fnFilter
  GROUP BY fnFilter.N1, b.Mn
  ORDER BY fnFilter.N1;

As you may recall, we reduced the runtime by using the tally table for the left side of the one-to-many relationship and only returning numbers that exist in the source table (#t). We’ve reduced the question from, “where is the next distinct value?” to a simple yes/no question: “Is 1 in here? Is 2 in here?” This is an example of a two-dimensional virtual index created by way of combinatorial index optimization.

Figure 3. Two-dimensional Virtual Index

The tally table (RangeAB in this example)

Virtual Indexing while wearing 3-D glasses

As I have said before, the great thing about a virtual index is that it’s free. Armed with a correctly constructed tally table function I can conjure up one any time I wish just like an AD&D Level 7 Magic User.

In the real-world, for a table with millions/billions of rows there would almost certainly be a table or other indexed object (e.g. an indexed view) that I could use to join to for filtering. Either way, using rangeAB, and it’s @Gap parameter we can add another index for an even cooler type of filtering. First let’s review our 2-D solution. Note the code and my comments:

Figure 4. How the Two-dimensional Virtual Index works

Next our 3-D solution.

DECLARE @Gap BIGINT = 100, @N BIGINT; -- @N captures the output so returning rows doesn't skew the timings

SELECT @N = fnFilter.N1
  FROM        (SELECT MAX(t.N) FROM #t AS t) AS mx(N)
  CROSS APPLY core.rangeAB(1,mx.N,@Gap,1) AS b
  CROSS APPLY
  (
    SELECT r.RN, r.N1
    FROM   core.rangeAB(b.N1,b.N2-1,1,1) AS r
    CROSS APPLY (SELECT TOP (1) 1 FROM #t AS t WHERE t.N = r.N1) AS z(x)
  ) AS fnFilter
  WHERE    EXISTS (SELECT 1 FROM #t AS t WHERE t.N BETWEEN b.N1 AND b.N2-1) -- $$$
  GROUP BY fnFilter.N1
  ORDER BY fnFilter.N1;

This image will help explain the changes.

Figure 5. How the Three-dimensional Virtual Index works

Adding this 3rd dimension will dramatically improve performance in two distinct ways. First, it reduces the amount of work required simply by ignoring entire partitions of our sequence once the EXISTS statement determines it’s safe to do so. Just as Figure 3 illustrated the 2-D virtual index, here’s what a 3-D virtual index looks like.

Figure 6. The Three-dimensional virtual index

Partitions without members are ignored
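To ground what the outer rangeAB call is doing, here is the @Gap behavior as I’ve inferred it from the calls in this series (treat the exact column list as an assumption): one row per @Gap-wide slice, with N1 and N2-1 bounding the slice and RN numbering it.

SELECT r.RN, r.N1, SliceEnd = r.N2-1
FROM   core.rangeAB(1,15,3,1) AS r;
-- RN  N1  SliceEnd
-- --  --  --------
-- 1   1   3
-- 2   4   6
-- 3   7   9
-- 4   10  12
-- 5   13  15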

Performance Test 1

For brevity I set @gap to 3 for the example above. Notice how, for 1-3 and 13-15, our second tally table (by order of appearance in this article) is pre-filtering entire subsets of work. Next the performance test. For this test I used the same test harness from Part 5 of this series to create one billion rows with numbers ranging from 1 to 10,000 with very few gaps. For the 3-D solution I set @gap to 100.

Figure 7. Test #1 Results – 1 Billion, Dense

The 3-D index improves with a parallel plan

In this case the 3-D index with a serial plan slowed things down just a smidgen because I purposely made sure there were few gaps. I included this test to demonstrate how, unlike the 2-D index, we get a faster result with a parallel plan, despite the fact that the additional rangeAB call couldn’t really filter anything. Adding an additional dimension gives the optimizer better multithreading options. That is the second distinct advantage of the 3-D index over a 2-D one.

Performance Test 2

For this test I filled a table with 50,000 numbers ranging from 1 to 1,000,000. Next I used my test harness to duplicate the rows until I had two billion. This means my tally table must generate 1,000,000 rows for the left side of my query. This is where the 3-D index shines. I set my @gap parameter to 6400 for this test.

Figure 8. Test #2 Results – 2 Billion, Sparse

3-D virtual index with gap reduction saves the day.

The 2-D solution runs for 12 seconds regardless of how many CPUs you throw at it. The 3-D solution runs for 1.3 seconds serial, 0.396 seconds with a parallel plan. That’s correct, less than 1/2 a second to extract 50K distinct values from a set of 2,000,000,000.

Conclusion

This, again, is what I mean by More Fast, Less Code. In Part 7 we’ll dive into what is arguably my favorite topic: Finite Opposite Numbers. You’ll want to be seated for that one.

Virtual Indexing Part 3: The Indexed Function

“The fastest way to perform a calculation is to have already done so.”


Introduction

Here’s an interesting problem: given a sequence of numbers between M and N, how many are prime numbers? How would you solve this and how efficient will your solution be if M=0 and N=10,000,000? On my PC I’m able to get the job done in about 3 seconds.

Introducing the Indexed Function

One day while trying to figure out some recursive query I stumbled across a SQL Server Central post about calculating a Fibonacci sequence where someone asked, “why not just store the values in a table?” It seems obscenely obvious to me now, but at the time it blew me away that I had never thought of it. It still blows my mind how few developers take advantage of this. Just populate a table (on-disk or in-memory) with the values you need, add indexes as needed, then build an inline table valued function (iTVF) to retrieve the value with an index seek. This allows your function to run in O(1) time for each value passed to it. Indexed values are stored as a suitably sorted stream, which makes it possible to retrieve the value(s) without a scan. A web search for “create Fibonacci function”, “function for prime numbers” or something similar will generally return various algorithms (mostly recursive) but scarce mention of pre-calculating problems where the values never change. Doesn’t Fibonacci(7) always equal 13? Doesn’t Factorial(5) = 120? When is 7 not a prime number? I suspect that the fastest way to perform a calculation is to have already done so, then save the result in an index for fast retrieval.
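To make the pattern concrete, here’s a tiny hypothetical version built around the Fibonacci example (the table and function names are mine, purely for illustration):

-- Persist the immutable values once...
CREATE TABLE dbo.Fibonacci (N INT NOT NULL PRIMARY KEY, Value BIGINT NOT NULL);
INSERT dbo.Fibonacci (N, Value)
VALUES (1,1),(2,1),(3,2),(4,3),(5,5),(6,8),(7,13),(8,21);
GO
-- ...then retrieve them with a one-row index seek
CREATE FUNCTION dbo.fnFibonacci (@N INT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT f.N, f.Value FROM dbo.Fibonacci AS f WHERE f.N = @N;
GO
SELECT f.Value FROM dbo.fnFibonacci(7) AS f; -- 13, retrieved rather than computed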

In Paul White’s article, Understanding and using APPLY (Part 1) he refers to an inline table valued function (iTVF) as “a view that accepts parameters”. With that in mind, what would be the best way to describe an indexed view (or table) that accepts parameters?  I’m thinking indexed function.

Nasty Fast Primes

Primes are a great example since calculating them can be costly, especially when using a solution which “counts up” iteratively or recursively. For this exercise we’re going to build a prime number function that can determine if an integer between 1 and 10,000,000 is prime, and do so in O(1) time and space. I know that 99.999% of you don’t need a prime number function, so remember that this is for learning and to prove the power of the concept; the technique will work against any immutable set of values. You don’t have to play along, or you can test this with a smaller number of primes. I’m including the DDL used to create the function so you can keep me honest about the performance tests later in the article.

The Setup Procedure

This proc indexes and populates a prime numbers table called idx.prime. There are two parameters: @Counter and @maxrows. @Counter is the number that the proc starts at and should be left at 14; you can use it to set up batch inserts. If you execute usp_primes and accept the default values, the function will support 1-10M with a bit column indicating whether the number is prime. You don’t have to do 10M rows, but it’s ~300MB uncompressed if you do. For me this runs 20 minutes without batching the insert; I did it all in one shot. Note that I keep my indexed functions in an “idx” schema; no harm in using dbo. Once the procedure is done you can create our prime numbers function as an indexed function. As mentioned in Part 2 of my PerfML series, functional purity is key, so only an iTVF will do. If you’re playing along you will need a one million row tally table (you can get the DDL here); the stored procedure to create an indexed prime number function is attached below.

Note that it creates a schema named idx; this is where my indexed functions live, but it’s not necessary.
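If you’d rather not download the proc, here’s a stripped-down sketch of the idea. It’s my simplification, not the actual usp_primes (no batching, no tuning), and like the output below it flags 1 as prime; it assumes dbo.tally has at least @maxrows rows.

IF SCHEMA_ID('idx') IS NULL EXEC('CREATE SCHEMA idx;');
GO
DECLARE @maxrows BIGINT = 1000000; -- the article uses 10,000,000

CREATE TABLE idx.prime (N BIGINT NOT NULL PRIMARY KEY, PRIME BIT NOT NULL);

INSERT idx.prime (N, PRIME)
SELECT t.N,
  CASE WHEN t.N IN (1,2) THEN 1 -- matching the output below, 1 is flagged prime
       WHEN t.N % 2 = 0  THEN 0 -- even numbers greater than 2 are not prime
       WHEN EXISTS (SELECT 1 FROM dbo.tally AS d -- any divisor up to SQRT(N)?
                    WHERE d.N > 1 AND d.N*d.N <= t.N AND t.N % d.N = 0) THEN 0
       ELSE 1 END
FROM  dbo.tally AS t
WHERE t.N <= @maxrows;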

The Indexed Function (idx.fnPrimes)

Now we have a sorted set of numbers from 1 to 10M. Our function takes @N as an input and performs a one-row index seek.

Figure 1: idx.fnPrimes

CREATE FUNCTION idx.fnPrimes
(
  @N BIGINT
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
  SELECT p.N, p.PRIME FROM idx.prime AS p WHERE p.N = @N;
GO

To check if a number is prime:

Figure 2: Basic idx.fnPrimes demo

SELECT p.N, p.PRIME FROM idx.fnPrimes(11) AS p;

To return the numbers 1 to 7 along with their prime value we use fnTally to generate the stream of numbers and map them to fnPrimes.

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)  AS f
CROSS APPLY idx.fnPrimes(f.N) AS p;

This returns: [(1 1),(2 1),(3 1),(4 0),(5 1),(6 0),(7 1)]; now a quick performance test.

Performance Test #1: Calculate Primes 1 to 10 Million

The purpose of this test is to prove out the concept of the indexed function. This query will take the values 1 through 10,000,000 and determine which are prime. I am forcing a serial plan since the serial plan actually performs best; the @P variable captures each value so as to avoid SSMS spitting out 10 million rows and ruining the performance metrics.

Figure 3.1: idx.fnPrimes Performance Test

PRINT CHAR(13)+'fnTally serial'+CHAR(13)+REPLICATE('-',90);
GO
DECLARE @ST DATETIME = GETDATE(), @P BIT;
  SELECT      @P = p.PRIME
  FROM        dbo.fnTally(1,10000000) AS f
  CROSS APPLY idx.fnPrimes(f.N)       AS p
  OPTION (MAXDOP 1);
PRINT DATEDIFF(MS,@ST,GETDATE());
GO 3

Figure 3.2: idx.fnPrimes Performance Test Results

fnTally serial
------------------------------------------------------------------------------------------
Beginning execution loop
2660
2727
2750

Under 3 seconds for idx.fnPrimes to accept the numbers 1 to 10,000,000 and determine which ones are prime. And this with a serial execution plan!

Unexpected Virtual Index Pitfalls (continued…)

One motivation for giving a name to what I call the “virtual index” is to make the following type of problem (and solution) easier to explain. In Part 2 of this series you were introduced to some unexpected virtual indexing pitfalls which can be tricky to identify. There’s another issue to watch out for, this one affecting anyone on SQL 2019 (which includes batch mode over rowstore) or anyone leveraging Itzik Ben-Gan’s batch mode processing trick to enjoy this functionality on SQL Server 2016+. I use SQL 2019, and the optimizer leverages a window aggregate function that invokes batch mode over rowstore; that is the culprit here. Below are two techniques to retrieve the numbers 1 to 7 and identify which are prime.

Figure 4: Hidden CTE Tally table row retrieval issue

--==== 1. Leveraging fnTally
SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)  AS f
CROSS APPLY idx.fnPrimes(f.N) AS p; 

--====  2. Static Values
SELECT      p.N, p.PRIME
FROM        (VALUES(1),(2),(3),(4),(5),(6),(7)) AS f(N)
CROSS APPLY idx.fnPrimes(f.N)                  AS p; 

Each query returns the expected result set: [(1 1),(2 1),(3 1),(4 0),(5 1),(6 0),(7 1)]. Next, the execution plan for the fnTally solution.

Figure 4.1: Execution Plan Details

fnTally retrieved 2700 rows to generate the numbers 1-7! The second query didn’t have this issue. Luckily there are more than a few ways to solve this.

Solution 1: Disable batch mode over rowstore

Let’s temporarily disable batch mode over rowstore and try again.

Figure 5.1: Solution #1: Disable batch mode over rowstore query

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)    AS f
CROSS APPLY idx.fnPrimes(f.N)   AS p
OPTION(USE HINT('DISALLOW_BATCH_MODE'));

Figure 5.2: Solution #1: Execution plan

7 rows only, much better!

Solution 2: make_parallel

In my experience Adam Machanic’s make_parallel always forces plans that don’t include batch mode over rowstore. I don’t know if batch mode is even an option with make_parallel in play.

Figure 6.1: Solution #2: make_parallel

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)    AS f
CROSS APPLY idx.fnPrimes(f.N)   AS p
CROSS JOIN  dbo.make_parallel() AS parallel_execution;

Below is the portion of the plan where the ROW_NUMBER sequence is created; again, only 7 rows

Figure 6.2: Solution #2:  Execution plan

Solution 3: Use a physical tally table

The conventional wisdom seems to be that the CTE tally table is faster than a physical table pretty much all the time, a reasonable assumption since the CTE tally table is readless while physical tally tables do require IO. Here is an example of where the physical tally table (correctly indexed) has an edge.

Figure 7.1: Solution #3: Physical Tally Table

Note that a TOP + ORDER BY, a WHERE clause filter, or both together will do just fine.

SELECT TOP(7) p.N, p.PRIME
FROM        dbo.tally         AS f
CROSS APPLY idx.fnPrimes(f.N) AS p
ORDER BY    f.N;

Again, only 7 numbers retrieved.

Figure 7.2: Solution #3: Execution Plan

This is one of many subtle differences between a persisted tally table and a CTE version. Let’s explore deeper.

RangeAB and RangeAB2

We will conclude by running two performance tests against one million rows. The one million limit is because that’s what dbo.tally is capped at. For this test we’ll use two tally table functions: the first, named dbo.rangeAB, can be considered an “add-on” to dbo.fnTally. The second takes identical parameters and returns identical results; the only difference is that it leverages the physical tally table (dbo.tally) instead of dbo.fnTally.

Both functions can be used to help enforce best practices with as little code as possible. Even better: both functions return the exact same result set given the same parameters. This gives you the ability to test your set-based code against both a physical and a virtual tally table index simply by adding or removing the number “2” from the end of “RangeAB.”

The functions and usage examples are located here. For the purposes of this article I will show you a basic usage scenario so we can compare execution plans. Both examples use one of the RangeAB functions to return a series of weeks. We’ll run these with “show actual execution plan” turned on.

Figure 8.1: Basic Usage Example

DECLARE
  @startdate DATE   = '20200831',
  @weeks     BIGINT = 5,
  @row1      BIGINT = 1;

BEGIN
--==== RangeAB (fnTally version)
  SELECT
    WeekNumber     = f.RN,
    WeeksLeft      = f.OP,
    DaysFromMedian = ABS(f.DM),
    WeekStart      = DATEADD(DAY,f.N1-1,@startdate),
    WeekEnd        = DATEADD(DAY,f.N2-2,@startdate)
  FROM     dbo.rangeAB(@row1, @weeks*7-(1-@row1), 7, @row1) AS f
  ORDER BY f.RN;

--==== RangeAB2 (dbo.tally version)
  SELECT
    WeekNumber = f.RN,
    WeeksLeft  = f.OP,
    DaysFromMedian = ABS(f.DM),
    WeekStart  = DATEADD(DAY,f.N1-1,@startdate),
    WeekEnd    = DATEADD(DAY,f.N2-2,@startdate)
  FROM dbo.rangeAB2(@row1, @weeks*7-(1-@row1), 7, @row1) AS f
  ORDER BY f.RN;
END;

Figure 8.2: Results (for each)

    WeekNumber  WeeksLeft  DaysFromMedian  WeekStart  WeekEnd
    ----------- ---------- --------------- ---------- ----------
    1           5          14              2020-08-31 2020-09-06
    2           4          7               2020-09-07 2020-09-13
    3           3          0               2020-09-14 2020-09-20
    4           2          7               2020-09-21 2020-09-27
    5           1          14              2020-09-28 2020-10-04

Execution plans

Ignore the 40/60% estimates; they aren’t accurate and both perform almost identically. The win is that we can quickly determine which is better, dbo.tally or dbo.fnTally, without any logic changes. fnTally uses the Itzik Ben-Gan style ROW_NUMBER over cross joins; the other gets its rows from dbo.tally.

Figure 8.3: Execution Plans
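For anyone who hasn’t seen the Itzik Ben-Gan style in action, here’s a bare-bones sketch of the pattern (a simplification; fnTally adds parameters and more row levels):

-- Stacked CROSS JOINs generate rows without reads; ROW_NUMBER numbers them
WITH
E1(x) AS (SELECT 1 FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS v(x)), -- 10 rows
E3(x) AS (SELECT 1 FROM E1 AS a CROSS JOIN E1 AS b CROSS JOIN E1 AS c),           -- 1,000 rows
E6(x) AS (SELECT 1 FROM E3 AS a CROSS JOIN E3 AS b)                               -- 1,000,000 rows
SELECT TOP (100) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM E6;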

Prime Numbers Test #1: RangeAB for a running total of Primes

Now back to our indexed function, idx.fnPrimes. This first example is a simple running total of prime numbers from 1 to 50. Ignore the @gap parameter for now.

DECLARE @gap BIGINT = 1, 
        @low BIGINT = 0, 
        @hgh BIGINT = 50; 

--==== Solution #1: Primes running total over an Indexed Function + Virtual Index (RangeAB) 
SELECT
  N           = ((rnd.RN-2)*@gap+@low)+1, 
  TotalPrimes = SUM(p.PRIME+0) 
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
GROUP BY    rnd.RN
ORDER BY    rnd.RN; 

--==== Solution #2: Primes running total over two indexed functions
SELECT
  N           = ((rnd.RN-2)*@gap+@low)+1, 
  TotalPrimes = SUM(p.PRIME+0) 
FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)             AS p
GROUP BY    rnd.RN
ORDER BY    rnd.RN; 

Figure 10.1: Primes Test #1 Results (truncated for brevity)

N       TotalPrimes
------- -----------
1       1
2       2
3       3
4       3
5       4
6       4
7       5
8       5
...
46      15
47      16
48      16
49      16
50      16

Prime Numbers Test #2: Performance

Here we’ll do two tests: first with a serial plan, then with a parallel plan. What we need to do is create @gap-sized groups between @low and @hgh, along with a count of the prime numbers in each group. We’ll compare the performance of RangeAB vs RangeAB2. Again, the logic is identical except that the latter leverages dbo.tally. Note the parameters.

First we test serial execution using MAXDOP 1.

Figure 11.1 Serial Test

SET STATISTICS IO, TIME ON;

DECLARE
  @gap BIGINT = 100000,
  @low BIGINT = 0,
  @hgh BIGINT = 1000000;

PRINT CHAR(10)+'fnTally version'+CHAR(10)+REPLICATE('-',90);
  SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
GROUP BY   rnd.RN
ORDER BY   rnd.RN
OPTION (MAXDOP 1);

PRINT CHAR(10)+'Tally Function V2 - Indexed'+CHAR(10)+REPLICATE('-',90);
 SELECT
   L           = ((rnd.RN-2)*@gap+@low)+1,
   H           = (rnd.RN-1)*@gap+@low,
   TotalPrimes = SUM(p.PRIME+0)
 FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
 CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
 CROSS APPLY idx.fnPrimes(r.RN)             AS p
 GROUP BY   rnd.RN
 ORDER BY   rnd.RN
 OPTION (MAXDOP 1);
SET STATISTICS IO, TIME OFF;

Figure 11.2: Primes Test #2 Result Set

L           H            TotalPrimes
----------- ------------ -----------
1           100000       19187
100001      200000       38367
200001      300000       57548
300001      400000       76728
400001      500000       95910
500001      600000       115092
600001      700000       134272
700001      800000       153453
800001      900000       172634
900001      1000000      191814

Figure 11.3: Primes Test #2 Performance (Serial: Time & IO)

fnTally version
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843792, physical reads 0...
Table 'Worktable'. Scan count 0, logical reads 15101551, physical reads 0...

 SQL Server Execution Times: CPU time = 23312 ms,  elapsed time = 23452 ms.

Tally Function V2 - Indexed
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843792, physical reads 0...
Table 'tally'. Scan count 2, logical reads 9555, physical reads 0...

 SQL Server Execution Times: CPU time = 9110 ms,  elapsed time = 9163 ms.

This is a case where dbo.tally finds redemption: 9 seconds vs 23. Not bad. Next for a parallel execution plan leveraging dbo.make_parallel.

Figure 11.4: Parallel Test

SET STATISTICS IO, TIME ON;

DECLARE
  @gap BIGINT = 100000,
  @low BIGINT = 0,
  @hgh BIGINT = 1000000;

PRINT CHAR(10)+'fnTally version'+CHAR(10)+REPLICATE('-',90);
  SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
CROSS JOIN  dbo.make_parallel()           AS x
GROUP BY   rnd.RN
ORDER BY   rnd.RN;

PRINT CHAR(10)+'Tally Function V2 - Indexed'+CHAR(10)+REPLICATE('-',90);
SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)             AS p
CROSS JOIN  dbo.make_parallel()            AS x
GROUP BY   rnd.RN
ORDER BY   rnd.RN;
SET STATISTICS IO, TIME OFF;

Figure 11.5: Primes Test #2 Performance (Parallel: Time & IO)

fnTally version (RangeAB)
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843855, physical reads 0...
Table 'Worktable'. Scan count 0, logical reads 15101558, physical reads 0...

 SQL Server Execution Times: CPU time = 35877 ms,  elapsed time = 7816 ms.

dbo.tally version (RangeAB2)
------------------------------------------------------------------------------------------
Table 'tally'. Scan count 9, logical reads 9604, physical reads 0...
Table 'prime'. Scan count 0, logical reads 16843855, physical reads 0...

 SQL Server Execution Times: CPU time = 16687 ms,  elapsed time = 3638 ms.

The great news is that parallel execution speeds things up in both cases: the first version (the virtual index) improves to 7.8 seconds while the second (the indexed function) completes in 3.6 seconds. The reads against the prime table (via the idx.fnPrimes indexed function) are excessive. That said, counting the primes in 100K-sized chunks all the way up to 1,000,000… you’d expect that to take far longer than 3.6 seconds.

Conclusion

We have a new way to take complicated functions and speed them up to O(1) time, leveraging a concept I’ve coined the Indexed Function: a powerful tool for your set-based programming toolbox. We also have a way to quickly compare a CTE tally table to a real one, and we’ve learned that a physical tally table can, in some cases, blow the doors off a virtual one.

The Virtual Index and Indexed Functions are indispensable tools in my toolbox. Thanks for reading.

How to find strings longer than 8K in SQL Server (Part 1)

Ever try searching a string for a substring larger than 8,000 characters? It’s actually more difficult than you think… without a good N-Grams function, that is.

Before you continue

FIRST, this article leverages my NGrams2B function, which can be found here: Nasty Fast N-Grams (Part 1). NGrams2B is the same as the 8K version but handles VARCHAR(MAX) strings. Since publishing that article I have dropped the “2B” from the function name; you will see it referred to simply as dbo.ngrams in the code examples below.

SECOND, for people following my N-Grams series on SQL Server Central (SSC), let not your heart be troubled: I am in the process of wrapping up my N-Grams series (Parts 2-6). This is a quick “one-off” intended to keep my new blog posts moving along…
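THIRD, if you want to play along without grabbing the full function just yet, below is a bare-bones stand-in that matches the Position/Token interface used in this article. It’s a simplification of the idea, not NGrams2B itself; it assumes dbo.fnTally is installed and that @String is at least @N characters long.

CREATE FUNCTION dbo.ngrams_sketch (@String VARCHAR(MAX), @N BIGINT)
RETURNS TABLE AS RETURN
SELECT Position = t.N,                        -- 1-based position of the token
       Token    = SUBSTRING(@String, t.N, @N) -- every @N-character substring
FROM   dbo.fnTally(1, LEN(@String)-@N+1) AS t;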

Introduction

Riddle me this: what will the query below return?

DECLARE @SearchText VARCHAR(MAX) = REPLICATE('X',9000),
        @SomeString VARCHAR(MAX) = REPLICATE('X',8001);

SELECT CHARINDEX(@SearchText,@SomeString);

You would expect it to return a zero (0): @SearchText is a string of 9,000 X’s while @SomeString is only 8,001 X’s. The query, however, returns a 1. That’s not right, how can this be? To get to the bottom of this let’s append this line to our query:

SELECT LEN(@SearchText), LEN(@SomeString)

This returns 8000 for each. Ahhhhh, our sample strings are being truncated: REPLICATE returns VARCHAR(8000) unless its input is a MAX type. I make this mistake periodically when using REPLICATE to create sample strings. To get around the 8K limitation we can cast the input string as VARCHAR(MAX):

--==== Casting "X" as VARCHAR(MAX) circumvents the truncation issue
DECLARE 
  @SearchText VARCHAR(MAX) =       REPLICATE(CAST('X' AS VARCHAR(MAX)),9000),
  @SomeString VARCHAR(MAX) = '###'+REPLICATE(CAST('X' AS VARCHAR(MAX)),9001);

SELECT LEN(@SearchText), LEN(@SomeString);

This prevents the truncation. Let’s run our CHARINDEX query again:

SELECT CHARINDEX(@SearchText,@SomeString);
Results:
Msg 8152, Level 16, State 10, Line 132
String or binary data would be truncated.

Not as easy as you thought, huh? Perhaps PATINDEX can save the day…

SELECT PATINDEX('%'+@SearchText+'%',@SomeString);

Same error, grrrr… What about LIKE? Surely we can solve this using LIKE right?

SELECT CASE WHEN @SomeString LIKE '%'+@SearchText+'%' THEN 1 ELSE 0 END;

Fail – same error. I encountered this problem for the first time earlier this year and attempting to solve it took a toll on my keyboard and self-esteem.

The Problem

To my knowledge, SQL Server doesn’t provide a way to search for a VARCHAR longer than 8,000 characters (or 4,000 for NVARCHAR). Based on my research it’s obviously not a common problem, though it will become more common as developers continue to push the limits of what you can do with strings in SQL Server. And if I’m wrong? So what. Solving problems like this is fun, and talking about N-Grams never gets old.

dbo.NGrams to the Rescue (as per usual)

dbo.NGrams and dbo.NGrams8K are Super heroes

OK: LIKE, CHARINDEX and PATINDEX are not valid options. To quote Ned Flanders, “As melon scratchers go, this one is a honeydoodle!” Fortunately SQL Server is powerful enough to check two VARCHAR(MAX) strings for equality, and this is where a good N-Grams function is a superhero.

First we’ll tokenize (split) @SomeString into tokens as long as @SearchText. @SearchText is six characters long, so we can split @SomeString into 6-grams.

--==== Sample Search Text and String to analyze
DECLARE @SearchText VARCHAR(MAX) = 'ABC123',
        @SomeString VARCHAR(MAX) = 'AAABC123XYZ';

--==== Tokenize @SomeString into @SearchText-sized tokens
SELECT   ng.Position, ng.Token
FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng -- LEN(@SearchText) = 6
ORDER BY ng.Position;
Results:
Position  Token
--------- ---------
1         AAABC1
2         AABC12
3         ABC123
4         BC123X
5         C123XY
6         123XYZ

The value of @SearchText (“ABC123”) is at Position 3: AAABC123XYZ. All that’s left is to add a WHERE filter to compare @SearchText to each token returned by dbo.NGrams.

SELECT   ng.Position, ng.Token
FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
WHERE    ng.Token = @SearchText
ORDER BY ng.Position;
Returns:
Position   Token   
---------- ---------------
3          ABC123

This is what we want. Note that, thanks to the virtual index my N-Grams function uses, I can sort by the token’s position in the string without a sort in the execution plan. That’s going to help us because we need to add a TOP(1) clause to identify the location of the first instance of @SearchText. Remember, we only need to know if the text exists. Adding TOP(1), with an ORDER BY to keep it deterministic, has multiple benefits: not only do we get a predictable result, it also assists the cardinality estimator and lets us tap into the SQL optimizer’s row goals which, when correctly leveraged, allow for obscene performance gains.

Here we’re going to use the (8K+) strings as variables from earlier, but with a couple small changes:

  1. Add the TOP(1) clause along with the proper ORDER BY clause
  2. Wrap the logic up in a subquery named f
  3. Retrieve the value MAX(f.ItemIndex) from f
  4. Add ISNULL around our MAX() to return 0 when there isn’t a match
DECLARE @SearchText VARCHAR(MAX) = REPLICATE(CAST('X' AS VARCHAR(MAX)),9000),
        @SomeString VARCHAR(MAX) = '###'+REPLICATE(CAST('X' AS VARCHAR(MAX)),9001);

SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
FROM
(
  -- Returns the first position of @SearchText
  SELECT TOP(1) ng.Position
  FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
  WHERE    ng.Token = @SearchText
  ORDER BY ng.Position
) AS f(ItemIndex);

This returns 4, which is correct: @SearchText is a series of 9,000 X’s and first appears at position 4 in @SomeString. And that’s it – easy peasy. Next for performance.

Introducing the 2 Billion+ row test harness

I said two billion, with a B. Not a typo. I began the heading with “Introducing” because any test harness exceeding a billion rows deserves a proper introduction. I say rows, not characters, because dbo.NGrams tokenizes (splits) strings into rows. You’re going to see many more billion+ row test harnesses as we explore the PerfML universe.

The VARCHAR(MAX) character limit is 2,147,483,647. I rounded my sample values down to two billion, created a search string 50K characters long, then stuffed that value into @SomeString, preceded by 100,011 junk characters and followed by two billion more for a grand total of 2,000,150,011 characters. We’ll run the query twice: once with a serial plan, then with a parallel plan using make_parallel() by Adam Machanic.

Test Harness

SET STATISTICS TIME ON;
PRINT 'Build the test string'+CHAR(13)+REPLICATE('-',90);

  DECLARE @Match VARCHAR(MAX) = REPLICATE(CAST('X' AS VARCHAR(MAX)),50000),
          @Junk1 VARCHAR(MAX) = REPLICATE(CAST('Z' AS VARCHAR(MAX)),100011),
          @Junk2 VARCHAR(MAX) = REPLICATE(CAST('#' AS VARCHAR(MAX)),2000000000);

  DECLARE @SearchText VARCHAR(MAX) = @Match,
          @SomeString VARCHAR(MAX) = @Junk1+@Match+@Junk2;

PRINT 'Performance Test #1: Serial Execution'+CHAR(13)+REPLICATE('-',90);

  SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
  FROM
  (
    -- Returns the first position of @SearchText
    SELECT TOP(1) ng.Position
    FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
    WHERE    ng.Token = @SearchText
    ORDER BY ng.Position
  ) AS f(ItemIndex)
  OPTION (MAXDOP 0);

PRINT 'Performance Test #2: Parallel Execution'+CHAR(13)+REPLICATE('-',90);

  SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
  FROM
  (
    -- Returns the first position of @SearchText
    SELECT TOP(1) ng.Position
    FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
    WHERE    ng.Token = @SearchText
    ORDER BY ng.Position
  ) AS f(ItemIndex)
  CROSS JOIN dbo.make_parallel() --  NOOOO >> OPTION (QUERYTRACEON 8649);

SET STATISTICS TIME OFF;

How do you think we’ll do? How many minutes do you think this will run? Drum roll………….

2B Row Test Results

Build the test string
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 12453 ms,  elapsed time = 12576 ms.
 SQL Server Execution Times: CPU time = 7032 ms,  elapsed time = 7060 ms.

Performance Test #1: Serial Execution
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 1953 ms,  elapsed time = 1950 ms.

Performance Test #2: Parallel Execution
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 3568 ms,  elapsed time = 3570 ms.

This ran for about 25 seconds on my laptop. The results are more impressive when we dig deeper: SQL Server spent ~20 seconds building the string, then needed only 1.9 seconds to find what we were looking for with a serial plan and 3.5 seconds with a parallel one. make_parallel adds a bit of spaghetti to the plan but, aside from that, the plans are identical.

Execution plan for the first (serial) query

If we hover over the second Nested Loops Join operator in the plan we can see the actual number of rows generated by dbo.NGrams:

The matching search text is 100,011 characters deep, which explains the 100,012 rows

100,012 rows (pseudo-iterations) to parse a 2,000,150,011 character string, in under 2 seconds with only one CPU. That’s nasty fast! Can you guess how we got to 100,012 rows? I buried the search string 100,011 characters deep. The deeper the search string is buried, the more rows dbo.NGrams creates, something we’ll address in Part 2.

make_parallel is a Super Power

Queries which leverage dbo.NGrams and dbo.NGrams8k generally perform better with parallel execution but not in this case. Here I’d likely force a serial plan to prevent a slower parallel plan. I included a parallel solution to help you understand the magic which is make_parallel. I first discussed the obvious benefits of make_parallel in PerfML Part 2 Self-Tuning with a Tally Table. Now for a not-so-obvious benefit, and it’s big.

In my experience, make_parallel and trace flag 8649 perform basically identically ~95% of the time. The only reason I ever use trace flag 8649 is during development, as make_parallel makes execution plans less readable. Every once in a while, not often, trace flag 8649 causes a query to run for minutes (even hours) instead of seconds or milliseconds. This is due to intra-query parallel deadlocking, and what makes this type of deadlocking distinctly painful is the pure randomness of it: sometimes it happens every 10-100 query executions, sometimes it’s a few times a month. This is an example of the type of adventure undocumented trace flags will take you on. make_parallel doesn’t have this issue. Just another reason make_parallel deserves a cape.

Conclusion

Two-billion-character strings are not common, but this should give you a taste of how the technique will hold up against millions or even billions of smaller strings. This is the power of the tally table and dbo.NGrams. Unless someone can point me to a book, article or forum post that details a faster way to find a string longer than 8K in SQL Server, I’m going to aver that this is the fastest known method for finding VARCHARs longer than 8,000 characters in SQL Server without an index. But can we make it faster? Much faster. Unimaginably faster! See you in Part 2 of this series. Thanks for reading!

Virtual Indexing Part 2: Unexpected Pitfalls

Diving deeper into the world of virtual indexing while being reminded that set-based code is not always bulletproof

Introduction                                                 

In Virtual Indexing Part 1 I introduced the idea of the virtual index: the ordered (AKA “suitably sorted”) stream of numbers returned by ROW_NUMBER which allows you to perform operations like grouping and window aggregations without the help of an index or a sort in the execution plan. Today I want to draw your attention to some hidden CTE tally table dangers that are easy to detect and resolve but can be catastrophic when missed.

Problem #1: Gaps and the Hidden Row Explosion

Identifying gaps and islands in sequences is a common SQL task. One of the first things I learned to do with a tally table is identify gaps. It’s an easy concept to grasp, easier than identifying islands. In the query below we have a table variable populated with the numbers 1 to 10, but with the numbers 3,5 and 9 missing. Let’s use fnTally to identify the missing numbers.

Figure 1. Using fnTally to find gaps in a sequence

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     t.N IS NULL;

This returns the expected results: [3 5 9]. Now the execution plan:

Figure 1.1. fnTally gaps in a sequence query execution plan

fnTally returns 10 rows, as expected, to create the numbers 1 to 10. But look at the clustered index scan against the table variable – 70 rows. That is tragic: 70 rows scanned to identify 3 missing values! Can you identify the problem? Take a minute to see if you can figure this out…

@t has 7 rows, fnTally is returning 10 rows: 7*10=70. The optimizer knows that the clustered index on @t is unique but is not taking advantage of how the numbers returned by fnTally are also unique and ordered. The optimizer is behaving as if a one-to-many relationship between t.N and f.N is possible. With this plan the execution engine is forced to compare every value in @t to every value returned by fnTally. As sad as this might make you feel at first, let not your heart be troubled. The workarounds are endless, the trick, really, is to understand which one is best and why.

Table Operator Modifications

Let’s start with the three solutions below, ordered by preference. Each of these will get the job done with minimal changes to the underlying logic. The first solution is to just add an additional filter: f.N <= @Max; the second is to replace fnTally with a physical tally table; the third is to use a hint.

Figure 2. fnTally gaps in a sequence solutions

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #1: Add a WHERE filter for fnTally.N
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     f.N <= @Max -- solves the fake one-to-many issue
  AND     t.N IS NULL;

--==== Solution #2: Use a persisted tally table with a WHERE filter
SELECT    f.N
FROM      dbo.tally AS f
LEFT JOIN @t        AS t 
  ON      t.N = f.N
WHERE     f.N <= @Max -- required because there is no TOP(@Max) clause
  AND     t.N IS NULL;

--==== Solution #3: Add a Query Hint to the left join
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t AS t -- use Merge Join algorithm instead of loop join
  ON      t.N = f.N
WHERE     t.N IS NULL;

Each query returns the correct result and in each case only 7 rows total are retrieved from @t instead of 7 rows for each number returned by fnTally.

Figure 3. Execution plan for the table operator solutions

I didn’t call it out in the screen shot but with each solution only 7 rows were retrieved instead of 70. The first solution is the easiest – just add an additional WHERE filter. This may seem redundant as it’s not necessary, but it works and it’s a simple fix. If it were always this simple I could end the discussion here but it’s not. The second solution is to use a physical tally table (dbo.tally in this example) instead; this solves the problem in this example and has other advantages I’ll cover momentarily. The third solution is to force a merge join using a query hint; this works but is my last choice. There have been times where a query hint is the only option which is why I’m calling it out now.

Anti-Join Solutions

Despite the fact that you may have seen the term “anti-join” in an SSMS execution plan, it is not a commonly used phrase in the RDBMS world. It is, however, the best way to describe a scenario where you need all items from one table that do not exist in a second table.

Figure 4. Anti-join Venn Diagram

Reviewing the execution plans for the first set of solutions above (Figure 2), you see that each used a Left Outer Join operator to identify the missing rows, followed by a filter to exclude the others. Let’s examine two more solutions which leverage the NOT EXISTS logical operator and the EXCEPT set operator.

Figure 5. Anti-join solutions

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #4: EXCEPT >> Merge join handles the filter
SELECT f.N FROM dbo.fnTally(1,@Max) AS f
EXCEPT
SELECT t.N FROM @t AS t;

--==== Solution #5: NOT EXISTS >> Identical to above but with nested loop join
SELECT f.N
FROM   dbo.fnTally(1,@Max) AS f
WHERE  NOT EXISTS (SELECT t.N FROM @t AS t WHERE t.N = f.N)
AND    f.N <= @Max; -- Optional, forces loop join

Figure 5.1. Anti-join solutions execution plans

Both produce almost identical execution plans and are both efficient. The EXCEPT solution is the best IMO because it’s the cleanest.

Parallelism

Now let’s safely force a parallel execution plan against the left join solution with the WHERE filter (Figure #2, Solution 1) then, again, with our EXCEPT anti-join from Figure #5.

Figure 6. Parallel execution test

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Good when serial, bad for parallel
SELECT     f.N
FROM       dbo.fnTally(1,@Max) AS f
LEFT JOIN  @t AS t ON t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      t.N IS NULL
AND        f.N <= @Max

--==== Winner for serial and parallel
SELECT f.N FROM dbo.fnTally(1,@Max) AS f 
EXCEPT
SELECT t.N FROM @t AS t CROSS JOIN dbo.make_parallel() AS x;

Below is the portion of each plan where the rows are retrieved. Even with the WHERE f.N <= @Max clause in place, which solved the 70-row explosion problem earlier with a serial plan, the row explosion returns with parallel execution. The EXCEPT solution, however, does not have this problem with a serial or parallel plan.

Figure 6.1. Parallel execution performance plan

Problem #2: Gaps, Left Join Aggregation and Parallelism

Keeping with the theme of gaps let’s use the same table variable from earlier but include duplicate and missing values. Using this sample data:

Figure 7. Problem #2 sample data

DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);

The expected output would be:

Figure 7.1. Problem #2 expected output

N     Total
----- -------
1     2
2     4
3     0
4     2
5     0
6     1
7     1
8     3
9     0
10    3

Like earlier, this query suffers the aforementioned hidden row explosion; instead of retrieving 16 rows from @t it retrieves 160 (16 * @Max).

Figure 8. Problem #2, Solution #1 (fail)

DECLARE @Max BIGINT = 10;

SELECT    f.N, Total = COUNT(t.N)
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t                  AS t
  ON      t.N = f.N -- could not shake the row explosion
GROUP BY  f.N;

Again, an f.N <= @Max in a WHERE filter or in the join will do the trick (t.N <= @Max also works in either spot). The problem, again, is that this only works when the optimizer chooses a serial execution plan. This means that, in this case, you must always use OPTION (MAXDOP 1) and be banished to eternal serial processing. C’mon man!

Problem #2 Workarounds

A Merge Join hint will solve the problem but cannot get a parallel execution plan; in other words, make_parallel will be successful at solving the row explosion issue but not at forcing a parallel plan. The persisted tally table solution, on the other hand, does not experience the row explosion issue and can enjoy both serial and parallel execution.

Figure 9. Problem #2 – Solutions #2 & #3

DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);

DECLARE @Max BIGINT = 10;

-- Solution #2 - works, but requires a query hint & no parallel execution
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t             AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
GROUP BY     f.N;

-- Solution #3 - good serial & parallel
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.tally AS f
LEFT JOIN  @t AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      f.N <= @Max
GROUP BY   f.N;

Figure 9.1. Problem #2 – Solutions #2 & #3 execution plans

In each case the row explosion is gone but only the physical tally table solution (dbo.tally) solves the problem with a serial or parallel plan while preserving the optimizer’s option to execute a parallel plan. This is an example of where fnTally simply cannot compete with dbo.tally.

Set-Based != Bulletproof (Conclusion)

If there is a lesson to be learned today it’s that the mere existence of a numbers table, or the text “tally” in your code, does not guarantee that it will be fast. Furthermore, just because your set-based solution is fast, it can always be faster, exponentially faster in many cases, as you will see in future articles. In this article we watched an innocent query to identify gaps in a sequence go terribly wrong. Fortunately, there are many workarounds, provided that you can identify the problem when it arises. Hopefully you are now better prepared.

Which is faster again, fnTally or a persisted tally table? It depends. As per usual.

Virtual Indexing Part 1: The Suitably Sorted Stream

Does a virtual auxiliary table of numbers include a virtual index?

Before you continue

This post is intended for new and old data professionals alike. To get the most out of this article you should have a basic understanding of tally tables and Jeff Moden’s tally table function, fnTally. You also need a basic understanding of the APPLY table operator; if not, read Understanding and Using APPLY (Part 1), Part 2 and The Cascading (CROSS) APPLY. Each is a quick read and can change your career.

Intro

Less Code, More Fast – this is my coding philosophy in four words and the tagline for this site. Replacing bloated iterative procedural code with clean, set-based, pure, reusable code makes it easy to deploy code faster and with fewer bugs. Veteran programmers who write high-performing declarative code, such as SQL developers, know that pure, set-based, loop-free code performs exponentially faster than code that uses loops, mutable data structures and procedural constructs such as “goto”. A correctly written tally table function, such as fnTally, can count from 0 to 1,000,000,000 in 90 seconds on my laptop; I wouldn’t even try that with a loop. Replacing loops with a tally table (or some other form of lazy sequence) is a low-risk, low-cost, high-reward way to write cleaner, faster code.
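To make that concrete, here’s a quick before-and-after sketch summing the numbers 1 to 1,000,000 (it assumes dbo.fnTally is installed):

-- The loop way: procedural, one row at a time
DECLARE @i INT = 1, @sum BIGINT = 0;
WHILE @i <= 1000000
BEGIN
  SET @sum += @i;
  SET @i   += 1;
END;
SELECT LoopSum = @sum;

-- The tally way: one set-based statement over a lazy sequence
SELECT TallySum = SUM(CAST(t.N AS BIGINT))
FROM   dbo.fnTally(1,1000000) AS t;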

The Virtual Index

One small benefit of writing loops to perform tasks in your functions and procedures is that you don’t have to worry about sorting. With a set-based approach that leverages a tally table, if the order of the numbers returned matters then the optimizer will have to sort the numbers for you unless they are presorted, AKA indexed. When using a physical tally table that is properly indexed, sorting is not a concern – but what about with a tally table function like fnTally? You can’t index a function, right?

Returning the numbers 1 to 7, in order, without a sort

An important consideration when replacing a loop with a tally table is sorting. The most common way to handle sorting in the SQL world is to presort using an index. But fnTally is a function – you can’t add an index to a function – so how do you return the numbers [1 2 3 4 5 6 7] in that order without sorting them? You can’t, unless they are already sorted. fnTally returns its numbers as an ordered set, and this ordered stream of numbers is what I refer to as the virtual index. Virtual indexing can be thought of as the art/science of:

  1. Understanding what the virtual index is
  2. Knowing when\how to use the virtual index
  3. Understanding the alternatives and when to use them

If you are a set-based SQL coding warrior then you know that you can use ROW_NUMBER to create a Virtual Auxiliary Table of Numbers, AKA tally table. With that virtual table comes a virtual index. To understand and appreciate the difference between a virtual index and a physical index, let’s create a mini numbers temp table with the numbers 1 to 6. We won’t include an index, which means the numbers are unordered.

Figure 1 – Mini tally table (#nums)

--==== Create a small numbers table
IF OBJECT_ID('tempdb..#nums','U') IS NOT NULL DROP TABLE #nums;
CREATE TABLE #nums (N INT NOT NULL);
INSERT #nums VALUES (1),(2),(3),(4),(5),(6);

This table could be better described as a “tally heap” or a “virtual auxiliary heap of numbers.” Let’s run three queries against #nums without an index present, then review the execution plans.

Heap vs Index Tests

First to test our queries against #nums without an index in place:

Figure 2 – Test #1: No index  (Heap)

SELECT TOP(3) t.N
FROM     #nums AS t
ORDER BY t.N;

SELECT   t.N, AVG(t.N)
FROM     #nums AS t
GROUP BY t.N;

SELECT 
  [SUM] = SUM(t.N)     OVER (ORDER BY t.N),
  [LAG] = LAG(t.N,1,0) OVER (ORDER BY t.N)
FROM   #nums AS t;

Figure 3 – Test #1 execution plans without an index on #nums

Each query required a sort operator. TOP with an ORDER BY, grouping (the GROUP BY), and the window functions (SUM OVER and LAG) are examples of operations which require an ordered set, AKA “suitably sorted stream.” If the stream of input values is unordered, the most common remedy is a sort operator to create the suitably sorted stream when the query runs.

Hover over the properties for the stream aggregate and sequence operators in the second and third plans to see what each operator does and what they need:

Figure 3.1. – Stream Aggregate and Sequence Project Details

Suitable Sorting with a Physical Index

Sorting at runtime should be avoided, especially when dealing with a large volume of rows. The typical sort cost is n log n, which is worse than linear – like the exact opposite of a bulk discount, the cost per row increases as you add rows. With a base-2 log, for example, sorting 1,000 rows costs roughly 10 comparisons per row while sorting 1,000,000 rows costs roughly 20 per row. Let’s presort with a unique clustered index on #nums.

Figure 4 – Index for our numbers table (#nums)

CREATE UNIQUE CLUSTERED INDEX uq_nums ON #nums(N);

Now execute the same queries from Test #1(Figure 2) with the clustered index in place.

Figure 4.1. – Execution plan with index in place on #nums

Now that #nums is presorted there isn’t a need to sort at runtime.

The Suitably Sorted Virtual Index

Now for the same queries, this time using fnTally, followed by the execution plan.

Figure 5: Virtual Index Test Query

SELECT TOP(3) t.N FROM dbo.fnTally(1,6) AS t ORDER BY t.N;

SELECT t.N, AVG(t.N) FROM dbo.fnTally(1,6) AS t GROUP BY t.N;

SELECT [SUM] = SUM(t.N)     OVER (ORDER BY t.N),
       [LAG] = LAG(t.N,1,0) OVER (ORDER BY t.N)
FROM     dbo.fnTally(1,6) AS t;

Figure 5.1: Virtual Index Test Execution Plans


That’s the exact same execution plan as in Figure 4.1, after the clustered index was added to #nums! The only difference is the nested loops joins on the right, which are used to generate the rows. fnTally returns the numbers as an ordered set. In other words, for each query:

  1. Using #nums without an index forces the optimizer to add a sort operator to the execution plan for handling TOP, GROUP BY and a couple of window functions (LAG and SUM() OVER())
  2. Adding an index to #nums presorts the values and eliminates the requirement to sort the values at runtime
  3. fnTally is an inline table-valued function (iTVF) and cannot be indexed
  4. In this example fnTally’s N column behaves like the indexed N column on #nums

The CTE tally table returns the numbers as an ordered set and generates an execution plan identical to the one produced with a persisted, indexed tally table. This is what I mean by a virtual index: the ordered set of numbers that ROW_NUMBER creates. There is a lot more to this topic and I look forward to exploring it in Part 2 of this series.