# Virtual Indexing Part 2: Unexpected Pitfalls

Diving deeper into the world of virtual indexing while being reminded that set-based code is not always bulletproof

## Introduction

In Virtual Indexing Part 1, I introduced the idea of the virtual index: the ordered (AKA “suitably sorted”) stream of numbers returned by ROW_NUMBER, which allows you to perform operations like grouping and window aggregations without the help of an index or a sort in the execution plan. Today I want to draw your attention to some hidden CTE tally table dangers that are easy to detect and resolve, but catastrophic when missed.
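For readers who skipped Part 1, here is a minimal sketch of a cascading-CTE tally function in the spirit of fnTally. Note that this is illustrative only: the parameter names, CTE names, and the 65,536-row ceiling are mine, not the canonical implementation, which supports far more rows and a zero-or-one start parameter.

```sql
-- Illustrative sketch only; not the production fnTally.
CREATE FUNCTION dbo.fnTally (@Low BIGINT, @Max BIGINT)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH
L1(N) AS (SELECT 1 FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),
                                (0),(0),(0),(0),(0),(0),(0),(0)) AS V(N)), -- 16 rows
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b),                            -- 256 rows
L3(N) AS (SELECT 1 FROM L2 a CROSS JOIN L2 b)                             -- 65,536 rows
SELECT TOP (@Max - @Low + 1)
       N = @Low - 1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) -- the "virtual index"
FROM   L3;
```

Because the numbers are generated by ROW_NUMBER over cross-joined constants, no table is read and no Sort operator appears in the plan; that ordered stream is the virtual index.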

## Problem #1: Gaps and the Hidden Row Explosion

Identifying gaps and islands in sequences is a common SQL task. One of the first things I learned to do with a tally table is identify gaps; it’s an easy concept to grasp, easier than identifying islands. In the query below we have a table variable populated with the numbers 1 to 10, but with 3, 5, and 9 missing. Let’s use fnTally to identify the missing numbers.

Figure 1. Using fnTally to find gaps in a sequence

```sql
DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     t.N IS NULL;
```

This returns the expected results: [3 5 9]. Now the execution plan:

Figure 1.1. fnTally gaps in a sequence query execution plan

fnTally returns 10 rows, as expected, to create the numbers 1 to 10. But look at the clustered index scan against the table variable – 70 rows. That is tragic: 70 rows scanned to identify 3 missing values! Can you identify the problem? Take a minute to see if you can figure this out…

@t has 7 rows and fnTally is returning 10 rows: 7 * 10 = 70. The optimizer knows that the clustered index on @t is unique, but it is not taking advantage of the fact that the numbers returned by fnTally are also unique and ordered. The optimizer is behaving as if a one-to-many relationship between t.N and f.N is possible, so the execution engine is forced to compare every value in @t to every value returned by fnTally. As sad as this might make you feel at first, let not your heart be troubled. The workarounds are endless; the trick, really, is to understand which one is best and why.

### Table Operator Modifications

Let’s start with the three solutions below, ordered by preference. Each will get the job done with minimal changes to the underlying logic: the first is to add an additional filter, f.N <= @Max; the second is to replace fnTally with a physical tally table; the third is to use a join hint.

Figure 2. fnTally gaps in a sequence solutions

```sql
DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #1: Add a WHERE filter for fnTally.N
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     f.N <= @Max -- solves the fake one-to-many issue
AND       t.N IS NULL;

--==== Solution #2: Use a persisted tally table with a WHERE filter
SELECT    f.N
FROM      dbo.tally AS f
LEFT JOIN @t        AS t
  ON      t.N = f.N
WHERE     f.N <= @Max -- required because there is no TOP(@Max) clause
AND       t.N IS NULL;

--==== Solution #3: Add a join hint to the left join
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t AS t -- use the merge join algorithm instead of a loop join
  ON      t.N = f.N
WHERE     t.N IS NULL;
```

Each query returns the correct result and in each case only 7 rows total are retrieved from @t instead of 7 rows for each number returned by fnTally.

Figure 3. Execution plan for the table operator solutions

I didn’t call it out in the screenshot, but with each solution only 7 rows were retrieved instead of 70. The first solution is the easiest: just add an additional WHERE filter. The predicate may seem redundant, since it’s logically unnecessary, but it works and it’s a simple fix. If it were always this simple I could end the discussion here, but it’s not. The second solution is to use a physical tally table (dbo.tally in this example) instead; this solves the problem here and has other advantages I’ll cover momentarily. The third solution is to force a merge join using a query hint; this works, but it’s my last choice. There have been times where a query hint is the only option, which is why I’m calling it out now.
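If you don’t already have a persisted tally table, dbo.tally can be created along these lines. The name, constraint name, row count, and the sys.all_columns row source here are illustrative choices, not a prescribed implementation:

```sql
-- Illustrative; size the table to your needs (1,000,000 rows shown here).
CREATE TABLE dbo.tally
(
  N BIGINT NOT NULL,
  CONSTRAINT pk_tally PRIMARY KEY CLUSTERED (N)
);

INSERT dbo.tally (N)
SELECT TOP (1000000)
       N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM   sys.all_columns a        -- any sufficiently large row source works
CROSS JOIN sys.all_columns b;
```

The unique clustered index is the point: it hands the optimizer a physical uniqueness and ordering guarantee that the virtual index from fnTally cannot.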

### Anti-Join Solutions

Despite the fact that you may have seen the term “anti-join” in an SSMS execution plan, it is not a commonly used phrase in the RDBMS world. It is, however, the best way to describe a scenario where you need all items from one table that do not exist in a second table.

Figure 4. Anti-join Venn Diagram

Reviewing the execution plans for the first set of solutions above (Figure 2), you’ll see that each used a Left Outer Join operator to identify the missing rows, followed by a filter to exclude the others. Let’s examine two more solutions, which leverage the NOT EXISTS logical operator and the EXCEPT set operator.

Figure 5. Anti-join solutions

```sql
DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #4: EXCEPT >> merge join handles the filter
SELECT f.N FROM dbo.fnTally(1,@Max) AS f
EXCEPT
SELECT t.N FROM @t AS t;

--==== Solution #5: NOT EXISTS >> identical to above but with a nested loop join
SELECT f.N
FROM   dbo.fnTally(1,@Max) AS f
WHERE  NOT EXISTS (SELECT t.N FROM @t AS t WHERE t.N = f.N)
AND    f.N <= @Max; -- optional, forces a loop join
```

Figure 5.1. Anti-join solutions execution plans

Both produce almost identical execution plans and both are efficient. The EXCEPT solution is the best IMO because it’s the cleanest.

### Parallelism

Now let’s safely force a parallel execution plan against the left join solution with the WHERE filter (Figure #2, Solution 1) then, again, with our EXCEPT anti-join from Figure #5.
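make_parallel, which I use here to coax the optimizer into a parallel plan for testing, is a technique popularized by Adam Machanic: an inline TVF that returns exactly one row at runtime but carries a huge row estimate, pushing the plan past the cost threshold for parallelism. The body below is a rough sketch of the concept only, not his implementation:

```sql
-- Sketch of the concept only; see Adam Machanic's original for the real thing.
-- Cascading cross joins give the optimizer a massive row estimate (10^8),
-- while MAX() collapses the result to a single row at runtime.
CREATE FUNCTION dbo.make_parallel()
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH a(x) AS
 (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS v(x)) -- 10 rows
SELECT x = MAX(v1.x)
FROM  a v1 CROSS JOIN a v2 CROSS JOIN a v3 CROSS JOIN a v4
CROSS JOIN a v5 CROSS JOIN a v6 CROSS JOIN a v7 CROSS JOIN a v8;
```

Cross joining this function into a query adds no rows to the result, but the inflated cost estimate is usually enough to buy a parallel plan without trace flags.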

Figure 6. Parallel execution test

```sql
DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Good when serial, bad for parallel
SELECT     f.N
FROM       dbo.fnTally(1,@Max) AS f
LEFT JOIN  @t AS t ON t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      t.N IS NULL
AND        f.N <= @Max;

--==== Winner for serial and parallel
SELECT f.N FROM dbo.fnTally(1,@Max) AS f
EXCEPT
SELECT t.N FROM @t AS t CROSS JOIN dbo.make_parallel() AS x;
```

Below is the portion of each plan where the rows are retrieved. Even with the WHERE f.N <= @Max clause in place, which solved the 70-row explosion problem earlier with a serial plan, the row explosion returns with parallel execution. The EXCEPT solution, however, does not have this problem with a serial or parallel plan.

Figure 6.1. Parallel execution performance plan

## Problem #2: Gaps, Left Join Aggregation and Parallelism

Keeping with the theme of gaps, let’s use a similar table variable, but this time with duplicate as well as missing values. The goal now is to return the count of each number from 1 to @Max, with 0 for the missing ones. Using this sample data:

Figure 7. Problem #2 sample data

```sql
DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);
```

The expected output would be:

Figure 7.1. Problem #2 expected output

```
N     Total
----- -------
1     2
2     4
3     0
4     2
5     0
6     1
7     1
8     3
9     0
10    3
```

As before, the query below suffers the hidden row explosion: instead of retrieving 16 rows from @t, it retrieves 160 rows (16 * @Max).

Figure 8. Problem #2, Solution #1 (fail)

```sql
SELECT    f.N, Total = COUNT(t.N)
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t                  AS t
  ON      t.N = f.N -- could not shake the row explosion
GROUP BY  f.N;
```

Again, an f.N <= @Max predicate, in the WHERE clause or in the join, solves the problem (t.N <= @Max will also do the trick in either spot). The catch, again, is that the fix only holds when the optimizer chooses a serial execution plan. That means you must always use OPTION (MAXDOP 1) and be banished to eternal serial processing. C’mon man!

### Problem #2 Workarounds

A merge join hint will solve the problem but cannot get a parallel execution plan; in other words, with the hint in place, make_parallel still avoids the row explosion but fails at forcing a parallel plan. The persisted tally table solution, on the other hand, does not experience the row explosion and can enjoy both serial and parallel execution.

Figure 9. Problem #2 – Solutions #2 & #3

```sql
DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #2: works, but needs a join hint and gets no parallel execution
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t             AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
GROUP BY   f.N;

--==== Solution #3: good serial & parallel
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.tally AS f
LEFT JOIN  @t AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      f.N <= @Max
GROUP BY   f.N;
```

Figure 9.1. Problem #2 – Solutions #2 & #3 execution plans

In each case the row explosion is gone, but only the physical tally table solution (dbo.tally) solves the problem while preserving the optimizer’s option to choose a parallel plan. This is an example of where fnTally simply cannot compete with dbo.tally.

## Set-Based != Bulletproof (Conclusion)

If there is a lesson to be learned today, it’s that the mere existence of a numbers table, or the text “tally” in your code, does not guarantee that your query will be fast. Furthermore, just because your set-based solution is fast, it can always be faster; exponentially faster in many cases, as you will see in future articles. In this article we watched an innocent query to identify gaps in a sequence go terribly wrong. Fortunately, there are many workarounds, provided you identify the problem when it arises. Hopefully you are now better prepared.

Which is faster again, fnTally or a persisted tally table? It depends. As per usual.

## Author: Alan Burstein

SQL performance ninja with 20+ years of writing high-performing, elegant SQL code. Expert at developing super fast NLP algorithms.