Virtual Indexing Part 9: The Indexed Hierarchy

Turbo-charge your SQL hierarchy with a numbers table.

Introduction

Did you know you can add indexes to your parent-child hierarchy in SQL Server? As tricky as this sounds, it's actually quite easy with a physical tally (numbers) table and an enumerated sort path (AKA "materialized path"). The technique in this article makes it possible to traverse a DB parent-child hierarchy table with blazingly fast performance.

In Part 3 of this series you were introduced to the indexed function. In Part 8 we created an indexed splitter: an indexed view that splits a delimited string once, then stores the "items" in a clustered index for fast retrieval. Building on the indexed split concept, let's create an indexed hierarchy, a parent-child hierarchy stored in your SQL database. All upstream and downstream relationships in your hierarchy can be easily traversed without recursion, self-joins, XML or intermediate aggregation tables which need to be maintained. To pull this off we'll create an indexed view that leverages a tally table and an enumerated sort path.

What I love most about this approach is that it's not limited to SQL Server; it can be applied to any RDBMS or database system that supports indexed views or an equivalent (such as materialized views), which covers most major platforms.

Setup and Sample Data

First, the setup DDL and sample data. The code below builds (1) a tally (numbers) table, (2) dbo.nodes, a table to store a parent-child relationship (NodeId and ParentId), (3) an enumerated path, and (4) sample values for dbo.nodes.

--==== 1.1. Tally table code
IF 1=2 -- remove this if you need a tally table
BEGIN
  IF OBJECT_ID('dbo.tally') IS NOT NULL DROP TABLE dbo.tally;
  CREATE TABLE dbo.tally (N INT NOT NULL);
  
  --==== Add Primary Key and Clustered Index
  ALTER TABLE dbo.tally 
    ADD CONSTRAINT pk_cl_tally PRIMARY KEY CLUSTERED(N) 
      WITH FILLFACTOR=100;
  
  --==== Add a Unique Index; the optimizer often picks this one because it requires less IO
  ALTER TABLE dbo.tally 
    ADD CONSTRAINT uq_tally UNIQUE NONCLUSTERED(N);
  
  --==== Populate with at least 1 Million Rows
  INSERT dbo.tally(N) SELECT t.N FROM dbo.fnTally(1,1000000) AS t;
END
GO
--==== 1.2. Test Data DDL to build the hierarchy
CREATE TABLE dbo.nodes
(
  ParentId INT NULL,
  NodeId   INT NOT NULL,
  NodePath VARCHAR(200) NOT NULL DEFAULT(''),
  CONSTRAINT pk_nodes__nodeid PRIMARY KEY CLUSTERED(nodeId),
  CONSTRAINT fk_nodes__parentId FOREIGN KEY (ParentId) REFERENCES dbo.nodes(NodeId)
);
;--==== 1.3. Add Hierarchy values
INSERT dbo.nodes (ParentId,NodeId)
VALUES (NULL,1),(1,2),(1,3),(1,4),(1,5),(2,6),(2,7),(2,8),(2,9),(2,10),(3,20),(3,25),
  (3,30),(4,100),(4,101),(101,1000),(101,1012),(101,1014),(1014,2000);

Next, an inline table valued function (iTVF) built around a recursive CTE for building our enumerated path, and for maintenance.

;--==== 1.4. Create Function to build the node path; allows for individual updates
CREATE OR ALTER FUNCTION dbo.fnBuildNodePath
(
  @nodeId INT
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH BuildNodePath AS 
(
  SELECT n.ParentId, n.NodeId, N=1
  FROM   dbo.Nodes AS n 
  WHERE  n.NodeId = @nodeId
  UNION ALL
  SELECT v.ParentId, v.NodeId, a.N-1
  FROM   dbo.Nodes     AS v
  JOIN   BuildNodePath AS a
    ON a.ParentId = v.NodeId
)
SELECT 
  NodeId   = @NodeId, 
  NodePath = STUFF((
               SELECT   CONCAT('.',p.NodeId)
               FROM     BuildNodePath AS p
               ORDER BY p.N
               FOR XML PATH('')),1,1,'');

dbo.fnBuildNodePath returns the enumerated path for each NodeId. Note the query below and its results.

--==== 1.5. Build the enumerated sort paths
SELECT      n.NodeId, b.NodePath
FROM        dbo.nodes                     AS n
CROSS APPLY dbo.fnBuildNodePath(n.NodeId) AS b;
Results
Figure 1: dbo.fnBuildNodePath query results

We can use dbo.fnBuildNodePath to add the enumerated path to dbo.nodes, which is required for our indexed hierarchy. dbo.fnBuildNodePath can also be used to repair paths when records are modified.
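Here's a hedged sketch of how that persistence might look (this UPDATE is mine, not part of the article's setup code):

;--==== 1.6. (Sketch) Persist the enumerated sort path to dbo.nodes
UPDATE n
SET    NodePath = b.NodePath
FROM        dbo.nodes                     AS n
CROSS APPLY dbo.fnBuildNodePath(n.NodeId) AS b;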

The Indexed Hierarchy

Before we create our indexed view let's review the underlying logic. Using dbo.fnTally we can split the enumerated path and return each node along with the downstream members of its node tree.

Indexed Hierarchy Logic
;--==== 2. Index parsing logic using fnTally
SELECT
  n.NodeId,
  n.ParentId,
  n.NodePath,
  CurrentNode = Nd.C,
  IsLeafNode  = IIF(n.nodePath=p.Nd,1,0), -- leaf or descendant
  NodeDepth   = LEN(Nd.C) - LEN(REPLACE(Nd.C,'.',''))+1
FROM        dbo.nodes AS n
CROSS APPLY (VALUES(LEN(n.NodePath)+1))             AS np(Ln)
CROSS APPLY dbo.fnTally(1,np.Ln)                    AS t
CROSS APPLY (VALUES(SUBSTRING(n.NodePath,1,t.N-1))) AS p(Nd)
CROSS APPLY (VALUES(ISNULL(p.Nd,0)))                AS Nd(C)
CROSS APPLY (VALUES(IIF(n.nodePath=p.Nd,1,0)))      AS l(LNd)
WHERE       (SUBSTRING(n.NodePath,t.N,1)='.' OR t.N=np.Ln)
--ORDER BY Nd.C, -IIF(n.nodePath=p.Nd,1,0); -- for presentation
The denormalized result set

This query splits each node's path to return a row for each of its descendants.

Figure 2: Denormalized subsets of all node trees in dbo.nodes

CurrentNode represents the top of each node tree; all records where IsLeafNode = 0 are downstream members of CurrentNode's node tree. This design makes it easy to group, filter and analyze hierarchical data while enjoying big performance gains. Reviewing the execution plan, note the absence of a sort operator, thanks to the presence of a suitably sorted stream (AKA virtual index).

Figure 3: Indexed hierarchy logic using fnTally

This will not be the final execution plan, but it is the plan you get for each row you modify in dbo.nodes. The tally table is never modified, so data modifications are very quick.

Indexed Hierarchy DDL

With SQL Server indexed views we can't reference table valued functions or leverage APPLY. To circumvent these limitations we'll replace fnTally with a physical tally table, dbo.tally (DDL at the beginning of this article).

;--==== 3. Use the tally table logic to create an indexed view
CREATE VIEW dbo.vwNodes WITH SCHEMABINDING AS
SELECT
  n.ParentId,
  n.NodeId,
  n.NodePath,
  CurrentNode = ISNULL(SUBSTRING(n.NodePath, 1, t.N-1), 0),
  IsLeafNode  = CASE n.nodePath WHEN SUBSTRING(n.NodePath, 1, t.N-1) THEN 1 ELSE 0 END, -- leaf or ancestor
  NodeDepth   = LEN(ISNULL(SUBSTRING(n.NodePath, 1, t.N-1), 0)) -
                LEN(REPLACE(ISNULL(SUBSTRING(n.NodePath, 1, t.N-1), 0),'.',''))+1
FROM        dbo.nodes AS n
CROSS JOIN  dbo.tally AS t
WHERE       t.N BETWEEN 1 AND LEN(n.NodePath)+1 -- A node level Edge N-Gram "tokenization"
  AND       (SUBSTRING(n.NodePath,t.N,1)='.' OR t.N=LEN(n.NodePath)+1);

For all intents and purposes this logic, as is, will produce the same execution plan as the earlier fnTally logic and will perform about the same. Let's supercharge our view with some indexes.

--==== 4.1. Unique Clustered index to make it an indexed view
CREATE UNIQUE CLUSTERED INDEX uq_cl_vwNodes__currentNode
  ON dbo.vwNodes(currentNode, NodeId);

--==== 4.2. Nonclustered Indexes
CREATE NONCLUSTERED INDEX nc_vwNodes__1
  ON dbo.vwNodes(NodePath, NodeId, isLeafNode);
CREATE NONCLUSTERED INDEX nc_vwNodes__2
  ON dbo.vwNodes(NodeId, isLeafNode, nodeDepth) INCLUDE (NodePath,ParentId);
CREATE NONCLUSTERED INDEX nc_vwNodes__3
  ON dbo.vwNodes(IsLeafNode, nodeDepth);
CREATE NONCLUSTERED INDEX nc_vwNodes__4
  ON dbo.vwNodes(IsLeafNode, nodeId) INCLUDE (parentId,NodePath,NodeDepth);
CREATE NONCLUSTERED INDEX nc_vwNodes__5
  ON dbo.vwNodes(nodeDepth) INCLUDE(IsLeafNode);

Now we have a clustered index (by way of an indexed view) as well as some nonclustered indexes which can be used when traversing our hierarchy.

Basic Retrieval

With our indexed view in place we can return the denormalized hierarchy structure from Figure 2 (above), except now our hierarchy is stored nice and neatly in a single clustered index.

--==== 5. Basic SELECT against our indexed view
SELECT   h.*
FROM     dbo.vwNodes AS h WITH (NOEXPAND)
ORDER BY h.currentNode;

I included the NOEXPAND hint; this forces the optimizer to use the clustered index, or one of the nonclustered indexes created on our indexed view afterwards. I generally don't suggest query hints but I've had nothing but success with this technique. That said, make sure to do your own testing.

Execution Plan
Figure 4: Basic indexed hierarchy execution plan

It doesn’t get cleaner or more efficient than that.
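To make that concrete, here's a hedged example using the sample data above (node 2's enumerated path is '1.2'). Because the clustered index leads with CurrentNode, seeking into a single node tree is a simple index seek.

--==== (Sketch) Return node 2's entire node tree; the IsLeafNode = 1 row is node 2 itself
SELECT h.NodeId, h.NodePath, h.NodeDepth, h.IsLeafNode
FROM   dbo.vwNodes AS h WITH (NOEXPAND)
WHERE  h.CurrentNode = '1.2';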

Basic Hierarchy Analysis

Our indexed hierarchy can be joined to other tables to perform aggregations and other analysis. We'll keep it simple for this article and run some basic aggregations against our indexed hierarchy without joining to anything else. Note the simplicity of these queries compared to the complexity of what is being returned. The queries below return:

  1. A count of all descendants for each node with children
  2. A count of members by level
--==== 6.1. Count Descendants for each node with children
SELECT
  HLevel      = ISNULL(h.NodeDepth,0),
  CurrentNode = ISNULL(currentNode,'Total'),
  Descendents = COUNT(*)
FROM     dbo.vwNodes AS h WITH (NOEXPAND)
WHERE    h.isLeafNode = 0
GROUP BY h.NodeDepth, h.currentNode;

--==== 6.2. Count members by level 
SELECT
  HLevel   = ISNULL(CONVERT(VARCHAR(12),n.nodeDepth),'Total'), -- CONVERT avoids an int/varchar conversion error on the ROLLUP row
  Members  = COUNT(*)
FROM     dbo.vwNodes AS n WITH (NOEXPAND)
GROUP BY n.nodeDepth WITH ROLLUP;
Results
Figure 5: Left to right: results for queries 6.1 and 6.2

Again, virtually read-less: no complex queries or intermediate data structures to maintain. Just a GROUP BY clause and you're off to the races.

Execution Plans
Figure 6: Execution plan when leveraging an Indexed Hierarchy for traversal.
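Although joins are out of scope for this article, the shape of the idea is worth a quick sketch. dbo.Sales below is a hypothetical fact table (NodeId, Amount), not part of this article's setup code; because each NodeId appears in the view once per containing tree, each sale rolls up into every containing subtree's total.

--==== (Sketch) Aggregate a hypothetical fact table across each node tree
SELECT   h.CurrentNode, TotalAmount = SUM(s.Amount)
FROM     dbo.vwNodes AS h WITH (NOEXPAND)
JOIN     dbo.Sales   AS s -- hypothetical fact table: (NodeId INT, Amount MONEY)
  ON     s.NodeId = h.NodeId
GROUP BY h.CurrentNode;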

Conclusion

A physical tally table doesn't just allow you to replace loops in your functions and stored procedures, and indexed views shouldn't be limited to pre-aggregations and the like. The topic of tally tables and indexed views deserves much more attention.

Again, to keep things simple I restricted my examples to analysis of our hierarchy without joining to anything else. We’ll expand on this concept further in the next installment of this series. Thanks for reading!

Virtual Indexing Part 8: The Indexed Splitter Function

The fastest way to split a string is to have already done so.

Introduction

Happy 2021! Indexed views and tally tables: a virtually unexplored topic, one I will write about at length this year. This will be a quick read and there are likely to be at least a couple typos; time is short, so today is about quantity, not quality. In Part 3: The Indexed Function I said "the fastest way to perform a calculation is to have already done so;" in this installment I will demonstrate that the fastest way to split a string is also to have already done so.

When I introduced the indexed function I used prime numbers as my example. 7 is always a prime number; Factorial(5) always equals 120. You can create an indexed prime numbers function, an indexed factorial function, or an indexed version of any function that always returns the same output for a given argument. What if we wanted a more complex indexed function, such as one that splits a string?

I know about database normalization but I am not responsible for designing most of the databases I work with. If you've been in this racket for a couple fortnights you have run across that vendor DB written by great application developers with terrible back-end databases. It's not uncommon to see fields that contain a delimited list, e.g. '1,4,13,99', instead of a single atomic value. You can write an ETL package that leverages a good string splitter (such as DelimitedSplit8K or DelimitedSplit8K_LEAD) to regularly split and copy the values into a correctly normalized table. In this article I will show you a much faster, maintenance-free alternative which leverages a numbers table and an indexed view.

Introducing The “Indexed Splitter”

For this you will need a tally table with at least as many rows as the longest string you intend to split. 8,000 rows is enough unless you need to split strings longer than 8K. My tally table (dbo.tally) has 1,000,000 rows. First, the sample data.

Sample data

CREATE TABLE dbo.Test (ID INT, Column1 VARCHAR(20));
INSERT INTO  dbo.Test (ID,Column1) 
  VALUES (1,'ORM;PS;SUP'),(2,'ABC;XYZ;999;123');

Next we're going to borrow Jeff Moden's DelimitedSplit8K logic to create a set-based splitter that leverages a permanent tally table (a regular ol' properly indexed numbers table named dbo.tally). There's a small performance hit compared to DelimitedSplit8K but the upside makes it more than worth it, as you will see in a moment. Here's our splitter as a view.

The Splitter as a View

CREATE OR ALTER VIEW dbo.TestSplit WITH SCHEMABINDING AS 
SELECT 
  Id   = t.ID, 
  item = 
    SUBSTRING
    (
      t.Column1,
      tt.N+SIGN(tt.N-1),
      ISNULL(NULLIF((CHARINDEX(';',t.Column1,tt.N+1)),0),LEN(t.Column1)+1)-(tt.N)-SIGN(tt.N-1)
    ),
  ItemIndex = tt.N+1
FROM       dbo.Test  AS t
CROSS JOIN dbo.tally AS tt
WHERE      tt.N <= LEN(t.Column1)
AND        (tt.N = 1 OR SUBSTRING(t.column1,tt.N,1) = ';');
GO

The values are split and returned with their associated ID. This logic is going to be applied to an indexed view, which means we can't include parameters; the delimiter and the source of the string need to be embedded in our logic. Above we are splitting the contents of dbo.Test.Column1 using a semicolon as the delimiter.

SELECT f.Id, f.Item FROM dbo.TestSplit AS f;

Below is exactly what we expected.

Function Output

Result Set

Next the execution plan.

Splitter View Execution Plan

That's a good plan: set-based, linear and fast. There are faster splitters out there, but none that work without the APPLY table operator, and APPLY is not allowed in indexed views; subqueries either. My splitter logic doesn't use APPLY, subqueries or any other construct that prevents me from adding an index to my view, so let's add a unique clustered index and turn this view into an indexed split, where the split happens ahead of time and only once (unless the value is modified or deleted).

CREATE UNIQUE CLUSTERED INDEX uq_cl__testSplit ON dbo.TestSplit(Id,Item)

No error. Let’s run the same SELECT query from earlier and check the execution plan.

The Indexed Split Execution plan

It doesn't get cleaner or faster than that. This is a powerful example of what you can accomplish when using a physical numbers table to build an indexed view. This barely scratches the surface, as I hope to demonstrate in the near future.

Make my index a double


Now that your indexed function is in place you can expand its power by adding more indexes! I'll have my clustered index with a nonclustered chaser. Here we'll create an index for Item.

CREATE NONCLUSTERED INDEX nc__testSplit__item ON dbo.TestSplit(Item);

Now a query that filters on item.

SELECT f.* 
FROM   dbo.TestSplit AS f WITH (NOEXPAND)
WHERE  f.item <> '999,123';

To my chagrin, the optimizer often ignores nonclustered indexes on indexed views, which is the case here; this is why I'm using NOEXPAND. With the hint, the execution plan is even better than before: an index seek against a narrower and lighter nonclustered index.

Execution plan leveraging a second, nonclustered index

The Indexed Splitter Function

Let's wrap the guy in an iTVF for a nasty fast indexed function that splits a string ahead of time and maintains itself as needed! We'll include the option to exclude a value using an @exclude parameter.

The Function

CREATE FUNCTION dbo.itvf_TestSplit
(
  @exclude VARCHAR(20)
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT f.ID, f.Item, f.ItemIndex
FROM   dbo.TestSplit AS f WITH (NOEXPAND)
WHERE  f.Item <> @exclude;
GO

Let’s give’r a spin and check the execution plan.
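The query I ran looks like this (using the same exclusion value as the examples later in this article):

--==== Take the indexed splitter function for a spin
SELECT f.ID, f.Item, f.ItemIndex
FROM   dbo.itvf_TestSplit('999,123') AS f;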

Execution Plan


Same as before, but now we are accepting parameters, which means we have an indexed function that splits strings ahead of time. Not even STRING_SPLIT or a CLR splitter can compete here.

What about an “Item Number”?

Sometimes you need the item's ordinal position in the string; for this we need an item number (labeled "ItemNumber" in the query below). To return an item number without a sort in the execution plan, some assembly is required. DelimitedSplit8K and DelimitedSplit8K_LEAD both use ROW_NUMBER() to produce the ItemNumber column. ROW_NUMBER is not allowed in indexed views, which is why I included the ItemIndex column. It represents the item's position in the string, something I normally don't need. Here, however, it's perfect: it allows me to add an ItemNumber while still avoiding a sort in my final plan. Not a big deal for strings 8K or less, but it is for longer strings (something an indexed function can handle at no extra cost). First the new index:

CREATE NONCLUSTERED INDEX nc_poc__testSplit__ItemIndex ON dbo.TestSplit(Id, ItemIndex);

With that index in place let’s run a query which includes the item number.

SELECT 
  Id= split.Id,
  ItemNumber = ROW_NUMBER() OVER (PARTITION BY split.ID ORDER BY split.ItemIndex),
  split.Item
FROM dbo.itvf_TestSplit('999,123') AS split;

Output

Result set that includes ItemNumber

Execution plan

easy peasy lemon-squeezy

Still a fantastic execution plan: the same low cost as before despite the added complexity. Again, the calculations already happened; this function leverages one or more indexes to retrieve, not compute, the required values.

Conclusion

The indexed split is another game changer, yet it barely scratches the surface of the power of the virtual index and the indexed function. When you add a physical numbers (tally) table to your indexed views they can do more than pre-join or pre-aggregate values. Here we introduced the world's first "indexed split" and we're still just getting started! Tomorrow I'm going to introduce the indexed hierarchy. Thanks for reading!

Virtual Indexing Part 7: Finite Opposite Numbers

Virtual Indexing continues… How do we avoid sorting in either direction (ascending and descending)?

Introduction

This article is special for a few reasons, the first being that it's about an idea I came up with in second grade and have built on since. This is arguably my favorite topic with respect to virtual indexing.

My “Opposite Numbers”

First, a few subtraction problems solved the way we did them in the 80's:

Figure 1.1. Subtraction Series #1

For me, these problems were very easy; each only requires one or two simple calculations. For the first problem (13-1) just subtract 1 from 3. For the next we subtract 4 from 5, then 1 from 2, to get 11. I had subtraction down pat when subtracting smaller numbers from larger ones. This next series would be more of a struggle:

Figure 1.2. Subtraction Series #2

The first calculation for each problem involves subtracting a one-digit number from a two-digit number. For 37-19 the first step is to subtract 9 from 17; for #2, the first step is to subtract 6 from 13; for the third it's 5 from 14. For whatever reason I struggled with this, so I came up with opposite numbers; not the ones you know, but my own. Though I would not have described it this way then, the first step was to assign a relation between each number 1 through 9 (N) and its "opposite"; it was a little auxiliary table in my brain. The opposite of 1 is 9 (O(1)=9), the opposite of 2 is 8 (O(2)=8), 3 is 7, 4 is 6, and 5 is 5. Note the table below:

Figure 2. Set of numbers (N) and their "opposites"

Looking back at the problems in Figure 1.2, it was easier for me to first subtract 7 from 9 to get 2. The opposite of 2 is 8; then, each time I invoke my mental "opposite number" function, I subtract 1 from the top number. This was easier for me than remembering that 17-9=8. For the second problem (13-6) I calculate 6-3, then the opposite, which is 7. This can be expressed as O(6-3) = 7.

This made it possible for me to do subtraction much faster and is a mental technique I use to this day. 

Their Opposite Numbers

In 5th grade Pre-Algebra we learned about opposite numbers (also known as additive inverses). Here the opposite of 5 is -5, the opposite of -20 is 20. Though okay with the concept, I was upset that someone took "opposite numbers" from me, especially for something so trivial. In my head there were two types of opposite numbers: mine and theirs.

Their system of "opposite numbers" is important for understanding equations such as: (-(-9))-(5-(+5))-(-1), and later for expressions like: (-(-@A))-(@B-(+@B))-(-@C). In the world of relational algebra, expressed using T-SQL, we have:

DECLARE @A INT = 9, @B INT = 5, @C INT = 1;
SELECT (-(-@A))-(@B-(+@B))-(-@C); 

Countable and Uncountable Infinity

This article could survive without this section but it's relevant enough, and such a cool topic, that I decided to slip it in. Did you know that there are different kinds of infinity? For the purposes of this article let's look at countable infinity. A set of integers is countably infinite when its elements can be put in one-to-one correspondence with the set of natural numbers; in other words, one can count off all elements in the set in such a way that, even though the counting may take forever, any given element is reached in a finite amount of time.

For example, this series is countably infinite because you can always add another number:
  1, 2, 3, 4, 5, 6, 7, ...

This sequence is as well:
  0, -3, -6, -9, -12, ...

Uncountable infinity looks more like:
  0.1, 0.2, 0.3… 0.9, 0.11, 0.12…. 0.99999991, 0.99999992

Uncountable infinity demonstrates that there are infinitely many numbers between 0 and 1. I only included this so you can better understand what countable infinity is. Moving on…

Introducing Finite Opposite Numbers

"My opposite numbers" and "their opposite numbers" work in my head, but to pass the idea I'm about to cover on to others I had to come up with better nomenclature. What do my opposite numbers have in common with theirs? Each number in the set has an "opposite": a value which is an equal distance away from an arbitrary center point, AKA the median. I say arbitrary because: who made 0 the official median value? Any number can represent a median, right? 0 works for traditional opposite numbers because the set is infinite, without upper-bound or lower-bound values; the numbers are countably infinite in two directions beginning with 0.

Figure 3.1. Infinite Sequence traveling away from 0

This infinite set of numbers with zero as the median is what I have coined infinite opposite numbers. For a finite set with both an upper and a lower bound, I came up with finite opposite numbers. Let's take the numbers 1 through 11. Here the median is 6; O(5) = 7, O(4) = 8, O(3) = 9, etc.

Figure X. Finite sequence traveling away from 6

Assigning upper and lower bounds moves the median value.

Figure 3.2. How the median shifts

In the example above we have an odd number of rows; what happens when the row count is even? Consider the difference between 1-7 and 1-6:

Figure 3.3. Medians for the ranges 1-7 and 1-6

In the first example, 4 is the median; what is the median for the numbers 1-6? "3.5" is not a valid integer. For even row counts we'll consider the two numbers closest to the middle (3.5 in this example): the median is the tuple containing the numbers 3 & 4. To handle both odd and even sets I came up with this simple formula to calculate the opposite number: O(N) = L + H - N, where L and H are the set's lower and upper bounds.
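A quick sanity check of the formula against the 1-11 example above (my illustration):

--==== (Sketch) O(N) = L + H - N for the range 1 to 11 (median 6)
SELECT N = v.N, Op = 1 + 11 - v.N
FROM   (VALUES(3),(4),(5),(6),(7)) AS v(N);
-- Returns: (3,9),(4,8),(5,7),(6,6),(7,5)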

The Finite Opposites Function

It may seem unnecessary to create a function for this calculation, but I use it so that others know what I'm doing, especially considering that this is a new concept. As a T-SQL function we have:

Figure 4: Finite Opposite Number Logic

CREATE FUNCTION [dbo].[O] (@L INT,@H INT,@N INT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN SELECT Op = @L+@H-@N;

Here’s the entire function with comments.

Basic Use

In each of these examples the function accepts the parameters @L, @H and @N, builds a set of numbers from @L to @H, and returns the finite opposite for each N in the set.

Figure 5. Basic usage examples

DECLARE @L INT = 1, @H INT = 5;

--==== 1. Run dbo.fnTally
SELECT 
  N  = f.N,
  OP = N.Op
FROM        dbo.fnTally(@L,@H) AS f
CROSS APPLY o(@L,@H,f.N) AS N;

--==== 2. Set new values for @L and @H
SELECT @L = -2, @H = 2;

--==== Run dbo.GetNums
SELECT 
  N  = f.N,
  OP = N.Op
FROM        dbo.GetNums(-2,2) AS f
CROSS APPLY o(@L,@H,f.N) AS N;

Figure 6. Basic example results

Understanding the median tuple (1 or 2 members)

Below is an example of how the median value(s) are assigned.

Figure 7.1. Calculating the median

--==== 1. Even row count
DECLARE @L INT = 1, @H INT = 6;

SELECT 
  N  = f.N,  -- N
  OP = N.Op, -- O(N)
  DM = ABS(f.N-N.OP)/2 -- Distance from the median
FROM        dbo.fnTally(@L,@H) AS f
CROSS APPLY o(@L,@H,f.N) AS N;
GO
--==== 2. Odd row count
DECLARE @L INT = 1, @H INT = 7;

SELECT 
  N  = f.N,  -- N
  OP = N.Op, -- O(N)
  DM = ABS(f.N-N.OP)/2 -- Distance from the median
FROM        dbo.fnTally(@L,@H) AS f
CROSS APPLY o(@L,@H,f.N) AS N;

Figure 7.2. Odd/even row counts

Quick Review

We have now defined two types of "opposite numbers":

  1. Infinite opposite numbers (no upper or lower boundaries; 0 is the median)
  2. Finite opposite numbers (both the upper and lower boundaries are defined; the distance from the median is calculated as ABS(N-OP)/2)

Examples

There are more examples of finite opposites here; below are a couple of basic ones.

Example #1: A Home-Grown REVERSE Function

Let's start with a fun string problem and create our own REVERSE function. We can start by tokenizing the string into unigrams. I'll use the old FOR XML PATH trick for pre-2017 systems and STRING_AGG for those on 2017+.

Figure 8. Home-grown REVERSE examples

--==== 1. Basic REVERSE Function
DECLARE 
  @String     VARCHAR(8000) = 'ABC-123-XYZ',
  @pattern    VARCHAR(100)  = '%[A-Z]%';

--== 1.1. Using STRING_AGG (SQL 2017+)
SELECT
  Original = @String,
  Reversed = STRING_AGG(SUBSTRING(@string,N.Op,1),'') WITHIN GROUP(ORDER BY f.N)
FROM        dbo.fnTally(1,LEN(@String)) AS f
CROSS APPLY dbo.o(1,LEN(@String),f.N)   AS N;

--== 1.2. Using FOR XML PATH (2005 to 2014)
SELECT
  Original = @String,  
  Reversed = f.String
FROM
(
  SELECT      SUBSTRING(@string,N.Op,1)
  FROM        dbo.fnTally(1,LEN(@String)) AS f
  CROSS APPLY dbo.o(1,LEN(@String),f.N)   AS N
  ORDER BY f.N
  FOR XML PATH('')
) AS f(String);

Each of these works as expected. The performance can't match the built-in T-SQL REVERSE; this is an example of how to return data in either direction, leveraging the virtual index to avoid a sort. Now let's say you need a function to return the first N characters that match a pattern, with the option to return the string in a user-defined direction (forward or backward).

Figure 9. Custom REVERSE Function Examples

--==== 2. Get first/Last @matches number of characters that match @pattern
DECLARE 
  @String     VARCHAR(8000) = 'ABC-123-XYZ',
  @pattern    VARCHAR(100)  = '%[A-Z]%',
  @matches    BIGINT        = 4,
  @descending BIT           = 0; 

SELECT STRING_AGG(f.chars,'')
FROM 
(
  SELECT TOP(@matches) chars = a.Txt
  FROM        dbo.fnTally(1,LEN(@String)) AS f
  CROSS APPLY dbo.o(1,LEN(@String),f.N)   AS N
  CROSS APPLY (VALUES(IIF(@descending=0,
             SUBSTRING(@string,f.N,1),
             SUBSTRING(@string,N.Op,1)))) AS a(Txt) 
  WHERE       a.Txt LIKE @pattern 
) AS f;

Example #2: Sortless Parallel Apply 

I'll wrap up with this as it's an excellent example of finite opposites in use. In Itzik Ben-Gan's Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions, Ben-Gan demonstrates the "Parallel APPLY" trick, a way to improve the parallel plan produced by the optimizer. The code below is the setup code from the book (I misplaced my copy so I can't cite the page, sorry).

Figure 10. Parallel APPLY setup code (from the book above)

BEGIN
  IF OBJECT_ID('dbo.Transactions','U') IS NOT NULL DROP TABLE dbo.Transactions;
  IF OBJECT_ID('dbo.Accounts','U')     IS NOT NULL DROP TABLE dbo.Accounts;
  CREATE TABLE dbo.Accounts
  (
    actid   INT         NOT NULL,
    actname VARCHAR(50) NOT NULL,
    CONSTRAINT PK_Accounts PRIMARY KEY(actid)
  );
  CREATE TABLE dbo.Transactions
  (
    actid  INT   NOT NULL,
    tranid INT   NOT NULL,
    val    MONEY NOT NULL,
    CONSTRAINT PK_Transactions          PRIMARY KEY(actid, tranid),
    CONSTRAINT FK_Transactions_Accounts FOREIGN KEY(actid) REFERENCES dbo.Accounts(actid)
  );
  
  INSERT INTO dbo.Accounts(actid, actname) 
    VALUES(1,'account 1'),(2,'account 2'),(3,'account 3');
  INSERT INTO dbo.Transactions(actid, tranid, val) VALUES
    (1, 3, 5.00),(1, 4, 2.00),(1, 5, 1.00),(1, 6, 3.00),(1, 7,-4.00),(1, 8,-1.00),
    (1, 9,-2.00),(1,10,-3.00),(2, 1, 2.00),(2, 2, 1.00),(2, 3, 5.00),(2, 4, 1.00),
    (2, 5,-5.00),(2, 6, 4.00),(2, 7, 2.00),(2, 8,-4.00),(2, 9,-5.00),(2,10, 4.00),
    (3, 1,-3.00),(3, 2, 3.00),(3, 3,-2.00),(3, 4, 1.00),(3, 5, 4.00),(3, 6,-1.00),
    (3, 7, 5.00),(3, 8, 3.00),(3, 9, 5.00),(3,10,-3.00),(1, 1, 4.00),(1, 2,-2.00);
  
  CREATE INDEX idx_actid_val_i_tranid
  ON dbo.Transactions(actid /* P */, val /* O */)
  INCLUDE(tranid /* C */);
END;

Let's compare the original Parallel APPLY technique to a version which leverages my finite opposites function (dbo.O). For this to work I need the total number of rows as my upper bound. Since I don't have the row count ahead of time, I use COUNT as a window aggregate partitioned by actid.

Figure 11. Parallel APPLY with/without Finite Opposites

PRINT CHAR(10)+('Parallel APPLY')+CHAR(10)+REPLICATE('-',90);
SELECT      
  c.actid,
  fn_pre.tranid,
  fn_pre.val,
  fn_pre.rownumasc,
  fn_pre.rownumdesc
FROM  dbo.Accounts AS c
CROSS APPLY 
(
  SELECT t.tranid, t.val,
    rownumasc  = ROW_NUMBER() OVER(ORDER BY t.val),
    rownumdesc = ROW_NUMBER() OVER(ORDER BY t.val DESC)
  FROM  dbo.Transactions AS t
  WHERE t.actid = c.actid
) AS fn_pre
OPTION (QUERYTRACEON 8649);
GO

--== 1.6.4. Parallel APPLY + Finite Opposites
PRINT CHAR(10)+('Parallel APPLY+ Finite Opposites')+CHAR(10)+REPLICATE('-',90);
SELECT
  actid      = c.actid,
  tranid     = fn_pre.tranid,
  val        = fn_pre.val,
  rownumasc  = fn_pre.rownumasc,
  rownumdesc = n.Op
FROM dbo.Accounts AS c
CROSS APPLY
(
  SELECT t.tranid, t.val, 
    rownumasc = ROW_NUMBER() OVER(ORDER BY t.val),
    rowz      = COUNT(*) OVER (PARTITION BY t.actid ORDER BY (SELECT NULL))
  FROM   dbo.Transactions AS t
  WHERE  T.actid = C.actid
) AS fn_pre
CROSS APPLY dbo.o(1,fn_pre.rowz-fn_pre.rownumasc+1,fn_pre.rownumasc) AS N
OPTION (QUERYTRACEON 8649);

Figure 12. Execution Plans

Note the second plan does not require a sort.

I will update this article with some performance test results later this week; I discovered a small test error just as I was about to publish. That said, the sortless version is faster. I promise.

Conclusion

Ordering rows in both Ascending and Descending order without a sort. Happy 2021!

Virtual Indexing Part 6: Combinatorial Index Optimization

In Part 5 we learned how to return a distinct set of numbers from a one Billion row table in under 1/10th of a second. We can do better!

Disclaimer

I’ve never made it to Part 6 of a technical series. For me, a huge obstacle to getting quality content out in a timely manner is the proof-reading and final editing process. Like a few of my friends in this field, I have many new techniques and programming concepts that I’m dying to share which have been decaying in MS Word Docs and .sql files for years. I started my NTally article in 2014 for example.

In September, while quarantined and after listening to too much David Goggins, I decided to step up my writing (and research) game and simply began publishing the first draft of my work the moment I’m done authoring it… What I’m trying to say is, I’ve decided to err on the side of delivering content. This means there will be typos, now and in the future. Sorry. I feel better getting it done, however, typos and all, than taking my good ideas to the grave. If you are a writer/blogger or are aspiring to do so, I suggest following my lead and just get it done – It feels so much bettr! 😉

Introduction

This series began with the introduction of the virtual index: the suitably sorted stream of numbers returned by T-SQL ROW_NUMBER, which acts very much like a physical index with respect to sorting, grouping and filtering. Unlike a physical index, created using DDL, the virtual index is invoked with a SELECT statement against fnTally and goes away when your query is finished. No disk space to consume, no fragmentation to address, nothing to check into your CI/CD system – just a magical, in-memory index (suitably sorted stream) which comes and goes as requested.
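If you're just joining the series, here's a minimal sketch of the idea; the ORDER BY below does not cost a sort operator because fnTally's numbers arrive as a suitably sorted stream.

--==== (Sketch) The virtual index at work: no sort operator in this plan
SELECT   f.N
FROM     dbo.fnTally(1,100000) AS f
ORDER BY f.N;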

In Part 5 we reviewed how to return a distinct set of numbers from a one billion row table in under 1/10th of a second using what I have coined a multidimensional virtual index – that is, combining a virtual index with a real one to speed things up. More technically, we're giving the optimizer a wider range of options than it would have with only one index in place.

So, if adding one virtual index to your query can speed things up, can we speed things up even further by adding more? I mean, they're basically free and can be pulled out of thin air – why not? That's what we're going to review today.

Dimensionality

Seasoned SQL developers, as well as anyone who has written MDX expressions, are familiar with the concept of dimensionality. Using SSAS, for example, we can write an MDX expression to return a matrix with two or more dimensions as shown below. The Month dimension is on the x-axis, AKA the columns(0) axis in MDX; the Vehicle dimension is on the y-axis, AKA rows(1). For brevity I included two Month members and three Vehicle members. The numbers can represent anything (e.g. Sales, Inventory, etc.). With MDX we can easily return something that looks like this:

Figure 1. Two-dimensional matrix with Vehicle on the x-axis, month(s) on the y-axis.


Now let's say we wanted a deeper level of granularity for each vehicle color. We could add a third dimension for color to our expression to return something like this:

Figure 2. Three-dimensional matrix; a 3rd dimension added for color


What does this have to do with indexing or performance, you ask? Let's apply this same concept, leveraging RangeAB, to perform what I've coined combinatorial index optimization. Think of it as combinatorial optimization for virtual indexes and indexed functions.

Combinatorial Index Optimization

Below is our query from Part 5 of this series, where we reduced the time it took to return a distinct set of numbers from a one billion row set from 111 seconds, using traditional methods, to only 0.07 seconds using a two-dimensional index.

  SELECT fnFilter.N1 --fnFilter.RN-(b.Mn-1)
  FROM  (SELECT MIN(t.N), MAX(t.N) FROM #t AS t) AS b(Mn,Mx)
  CROSS APPLY
  (
    SELECT      r.RN, r.N1
    FROM        core.rangeAB(b.Mn,b.Mx,1,1) AS r
    CROSS APPLY (SELECT TOP (1) 1 FROM #t AS t WHERE t.N = r.N1) AS z(x)
  ) AS fnFilter
  GROUP BY fnFilter.N1, b.Mn
  ORDER BY fnFilter.N1;

As you may recall, we reduced the runtime by using the tally table for the left side of the one-to-many relationship and only returning numbers that exist in the source table (#t). We've reduced the question from "where is the next distinct value?" to a simple yes/no question: "Is 1 in here? Is 2 in here?" This is an example of a two-dimensional virtual index created by way of combinatorial index optimization.

Figure 3. Two-dimensional Virtual Index

The tally table (RangeAB in this example)

Virtual Indexing while wearing 3-D glasses

As I have said before, the great thing about a virtual index is that it's free. Armed with a correctly constructed tally table function, I can conjure one up any time I wish, just like an AD&D Level 7 Magic User.

In the real world, for a table with millions or billions of rows there would almost certainly be a table or other indexed object (e.g. an indexed view) that I could join to for filtering. Either way, using rangeAB and its @Gap parameter, we can add another index for an even cooler type of filtering. First let's review our 2-D solution. Note the code and my comments:

Figure 4. How the Two-dimensional Virtual Index works

Next our 3-D solution.

DECLARE @Gap BIGINT = 100, @N BIGINT; -- @N added so the snippet runs standalone

SELECT @N = fnFilter.N1
  FROM        (SELECT MAX(t.N) FROM #t AS t) AS mx(N)
  CROSS APPLY core.rangeAB(1,mx.N,@Gap,1) AS b
  CROSS APPLY
  (
    SELECT r.RN, r.N1
    FROM   core.rangeAB(b.N1,b.N2-1,1,1) AS r
    CROSS APPLY (SELECT TOP (1) 1 FROM #t AS t WHERE t.N = r.N1) AS z(x)
  ) AS fnFilter
  WHERE    EXISTS (SELECT 1 FROM #t AS t WHERE t.N BETWEEN b.N1 AND b.N2-1) -- $$$
  GROUP BY fnFilter.N1
  ORDER BY fnFilter.N1;

This image will help explain the changes.

Figure 5. How the Three-dimensional Virtual Index works

Adding this 3rd dimension will dramatically improve performance in two distinct ways. First, it reduces the amount of work required simply by ignoring entire partitions of our sequence once the EXISTS statement determines it's safe to do so. Where Figure 3 helps you picture a 2-D virtual index, here's what a 3-D virtual index looks like.

Figure 6. How the Three-dimensional virtual index works

Partitions without members are ignored

Performance Test 1

For brevity I set @gap to 3 for the example above. Notice how, for 1-3 and 13-15, our second tally table (by order of appearance in this article) pre-filters entire subsets of work. Next, the performance test. For this test I used the same test harness from Part 5 of this series to create one billion rows with numbers ranging from 1 to 10,000 and very few gaps. For the 3-D solution I set @gap to 100.

Figure 7. Test #1 Results – 1 Billion, Dense

The 3-D index improves with a parallel plan

In this case the 3-D index with a serial plan slowed things down just a smidgen because I purposely made sure there were few gaps. I included this test to demonstrate how, unlike the 2-D index, we get a faster result with a parallel plan, despite the fact that the additional rangeAB call couldn't really filter anything. Adding an additional dimension gives the optimizer better multithreading options. That is the second distinct advantage of the 3-D index over a 2-D one.

Performance Test 2

For this test I filled a table with 50,000 numbers ranging from 1 to 1,000,000. Next I used my test harness to duplicate the rows until I had two billion. This means my tally table must generate 1,000,000 rows for the left side of my query. This is where the 3-D index shines. I set my @gap parameter to 6400 for this test.

Figure 8. Test #2 Results – 2 Billion, Sparse

3-D virtual index with gap reduction saves the day.

The 2-D solution runs for 12 seconds regardless of how many CPUs you throw at it. The 3-D solution runs for 1.3 seconds with a serial plan, 0.396 seconds with a parallel plan. That's correct: less than half a second to extract 50K distinct values from a set of 2,000,000,000.

Conclusion

This, again, is what I mean by More Fast, Less Code. In Part 7 we’ll dive into what is arguably my favorite topic: Finite Opposite Numbers. You’ll want to be seated for that one.

Virtual Indexing Part 4: The NTally Table

A set-based, more performant alternative to SQL Server’s NTILE function coming right up.

Introduction

Few authors have impacted my career like Itzik Ben-Gan. If you are trying to get a grasp on how to develop high-performing, set-based SQL then a good Ben-Gan book will be a vital part of your SQL toolbox. Today Mr. Ben-Gan posted a challenge to create the fastest number generator. I offered up rangeAB, which leverages Jeff Moden's fnTally function, the fastest "number generator" in the business in my experience.

When I first learned how to use a numbers table I began trying to solve problems I'd never seen other people solve without a loop or other iterative method. I thought it was fun, and doing so helped me grow my skills. One thing I never saw anyone do is use a numbers table to develop a set-based alternative to NTILE; I've even heard people say it's not possible. Today we're going to come up with two functions, each of which blows the doors off NTILE with respect to speed and IO: the first using a virtual auxiliary table of numbers (AKA "tally table"), the second using a physical numbers table.

Quick NTILE overview

Before I dive into the algorithm let’s quickly review what NTILE does. NTILE is a window ranking function used to divide a set of rows as evenly as possible.

In High-Performance T-SQL Using Window Functions Ben-Gan explains it like this:

The NTILE function allows you to arrange the rows within the window partition in roughly equally sized tiles, based on the input number of tiles and specified window ordering. For example, suppose that you want to arrange the rows from the OrderValues view in 10 equally sized tiles based on val ordering. There are 830 rows in the view; hence, with 10 requested tiles, the tile size is 83 (that’s 830 divided by 10). So the first 83 rows (the first tenth) based on val ordering will be assigned with tile number 1, the next 83 with tile number 2, and so on.

High-Performance T-SQL Using Window Functions, Itzik Ben Gan

Let's use fnTally to generate 13 rows and use NTILE to divide them into 4 tile groups.

Figure 1: NTILE test

DECLARE  @tiles BIGINT = 4,
         @rows  BIGINT = 13;

--==== NTILE CLASSIC
SELECT f.N, Tile = NTILE(@tiles) OVER (ORDER BY f.N)
FROM   dbo.fnTally(1,@rows) AS f;

Figure 1.2: NTILE test results

N     Tile
----- -------
1     1
2     1
3     1
4     1
5     2
6     2
7     2
8     3
9     3
10    3
11    4
12    4
13    4

To calculate the number of rows required for each tile number from 1 to @tiles, take @rows/@tiles, then add 1 for each tile number that is less than or equal to @rows%@tiles. In the example above we get 3 rows each for tiles 1, 2, 3 and 4 (13/4 = 3); then tile 1 gets an additional member because 1 <= 13%4. If we add a row, tile 2 will also get an additional member (14%4 = 2); if we set @rows to 15, tiles 1, 2 & 3 will have 4 members (15%4 = 3). If we set @rows to 16 we get 4 rows for each of tiles 1, 2, 3 & 4 (16/4 = 4, 16%4 = 0).
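Here's a quick sketch of that arithmetic (my illustration, using dbo.fnTally from earlier in this article):

--==== (Sketch) Rows per tile: @rows/@tiles, plus 1 when the tile number <= @rows%@tiles
DECLARE @tiles BIGINT = 4, @rows BIGINT = 13;

SELECT TileNumber = t.N,
       Members    = @rows/@tiles + IIF(t.N <= @rows%@tiles, 1, 0)
FROM   dbo.fnTally(1,@tiles) AS t;
-- Returns: (1,4),(2,3),(3,3),(4,3); 4+3+3+3 = 13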

This is how NTILE works, and this is how our alternative must work.

The NTally Table – What it is and how it replaces NTILE

This heading is a nod to The "Numbers" or "Tally" Table: What it is and how it replaces a loop. You can think of an NTally table as a tally table with tiles, or a tiled tally table. I developed my first NTally in 2014 using a physical numbers table (the DDL for my numbers table can be found here). An example of my first NTally table is posted here: Odd (to me) Behavior with NTILE() and NULL Values in Source Data. I have since improved on the physical numbers table version and developed one using a virtual table. Let's examine the logic.

Figure 2.1: NTally Logic

DECLARE  @tiles BIGINT = 4,
         @rows  BIGINT = 13,
         @desc  BIT    = 0;

--==== NTILE CLASSIC
SELECT RN = f.N, Tile = NTILE(@tiles) OVER (ORDER BY f.N)
FROM   dbo.fnTally(1,@rows) AS f;

--==== ALTERNATIVE #1: PHYSICAL Auxiliary table of Numbers (dbo.NTally)
SELECT RN = ROW_NUMBER() OVER (ORDER BY t.N), Tile = t.N
FROM   dbo.tally AS t
CROSS APPLY
(
  SELECT TOP(@rows/@tiles+IIF(t.N<=@rows%@tiles,1,0)) $
  FROM       dbo.tally AS t1
  CROSS JOIN dbo.tally AS t2
) AS x(x)
WHERE t.N<=@tiles;

--==== ALTERNATIVE #2: VIRTUAL Auxiliary table of Numbers (dbo.NTallyRangeAB)
SELECT      RN     = ROW_NUMBER() OVER (ORDER BY t.RN),
            Tile   = t.RN
FROM        core.rangeAB(1,@tiles,1,1)                                  AS t
CROSS APPLY (VALUES(IIF(@desc=0,t.RN,t.OP)))                            AS d(D)
CROSS APPLY core.rangeAB(1,@rows/@tiles+IIF(d.D<=@rows%@tiles,1,0),1,1) AS x;

Each query returns the same thing except that I renamed “N” to “RN”.

Figure 2.2: NTally Results

RN    Tile
----- ------
1     1
2     1
3     1
4     1
5     2
6     2
7     2
8     3
9     3
10    3
11    4
12    4
13    4
Simple Demo

The NTally table is not ready to replace NTILE out of the box; a small amount of assembly is required, and this is what the RN column is for. Below is an example of how to "tile" a table using NTILE, followed by examples which leverage dbo.NTally and dbo.NTallyRangeAB (which leverages dbo.rangeAB).

The NTally solutions are easy to assemble: instead of "NTILE(@tiles) OVER (ORDER BY s.SomeID)" we use ROW_NUMBER inside a CTE with the same ORDER BY. With the row number in place (RN) we can join to our NTally table and return a "tiled" result set without NTILE.

Note that the function needs the number of rows ahead of time; a simple COUNT of the rows, passed to the function as the @rows parameter, does the trick. This adds a small penalty but we still see huge time/IO gains nonetheless.

Figure 3.1: NTally Functional Demo

--==== 1. Sample Data
IF OBJECT_ID('tempdb..#SomeTable') IS NOT NULL DROP TABLE #SomeTable;
CREATE TABLE #SomeTable
(
  SomeId BIGINT PRIMARY KEY
);
INSERT #SomeTable(SomeId) SELECT f.N FROM dbo.GetNums(1,14) AS f;

--==== 2. Parameters and examples
DECLARE  @tiles BIGINT = 5,
         @rows  BIGINT = 14,
         @desc  BIT    = 0;

PRINT CHAR(13)+'NTILE CLASSIC'+CHAR(13)+REPLICATE('-',90);
  SELECT s.SomeId, Tile = NTILE(@tiles) OVER (ORDER BY s.SomeID)
  FROM   #SomeTable AS s;

PRINT CHAR(13)+'NTally'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      a.SomeID, nt.Tile
  FROM        anchor AS a
  CROSS JOIN  dbo.NTally(@tiles, (SELECT COUNT(*) FROM #SomeTable)) AS nt
  WHERE       a.rn = nt.rn

PRINT CHAR(13)+'NTallyRangeAB'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      a.SomeID, nt.Tile
  FROM        anchor AS a
  CROSS APPLY core.NTallyRangeAB(@tiles, (SELECT COUNT(*) FROM #SomeTable),@desc) AS nt
  WHERE       a.rn = nt.rn;

Each query returns the correct result set:

Figure 3.2: Results

SomeID  Tile
------- -------
1       1
2       1
3       1
4       2
5       2
6       2
7       3
8       3
9       3
10      4
11      4
12      4
13      5
14      5

If you have kept up with this series you can think of dbo.NTally as an indexed function and dbo.NTallyRangeAB as a function that leverages the virtual index. Now for performance.

Test #1: 1,000,000 Rows – Time and IO

I'm running SQL 2019, which allows for batch mode on rowstore. I'm going to run the NTILE query twice: first as-is, then with batch mode disabled. I can't compete with batch mode but we'll get close, and we'll compare the results with the NTally functions. Unlike the NTILE examples here, the NTally versions get faster when they get a parallel plan; I excluded the parallel NTILE test because it was too awful.

Note, too, that the optimizer tries to use a hash match join for joining the row numbers. A merge join is the right join for this task; in the examples below I let the optimizer pick the join, then force a merge join so you can see the performance difference.

Figure 4: 1M Row Test

IF OBJECT_ID('tempdb..#SomeTable') IS NOT NULL DROP TABLE #SomeTable;
CREATE TABLE #SomeTable
(
  SomeId BIGINT PRIMARY KEY
);
INSERT #SomeTable(SomeId) SELECT f.N FROM dbo.GetNums(1,1000000) AS f;

SET STATISTICS TIME, IO ON;
  DECLARE @Tiles BIGINT = 3, @NT BIGINT;

  PRINT CHAR(13)+'NTILE CLASSIC with Batch Mode'+CHAR(13)+REPLICATE('-',90);
  SELECT @NT = NTILE(@Tiles) OVER (ORDER BY s.SomeID)
  FROM   #SomeTable AS s;

  PRINT CHAR(13)+'NTILE CLASSIC No Batch Mode'+CHAR(13)+REPLICATE('-',90);
  SELECT @NT = NTILE(@Tiles) OVER (ORDER BY s.SomeID)
  FROM   #SomeTable AS s
  OPTION(USE HINT('DISALLOW_BATCH_MODE')); -- Crawls when Parallel

  PRINT CHAR(13)+'dbo.NTALLY SERIAL, Hash Join'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  CROSS APPLY dbo.NTally(3, (SELECT COUNT(*) FROM #SomeTable)) AS nt
  WHERE       a.rn = nt.rn
  OPTION (MAXDOP 1);

  PRINT CHAR(13)+'dbo.NTally SERIAL, Merge Join'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  INNER 
  MERGE JOIN dbo.NTally(@Tiles, (SELECT COUNT(*) FROM #SomeTable)) AS nt
    ON       a.rn = nt.rn
  OPTION (MAXDOP 1);

  PRINT CHAR(13)+'dbo.NTally PARALLEL, Merge Join'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  INNER 
  MERGE JOIN dbo.NTally(@Tiles, (SELECT COUNT(*) FROM #SomeTable)) AS nt
    ON       a.rn = nt.rn
  CROSS JOIN dbo.make_parallel() AS x;

  PRINT CHAR(13)+'dbo.NTallyRangeAB SERIAL, Hash Join'+CHAR(13)+REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  CROSS APPLY core.NTallyRangeAB(@Tiles, (SELECT COUNT(*) FROM #SomeTable),0) AS nt
  WHERE       a.rn = nt.rn
  OPTION (MAXDOP 1);

  PRINT CHAR(13)+'dbo.NTallyRangeAB SERIAL, Merge Join'+CHAR(13)+
    REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  INNER 
  MERGE JOIN core.NTallyRangeAB(@Tiles, (SELECT COUNT(*) FROM #SomeTable),0) AS nt
    ON       a.rn = nt.rn
  OPTION (MAXDOP 1);

  PRINT CHAR(13)+'dbo.NTallyRangeAB Parallel, Merge Join'+CHAR(13)+
    REPLICATE('-',90);
  WITH anchor AS
  (
   SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
   FROM   #SomeTable AS s
  )
  SELECT      @NT = nt.Tile -- a.SomeID, nt.* --AS TileGroup
  FROM        anchor AS a
  INNER 
  MERGE JOIN core.NTallyRangeAB(@Tiles, (SELECT COUNT(*) FROM #SomeTable),0) AS nt
    ON       a.rn = nt.rn
  CROSS JOIN dbo.make_parallel() AS x;
SET STATISTICS TIME, IO OFF;
GO

Figure 4.2 Results:

NTILE CLASSIC with Batch Mode
------------------------------------------------------------------------------------------
Table '#SomeTable_00000001CF14'. Scan count 1, logical reads 2108...
Table 'Worktable'. Scan count 1, logical reads 46274, physical reads 0...

 SQL Server Execution Times: CPU time = 281 ms,  elapsed time = 284 ms.

NTILE CLASSIC No Batch Mode
------------------------------------------------------------------------------------------
Table 'Worktable'. Scan count 3, logical reads 2867958, physical reads 0... 
Table '#SomeTable_00000001CF14'. Scan count 1, logical reads 2108...

 SQL Server Execution Times: CPU time = 2375 ms,  elapsed time = 2378 ms.

dbo.NTALLY SERIAL, Hash Join
------------------------------------------------------------------------------------------
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750
Table 'tally'. Scan count 3, logical reads 1756, physical reads 0...

 SQL Server Execution Times: CPU time = 984 ms,  elapsed time = 997 ms.

dbo.NTally SERIAL, Merge Join
------------------------------------------------------------------------------------------
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750, physical reads 0...
Table 'Worktable'. Scan count 0...
Table 'tally'. Scan count 3, logical reads 599, physical reads 0...

 SQL Server Execution Times: CPU time = 922 ms,  elapsed time = 917 ms.

dbo.NTally PARALLEL, Merge Join
------------------------------------------------------------------------------------------
Table 'Worktable'. Scan count 0...
Table 'Worktable'. Scan count 0...
Table 'tally'. Scan count 9, logical reads 601, physical reads 0...
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750, physical reads 0

 SQL Server Execution Times: CPU time = 2408 ms,  elapsed time = 790 ms.

dbo.NTallyRangeAB SERIAL, Hash Join
------------------------------------------------------------------------------------------
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750, physical reads 0

 SQL Server Execution Times: CPU time = 906 ms,  elapsed time = 946 ms.

dbo.NTallyRangeAB SERIAL, Merge Join
------------------------------------------------------------------------------------------
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750, physical reads 0...
Table 'Worktable'. Scan count 0...

 SQL Server Execution Times: CPU time = 969 ms,  elapsed time = 977 ms.

dbo.NTallyRangeAB Parallel, Merge Join
------------------------------------------------------------------------------------------
Table 'Worktable'. Scan count 0, logical reads 0...
Table '#SomeTable_00000001CF14'. Scan count 7, logical reads 14750, physical reads 0...

 SQL Server Execution Times: CPU time = 1639 ms,  elapsed time = 535 ms.

Completion time: 2020-12-10T20:02:28.6874120-06:00

NTILE with batch mode runs for 284ms with a serial plan. Without batch mode, however, it just can't compete with dbo.NTally or dbo.NTallyRangeAB. First, check those reads – 2,867,958 – and it runs for over 2 seconds. Each NTally solution runs in under 1 second, with NTallyRangeAB the winner at half a second and 14,750 reads: a 4X speed increase and a 99.5% I/O reduction.

No Looking Back

I snuck a bonus feature into dbo.NTallyRangeAB. You may have noticed a third column named TileOP: the tile group in descending order with the distribution flipped, e.g. the "overflow" tiles are assigned to the highest tile numbers first. For this we simply return TileOP instead of the Tile column. Let's use our 14-row test from Figure 3.2.

Figure 5.1 NTallyRangeAB TileOP Demo:

DECLARE  @tiles BIGINT = 5,
         @rows  BIGINT = 14,
         @desc  BIT    = 0;

WITH anchor AS
(
 SELECT s.SomeID, ROW_NUMBER() OVER (ORDER BY s.SomeID) AS rn
 FROM   #SomeTable AS s
)
SELECT      a.SomeID, nt.Tile, nt.TileOP
FROM        anchor AS a
CROSS APPLY core.NTallyRangeAB(@tiles, (SELECT COUNT(*) FROM #SomeTable),@desc) AS nt
WHERE       a.rn = nt.rn
ORDER BY    a.SomeId

I returned the Tile and TileOP columns so you can see the difference.

Figure 5.2. Results:

SomeID   Tile   TileOP
-------- ------ --------
1        1      5
2        1      5
3        1      5
4        2      4
5        2      4
6        2      4
7        3      3
8        3      3
9        3      3
10       4      2
11       4      2
12       4      2
13       5      1
14       5      1

Some interesting lessons here. First, NTILE teamed with batch mode is nasty fast and tough to beat. If batch mode is not an option then NTally and NTallyRangeAB are the only game in town as far as I'm concerned.

This concludes Part 4 of this series. Forgive any typos – this is the first/last draft. Cheers!

Virtual Indexing Part 3: The Indexed Function

"The fastest way to perform a calculation is to have already done so."

Introduction

Here’s an interesting problem: given a sequence of numbers between M and N, how many are prime numbers? How would you solve this and how efficient will your solution be if M=0 and N=10,000,000? On my PC I’m able to get the job done in about 3 seconds.

Introducing the Indexed Function

One day while trying to figure out some recursive query I stumbled across a SQL Server Central post about calculating a Fibonacci sequence where someone asked, "why not just store the values in a table?" It seems obscenely obvious to me now, but at the time it blew me away that I had never thought of it. It still blows my mind how few developers take advantage of this. Just populate a table (on-disk or in-memory) with the values you need, add indexes as needed, then build an inline table valued function (iTVF) to retrieve the value with an index seek. This allows your function to run in O(1) time for each value passed to it. Indexed values are stored as a suitably sorted stream, which makes it possible to retrieve the value(s) without a scan. A web search for "create Fibonacci function" or "function for prime numbers" will generally return various algorithms (mostly recursive) but scarce mention of pre-calculating problems where the values never change. Doesn't Fibonacci(7) always equal 13? Factorial(5) = 120? When is 7 not a prime number? I suspect that the fastest way to perform a calculation is to have already done so, then to save the result in an index for fast retrieval.
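To make the idea concrete, here's a minimal sketch of an indexed factorial function (the table, function and values are my illustration, using the idx schema described below):

--==== (Sketch) A minimal indexed function: pre-computed factorials
CREATE TABLE idx.factorial (N INT NOT NULL PRIMARY KEY, Fact BIGINT NOT NULL);
INSERT idx.factorial (N, Fact) VALUES (1,1),(2,2),(3,6),(4,24),(5,120); -- extend as needed
GO
CREATE FUNCTION idx.fnFactorial (@N INT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
  SELECT f.N, f.Fact FROM idx.factorial AS f WHERE f.N = @N; -- one-row index seek
GO
SELECT f.Fact FROM idx.fnFactorial(5) AS f; -- 120, retrieved rather than computed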

In Paul White’s article, Understanding and using APPLY (Part 1), he refers to an inline table valued function (iTVF) as “a view that accepts parameters”. With that in mind, what would be the best way to describe an indexed view (or table) that accepts parameters? I’m thinking indexed function.
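To make the concept concrete, below is a minimal sketch of a Fibonacci indexed function. To be clear, this is my own illustration, not code from the original Fibonacci thread: the table, function and column names are invented, and the seed rows only cover Fibonacci(1) through Fibonacci(10).

--==== Hypothetical indexed-function sketch; extend the seed rows as needed
CREATE TABLE dbo.fibonacci
(
  N     INT    NOT NULL,
  Value BIGINT NOT NULL,
  CONSTRAINT pk_fibonacci PRIMARY KEY CLUSTERED(N)
);

INSERT dbo.fibonacci (N,Value)
VALUES (1,1),(2,1),(3,2),(4,3),(5,5),(6,8),(7,13),(8,21),(9,34),(10,55);
GO
--==== The iTVF retrieves the pre-calculated value with a single index seek: O(1) per call
CREATE FUNCTION dbo.fnFibonacci (@N INT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT f.N, f.Value FROM dbo.fibonacci AS f WHERE f.N = @N;
GO
SELECT f.Value FROM dbo.fnFibonacci(7) AS f; -- returns 13 without calculating anything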

Nasty Fast Primes

Primes is a great example since calculating them can be costly, especially when using a solution which “counts up” iteratively or recursively. For this exercise we’re going to build a prime number function that can determine if an integer between 1 and 10,000,000 is prime, and do so in O(1) time per value. I know that 99.999% of you don’t need a prime number function, so remember that this is for learning and to prove the power of this concept; this technique will work against any immutable set of values. You don’t have to play along, or you can test this with a smaller number of primes. I’m including the DDL used to create the function so you can keep me honest about the performance tests later in the article.

The Setup Procedure

This proc indexes and populates a prime numbers table called idx.prime. There are two parameters: @Counter and @maxrows. @Counter is the number that the proc starts at and should be left at 14; you can use @Counter to set up batch inserts. If you execute usp_primes and accept the default values, the function will support 1–10M with a bit column indicating whether the number is prime. You don’t have to do 10M rows, but it’s ~300MB, uncompressed, if you do. For me this runs about 20 minutes without batching the insert; I did it all in one shot. Note that I keep my indexed functions in an “idx” schema; there’s no harm in using dbo. Once the procedure is done you can create our prime numbers function as an indexed function. As mentioned in Part 2 of my PerfML series, functional purity is key so only an iTVF will do. If you’re playing along you will need a one million row tally table (you can get the DDL here), and the stored procedure to create an indexed prime number function is attached below.

Note that it creates a schema named idx; this is where my indexed functions live, but it’s not necessary.
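Since the procedure itself is attached rather than printed, here is a hedged, small-scale sketch of what it builds. The table and column names (idx.prime, N, PRIME) match the function below; the population logic, however, is my own stand-in for usp_primes and is only sensible for small ranges.

--==== Sketch of the idx.prime table; the population logic is a stand-in, not usp_primes
IF SCHEMA_ID('idx') IS NULL EXEC('CREATE SCHEMA idx;');
GO
CREATE TABLE idx.prime
(
  N     BIGINT NOT NULL,
  PRIME BIT    NOT NULL,
  CONSTRAINT pk_prime PRIMARY KEY CLUSTERED(N)
);

--==== Flag primes for 1 to 100K using dbo.tally (the real proc goes to 10M, in batches)
INSERT idx.prime (N, PRIME)
SELECT t.N,
       CASE WHEN NOT EXISTS (SELECT 1 FROM dbo.tally AS d
                             WHERE d.N > 1 AND d.N*d.N <= t.N AND t.N % d.N = 0)
            THEN 1 ELSE 0 END -- note: flags 1 as prime, matching the article's sample output
FROM  dbo.tally AS t
WHERE t.N <= 100000;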

The Indexed Function (idx.fnPrimes)

Now we have a sorted set of numbers from 1 to 10M. Our function takes @N as an input and performs a one-row index seek.

Figure 1: idx.fnPrimes

CREATE FUNCTION idx.fnPrimes
(
  @N BIGINT
)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
  SELECT p.N, p.PRIME FROM idx.prime AS p WHERE p.N = @N;
GO

To check if a number is prime:

Figure 2: Basic idx.fnPrimes demo

SELECT p.N, p.PRIME FROM idx.fnPrimes(11) AS p;

To return the numbers 1 to 7 along with their prime value we use fnTally to generate the stream of numbers and map them to fnPrimes.

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)  AS f
CROSS APPLY idx.fnPrimes(f.N) AS p;

This returns: [(1 1),(2 1),(3 1),(4 0),(5 1),(6 0),(7 1)]; now a quick performance test.

Performance Test #1: Calculate Primes 1 to 10 Million

The purpose of this test is to prove out the concept of the indexed function. This query will take the values 1 through 10,000,000 and determine whether each is prime. I am forcing a serial plan since the serial plan actually performs best here. The @P variable captures each value so as to avoid SSMS spitting out 10 million rows and ruining the performance metrics.

Figure 3.1: idx.fnPrimes Performance Test

PRINT CHAR(13)+'fnTally serial'+CHAR(13)+REPLICATE('-',90);
GO
DECLARE @ST DATETIME = GETDATE(), @P BIT;
  SELECT      @P = p.PRIME
  FROM        dbo.fnTally(1,10000000) AS f
  CROSS APPLY idx.fnPrimes(f.N)       AS p
  OPTION (MAXDOP 1);
PRINT DATEDIFF(MS,@ST,GETDATE());
GO 3

Figure 3.2: idx.fnPrimes Performance Test Results

fnTally serial
------------------------------------------------------------------------------------------
Beginning execution loop
2660
2727
2750

Under 3 seconds for idx.fnPrimes to accept the numbers 1 to 10,000,000 and determine which ones are prime – and this with a serial execution plan!

Unexpected Virtual Index Pitfalls (continued…)

One motivation for giving a name to what I call the “virtual index” is to make the following type of problem (and solution) easier to explain. In Part 2 of this series you were introduced to some unexpected virtual indexing pitfalls which can be tricky to identify. There’s another issue to watch out for if you’re on SQL 2019, which includes batch mode over rowstore, or if you leverage Itzik Ben-Gan’s batch mode processing trick to enjoy this functionality on SQL Server 2016+. I use SQL 2019, and the optimizer leverages a window aggregate function that invokes batch mode over rowstore, which is the culprit here.
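For readers unfamiliar with the trick, here is a hedged sketch of one common variant: an empty dummy table with a columnstore index, left-joined on a predicate that can never match. The dummy table name is mine.

--==== One common variant of the batch mode trick (dummy table name is mine)
CREATE TABLE #BatchModeDummy (I INT NULL, INDEX cs CLUSTERED COLUMNSTORE);

--==== The join returns no rows; its only job is to make batch mode available
SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)  AS f
CROSS APPLY idx.fnPrimes(f.N) AS p
LEFT JOIN   #BatchModeDummy   AS bm ON 1 = 0;

With or without the trick, the symptom is the same. Below are two techniques to retrieve the numbers 1 to 7 and identify which are prime.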

Figure 4: Hidden CTE Tally table row retrieval issue

--==== 1. Leveraging fnTally
SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)  AS f
CROSS APPLY idx.fnPrimes(f.N) AS p; 

--====  2. Static Values
SELECT      p.N, p.PRIME
FROM        (VALUES(1),(2),(3),(4),(5),(6),(7)) AS f(N)
CROSS APPLY idx.fnPrimes(f.N)                  AS p; 

Each query returns the expected result set: [(1 1),(2 1),(3 1),(4 0),(5 1),(6 0),(7 1)]. Below is the execution plan for the fnTally solution.

Figure 4.1: Execution Plan Details

fnTally retrieved 2700 rows to generate the numbers 1-7! The second query didn’t have this issue. Luckily there are more than a few ways to solve this.

Solution 1: Disable batch mode over rowstore

Let’s temporarily disable batch mode over rowstore and try again.

Figure 5.1: Solution #1: Disable batch mode over rowstore query

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)    AS f
CROSS APPLY idx.fnPrimes(f.N)   AS p
OPTION(USE HINT('DISALLOW_BATCH_MODE'));

Figure 5.2: Solution #1: Execution plan

7 rows only, much better!

Solution 2: make_parallel

In my experience Adam Machanic’s make_parallel always forces plans that don’t include batch mode over rowstore. I don’t know if batch mode is even an option with make_parallel in play.

Figure 6.1: Solution #2: make_parallel

SELECT      p.N, p.PRIME
FROM        dbo.fnTally(1,7)    AS f
CROSS APPLY idx.fnPrimes(f.N)   AS p
CROSS JOIN  dbo.make_parallel() AS parallel_execution;

Below is the portion of the plan where the ROW_NUMBER sequence is created; again, only 7 rows.

Figure 6.2: Solution #2:  Execution plan

Solution 3: Use a physical tally table

The conventional wisdom seems to be that the CTE tally table is faster than a physical table pretty much all the time – a reasonable assumption since the CTE tally table is read-less while physical tally tables do require I/O. Here is an example of where a correctly indexed physical tally table has the edge.

Figure 7.1: Solution #3: Physical Tally Table

Note that a TOP + ORDER BY, a WHERE clause filter, or both together will do just fine.

SELECT TOP(7) p.N, p.PRIME
FROM        dbo.tally         AS f
CROSS APPLY idx.fnPrimes(f.N) AS p
ORDER BY    f.N;

Again, only 7 numbers retrieved.

Figure 7.2: Solution #3: Execution Plan

This is one of many subtle differences between a persisted tally table and a CTE version. Let’s explore deeper.

RangeAB and RangeAB2

We will conclude by running two performance tests against one million rows. The one million limit is because that’s what dbo.tally is capped at. For this test let’s use two tally table functions: the first, named dbo.rangeAB, can be considered an “add-on” to dbo.fnTally. The second takes identical parameters and returns identical results; the only difference is that it leverages the physical tally table (dbo.tally) instead of dbo.fnTally.

Both functions can be used to help enforce best practices with as little code as possible. Even better: both functions return the exact same result set given the same parameters. This gives you the ability to test your set-based code against both a physical and a virtual tally table index by simply adding or removing the “2” from the end of “RangeAB.”

The functions and usage examples are located here. For the purposes of this article I will show you a basic usage scenario so we can compare execution plans. Both examples use one of the RangeAB functions to return a series of weeks. We’ll run these with “show actual execution plan” turned on.

Figure 8.1: Basic Usage Example

DECLARE
  @startdate DATE   = '20200831',
  @weeks     BIGINT = 5,
  @row1      BIGINT = 1;

BEGIN
--==== RangeAB (fnTally version)
  SELECT
    WeekNumber     = f.RN,
    WeeksLeft      = f.OP,
    DaysFromMedian = ABS(f.DM),
    WeekStart      = DATEADD(DAY,f.N1-1,@startdate),
    WeekEnd        = DATEADD(DAY,f.N2-2,@startdate)
  FROM     dbo.rangeAB(@row1, @weeks*7-(1-@row1), 7, @row1) AS f
  ORDER BY f.RN;

--==== RangeAB2 (dbo.tally version)
  SELECT
    WeekNumber     = f.RN,
    WeeksLeft      = f.OP,
    DaysFromMedian = ABS(f.DM),
    WeekStart      = DATEADD(DAY,f.N1-1,@startdate),
    WeekEnd        = DATEADD(DAY,f.N2-2,@startdate)
  FROM dbo.rangeAB2(@row1, @weeks*7-(1-@row1), 7, @row1) AS f
  ORDER BY f.RN;
END;

Figure 8.2: Results (for each)

    WeekNumber  WeeksLeft  DaysFromMedian  WeekStart  WeekEnd
    ----------- ---------- --------------- ---------- ----------
    1           5          14              2020-08-31 2020-09-06
    2           4          7               2020-09-07 2020-09-13
    3           3          0               2020-09-14 2020-09-20
    4           2          7               2020-09-21 2020-09-27
    5           1          14              2020-09-28 2020-10-04

Execution plans

Ignore the 40/60% estimates – they aren’t accurate; both perform almost identically. The win is that we can quickly determine which is better, dbo.tally or dbo.fnTally, without any logic changes. fnTally uses the Itzik Ben-Gan style ROW_NUMBER over cross joins; the other gets its rows from dbo.tally.

Figure 8.3: Execution Plans

Prime Numbers Test #1: RangeAB for a running total of Primes

Now back to our indexed function, idx.fnPrimes. This first example is a simple running total of prime numbers from 1 to 50. Ignore the @gap parameter for now.

DECLARE @gap BIGINT = 1, 
        @low BIGINT = 0, 
        @hgh BIGINT = 50; 

--==== Solution #1: Primes running total over an Indexed Function + Virtual Index (RangeAB) 
SELECT
  N           = ((rnd.RN-2)*@gap+@low)+1, 
  TotalPrimes = SUM(p.PRIME+0) 
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
GROUP BY    rnd.RN
ORDER BY    rnd.RN; 

--==== Solution #2: Primes running total over two indexed functions
SELECT
  N           = ((rnd.RN-2)*@gap+@low)+1, 
  TotalPrimes = SUM(p.PRIME+0) 
FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)             AS p
GROUP BY    rnd.RN
ORDER BY    rnd.RN; 

Figure 10.1: Primes Test #1 Results (truncated for brevity)

N       TotalPrimes
------- -----------
1       1
2       2
3       3
4       3
5       4
6       4
7       5
8       5
...
46      15
47      16
48      16
49      16
50      16

Prime Numbers Test #2: Performance

Here we’ll run two tests: first with a serial plan, then with a parallel plan. What we need to do is create @gap-sized groups between @low and @hgh, along with a count of the prime numbers in each group. We’ll compare the performance of RangeAB vs RangeAB2. Again, the logic is identical except that the latter leverages dbo.tally. Note the parameters.

First we test serial execution using MAXDOP 1.

Figure 11.1 Serial Test

SET STATISTICS IO, TIME ON;

DECLARE
  @gap BIGINT = 100000,
  @low BIGINT = 0,
  @hgh BIGINT = 1000000;

PRINT CHAR(10)+'fnTally version'+CHAR(10)+REPLICATE('-',90);
  SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
GROUP BY   rnd.RN
ORDER BY   rnd.RN
OPTION (MAXDOP 1);

PRINT CHAR(10)+'Tally Function V2 - Indexed'+CHAR(10)+REPLICATE('-',90);
 SELECT
   L           = ((rnd.RN-2)*@gap+@low)+1,
   H           = (rnd.RN-1)*@gap+@low,
   TotalPrimes = SUM(p.PRIME+0)
 FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
 CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
 CROSS APPLY idx.fnPrimes(r.RN)             AS p
 GROUP BY   rnd.RN
 ORDER BY   rnd.RN
 OPTION (MAXDOP 1);
SET STATISTICS IO, TIME OFF;

Figure 11.2: Primes Test #2 Result Set

L           H            TotalPrimes
----------- ------------ -----------
1           100000       19187
100001      200000       38367
200001      300000       57548
300001      400000       76728
400001      500000       95910
500001      600000       115092
600001      700000       134272
700001      800000       153453
800001      900000       172634
900001      1000000      191814

Figure 11.3: Primes Test #2 Performance (Serial: Time & IO)

fnTally version
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843792, physical reads 0...
Table 'Worktable'. Scan count 0, logical reads 15101551, physical reads 0...

 SQL Server Execution Times: CPU time = 23312 ms,  elapsed time = 23452 ms.

Tally Function V2 - Indexed
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843792, physical reads 0...
Table 'tally'. Scan count 2, logical reads 9555, physical reads 0...

 SQL Server Execution Times: CPU time = 9110 ms,  elapsed time = 9163 ms.

This is a case where dbo.tally finds redemption: 9 seconds vs 23 – not bad. Next for a parallel execution plan leveraging dbo.make_parallel.

Figure 11.4: Parallel Test

SET STATISTICS IO, TIME ON;

DECLARE
  @gap BIGINT = 100000,
  @low BIGINT = 0,
  @hgh BIGINT = 1000000;

PRINT CHAR(10)+'fnTally version'+CHAR(10)+REPLICATE('-',90);
  SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)            AS p
CROSS JOIN  dbo.make_parallel()           AS x
GROUP BY   rnd.RN
ORDER BY   rnd.RN;

PRINT CHAR(10)+'Tally Function V2 - Indexed'+CHAR(10)+REPLICATE('-',90);
SELECT
  L           = ((rnd.RN-2)*@gap+@low)+1,
  H           = (rnd.RN-1)*@gap+@low,
  TotalPrimes = SUM(p.PRIME+0)
FROM        dbo.rangeAB2(@low,@hgh,@gap,1) AS rnd
CROSS APPLY dbo.rangeAB2(0,rnd.N1,1,0)     AS r
CROSS APPLY idx.fnPrimes(r.RN)             AS p
CROSS JOIN  dbo.make_parallel()            AS x
GROUP BY   rnd.RN
ORDER BY   rnd.RN;
SET STATISTICS IO, TIME OFF;

Figure 11.5: Primes Test #2 Performance (Parallel: Time & IO)

fnTally version (RangeAB)
------------------------------------------------------------------------------------------
Table 'prime'. Scan count 0, logical reads 16843855, physical reads 0...
Table 'Worktable'. Scan count 0, logical reads 15101558, physical reads 0...

 SQL Server Execution Times: CPU time = 35877 ms,  elapsed time = 7816 ms.

dbo.tally version (RangeAB2)
------------------------------------------------------------------------------------------
Table 'tally'. Scan count 9, logical reads 9604, physical reads 0...
Table 'prime'. Scan count 0, logical reads 16843855, physical reads 0...

 SQL Server Execution Times: CPU time = 16687 ms,  elapsed time = 3638 ms.

The great news is that parallel execution speeds things up in both cases. The first version (the virtual index) improves to 7.8 seconds; the second (dbo.tally) completes in 3.6 seconds. The reads against the prime table (via the idx.fnPrimes indexed function) are excessive. That said, to count the primes in 100K-sized chunks all the way up to 1,000,000… you would expect that to take longer than 3.6 seconds.

Conclusion

We have a new way to take complicated functions and speed them up to O(1) time per lookup, leveraging a new concept I coined the Indexed Function – a powerful tool for your set-based programming toolbox. We have a way to quickly compare a CTE tally table to a real one. And we’ve learned that a physical tally table, in some cases, can blow the doors off a virtual one.

The Virtual Index and Indexed Functions are indispensable tools in my toolbox. Thanks for reading.

How to find strings longer than 8K in SQL Server (Part 1)

Ever try searching a string for a substring larger than 8,000 characters? It’s actually more difficult than you think… without a good N-Grams function, that is.

Before you continue

FIRST, this article leverages my NGrams2B function, which can be found here: Nasty Fast N-Grams (Part 1). NGrams2B is the same as the 8K version but handles VARCHAR(MAX) strings. Since publishing that article I have dropped the “2B” from the function name; you will see it referred to simply as dbo.ngrams in the code examples below.
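If you just want to follow along without pulling the full function, here is a simplified stand-in with the same interface (Position, Token). The real dbo.ngrams generates its rows inline and handles up to ~2 billion characters; this sketch leans on dbo.tally and is therefore capped at the tally table’s row count – don’t install it over the real thing.

--==== Simplified dbo.ngrams stand-in (capped by dbo.tally's row count)
CREATE OR ALTER FUNCTION dbo.ngrams (@String VARCHAR(MAX), @N BIGINT)
RETURNS TABLE WITH SCHEMABINDING AS RETURN
SELECT Position = t.N,
       Token    = SUBSTRING(@String, t.N, @N)
FROM   dbo.tally AS t
WHERE  t.N <= LEN(@String) - @N + 1;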

SECOND, for people following my N-Grams series on SQL Server Central (SSC), let not your heart be troubled: I am in the process of wrapping up my N-Grams series (Parts 2-6). This is a quick “one-off” intended to keep my new blog posts moving along…

Introduction

Riddle me this: what will the query below return?

DECLARE @SearchText VARCHAR(MAX) = REPLICATE('X',9000),
        @SomeString VARCHAR(MAX) = REPLICATE('X',8001);

SELECT CHARINDEX(@SearchText,@SomeString);

You would expect it to return a zero (0) – @SearchText is a string of 9000 X’s, @SomeString is 8001 X’s. The query, however, returns a 1. That’s not right – how can this be? To get to the bottom of this let’s append this line to our query:

SELECT LEN(@SearchText), LEN(@SomeString)

This returns 8000 for each. Ahhhhh – REPLICATE returns a non-MAX type when its input is a non-MAX type, so our sample strings are being silently truncated to 8,000 characters. I make this mistake periodically when using REPLICATE to create sample strings. To get around the 8K limitation we can cast the input string as VARCHAR(MAX):

--==== Casting "X" as VARCHAR(MAX) circumvents the truncation issue
DECLARE 
  @SearchText VARCHAR(MAX) =       REPLICATE(CAST('X' AS VARCHAR(MAX)),9000),
  @SomeString VARCHAR(MAX) = '###'+REPLICATE(CAST('X' AS VARCHAR(MAX)),9001);

SELECT LEN(@SearchText), LEN(@SomeString);

This prevents the truncation. Let’s run our CHARINDEX query again:

SELECT CHARINDEX(@SearchText,@SomeString);
Results:
Msg 8152, Level 16, State 10, Line 132
String or binary data would be truncated.

Not as easy as you thought, huh? Perhaps PATINDEX can save the day…

SELECT PATINDEX('%'+@SearchText+'%',@SomeString);

Same error, grrrr… What about LIKE? Surely we can solve this using LIKE, right?

SELECT CASE WHEN @SomeString LIKE '%'+@SearchText+'%' THEN 1 ELSE 0 END;

Fail – same error. I encountered this problem for the first time earlier this year and attempting to solve it took a toll on my keyboard and self-esteem.

The Problem

To my knowledge, SQL Server doesn’t provide a way to search for a VARCHAR longer than 8,000 characters (4,000 for NVARCHAR). Based on my research it’s obvious that this is not a common problem, but it will become more common as developers continue to push the limits of what you can do with strings in SQL Server. And if I’m wrong? So what. Solving problems like this is fun, and talking about N-Grams never gets old.

dbo.NGrams to the Rescue (as per usual)

dbo.NGrams and dbo.NGrams8K are Super heroes

OK – LIKE, CHARINDEX and PATINDEX are not valid options. To quote Ned Flanders, “As melon scratchers go, this one is a honeydoodle!” Fortunately SQL Server is powerful enough to check two VARCHAR(MAX) strings for equality. This is where a good N-Grams function is a superhero.

First we’ll tokenize (split) @SomeString into tokens as long as @SearchText. @SearchText is six characters long, so we can split @SomeString into 6-grams.

--==== Sample Search Text and String to analyze
DECLARE @SearchText VARCHAR(MAX) = 'ABC123',
        @SomeString VARCHAR(MAX) = 'AAABC123XYZ';

--==== Tokenize @SomeString into @SearchText-sized tokens
SELECT   ng.Position, ng.Token
FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng -- LEN(@SearchText) = 6
ORDER BY ng.Position;
Results:
Position  Token
--------- ---------
1         AAABC1
2         AABC12
3         ABC123
4         BC123X
5         C123XY
6         123XYZ

The value of @SearchText (“ABC123”) is at Position 3: AAABC123XYZ. All that’s left is to add a WHERE filter to compare @SearchText to each token returned by dbo.NGrams.

SELECT   ng.Position, ng.Token
FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
WHERE    ng.Token = @SearchText
ORDER BY ng.Position;
Returns:
Position   Token   
---------- ---------------
3          ABC123

This is what we want. Note that, thanks to the virtual index that my N-Grams function uses, I can sort by the token’s position in the string without a sort in the execution plan. That’s going to help us because we need to add a TOP(1) clause to identify the location of the first instance of @SearchText. Remember, we only need to know if the text exists. Adding TOP(1), with an ORDER BY to keep it deterministic, has multiple benefits: not only do we get a predictable result, this also assists the cardinality estimator and enables us to tap into the SQL optimizer’s row goals which, when correctly leveraged, allow for obscene performance gains.

Here we’re going to use the 8K+ strings as variables from earlier, but with a few small changes:

  1. Add the TOP(1) clause along with the proper ORDER BY clause
  2. Wrap the logic up in a subquery named f
  3. Retrieve the value MAX(f.ItemIndex) from f
  4. Add an ISNULL around our MAX() to return 0 when there isn’t a match
DECLARE @SearchText VARCHAR(MAX) = REPLICATE(CAST('X' AS VARCHAR(MAX)),9000),
        @SomeString VARCHAR(MAX) = '###'+REPLICATE(CAST('X' AS VARCHAR(MAX)),9001);

SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
FROM
(
  -- Returns the first position of @SearchText
  SELECT TOP(1) ng.Position
  FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
  WHERE    ng.Token = @SearchText
  ORDER BY ng.Position
) AS f(ItemIndex);

This returns 4, which is correct – @SearchText is a series of 9000 X’s and first appears at Position 4 in @SomeString. And that’s it – easy peasy. Next for performance.

Introducing the 2 Billion+ row test harness

I said two billion, with a B – not a typo. I began the heading with “Introducing” because any test harness exceeding a billion rows deserves a proper introduction. I say rows, not characters, because dbo.NGrams tokenizes (splits) strings into rows. You’re going to see many more billion+ row test harnesses as we explore the PerfML Universe.

The VARCHAR(MAX) character limit is 2,147,483,647. I rounded my sample values down to two billion, created a search string 50K characters long then stuffed that value into @SomeString, nested between 100,011 junk characters followed by two billion more for a grand total of 2,000,150,011 characters. We’ll run the query twice: once with a serial plan, another with a parallel plan using make_parallel() by Adam Machanic.

Test Harness

SET STATISTICS TIME ON;
PRINT 'Build the test string'+CHAR(13)+REPLICATE('-',90);

  DECLARE @Match VARCHAR(MAX) = REPLICATE(CAST('X' AS VARCHAR(MAX)),50000),
          @Junk1 VARCHAR(MAX) = REPLICATE(CAST('Z' AS VARCHAR(MAX)),100011),
          @Junk2 VARCHAR(MAX) = REPLICATE(CAST('#' AS VARCHAR(MAX)),2000000000);

  DECLARE @SearchText VARCHAR(MAX) = @Match,
          @SomeString VARCHAR(MAX) = @Junk1+@Match+@Junk2;

PRINT 'Performance Test #1: Serial Execution'+CHAR(13)+REPLICATE('-',90);

  SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
  FROM
  (
    -- Returns the first position of @SearchText
    SELECT TOP(1) ng.Position
    FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
    WHERE    ng.Token = @SearchText
    ORDER BY ng.Position
  ) AS f(ItemIndex)
  OPTION (MAXDOP 0);

PRINT 'Performance Test #2: Parallel Execution'+CHAR(13)+REPLICATE('-',90);

  SELECT ItemIndex = ISNULL(MAX(f.ItemIndex),0)
  FROM
  (
    -- Returns the first position of @SearchText
    SELECT TOP(1) ng.Position
    FROM     dbo.ngrams(@SomeString,LEN(@SearchText)) AS ng
    WHERE    ng.Token = @SearchText
    ORDER BY ng.Position
  ) AS f(ItemIndex)
  CROSS JOIN dbo.make_parallel() --  NOOOO >> OPTION (QUERYTRACEON 8649);

SET STATISTICS TIME OFF;

How do you think we’ll do? How many minutes do you think this will run? Drum roll………….

2B Row Test Results

Build the test string
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 12453 ms,  elapsed time = 12576 ms.
 SQL Server Execution Times: CPU time = 7032 ms,  elapsed time = 7060 ms.

Performance Test #1: Serial Execution
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 1953 ms,  elapsed time = 1950 ms.

Performance Test #2: Parallel Execution
------------------------------------------------------------------------------------------
 SQL Server Execution Times: CPU time = 3568 ms,  elapsed time = 3570 ms.

This ran for about 25 seconds on my laptop. These results are more impressive when we dig deeper. SQL Server spent ~20 seconds building the string, another 1.9 seconds to find what we were looking for with a serial plan, 3.5 seconds for parallel. make_parallel adds a bit of spaghetti to the plan but, aside from that, the plans are identical.

Execution plan for the first (serial) query

If we hover over the second Nested Loops Join operator in the plan we can see the actual number of rows generated by dbo.NGrams:

The matching search text is buried 100,011 characters deep, which explains the 100,012 rows

100,012 rows (pseudo-iterations) to parse a 2,000,150,011 character string, in under 2 seconds with only one CPU. That’s nasty fast! Can you guess how we got to 100,012 rows? I buried the search string 100,011 characters deep, so the TOP(1) row goal stops the scan at the match, position 100,012. The deeper the search string is buried, the more rows dbo.NGrams creates – something we’ll address in Part 2.

make_parallel is a Super Power

Queries which leverage dbo.NGrams and dbo.NGrams8K generally perform better with parallel execution, but not in this case. Here I’d likely force a serial plan to prevent a slower parallel plan. I included a parallel solution to help you understand the magic which is make_parallel. I first discussed the obvious benefits of make_parallel in PerfML Part 2: Self-Tuning with a Tally Table. Now for a not-so-obvious benefit, and it’s big.

In my experience, make_parallel and TraceFlag 8649 perform identically ~95% of the time. The only reason I ever use TraceFlag 8649 is during development, as make_parallel makes the execution plans less readable. Every once in a while, not often, TraceFlag 8649 causes a query to run for minutes (even hours) instead of seconds/milliseconds. This is due to intra-query parallel deadlocking. What makes this type of deadlocking distinctly painful is the pure randomness of it. Sometimes it happens every 10-100 query executions, sometimes it’s a few times a month. This is an example of the type of adventures undocumented trace flags will take you on. make_parallel doesn’t have this issue. Just another reason make_parallel deserves a cape.

Conclusion

Two-billion-character strings are not common, but this should give you a taste for how this technique will hold up against millions or even billions of smaller strings. This is the power of the tally table and dbo.NGrams. Unless someone can point me to a book, article or forum post that details a faster way to find a string longer than 8K in SQL Server, I’m going to aver that this is the fastest known method for finding VARCHARs longer than 8,000 characters in SQL Server without an index. But can we make it faster, much faster, unimaginably faster? See you at Part 2 of this series. Thanks for reading!

Virtual Indexing Part 2: Unexpected Pitfalls

Diving deeper into the world of virtual indexing while being reminded that set-based code is not always bulletproof

Introduction                                                 

In Virtual Indexing Part 1 I introduced the idea of the virtual index: the ordered (AKA “suitably sorted”) stream of numbers returned by ROW_NUMBER which allows you to perform operations like grouping and window aggregations without the help of an index or a sort in the execution plan. Today I want to draw your attention to some hidden CTE tally table dangers that are easy to detect and resolve but can be catastrophic when missed.

Problem #1: Gaps and the Hidden Row Explosion

Identifying gaps and islands in sequences is a common SQL task. One of the first things I learned to do with a tally table is identify gaps; it’s an easy concept to grasp, easier than identifying islands. In the query below we have a table variable populated with the numbers 1 to 10, but with 3, 5 and 9 missing. Let’s use fnTally to identify the missing numbers.

Figure 1. Using fnTally to find gaps in a sequence

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     t.N IS NULL;

This returns the expected results: [3 5 9]. Now the execution plan:

Figure 1.1. fnTally gaps in a sequence query execution plan

fnTally returns 10 rows, as expected, to create the numbers 1 to 10. But look at the clustered index scan against the table variable – 70 rows. That is tragic: 70 rows scanned to identify 3 missing values! Can you identify the problem? Take a minute to see if you can figure this out…

@t has 7 rows, fnTally is returning 10 rows: 7*10=70. The optimizer knows that the clustered index on @t is unique but is not taking advantage of the fact that the numbers returned by fnTally are also unique and ordered. The optimizer is behaving as if a one-to-many relationship between t.N and f.N is possible. With this plan the execution engine is forced to compare every value in @t to every value returned by fnTally. As sad as this might make you feel at first, let not your heart be troubled. The workarounds are endless; the trick, really, is to understand which one is best and why.

Table Operator Modifications

Let’s start with the three solutions below, ordered by my recommendation. Each of these will get the job done with minimal changes to the underlying logic. The first solution is to just add an additional filter: f.N <= @Max; the second is to replace fnTally with a physical tally table; the third is to use a hint.

Figure 2. fnTally gaps in a sequence solutions

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #1: Add a WHERE filter for fnTally.N
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t AS t ON t.N = f.N
WHERE     f.N <= @Max -- solves the fake one-to-many issue
  AND     t.N IS NULL;

--==== Solution #2: Use a persisted tally table with a WHERE filter
SELECT    f.N
FROM      dbo.tally AS f
LEFT JOIN @t        AS t 
  ON      t.N = f.N
WHERE     f.N <= @Max -- required because there is no TOP(@Max) clause
  AND     t.N IS NULL;

--==== Solution #3: Add a Query Hint to the left join
SELECT    f.N
FROM      dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t AS t -- use Merge Join algorithm instead of loop join
  ON      t.N = f.N
WHERE     t.N IS NULL;

Each query returns the correct result and in each case only 7 rows total are retrieved from @t instead of 7 rows for each number returned by fnTally.

Figure 3. Execution plan for the table operator solutions

I didn’t call it out in the screenshot, but with each solution only 7 rows were retrieved instead of 70. The first solution is the easiest – just add an additional WHERE filter. This may seem redundant since it’s logically unnecessary, but it works and it’s a simple fix. If it were always this simple I could end the discussion here, but it’s not. The second solution is to use a physical tally table (dbo.tally in this example) instead; this solves the problem here and has other advantages I’ll cover momentarily. The third solution is to force a merge join using a query hint; this works but is my last choice. There have been times where a query hint is the only option, which is why I’m calling it out now.

Anti-Join Solutions

You have likely seen the term anti-join in an SSMS execution plan, but it is not a commonly used phrase in the RDBMS world. It is, however, the best way to describe a scenario where you need all items from one table that do not exist in a second table.

Figure 4. Anti-join Venn Diagram

Reviewing the execution plans for the first set of solutions above (Figure 2), you see that each used a Left Outer Join operator to identify the missing rows, followed by a filter to exclude the others. Let’s examine two more solutions which leverage the NOT EXISTS logical operator and the EXCEPT set operator.

Figure 5. Anti-join solutions

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Solution #4: EXCEPT >> Merge join handles the filter
SELECT f.N FROM dbo.fnTally(1,@Max) AS f
EXCEPT
SELECT t.N FROM @t AS t;

--==== Solution #5: NOT EXISTS >> Identical to above but with nested loop join
SELECT f.N
FROM   dbo.fnTally(1,@Max) AS f
WHERE  NOT EXISTS (SELECT t.N FROM @t AS t WHERE t.N = f.N)
AND    f.N <= @Max; -- Optional, forces loop join

Figure 5.1. Anti-join solutions execution plans

Both produce almost identical execution plans and both are efficient. The EXCEPT solution is the best IMO because it’s the cleanest.

Parallelism

Now let’s safely force a parallel execution plan against the left join solution with the WHERE filter (Figure #2, Solution 1) then, again, with our EXCEPT anti-join from Figure #5.

Figure 6. Parallel execution test

DECLARE @t TABLE (N BIGINT NOT NULL PRIMARY KEY);
INSERT  @t(N) VALUES(1),(2),(4),(6),(7),(8),(10);

DECLARE @Max BIGINT = 10;

--==== Good when serial, bad for parallel
SELECT     f.N
FROM       dbo.fnTally(1,@Max) AS f
LEFT JOIN  @t AS t ON t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      t.N IS NULL
AND        f.N <= @Max

--==== Winner for serial and parallel
SELECT f.N FROM dbo.fnTally(1,@Max) AS f 
EXCEPT
SELECT t.N FROM @t AS t CROSS JOIN dbo.make_parallel() AS x;

Below is the portion of each plan where the rows are retrieved. Even with the WHERE f.N <= @Max clause in place, which solved the 70-row explosion problem earlier with a serial plan, the row explosion returns with parallel execution. The EXCEPT solution, however, does not have this problem with a serial or parallel plan.

Figure 6.1. Parallel execution performance plan

Problem #2: Gaps, Left Join Aggregation and Parallelism

Keeping with the theme of gaps let’s use the same table variable from earlier but include duplicate and missing values. Using this sample data:

Figure 7. Problem #2 sample data

DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);

The expected output would be:

Figure 7.1. Problem #2 expected output

N     Total
----- -------
1     2
2     4
3     0
4     2
5     0
6     1
7     1
8     3
9     0
10    3

Like earlier, this query suffers the aforementioned hidden row explosion; instead of retrieving 16 rows from @t, it retrieves 160 rows (16 * @Max).

Figure 8. Problem #2, Solution #1 (fail)

DECLARE @Max BIGINT = 10;

SELECT    f.N, Total = COUNT(t.N)
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t                  AS t
  ON      t.N = f.N -- could not shake the row explosion
GROUP BY  f.N;

Again, an f.N <= @Max predicate in the WHERE clause or the join will do the trick, as will t.N <= @Max (sketched below). The problem, again, is that this only works when the optimizer chooses a serial execution plan. This means that, in this case, you must always use OPTION (MAXDOP 1) and be banished to eternal serial processing. C’mon man!
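For clarity, here’s what the serial-only fix looks like against the Figure 7 sample data (with @Max declared as above):

--==== The redundant filter tames the row explosion, but only under a serial plan
SELECT    f.N, Total = COUNT(t.N)
FROM      dbo.fnTally(1,@Max) AS f
LEFT JOIN @t                  AS t
  ON      t.N = f.N
WHERE     f.N <= @Max -- solves the row explosion
GROUP BY  f.N
OPTION (MAXDOP 1);    -- required; a parallel plan brings the explosion right back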

Problem #2 Workarounds

A merge join hint will solve the row explosion but cannot get a parallel execution plan; in other words, with the hint in place make_parallel succeeds at solving the row explosion issue but not at forcing a parallel plan. The persisted tally table solution, on the other hand, does not experience the row explosion issue and can enjoy both serial and parallel execution.

Figure 9. Problem #2 – Solutions #2 & #3

DECLARE @t TABLE (N BIGINT INDEX IX1 CLUSTERED NOT NULL);
INSERT  @t(N) VALUES(1),(1),(2),(2),(2),(2),(4),(4),(6),(7),(8),(8),(8),(10),(10),(10);

DECLARE @Max BIGINT = 10;

-- Solution #2 - works, though it needs a query hint & gets no parallel execution
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.fnTally(1,@Max) AS f
LEFT MERGE JOIN @t             AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
GROUP BY   f.N;

-- Solution #3 - good serial & parallel
SELECT     f.N, Total = COUNT(t.N)
FROM       dbo.tally AS f
LEFT JOIN  @t AS t
  ON       t.N = f.N
CROSS JOIN dbo.make_parallel() AS x
WHERE      f.N <= @Max
GROUP BY   f.N;

Figure 9.1. Problem #2 – Solutions #2 & #3 execution plans

In each case the row explosion is gone but only the physical tally table solution (dbo.tally) solves the problem with a serial or parallel plan while preserving the optimizer’s option to execute a parallel plan. This is an example of where fnTally simply cannot compete with dbo.tally.

Set-Based != Bulletproof (Conclusion)

If there is a lesson to be learned today it’s that the mere existence of a numbers table, or the text “tally” in your code, does not guarantee that it will be fast. Furthermore, just because your set-based solution is fast, it can always be faster – exponentially faster in many cases, as you will see in future articles. In this article we watched an innocent query to identify gaps in a sequence go terribly wrong. Fortunately, there are many workarounds, provided that you can identify the problem when it arises. Hopefully you are now better prepared.

Which is faster again, fnTally or a persisted tally table? It depends. As per usual.

Virtual Indexing Part 1: The Suitably Sorted Stream

Does a virtual auxiliary table of numbers include a virtual index?

Before you continue

This post is intended for new and old data professionals alike. To get the most out of this article you should have a basic understanding of tally tables and Jeff Moden’s tally table function, fnTally. You also need a basic understanding of the APPLY table operator; if not, read Understanding and Using APPLY (Part 1), Part 2 and The Cascading (CROSS) APPLY. Each is a quick read and can change your career.

Intro

Less Code, More Fast – this is my coding philosophy in four words and the tagline for this site. Replacing bloated iterative procedural code with clean, set-based, pure, reusable code makes it possible to ship code faster and with fewer bugs. Veteran programmers who write high-performing declarative code, such as SQL developers, know that pure, set-based, loop-free code performs exponentially faster than code that uses loops, mutable data structures and procedural constructs such as “goto”. A correctly written tally table function, such as fnTally, can count from 0 to 1,000,000,000 in 90 seconds on my laptop. I wouldn’t even try with a loop. Replacing loops with a tally table (or some other form of lazy sequence) is a low-risk, low-cost, high-reward way to write cleaner, faster code.
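If you’d like to check that claim on your own machine, here’s a quick sketch using the same timing pattern used throughout this series (assigning to a variable keeps SSMS from trying to render a billion rows):

--==== Time fnTally counting from 0 to one billion
DECLARE @st DATETIME = GETDATE(), @n BIGINT;

SELECT @n = t.N
FROM   dbo.fnTally(0,1000000000) AS t;

PRINT DATEDIFF(MS,@st,GETDATE());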

The Virtual Index

One small benefit of writing loops to perform tasks in your functions and procedures is that you don’t have to worry about sorting. With a set-based approach that leverages a tally table, if the order of the numbers returned matters then the optimizer will have to sort the numbers for you unless they are presorted, AKA indexed. When using a physical tally table that is properly indexed, sorting is not a concern – but what about a tally table function like fnTally? You can’t index a function, right?

Returning the numbers 1 to 7, in order, without a sort

An important consideration when replacing a loop with a tally table is sorting. The most common way to handle sorting in the SQL world is to presort using an index. fnTally is a function; you can’t add an index to a function, so how do you return the numbers [1 2 3 4 5 6 7] in that order without sorting them? You can’t, unless they are already sorted. fnTally returns its numbers as an ordered set. This ordered stream of numbers is what I refer to as the virtual index. Virtual indexing can be thought of as the art/science of:

  1. Understanding what the virtual index is
  2. Knowing when\how to use the virtual index
  3. Understanding the alternatives and when to use them

If you are a set-based SQL coding warrior then you know that you can use ROW_NUMBER to create a virtual auxiliary table of numbers, AKA a tally table. With that virtual table comes a virtual index. To understand and appreciate the difference between a virtual index and a physical index, let’s create a mini-numbers temp table with the numbers 1 to 6. We won’t include an index, which means that the numbers are unordered.
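Before we do, for reference, here is a minimal sketch of that ROW_NUMBER pattern – my own abbreviated rendition of the Itzik Ben-Gan style cascading cross joins that fnTally is built on, good for up to 1,000,000 rows:

--==== A minimal CTE tally table: L1 has 10 rows, L3 has 1000, L3 x L3 has 1,000,000
WITH L1(x) AS (SELECT 1 FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(x)),
     L3(x) AS (SELECT 1 FROM L1 AS a CROSS JOIN L1 AS b CROSS JOIN L1 AS c)
SELECT TOP(1000) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM L3 AS a CROSS JOIN L3 AS b;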

Figure 1 – Mini tally table (#nums)

--==== Create a small Number's table
IF OBJECT_ID('tempdb..#nums','U') IS NOT NULL DROP TABLE #nums;
CREATE TABLE #nums (N INT NOT NULL);
INSERT #nums VALUES (1),(2),(3),(4),(5),(6);

This table could be better described as a “tally heap” or a “virtual auxiliary heap of numbers.” Let’s run three queries against #nums without an index present, then review the execution plans.

Heap vs Index Tests

First to test our queries against #nums without an index in place:

Figure 2 – Test #1: No index  (Heap)

SELECT TOP(3) t.N
FROM     #nums AS t
ORDER BY t.N;

SELECT   t.N, AVG(t.N)
FROM     #nums AS t
GROUP BY t.N;

SELECT 
  [SUM] = SUM(t.N)     OVER (ORDER BY t.N),
  [LAG] = LAG(t.N,1,0) OVER (ORDER BY t.N)
FROM   #nums AS t;

Figure 3 – Test #1 execution plans without an index on #nums

Each query required a sort operator. TOP with an ORDER BY, grouping (the GROUP BY), and the window functions (SUM OVER and LAG) are examples of operations which require an ordered set, AKA a “suitably sorted stream.” If the stream of input values is unordered, the most common remedy is a sort operator to create the suitably sorted stream when the query runs.

Hover over the properties for the stream aggregate and sequence operators in the second and third plans to see what each operator does and what they need:

Figure 3.1. – Stream Aggregate and Sequence Project Details

Suitable Sorting with a Physical Index

Sorting at runtime should be avoided, especially when dealing with a large volume of rows. The typical sort cost is n log n, which is worse than linear – it’s like the exact opposite of a bulk discount, the cost per row increasing as you add rows (sorting 1,000,000 rows costs roughly 1,000,000 × log2(1,000,000) ≈ 20 million comparisons). Let’s presort with a unique clustered index on #nums.

Figure 4 – Index for our numbers table (#nums)

CREATE UNIQUE CLUSTERED INDEX uq_nums ON #nums(N);

Now execute the same queries from Test #1(Figure 2) with the clustered index in place.

Figure 4.1. – Execution plan with index in place on #nums

Now that #nums is presorted there isn’t a need to sort at runtime.

The Suitably Sorted Virtual Index

First, the same queries as before but using fnTally, followed by the execution plans.

Figure 5: Virtual Index Test Query

SELECT TOP(3) t.N FROM dbo.fnTally(1,6) AS t ORDER BY t.N;

SELECT t.N, AVG(t.N) FROM dbo.fnTally(1,6) AS t GROUP BY t.N;

SELECT [SUM] = SUM(t.N)     OVER (ORDER BY t.N),
       [LAG] = LAG(t.N,1,0) OVER (ORDER BY t.N)
FROM     dbo.fnTally(1,6) AS t;

Figure 5.1: Virtual Index Test Execution Plans


That’s the exact same execution plan as in Figure 4.1, after the clustered index was added to #nums! The only difference is the nested loops joins on the right, which are used to generate the rows. fnTally returns the numbers as an ordered set. In other words, for each query:

  1. Using #nums without an index forces the optimizer to add a sort operator to the execution plan to handle TOP, GROUP BY and a couple of window functions (LAG and SUM() OVER())
  2. Adding an index to #nums presorts the values and eliminates the requirement to sort them at runtime
  3. fnTally is an inline table valued function (iTVF) and cannot be indexed
  4. In this example fnTally’s N column behaves like the indexed N column on #nums

The CTE tally table returns the numbers as an ordered set and generates an execution plan identical to the one produced with an indexed, persisted tally table. This is what I mean by a virtual index: the virtual index here is the ordered set of numbers that ROW_NUMBER creates. There is a lot more to this topic, and I look forward to exploring it in Part 2 of this series.