Have you ever analyzed the use of the nvarchar data type? Do you know how the size of an nvarchar column may affect performance? Let’s explore how SQL Server stores this data type on disk and how it is processed in RAM.
We use nvarchar when the size of column data entries is likely to vary considerably. The storage size (in bytes) is twice the actual length of the data entered, plus 2 bytes. This saves disk space compared with the nchar data type. Let us consider the following example. We create two tables: one contains an nvarchar column, the other an nchar column. The size of each column is 2000 characters (4000 bytes).
CREATE TABLE dbo.testnvarchar (
  col1 NVARCHAR(2000) NULL
);
GO

-- insert a 10-character string (the exact value is illustrative;
-- any 10-character string gives the same storage size)
INSERT INTO dbo.testnvarchar (col1)
SELECT REPLICATE(N'a', 10);
GO
The resulting row size is shown here — https://goo.gl/3JJYi8
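You can also check the byte count directly instead of relying on a screenshot. DATALENGTH returns the number of bytes a stored value occupies (this sketch assumes a 10-character string was inserted into col1, as in the comparison discussed below):

```sql
-- DATALENGTH returns the number of bytes the stored value occupies;
-- for a 10-character Unicode string: 10 characters * 2 bytes = 20
SELECT col1,
       DATALENGTH(col1) AS bytes_used
FROM dbo.testnvarchar;
```

Note that DATALENGTH reports only the data bytes; the extra 2 bytes of variable-length overhead from the formula above are part of the row structure, not the value itself.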
Now let’s look at what we get with nchar:
We use the following code:
CREATE TABLE dbo.testnchar (
  col1 NCHAR(2000) NULL
);
GO

-- insert the same 10-character string (illustrative value)
INSERT INTO dbo.testnchar (col1)
SELECT REPLICATE(N'a', 10);
GO
And here is what we received — https://goo.gl/R3Jzfq
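The same DATALENGTH check against the nchar table makes the padding visible, because nchar always stores the full declared length regardless of the input:

```sql
-- NCHAR pads the value to its declared length, so DATALENGTH
-- returns 2000 characters * 2 bytes = 4000 even for a 10-character input
SELECT col1,
       DATALENGTH(col1) AS bytes_used
FROM dbo.testnchar;
```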
As we can see, the actual row size with the nvarchar data type is much smaller than with nchar. With nchar, we use ~4000 bytes to store a 10-character string. With nvarchar, the same string takes only ~22 bytes (10 characters * 2 bytes, plus 2 bytes of overhead).
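If you want to confirm the on-disk row sizes yourself, one option (a sketch; the table names match the examples above, and DETAILED mode reads every page) is sys.dm_db_index_physical_stats, which reports the minimum, maximum, and average physical record size:

```sql
-- Compare the physical record sizes of the two test tables
SELECT OBJECT_NAME(object_id) AS table_name,
       min_record_size_in_bytes,
       max_record_size_in_bytes,
       avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.testnvarchar'), NULL, NULL, 'DETAILED')
UNION ALL
SELECT OBJECT_NAME(object_id),
       min_record_size_in_bytes,
       max_record_size_in_bytes,
       avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.testnchar'), NULL, NULL, 'DETAILED');
```

The record sizes include the row header and overhead bytes, so the nvarchar figure will be slightly larger than the raw data length, while the nchar figure stays near 4000 bytes.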
The SQL Server engine processes data in RAM (the buffer pool). What about row size in memory? If you want to know the answer, you can find it here — http://codingsight.com/sql-server-dark-side-nvarchar/.
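As a starting point for that investigation, you can see how many buffer pool pages each test table currently occupies in RAM. This is a sketch assuming the two test tables from the examples above; sys.dm_os_buffer_descriptors requires VIEW SERVER STATE permission, and the container_id join shown here covers in-row data:

```sql
-- Count the 8 KB buffer pool pages cached in RAM for each test table
SELECT o.name        AS table_name,
       COUNT(*)      AS cached_pages,
       COUNT(*) * 8  AS cached_kb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
JOIN sys.objects AS o
  ON p.object_id = o.object_id
WHERE bd.database_id = DB_ID()
  AND o.name IN (N'testnvarchar', N'testnchar')
GROUP BY o.name;
```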