
"Often, you get the right outcome without doing so."

Example:

select CONVERT(varchar, getdate(), 112)

outputs

20240417

I saw this in quite a few places on Stack Exchange until I found a tiny remark telling the reader not to forget the length in brackets. I had claimed that the length is not needed if you set the style number for a datetime like this. By now I have gathered enough insight to understand that "varchar should never stand alone": it should always be written as something like varchar(1234).

As for the code above, that would be:

select CONVERT(varchar(8), getdate(), 112)

If you look at it, the output is the same:

20240417

Why should you still always write varchar with the length in brackets behind it?

PS: If you can, use char(8) instead of varchar(8) / nvarchar(8).

A comment below goes even further: char(8) should be used, since:

no value in style 112 will ever be less than 8 characters.
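
A minimal sketch of that suggestion (assuming SQL Server, as in the rest of this question): style 112 always produces exactly eight characters, so a fixed-length char(8) gives the same output:

select CONVERT(char(8), getdate(), 112)
-- e.g. 20240417 (always exactly 8 characters with style 112)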

Links that bring up this question

This is just a random list of some links where I saw varchar without a length; there are many more out there.

asked by questionto42

2 Answers


Why should you still always write varchar with the length in brackets behind it? Often, you get the right outcome without doing so.

Often and always are two distinctly different things.

Example:

DECLARE @bin VARBINARY(200) = CAST('I have a lovely bunch of coconuts' AS varbinary(200))

select convert(varchar, @bin)

The output is missing the trailing uts:

I have a lovely bunch of cocon

This is due to the default behavior:

When n isn't specified in a data definition or variable declaration statement, the default length is 1. If n isn't specified when using the CAST and CONVERT functions, the default length is 30.

Which means that anything without the number of bytes in the definition is, at best, an eventual bug and, at worst, silent data loss that no one will ever know about.
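
The coconuts example shows the CAST/CONVERT default of 30. As a minimal sketch of the other default from the quote above, a variable declared as plain varchar is varchar(1) and silently truncates whatever you assign to it:

DECLARE @s varchar = 'I have a lovely bunch of coconuts'  -- no length given, so this is varchar(1)
select @s

Output:

I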


Also, note that n in varchar(n) denotes the number of bytes, not characters:

A common misconception is to think that with char(n) and varchar(n), the n defines the number of characters. However, in char(n) and varchar(n), the n defines the string length in bytes (0 to 8,000). n never defines numbers of characters that can be stored. This is similar to the definition of nchar(n) and nvarchar(n).

The misconception happens because when using single-byte encoding, the storage size of char and varchar is n bytes and the number of characters is also n. However, for multibyte encoding such as UTF-8, higher Unicode ranges (128 to 1,114,111) result in one character using two or more bytes.

For example, in a UTF8 collation context, the following returns 16, because each of the four emoji takes 4 bytes in UTF-8:

SELECT DATALENGTH('😀😀😀😀');

To store that string, you'd need at least varchar(16), not varchar(4).
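
As a rough sketch of that bytes-versus-characters difference (assuming SQL Server 2019 or later, where UTF-8 collations exist; the collation name below is just one example):

DECLARE @t table (s varchar(20) COLLATE Latin1_General_100_CI_AS_SC_UTF8);
INSERT @t VALUES (N'coöperate');
SELECT DATALENGTH(s) AS bytes, LEN(s) AS chars FROM @t;
-- bytes = 10, chars = 9: the ö needs two bytes in UTF-8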

answered by Sean Gallardy (edited by Paul White)

Not setting the length for a varchar is widespread; see for example What is the significance of 1/1/1753 in SQL Server?, where the top-voted answer shows SELECT CONVERT(VARCHAR, DATEADD(DAY,-5,CAST('1752-09-13' AS DATETIME2)),100) and no one has complained about it in 13 years and 171k views. Yet being widespread does not always mean being good.
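
For comparison, here is a sketch of the same statement with an explicit length (the 30 merely mirrors the CONVERT default and comfortably fits the style-100 output):

SELECT CONVERT(VARCHAR(30), DATEADD(DAY, -5, CAST('1752-09-13' AS DATETIME2)), 100)
-- e.g. Sep  8 1752 12:00AM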

Why should you put the length behind every varchar?

DECLARE @x varchar = CONVERT(varchar, getdate(), 112);
select @x                                      -- returns just: 2
select CONVERT(varchar(8), getdate(), 112)     -- returns: 20240417

Out (the first select returns just the 2, the second the full date):

2
20240417

I thought that the 8 would not be needed, since the output length might be set by style 112. Yet a comment by @AaronBertrand under Convert date yyyy-mm-dd to integer YYYYMM shows otherwise:

If the style could correct the length then this would work as expected:
DECLARE @x varchar = CONVERT(varchar, getdate(), 112);
but that just returns the single character 2 [instead of 20240417] (and FORMAT can’t save you either). The problem is SQL Server has decided for you that that’s a varchar(1) long before you assign anything, and some style number won’t change its mind.

And another comment even asks you to use char instead of varchar if you know for sure that the value must always have exactly that length:

Do you know you want 6 characters? Then type char(6) instead of varchar /* typing is hard */. There’s no reason to NOT be explicit (or to use varchar for fixed length), and every reason to be careful.
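
A minimal sketch of that advice in the context of the linked YYYYMM question (relying on the usual silent truncation when converting a datetime to a too-short string): style 112 yields yyyymmdd, so char(6) keeps exactly yyyymm:

select CONVERT(char(6), getdate(), 112)
-- e.g. 202404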

Aaron Bertrand's blog post Bad Habits to Kick: Varchar without length shows in depth why you should write the length in brackets after any varchar:

Because the length attribute is optional, people seem to make the assumption that defining a varchar in T-SQL is like defining a string in C#: "it will just expand to whatever size you stuff into it." The problem is, if you don't define a length, SQL Server will assign one for you, and it may not be the length you expect. In some scenarios, it will be 1; in others, 30.

answered by questionto42