Monthly Archives: November 2013
Grody to the VARCHAR(MAX)
This morning I was looking through a new vendor database to see exactly what I would be supporting, and I stumbled upon a curious table. This table seemed quite normal to the naked eye until I glanced at the data types. There were four, count them, four fields set to VARCHAR(MAX) and two set to NVARCHAR(MAX). First of all, why the inconsistency in the data types? Maybe there is a legitimate reason, but I am not buying it. If you need the Unicode support provided by the NVARCHAR data type, wouldn't you need it across all fields? I have only worked with a few databases that needed Unicode support, and that was the case for them. Maybe there is a case for mixed usage, but I do not understand it, and obviously I was not part of their design team.
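If you want to hunt for these columns in your own databases, a query along these lines against SQL Server's catalog views should do it. This is just a sketch; the key detail is that sys.columns reports max_length = -1 for columns declared with (MAX).

```sql
-- Find every VARCHAR(MAX) / NVARCHAR(MAX) column in the current database.
-- max_length = -1 in sys.columns means the column was declared as (MAX).
SELECT
    SCHEMA_NAME(t.schema_id) AS schema_name,
    t.name                   AS table_name,
    c.name                   AS column_name,
    ty.name                  AS data_type
FROM sys.columns AS c
JOIN sys.tables  AS t  ON t.object_id = c.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('varchar', 'nvarchar')
  AND c.max_length = -1
ORDER BY schema_name, table_name, c.column_id;
```

Run it in the vendor database and you will get a quick inventory of every table with this design choice, not just the one that caught your eye.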
Now onto the bigger elephant in the room: why on earth would you set all of these fields to the maximum storage size of two gigabytes? (NVARCHAR(MAX) tops out at roughly one billion characters, because each Unicode character takes two bytes, but that is still 2GB of storage.) Are you really storing that much data in six fields of EACH record? Keep in mind that SQL Server stores records in 8KB pages. When a row exceeds that, the data is pushed off to overflow pages with a pointer left behind so that SQL Server knows where to find the rest of the data. I understand that the columns will not take up that much space if you are not actually storing that much data, but there is potential here for these tables to be problematic. Granted, this might be a good design for one field that stores a large amount of text or notes, but six fields in the same record? I looked at the length of the data currently entered in these six fields and found nothing longer than 100 characters. Overkill? What do you think?
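The length check I describe above looks something like this. The table and column names here (dbo.VendorTable, Notes) are made-up stand-ins for the vendor's actual schema, and the VARCHAR(200) target is just an illustration of right-sizing once you know the real data:

```sql
-- Check how much data is actually stored in a suspect (MAX) column.
-- dbo.VendorTable and Notes are hypothetical names for illustration.
SELECT MAX(LEN(Notes)) AS longest_value
FROM dbo.VendorTable;

-- If nothing comes close to the declared size, the column can be
-- right-sized, e.g. down to VARCHAR(200). Test against a copy of the
-- database first -- shrinking a vendor's schema can break their app.
ALTER TABLE dbo.VendorTable
    ALTER COLUMN Notes VARCHAR(200) NULL;
```

With a vendor database you usually cannot change the schema without voiding support, so in practice the SELECT is the useful half: it gives you the numbers to take back to the vendor.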