Blog Archives

Grody to the VARCHAR(MAX)

This morning I was looking through a new vendor database to see exactly what I would be supporting, and I stumbled upon a curious table.  This table seemed quite normal to the naked eye until I glanced at the data types.  There were four, count them, four fields set to VARCHAR(MAX) and two set to NVARCHAR(MAX).  First of all, why the inconsistency in the data types?  Maybe there is a legitimate reason, but I am not buying it.  If you need the Unicode support provided by the NVARCHAR data type, wouldn't you need it across all of the fields?  I have only worked with a few databases that needed Unicode support, and that was the case for them.  Maybe there is a case for mixed usage, but I do not understand it, and obviously I was not part of their design team.

Now on to the bigger elephant in the room: why on earth would you have all of these fields set to the maximum storage amount of two gigabytes (NVARCHAR(MAX) also stores up to 2GB, which works out to about half as many characters since each character takes two bytes)?  Are you really storing that much data in six fields of EACH record?  Keep in mind that SQL Server stores records in 8K pages.  When a record exceeds that, the data goes to an overflow page with a pointer so that SQL Server knows where to find the rest of it.  I understand that it will not take up that much space if you are not actually storing that much data, but there is a potential here for these tables to be problematic.  Granted, this might be a good design for one field if it stores a large amount of text or notes, but six fields in the same record?  I looked at the length of the data currently entered in these six fields and I found nothing larger than 100 characters.  Overkill?  What do you think?
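
If you want to run the same sanity check against your own suspect table, a query along these lines shows the longest value actually stored in each of the wide columns (the table and column names below are hypothetical placeholders):

SELECT MAX(DATALENGTH(Notes1)) AS MaxNotes1,
MAX(DATALENGTH(Notes2)) AS MaxNotes2,
MAX(DATALENGTH(Comments)) AS MaxComments
FROM dbo.VendorTable;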

Enjoy!

 

T-SQL Tuesday #38: Speaking of Resolve

This month’s topic for T-SQL Tuesday #38 hosted by Jason Brimhall (b|t) is an aptly themed variation on the word resolution.  I personally chose the word resolve.

In 2012, I made a resolution to begin presenting at the local level.  That sounds like a normal enough resolution for most; however, being an introvert, standing in front of people is quite terrifying.  After attending many user group meetings and SQL Saturdays, I decided that since I have met many of these speakers and they are not much different from me, I should be able to stand up and do that too!

In the past, when called upon to speak in public or in front of a group, my fight-or-flight instinct would kick in and generally I would want to run with every fiber of my being.  Sometimes, I shamefully did run.  Interviews are no different in many cases, so it is safe to assume that I do not interview well.  In the past, I have probably missed out on some good job opportunities simply because they thought I was an idiot, based solely upon my presentation skills.  It is because of this that I resolved to improve upon the skill.

Sounds good, huh?  Make a resolution and then you do it, right?  Easy as pie.

Not so fast, my friend.  I still wanted to run, Forrest, run!  Karla Landrum (b|t), the SQL Community Evangelist for PASS, realized what was going on and she gently nudged me along to speak at the Pensacola SQL Saturday last summer.  This is not unusual, as historically I have been talked into doing stuff by women (and I probably should not have put that out there, so now you know).

That day in Pensacola, my fight or flight instinct kicked in again all morning long, but now I was on the hook and I did not want to disappoint Karla.  Plus many of my colleagues knew I was speaking.  Quite honestly this held me accountable for I did not want to ruin my reputation in the SQL community before I really even had one.  My resolve that day held firm and I spoke quite nervously.  But I did not run.  I climbed the proverbial mountain and planted my flag upon its peak.

Since that time I have spoken at SQL Saturday Orlando and a user group meeting.  It is getting easier with each event, but I still need to resolve myself to get better each time.  Maybe one day I can speak at the PASS Summit?  I will resolve myself to accomplish that goal and now you can hold me accountable.  Enjoy!

Red Gate Virtual Restore Scripting (Part 10)

If you have been following along since December, then you know that I have been investigating the Red Gate Backup and Restore bundle to implement it in our environment alongside Ola Hallengren's backup maintenance scripts.  One of our last stumbling blocks has been the automation and scripting of the Red Gate Virtual Restore product.  We have moved methodically through the list and, with the help of Red Gate's excellent support department, resolved each item, including this one.

The problem had been that when we tried to verify the backup through restoration and DBCC CHECKDB on a certain database, one considerably larger than all of the other databases on this server, we did not have enough space on the drive using the restore script we created last week.  That scripting solution is great as long as there is enough space on the drives to attach the database files from backup and then drop them.

The beauty of the Virtual Restore product, in case you did not already know, is that it can restore these files with a much smaller footprint than the actual database files take up in their existing format.  However, the problem is that it is strictly a wizard at this point: it grabs a specific backup, runs the restore manually, checks the integrity and consistency, and then drops the database.  This is a great product, but this capability needs to be scriptable or else it does not help much, especially when you have hundreds of databases.

We ran the wizard and then viewed the script:

RESTORE DATABASE [TestDB_Virtual] FROM
DISK=N'I:\TestDB\FULL\SQLPROD40_TestDB_FULL_20130101_213217.sqb'
WITH MOVE N'TestDB_Data' TO N'F:\VirtualRestore\TestDB_Data_TestDB_Virtual.vmdf',
MOVE N'TestDB_Index' TO N'F:\VirtualRestore\TestDB_idx_TestDB_Virtual.vmdf',
MOVE N'TestDB_Log' TO N'F:\VirtualRestore\TestDB_Log_TestDB_Virtual.vldf', NORECOVERY, STATS=1, REPLACE
GO

RESTORE DATABASE [TestDB_Virtual] WITH RECOVERY, RESTRICTED_USER
GO

DBCC CHECKDB ([TestDB_Virtual])
GO

DROP DATABASE [TestDB_Virtual]
GO

This script did not work when we ran it via T-SQL because of a lack of disk space, which is the same problem we encountered using the Backup Pro restore script; however, it did work with the Red Gate Virtual Restore wizard.  We contacted support to find out why there was a difference on the same box.  Basically, SQL Server does not know that the HyperBac service is running in the background.  The wizard is smart enough to check your disk space, and if you do not have enough it temporarily adds an entry to the HyperBac configuration that tricks SQL Server into thinking that you have the right amount of disk space to complete the task.

The parameter is "VirtualDiskSize=", where the value is an integer representing the disk size in megabytes.  You add the parameter to the bottom of the hyper.conf file found in the C:\Program Files (x86)\Red Gate\HyperBac\Bin folder.  At that point you restart the HyperBac service and the problem is solved!  Next I will try to automate this script….stay tuned!
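
For reference, the line added to the end of hyper.conf might look something like this; 512000 is only an illustrative value (roughly 500 GB, since the number is in megabytes), so size it to cover your largest restore:

VirtualDiskSize=512000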

Creating a Red Gate Backup Pro “Virtual Restore” Job Automatically (Part 9)

If you have been playing along at home this month, then you know about some of the struggles that I have faced with automating Red Gate Backup Pro.  Well, with the help of Red Gate and some rewriting of their provided script, I now have a script that will work automagically!  When I say automagically, I mean that I want the script to run forever without any more interference from me, in the same manner that our Ola Hallengren backup scripts work.

If you are not familiar, this tool is excellent for many reasons, but one of the reasons that we like it is the simple fact that if a new database is created, it will be maintained automagically.  I do not have to create any maintenance plans or agent jobs each time I add a database to a server.  We have several servers that get new small databases all the time, so this is a time saver as well as a life saver.  Now this script will do the same: it will iterate through the database names, restore the latest backup set for each as a new database with '_Restored' tacked on the end of the existing name, and place the data and log files in a separate directory to avoid overwriting your existing databases.

Do not worry about failure reporting with the job, as Red Gate will send you an email for each database, pass or fail.  If you wish to see only the failures, then change 'MAILTO' to 'MAILTO_ONERRORONLY' in the script.

--This script will restore multiple databases where each database has its own full backup to be restored.
USE master

DECLARE @dbname NVARCHAR(260)
-- Add any databases that you want to exclude
DECLARE cdatabases CURSOR FOR
SELECT name
FROM   sysdatabases
WHERE  name != 'tempdb'
AND name != 'master'
AND name != 'msdb'
--Set @restorepath to be the path of where your backups are located, in my example this is 'D:\Backup\'
DECLARE @restorepath VARCHAR(500)
--@filename will need to be modified depending on how the backup files are named
DECLARE @filename VARCHAR(500)
DECLARE @restorestring VARCHAR(1000)
--Not needed if running in a SQL job
DECLARE @exitcode INT
DECLARE @sqlerrorcode INT

OPEN cdatabases

FETCH NEXT FROM cdatabases INTO @dbname

WHILE @@FETCH_STATUS = 0
BEGIN
SET @restorepath = 'D:\SQLVMDEV10\' + @dbname + '\Full\'
-- @filename will need to be modified depending on how the backup files are named
SET @filename = @restorepath + 'SQLVMDEV10_' + @dbname + '_FULL_*.sqb'
SET @restorestring = '-SQL "RESTORE DATABASE [' + @dbname + '_Restored] FROM DISK = ''' + @filename
+ ''' SOURCE = ''' + @dbname + ''' LATEST_FULL WITH MAILTO = ''ed@sqlgator.com'', RECOVERY, DISCONNECT_EXISTING, MOVE DATAFILES TO ''G:\VirtualRestore'', MOVE LOGFILES TO ''G:\VirtualRestore'', REPLACE, ORPHAN_CHECK, CHECKDB = ''ALL_ERRORMSGS, DATA_PURITY, EXTENDED_LOGICAL_CHECKS'', DROPDB" -E'
--If you wish to apply additional backups, remember to change WITH RECOVERY to WITH NORECOVERY
EXEC master..sqlbackup
@restorestring,
@exitcode OUTPUT,
@sqlerrorcode OUTPUT

--IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
--  BEGIN
--    RAISERROR ('SQL Backup job failed with exitcode: %d  SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
--  END
FETCH NEXT FROM cdatabases INTO @dbname
END

CLOSE cdatabases
DEALLOCATE cdatabases

This will generate multiple entries in Red Gate Monitor, as mentioned in Part 4 of our journey.  In order to prevent these, set up a maintenance window on your monitoring tool and run this script only during that window, ensuring that no other jobs run during that time so that you do not miss any important alerts from those maintenance jobs.  Here is a link from Red Gate detailing how to set maintenance windows in Monitor.  Enjoy!

Corruption and Verifying Backups

This morning I had a conversation with a Subject Matter Expert and Application Administrator who asked me quite simply for a verified backup before he does an install on Saturday.  My response was that I could test it manually for him, and I was willing to do so, but currently with our infrastructure the nightly backups are not tested without manual intervention.  I have tried to change this policy for some time, and unfortunately it may not change until something bad happens.  With this in mind, I do the best I can to minimize the risk, given that I cannot automate a solution to restore databases on a regular basis as I would prefer.  How can you minimize the risk?

  1. Use the WITH CHECKSUM option in your backup scripts.  Read Paul Randal's The Importance of Validating Backups article on SQL Server Central for more information.
  2. Use the RESTORE VERIFYONLY command (we use Ola Hallengren's Backup Solution, which has a verify option that runs this command for us).
  3. Use WITH CHECKSUM on the RESTORE command to recheck the page checksums in the backup as well as the checksum of the backup file itself.
  4. Automate, or manually perform, restores of your databases from scratch on a regular basis.  (This is where we are lacking and where I want to take my servers.)
  5. Once you have restored, run DBCC CHECKDB against the restored database (a minimal T-SQL sketch of all five steps follows this list).
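
For reference, here is a minimal sketch of the five steps in T-SQL; the database name, paths, and logical file names are hypothetical placeholders, and in our environment Ola Hallengren's solution already handles the first couple of steps:

-- 1. Back up with page checksums
BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backup\YourDB_FULL.bak'
WITH CHECKSUM, INIT;

-- 2. Verify the backup file without restoring it
-- 3. WITH CHECKSUM rechecks the page checksums recorded in the backup
RESTORE VERIFYONLY
FROM DISK = N'D:\Backup\YourDB_FULL.bak'
WITH CHECKSUM;

-- 4. Restore a scratch copy (ideally on a test server), again rechecking checksums
RESTORE DATABASE [YourDB_Restored]
FROM DISK = N'D:\Backup\YourDB_FULL.bak'
WITH MOVE N'YourDB_Data' TO N'G:\Restore\YourDB_Restored.mdf',
MOVE N'YourDB_Log' TO N'G:\Restore\YourDB_Restored.ldf',
CHECKSUM, RECOVERY;

-- 5. Check the restored copy for corruption
DBCC CHECKDB ([YourDB_Restored]) WITH NO_INFOMSGS, ALL_ERRORMSGS;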

If you are not doing all five, then you cannot say confidently that your backups are verified.  However, if you are doing all five, keep in mind that there is no foolproof guarantee against corruption; this merely minimizes the damage by ensuring you have viable backups.  I hope this helps….Enjoy!

Nifty Code to Update Collation

Today, we interrupt our countdown series in progress to bring you some actual T-SQL coding.  I had to bring out this snippet and dust it off for my purposes: changing some column-level collation in a database.  To give you some background, we have a vendor product that requires case-insensitive collation at the server and table level, but case-sensitive collation at the column level.  Yes, I know that is not best practice and it seems like a weird design choice, but I have to support it as the DBA.

Normally this would require a script from the vendor because it is their code, but the product allows you to create some custom tables in the database (they default to the table-level collation) and those are our responsibility.  So why would you need to change it, you say?  The developers were trying to write some reports against these tables, and as you would expect they received an error (Msg 468, Level 16, State 9, Line 4: Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CS_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.).  It would take forever to go table by table and column by column to change the collation, hence the need for the script!
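
To illustrate the error, a comparison like the one below (the table and column names are hypothetical) fails when the column on one side is case-sensitive and the column on the other side is not:

SELECT v.AccountCode
FROM dbo.VendorTable v            -- AccountCode collated SQL_Latin1_General_CP1_CS_AS
JOIN dbo.CustomReportTable r      -- AccountCode collated SQL_Latin1_General_CP1_CI_AS
ON v.AccountCode = r.AccountCode  -- Msg 468: collation conflict in the equal to operation

A one-off workaround is to add a COLLATE clause to one side of the comparison, but changing the column collation fixes it for every query.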

DISCLAIMER: I have had this little snippet for a while and I do not know who wrote it originally; it was probably found on MSDN.  I also know that it is not the most up-to-date way of doing this (you should use the sys schema), but it works, and sometimes that is easier than rewriting the code.

The first thing you should do is right-click in your query window in SSMS and set your results to text, so that the script outputs a new script that you will run in another query window.  Keep in mind that with some of your tables you might have to drop and recreate indexes in order to alter the collation.  Always take a backup before changing something like this, and know what you are doing and changing before executing any script.

SELECT  'ALTER TABLE ' + syso.name + ' ALTER COLUMN [' + sysc.name + ']',
syst.name + ' (' ,
sysc.length ,
') COLLATE SQL_Latin1_General_CP1_CS_AS' ,
CASE sysc.isnullable
WHEN '1' THEN 'NULL'
ELSE 'NOT NULL'
END + CHAR(13) + CHAR(10) + 'GO'
FROM    sysobjects syso ,
syscolumns sysc ,
systypes syst
WHERE   syso.id = sysc.id
AND syst.xtype = sysc.xtype
AND sysc.collation = 'SQL_Latin1_General_CP1_CI_AS'
AND syso.type = 'U'
AND syst.name != 'text'
AND syso.name NOT IN ( 'SYSDIAGRAMS' )
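
With results set to text, each row of output is a statement you can paste into a new query window and run, looking something along these lines (the table, column, type, and length here are hypothetical):

ALTER TABLE Customers ALTER COLUMN [AccountCode] varchar ( 20 ) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL
GO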

Enjoy and I hope this helps you!

Sunday Funday

Today’s installment of Sunday Funday brings you some fun with T-SQL by Nick Jacobsen, the classic 99 Bottles of Beer T-SQL style.   Enjoy!

Duplicate Columns

Yesterday I needed to find several tables in a database that contained a certain column name.  It is usually easy if you have only a few tables, but this database has almost 2000 tables, so I needed to use some T-SQL skills.

I consulted the #SQLHelp hashtag on Twitter, because it has been a while since I last did this and I knew someone there would remember how to do it quicker than I could.  Robert Davis, MCM extraordinaire (blog|twitter), offered the following line:

SELECT object_name(object_id) FROM sys.columns WHERE name = '<column name>'

While Bill Fellows (blog|twitter) had a different approach:

SELECT TABLE_NAME FROM information_schema.COLUMNS ISC WHERE ISC.COLUMN_NAME = '<column name>'

Both worked very well, and here they are for you, and for me the next time I need to remember how I did it.  Enjoy!

Sunday Funday

Today’s installment of Sunday Funday brings you a great little Tic-Tac-Toe game with T-SQL exercise by Adam Haines found on SQL Server Central.   Enjoy!

SQL Formatting Tip

Last night was the first meeting of the new Tampa-side SQL Server User Group, and the turnout was great.  I gave my first user group presentation and I was a little more nervous than the last time I gave this presentation.  At this point, I am going to attribute that to the fact that I have to see this group every month, whereas in Pensacola I do not.  But I am moving closer to completing my goals for the year, and that is a great thing.  To be a great speaker, obviously you have to practice as a speaker; seems simple enough.

Moving forward, I wanted to pass along a great tip from last night from Pam Shaw (blog|twitter), our fearless leader.  She shared a site with us called Instant SQL Formatter that provides automatic formatting and coloring for many different flavors of code, including SQL Server and Oracle.  It is a pretty nifty utility, especially with all of the available formatting options.  Check it out, and thanks Pam!