Blog Archives

Update on Backup Tool Testing (Part 2)

After a day of testing, we were able to integrate our Ola Hallengren solution with Red Gate SQL Backup Pro v7.2, reducing a SQL-compressed backup of one database from 1.2 GB to 700 MB using compression level 4.  This is a nice reduction for us, with storage space being at a premium.  Today we will be testing the SQL HyperBac solution.

All that was needed for Ola’s script was to add the following:

@Compress = 'Y',
@BackupSoftware = 'SQLBACKUP',
@CompressionLevel = 4

For HyperBac the settings would be:

@Compress = 'Y',
@BackupSoftware = 'HYPERBAC'
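
For context, here is a minimal sketch of how those parameters might slot into a full call to Ola's DatabaseBackup procedure; the database selection, directory, and cleanup window below are placeholders for illustration, not our actual settings:

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'H:\Backups',        -- placeholder path
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CheckSum = 'Y',
    @Compress = 'Y',
    @BackupSoftware = 'SQLBACKUP',
    @CompressionLevel = 4,
    @CleanupTime = 48                 -- placeholder retention, in hours

Swap @BackupSoftware to 'HYPERBAC' and drop @CompressionLevel for the HyperBac variant.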

Enjoy!

Backup Tool Testing Begins (Part 1)

Today we are beginning our testing phase, comparing the default SQL Server backup solution with Red Gate HyperBac and SQL Backup Pro 7.  We purchased Red Gate some time ago but simply have not had the time to implement the tool.  We did some initial testing, but that was over a year ago and a newer version is available now.  We are curious how well this integrates with our ultimate maintenance plan by Ola Hallengren.  Any input?

Miscellaneous Backup Notes

This week I have been teaching my understudy about backups and restores.  Here are some important things we went over:

  1. Checking “Verify backup when finished” does not ensure that the backup is good.
  2. Checking “Perform checksum before writing to media” does not ensure the backup is good.
  3. Checking both 1 and 2 also does not ensure that the backup is good.
  4. Checking both 1 and 2 and performing a RESTORE VERIFYONLY does not ensure that the backup is good.
  5. The only way to ensure without a shadow of a doubt that a backup is good is to use 1 and 2 and then restore it somewhere else and run a DBCC CHECKDB on the database (see the sketch after this list).
  6. Unless you are regularly restoring your backups you do not have a backup solution.
  7. If you switch from the simple recovery model to the full recovery model, you need to kick off a full backup to start the log backup chain.
  8. In the full recovery model (or bulk-logged), you must back up the transaction log regularly enough to keep the log from filling up.  Log backups truncate the log and allow you to provide point-in-time recovery.  Mileage may vary on how often to perform this backup, depending upon the size and requirements of your database.
  9. When you are using the full recovery model and need an ad hoc backup, make sure you check “Copy-only backup” so that you do not disrupt the backup chain; otherwise, moving that backup to another location will screw up your recovery options.
  10. In the full recovery model, using differential backups can reduce the number of log backups that you have to restore.
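
To make a few of these points concrete (the checksum, copy-only, VERIFYONLY, and restore-and-check steps), here is a minimal T-SQL sketch of the routine we walked through; the database name, file paths, and logical file names are placeholders for illustration:

-- Ad hoc backup with a checksum that does not disturb the backup chain (points 2 and 9)
BACKUP DATABASE MyDatabase
TO DISK = 'H:\Backups\MyDatabase_adhoc.bak'
WITH COPY_ONLY, CHECKSUM, INIT;

-- A quick sanity check, but this still does not prove the backup is good (point 4)
RESTORE VERIFYONLY
FROM DISK = 'H:\Backups\MyDatabase_adhoc.bak'
WITH CHECKSUM;

-- The real test: restore it somewhere else and check it (point 5)
RESTORE DATABASE MyDatabase_Verify
FROM DISK = 'H:\Backups\MyDatabase_adhoc.bak'
WITH MOVE 'MyDatabase' TO 'H:\Data\MyDatabase_Verify.mdf',
     MOVE 'MyDatabase_log' TO 'H:\Logs\MyDatabase_Verify.ldf',
     RECOVERY;

DBCC CHECKDB ('MyDatabase_Verify') WITH NO_INFOMSGS;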

Enjoy!


Shrink That Database…Seriously?

Shrink the DB?

Yesterday afternoon my storage administrator came to me and asked me to look at his Enterprise Vault databases and make sure they were receiving proper maintenance, because a routine he was running over the weekend to remove e-mails that had exceeded their retention period was painfully slow.  He was only able to delete about a half-million e-mails over the course of the entire weekend.  It is even more interesting because, when I checked the databases yesterday, all of them combined did not exceed 100 GB.

The administrator proceeded to call Symantec and spoke with an engineer who directed him to the following page.  He then forwarded the link and asked me to make sure I was following their best practices listed on their site.  No problem, I would be glad to compare their recommendations to our maintenance regimen.

I have included a screen shot just in case the page disappears…

Maintenance Plan Recommendation from Symantec

Needless to say, I did not modify my maintenance regimen to include the deprecated Shrink Databases recommendation.  If you think that item is fine, then you really need to read Paul Randal’s blog entry on why you should NEVER shrink your database.  He explains it much better than I do, and he is the expert.  Maybe someone out there knows someone at Symantec and we can get them to read the blog, attend a SQL Saturday, the PASS Summit, or even Paul’s Immersion Event training.  Someone please get them some help!  Enjoy!

Outlook Rules Are Your Friend

If you are like me, you receive quite a few automated e-mails from all of your SQL Server jobs and maintenance plans.  Reading these e-mails daily can take a fair amount of time.  Until now!

Outlook rules and alerts are a great way to manage this workload.  First of all, and most importantly, use a unique description in the Notify Operator Task for your maintenance plan notifications, as shown in Figure 1.  Be consistent!  Use this same string in all of the backup tasks in your maintenance plans.

Maintenance Plan Failed

Figure 1

Then create a rule in Outlook (Tools -> Rules and Alerts -> New Rule) to handle these messages.  Personally, I have the rule look for specific words in the body (the phrase we put consistently in our notification tasks), make sure the messages come from the e-mail account that I set up to alert me from all of my servers, and move them to another folder to keep them organized (as shown in Figure 2).

Outlook Rules Conditions

Figure 2

 From here, let the fun begin as I then do the following (see Figure 3):

  • Mark them as high importance
  • Flag the message for follow-up today
  • Move it to the specified folder (I know we did this on the last step, but it is highlighted on this step as well, silly Outlook)
  • Display a specific message, Database Backup has FAILED, in the New Item Alert window to ensure that I see it as it happens (granted most jobs are at night, but it will be there waiting for me in the morning)
  • Display a Desktop Alert (can you tell that this is a big deal?)
Outlook Actions

Figure 3

From here you can name the rule and finish the task.  I also like to set up a rule for successful jobs that searches for a success string that I have designated, but in that case I mark the e-mail as read and move it to the folder in case I need to search for it later.

For SQL Server Agent jobs, you have to do things a little bit differently since you cannot set a custom string for the alert message.  In that case, search for “STATUS:     Succeeded” (or Failed, if that is the case) in the body of the e-mail and set up your other options in the same manner.
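
As a side note, the notification itself is configured on the job, not in Outlook.  If a job is not e-mailing you at all yet, a minimal sketch looks like the following; the job and operator names are hypothetical and assume the operator already exists:

-- Have an existing Agent job e-mail an operator whenever it fails
EXECUTE msdb.dbo.sp_update_job
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',  -- hypothetical job name
    @notify_level_email = 2,                                -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';              -- hypothetical operator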

Outlook Rules are your friend, use them to be more productive.  Enjoy!

Master Maintenance Plan, Part One

For the last two days, in between other tasks, I have been kicking around ideas to organize (or reorganize) scheduled jobs and maintenance plans into a grander solution that sends me ONE e-mail a day.  Sounds like a minor operation, but I have 32 SQL servers, including four clusters.  Currently, many of the 300 databases send me nightly e-mails to let me know whether the individual operations succeeded or failed.  There are still some maintenance plans that I have not replaced yet.  I have also implemented Ola Hallengren’s backup solution on many of the servers, which greatly reduced the number of e-mails that I receive, and the plan is to install that solution on every server.  This solution has saved me hours upon hours of work; check it out if you have not already.

Some time ago, I wrote or found a script (no, I cannot remember which) that would send me an e-mail listing the backup type and date/time of the last backup.  So here we are, and I am trying to take this to another level by exploring solutions to greatly reduce e-mail traffic and reduce the amount of time it takes me to assess my servers’ overnight maintenance, thus allowing me to spend more time being proactive.  With all that being said, I found the following article, Use PowerShell to Report SQL Server Backup Status, written on the Hey, Scripting Guy! Blog.
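
I no longer have that exact script handy, but a minimal sketch of the kind of query it was built around, pulling the most recent backup of each type per database out of msdb, might look like this:

-- Most recent backup of each type (D = full, I = differential, L = log) per database
SELECT d.name AS database_name,
       bs.type AS backup_type,
       MAX(bs.backup_finish_date) AS last_backup_finish
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS bs
       ON bs.database_name = d.name
GROUP BY d.name, bs.type
ORDER BY d.name, bs.type;

Wrapping a query like that in msdb.dbo.sp_send_dbmail with its @query parameter is one way to get the results into a nightly e-mail.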

As this plan develops, I will post new blog posts.  Enjoy or let me know how you handle it.  Let’s share ideas!

SQL Nexus, Using RML Utilities

Continuing in our journey of SQL Nexus, today we will look at the RML Utilities that we installed prior to the installation of Nexus.  While thinking about what to blog about this morning, I decided to search for some background information to integrate into our learning.  Once I found an article from the SQL Customer Advisory Team (aka SQL CAT), I thought I would not reinvent the wheel and simply pass this information on to you.  The article, Precision Performance for Microsoft SQL Server using RML Utilities 9.0, gives an excellent breakdown of the uses of the tool.  Check it out and enjoy!

SQL Nexus, Read Trace Errors Resolved!

Last Monday we started our SQL Nexus journey together in order for me to refresh my knowledge of this awesome tool.  Near the end of last week, the blog series was delayed due to an error with ReadTrace and its corresponding report.  Yesterday and again today, I have been troubleshooting in an attempt to resume our journey, but with little hope of correcting the error.  I have posted the error on CodePlex and am awaiting a response.

Yesterday, I referenced a “ReadTrace exit code: -2” error in the ReadTrace log file.  I did find one page out there on SQL Server Central where Grant Fritchey (Twitter | Blog) had encountered the problem.  After contacting Grant, he told me that the problem he had was a simple one: he did not have the file and path name correct, which corresponded with the file-not-found error higher up in the ReadTrace log.  However, when I cut and paste the file path straight out of my log, the file opens.  Strange indeed, but Grant got me thinking, and so I read through all of the files looking for a possible mistake in the scripts.  I could not find any.

Keep in mind, this is from running the StartSQLDiagDetailed_Trace2008R2.cmd performance script that I outlined in one of the previous posts.  That script simply references XML files that appear to only outline the performance indicators that we want to monitor.  It is the MSDiagProcs.sql script that identifies the trace files to load and run.  In addition, none of the three scripts will load trace files into the import.

Here is a sample of what is called from the CMD scripts:

H:\PerfScripts\2008R2>"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SQLdiag.exe" /I "H:\PerfScripts\2008R2\SQLDiagPerfStats_Trace2008.XML" /O "H:\PerfScripts\2008R2\SQLDiagOutput" /P

However, attempts to run this manually through ReadTrace also failed with one of the trace files created by the performance script, such as:

H:\Program Files\Microsoft Corporation\RMLUtils>readtrace.exe -I"H:\PerfScripts\2008R2\SQLDiagOutput\SQLVMDEV10_SQLDIAG__sp_trace.trc" -o"C:\Users\crssqladmin\AppData\Local\Temp\2\RML" -SSQLVMDEV10 -dsqlnexus -E -T18 -f -H!PSSDIAG

However, playing with the file names yielded a different problem.  When I renamed the files to take out the double underscore, '__' (which had seemed odd to me from the beginning), resulting in SQLVMDEV10_SQLDIAGsp_trace.trc and SQLVMDEV10_SQLDIAGsp_trace_blk.trc, the ReadTrace load now indicated a permissions error.  When I added my administrative account to these two trace files, ReadTrace worked.  Hmm, weird but interesting.  Would the SQL Nexus import now load?  See for yourself:

SQL Nexus Import Successful

Enjoy, and I hope this helps if you encounter the same issue!

SQL Nexus, Read Trace Errors

Last Monday we started our SQL Nexus journey together in order for me to refresh my knowledge of this awesome tool.  Near the end of last week, the blog series was delayed due to an error with ReadTrace and its corresponding report.  Today, I am still attempting to resume our journey, but with little hope of correcting the error.  I have posted the error on CodePlex and am awaiting a response.

The first error I received when loading the trace files was error 26, which many people translated into a permissions problem on the trace file.  I resolved that issue by granting full control to my admin login and, when that alone did not do it, to the SQL Server service account as well.  Now I am receiving a “ReadTrace exit code: -2” error in the ReadTrace log file.

At this point, I can find no helpful information on the interwebs.  I do find it interesting that the log file also denotes that it cannot find the trace file it is trying to load; however, when I cut and paste the file path in question right from the log file, the file opens in Notepad, so I am not sure that the error is entirely accurate.  I checked the permissions on the trace file and they are consistent with the other files.  I’m stumped.

Has anyone else seen this?

SQL Nexus, Data Collection

Monday we started our SQL Nexus journey together in order for me to refresh my knowledge of how to use this awesome tool.  On Tuesday, we installed some essential pieces in order to get the SQL Nexus tool installed and ready to use.  We detoured on Wednesday to attend the SQL Server 2012 Technology Days at Microsoft in Tampa.  Today we return to the fun, so let’s jump right back in!

I realized this morning that I left out another download, the SQL Server 2005/2008 Performance Statistics collection scripts.  These have the XML configurations needed to run SQLDiag in order to load performance data into SQL Nexus.  Well, let me qualify that: the XML files are pre-configured, and they can be modified if necessary.  These configurations will capture DMVs and also blocking information that can help you troubleshoot performance problems.  They will also collect machine information from Perfmon, msinfo32, various event logs, and the Profiler trace (unless you run the NoTrace command script).  There is more detailed information on customization available on the CodePlex download page (make sure you read the fine print about the download location, as it is not the big download button on the right).

SQL Nexus Import

Once you run one of the scripts, such as StartSQLDiagDetailed_Trace2008R2.cmd, you will see several lines of preparatory messages followed by “SQLDIAG Collection started.  Press Ctrl+C to stop.”  At this point you can reproduce your issue, if possible.  If that is not possible, collect data for a few minutes and then press Ctrl+C to finish the collection and generate the output necessary for SQL Nexus.  Be vigilant when running this, as the output can grow very large very quickly, especially if you are running it against a production server.  Be careful!  Once the dialog closes, the output is ready to load into SQL Nexus.

Now open SQL Nexus and import the data output by SQLDiag; by default it is in the output folder generated underneath the folder where you ran the script.  From there we can run some of the reports located in the upper left corner of the interface.  Not all of them will apply, depending upon what data you collected and which script ran.  Below is a sample of the Blocking and Wait Statistics report.  This was just a sample run for this blog, so unfortunately I did not have any blocking issues to show you.  I hope this helps your understanding of SQL Nexus.  Enjoy!

SQL Nexus Blocking
