Category Archives: Database Administration Tasks
On Thursday, January 15, 2015, my good friend Mike Lawell (b|t) and I embarked upon our first pre-conference training for SQL Saturday Nashville. The event was a smashing success with a fabulous turnout. In addition, the crowd was engaged and responsive; as a teacher, it does not get any better than that. For those who attended our session, here are the files that we promised you. We have included all of the links, the code (which contains a wealth of information), and the rest of the slides that we were unable to get to. Please use them and feel free to ask us any questions that you may have. In addition, we are more than happy to help you at the beginning of your DBA journey. Welcome to the #SQLFamily!
For those of you who did not attend, the session was entitled “Getting the New DBA Up to Speed with Performance Tuning.” The session was aimed at new and ‘accidental’ DBAs as they embark upon the journey of learning the art of performance tuning. If you are a SQL Saturday organizer, we would love to offer this session at your event. Please feel free to contact us if you would like us to provide this session for you. Enjoy!
This morning I was looking through a new vendor database to see exactly what I would be supporting, and I stumbled upon a curious table. This table seemed quite normal to the naked eye until I glanced at the data types. There were four, count them, four fields set to VARCHAR(MAX) and two set to NVARCHAR(MAX). First of all, why the inconsistency in the data types? Maybe there is a legitimate reason, but I am not buying it. If you need the Unicode support provided by the NVARCHAR data type, wouldn’t you need it across all fields? I have only worked with a few databases that needed Unicode support, and that was the case for them. Maybe there is a case for mixed usage, but I do not understand it, and obviously I was not part of their design team.
Now onto the bigger elephant in the room: why on earth would you have all of these fields set to the maximum storage amount of two gigabytes? (NVARCHAR(MAX) also stores up to 2 GB, which works out to roughly one billion two-byte Unicode characters.) Are you really storing that much data in six fields of EACH record? Keep in mind that SQL Server stores records in 8K pages. When a record exceeds that, the data goes off-row to overflow pages with a pointer so that SQL Server knows where to find the rest of the data. I understand that it will not take up that much space if you are not actually storing that much data, but there is a potential here for these tables to be problematic. Granted, this might be a good design for one field that stores a large amount of text or notes, but six fields in the same record? I looked at the length of the data currently entered in these six fields and found nothing larger than 100 characters. Overkill? What do you think?
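If you want to check how much of that capacity is actually in use, you can query the longest value in each column. A minimal sketch; the table and column names here are hypothetical stand-ins for the vendor schema:

```sql
-- Hypothetical table and column names for illustration.
-- DATALENGTH returns bytes, so NVARCHAR values report
-- twice their character count.
SELECT
    MAX(DATALENGTH(Notes))    AS MaxNotesBytes,
    MAX(DATALENGTH(Comments)) AS MaxCommentsBytes
FROM dbo.VendorTable;
```

If the longest values come back under a few hundred bytes, a sized type such as VARCHAR(500) would document the real requirement and avoid the baggage that comes with MAX types.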
Having used SQL Server Migration Assistant (SSMA) for a major Oracle conversion project over the last year, I quickly grew aggravated with its limitation of a single target database per project. In other words, if you set up the project file in SSMA against a staging environment, you are then unable to point it to an acceptance environment to migrate there instead. That can be very frustrating, especially if you have customized many of the project settings. I have discovered two workarounds for this problem.
- Back up (or copy) the project folder (c:\Users\<<Username>>\Documents\SSMAProjects\<<Project Name>>). Close SSMA and then delete the target-metabase.mb file in the project folder. Now open SSMA and you can connect to a new SQL Server and/or database. You would want to copy the folder if you want to retain the settings for a different environment such as staging, acceptance, production, etc. If you rename the folder for a different environment, make sure to rename the *.o2ssproj file to match the folder name. This is the file that opens inside of SSMA.
- The second option is to create a new project in SSMA and then copy in object-containers.mappings, project-container.mappings, and finally preferences.prefs. These files hold the project settings that you painstakingly set up in your original project.
I hope this helps. Enjoy!
Ever wonder what the usage statistics are for your transaction log files without opening each one individually in the SQL Server Management Studio shrink file dialog? DBCC SQLPERF is a neat little utility that can tell you the size of each log file and the amount of space occupied by transactions inside it.
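The log-space report is a one-liner:

```sql
-- Returns one row per database: log size in MB and the
-- percentage of the log currently in use.
DBCC SQLPERF(LOGSPACE);
```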
You can also use this utility to clear the sys.dm_os_wait_stats and sys.dm_os_latch_stats statistics, like so:
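For example, to reset both sets of cumulative statistics:

```sql
-- Reset the cumulative wait and latch statistics.
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
DBCC SQLPERF('sys.dm_os_latch_stats', CLEAR);
```

Keep in mind that clearing these resets the counters for everyone on the instance, so coordinate before doing it on a shared server.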
A few weeks ago, I let loose a rebel yell and issued a command directive: Death to Oracle! Then I delivered the death knell as we migrated all of the data to SQL Server over Memorial Day weekend. Today, I am proud to announce that after two weeks of the Finance SQL Servers being in production, the users are happy and productive.
Most would say that is how it should be, but the Oracle DBAs told me that the users would complain daily. They were happy to rid themselves of those users and now I know why.
Hopefully I did not jinx myself, but it looks like the domination plan is working!
We had a meeting this morning where our Director talked about initiatives and people bringing forth new ideas to help streamline our department. Therefore, soon I will be bringing forth a new initiative to eliminate Oracle altogether and replace it with SQL Server. It will ultimately be denied because of the cost involved with the conversion, but they will have to discuss it and think about it. Take that, Oracle; the SQL Server Global Domination plan is gaining traction.
Last week I let loose a rebel yell and issued a command directive: Death to Oracle! Today I am proud to announce that I dealt a death knell to one of our Oracle servers by migrating our finance applications from Oracle to SQL Server. This was a year-long project that culminated in one really, really long weekend of SQL Fun. I survived and Oracle bit the dust. My plan for SQL Server World Domination has completed phase one.
My part of the project included a 27-hour marathon involving migrating the data using SQL Server Migration Assistant and then numerous scripts and comparisons to day-zero databases from the vendor while upgrading the application across five different product upgrades using RedGate SQL Compare (which worked beautifully). It was a day full of Starbucks triple espressos and five-hour energy shots before I crashed and the testers took over. We then rounded out the weekend with a 33-hour ETL process which all completed successfully before the application went live at 7am this morning. Mission accomplished.
This project has great visibility in my organization, and its success will open the floodgates to future migrations by showing our world that SQL Server is an enterprise-level database. I am not stopping here; I want their 100 terabytes of production data! It will be mine, oh yes it will be mine.
Today signals the end and the beginning for a year-long project. My project for SQL Server Domination. Our shop is primarily an Oracle shop with dot net developers. Although I do have almost 300 databases across all levels, the bulk of mission critical applications are written for Oracle.
A little over a year ago we began a plan to migrate our financial applications to SQL Server in order to retire the oldest Oracle servers on site and reduce our massive licensing footprint. This was all part of my plan for SQL Server World Domination (said in a maniacal voice, imagine that). We knew that if this mission critical migration was a success that we could leverage our product against the Oracle servers in-house and begin the destruction of their infrastructure in favor of my glorious SQL Servers! (Hear the trumpets heralding the new king?)
This project, if successful, will be the Oracle death knell. Over the last year I have prepared diligently with one goal in sight: this migration weekend. The weekend of my birthday seems apropos to give me the present that I so desire, SQL Server Domination. You may have noticed that the number of blog posts recently has been limited, especially after blogging every day last year. For the last few months, I have had one purpose and one purpose only: ensuring the success of this migration!
Let the trumpets herald the coming of a new dawn. Long live SQL Server. I will blog again next week on the other side of this project to declare that the Queen is dead and a new King has ascended the throne! Until then, cue the Imperial March. Hello Starbucks and a massive four-day working weekend with little sleep, migration project here I come!
Back in January, I did a series on simple auditing which walked through setting up an audit and creating a simple SSRS report. I actually used the report and the query that I demonstrated for a couple of servers in my stack that require auditing. The basis of this revolves around the following line of code:
SELECT * FROM fn_get_audit_file('g:\auditing\*', DEFAULT, DEFAULT)
This line of code runs great for my login, and I never gave it much thought because I run it from a sysadmin account. When users asked to be able to run this report, that is when things got sticky. I cannot fathom why a simple query of audit log data cannot be granted to a read-only account. In order to run this SELECT statement, you need either a sysadmin-level account or the CONTROL SERVER permission, which pretty much gives you the keys to the kingdom. Why would they do that?
I am sure there is a great technical reason, but why not allow it to be filtered down to read-only access simply to view the data? With those permissions, whoever is in charge of reviewing the audit can manipulate the data however they see fit. That makes no sense to me, but then again, who am I in the grand scheme of things? Shortly I will post a few different scenarios on how to solve this problem. Enjoy!
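One common pattern for this kind of problem, sketched here with hypothetical procedure and user names, is to wrap the query in a stored procedure and grant report users EXECUTE on only that procedure:

```sql
-- Hypothetical wrapper procedure: callers need only EXECUTE
-- on the procedure, not direct rights to the audit function.
CREATE PROCEDURE dbo.usp_ReadAuditLog
WITH EXECUTE AS OWNER
AS
BEGIN
    SELECT *
    FROM sys.fn_get_audit_file('g:\auditing\*', DEFAULT, DEFAULT);
END;
GO

GRANT EXECUTE ON dbo.usp_ReadAuditLog TO AuditReportReader;
```

Note that EXECUTE AS OWNER by itself only extends database-scoped permissions; for a server-scoped permission like CONTROL SERVER to flow through, the procedure generally needs to be signed with a certificate mapped to a suitably privileged login (or the database made TRUSTWORTHY, which is riskier). This is a sketch of the general technique, not necessarily the scenarios I will post.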
I was doing some mentoring today on online restores, so I thought I would share it with my blog readers. The most important thing about online restores is that they are only supported in SQL Server 2005 and later versions (and only in the Enterprise edition). In addition, for the bulk-logged recovery model the following conditions must be met in order to complete an online restore:
- All of the log backups must have been taken and completed before you start the online restore.
- If you have bulk changes in the database, the files must be online.
- All of your bulk changes must be backed up before starting the online restore process.
In addition, I have seen it recommended that you switch from the bulk-logged recovery model to the full model before starting the online restore and then switch back afterward for smoother recovery. Enjoy!
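Putting the steps above together, a minimal online file-restore sequence might look like the sketch below; the database name, logical file name, and backup paths are hypothetical:

```sql
-- Restore a single damaged file while the rest of the
-- database stays online (Enterprise edition only).
RESTORE DATABASE FinanceDB
    FILE = 'FinanceDB_Data2'
    FROM DISK = 'g:\backups\FinanceDB_full.bak'
    WITH NORECOVERY;

-- Roll the restored file forward with the log backups taken
-- before and during the restore...
RESTORE LOG FinanceDB
    FROM DISK = 'g:\backups\FinanceDB_log1.trn'
    WITH NORECOVERY;

-- ...then recover to bring the file back online.
RESTORE DATABASE FinanceDB WITH RECOVERY;
```

While the file is being restored, queries touching only the online filegroups continue to run; anything that hits the offline file will fail until the final recovery step completes.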