Taking a photo walk to a BBQ

This month’s #tsql2sday is being hosted by Aaron Bertrand (t|b). Aaron gives us a choice: talk about our passions outside of the SQL Server community, or our T-SQL bad habits. I don’t have enough time to bore you with all my T-SQL bad habits, so I’ll try to inspire you with my #SQLBBQ & #SQLPhoto passions.

When I’m not architecting SQL Server solutions at Elon University, I’m usually behind a camera or a BBQ cooker. Both of these passions let me exercise the creative part of my brain when I’m away from the office.

I got the photography bug right before my son was born. I started with a point-and-shoot Sony Cybershot but was frustrated with the delay in taking a picture. I loved the digital photos when they turned out, but getting a consistent focus was difficult. My next purchase was a Nikon D70 DSLR. This camera was a life changer for me. Having the ability to shoot 3 frames per second was eye-opening. I no longer had to deal with missed shots or not knowing where the focus point was set. I used the D70 for another 6 years before upgrading to my current setup, the Nikon D300s.


I continued taking photos over the years, but with the onset of smartphones I don’t pull out my Nikon as much. The iPhone has been an asset to my photography just because it is always with me. What I really enjoy is taking photos at PASS events. Back in 2012, before my first PASS conference, I reached out to Pat Wright (b|t) about joining the photo walk during the conference. What ended up happening was a full-day “pre-con” of photography on the Sunday before the conference. We ended up at Snoqualmie Falls and the surrounding area, and I had a blast taking photos with Pat. We continue to have our photo “pre-cons” before PASS and other SQL Saturday events. This is a great time to talk shop about photography and SQL Server. I’m looking forward to PASS 2018 in Seattle and future photo walks.

Photo Albums:

2012 PASS Summit
https://www.flickr.com/photos/purnellphotography/albums/72157632003357227

2014 PASS Summit
https://sqlnikon.smugmug.com/2014/2014-11-02-SQL-Photo-Walk/
https://sqlnikon.smugmug.com/2014/2014-11-05-Denny-SQL-Karaoke/
https://sqlnikon.smugmug.com/2014/2014-11-06-SQL-PASS-Karaoke/

2016 PASS Summit
https://sqlnikon.smugmug.com/2016/2016-10-23-SQL-PASS-Photo-Walk/
https://sqlnikon.smugmug.com/2016/2016-10-25-SQL-Photo-Walk/
https://sqlnikon.smugmug.com/2016/2016-10-27-SQL-Summit-SQL-Dinner/

2017 SQL Sat Charlotte
https://sqlnikon.smugmug.com/2017/2017-10-14-SQL-Sat-Charlotte/


My other passion is cooking, specifically BBQ. I own a few grills, but my favorite is a cooker with a 3×6-foot cooking surface. It is fueled by propane but has a shelf for a piece of wood like hickory or oak. I bought the cooker from a friend about 15 years ago and have loved experimenting with different cuts of meat. My go-to meal is whole chickens, baby back ribs, and mac & cheese. The best part about the mac & cheese is that it is just store-bought Stouffer’s, but when it is left on the cooker for a few hours, it comes out with an amazing smoky flavor. During last year’s #sqlvacation, Kevin Kline made a stop at our user group in Greensboro, NC, and I brought in the cooker for the meeting. I love sharing the food and wish I had a bigger cooking surface so I could cook for an entire SQL Saturday event.

Hope to see you on a photo walk or cookout very soon!

Doug Purnell
@SQLNikon
sqlnikon@gmail.com
https://sqlnikon.com

MySQL RPC with Linked Server

Regular blogging is one of my goals this year. One of the areas I struggle with is what content to blog about. While I was listening to SQL Server Radio (Episode 84) with Guy Glantser (t) and Matan Yungman (t), they mentioned a new blog post series they are starting titled “Something Cool Learned Today”. These are meant to be short, 1-2 paragraph blog posts about things you learned. This is the spark I needed to build some additional content for my blog.

We have a door access system that uses MySQL. We interact with the data via a SQL Server linked server so we can join MySQL tables with other SQL Server tables. Because of a bug in the vendor application, I needed a way to update certain records every few minutes, based on specific column values. I knew I wanted to build a Stored Procedure (SP), as I would in SQL Server, but didn’t know the MySQL syntax for doing so. Then there was the matter of executing the MySQL SP from SQL Server; I didn’t know the syntax for that either.

Off to ‘The Google’. My first stop was the MySQL documentation for CREATE PROCEDURE. I didn’t know the exact MySQL version, but I wasn’t trying to do anything complicated, so I figured it would suffice. MySQL enforces the semicolon delimiter for all commands, something SQL Server says it will enforce in future releases (start using them now); I wonder how that will pan out. The catch when creating a new SP in MySQL is that both the CREATE PROCEDURE statement and the statements inside the SP body need delimiters. For MySQL to recognize where the SP definition ends, you temporarily change the delimiter to something like //, create the SP, then change the delimiter back to a semicolon.
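Here’s a minimal sketch of the pattern. Only the procedure name BugFix matches my real one; the table and column names are made up for illustration:

DELIMITER //
CREATE PROCEDURE BugFix()
BEGIN
    -- hypothetical fix: reset rows the vendor app leaves in a bad state
    UPDATE door_events
    SET event_status = 0
    WHERE event_status = 99;
END //
DELIMITER ;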

Now that my SP was built and execution rights were set, I needed to figure out how to call the MySQL SP through the SQL Server linked server. Most of the time we interact with MySQL via OPENQUERY. This allows us to build the statements in native MySQL so they execute faster. I found that you execute MySQL SPs with ‘call’ instead of ‘exec’ as in SQL Server, but when I tried the CALL syntax through OPENQUERY, the query threw an error.
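The failing attempt looked something like this:

SELECT * FROM OPENQUERY(MySQL, 'CALL BugFix();');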

Msg 7357, Level 16, State 2, Line 13
Cannot process the object “CALL BugFix();”. The OLE DB provider “MSDASQL” for linked server “MySQL” indicates that either the object has no columns or the current user does not have permissions on that object.

Off to ‘The Google’. Searching for “executing MySQL stored procedures from SQL Server linked servers” landed me on a sqlservercentral.com thread with the exact issue (score!). Buried in the replies was the syntax I was looking for: EXEC('CALL BugFix') AT MySQL. I was not familiar with using EXEC this way. Looking through Books Online for EXECUTE, there it was: you are able to execute a pass-through command against a linked server. Feeling good, I headed over to SQL Server to run the command and bam, error:

Msg 7411, Level 16, State 1, Line 7
Server ‘MySQL’ is not configured for RPC.

I remembered that linked servers have lots of options under the hood. I headed over to the linked server definition and found a few RPC settings, but which ones should I enable? A quick search took me to a blog post that answered the exact question. I ran the following statement:

EXEC master.dbo.sp_serveroption @server = N'MYSERVER', @optname = N'rpc out', @optvalue = N'true';

I re-ran my EXEC statement (no restart required) and it returned the results I was looking for: (2 rows affected).
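For anyone landing here with the same errors, the working combination boiled down to two statements (using the linked server name from the errors above; substitute your own):

EXEC master.dbo.sp_serveroption @server = N'MySQL', @optname = N'rpc out', @optvalue = N'true';

EXEC('CALL BugFix();') AT MySQL;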

Going into this problem I was a little hesitant to get started; procrastination set in because of the fear of the unknown. I’ve run into this before, and the best way to get over it is to start with a small task, like a quick Google search. Most times, the momentum of small tasks helps me get over the fear and get excited about the task at hand.

The other part of this exercise was the various error messages along the way. You can’t just give up and think it can’t be solved; someone has usually gone through the same issue before. Thanks to search engines, product documentation, and community bloggers, solutions are out there waiting for you to find them.

Doug Purnell
@SQLNikon
sqlnikon@gmail.com
https://sqlnikon.com

Spectre/Meltdown Baselining

This month’s #tsql2sday is being hosted by Arun Sirpal (b|t). The topic this month is how we conquered a technical challenge. The area I’ve been spending the most time in recently is baselining for the Spectre/Meltdown patches that are coming. We have a monitoring tool from Idera, SQL Diagnostic Manager, on our key production boxes but I wanted something consistent across all boxes that we could use for tracking and baselining wait types.

The Spectre/Meltdown patches can have a significant performance impact on servers running a Windows Server OS with IO-intensive applications. SQL Server falls right into this area. We need a way to identify CPU changes across all our instances (dev/test/prod) while we apply the patches.

There are plenty of queries to help identify wait types, but I was looking for something that could be put into place quickly, as patches from VMware and Microsoft are being released this week. I found the perfect one: sp_BlitzFirst from Brent Ozar Unlimited.

Their solution gives you the wait stat breakdown and reporting to help you identify baseline changes. When you have limited time to put something in place, these types of solutions from community partners are a great place to start. Having baseline wait stats has been on my list for years. There was always another project that was pushed to the front of the list, but now Spectre/Meltdown is at the top of my list!

I used the following blog post to get me started. We have all of our instances registered and organized under a Central Management Server. This feature allows us to run a query across multiple instances at once. I pulled down the latest First Responder Kit, which includes the script to schedule and a Power BI report to analyze the results.
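As a rough sketch, the scheduled collection call looks something like this (the logging database and table names here are my own assumptions; check the kit’s documentation for the full parameter list):

EXEC dbo.sp_BlitzFirst
    @OutputDatabaseName = N'DBAdmin',            -- assumed central logging database
    @OutputSchemaName = N'dbo',
    @OutputTableName = N'BlitzFirst',
    @OutputTableNameWaitStats = N'BlitzFirst_WaitStats';

Dropped into a SQL Server Agent job that runs every few minutes, this builds the history the Power BI report reads.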

The next phase was identifying a process to baseline. We don’t have any automated tools to regression test our apps (something else to work on), so I needed something to stress our SQL Server instances and record the wait types throughout the patching process. Once again, the fine folks at Brent Ozar Unlimited had a blog post for this problem as well. Using SQLQueryStress, we can simulate a load on our SQL Server instances and record the wait types. After each patch, we can execute the same load and compare the results using Power BI.
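The load query itself doesn’t need to be fancy. Something like this read-heavy statement (AdventureWorks shown purely for illustration), run across a few dozen SQLQueryStress threads for a fixed number of iterations, generates enough activity to compare wait stats before and after each patch:

SELECT COUNT_BIG(*)
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.TotalDue > 1000;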

When faced with a technical challenge like Spectre/Meltdown, having a process in place to baseline your SQL Server instances is important; it shows how changes to your environment can have lasting effects on performance.


T-SQL Tuesday #51 – Don’t bet against CHECKSUM when performing backups

This month’s T-SQL Tuesday is being hosted by Jason Brimhall (blog | Twitter).  Jason’s theme is about gambling with SQL Server.  What areas have you gambled with?  Are there tasks that you should be doing that you are betting won’t be needed?  One came to mind for me: CHECKSUM.

I’ve always been a fan of Maintenance Plans.  They are an easy way to get basic maintenance tasks established on an instance of SQL Server.  The major downside of the built-in maintenance plans is no support for the CHECKSUM media option for BACKUP DATABASE.  If you are performing backups without this option enabled, you run the risk of the pages in the backup media not matching the pages stored in the database.  Without the CHECKSUM option, SQL Server doesn’t know if the page written to the backup media was written out correctly.  The CHECKSUM option ensures that every page written out to your backup media has a checksum that matches the checksum of the page in the database.  It also goes as far as generating a checksum over the entire backup, which the RESTORE statement can take advantage of.  There is a performance impact for using the CHECKSUM option, but as everyone says, your backup is only as good as your restore.  If you bet against your restore, your odds of a resume-updating event just increased.
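For reference, it’s a single option on the BACKUP statement, and RESTORE VERIFYONLY can recheck it without restoring (the database name and path below are placeholders):

BACKUP DATABASE YourDatabase
TO DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM;

-- verify the page checksums and the backup checksum without restoring
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM;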

Now that you understand the benefits of CHECKSUM for your backups, how do you integrate it into your backup process?  It’s time to graduate from the built-in Maintenance Plans and move to Ola Hallengren’s SQL Server Maintenance Solution.  It is by far the best set of maintenance scripts available within the SQL Server community.  Do yourself a favor: pull the scripts down to a test instance and give them a try.  I made the move a few months ago and have never looked back.
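As a quick sketch of why the move is painless, a full backup with checksums through Ola’s DatabaseBackup procedure looks roughly like this (the directory is a placeholder; parameter names are from his documentation):

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'X:\Backups',
    @BackupType = 'FULL',
    @CheckSum = 'Y';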

Don’t bet against CHECKSUM when performing your backups!

T-SQL Tuesday #50 – Automating Tasks

This month’s T-SQL Tuesday topic is the reason I love being a DBA: automating repetitive tasks.  Hemanth Damecharla (b|t) picked a timely topic.  As you start the new year, are there certain tasks you need to automate that tend to sit on the corner of your desk yelling at you?

This topic also reminds me of a chart I came across in the past describing the time savings associated with automating repetitive tasks.  As at most places, we are all asked to do more with fewer resources.  This often leads to additional workloads without the time in the day to complete them.  How many of these can you automate?

Automate some tasks!

As I approach each task, or even large projects, I like to start with a manual approach.  This gives me time to understand the business problem I’m solving and plan my attack for automation.  Some tasks don’t need to be automated; a simple T-SQL statement can be built to solve the problem.  As soon as I run across tasks that involve multiple data sources, SSIS is my go-to tool.  SSIS allows me to build modular packages that can be reused and organized for automation.

Back in 2012 when PASS was in Seattle, WA, I attended a pre-con on SSIS Design Patterns hosted by Andy Leonard, Tim Mitchell, Matt Masson, Michelle Ufford, and Jessica Moss.  This really opened my mind to what’s possible with SSIS.  I recommend the book that sparked the pre-con if you’re interested in learning more.

As you start your automation effort, realize the upfront time spent may seem counterproductive.  But the time invested now will pay off down the road.  Instead of having to manually run scripts (and having to be reminded to do so), your automated process will be running in the background making you look like a rock star!

Happy Automating


Doug Purnell

Virtual Book Club: SQL Server Execution Plans, Chapter 5

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server.  I’ve been on a training kick lately and the timing of this book club was perfect.  Denis Gobo (b|t) has experience starting a book club where he works and decided to bring it to the SQL Server community.  The goal of the club is to read a chapter each week and discuss things you learned, liked and disliked.  You can read Denis’s original post here.  The first book chosen was written by Grant Fritchey (b|t) and is titled “SQL Server Execution Plans, Second Edition”.  You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing on from my review of Chapter 4, here are my thoughts on Chapter 5.  The optimizer is very good at what it does, but no one is perfect.  There may be rare cases where the optimizer needs a hint to force a certain type of behavior.  Although this may bring immediate gains in query performance, over time the forced plan may cease to perform better than what the optimizer can compile on the fly.  The reason for this is data change.  As the data changes, statistics are updated and the selectivity of the data changes, which causes indexes to become more or less useful.  This is why using hints is dangerous and why they should be used sparingly.

Things I learned

Besides the occasional evil ‘WITH (NOLOCK)’, I don’t use hints.  Partly because I didn’t understand query plans that well before reading this book, and I had lots of respect for the optimizer.  Now that I understand how execution plans are generated, I can see a useful role for hints: testing.  Being able to change the behavior of the optimizer with a Query, Join, or Table hint is a powerful learning tool.  You can start from the optimizer’s auto-generated plan and start poking it with hints to see how the I/O and query time change.
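For example (AdventureWorks, purely as a learning exercise), you can run the same join twice, once letting the optimizer choose and once forcing a hash join, and compare the numbers:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the optimizer's choice
SELECT p.LastName, e.JobTitle
FROM Person.Person AS p
JOIN HumanResources.Employee AS e
    ON p.BusinessEntityID = e.BusinessEntityID;

-- force a hash join and compare the I/O and elapsed time
SELECT p.LastName, e.JobTitle
FROM Person.Person AS p
JOIN HumanResources.Employee AS e
    ON p.BusinessEntityID = e.BusinessEntityID
OPTION (HASH JOIN);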

If I were to start using hints, I would definitely see a need for a tracking document or metadata query to reevaluate the execution plans.  If the tables involved are static in nature, you may only need to revisit the hints once a year.  On the flip side, a monthly review may be necessary if the query is pulling data from volatile tables.

Likes

Even though controlling execution plans with hints is frowned upon, Grant did a great job going through examples of each hint and how you can use them to better understand the optimizer.  Grant made it very clear throughout the chapter to use hints at your own risk, but if you find a select operator has timed out looking for the best plan, you may have an opportunity to use hints.

Dislikes

As with most chapters thus far, no complaints on the material presented.

On to chapter 6!

Virtual Book Club: SQL Server Execution Plans, Chapter 4

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server.  I’ve been on a training kick lately and the timing of this book club was perfect.  Denis Gobo (b|t) has experience starting a book club where he works and decided to bring it to the SQL Server community.  The goal of the club is to read a chapter each week and discuss things you learned, liked and disliked.  You can read Denis’s original post here.  The first book chosen was written by Grant Fritchey (b|t) and is titled “SQL Server Execution Plans, Second Edition”.  You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing on from my review of Chapter 3, here are my thoughts on Chapter 4.  This chapter was a deeper dive into some of the more complex plans that can be generated from T-SQL elements like APPLY, CTEs, MERGE, and views, as well as how indexes are chosen.

Things I learned

Writing queries is an art form; you need to understand all the T-SQL statements and how they affect the query plan.  Don’t get stuck always using one technique over another.  For example, adding a WHERE clause to a sub-select may outperform an APPLY statement (depending on the data set).  The point here is that by understanding how each T-SQL statement will interact with your data, you can try different approaches to solving your problem and reach the results in the most efficient way.

Using ROLLBACK can be a useful approach when you need to view the actual execution plan for UPDATE, INSERT, DELETE, or MERGE statements.  The actual query plan is generated, but the DML is not committed to the database, so you can repeatedly change the query until it generates the best quality plan.
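The pattern looks like this (AdventureWorks used for illustration; request the actual execution plan before running it):

BEGIN TRANSACTION;

-- the actual plan for this UPDATE is captured here...
UPDATE HumanResources.Employee
SET VacationHours = VacationHours + 8
WHERE BusinessEntityID = 4;

-- ...and the change is undone so the statement can be tuned and re-run
ROLLBACK TRANSACTION;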

Likes

Grant did a great job of teaching how to dissect large query plans.  Just like when you are trying to eat an elephant, you need to approach large query plans one operator at a time.  I also thought the way Grant explained views and indexes was very thorough.  I now have a better understanding of when a view might be expanded to the table level or when an index might be passed over because of bad statistics.

The other thing I noticed is how Grant was able to sharpen our understanding of T-SQL and database fundamentals over the course of this chapter.  Being able to learn how the query optimizer generates plans while learning better uses of sub-queries, APPLY, CTEs, MERGE, views, and indexes is a great thing.

Dislikes

As with most chapters thus far, no complaints on the material presented.  Just a minor typo in the T-SQL of Listing 4.7: the parameter being passed to the SP should be @BusinessEntityID and not @EmployeeID.

On to chapter 5!

Virtual Book Club: SQL Server Execution Plans, Chapter 3

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server.  I’ve been on a training kick lately and the timing of this book club was perfect.  Denis Gobo (b|t) has experience starting a book club where he works and decided to bring it to the SQL Server community.  The goal of the club is to read a chapter each week and discuss things you learned, liked and disliked.  You can read Denis’s original post here.  The first book chosen was written by Grant Fritchey (b|t) and is titled “SQL Server Execution Plans, Second Edition”.  You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing on from my review of Chapter 2, here are my thoughts on Chapter 3.  This chapter was short and sweet.  XML plans were the main focus, and this chapter gave me some great ideas on how to use XQuery to make reading XML-based plans easier.

Things I learned

I’ve always stayed away from XML.  In my many years working in the database world I have not had the need to interact with XML data.  Querying XML-based query plans with XQuery is the first time I’ve seen a benefit of using XML as a DBA.  For larger plans, it will be easier to query the XML data than to hunt down operators within the graphical view.

You can use XQuery to query .sqlplan files, XML data stored in columns, or the plan cache directly.  Caution needs to be used when querying directly against the plan cache, as this can have performance side effects.
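As a small taste of the plan cache approach, this query (my own sketch, not from the book) uses XQuery’s exist() method to find cached plans that contain a clustered index scan:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan.exist('//RelOp[@PhysicalOp="Clustered Index Scan"]') = 1;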

Likes

I liked how Grant told us upfront, in the first part of the chapter, that if you’re not dealing with SQL Server instances older than version 2005, text-based query plans may not be something you need to understand.  Besides text-based plans being a deprecated feature, XML plans offer plenty of advantages over them.

Dislikes

Nothing to dislike in this chapter.  There was just a minor typo in the supplied script for chapter 3: the book references Listing 3.18, but in the script file it is listed as 3.19.

On to chapter 4!

Virtual Book Club: SQL Server Execution Plans, Chapter 2

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server.  I’ve been on a training kick lately and the timing of this book club was perfect.  Denis Gobo (b|t) has experience starting a book club where he works and decided to bring it to the SQL Server community.  The goal of the club is to read a chapter each week and discuss things you learned, liked and disliked.  You can read Denis’s original post here.  The first book chosen was written by Grant Fritchey (b|t) and is titled “SQL Server Execution Plans, Second Edition”.  You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing on from my review of Chapter 1, here are my thoughts on Chapter 2.  You could think of Chapter 1 as walking into the shallow end of the pool and Chapter 2 as swimming into the deep end, but in a good way.  I never felt like I was drowning in the material; all the topics were presented in a way that made them easy to comprehend.

Things I learned

Blocking/Non-blocking determines when an operator can begin processing data.  Blocking operators need to wait for all the data before they can begin outputting data (a hash join is an example of a blocking operation).  Non-blocking operators, on the other hand, can begin processing as soon as the data starts arriving (a merge join is an example of a non-blocking operation).  This subtle idea can help you better understand how data flows between your operators.

Sorting is something I always used as a presentation technique, but in the world of query plans and table joins it can have a big impact on the optimizer’s plan generation.  For example, a merge join will only be used on sorted data, period.  But if you have two non-sorted inputs, a hash join may be a better option.  Because hash joins build temporary tables within tempdb, too many of them can cause tempdb performance issues.

Depending on the data sets, a hash join may be a good fit (think small data sets).  They should also be a warning sign.  Pay attention to missing indexes, a missing WHERE clause, or calculations in the WHERE clause that can make it non-sargable (unable to use an index).

When using SET STATISTICS IO ON to better understand logical and physical page reads, scan count was something I was aware of but didn’t fully understand.  There is a great blog post on MSDN that explains when you will see scan count 0, 1, or N.  Just think of it as how many times a scan is started to look for data in a data set.
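A quick illustration (AdventureWorks; the read counts on your instance will differ):

SET STATISTICS IO ON;

-- a singleton seek on the primary key reports Scan count 0
SELECT FirstName, LastName
FROM Person.Person
WHERE BusinessEntityID = 42;

-- reading the whole table reports Scan count 1
SELECT COUNT(*)
FROM Person.Person;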

When generating a query plan for an UPDATE statement, the optimizer first has to determine which rows need to be updated.  This is accomplished by first doing a scan/seek operation.  Pay attention to the WHERE clause of your UPDATE statement, as this will determine how efficient the select portion of the UPDATE will be.

Likes

Even though there was plenty more information presented in chapter 2 than in chapter 1, everything was clear and concise.  Following along with the scripts on my local instance of SQL Server made the query plans easier to follow, and I was able to spend more time understanding how the optimizer behaves.

Dislikes

There were only a few things that I would have liked to see presented better.  Under the Nested Loops Join section, the tool tip defines it as “For each row in the top (outer) input, scan the bottom (inner) input, and output matching rows.”  But Grant explained it as “… scanning the outer data set (the bottom operator in a graphical execution plan) once for each row in the inner set”.  Grant’s explanation seemed to contradict the tool tip, so I had to spend extra time reading through this section to understand what was actually happening within the nested loops join.

Scan count was referenced under the Filtering Data section, but no time was given to explaining whether a low/high scan count was a sign of possible problems.  But, as I do with other blog posts, when I hit a term I don’t understand, I search for it.  I was able to find a blog post on MSDN that did a great job of explaining how it works, with samples showing what would cause 0, 1, and N scan counts.

On to chapter 3!

Virtual Book Club: SQL Server Execution Plans, Chapter 1

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server.  I’ve been on a training kick lately and the timing of this book club was perfect.  Denis Gobo (b|t) has experience starting a book club where he works and decided to bring it to the SQL Server community.  The goal of the club is to read a chapter each week and discuss things you learned, liked and disliked.  You can read Denis’s original post here.  The first book chosen was written by Grant Fritchey (b|t) and is titled “SQL Server Execution Plans, Second Edition”.  You can get a free electronic copy or purchase a print copy from simple-talk.com.

This is actually my first book review.  Following Denis’s lead, I’m going to discuss what things I learned, what I liked about the chapter and areas I disliked.  As with speakers, feedback is important for authors.  Hopefully our discussion of the book will help Grant on his third edition.

Things I learned

Did you know that you can save an XML version of an execution plan and share it with others?  Well, neither did I.  This would be a great way to get feedback on better ways to build queries.  In fact, SQLPerformance.com offers the ability to upload your plan directly from SQL Sentry’s Plan Explorer to their Q&A site.  The SQL Server community has a wealth of information and there are always folks available to help you with your questions.
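If you want to capture the XML yourself, rather than right-clicking a graphical plan and saving it, the SHOWPLAN_XML session setting returns the estimated plan as XML (AdventureWorks query shown for illustration):

SET SHOWPLAN_XML ON;
GO

SELECT FirstName, LastName
FROM Person.Person
WHERE BusinessEntityID = 42;
GO

SET SHOWPLAN_XML OFF;
GO

Save the result with a .sqlplan extension and anyone can open it in Management Studio.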

I’m a big fan of Idera’s SQL Diagnostic Manager.  This tool makes it easy to see trouble areas within my SQL Server instances.  The only downside is that I’m not using the DMO queries on a daily basis.  With DMOs you’re able to pull a specific plan from the plan cache.  This is useful when you’re developing queries and need to view a plan that may have been saved from a previous execution of your query.  To interact with some DMOs, you need to use the APPLY operator because they are table-valued functions.  I’ve known about APPLY but have not had the need to use it in my daily work.
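As a sketch of what that looks like in practice, this query pairs the plan cache DMOs with APPLY to pull the text and plan of the top CPU consumers:

SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time,
    st.text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;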

Likes

Grant did a great job introducing the query engine and how a plan is ultimately built and saved for later use.  Having a strong understanding of how the execution plan is built will help me write better queries.  Grant didn’t throw too much material at me during the first chapter.  This allowed me to comprehend the material better without feeling overwhelmed.

Dislikes

I didn’t find anything in this chapter that came across as a dislike and I’m guessing this trend will continue for the remainder of the book.

I’m looking forward to chapter 2 next week and the rest of the book.  My goal is to be able to read execution plans beyond just seeing how thick the lines are between operators.