SQL Life 2026

This month’s #tsql2sday is being hosted by Adam Machanic (t|b):

Your mission for month #100 is to put on your speculative shades and forecast the bright future ahead. Tell us what the world will be like when T-SQL Tuesday #200 comes to pass. If you’d like to describe your vision and back it up with points, trends, data, or whatever, that’s fine. If you’d like to have a bit more fun and maybe go slightly more science fiction than science, come up with a potential topic for the month and create a technical post that describes an exciting new feature or technology to your audience of June 2026. (Did I do the math properly there?)

Here is my vision…

As I step into my home office and close the door, the walls and ceiling automatically bring up a live landscape of Grand Teton National Park. My priority tasks are displayed, and I see that today is the day we migrate our entire production environment over to the latest update of SQL Server to take advantage of the latest AI engine built on top of Azure Machine Learning.

The AI engine is going to be the DBA in our back pocket. We will no longer have to deal with monitoring and performance tuning. Ever since the initial automatic database tuning release back in 2017, Microsoft has been freeing up DBA resources with each update. Notifications for transactions, indexing, backups, DR/HA, and security tuning are displayed on the integrated Power BI console on one of the walls of my office. I remember the days when I had multiple screens on my desk; now I just move around the room to the various virtual workspaces, depending on the project I'm working on.

The migration has already been run by the older AI engine countless times on our dev and test instances. With dbatools having been integrated into the SQL engine years ago and expanded to over 3,000 commands, PowerShell is now my primary interface to SQL Server. The developers still code in T-SQL when the ORM tool doesn't behave properly, but for me, PowerShell is where I'm comfortable.

As I start the production migration, open transactions are frozen, databases are moved between memory spaces, and then the transactions are brought back online. Users don't notice even the tiniest of hiccups within their applications. The new SQL AI is already online and bringing up the status of the new instances on my wall; all is well.

The update cycle of SQL Server is still bringing value to us. The AI engine takes care of the weekly cumulative updates and leaves the monthly feature releases for me to approve. Automatic DR/HA with Azure makes the older AGs seem like the Stone Age. Our database, application, and web servers still reside in our own data centers, but DR/HA is handled for us.

Now that the migration is done, it's time for a break. Catch you on the next #tsql2sday!

Doug Purnell

I’m Speaking at SQL Saturday Raleigh

I'm honored to have been selected to speak at SQL Saturday Raleigh #721 on April 14th. I'm speaking about choosing the right high availability (HA) and/or disaster recovery (DR) solution for SQL Server. I've given this talk just once before, at the Companero Conference back in the fall of 2017. With the release of SQL Server 2017, there are a few more HA/DR options available to us. There are plenty of other great sessions being offered that day, so it's not too late to sign up!

For those of you not familiar with SQL Saturday events, they are organized by SQL Server professionals with help from sponsors and volunteers within the SQL Server community. Lunch is generally 10-15 dollars, or you can choose to bring your own, for an entire day of free training. There are multiple events every Saturday, so find one near you!

I also signed up for Kalen Delaney's (b|t) pre-con on SQL Server internals. I've always enjoyed digging into the internals of SQL Server. A better understanding of what's under the covers will help you build and support better solutions.

You'll see me roaming the event with my camera, so be sure to say hello. I hope to see you there!

Doug Purnell
@SQLNikon
sqlnikon@gmail.com
https://sqlnikon.com

Taking a photo walk to a BBQ

This month's #tsql2sday is being hosted by Aaron Bertrand (t|b). Aaron gives us a choice this month: talk about our passions outside of the SQL Server community, or our T-SQL bad habits. I don't have enough time to bore you with all my T-SQL bad habits, so I'll try to inspire you with my #SQLBBQ and #SQLPhoto passions.

When I'm not architecting SQL Server solutions at Elon University, I'm usually behind a camera or a BBQ cooker. Both of these passions let me exercise the creative part of my brain when I'm away from the office.

I got the photography bug right before my son was born. I started with a point-and-shoot Sony Cybershot but was frustrated by the delay in taking a picture. I loved the digital photos when they turned out, but getting consistent focus was difficult. My next purchase was a Nikon D70 DSLR. That camera was a life changer for me. Having the ability to shoot 3 frames per second was eye-opening; I no longer had to deal with missed shots or not knowing where the focus point was set. I used the D70 for another six years before upgrading to my current setup, the Nikon D300s.

I continued taking photos over the years, but with the onset of smartphones I don't pull out my Nikon as much. The iPhone has been an asset to my photography simply because it is always with me. What I really enjoy is taking photos at PASS events. Back in 2012, before my first PASS conference, I reached out to Pat Wright (b|t) about joining the photo walk during the conference. What ended up happening was a full-day "pre-con" of photography on the Sunday before the conference. We ended up at Snoqualmie Falls and the surrounding area, and I had a blast taking photos with Pat. We continue to have our photo "pre-cons" before PASS and other SQL Saturday events. It's a great time to talk shop about photography and SQL Server. I'm looking forward to PASS 2018 in Seattle and future photo walks.

Photo Albums:

2012 PASS Summit
https://www.flickr.com/photos/purnellphotography/albums/72157632003357227

2014 PASS Summit
https://sqlnikon.smugmug.com/2014/2014-11-02-SQL-Photo-Walk/
https://sqlnikon.smugmug.com/2014/2014-11-05-Denny-SQL-Karaoke/
https://sqlnikon.smugmug.com/2014/2014-11-06-SQL-PASS-Karaoke/

2016 PASS Summit
https://sqlnikon.smugmug.com/2016/2016-10-23-SQL-PASS-Photo-Walk/
https://sqlnikon.smugmug.com/2016/2016-10-25-SQL-Photo-Walk/
https://sqlnikon.smugmug.com/2016/2016-10-27-SQL-Summit-SQL-Dinner/

2017 SQL Sat Charlotte
https://sqlnikon.smugmug.com/2017/2017-10-14-SQL-Sat-Charlotte/

My other passion is cooking, specifically BBQ. I own a few grills, but my favorite is a cooker with a 3×6-foot cooking surface. It is fueled by propane but has a shelf for a piece of wood like hickory or oak. I bought the cooker from a friend about 15 years ago and have loved experimenting with different cuts of meat. My go-to meal is whole chickens, baby back ribs, and mac & cheese. The best part about the mac & cheese is that it is just store-bought Stouffer's, but when it is left on the cooker for a few hours, it comes out with an amazing smoky flavor. During last year's #sqlvacation, Kevin Kline made a stop at our user group in Greensboro, NC, and I brought in the cooker for the meeting. I love sharing the food and wish I had a bigger cooking surface so I could cook for an entire SQL Saturday event.

Hope to see you on a photo walk or cookout very soon!

Doug Purnell
@SQLNikon
sqlnikon@gmail.com
https://sqlnikon.com

MySQL RPC with Linked Server

Regular blogging is one of my goals this year. One of the areas I struggle with is what content to blog about. While I was listening to SQL Server Radio (Episode 84) with Guy Glantser (t) and Matan Yungman (t), they mentioned a new blog post series they are starting titled "Something Cool Learned Today". These are meant to be short, 1-2 paragraph blog posts about things you learned. This was the spark I needed to build some additional content for my blog.

We have a door access system that uses MySQL. We interact with the data via a SQL Server linked server so we can join MySQL tables with other SQL Server tables. Because of a bug in the vendor application, I needed a way to update certain records every few minutes, based on specific column values. I knew I wanted to build a stored procedure (SP), similar to what I would do in SQL Server, but I didn't know the MySQL syntax for doing so. Then there was executing the MySQL SP from SQL Server; I didn't know the syntax for that either.

Off to 'The Google'. My first stop was the MySQL documentation for CREATE PROCEDURE. I didn't know the exact MySQL version, but I wasn't trying to do anything complicated, so I figured it would suffice. MySQL enforces the semicolon delimiter for all commands, something SQL Server says it will enforce in future releases (start using them now). The catch when creating a new SP under MySQL is that the statements inside the SP end with semicolons too, so the client would otherwise treat the first inner semicolon as the end of the CREATE PROCEDURE statement. For MySQL to recognize the new SP, you need to temporarily change the delimiter to something like //, create the SP, and then change the delimiter back to a semicolon. I wonder how this will pan out with SQL Server?
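
Here's a minimal sketch of that delimiter dance; BugFix and the access_events table are placeholder names I'm using for illustration, not the vendor's actual schema:

DELIMITER //

CREATE PROCEDURE BugFix()
BEGIN
    -- Placeholder fix: clear the bad state the vendor app leaves behind
    UPDATE access_events
    SET processed = 1
    WHERE processed = 99;
END //

DELIMITER ;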

Now that my SP was built and execution rights were set, I needed to figure out how to call the MySQL SP through a SQL Server linked server. Most of the time we interact with MySQL via OPENQUERY, which lets us write the statements in native MySQL so they execute faster. I found that you execute MySQL SPs with CALL instead of EXEC as in SQL Server, but when I tried the CALL syntax, the query threw an error.
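
My first attempt looked something like this (MySQL is the linked server name, and BugFix is the procedure from above):

SELECT *
FROM OPENQUERY(MySQL, 'CALL BugFix();');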

Msg 7357, Level 16, State 2, Line 13
Cannot process the object “CALL BugFix();”. The OLE DB provider “MSDASQL” for linked server “MySQL” indicates that either the object has no columns or the current user does not have permissions on that object.

Off to 'The Google'. Searching for "executing MySQL stored procedures from SQL Server linked servers" landed me on a sqlservercentral.com thread with the exact issue (score!). Buried in the replies was the syntax I was looking for: EXEC('CALL BugFix();') AT MySQL. I was not familiar with using EXEC this way. Looking through Books Online for EXECUTE, there it was: you can execute a pass-through command against a linked server. Feeling good, I headed over to SQL Server to run the command and, bam, error:

Msg 7411, Level 16, State 1, Line 7
Server ‘MySQL’ is not configured for RPC.

I remembered that linked servers have lots of options under the hood. I headed over to the linked server definition and found a few RPC settings, but which ones should I enable? A quick search took me to a blog post that answered the exact question. I ran the following statement:

EXEC master.dbo.sp_serveroption @server = N'MySQL', @optname = N'rpc out', @optvalue = N'true';
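
To double-check the change, you can query sys.servers; is_rpc_out_enabled should now be 1 for the linked server:

SELECT name, is_rpc_out_enabled
FROM sys.servers
WHERE name = N'MySQL';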

I re-ran my EXEC statement (no restart required) and it returned the results I was looking for: (2 rows affected).

Going into this problem I was a little hesitant to get started; procrastination set in because of the fear of the unknown. I've run into this before, and the best way to get over it is to just start with a small task, like a quick Google search. Most times, the momentum of small tasks helps me get over the fear and get excited about the task at hand.

The other part of this exercise was the various error messages along the way. You can't just give up and think it can't be solved; someone has usually been through the same issue before. Thanks to search engines, product documentation, and community bloggers, solutions are out there waiting for you to find them.

Doug Purnell
@SQLNikon
sqlnikon@gmail.com
https://sqlnikon.com

Spectre/Meltdown Baselining

This month's #tsql2sday is being hosted by Arun Sirpal (b|t). The topic this month is how we conquered a technical challenge. The area I've been spending the most time in recently is baselining for the upcoming Spectre/Meltdown patches. We have a monitoring tool from Idera, SQL Diagnostic Manager, on our key production boxes, but I wanted something consistent across all boxes that we could use for tracking and baselining wait types.

The Spectre/Meltdown patches can have a significant performance impact on Windows Server machines running IO-intensive applications, and SQL Server falls right into that category. We need a way to identify CPU changes across all our instances (dev/test/prod) while we apply the patches.

There are plenty of queries to help identify wait types, but I was looking for something that could be put into place quickly, as patches from VMware and Microsoft are being released this week. I found the perfect one: sp_BlitzFirst from Brent Ozar Unlimited.

Their solution gives you the wait stats breakdown and reporting to help you identify baseline changes. When you have limited time to put something in place, these kinds of community solutions are a great place to start. Having baseline wait stats has been on my list for years; there was always another project pushed to the front of the list, but now Spectre/Meltdown is at the top!
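
For a quick interactive look, you can ask for the waits accumulated since the instance last started; this is just a sketch, and sp_BlitzFirst supports many more parameters than shown here:

-- Wait stats since the last SQL Server restart
EXEC dbo.sp_BlitzFirst @SinceStartup = 1;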

I used the following blog post to get me started. We have all of our instances registered and organized under a Central Management Server, a feature that lets us run a query across multiple instances at once. I pulled down the latest First Responder Kit, which includes the script to schedule and a Power BI report to analyze the results.
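
The scheduled collection boils down to something like the following, where the output database and table names are placeholders I picked (the First Responder Kit documentation covers the real setup):

-- Run from an Agent job (e.g., every 15 minutes), logging to tables for the Power BI report
EXEC dbo.sp_BlitzFirst
    @OutputDatabaseName = N'DBABaseline',
    @OutputSchemaName = N'dbo',
    @OutputTableName = N'BlitzFirst',
    @OutputTableNameWaitStats = N'BlitzFirst_WaitStats';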

The next phase was identifying a process to baseline. We don't have any automated tools to regression test our apps (something else to work on), so I needed something to stress our SQL Server instances and record the wait types throughout the patching process. Once again, the fine folks at Brent Ozar Unlimited had a blog post for this problem as well. Using SQLQueryStress, we can simulate a load on our SQL Server instances and record the wait types. After each patch, we can execute the same load and compare the results using Power BI.
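
Any repeatable, resource-hungry query will do as the workload you paste into SQLQueryStress. Here's a deliberately wasteful example I might use; it burns CPU and tempdb by sorting a cross join on NEWID(), and it runs on any database:

-- Throwaway load generator: the cross join produces lots of rows, ORDER BY NEWID() forces a big sort
SELECT TOP (50000) a.object_id, a.name
FROM sys.objects AS a
CROSS JOIN sys.objects AS b
ORDER BY NEWID();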

When faced with a technical challenge like Spectre/Meltdown, having a process in place to baseline your SQL Server Instances is important to show how changes to your environment can have lasting effects on performance.

T-SQL Tuesday #51 – Don’t bet against CHECKSUM when performing backups

This month's T-SQL Tuesday is being hosted by Jason Brimhall (blog | Twitter). Jason's theme is gambling with SQL Server. What areas have you gambled with? Are there tasks you should be doing that you are betting won't be needed? One came to mind for me: CHECKSUM.

I've always been a fan of Maintenance Plans. They are an easy way to get basic maintenance tasks established on an instance of SQL Server. The major downside of the built-in maintenance plans is the lack of support for the CHECKSUM media option of BACKUP DATABASE. If you perform backups without this option enabled, you run the risk of the pages in the backup media not matching the pages stored in the database, because without CHECKSUM, SQL Server doesn't know whether each page was written out to the backup media correctly. The CHECKSUM option ensures that every page written to your backup media has a checksum matching the checksum of the page in the database, and it goes as far as computing a checksum over the entire backup, which the RESTORE statement can take advantage of.

There is a performance impact to using the CHECKSUM option, but as everyone says, your backup is only as good as your restore. If you bet against your restore, your odds of a resume-updating event just increased.
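For reference, here's what the option looks like on a plain BACKUP statement; the database name and path are placeholders:

-- Page checksums are verified as the backup is written
BACKUP DATABASE [MyDatabase]
TO DISK = N'X:\Backups\MyDatabase.bak'
WITH CHECKSUM, STATS = 10;

-- RESTORE can re-verify both the page checksums and the backup checksum
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\MyDatabase.bak'
WITH CHECKSUM;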

Now that you understand the benefits of CHECKSUM for your backups, how do you integrate it into your backup process? It's time to graduate from the built-in Maintenance Plans and move to Ola Hallengren's SQL Server Maintenance Solution. It is by far the best set of maintenance scripts available within the SQL Server community. Do yourself a favor: pull the scripts down to a test instance and give them a try. I made the move a few months ago and have never looked back.
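
If memory serves, turning CHECKSUM on in Ola's solution is a single parameter on DatabaseBackup; this is a sketch, so check the documentation for your version:

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'X:\Backups',
    @BackupType = 'FULL',
    @CheckSum = 'Y',  -- the whole point of this post
    @Verify = 'Y';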

Don’t bet against CHECKSUM when performing your backups!

T-SQL Tuesday #50 – Automating Tasks

This month's T-SQL Tuesday topic is the reason I love being a DBA: automating repetitive tasks. Hemanth Damecharla (b|t) picked a timely topic. As you start the new year, are there certain tasks you need to automate that tend to sit on the corner of your desk yelling at you?

This topic also reminds me of a chart I came across describing the time savings of automating repetitive tasks. As at most places, we are all asked to do more with fewer resources, which often leads to additional workloads without the time in the day to complete them. How many of these can you automate?

Automate some tasks!

As I approach each task, or even large projects, I like to start with a manual approach. This gives me time to understand the business problem I'm solving and plan my attack for automation. Some tasks don't need to be automated; they may be solved with a simple T-SQL statement. As soon as I run across tasks that involve multiple data sources, SSIS is my go-to tool. SSIS allows me to build modular packages that can be reused and organized for automation.

Back in 2012, when PASS was in Seattle, WA, I attended a pre-con on SSIS Design Patterns hosted by Andy Leonard, Tim Mitchell, Matt Masson, Michelle Ufford, and Jessica Moss. This really opened my mind to what's possible with SSIS. I recommend the book that sparked the pre-con if you're interested in learning more.

As you start your automation effort, realize that the upfront time spent may seem counterproductive, but the time invested now will pay off down the road. Instead of having to manually run scripts (and having to be reminded to do so), your automated process will be running in the background, making you look like a rock star!

Happy Automating

Doug Purnell

Virtual Book Club: SQL Server Execution Plans, Chapter 5

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server. I've been on a training kick lately, and the timing of this book club was perfect. Denis Gobo (b|t) has experience starting a book club where he works and decided to bring the idea to the SQL Server community. The goal of the club is to read a chapter each week and discuss things you learned, liked, and disliked. You can read Denis's original post here. The first book chosen was written by Grant Fritchey (b|t) and is titled "SQL Server Execution Plans, Second Edition". You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing with my review of Chapter 4, here are my thoughts on Chapter 5. The optimizer is very good at what it does, but no one is perfect. There may be rare cases where the optimizer needs a hint to force a certain behavior. Although this may bring immediate gains in query performance, over time the hinted plan may cease to perform better than what the optimizer could compile on the fly. The reason is data change: as the data changes, statistics are updated and the selectivity of the data shifts, which causes indexes to become more or less useful. This is why hints are dangerous and should be used sparingly.

Things I learned

Besides the occasional evil WITH (NOLOCK), I don't use hints, partly because I didn't understand query plans that well before reading this book and partly because I had lots of respect for the optimizer. Now that I understand how execution plans are generated, I can see a useful role for hints: testing. Being able to change the behavior of the optimizer with a query, join, or table hint is a powerful learning tool. You can start from the optimizer's auto-generated plan and poke it with hints to see how the I/O and query time change.
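
For example, here's the kind of experiment I have in mind, using AdventureWorks-style table names purely for illustration: run the same join with and without a hint and compare the statistics output.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Let the optimizer pick the join strategy...
SELECT p.Name, th.Quantity
FROM Production.Product AS p
JOIN Production.TransactionHistory AS th
    ON th.ProductID = p.ProductID;

-- ...then force one and compare the I/O and elapsed time
SELECT p.Name, th.Quantity
FROM Production.Product AS p
JOIN Production.TransactionHistory AS th
    ON th.ProductID = p.ProductID
OPTION (HASH JOIN);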

If I were to start using hints, I would definitely see a need for a tracking document or metadata query to trigger reevaluation of the execution plans. If the tables involved are static in nature, you may not need to rework the query plan hints more than once a year. On the flip side, a monthly review may be necessary if the query pulls data from volatile tables.

Likes

Even though controlling execution plans with hints is frowned upon, Grant did a great job walking through examples of each hint and how you can use them to better understand the optimizer. Grant made it very clear throughout the chapter to use hints at your own risk, but if you find a SELECT operator that timed out looking for the best plan, you may have an opportunity to use hints.

Dislikes

As with most chapters thus far, no complaints on the material presented.

On to chapter 6!

Virtual Book Club: SQL Server Execution Plans, Chapter 4

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server. I've been on a training kick lately, and the timing of this book club was perfect. Denis Gobo (b|t) has experience starting a book club where he works and decided to bring the idea to the SQL Server community. The goal of the club is to read a chapter each week and discuss things you learned, liked, and disliked. You can read Denis's original post here. The first book chosen was written by Grant Fritchey (b|t) and is titled "SQL Server Execution Plans, Second Edition". You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing with my review of Chapter 3, here are my thoughts on Chapter 4. This chapter was a deeper dive into some of the more complex plans that can be generated from T-SQL constructs like APPLY, CTEs, MERGE, and views, as well as how indexes are chosen.

Things I learned

Writing queries is an art form: you need to understand all the T-SQL statements and how they affect the query plan. Don't get stuck always using one technique over another. For example, adding a WHERE clause to a sub-select may outperform an APPLY statement (depending on the data set). The point is that by understanding how each T-SQL statement interacts with your data, you can try different approaches to solving your problem and reach the result in the most efficient way.

Using ROLLBACK can be a useful approach when you need to view the actual execution plan for UPDATE, INSERT, DELETE, or MERGE statements. It lets the actual query plan be generated while keeping the DML uncommitted, so you can repeatedly tweak the query to generate the best quality plan.
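
A minimal sketch of the pattern, with an illustrative table name (turn on "Include Actual Execution Plan" first):

BEGIN TRANSACTION;

-- The actual plan is captured when the DML executes...
UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE OrderID = 42;

-- ...but the rollback leaves the data untouched, so the test is repeatable
ROLLBACK TRANSACTION;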

Likes

Grant did a great job of teaching how to dissect large query plans. Just like eating an elephant, you need to approach large query plans one operator at a time. I also thought the way Grant explained views and indexes was very thorough. I now have a better understanding of when a view might be expanded to the table level or when an index might be passed over because of bad statistics.

The other thing I noticed is how Grant sharpened our understanding of T-SQL and database fundamentals along the way. Being able to learn how the query optimizer generates plans while also learning better uses of sub-queries, APPLY, CTEs, MERGE, views, and indexes is a great thing.

Dislikes

As with most chapters thus far, no complaints on the material presented. Just a minor typo in Listing 4.7: the parameter passed to the SP should be @BusinessEntityID, not @EmployeeID.

On to chapter 5!

Virtual Book Club: SQL Server Execution Plans, Chapter 3

SQL Server Execution Plans

Everyone loves reading a good book, especially about SQL Server. I've been on a training kick lately, and the timing of this book club was perfect. Denis Gobo (b|t) has experience starting a book club where he works and decided to bring the idea to the SQL Server community. The goal of the club is to read a chapter each week and discuss things you learned, liked, and disliked. You can read Denis's original post here. The first book chosen was written by Grant Fritchey (b|t) and is titled "SQL Server Execution Plans, Second Edition". You can get a free electronic copy or purchase a print copy from simple-talk.com.

Continuing with my review of Chapter 2, here are my thoughts on Chapter 3. This chapter was short and sweet. XML plans were the main focus, and it gave me some great ideas on how to use XQuery to make reading XML-based plans easier.

Things I learned

I've always stayed away from XML; in my many years working in the database world, I have not had the need to interact with XML data. Querying XML-based query plans with XQuery is the first time I've seen a benefit of using XML as a DBA. For larger plans, it is easier to query the XML than to hunt down operators within the graphical view.

You can use XQuery against .sqlplan files, XML data stored in columns, or directly against the plan cache. Use caution when querying the plan cache directly, as this can have performance side effects.
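
Here's a small sketch of the plan cache approach; it hunts for cached plans that reported missing indexes, using the standard showplan namespace (run it sparingly on a busy box):

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
    cp.usecounts,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
-- exist() returns 1 when the plan XML contains a MissingIndexes element
WHERE qp.query_plan.exist('//MissingIndexes') = 1;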

Likes

I liked how Grant told us upfront, in the first part of the chapter, that if you're not dealing with SQL Server instances older than 2005, text-based query plans may not be something you need to understand. Besides text-based plans being deprecated, XML plans offer plenty of advantages.

Dislikes

Nothing to dislike in this chapter. There was just a minor typo in the supplied script for chapter 3: the book references Listing 3.18, but in the script file it is listed as 3.19.

On to chapter 4!
