
2015

Red Gate SQL Source Control v4 offers schema locks

Looks like the rapid release channel now has a great feature for locking database objects that you are working on. Having worked in a shared environment before, this could have been a major help. It's like the poor man's version of checking an object out in Visual Studio, except on database objects! With multiple developers working in a shared environment, this might help reduce conflicting changes to the same object.

Note that this doesn't appear to evaluate dependency chains, so there is always the risk of a dependent object being impacted. I think this has some promise, though, and is a great improvement for shared-environment SQL development that uses source control.

Qure Analyzer Benchmark Testing

{{< admonition type="warning" title="Updated 2018-03-18" >}} Wouldn't recommend the tool at this time. Development seemed to cease, resulting in me not being able to use it with later versions of SQL Server. When I came back recently to check on it, the app had been sold and was with a new company. It didn't seem to have progressed much based on a quick look, and I'm not really interested in it at this point. Instead, other solutions like the new DEA (Database Experimentation Assistant from Microsoft) would be where I'd spend more effort. {{< /admonition >}}

the problem of monster views

I've been working on tuning an application that performs a lot of Entity Framework calls, as well as stored procedure activity. The stored procedures could be processing a large amount of data or a small amount, depending on the configuration of the client. The major issue was that the source of the data for the client application was a view with 20+ joins involved. In breaking down the logic, most of the joins really functioned as just adding additional columns of data. The logical grouping meant that mostly 2-3 tables at a time joined to provide a set of columns based almost exclusively on the source table. I needed to get away from this pattern, as I was finding tremendous issues with cardinality estimation in SQL Server, sometimes resulting in 20GB nested loop join worktables from the optimizer.

Simplify

the incredible, masterful... well... actually quite simple solution... simplify!

My proposed solution was to break up the large view into small pieces, which I identified by logically grouping the joins/related columns. Instead of 20+ joins to return the columns needed, I'd instead provide 10 separate selects of 2 tables or so in each query. These would be processed as a dataset on the client, instead of returning a single large datatable. The next issue was to quantify the improvement, evaluating whether larger batches of base accounts could be pulled at once with the new format, versus the smaller batch size the large view required to function at all.
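As a rough sketch of the pattern (the real tables and columns aren't shown in this post, so every name below is hypothetical):

```sql
-- Instead of one monster view with 20+ joins, return several narrow result sets,
-- each touching only the 2-3 tables that own those columns.
DECLARE @AccountIds TABLE (AccountId int PRIMARY KEY);
INSERT INTO @AccountIds (AccountId) VALUES (1), (2), (3);

-- Result set 1: core account columns
SELECT a.AccountId, a.AccountName, a.CreatedDate
FROM   dbo.Account AS a
JOIN   @AccountIds AS ids ON ids.AccountId = a.AccountId;

-- Result set 2: address columns
SELECT a.AccountId, ad.AddressLine1, ad.City, ad.PostalCode
FROM   dbo.Account AS a
JOIN   dbo.AccountAddress AS ad ON ad.AccountId = a.AccountId
JOIN   @AccountIds AS ids ON ids.AccountId = a.AccountId;

-- ...and so on for the remaining groupings, each landing as its own table in the client DataSet.
```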

first approach was statistics io & Sql stress

Using the SQL Stress tool along with evaluating STATISTICS IO and STATISTICS TIME was my first course of action. However, the problem I ran across was that I really needed to run this dozens of times in a row and evaluate the impact on client performance as well. SQL Stress provided a great way to run the query manually, but with the input variables coming from the app, I really wanted to exercise the app as the end user would and request the batch job on the largest number of rows I could get. That way I was truly matching all the input from the app and getting timings from it. In addition, I was looking for a good way to evaluate the workloads against each other.
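The manual measurement itself is nothing fancy; a minimal sketch with a placeholder query:

```sql
-- Turn on per-query measurements for the session, run the query under test, turn them off.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT a.AccountId, a.AccountName
FROM   dbo.Account AS a;          -- placeholder for the query under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- The Messages tab then reports logical reads per table plus CPU and elapsed time,
-- which is what gets compared run over run.
```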

the right tool for the analysis = Qure Profiler

I'd used ClearTrace before and found it helpful, but hard to consume some comparison-type reports easily. I needed something to help me identify the improvement or degradation in performance, and Qure Analyzer solved this for me, turning something very messy into a much simpler proposition (which also helped with my end research presentation to the big kahunas). Dbsophic Qure Profiler has had some great reviews for assisting in database profiling, but I hadn't had a chance until recently to fully dive into using it. Since I was doing batch testing, I figured now would be a great time to re-review it. The numerous variables at play made this pretty complex to really quantify any improvement based on a single run alone. So, I brought Qure Profiler back up to help me out. Necessity is the mother of experimentation.

Initial Performance Evaluation Run (Batch level analysis)

Evaluated runs on the largest current grouping. This was 1232 base accounts. I would later evaluate with larger batch sizes, but started simple.

Actual Execution Details with Qure Profiler

Tested again with Qure Profiler set to minimum batch info. I also ran DBCC FREEPROCCACHE to better ensure the plan was rebuilt for the new counts, and that each comparison had a fresh start as far as impact on disk IO.
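For reference, a sketch of the reset run between passes; the DROPCLEANBUFFERS line is an optional extra I'm noting as an assumption rather than something stated above, and none of this belongs on a production box:

```sql
DBCC FREEPROCCACHE;        -- clear cached plans so each batch size gets a freshly compiled plan
-- DBCC DROPCLEANBUFFERS;  -- optional (assumption): also empty the buffer pool so reads start
--                            cold when comparing disk IO between runs
GO
```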

Comparing 100 to larger batch sizes in the base account request

This final comparison shows a 42% improvement by using 500 rows at a time. This seemed to be a good compromise at this point: increase the batch size while still maintaining lower logical reads. The next step was to test against a larger database to evaluate scalability.

Actual Execution Results on Larger database

Evaluating against a database with about 500GB of data, I found the best execution time for the base account count (tongue twister) seems to come from the 1000 batch size at this time. It returned the results in the shortest duration with the lowest impact on reads. DBCC FREEPROCCACHE was completed on each step.

Qure Profiler workload comparison

Set the baseline as the 100 batch size (which is the current way of processing the request). Qure provided an extremely powerful side-by-side comparison of the two workloads. The best value is still the 1000 batch size, showing the lowest logical reads at that point.

Comparing to the original methodology of a huge view with 20+ joins

Against two databases, both performed significantly better with the simplified approach. One database was much faster, while another that was completely timing out with 20GB+ nested loop join worktables finally ran without incident. Major win! The changes look to have positively improved performance overall for both databases, which represent two very different usage patterns (and very different data). Comparing the new simplified procs at a batch size of 1000 against the original views at 100 showed:

  - 27% shorter duration per execution
  - 77% less CPU impact
  - 81% less logical read impact

Additional functionality I haven't even touched on

Qure Profiler offers additional functionality that I haven't even gotten a chance to touch on. For example, it can normalize the event counts so that you compare an even 100 executions against another 100, even when the second trace ran longer and captured the event more than 100 times. Check out their page for more details.

TL;DR Summary

Long post. If you read this far, I commend you for either being a great page scroller, or for being tenacious and finding all my graphs super interesting. If that's you, you get #SqlCred. I'm finding that with workload testing, SQL Server workload comparison is one of the harder things to do right. There are a lot of variables to take into consideration, and even when doing a simple test on batch size changes like I did, aggregating the comparison results into usable statistics and a presentable format can be daunting. I highly recommend the Dbsophic product as a great utility to help save a lot of time in doing this comparison. This tool goes into my top SQL Server performance tuning tools for sure. I'm going to be evaluating their more advanced Qure Optimizer soon as well, as it might help identify other issues I've missed in tuning by evaluating adjustments against a test copy of a database.

Are there any cons?

I didn't run into any significant issues that impacted my ability to use it. I do think there were a few stability issues that I had to work around by restarting the app a few times, but for the time it saved, I wasn't complaining too much.

  1. Need to offer the option to use Extended Events. This is on their roadmap apparently. XEvents should help lower the impact of profiling on the workload being tested.
  2. Some app errors related to closing workload comparison tabs. I reported this bug and hopefully they'll continue to improve the application.
  3. Maybe some GUI improvements to make it more in line with modern UI standards? Just a personal preference. I didn't find the tab-based comparison approach the most intuitive. Not terrible, but it would be a nice improvement in the future.

What could they improve?

  1. Offer a comparison report option that wouldn't require as many screenshots, and instead summarize the selected items in a pane that you could copy as an image. This would be slick!

SET NOEXEC is my new friend

I had never really played around with the option SET NOEXEC ON. Turns out this can be a helpful way to validate larger batch scripts before actually making changes, by ensuring compilation happens. If you choose, you can verify syntax by "parsing" in SSMS. However, that doesn't compile, and compilation checks more than your syntax: it actually validates that the objects referenced exist.
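A minimal sketch of the compile-only check (the table name is a placeholder):

```sql
SET NOEXEC ON;           -- compile the batch, but don't execute it

SELECT TOP (10) *
FROM   dbo.SomeTable;    -- compiled, so syntax errors and missing objects surface, but no rows run

SET NOEXEC OFF;          -- remember to turn execution back on for the session
```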

The execution of statements in SQL Server has two phases: compilation and execution. This setting is useful for having SQL Server validate the syntax and object names in Transact-SQL code when executing. It is also useful for debugging statements that would generally be part of a larger batch of statements. (MSDN #188394)

I previously had done this basic verification by running an estimated execution plan, which had the benefit of finding compilation errors as well. However, with large batch jobs it could be problematic and slow, as it had to process and return execution plans for all the statements, which I didn't need. SET NOEXEC can be a nice resource for validating scripts before running them, without the overhead of estimated execution plans. Additionally, you can handle transactional rollbacks and prevent cascading problems when running a batch that changes your database, by setting NOEXEC ON when an error is encountered. Red Gate SQL Compare does this elegantly.
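A hedged sketch of that deployment-script pattern (my simplified version, not SQL Compare's exact output; table names are placeholders):

```sql
BEGIN TRANSACTION;
GO

UPDATE dbo.SomeTable SET SomeColumn = 1 WHERE SomeId = 42;
IF @@ERROR <> 0 SET NOEXEC ON;    -- any failure: skip executing the rest of the script
GO

DELETE FROM dbo.SomeOtherTable WHERE SomeId = 42;
IF @@ERROR <> 0 SET NOEXEC ON;
GO

COMMIT TRANSACTION;               -- only reached when every step above succeeded
GO

SET NOEXEC OFF;                   -- SET statements are still processed, so execution resumes here
IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;         -- a failed step left the transaction open; undo it
GO
```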

Running very large scripts is not a strong area for SSMS

out of memory, argggh!

Am I the only one that has experienced the various out of memory issues with SSMS? Not according to Google!

lovingly crafted in the forges of.. well ... dbforge

I'm a huge fan of Devart's products. I've done a review in the past on their SQL Complete add-in, which is the single most used tool in my SQL arsenal. It vanquishes nasty unformatted code into a standard, lined-up format I can easily read. The hundreds of options to customize the formatting make it the most customizable formatter I've found for SQL code. SQL Complete, however, is a plugin for SSMS. The same functionality is native in their alternative to SQL Server Management Studio, dbForge Studio. Highly recommend checking this out. It's affordable, especially if you compare against other products that offer less.... and they have a dark theme, muaaah!

execute script that is far too large

I'll post up more detail when time permits on some of the other features, but one noticeably cool feature is the "execute large script" option.


You can see the progress and the update in the output log, but the entire script isn't slowing down your GUI. In fact, you can just putter along and keep coding.

Other options to accomplish the same thing include executing via SQLCMD, PowerShell, or breaking things up into smaller files. This just happened to be a pretty convenient option!

Have I switched?

I haven't switched to using it as my primary development environment for two reasons. Extensions: I have quite a few that work in SSMS, like SSMS Tools, SSMS Toolpack, and some Red Gate functionality, and I lose those by switching over to dbForge Studio. Also, some of the keyboard shortcuts I'm so used to, like delete line, aren't in there. Regrettably, they don't support importing a color scheme from Visual Studio, so you lose out on sites like https://studiostyl.es/. Other than a few minor quibbles like that, I'm pretty happy with it. They've done a great job, and the skinning of the app is great, giving you the option of dark or light themes.

Devart apps were provided to me for my evaluation, but that does not bias my recommendation.

Database Stuck in Single-User Mode Due to Botched Restore

Working in a development environment, I botched a restore. After this restore attempt to overwrite my database with the previous version, the database was stuck in single-user mode.

SSMS provided me with helpful messages such as this: Changes to the state or options of database 'PoorDb' cannot be made at this time. The database is in single-user mode, and a user is currently connected to it. Additionally, I was told I was the deadlock victim when attempting to set the user mode back to multi-user.

Going forward, I looked at several articles from Stack Overflow and various other blogs, and followed the recommended steps, such as forcing the database back to multi-user.
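A minimal sketch of that commonly recommended step (the posts' exact scripts aren't reproduced here; the database name matches the error message above):

```sql
USE master;
GO
-- Force the database back to multi-user, rolling back any open transactions holding it.
ALTER DATABASE PoorDb SET MULTI_USER WITH ROLLBACK IMMEDIATE;
GO
```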

I even added a step to kill the connections to it, using a statement helpfully posted by Matthew Haugen.
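That snippet isn't shown here, but a sketch of the general kill-the-connections approach (my reconstruction, not Matthew's exact statement) looks something like this; it assumes SQL Server 2012+, where sys.dm_exec_sessions exposes database_id:

```sql
-- Build and run a KILL command for every other session connected to the stuck database.
DECLARE @kill nvarchar(max) = N'';

SELECT @kill = @kill + N'KILL ' + CAST(session_id AS nvarchar(10)) + N'; '
FROM   sys.dm_exec_sessions
WHERE  database_id = DB_ID(N'PoorDb')
  AND  session_id <> @@SPID;      -- don't kill the session issuing the KILLs

EXEC sys.sp_executesql @kill;
```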

Finally went through and removed all my connections from master based on an additional post. No luck. Stopped my monitoring tools, no luck. At this point, it felt like a Monday for sure.

Since I was working in a development environment, I went all gung ho and killed every session with my login name, as there seemed to be quite a few, except for the spid that was executing. Apparently, the blocking process was executing from master, probably the incomplete restore that didn't successfully roll back. I'll have to improve my transaction handling on this, as I just ran it straight with no error checks.

VICTORY!

What a waste of time, but at least I know what to watch out for next time: ensure my actions are checked for errors and rolled back. I'm going to just blame it on the darn SSMS GUI. Seems like a convenient scapegoat this time.

Successfully pushed out my changes with the following script:

What was I thinking? Deleting myself from localdb?

I was testing a batch file to add a user to a localdb instance. I assumed that my user, as an admin on the machine, wouldn't have an issue inserting myself back.... didn't think that one through too carefully. Executing any type of SQLCMD against it denied me. SSMS denied me. No SA had been set up on it, so I couldn't log in as SA either. I looked for various solutions and ended up uninstalling and reinstalling (localdb)v11.0 so that I'd stop having myself denied permissions.

This, however, didn't fix my issue. The solution that ended up working for me came from dba.stackexchange.

I ended up deleting everything in the v11.0 Instances folder and then issuing the following command: sqllocaldb.exe c v11.0

Resulting in message: LocalDB instance "v11.0" created with version 11.0.3000.0.

Success! This resulted in the instance being created successfully, and then I was able to log in with SSMS. Apparently today was my day for learning some localdb permissions issues. What a blast..... I could have avoided this if I had simply used a test login, or had set up the SA account with a proper password for logging in. #sqlfail

Some simple examples of querying xml with sql

XML is a beast if you've never tackled it. Here are some simple examples of what I discovered as I experimented and walked through obtaining values out of an XML column.
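For example, a minimal, self-contained illustration of the two methods I leaned on most, value() for single values and nodes() for shredding repeating elements (the XML document itself is made up):

```sql
DECLARE @x xml = N'
<order id="1">
  <customer name="Contoso" />
  <lines>
    <line sku="A-100" qty="2" />
    <line sku="B-200" qty="5" />
  </lines>
</order>';

-- value(): pull a single scalar out of the document
SELECT @x.value('(/order/@id)[1]', 'int')                     AS OrderId,
       @x.value('(/order/customer/@name)[1]', 'nvarchar(50)') AS CustomerName;

-- nodes() + value(): shred repeating elements into rows
SELECT l.n.value('@sku', 'varchar(20)') AS Sku,
       l.n.value('@qty', 'int')         AS Qty
FROM   @x.nodes('/order/lines/line') AS l(n);
```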

Enabling Instant File Initialization

Found a couple of good walkthroughs on enabling instant file initialization. However, I'm still becoming familiar with the nuances of various setups, and found it confusing to map the correct user/group to enable this option. In my case, I had the SQL Server service running under NT Service\MSSQLSERVER, and as such this login wasn't showing up when trying to find groups/users to add to the necessary permissions. Lo and behold...

I typed it in manually and it worked. If time permits I'll update the article later with a more technical explanation, but for now this is just a quick functional post to show what resolved the issue: add the service account or group (whatever you have SQL Server running under) to the Perform Volume Maintenance Tasks privilege in the Local Security Policy.
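If you're not sure which account the service is actually running under, one way to double-check (assuming SQL Server 2008 R2 SP1 or later, where this DMV exists):

```sql
-- Shows each SQL Server service and the account it runs under; for a default instance on a
-- virtual account this comes back as NT Service\MSSQLSERVER, which is what gets typed into
-- the Local Security Policy entry.
SELECT servicename, service_account
FROM   sys.dm_server_services;
```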

(Screenshots: Instant File Initialization steps 1-4)

Upgrade from SQL 2014 Evaluation to Developer Edition

I couldn't find documentation showing that an upgrade from the SQL 2014 Evaluation edition to Developer edition was possible. I just successfully converted an Evaluation instance to Developer edition.

  1. Obtain the key for the developer edition (in my case I had to download the .ISO from MSDN downloads, and go through the installation wizard to get it)
  2. Run the installation center app for SQL Server 2014
  3. Select edition upgrade
  4. Plug in your new serial from the Developer edition.

Pretty simple, but I thought I'd post this for anyone wanting confirmation that the upgrade path to Developer was an option. It's probably somewhere in the MSDN documentation, but I couldn't find it with a quick search.

Utilizing the power of table parameters to reduce IO, improve performance, decrease pollution, and achieve world peace...

I was dealing with a challenging dynamic SQL procedure that allowed a .NET app to pass in a list of columns and a view name, and it would generate a select statement from this view. Due to requirements at the time, I needed the flexibility of the "MAIN" proc, which generated a dynamic select statement, while overriding certain requested views by executing a stored proc instead of the dynamic SQL.

During this, I started looking into the string parsing being done on a comma-delimited list of numbers to look up (the primary key). I figured I'd explore the benefits of a user-defined table type and pass the list of ids from the .NET application as a table parameter instead of a comma-delimited list.
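As a rough sketch of the setup (all object names here are hypothetical), the table type and proc signature look like this, with the .NET side sending the list as a SqlDbType.Structured parameter, typically from a DataTable:

```sql
-- User-defined table type for the list of ids
CREATE TYPE dbo.IdList AS TABLE
(
    Id int NOT NULL PRIMARY KEY
);
GO

-- Proc accepting the list as a table-valued parameter (TVPs must be declared READONLY)
CREATE PROCEDURE dbo.MainProc
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    SELECT a.AccountId, a.AccountName
    FROM   dbo.Account AS a
    JOIN   @Ids AS i ON i.Id = a.AccountId;
END;
GO
```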

Some great material I came across indicated the overhead might be a little higher client side, but that the benefits to cardinality estimation and giving SQL Server a table to work with can far outweigh the initial startup cost when dealing with lots of results to join against. The main area I wanted to address first, which I couldn't find any clear documentation on, was the memory footprint. I saw mentions in various sources that a TVP can have a lower memory footprint in SQL Server's execution, because as intermediate storage it can be referenced by pointer rather than copied each time, unlike parsing a comma-delimited list into another variable.

I get that passing the stored proc a table variable means it's working with the provided object, but what about the portability of this object? In my case, there are at least two levels involved: the MAIN proc and the CHILD proc. The child proc needs access to the same list of ids. The dynamic statement in the MAIN proc also needs the list of ids. Currently the code built the list of ids by parsing the delimited list of values into a table variable.

Could I instead pass the actual table parameter around, since it's a readonly object, and keep referring to it instead of having separate copies created each time? This could reduce the IO requirements and tempdb activity by having a single TVP used by both the MAIN and CHILD procs. The summarized IO (screenshots: TVP Testing 1-5) showed the footprint is reduced when dealing with IO from the child statement, because it keeps pointing to the same in-memory object. I also validated this further by examining a more complex version of the same query that compares the comma-delimited list against executing a nested stored procedure, which in turn has dynamic SQL that needs the table parameter passed to it. The results of that review (TVP Test 6) show it successfully keeps pointing to the same temp object!
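A sketch of what the pass-it-around version looks like (again, hypothetical names, and simplified compared to the real MAIN/CHILD procs):

```sql
-- Child proc accepts the same table type
CREATE PROCEDURE dbo.ChildProc
    @Ids dbo.IdList READONLY
AS
    SELECT d.AccountId, d.Detail
    FROM   dbo.AccountDetail AS d
    JOIN   @Ids AS i ON i.Id = d.AccountId;
GO

CREATE PROCEDURE dbo.MainProcTvp
    @Ids dbo.IdList READONLY
AS
BEGIN
    -- forward the same table parameter to the child proc, no re-parsing and no new copy
    EXEC dbo.ChildProc @Ids = @Ids;

    -- and hand it to dynamic SQL by declaring it as a parameter of the dynamic batch
    EXEC sys.sp_executesql
        N'SELECT a.AccountId FROM dbo.Account AS a JOIN @Ids AS i ON i.Id = a.AccountId;',
        N'@Ids dbo.IdList READONLY',
        @Ids = @Ids;
END;
GO
```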

In summary, the table-valued parameter can end up being pretty powerful when passing a list of values that may need to be referenced by several actions or passed to nested procs (not that nesting is a best practice anyway). Disclaimer: this is working within the constraints of what I have to release soon, so I'm not saying that nested procs with dynamic SQL in both MAIN and CHILD are a great practice, but sometimes you gotta do what you gotta do!