
Redgate SQL Data Compare & Devart dbForge Data Compare

I'm a big fan of Redgate, as I'm in the Friend of Redgate program. However, I do dabble with some other toolkits (I know, heresy :-) ). One competitor that I find has some brilliant features, but many times lacks the refinement and ease of use of Redgate, is Devart. The tools they offer are often really nice, and continually updated based on feedback. As a general rule, I'd say the Devart tools feel less "refined" in some areas, but then offer some really nice usability features that RG hasn't yet implemented. Both have their place in my toolbelt depending on the need.

Having just completed some very large data comparisons on views, generating over 15GB of network traffic in the last few days, I've been really impressed with the usability and output from Devart dbForge Data Compare. The performance seems great.

I've evaluated their schema compare tool before and found it fantastic for the price if I were strapped for budget, but when able to pay for an overall more flexible and refined product, I'd definitely choose SQL Compare. The differences are much smaller on the data compare tools, though, due to the much less complex nature of what they're performing. I ran across a few features that I thought would be great to mention for the team working on Data Compare, to provide some enhanced functionality.

Diff report: They provide a fantastic option of outputting a diff report not only in CSV but in XLS format. The formatted report is much more usable than the CSV I get from RG Data Compare, because they apply formatting and bold to the _S and _T cells that actually have a difference, enabling a much easier review process to find the diffs. This is far more usable for an end audience that might want to view differences detected in a table's data. I've had occasion to provide this report to analysts to look at differences. The typical use case of DBAs syncing data from one database to another probably would just use the tool and never need this, but my particular use case has found a better report output to be a major benefit.

Cached schema object definitions/mappings: They load up previous mappings so you can go and tweak without needing to refresh immediately. This is nice when you are fine-tuning the comparison results and keep needing to make adjustments.

Other suggestions, based on my recent work with large table comparisons: since table size has a direct impact on the compare due to local caching of the data, consider providing a column that shows the estimated and total space required for the comparison. That way, if I compared a lot of small tables I'd see the rowcount/size (sp_spaceused), and if I then added a large table (3GB, for example), I'd see the approximate local storage and network transfer impact as a total, such as "7GB total storage/transfer required".

If I set up a comparison on a view with a custom key (due to no index on the view), and I drop and recreate the view with a new definition, the comparison options are not persisted (for example, the custom key). I'm not sure if this is due to the underlying change in the object_id and the lack of a clustered index for explicit mapping, but persisting this when the columns used for the key comparison still exist would be really nice.

Overall, as a Friend of Redgate I'm still partial to the Redgate toolset, but credit where it's due: dbForge Data Compare handled this workload impressively.

SSMS 2016 - Object Explorer Read Uncommitted

I ran through some directions from others, including the very helpful post from SqlVariant, but I had issues locating the correct keys. On my Windows 10 machine running SSMS 2016, I found the registry keys related to Object Explorer in a different path.

I found matches for the read committed/uncommitted string at: HKCU\SOFTWARE\Microsoft\VisualStudio\14.0\SSDT\SQLEditorUserSettings

Running the following PowerShell command confirms the value:

Get-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\VisualStudio\14.0\SSDT\SQLEditorUserSettings' | Select-Object SetTransactionIsolationLevel | Format-List
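
If you want to flip the setting from the command line as well, here's a minimal hedged sketch. It assumes SetTransactionIsolationLevel is a plain string value containing the "read committed" text (which is what the match above suggests), and that SSMS is closed so it doesn't overwrite the change on exit:

# Hedged sketch: swap 'read committed' for 'read uncommitted' in the stored setting.
# Assumes the value is a simple string; export/back up the key before experimenting.
$path = 'HKCU:\SOFTWARE\Microsoft\VisualStudio\14.0\SSDT\SQLEditorUserSettings'
$current = (Get-ItemProperty -Path $path).SetTransactionIsolationLevel
$updated = $current -replace 'read committed', 'read uncommitted'
Set-ItemProperty -Path $path -Name SetTransactionIsolationLevel -Value $updated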

Track Creation of Databases

sys.databases has some creation information, but I was looking for a way to track aging, last access, and whether databases got dropped. In a development environment, I was hoping this might help me gauge which development databases were actually being used.
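
As a stopgap, a hedged sketch of the snapshot approach I was considering: pull name/create_date from sys.databases on a schedule and diff the snapshots over time. This assumes Invoke-Sqlcmd from the SqlServer (or older SQLPS) module, and 'MyServer' is a hypothetical instance name:

# Hedged sketch: snapshot sys.databases on a schedule so creates and drops can be diffed later.
Import-Module SqlServer
$stamp = Get-Date -Format 'yyyy-MM-dd'
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'master' `
    -Query 'select name, create_date from sys.databases' |
    Select-Object name, create_date |
    Export-Csv -Path "C:\temp\databases-$stamp.csv" -NoTypeInformation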

/*******************************************************
    run check on each constraint to evaluate if errors
*******************************************************/
if object_id('tempdb..##CheckMe') is not null
    drop table ##CheckMe;

select
    temp_k =    identity(int, 1, 1)
    ,X.*
into ##CheckMe
from
    (select

            type_of_check =                                            'FK'
            ,'[' + s.name + '].[' + o.name + '].[' + i.name + ']'    as keyname
            ,CheckMe =                                                'alter table ' + quotename(s.name) + '.' + quotename(o.name) + ' with check check constraint ' + quotename(i.name)
            ,IsError =                                                convert(bit, null)
            ,ErrorMessage =                                            convert(varchar(max), null)
        from
            sys.foreign_keys i
            inner join sys.objects o
                on i.parent_object_id = o.object_id
            inner join sys.schemas s
                on o.schema_id = s.schema_id
        where
            i.is_not_trusted = 1
            and i.is_not_for_replication = 0
        union all
        select
            type_of_check =                                            'CHECK'
            ,'[' + s.name + '].[' + o.name + '].[' + i.name + ']'    as keyname
            ,CheckMe =                                                'alter table ' + quotename(s.name) + '.' + quotename(o.name) + ' with check check constraint ' + quotename(i.name)
            ,IsError =                                                convert(bit, null)
            ,ErrorMessage =                                            convert(varchar(max), null)
        from
            sys.check_constraints i
            inner join sys.objects o
                on i.parent_object_id = o.object_id
            inner join sys.schemas s
                on o.schema_id = s.schema_id
        where
            i.is_not_trusted = 1
            and i.is_not_for_replication = 0
            and i.is_disabled = 0) as X
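
The script above stages the WITH CHECK commands into ##CheckMe but doesn't execute them. A hedged sketch of a driver for that step, shown here for just the foreign key half for brevity, assuming Invoke-Sqlcmd from the SqlServer module ('MyServer' and 'MyDb' are hypothetical placeholders; the command list is rebuilt inline since a global temp table won't survive across separate connections):

# Hedged sketch: run each 'with check check constraint' command and capture any failures.
Import-Module SqlServer
$buildQuery = @"
select CheckMe = 'alter table ' + quotename(s.name) + '.' + quotename(o.name)
    + ' with check check constraint ' + quotename(i.name)
from sys.foreign_keys i
join sys.objects o on i.parent_object_id = o.object_id
join sys.schemas s on o.schema_id = s.schema_id
where i.is_not_trusted = 1 and i.is_not_for_replication = 0
"@
$commands = Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query $buildQuery
foreach ($row in $commands) {
    try {
        # -QueryTimeout 0 = no timeout; revalidating large tables can take a while
        Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query $row.CheckMe `
            -QueryTimeout 0 -ErrorAction Stop
        Write-Output "OK:     $($row.CheckMe)"
    }
    catch {
        Write-Warning "FAILED: $($row.CheckMe) -- $($_.Exception.Message)"
    }
}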

Bad Idea Jeans: Query Optimization Through Minification

SQL is pretty verbose compared to some languages. It's a pretty big disappointment that I have to type out select customer from dbo.customers where id = 2 instead of a much simpler syntax like dbo.Customers | Customer that PowerShell might offer. As I considered the disappointing verbosity of SQL Server, it occurred to me that one way to reduce network traffic, save electricity, and aid the garrulous language known as SQL might be to require all code running to be minified.

Think about the potential savings in bandwidth and scrolling. Anyone who complains should just realize there is a thing called word-wrap, which will solve all the readability problems. No more need for Redgate SQL Prompt's beautiful yet wasteful formatting options (sorry RG). In fact, no more debates on the readability of formatting standards at all!

Beautiful Minified SQL Code

In a file size comparison on this small query, I found a full 1KB savings.

File                     Size
WITH-LINE-BREAKS.sql     9 KB
NO-LINE-BREAKS.sql       8 KB

Extrapolating this over a larger query, I found a 20% reduction.

File                        Size
LONG-NO-LINE-BREAKS.sql     129 KB
LONG-WITH-LINE-BREAKS.sql   160 KB
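
For the morbidly curious, a hedged sketch of what such a "minifier" could look like (the file paths mirror the comparison above; it's naive on purpose and will happily mangle string literals and -- comments, which only strengthens the bad idea):

# Bad Idea Jeans minifier: collapse every run of whitespace in a .sql file to a single space.
# Deliberately naive; string literals, quoted identifiers, and -- comments will be corrupted.
$sql = Get-Content -Path 'C:\temp\WITH-LINE-BREAKS.sql' -Raw
$minified = $sql -replace '\s+', ' '
Set-Content -Path 'C:\temp\NO-LINE-BREAKS.sql' -Value $minified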

With a heavy-traffic OLTP system, this might reduce traffic IO tremendously and reduce server energy costs. I did consider trying to calculate the wattage savings this might entail, but I plead laziness. I also considered running Wireshark to analyze packets and validate what magic the compression and other parser functions perform on the sent query, but decided I had better uses for my time.

Maybe the next brilliant idea would be to name all your tables consecutive numbers like dbo.1, dbo.2, dbo.3 and so on. Since these names are stored in an nvarchar format in system tables, it might optimize the storage and performance of the system tables... So many good ideas, so little time...

This post was inspired by the wonderful contributions to bad ideas that Brent Ozar and his team of eccentric consultants donated to the community to cause mayhem for those stupid enough to try them out. Thanks Brent :-)

ManicTime - Timetracking Automation Done Right

{{< admonition type="info" title="Updated: 2020-04-29" >}} broken image links removed {{< /admonition >}}

Tracking time is always a beast. With the amount of context switching many developers do, it can be tough to remember how much time went to each project. With companies looking to track effort on sprints, hours on a client project, or (as in my case) just a dev wanting to better evaluate the productive use of time, this app fills some gaps that others don't.

For instance, I've tried tools such as Toggl and similar. I found them useful, but they require a lot of diligence to work with. There is very little "automatic" categorization of time, and many of those tools are focused on a timer-based approach that requires you to start and stop the timer. ManicTime approaches this differently. It has the typical stopwatch, countdown, and Pomodoro-type functionality a time tracking tool might offer, but in addition it provides a captured timeline of activity with various forms of metadata to easily review and parse for categorization.

This categorization of information can take the form of general details such as:

  1. Development
  2. Personal
  3. Browsing

or be specific based on user input, such as:

  1. Development, Reticulating Splines
  2. Personal, Contemplating Navel
  3. Project Manhattan, Task 666: Optimizing Nuclear Db Performance

Manually entered information is a big driver for better tagging accuracy, but ManicTime expands this by allowing dynamic tags based on matching applications, titles of documents, titles of web pages, calendar entries, and more. This offers a vast range of metadata captured on your system to generate a more accurate allocation of your time. The only real negative is that it is not as simple as something like Toggl. However, with a little work, you can work an entire day and quickly recap at the end, categorizing your effort with very little fuss and a high degree of accuracy.

If you find yourself forgetting to hit the stop/start button on your time tracker and want a better option for tracking your effort, then look no further.

I've used this application over the last year and found a lot of value in it, so I figured I'd share a little on it, as it's become one of the tools in my essentials pack. Disclaimer: I was provided with a license a while ago. This doesn't impact my review, as I just like finding great tools to help devs have a better workflow. I only end up reviewing the ones that I've really found useful.

Overview

Time Categories


Overview of Day's Activities

The list of all activities is organized into several timelines, whether it be applications, extracted document titles, calendar entries, or even calendar feeds. This allows a variety of ways to go back through and easily organize and track time. One recent improvement that I completely love is the integration of screenshots into the application timeline. This lets you keep a running screenshot log of activity, so you can easily go back through existing applications and remember exactly what was being done at the time. A very useful implementation!

Tracking Exclusion

Note that you can choose to go off the record, as well as track only certain times of day. This is a good option for those who have a work laptop that they might leave running and only want to report on certain periods of active work time.

Autotagging

Autotagging is where this tool gets powerful. The concept is to automatically tag time based on application, window title, URL, or another parsed value. This means you can categorize your effort with minimal manual work.

Regex Parsing

I've yet to figure out the dynamic tags based on regex parsing, as the app doesn't give you a preview to test and refine the results. Once I figure this out, or the app improves this ability, I think the additional timelines will be very handy: you could have one timeline focused on dynamic parsing and grouping of projects based on document/Chrome titles that doesn't interfere with the categorization another timeline might use. This is a usability issue that I hope to see improved in the future; it has a lot of potential.


Multiple Autotag Timelines

This is something I've recently been exploring, as it provides the capability to create automatic tagging of apps, but for different purposes. For instance, you might set up one rule for parsing project numbers that sends them to an AutoProject timeline aggregating the totals, and another timeline for categorization of the apps/websites. Another use might be a timeline focused on categorizing web usage, while another focuses on app usage.


Tagging

Away Time Tagging

You can have ManicTime prompt you when you return to your computer, or when it has detected a period of idle time on your system. This helps ensure that if you are away at a meeting, or otherwise away from your PC, you are still tracking how that time was used.


Narrow down untagged time quickly

There are a variety of ways to filter the timeline down to untagged activities, either as the current selection or as the only items shown. This can help identify gaps in what you've reviewed.


Statistics & Reports

Generate Timesheet Report


Some Nice Visual Statistics Available

Other Statistics Available

These are listed based on the selected groups, tags and more.


Manic Time Server

ManicTime offers server functionality that allows the tool to generate reports on staff members and their usage. This functionality is not for the non-technical user. I found it a little challenging to get things set up, so the current iteration isn't designed as a simple "central" solution for all devices. With a better setup/configuration experience (no domain user logins, etc.) and perhaps more of a Google Drive/Dropbox-type sync, I think the solution would be fantastic for tracking time across various devices. Due to the setup issues I had with the server, I wasn't able to include tracking from the new ManicTime Android client. That said, home-growing your own tracking solution with Tasker and feeding it in as a custom timeline might not be a difficult project, thanks to the documented format for consuming external timeline information. I haven't gone to that effort, but it's an intriguing concept.

Daily Retrospective

The big win is being able to retroactively tag and categorize effort at the end of the day, without having to constantly toggle a stopwatch. You can use the built-in stopwatch/Pomodoro/countdown timers, but if you get pulled onto multiple tangents, this tool makes it easy to go back and categorize throughout the day... IF your primary work is done on your computer. Since I'm approaching this from a developer tool point of view, it's a perfect fit!

Last Thoughts

Fantastic app with a unique approach. The cost is a little high, but it's an independent app, and supporting the development of a really specialized tool can be a good thing. I'm not sure they'd be able to continue development if it were a lifetime purchase (those seem to have gone away over time). As an office tool for better tracking and reporting on time (for instance, when working with clients), I believe it might just pay for itself. I'd like to see smoother integration with the server components to become a better cloud tracking mechanism, allowing Android, PC, and Mac to all feed a solid reporting mechanism for families on the go. The app seems more focused on enterprise/business tracking, though, so this might not be implemented. I'll continue using it and finding great value in tracking my time with the least amount of work. For those looking for a solution, give it a shot. They have a lite version available with fewer features, so you can take a swing at it.

Dynamically Set Powershell Variables from json

I created this small snippet to allow a list of values from a json file to be turned into variables to work with. For working with a fixed list of configuration values, this might help reduce some coding effort.
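
A minimal sketch of the approach (assuming a flat json object of name/value pairs; the config.json path and property names are hypothetical):

# Hedged sketch: turn each top-level property in a flat json file into a PowerShell variable.
# Example config.json: { "ServerInstance": "MyServer", "Database": "MyDb" }
$config = Get-Content -Path 'C:\temp\config.json' -Raw | ConvertFrom-Json
foreach ($property in $config.PSObject.Properties) {
    # -Force overwrites any existing variable of the same name
    New-Variable -Name $property.Name -Value $property.Value -Force
}
# $ServerInstance and $Database are now regular variables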

The Traditional Birthday Song Is Terrible

The traditional birthday song is terrible. It's never really changed. It's like singing a dirge. It's really, really hard for people to sing anywhere close to on key. We all sing it because we have to, but there is this feeling of regret, like "I'll do it for you, but just because I love you". It is followed by "Many mourns" from the closest available family clown. Apparently the roots go back to the 19th century, and Wikipedia says:

In 1988, Warner/Chappell Music purchased the company owning the copyright for US$25 million, with the value of "Happy Birthday" estimated at US$5 million... In February 2016, Warner/Chappell settled for US$14 million, paving the way for the song to become public domain. (Wikipedia)

Let's leave this tainted legacy behind. I propose a radical change. Ditch it for something fun. Make a new family tradition.

This is much more in keeping with Rend Collective's reminder to always be practicing the Art of Celebration :-) It's super easy to sing the first time, promotes foot stomping and hand clapping, promotes dancing and jumping, and overall conveys the feeling I think a birthday should have. New lyrics:

**The Art of Celebration Birthday Song**
- created after much deliberation and refinement in 1 min at the house of Sheldon Hull
Hey hey hey, It's your birthday day
Hey hey hey, It's your birthday day
Hey hey hey, It's your birthday day
Sing Loud, Sing Proud, It's Your Birthday
- Creative Commons Zero CC0, so do whatever you want with this bit of musical genius

Scan folder of dlls to identify x86 or x64 compiled assemblies

Point this at a directory of dlls and you can get some of the loaded assembly details to quickly identify what type of processor architecture they were compiled for. I did this because I wanted to explore a large directory of dlls and see if I had mixed x86 and x64 assemblies together from a Visual Studio build. Some dlls with invalid assembly header information were found, and this skips those as warnings.
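
A minimal sketch of the approach (the folder path is hypothetical; GetAssemblyName reads the manifest without fully loading the assembly, and dlls with invalid headers, such as native dlls, fall into the catch):

# Hedged sketch: report the compiled ProcessorArchitecture (MSIL, X86, Amd64) for each dll.
# Dlls with invalid assembly header information are skipped with a warning.
Get-ChildItem -Path 'C:\temp\bin' -Filter '*.dll' | ForEach-Object {
    $file = $_
    try {
        $name = [System.Reflection.AssemblyName]::GetAssemblyName($file.FullName)
        [pscustomobject]@{
            Dll          = $file.Name
            Architecture = $name.ProcessorArchitecture
        }
    }
    catch {
        Write-Warning "Invalid assembly header, skipping: $($file.Name)"
    }
}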

Attaching Database Using SMO & Powershell

Steve Jones wrote a great article on this automation titled The Demo Setup - Attaching Databases with Powershell. I threw together a completed script and modified it for my functionality here. MSDN documentation on the functionality is located at Server.AttachDatabase Method (String, StringCollection, String, AttachOptions). I see some definite room for improvement with future work on this, such as displaying percentage complete, but I did not implement that at this time.

For the nested error handling, I found a great example of handling the error output in Aggregated Intelligence: Powershell & SMO - Copy and attach database. If you don't utilize the logic to handle nested errors, your PowerShell error messages will be generic. Handling the nested error properties is a must to be able to debug any errors you run into. http://blog.aggregatedintelligence.com/2012/02/powershell-smocopy-and-attach-database.html
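
Pulling those pieces together, a hedged sketch of the attach call with the nested error handling (SMO assemblies must be installed; the server name and file paths are hypothetical placeholders):

# Hedged sketch: attach an mdf/ldf pair via SMO, unwrapping nested exceptions on failure.
[void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'MyServer'
$files = New-Object System.Collections.Specialized.StringCollection
[void]$files.Add('C:\Data\MyDb.mdf')
[void]$files.Add('C:\Data\MyDb_log.ldf')
try {
    $server.AttachDatabase('MyDb', $files)
}
catch {
    # the outermost SMO exception is generic; walk InnerException for the real SQL error
    $ex = $_.Exception
    while ($ex.InnerException) { $ex = $ex.InnerException }
    Write-Error "Attach failed: $($ex.Message)"
}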

If you want to see a great example of PowerShell scripting restores with progress reporting and more, I recommend taking a look at this post, which has a very detailed PowerShell script example: SharePoint Script - Restoring a Content Database.

Parallel Powershell for Running SQL

This is just a quick look. I plan on diving into this in the future more, as I'm still working through some of the changes being made in the main parallel modules I utilize for SQL server. In the meantime, if you are looking for a quick way to leverage some parallel query running, take a look at PSParallel. I've avoided Powershell Jobs/Workflow due to limitations they have and the performance penalty I've seen is associated with them.For my choice, I've explored PSParallel & PoshRSJob. I've found them helpful for running some longer running queries, as I can have multiple threads running across server/database of my choice, with no query windows open in SSMS. Another great option that is under more active development is PoshRsJob. Be clear that this will have a higher learning curve to deal with as it doesn't handle some of the implicit import of external variables that PSParallel does. You'll have to work through more issues initially to understand correctly passing parameters and how the differents scope of runspaces impact updating shared variables (ie, things get deeper with synchronized hashtables and more :-) ) Hope this helps get you started if you want to give parallel query execution a shot. Here's a function using PSParallel to get you started. Let me know if it helps