
TFS Work-Item Fancy Filtering

If you want to create a TFS query that identifies work items that have changed, but were not changed by the person working them, there is a nifty way to do this. The filtering field can be set to <> another available field, but the syntax/setup in Visual Studio is not intuitive. It's in the dropdown list, but I'd never noticed it before!

filter list

And | Changed By | <> [Field] | Assigned To

Note that you don't include brackets on the Assigned To field, and that <> [Field] is not a placeholder for you to type a field name into; it's the literal operator that tells TFS to compare against another field.
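
For reference, the WIQL (the SQL-like query language TFS uses behind the query editor) should come out roughly like the sketch below. The System.* reference names are the standard ones, but treat this as an approximation rather than an exact export:

    SELECT [System.Id], [System.Title], [System.AssignedTo], [System.ChangedBy]
    FROM WorkItems
    WHERE [System.TeamProject] = @project
      AND [System.ChangedBy] <> [System.AssignedTo]
    ORDER BY [System.ChangedDate] DESC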

Filter setup for tfs query

Easy way to test log shipping or availability groups setup

I've been working through the fantastic training resources in Brent Ozar's Everything Bundle. In the recommended resources they mention after the log shipping and availability groups training, they point out that you can use a TechNet lab environment to actually get going on familiarizing yourself with the technology.

This is great! Of course, it's not the full deal, but it gives you a tangible way to start familiarizing yourself with this complex technology: TechNet Virtual Labs Availability Groups. Side note: the Brent Ozar Unlimited "Everything Bundle" is a fantastic training resource with great ROI if you apply the current podcast review discount of 78%; great value with immediate return. I also like Pluralsight, but find it takes more of a time investment to get the same immediate value. The bundle's courses are short, compact, and full of great material.

I recommend you add it to your training budget ASAP (see the podcast review discount). Combine this with a subscription to Office Hours in an app like Pocket Casts, and you'll have an easy way to keep up to date with some great tips.

Diff All Files

{{< admonition type="info" title="Updated: 2017-07-14" >}} Still find this incredibly awesome! The developer just updated it for Visual Studio 2017 after a lot of hard work. GitHub repo here for any issues. Big thanks to DeadlyDog for this great tool and for putting in the effort to update it for VS2017. deadlydog/VS.DiffAllFiles: Visual Studio Extension to make comparing files before and after committing them to Git and TFS faster and easier {{< /admonition >}}

{{< admonition type="info" title="Updated: 2016-01-25" >}} I started using the option to compare a single file at a time, since it picks up the syntax highlighting then. The other way (merge files into one) can't handle syntax highlighting with the various file types all mixed into the same window. The Diff All Files extension handles this beautifully by proceeding one by one, automatically opening the next file to compare (or letting you hit next to close the current comparison and open the next one). I still enjoy using this due to the "context" lines that reduce the noise to just the actual sections being changed. {{< /admonition >}}

If you need to do code comparison on a lot of files in Visual Studio, I ran across an extension, Diff All Files, that is really helpful for merging the individual file changes into a more consumable format.

In the changeset view you can select Diff All Files, and whatever settings you've set up in Tools > Options > Diff All Files will then feed through automatically.

Diff All Files setup options

You then click on Diff All Files in the changeset viewer.

diff_all_files_animation

All the non-excluded items will then be merged into a single file (if that option was selected) or opened in separate windows. Personally, I've found the merged view really helpful for having one single pane to scroll through.

Diff All Files settings

Third Party Comparison Perks

To go a step beyond this, you could use a third-party comparison tool. My preferred option is Araxis Merge (disclaimer: they provided me with a license for evaluation, which doesn't impact my assessment). I really like that it merges similarities down into a concise comparison. The tool gives you the option to show only the differing lines, with X lines before and after for context. This can reduce thousands of lines of code to just a few hundred that have variances. Win! I highly recommend you check them out; if you are doing open source, they have a free license for that. If you do a lot of code comparisons, some of the additional functionality in their app might be worth it; otherwise the built-in viewer in TFS is pretty good.

In my test comparison, I reduced 3,245 lines of code in the comparison window to the 25 lines that actually changed, plus 10 lines above/below each change for context. This resulted in only a few hundred lines to scroll through. Made my life easier! The only con for me with Araxis is no dark theme... but I'll live.

araxis_merge_gui

Configuring Third Party Tools

Configuring an external diff tool

I saved this snippet from working through Araxis documentation for setting up their tool with Visual Studio.

I also just ran across another site that had a nice resource list of configuration settings for different comparison tools. Diff All Tools - Visual Studio Extension

Other comparison tools

Third-party tools can be great for some extra perks like file versioning, comparing folders, and more. You can get by without them, of course, but if you do a lot of change review, consider some of the options. A few others I've reviewed (if pro/paid, they provided a license for me to evaluate):

  • Delta Walker (pro): Great UI, and it does image similarity comparison. Didn't seem to have a "show only changed lines with context" option like Araxis at the time I reviewed it. Mac app!
  • Devart Code Compare (pro/free): I love most programs Devart puts out. This is a good comparison app. It wasn't my preferred option, and at times it struggled with large files, but overall the comparison accuracy was good.
  • DiffMerge: Brief overview, solid tool, just didn't have specific improvements I needed (focused on changeset/sql comparison)
  • Beyond Compare: didn't dive into this extensively. They have a devoted following with a lot of loyalty, so check them out if you want to evaluate their software.
  • Built-in TFS diff: great for a built-in tool; I just wanted some enhancements in what to ignore and how changes are summarized.
  • SQL Compare: my favorite for reviewing changesets that are directly in TFS. I wish they'd add a module for pulling up the diff view without having to open SQL Compare and set up a comparison, though. Sounds like a good UserVoice feedback item :-)

Seeker & Servant: Fantastic Music with incredible dynamics

I just recently discovered this artist after reading an article in Worship Leader magazine. Fantastic dynamics. The dynamics and beautiful harmonies are pretty darn close to what I'd love to experiment with if I had a group of folks playing those instruments. Interestingly, the vocal harmonies are very simple, but I found them very beautiful. They're comprised of a tenor and a baritone range, which is a fresh change from artists like Shane & Shane, who both have incredibly high ranges. I found it very approachable to sing along with. The power of the musical dynamics and beautiful lyrics was a major win. I'll be following them closely.

Get their latest album for free here: Seeker & Servant

Similar Artists/Albums:

  • Fleet Foxes
  • Dustin Kensrue: Carry the Fire

Would Like If:

  1. You like post-rock
  2. Long Vamping Dynamics
  3. Minimalist Arrangements
  4. Tight 2 part harmonies

Here's the album on spotify for those who want to check it out!

Transaction Logging & Recovery (part 3)

Continuation of some notes on the excellent content by Paul Randal in the Pluralsight course SQL Server: Logging, Recovery, and the Transaction Log. Please consider supporting his excellent material by using Pluralsight and subscribing to his blog; he's contributed a vast amount to the SQL Server community through SQLSkills. These are my notes on key elements I had never worked through before.

Jackalopes Are Real... So Are Virtual Log Files

Ever seen a picture of a jackalope? Image by Mark Freeman (Jackalope, Grand Canyon North Rim, Oct 07), Creative Commons license.

This is how I used to feel about Virtual Log Files. Folks were saying things like

  • "Your server may be slowing down because of those darn VLF's".....
  • "Have you checked your VLF count"...
  • "My VLF count was x" and more

Finding clarification on VLF's (Virtual Log Files) can be difficult, as what one person considers a high count may be contradicted by someone else's "target VLF count" claim. Paul Randal unpacks this excellently in his course, providing some great transparency.


Why Should I Care About VLFs?

In an excellent article analyzing the performance impact of VLF's, Linchi Shea provides some valuable insight. For more detailed analysis and graphs, please look at this great article: Performance impact: a large number of virtual log files - Part I (2009)

  1. Inserts were about 4 times as slow
  2. Updates were about 8 times slower
  3. Deletes were about 5 times slower
  4. Recovery time can be impacted: Slow recovery times and slow performance due to large numbers of Virtual Log Files (2008)
  5. Triggers & log backups can be slowed down: Tony Rogerson article (2007)

Virtual Log Files

  • At the beginning of each log file is a header. This 8 KB header contains settings like autogrowth and size metadata.
  • Active VLF's are not free for reuse until they are marked as available when the log is cleared (see the previous post about backups).
  • When you create a database you have one active VLF, but as you progress more VLF's will be used.
  • Too few or too many VLF's can cause problems.

VLF Count

  • You cannot choose the number or size of the VLF's in a new portion of the transaction log; this is SQL Server driven.
  • The VLF size is determined by a formula.
  • For a detailed breakdown of the changes SQL Server 2014 brought to the VLF algorithm, see this excellent post by Paul Randal: Important change to VLF creation algorithm in SQL Server 2014. Since I'm working with SQL 2014, I found it interesting that the high-VLF-count issue that can hurt server performance has been greatly improved. Paul's example showed a log that would have ended up with 3192 VLF's prior to 2014 ending up with only 455 on SQL 2014, a substantial improvement. Paul indicated that the prior algorithm was designed back in the late 1990's, when log files weren't sized as large. Also note a critical question he answers: COMPATIBILITY LEVEL IS IGNORED BY THE STORAGE ENGINE. This is great information, as I've found it confusing at times to separate the query-processor impact of compatibility level from the storage engine, and understanding that scope helps when assessing possible impact. A quick way to check your own VLF count is sketched below.
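
DBCC LOGINFO is undocumented but returns one row per VLF, so the row count is your VLF count. The database name below is hypothetical, and the commented DMV alternative only exists on SQL Server 2016 SP2 and later:

    -- One row per VLF; the row count is your VLF count
    DBCC LOGINFO ('MyDatabase');

    -- On SQL Server 2016 SP2+ there is a documented DMV instead:
    -- SELECT COUNT(*) AS vlf_count
    -- FROM sys.dm_db_log_info(DB_ID('MyDatabase'));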

More Detail than You Ever Wanted to Know on VLF's

  • VLF's internally contain log blocks, which range from 512 bytes to 60 KB in size.
  • When a log block is filled it must be flushed to disk.
  • Within the log blocks are the log records.
  • VLF's contain a header. This indicates whether the VLF is active, the LSN, and parity bits.
  • VLF log records support multiple concurrent threads, so the associated transaction records don't have to be grouped together.
  • LSN. I'd heard the term used, but until you understand the pieces above, the term won't make sense. Log Sequence Number = VLF Sequence Number : Log Block Number : Log Record Number.
  • LSNs are important because the LSN is stamped on the data file to show the most recent log record it reflects, letting SQL Server know during crash recovery whether recovery needs to occur. A sketch of pulling LSNs out of the log follows below.
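
The undocumented fn_dblog function makes these pieces visible; a minimal sketch, best run against a test database rather than production:

    -- [Current LSN] comes back as VLF sequence : log block : log record (hex)
    SELECT TOP (10)
           [Current LSN],
           Operation,
           Context,
           [Transaction ID]
    FROM   fn_dblog (NULL, NULL);  -- NULL, NULL = no start/end LSN filter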

Number of Log Files

This is determined by a formula that was updated for SQL Server 2014.

  • Different growth sizes produce different numbers of VLF's.
  • VLF's don't care about the total log size, but about the size of each growth.
  • For instance, a growth event above 1 GB on the log file will be split into 16 new VLF's, each 1/16 of the growth (a rough sketch of the buckets follows below).
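
A rough sketch of those buckets, per Paul Randal's posts; the database and log file names below are hypothetical and the thresholds approximate:

    -- Pre-2014 formula (approximate buckets):
    --   growth <= 64 MB          -> 4 new VLF's
    --   64 MB < growth <= 1 GB   -> 8 new VLF's
    --   growth > 1 GB            -> 16 new VLF's (each 1/16 of the growth)
    -- SQL 2014+: if the growth is less than 1/8 of the current log size, only 1 VLF is created.

    -- Manually grow a (hypothetical) log by more than 1 GB and recount:
    ALTER DATABASE VlfDemo
        MODIFY FILE (NAME = VlfDemo_log, SIZE = 4096MB);

    DBCC LOGINFO ('VlfDemo');  -- expect roughly 16 more rows than before the growth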

FAQ (I've asked and looked for some answers!)

**Create a small log and then expand, or create a larger log initially?**

Paul Randal answered: No. If I was creating, say, a 64 GB log, I'd create it as 8 GB and then expand in 8 GB chunks to 64 GB to keep the number of VLFs small. That means each VLF will be 0.5 GB, which is a good size.

**What is the ideal number of VLFs?**

Some key articles I've found with detailed answers on understanding a proper VLF count:

  1. Transaction Log VLFs - too many or too few (2008)
  2. 8 Steps to better Transaction Log throughput (2005)
  3. A Busy/Accidental DBA's Guide to Managing VLFs (2009)
  4. Resources: Brent Ozar's sp_Blitz will check VLF counts.

**How do I ensure my log file gets marked as available for reuse when in full recovery?**

Full recovery is required for point-in-time recovery after a failure. This is because every change to data or to database objects is written to the transaction log prior to being committed. These transactions are then written to the data file as SQL Server sees fit after this initial write to disk. The transaction log is a rolling history of all changes in the database and allows each transaction to be redone in case of failure, rebuilding the state of the data at the time of failure. Under full recovery, the transaction log continues to grow until space is cleared by a successful transaction log backup. A quick check and log backup are sketched below.

Also see Top 13 SQL Server Mistakes and Misteps (2012): this great article by Tim Ford should be reviewed, as it's one of the best simple breakdowns of growth issues and prevention that I've read.
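
A minimal sketch of the log-reuse check and the log backup that clears space (database name and backup path are hypothetical):

    -- What is preventing log reuse right now? LOG_BACKUP means a log backup is needed.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM   sys.databases;

    -- Under FULL recovery, a regular log backup is what marks inactive VLF's as reusable.
    BACKUP LOG MyDatabase
        TO DISK = N'D:\Backups\MyDatabase_log.trn';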

Verifying Instant File Initialization

I ran into a few issues verifying instant file initialization. I was trying to ensure it was enabled, but found out that using xp_cmdshell to execute whoami /priv can be inaccurate when it isn't running under the account that actually has the privileges. This means that if my SQL service account has different permissions than I do, I could get an incorrect reading on whether it is enabled.

Paul Randal covers a second approach using the Sysinternals tool AccessChk, which seems promising. However, in my case I didn't have permission to run it in the environment I was trying to check. I found a way to do this by rereading the original article in which Paul Randal demonstrates the usage of trace flags 3004 and 3605. This provided a very simple way to quickly ensure I was getting the correct results back. For even more detail, I highly recommend his Logging, Recovery, and the Transaction Log course. I adapted pieces of his script for my quick check on this issue.

Successfully Verifying

Successfully enabled instant file initialization should mean that when you review the error log, no MDF shows up as being zeroed. I adapted the SQL script for reading the error log in a more filtered manner from this post: SQL Internals Useful Parameters for XP Reader (2014)
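
The rough shape of the check I ended up with, adapted from Paul's trace flag approach (the throwaway database name is hypothetical, and xp_readerrorlog is undocumented):

    -- 3004 writes file initialization details, 3605 routes them to the error log
    DBCC TRACEON (3004, 3605, -1);

    -- Create a throwaway database so initialization messages get generated
    CREATE DATABASE IFI_Check;

    -- With instant file initialization enabled, only the .ldf should show up as zeroed
    EXEC xp_readerrorlog 0, 1, N'Zeroing';

    DBCC TRACEOFF (3004, 3605, -1);
    DROP DATABASE IFI_Check;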


Transaction Logging & Recovery (part 2)

Continuation of some notes on the excellent content by Paul Randal in the Pluralsight course SQL Server: Logging, Recovery, and the Transaction Log. Please consider supporting his excellent material by using Pluralsight and subscribing to his blog; he's contributed a vast amount to the SQL Server community through SQLSkills.

Transaction Log File

  • The initial size of the log file is the larger of 0.5 MB or 25% of the total size of the data files specified in the create database statement. For example, if you create a database with 4 separate data files, the initial size the log file is set to increases accordingly (a quick check is sketched after this list).
  • This would be different if you've changed the MODEL database to set different default log and database sizes.
  • The log file must be zero initialized when it is physically created. Note that the benefits of instant file initialization apply to the data files, but the log file still has to be fully zero initialized, so creating a large log file doesn't benefit from instant file initialization (see the previous post on enabling instant file initialization). To see this, examine the error log after enabling trace flags 3605 and 3004: EXEC xp_readerrorlog; GO
  • When examining the results, you can see the zeroing of the log file, but not the data file, if you have instant file initialization enabled.
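
A quick sketch of checking what you actually got (database name, file paths, and sizes are hypothetical; sys.master_files reports size in 8 KB pages):

    -- Hypothetical database with two data files and no explicit log size
    CREATE DATABASE SizeDemo ON
        (NAME = SizeDemo_1, FILENAME = 'C:\SQLData\SizeDemo_1.mdf', SIZE = 100MB),
        (NAME = SizeDemo_2, FILENAME = 'C:\SQLData\SizeDemo_2.ndf', SIZE = 100MB);

    -- Convert pages to MB; with an unchanged model, expect the log near 25% of the data size
    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM   sys.master_files
    WHERE  database_id = DB_ID('SizeDemo');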


Virtual Log Files

  • The transaction log is divided into virtual log files. This helps the system manage the log file more efficiently.
  • New VLF's are inactive & not used.
  • Active VLF's contain the log record activity and can't be reused until they have been marked as available by SQL Server. (Cue my segue, based on the fun experience of giant log files.)
  • Note: In Brent Ozar Office Hours, Brent talked about the common misconception around SIMPLE vs. FULL recovery. Most folks (guilty) think that SIMPLE reduces the amount of logging SQL Server performs, thereby improving overall performance. In general, this is a misconception. Logging, as discussed in my previous post (101), is the core of SQL Server and is required for transaction durability. The difference between SIMPLE and FULL is mostly about how transaction log space is marked as available for reuse.
  • SIMPLE: after the data files are updated and are consistent with the changes the log has recorded, the transaction log space is marked as free and available.
  • FULL: all the transaction log records, even after being hardened to the data file, are still kept. This is what causes the common issue of runaway log file growth for folks not aware of how it works. A 300 GB log file on a small database because no one was watching? Been there? This happens because the log file keeps appending log entries over time, without freeing up space in the existing transaction log, unless some action is taken to let SQL Server know the space is available for reuse.
  • Marking the space as available is done by having a solid backup solution in place that continually backs up the transaction log (as part of the backup set), thereby letting SQL Server know that the transaction log has been backed up and the space can be reused in the existing log file.
  • The normal process is to ensure you have full backups, differential backups, and transaction log backups running on a schedule.

    > Under the full recovery model or bulk-logged recovery model, if the transaction log has not been backed up recently, backup might be what is preventing log truncation. If the log has never been backed up, you must create two log backups to permit the Database Engine to truncate the log to the point of the last backup. Truncating the log frees space for new log records. To keep the log from filling up again, take log backups frequently.

    MSDN Troubleshooting a Full Transaction Log

  • My past experience was running into this challenge when performing a huge number of bulk transactions. I ran a drive out of space because the log file continued to grow with no log backups running. The solution in my particular situation was to take a full backup, change the database recovery model to bulk-logged or SIMPLE, perform the massive changes, and then get right back to full recovery with a backup (a rough sketch follows below). This helped ensure the log file growth didn't keep escalating. In my case this was the appropriate action, but normally you want to design the transactions to be smaller and the backup strategy to be continual so you don't run into this issue.
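
A rough sketch of that workflow, with hypothetical names and paths. Note that if you drop to SIMPLE rather than bulk-logged, you also need a full or differential backup afterwards to restart the log backup chain:

    BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full.bak';

    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;  -- or SIMPLE

    -- ... perform the large bulk operations here ...

    ALTER DATABASE MyDatabase SET RECOVERY FULL;

    -- Resume (or restart) the backup chain so the log can keep being cleared
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';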

Transaction Logging & Recovery (101)

Logging & recovery notes from SQL Server: Logging, Recovery, and the Transaction Log (Pluralsight) by Paul Randal. I'm going to share some key points I've found helpful from taking this course. I've found his knowledge and detailed presentation extremely helpful, and a great way to expand my expertise. I highly recommend pluralsight.com, as well as anything by Paul Randal (just be prepared to have your brain turned to mush... he's definitely not writing "how to write a select statement" articles).

Logging & Recovery

Why is it important to know about this?

  • One of the most critical components of how SQL Server works, and what provides the Durability in ACID (Atomicity, Consistency, Isolation, Durability), is the logging and recovery mechanism.
  • If log files didn't exist, we couldn't guarantee the integrity of the database after a crash. The recovery process ensures that after a crash, even if the data file hasn't been changed yet (pending a checkpoint), SQL Server can replay all the transactions that had been performed and thereby recover to a consistent point, ensuring integrity. This is why we know that even after a crash no transactions will be left "halfway": transactions are required to be hardened to the log file before SQL Server allows the data file to be changed. If this were done in the reverse order (writing to the data file first), then after a crash the log file might be out of sync, and you couldn't replay actions that hadn't been fully recorded in the log file, since the data file and log file wouldn't agree on which transactions had occurred.
  • The logging & recovery mechanism is actually the backend architecture driving most of the disaster recovery options like mirroring, replication, availability groups, and more. They each have different ways of handling/processing, but underneath they all rely on the transaction log. Logging = the backbone of SQL Server.

Data Files Aren't Changed Immediately

  • Unlike what you might think initially, data files are not written to in real time with each transaction; that would be inefficient. Instead, the log is written to and hardened, and then periodic checkpoints (the default is around 1 minute, I believe) take those hardened changes and ensure the changed pages in the buffer (dirty pages at this point) get written to the data files as well. A rough way to watch this is sketched below.
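
A rough way to watch this happen, assuming you have VIEW SERVER STATE permission (just for poking around, not for monitoring):

    -- Dirty = changed in memory but not yet written to the data files
    SELECT COUNT(*) AS dirty_pages
    FROM   sys.dm_os_buffer_descriptors
    WHERE  database_id = DB_ID()
      AND  is_modified = 1;

    -- Force the flush instead of waiting for the automatic checkpoint
    CHECKPOINT;

    -- Re-running the count above should now show far fewer dirty pages for this database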

Can't get away from transactions

  • Note that all actions occur in transactions. If one is not explicitly stated, then an implicit transaction is used by SQL Server for the action (when dealing with DML). For example, if we alter 2 tables, we could manually wrap each in its own transaction, so if one fails only that change is rolled back to the original state, allowing us to commit one table's changes even if the other experienced a failure. If we combine both of these statements without defining a transaction, then SQL Server implies how they are handled, and this might result in different behavior.

For example:

    -- single batch; the column definitions here are only illustrative
    ALTER TABLE foo ADD CreatedDate DATETIME2 NULL;
    ALTER TABLE bar ADD CreatedDate DATETIME2 NULL;
    GO
    -- if the batch itself fails (for example, it doesn't compile), neither table is changed

On the other hand, in the statements below, GO is treated by SSMS as a batch separator and SQL Server runs two separate batches, each with its own transaction. If one has an error, the other can still proceed.

    ALTER TABLE foo ADD CreatedDate DATETIME2 NULL;   -- illustrative column definition
    GO
    ALTER TABLE bar ADD CreatedDate DATETIME2 NULL;
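
If you want each change isolated on purpose, here's a sketch of wrapping one explicitly (table and column names are hypothetical; repeat the same pattern for the second table):

    BEGIN TRY
        BEGIN TRANSACTION;
            ALTER TABLE foo ADD CreatedDate DATETIME2 NULL;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        PRINT 'foo change rolled back: ' + ERROR_MESSAGE();
    END CATCH;
    GO
    -- repeat the same pattern for bar; a failure in one has no effect on the other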

Everything gets logged!

  • Everything has some logging to describe changes in the log file (even in simple recovery)
  • Version store & workfile changes in tempdb

Commitment Issues

Transaction Has Committed

  • Before the transaction can commit, the transaction log has to be written through to disk. Once the transaction log is written out to disk, the transaction is considered durable.
  • If you are using mirroring, the system will also wait for the replica/mirror to harden the transaction to the mirror database's log file on disk before the commit completes on the principal.
  • The log file basically represents an exact playback of the changes that have been made, so even if the buffer was cleared (removing the pages that were changed in the buffer), SQL Server crashed, or your server went down, SQL Server can recover the changes from the log file. It is the "description" of the changes that were made.

Model needs exclusive lock

I ran into an issue where a developer trying to create a database was denied because SQL Server could not obtain an exclusive lock on model. After verifying with other blogs, I found that creating a database requires an exclusive lock on model, since model is used as the template for the new database.

In my case I had connected SSMS directly to model for some queries instead of master. SQL Complete (Devart's excellent alternative to SQL Prompt) was querying the schema, and this open session was blocking the use of model to create a new database. After killing this low-priority query session, no further issues were experienced.
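
For next time, a quick way to find the offending session before resorting to KILL (a sketch using the standard DMVs):

    -- Sessions currently sitting in model
    SELECT session_id, login_name, program_name, status
    FROM   sys.dm_exec_sessions
    WHERE  database_id = DB_ID('model');

    -- Locks currently held against model
    SELECT request_session_id, request_mode, request_status
    FROM   sys.dm_tran_locks
    WHERE  resource_database_id = DB_ID('model');

    -- KILL <session_id>;  -- only after confirming the session is safe to kill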

Good to remember! Don't connect directly to model unless you have a specific reason to do so. Otherwise, you might be the culprit on some blocking errors.

Documenting Your Database with Diagrams

Don't get too excited. I know you love documentation, and just can't wait to spend some time digging in to document your database thoroughly. I imagine you probably want to build visio charts manually, or draw whiteboard diagrams by hand and take pictures.

For the rest of us who are lazy, a tool to help document your database is a great idea. I'm a big fan of SQL Doc by Red Gate and ApexSQL Doc. I ended up using ApexSQL Doc to document the database at my work, though Red Gate's is also a great option. Both provide a great schema documentation tool that can generate a helpful CHM file to browse through (which I much prefer to trying to sort through PDF/Word docs).

However, there is one thing I was recently tasked with that made me appreciate a little more hands-on documentation. If you are deploying an application, or any set of tables/structures that might end up being used by someone else, describing your data model can be a really helpful resource to someone trying to query it or build a custom implementation against your data. This might mean documenting some data structures for someone building a report, or for developers trying to leverage some of the data in a separate implementation.

Understanding the data model as a DBA/dev is important to being able to architect improvements as well, so I've found that diagramming some of the logical structures/groupings can be a tremendous benefit to better understanding a database that perhaps you didn't architect from scratch, or that has legacy pieces.

Some positives I see:

  1. Forces a better understanding of how your data model is constructed
  2. Can help highlight weaknesses and flaws in your data model
  3. Can help explain the data model to folks who may not be living and breathing your business, so if terminology and fields sometimes are confusing on how things relate, the data model can help clarify some of these things.

Some negatives:

  1. Requires a little elbow grease
  2. You'll find things to fix

Visio

Visio 2010 is the last Visio version that provides the capability to reverse engineer a database into a diagram. This is unfortunate. I worked with it for a while, but ended up shelving Visio as a long-term option because of its unintuitive nature and behavior at times. For example, refreshing your database schema wouldn't flag any new changes, and tables that appeared in several diagrams would suddenly be given new names like TableTest, TableTest1, TableTest2, instead of reusing the same table in the model. The layout and arrangement can also be pretty messy at times. Working with data sources requires you to use the generic OLE DB provider, from what I recall; otherwise I got errors using SQL 2014. Lots of little things added up to a poor experience. Gave it the old college try... not worth it, in my personal opinion.

Visual Paradigm

Visual Paradigm Gallery: This tool has some amazing functionality, but unfortunately it was very tedious and complex for the purpose of documenting an existing database. I think this aspect of Visual Paradigm's reverse-engineering toolset needs some major productivity and UI tweaks to be usable. It may be great for someone creating databases and models from diagrams, but for a SQL DBA working to document their database better, it wasn't a great option. Even though it was very unintuitive, I did like the customization options for layout and arrangement. The problem was that none of the changes rendered in real time, making it an annoying guessing game. Not a huge fan. It was also very difficult to identify related tables and add them with a click, so it wasn't easy to navigate all the tables and their relationships in an intuitive way. I'm not a big fan of having to tweak 10 settings via 10 trips to a right-click context menu. Their presentation workflow needs some major improvements.

DBVisualizer

DbVisualizer: By far the best auto-arranging layout tool I found. Amazing. It uses yEd as the backbone of the diagramming, so I also downloaded yEd, but to use that on its own I'd definitely have to play around; the engine is powerful, but it would take programming to get it to do anything similar to what DbVisualizer accomplished. It's more of an exploratory tool than a documenting one, but you can save the graph to .gml format and open it in yEd to change. A little extra work, but possible if you like the results. It doesn't display foreign key columns, so it's more for looking at the referring/relationship side of things, without a full column list. I'm pretty much sold that this is bar none the best auto-laid-out representation of tables I've ever seen from any tool. I'll be keeping the free DbVisualizer as a tool for reviewing ad hoc data models for sure! The negative for documentation purposes is that none of these graphs are saved, so it's good for ad hoc or one-time use, but not for regenerating.

Aquasoft Data Studio

They have a great output, but unfortunately they don't allow you to show only key columns, so your diagrams get pretty bloated. It's pricey, has some other great features, and I'd be a big fan, except the diagramming options aren't as robust as the others for customization. When documenting a data model, I like to show only key columns (PK/FK) to ensure there is not excessive noise. I did reach out to the company on this option, and they said: "You can't show the PK and FK without the entity. So, you can't show them be themselves." The lack of any good way to submit feedback as a potential buyer put me off this product for now.

Other tools that were way too complex and pricey to fit documentation needs

  1. Oracle SQL Developer - very difficult to get the SQL Server connection up and running. Unintuitive, and the output isn't visually pleasing.
  2. Toad Data Modeler - same as above. Not visually pleasing, too many steps. Targeted at design rather than documentation.
  3. ERStudio... didn't even consider, much too pricey for documentation purposes
  4. SQL Architect - much too unintuitive and complex

Final Winner: DbSchema

DbSchema: This is the winner. I switched all our diagrams over to this tool. A few major pros:

  1. All diagrams saved as "layouts" in your project. This means I can come back at anytime and fix/tweak.
  2. I can refresh the database schema imported and it will flag all the changes, and then the diagrams will automatically get updated.
  3. It generates really nice HTML interactive documentation. Major win! With a little upfront work arranging things better, I can refresh all diagrams (say a new FK gets added) and then bulk export them as interactive HTML database diagrams with full detail. You can hover over the FK lines and it will highlight the relationship, etc. This is great for usability, and it removes a lot of manual work in refreshing diagrams when something gets added or changed. One of the most significant points for this tool is the functionality plus the price: a commercial license is extremely reasonable (as of 2015-12-09 the pricing was under $200).

Some things that could use improvement:

  1. Improved auto-arrangement, and a preset option to show only FK/PK columns (I have an AutoHotkey script I built to do this; contact me if you want it, it helped simplify things).
  2. Optional layouts like tree and hierarchical.