
Consider the Benefits of PowerShell for Developer Workflows

Who Am I Talking To

  • You use bash or python.
  • PowerShell seems wordy, extra verbose, and annoying.
  • It's a Windows thing, you say... why would I even look at it?
  • Pry bash out of my fingers if you dare (probably not for you 😁)

What PowerShell Is

  • The best language for automating Windows... period.
  • A great language for development tooling and productivity scripts.
  • One of the best languages for automation with interactivity. Python is fantastic, but its REPL isn't meant for the same interactivity you get with PowerShell. The PowerShell prompt is sorta like mixing Python and fish/bash in a happy marriage.
  • A rich language (not just scripting) for interacting with AWS using AWS.Tools.
  • A rich object-oriented pipeline that can handle very complex actions in one-liners.
  • Mostly intuitive and consistent for command discovery.
    • The verbosity is a common complaint from bash pros.
    • The point of the Verb-Noun verbosity is discoverability. tar, for example, is a bit harder to figure out than Expand-Archive -Path foo -DestinationPath foo (see the snippet after this list).
  • A language with a robust testing framework for unit, integration, infrastructure, or any other kinda testing you want! (Pester is awesome)
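
For example, command discovery works right from the prompt. A quick sketch using built-in cmdlets:

# Find commands by verb or noun without leaving the shell
Get-Command -Verb Expand
Get-Command -Noun Archive

# Then pull up usage examples for the one you want
Get-Help Expand-Archive -Examples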

What PowerShell Isn't

  • Python 🤣
  • Good at data science.
  • Succinct
  • Meant for high-concurrency
  • Good at GUIs... but come on, we're devs... GUIs make us weak 😜
  • A good webserver
  • Lots more.

The Right Tool for the Job

I'm not trying to tell you never to use bash. It's what you know, great!

However, if you haven't explored it, I'd say that once you get past some of the paradigm differences, there is a rich, robust set of modules and features that can improve most folks' workflows.

Why Even Consider PowerShell

As I've interacted more and more with folks coming from a mostly Linux background, I can appreciate that considering PowerShell seems odd. In the lifecycle of things, it's only recently become cross-platform, so it's still new to most.

Having been immersed in the .NET world, and now working on macOS and using Docker containers running Debian and Ubuntu (sometimes Alpine Linux), I completely get that it's not even in most folks' purview.

Yet I think it's worth considering for developer workflows: there is a lot to gain with PowerShell for the more complex build and development workflows because of the access to .NET.

No, it's not "superior". It's different. Simple CLI bash scripting is great for many things (hence the prior article about improving development workflow with Task, which uses shell syntax).

The fundamental difference between bash and PowerShell, in my opinion, is really text vs. objects. This is where much of the value comes in when considering which to use.
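
To make that concrete, here's a minimal sketch of the difference (the process name is just an illustration):

# Text-based (bash): parse positional columns out of ps output
# and hope the layout never changes:
#   ps aux | grep code | awk '{ print $2 }'

# Object-based (PowerShell): filter and select typed properties, no parsing required
Get-Process -Name code -ErrorAction SilentlyContinue |
    Select-Object Id, ProcessName, WorkingSet64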

{{< admonition type="info" title="Go For CLI Tools" >}} Go provides a robust cross-platform single binary with autocomplete features and more.

I'd say that for things such as exporting pipelines to Excel and other "automation" actions, it's far more work in Go.

Focus Go on tooling where the extra plumbing and stronger typing give benefit rather than just overhead. AWS SDK operations, serverless/lambda, APIs, and complex tools like Terraform fit the bill perfectly and are a great use case. {{< /admonition >}}

Scenario: Working with AWS

If you are working with the AWS SDK, you are working with objects. This is where the benefit comes in over cli usage.

Instead of parsing JSON results and using tools like jq to pick through arrays, you can interact with the object by named properties very easily.

$Filters = @([Amazon.EC2.Model.Filter]::new('tag:is_managed_by', 'muppets'))
$InstanceCollection = (Get-EC2Instance -Filter $Filters).Instances |
    Select-PSFObject InstanceId, PublicIpAddress, PrivateIpAddress, Tags, 'State.Code as StateCode', 'State.Name as StateName' -ScriptProperty @{
        Name = @{
            get = {
                # Pull the value of the Name tag off the instance
                $this.Tags.GetEnumerator().Where{ $_.Key -eq 'Name' }.Value
            }
        }
    }

With this $InstanceCollection variable, we now have an easily used object we can work with by named properties.

  • Give me all the names of the EC2 instances: $InstanceCollection.Name
  • Sort those: $InstanceCollection.Name | Sort-Object (or use alias shorthand such as sort)
  • For each of the results, start the instance: $InstanceCollection | Start-EC2Instance
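
Pulling those together (a sketch; the tag filter value is illustrative):

# Start every matched instance whose Name tag starts with 'web'
$InstanceCollection |
    Where-Object { $_.Name -like 'web*' } |
    Start-EC2Instance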

Practical Examples

Beyond that, we can do many things with the rich ecosystem of prebuilt modules.

Here are some examples of rich one-liners using the power of the object-based pipeline.

  • Export to JSON: $InstanceCollection | ConvertTo-Json -Depth 10 | Out-File ./instance-collection.json
  • Toast notification on results: Send-OSNotification -Title 'Instance Collection Results' -Body "Total results returned: $($InstanceCollection.Count)"
  • Export to Excel with a table: $InstanceCollection | Export-Excel -Path ./instance-collection.xlsx -TableStyle Light8 -TableName 'FooBar'
  • Send a rich PagerDuty event to flag an issue: Send-PagerDutyEvent -Trigger -ServiceKey foo -Description 'Issues with instance status list' -IncidentKey 'foo' -Details $HashObjectFromCollection
  • Use a cli tool to flip to YAML (you can often use native tooling without much issue!): $InstanceCollection | ConvertTo-Json -Depth 10 | cfn-flip | Out-File ./instance-collection.yml

Now build a test (mock syntax) that passes or fails based on the status of the instances:

{{< admonition type="Note" title="Disclaimer" open=true >}}

I'm sure there's great tooling with jq, yq, Excel CLIs, and other libraries that can do similar work.

My point is that it's pretty straightforward to explore this in PowerShell, as object-based pipelines are a lot less work with complex objects than text-based parsing.

{{< /admonition >}}

Describe "Instance Status Check" {
  Context "Instances That Should Be Running" {
    foreach($Instance in $InstanceCollection)
    {
        It "should be running" {
        $Instance.StatusName | Should -Be 'Running'
        }
    }
  }
}

Now you have a test framework with which you could validate operational issues across hundreds of instances, or just unit test the output of a function.
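
If you drop that block into a file, running it is one command (the file name here is hypothetical):

# Discovers and runs the Describe blocks in the file
Invoke-Pester -Path ./InstanceStatus.Tests.ps1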

Exploring the Object

I did this comparison once for a coworker, maybe you'll find it useful too!

"Test Content" | Out-File ./foo.txt
$Item = Get-Item ./foo.txt

## Examine all the properties and methods available. It's an object
$Item | Get-Member

This gives you an example of the objects behind the scenes. Even though your console will only display a small set of properties, the actual object is a .NET object with all the associated methods and properties.

This means the item returned by Get-Item has properties such as the base name, full path, directory name, and more.
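
A quick way to see a few of those, straight off the System.IO.FileInfo object:

# Strongly typed properties, no text parsing involved
$Item | Select-Object BaseName, FullName, DirectoryName, CreationTime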

You can access the actual datetime type of the CreationTime, allowing you to do something like:

($Item.LastAccessTime - $Item.CreationTime).TotalDays

This uses two date objects; performing math on them yields a duration you can work with through the relevant properties.
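
A small sketch to show what's happening under the covers:

# Subtracting two datetimes yields a System.TimeSpan
$Age = $Item.LastAccessTime - $Item.CreationTime
$Age.GetType().FullName    # System.TimeSpan
$Age.TotalDays             # the same value as the one-liner above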

The methods available could be anything such as $Item.Encrypt(), $Item.Delete(), or $Item.MoveTo(), all provided by the .NET System.IO.FileInfo type.
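
As an illustrative sketch (the destination file name is made up):

# MoveTo requires a full destination path; Delete removes the file
$Item.MoveTo("$($Item.DirectoryName)/foo-renamed.txt")
$Item.Delete()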

I know you can do many of these things in bash as well, but I'd wager the object pipeline provides a very solid experience for more complex operations, thanks to the .NET framework types available.

Wrap Up

This was meant to give a fresh perspective on why some folks have benefited from PowerShell over shell scripting. It's a robust language that can pay rich rewards for automation, build, and cloud work if you invest some time to investigate.

For me the basic "right tool for the job" would look like this:

  • data: python
  • serverless: go & python (powershell can do it too, but prefer the others)
  • web: go & python
  • basic cli stuff: shell (using Task which uses shell syntax)
  • complex cli project tasks: powershell & go
  • automation/transformation: powershell & python
  • high concurrency, systems programming: go

Maybe this provided a fresh perspective on why PowerShell might benefit even the diehard shell scripters out there, and maybe it will help convince you to take the plunge and give it a shot.

Improving Local Development Workflow With Go Task

Workflow Tooling

Development workflow, especially outside of a full-fledged IDE, is often a disjointed affair. DevOps-oriented workflows that combine CLI tools such as Terraform, PowerShell, bash, and more add complexity to getting up to speed and productive.

Currently, there are a variety of frameworks to solve this problem. The "gold standard" most in the open-source community are familiar with would be Make.

Considering Cross-Platform Tooling

This is not an exhaustive list; it's focused more on my journey. I'm not saying that your workflow is wrong.

I've looked at a variety of tooling, and the challenge has typically been that most are very unintuitive and difficult to remember.

Make... it's everywhere. As mentioned, I'm not going to argue the merits of each tool, but while CMake is cross-platform, I've never considered Make a truly cross-platform tool that is first-class in both environments.

InvokeBuild & Psake

In the Windows world, my preferred framework would be InvokeBuild or psake.

The thing is, not every environment will always have PowerShell, so I've wanted to experiment with a minimalistic task framework for intuitive local usage in a project when the tooling doesn't need to be complex. While InvokeBuild is incredibly flexible and intuitive, there is an expectation of familiarity with PowerShell to fully leverage it.

If you want a robust framework, I haven't found anything better. I highly recommend examining it if you are comfortable with PowerShell. You can generate VS Code tasks from your defined scripts and more.

InvokeBuild & Psake aren't great for beginners just needing to run some tooling quickly in my experience. The power comes with additional load for those not experienced in PowerShell.

If you need to interact with the AWS.Tools SDK, or complete complex tasks such as generating objects from parsing AST (Abstract Syntax Trees) and the like, then I'd lean towards InvokeBuild.

However, if you need to initialize some local dependencies, run a linting check, format your code, get the latest from the main branch and rebase, and other common tasks, what option do you have to get up and running more quickly?

Task

Go Task

I've been pleasantly surprised by this cross-platform tool based on a simple YAML schema. It's written in Go, and as a result it normally takes just a line or two to install on your system.

Here's why you might find some value in examining this.

  1. Cross-platform shell syntax, thanks to a built-in Go implementation of sh.
  2. A very simple YAML schema to learn.
  3. Some very nice features that make it easy to skip already-built assets, set up task dependencies (that run in parallel too!), and simple cli interactivity (see the sketch after this list).
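
For instance, skipping already-built assets works by fingerprinting the source and generated files between runs. A minimal sketch (the build command and paths are my own assumptions, not from this post):

version: '3'
tasks:
  build:
    desc: Rebuild only when source files have changed.
    cmds:
      - go build -o ./bin/app ./cmd/app
    sources:
      - '**/*.go'
    generates:
      - ./bin/app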

My experience has been very positive, as I've found it very intuitive to build out basic commands as I work, rather than having to deal with more complex schemas.

Get Started

version: '3'
tasks:
  default: task --list
  help: task --list

  fmt:
    desc: Apply terraform formatting
    cmds:
      - terraform fmt -recursive=true

The docs are great for this project, so I'm not going to try and educate you on how to use this, just point out some great features.

First, with a quick VS Code snippet, this provides a quick way to bootstrap a new project with a common interface for running basic commands.

Let's give you a scenario... assuming you aren't using an already built Docker workspace.

  1. I need to initialize my 2 terraform directories.
  2. I want to also ensure I get a few go dependencies for a project.
  3. Finally, I want to validate my syntax is valid among my various directories, without using pre-commit.

This gets us started...

version: '3'
tasks:

Next, I threw together some examples here.

  • Initialize commands for two separate directories.
  • A fmt command to apply standardized formatting across all tf files.
  • Finally, wrap up those commands with a deps: [] value that will run the init commands in parallel, and once finished, run fmt to ensure consistent formatting.
version: '3'
env:
  TF_IN_AUTOMATION: 1
tasks:
  init-workspace-foo:
    dir: terraform/foo
    cmds:
      - terraform init
  init-workspace-bar:
    dir: terraform/bar
    cmds:
      - terraform init
  fmt:
    desc: Recursively apply terraform fmt to all directories in project.
    cmds:
      - terraform fmt -recursive=true
  init:
    desc: Initialize the terraform workspaces in each directory in parallel.
    deps: [init-workspace-foo, init-workspace-bar]
    cmds:
      - task: fmt

You can even add a task that gives you a structured git interaction, rather than relying on git aliases.

  sync:
      desc: In GitHub flow, I should be getting the latest from main and rebasing on it so I don't fall behind
      cmds:
        - git town sync

Why Not Just Run Manually?

I've seen many folks comment online: why even bother? Can't the dev just run the commands in the directory when working through it and be done with it?

I believe tasks like this should be thrown into a task runner from the start. Yes, it's very easy to just type terraform fmt, go fmt, or other simple commands... if you are the builder of that project.

However:

  • It reduces the cognitive load of tedious tasks that no one should have to remember as the project grows.
  • It makes your project more accessible to new contributors/teammates.
  • It simplifies the move to automation: wrap these actions up in GitHub Actions or equivalent by simply having your chosen CI/CD tooling run the same task you run locally.

Minimal effort to move it to automation from that point!
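
As a sketch of what that can look like (the setup action and versions here are assumptions, not from this post), a CI job just installs Task and runs the same targets you use locally:

# Hypothetical GitHub Actions job reusing the local Taskfile
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: arduino/setup-task@v1
      - run: task init
      - run: task fmt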

I think wrapping things up with a good task runner considers the person behind you, and prioritizes thinking of others in the course of development. It's an act of consideration.

Choose the Right Tooling

Here's how I'd look at the choices:

  • Run as much in Docker as you can.
  • For simple actions easily driven from the CLI, such as build, formatting, and validation, start with Task from the beginning and make your project more accessible.
  • If requirements grow more complex, with interactions with AWS, custom builds for Lambda, or other interactions that can't easily be wrapped up in a few lines of shell scripting, use InvokeBuild or equivalent. This gives you access to the power of .NET and the large module collection provided.

Even if you don't really need it, think of the folks maintaining the project or trying to contribute, and perhaps you'll find some positive wins there. 🎉

Unable To Resolve Provider AWS with Terraform Version 0.13.4

I couldn't get past this for a while, until I accidentally stumbled across a fix. I believe the fix was merged, but the problem still existed in 0.13.4, so I stuck with this workaround.

{{< admonition type=info title="GitHub Issues" open=true >}} When investigating the cause, I found this PR, which intended this to be the installer behaviour for the implicit global cache, in order to match 0.12: any providers found in the global cache directory are only installed from the cache, and the registry is not queried. Note that this behaviour can be overridden using provider_installation configuration. That is, you can specify configuration like this in ~/.terraform.d/providercache.tfrc

GitHub Issue Comment

{{< /admonition >}}

I used the code snippet below, editing the file with: micro ~/.terraform.d/providercache.tfrc

I wasn't sure if it was interpreted with shell, so I didn't use the relative path ~/.terraform.d/plugins, though that might work as well.

provider_installation {
  filesystem_mirror {
    path = "/Users/sheldonhull/.terraform.d/plugins"
  }
  direct {
    exclude = []
  }
}

After this, terraform init worked.

Quick Start to Using InfluxDB on macOS

Intro

InfluxDB OSS 2.0 is a release candidate at this time, so this may change once it's released.

It wasn't quite clear to me how to get up and running quickly with a Docker-based setup for the OSS 2.0 version, so this may save you some time if you are interested. The Windows workflow should also be very similar, except for the brew commands and service install commands; there you'll just want to flip over to choco install telegraf.

Docker Compose

I grabbed this from a comment and modified the ports, as they were flipped from the 9999 range used during early access.

# docker exec -it influxdb /bin/bash

version: "3.1"
services:
  influxdb:
    restart: always  # Always restart on reboot; no need to manage this manually
    container_name: influxdb
    ports:
      - '8086:8086'
    image: 'quay.io/influxdb/influxdb:2.0.0-rc'
    volumes:
      - influxdb:/var/lib/influxdb2
    command: influxd run --bolt-path /var/lib/influxdb2/influxd.bolt --engine-path /var/lib/influxdb2/engine --store bolt
volumes:
  influxdb:

The main modification I made was ensuring it auto-starts.

Access the instance on localhost:8086.

Telegraf

It's pretty straightforward using Homebrew: brew install telegraf

The configuration file is created by default at /usr/local/etc/telegraf.conf, along with the telegraf.d directory.

I'm still a bit new to macOS, so once I opened Chronograf, I wanted to try the new HTTP-based configuration endpoint. I used the web GUI to create a telegraf config for system metrics and tried replacing the telegraf.conf reference in the plist file. That didn't work for me, as I couldn't get the environment variable for the token to be used, so I ended up leaving the plist as is and editing the configuration instead.

  • brew services stop telegraf
  • micro /usr/local/Cellar/telegraf/1.15.3/homebrew.mxcl.telegraf.plist

I tried updating the config path (line 16 of the plist below) to the http config endpoint, unsuccessfully.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>KeepAlive</key>
    <dict>
      <key>SuccessfulExit</key>
      <false/>
    </dict>
    <key>Label</key>
    <string>homebrew.mxcl.telegraf</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/opt/telegraf/bin/telegraf</string>
      <string>-config</string>
      <string>/usr/local/etc/telegraf.conf</string>
      <string>-config-directory</string>
      <string>/usr/local/etc/telegraf.d</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>WorkingDirectory</key>
    <string>/usr/local/var</string>
    <key>StandardErrorPath</key>
    <string>/usr/local/var/log/telegraf.log</string>
    <key>StandardOutPath</key>
    <string>/usr/local/var/log/telegraf.log</string>
  </dict>
</plist>

What worked for me was to edit the config with micro /usr/local/etc/telegraf.conf and add the following (I set the token explicitly in my test case).

[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "sheldonhull"
  bucket = "telegraf"
  • Start the service with brew services restart telegraf and it should start sending data.
  • NOTE: I'm still getting the hang of brew and service management on Linux/macOS. The first time I did this it didn't work, and I ended up starting telegraf with telegraf -config http://localhost:8086/api/v2/telegrafs/068ab4d50aa24000 and running it in my console initially (having already set the INFLUX_TOKEN environment variable). Any comments on whether I did something wrong here would be appreciated 😁 I'm pretty sure the culprit is the INFLUX_TOKEN environment variable; I'm not sure the service load with brew actually sources the .profile I put it in. Maybe I can pass it explicitly?

Additional Monitoring

This is a work in progress. I found GitHub Issue #3192 and used it as a starting point to experiment with getting a "top processes" view for evaluating what specifically was impacting my system at the time of a spike. I'll update this once I've improved things further.

# # Monitor process cpu and memory usage
# https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat
[[inputs.procstat]]
    pattern = "${USER}"
    fieldpass = [
      "cpu_time_user",
      "cpu_usage",
      "memory_rss",
    ]

[[processors.topk]]
  namepass = ["*procstat*"]
  fields = [
      "cpu_time_user",
      "cpu_usage",
      "memory_rss",
  ]
  period = 20
  k = 3
  # group_by = ["pid"]

[[processors.regex]]
  namepass = ["*procstat*"]
  [[processors.regex.tags]]
    key = "process_name"
    pattern = "^(.{60}).*"
    replacement = "${1}..."

Final Result

I like the final result. Dark theme for the win.

I've had some spikes in VS Code recently impacting my CPU, so I've been meaning to do something like this for a while. I finally got it knocked out today once I realized there was a 2.0 Docker release I could use to get up and running easily. The next step will be to add some process-level detail so I can track the culprit (probably VS Code + Docker Codespaces).

Influx System Dashboard

Wishlist

  • Pretty formatting of date/time like Grafana does, such as converting seconds into hours/minutes.
  • A log viewing API so I could query CloudWatch logs like Grafana offers, without needing to ingest them.
  • Editing existing telegraf configurations in the load data section. Right now I can't edit.
  • The MSSQL custom SQL Server query plugin to be released 😁 (Issue 1894 & PR 3069). Right now I've done custom exec-based queries using dbatools and locally included PowerShell modules, which sorta defeats the flexibility of having a custom query call so I can minimize external dependencies.

Set Theory Basics in the Eyes of a 10 Year Old

My morning: explaining set and intersect theory basics to my 10-year-old with Minecraft gamer tags. Trying to justify the need to know this, the best I could come up with was his future need to build a shark attack report accurately.

Kids are the best. Tech is fun. What other job would have me spin up my MSSQL container with docker-compose up -d and write a quick SQL example with INTERSECT, UNION, and more to demonstrate this magic?

I followed it up with a half-hearted lie that my day is comprised of cmatrix 😂, which he didn't believe for more than a couple of seconds.

{{< asciinema id="DnQ0MCgZekv11MggByfjqRNNT" >}}

Ways to Improve Codespaces Local Docker Experience

I've been enjoying Codespaces local development workflow with Docker containers.

I'm using macOS and the Docker experimental release. Here are some ideas to get started on improving the development experience.

  • Clone the repository into the virtual volume (supported by the extension) to eliminate the bind between host and container. This entails working exclusively inside the container.
  • Increase Docker's allowed RAM to 8GB from the default of 2GB.

Any other ideas? Add a comment (powered by GitHub issues, so it's just a GitHub issue in the backend)

Keep the Snippet Simple

I took a quick step back when too many parentheses started showing up. If you question the complexity of your quick snippet, you are probably right that there is a much simpler way to do things.

I wanted to get a trimmed message of the results of git status -s. As I worked on this snippet, I realized it was becoming way overcomplicated. 😆

$(((git status -s) -join ',') -split '')[0..20] -join ''

I knew my experimentation was going down the wrong road, so I took a quick step back to see what someone else did. Sure enough, Stack Overflow provided me a snippet.

$(((git status -s) -join ','))[0..20] -join ''     # the first 21 characters of the joined status
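
The trick is that indexing a string with a range returns an array of chars, and -join stitches them back into a string:

# Indexing a string returns [char] values; -join reassembles them
('hello world')[0..4] -join ''    # hello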

Moral of the story... there's always someone smarter on Stack Overflow. 😆

Go R1 Day 12

Day 12 of 100

Progress

  • Worked on an Algolia index project to do atomic updates on the search index for my blog.
  • Worked with JSON, structs, ranges, and more.
  • Saw success with the first value in my output now correctly parsing the title from the front matter.
  • Implemented zerolog.
  • Used the front library to parse YAML front matter into a map.
  • Accessed the map to get the title into JSON.

I'm hoping that eventually I can build out a Go app for sharing that's the equivalent of "atomic Algolia", allowing diff updates. I haven't found anything like that for Hugo so far.