
2021

Go R1 Day 54

progress

  • Worked with tests in GoLand.
  • Modified table-driven tests to remove hard-coded test case inputs (a sketch of the pattern follows).
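
Here's a minimal sketch of the table-driven pattern; the Perimeter function and the cases are illustrative, not the actual exercise:

    package shapes

    import "testing"

    func Perimeter(width, height float64) float64 {
        return 2 * (width + height)
    }

    func TestPerimeter(t *testing.T) {
        // Table-driven: cases live in data, not in duplicated test logic.
        cases := []struct {
            name          string
            width, height float64
            want          float64
        }{
            {"unit square", 1, 1, 4},
            {"wide rectangle", 10, 2, 24},
        }
        for _, tc := range cases {
            tc := tc // capture range variable for the subtest closure
            t.Run(tc.name, func(t *testing.T) {
                if got := Perimeter(tc.width, tc.height); got != tc.want {
                    t.Errorf("Perimeter(%v, %v) = %v, want %v", tc.width, tc.height, got, tc.want)
                }
            })
        }
    }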

Steampipe Is SQL Magic

Up And Running In Minutes

I tried Steampipe out for the first time today.

Follow Steampipe on Twitter

I'm seriously impressed.

I built a project, go-aws-ami-metrics, last year to test out some Go code that iterates through instances and AMIs to build aging information on the instances.

I used it to help me work through how to use the AWS SDK to iterate through regions, instances, images, and more.

In 15 minutes, I solved the equivalent problem in a way that would benefit anyone on a team. My inner skeptic was cynical, thinking this abstraction would be problematic and that I'd be better served by sticking with the raw power of the SDK.

It turns out this tool is already built on the SDK, using the same underlying API calls I'd be writing from scratch.

First example: DescribeImages

This is the magic happening in the code.

    resp, err := svc.DescribeImages(&ec2.DescribeImagesInput{
        Owners: []*string{aws.String("self")},
    })
    if err != nil {
        return nil, err
    }
    // Stream each image back to Steampipe as a table row.
    for _, image := range resp.Images {
        d.StreamListItem(ctx, image)
    }

This is the same SDK I used, but instead of my having to build out all the calls, Steampipe already returns a huge library of this data.

    // My old approach with the same v1 SDK: build each request by hand.
    req, publicImages := client.DescribeImagesRequest(&ec2.DescribeImagesInput{
        Filters: []*ec2.Filter{
            {
                Name:   aws.String("is-public"),
                Values: []*string{aws.String("true")},
            },
        },
    })

There is no need to reinvent the wheel. Instead of writing code to iterate through regions, accounts, and more, Steampipe lets you do it all in plain old SQL.

Query The Cloud

For example, to gather:

  • EC2 instances
  • that use AWS-owned images
  • where the image was created within a given period (n weeks here)
  • with the image age in days
    SELECT
        ec2.instance_id,
        ami.name,
        ami.image_id,
        ami.state,
        ami.image_location,
        ami.creation_date,
        -- age in days, from the interval between now and the creation date
        extract(day FROM now() - ami.creation_date) AS creation_age,
        ami.public,
        ami.root_device_name
    FROM
        aws_ec2_ami_shared AS ami
        INNER JOIN aws_ec2_instance AS ec2 ON ami.image_id = ec2.image_id
    WHERE
        ami.owner_id = '137112412989' -- Amazon's AMI owner id
        AND ami.creation_date > now() - INTERVAL '4 week'

There are plugins for GitHub, Azure, AWS, and many more.

You can even do cross-provider calls.

Imagine querying a tagged instance and pulling the tag of the work item that approved its release. Join that data with Jira, find all users involved with the original request, and you start to see the kind of cross-provider questions Steampipe could simplify.

Stitching this together yourself is complicated. It would involve at least two SDKs and their unique implementations.

I feel this is like Terraform for cloud metadata: a way to provide a consistent experience, with syntax that is comfortable to many, without the need to deal with provider-specific quirks.

Query In Editor

  • I downloaded the recommended TablePlus with brew install tableplus.
  • Ran steampipe service start in my terminal.
  • Copied the Postgres connection string from the terminal output and pasted it into TablePlus.
  • Pasted my query, ran it, and the results were right there as if I were connected to a database.
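
Because the service is just Postgres under the hood, any Postgres client library can run the same queries. Here's a minimal Go sketch assuming the lib/pq driver; the connection string is a placeholder for whatever your steampipe service start output shows:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // standard Postgres driver
    )

    func main() {
        // Placeholder: use the connection string printed by steampipe service start.
        db, err := sql.Open("postgres", "postgres://steampipe:PASSWORD@localhost:9193/steampipe")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        rows, err := db.Query("SELECT instance_id FROM aws_ec2_instance")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var id string
            if err := rows.Scan(&id); err != nil {
                log.Fatal(err)
            }
            fmt.Println(id)
        }
    }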

TablePlus

AWS Already Has This

AWS has lots of ways to get data. AWS Config can aggregate across multiple accounts, SSM can do inventory, and other tools can do much of this.

AWS isn't easy. Doing it right is hard. Security is hard.

Building all of this, and then actually consuming it, takes expertise and can be challenging.

🎉 Mission accomplished!

Experience

I think Steampipe offers a fantastic way to get valuable information out of AWS, Azure, GitHub, and more with a common language that's probably the single most universal development language in existence: SQL.

As the Steampipe team puts it:

One of the goals of Steampipe since we first started envisioning it is that it should be simple to install and use - you should not need to spend hours downloading prerequisites, fiddling with config files, setting up credentials, or poring over documentation. We've tried very hard to bring that vision to reality, and hope that it is reflected in Steampipe as well as our plugins.

Providing a CLI with features like this is incredible:

  • execute queries
  • turn into an interactive terminal
  • provide prompt completion for commands
  • run a background service to allow connection via an IDE

The Future

The future is bright as long as truncate ec2_instance doesn't become a thing. 😀

Further Resources

If you want to explore the available schema, check out the thorough docs.

Go R1 Day 53

progress

  • Troubleshot a typechecking loop compiler error, which taught me a bit more about how the compiler's parsing occurs. The quick fix was to simply change var logger *logger.Logger to var log *logger.Logger so the variable no longer shadowed the package name (see the sketch after this list).
  • Read up on dependency injection concepts and clean architecture design.
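
A stripped-down illustration of why the original declaration loops; the import path is hypothetical:

    package main

    // Assumes a custom logging package; the import path is a stand-in.
    import "example.com/project/logger"

    // This declaration shadows the imported package name, so logger.Logger
    // can no longer resolve to the package - it resolves to the variable
    // being declared, producing the typechecking loop:
    //
    //   var logger *logger.Logger
    //
    // Renaming the variable keeps the package name visible:
    var log *logger.Logger

    func main() {}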

Go R1 Day 52

progress

  • Published an extension pack for Go.
  • Learned about the magic number linter in golangci-lint. For instance, the function below would be flagged as bad practice; while a const isn't really needed for a simple test like this, it makes sense in almost all other cases (a const version follows this list).
    // The 2 here is the magic number the linter flags.
    func Perimeter(width float64, height float64) float64 {
        return 2 * (width + height)
    }
  • Learned a few extra linter violations and how to exclude them, including:
    • lll: for maximum line length.
    • testpackage: for emphasizing black-box testing.
    • gochecknoglobals: for ensuring global variables aren't used.
    • nlreturn: for requiring a blank line before return statements. That's a nit, but nice for consistency (though I'd like to see this as an autoformatted rule with the fix applied).
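
For illustration, here's the same function with the magic number pulled into a named const; the name is my choice, not the linter's:

    // sidePairs documents why the perimeter doubles the sum:
    // a rectangle has two pairs of equal sides.
    const sidePairs = 2

    func Perimeter(width float64, height float64) float64 {
        return sidePairs * (width + height)
    }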

feat: structs-methods-and-interfaces -> initial functions without str… · sheldonhull/learn-go-with-tests-applied@be9ce01 · GitHub

My First VSCode Extension Pack for Go

Took a swing at creating my own extension pack for Go.

sheldonhull/extension-pack-go - GitHub

This was a good chance to familiarize myself with the ecosystem and simplify sharing a preset group of extensions.

I set up the repo with a Taskfile.yml to simplify running tasks in the future. If frequent updates were needed, it would be easy to plug this into GitHub Actions with a dispatch event and run on demand or per merge to main.

Here's the marketplace link if you want to see what it looks like: Marketplace - extension-pack-go

I could see this process being improved in the future with GitHub-only requirements. At this time, it required me to use my personal Azure DevOps org to configure access and publishing.

Resources

Publishing Extensions

Go R1 Day 51

progress

  • Did the iteration exercise; however, I skipped ahead and used strings.Repeat instead of iteration because I'm lazy. 😀
  • Moved all tests into black-box test packages.
  • Worked through variadic functions (a sketch combining both ideas follows this list).
  • Tweaked my VSCode autotest to run on save.
  • Made further tweaks to golangci-lint to reduce noise on linting checks.
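
As a sketch combining the two ideas, here's a variadic function that leans on strings.Repeat; the function and names are mine, not from the exercise:

    package main

    import (
        "fmt"
        "strings"
    )

    // RepeatAll takes any number of strings (variadic) and repeats each one count times.
    func RepeatAll(count int, inputs ...string) []string {
        out := make([]string, 0, len(inputs))
        for _, s := range inputs {
            out = append(out, strings.Repeat(s, count))
        }
        return out
    }

    func main() {
        // The variadic parameter accepts zero or more arguments.
        fmt.Println(RepeatAll(3, "ab", "c")) // [ababab ccc]
    }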

r1-d51-code-coverage

Commits

Use Driftctl to Detect Infra Drift

Use Driftctl to detect drift in your infrastructure. This snippet generates an HTML report to show coverage and drift figures for the target.

For multiple states, you'll need to adapt this to provide more --from paths, to ensure all state files are used to identify coverage.

    $S3BucketUri = "terraform-states-$AWS_ACCOUNT_NUMBER/$AWS_REGION/$TERRAFORMMODULE/terraform.tfstate"
    $Date = $(Get-Date -Format 'yyyy-MM-dd-HHmmss')
    $ArtifactDirectory = (New-Item 'artifacts' -ItemType Directory -Force).FullName
    # Replace default with your aws profile name if you have multiple profiles.
    # (A comment can't follow the backtick line continuation, so it lives up here.)
    &docker run -t --rm `
        -v ${PWD}:/app:rw `
        -v "$HOME/.driftctl:/root/.driftctl" `
        -v "$HOME/.aws:/root/.aws:ro" `
        -e "AWS_PROFILE=default" `
        cloudskiff/driftctl scan --from "tfstate+s3://$S3BucketUri" --output "html://$ArtifactDirectory/driftctl-report-$Date.html"

Optionally, you can adjust this to recursively scan the state files of an entire bucket (say, if using Terragrunt to store them under special key prefixes).

  • Change to --from "tfstate+s3://mybucket/myprefix" without requiring the full path to a single tfstate file.
  • Recursively search many subfolders with: **/*.tfstate.

Go R1 Day 50

progress

At this point, I'm still struggling with the proper way to abstract a logging wrapper that calls a logging library. There's enough boilerplate in setting up my preferred defaults in zerolog that I want a wrapper to organize this and return the logger.

This tends to look like:

    type Logger struct {
      Logger *zerolog.Logger // a named field, not an embedded one, so no method promotion
    }

This results in a pretty lengthy call with logger.Logger.Info().Str("key", "value").Msg("message"). I'm also having issues with the embedded logger not returning the correct methods transparently back to the caller.
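
One direction to try next: embed the zerolog.Logger value itself instead of giving the field a name, so its methods promote to the wrapper. A minimal sketch, assuming default zerolog settings:

    package logger

    import (
        "os"

        "github.com/rs/zerolog"
    )

    // Logger embeds zerolog.Logger (no field name), so Info(), Error(),
    // and friends promote and are callable directly on the wrapper.
    type Logger struct {
        zerolog.Logger
    }

    // New returns a wrapper preconfigured with my preferred defaults.
    func New() Logger {
        zl := zerolog.New(os.Stderr).With().Timestamp().Logger()
        return Logger{Logger: zl}
    }

With that, log.Info().Str("key", "value").Msg("message") works without reaching through logger.Logger twice.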

I've tested with internal/logger and pkg/logger with similar issues. This one I'll have to come back around to.

Go R1 Day 49

progress

  • Learned about white-box vs black-box testing. Apparently, you can access all identifiers of a package if you use the same package name, such as: package packagename. If you are testing as a consumer might, you can use package packagename_test to access only the exported identifiers (example below).
  • Used examples in the test file to provide self-documentation of how to use the method.
  • Worked further with golangci-lint and found it challenging when working with multiple modules in subdirectories. The Go ecosystem seems simplest with one repo = one module. While mono-repos can work, the CI tooling isn't quite as intuitive to set up, and VSCode requires experimental support for multiple modules at this time.
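
For reference, a black-box test file looks like this; adder, Add, and the module path are stand-ins:

    // This file lives alongside the code but declares package adder_test,
    // so only exported identifiers of package adder are visible here.
    package adder_test

    import (
        "testing"

        "example.com/project/adder" // hypothetical module path
    )

    func TestAdd(t *testing.T) {
        if got := adder.Add(2, 3); got != 5 {
            t.Errorf("Add(2, 3) = %d, want 5", got)
        }
    }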

How To Reduce Noise From Slack

Noise

Slack can be a really effective tool for asynchronous work, but the defaults aren't great out of the box.

I want to give you a few tips on how to use Slack effectively.

Piping in release notifications, work-item updates, alerts, and more can help you reduce context switching with other tools, but without proper controls you'll likely find it overwhelming.

Sections

Use sections to organize your content and customize the level of priority you want to assign to the grouped channels.

These are paid features. This guide assumes you are on a company plan that includes them.

Slack Sidebar Section

Individual Channel Settings

Reduce noise from busy channels, especially when folks overuse @here.

Individual Channel Settings

Configure settings (especially in automated or busy rooms) to:

  • mute notifications
  • mute @here if it is not used properly in the room

You'll still get notified if your name is mentioned, but otherwise the channel won't keep showing up as needing your attention.

Change Section Behavior

Change To Unread Only

  • Sort by recent activity.
  • Set your section to only show unreads, sorted by recent updates. This keeps your sidebar simple and clean, auto-hiding channels after they've been read.

Flag Keywords

If someone forgets to mention your name with the @Me syntax, you can set your name as a keyword to alert on as a backup.

I set sheldon as a keyword, and it helps ensure I get notified even if an alert, message, or response didn't properly format my name, or an app integration didn't map it to my user id (very few do this properly).

Use All Unreads

From your settings for the sidebar, enable the All Unreads section. This can help you quickly review all channel activity in a single pane similar to an email inbox.

Shortcuts

A couple of basic shortcuts will set you up to use Slack effectively.

For Windows, typically replace cmd with ctrl.

Keyboard               Action
cmd+k                  Quick switcher for channels and conversations. Use it to flip between them instead of leaving things pinned that you don't need.
cmd+/                  Keyboard shortcut reference panel.
cmd+left / cmd+right   Navigate back or forward, like a web browser, to whatever conversation or channel you were viewing.
cmd+up                 Edit your last message (when focused in the textbox).
option+shift+down      Go to the next unread channel (or use All Unreads).

Downtime

Make sure to update your notification schedule to allow for uninterrupted deep work.