Go R1 Day 54
progress
- Worked with tests and GoLand.
- Modified table-driven tests to remove hard-coded test case inputs.
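As a sketch of that pattern, a table-driven test keeps inputs in a slice of cases instead of hard-coding repeated calls. The function under test here is a stand-in, not from the actual project:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize is a hypothetical function under test.
func normalize(s string) string {
	return strings.ToUpper(strings.TrimSpace(s))
}

func main() {
	// Table-driven style: each case is data, not a separate hard-coded test body.
	cases := []struct {
		name string
		in   string
		want string
	}{
		{"simple", "go", "GO"},
		{"trims whitespace", "  goland  ", "GOLAND"},
	}
	for _, tc := range cases {
		got := normalize(tc.in)
		if got != tc.want {
			fmt.Printf("%s: got %q, want %q\n", tc.name, got, tc.want)
			continue
		}
		fmt.Printf("%s: ok\n", tc.name)
	}
}
```

In a real `_test.go` file, the loop body would run each case with `t.Run(tc.name, ...)` so failures report per-case.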
I tried Steampipe out for the first time today.
I'm seriously impressed.
I built a project go-aws-ami-metrics last year to test out some Go that would iterate through instances and AMIs to build out aging information on the instances.
I used it to help me work through how to use the AWS SDK to iterate through regions, instances, images, and more.
In 15 mins I just solved the equivalent issue in a way that would benefit anyone on a team. My inner skeptic was cynical, thinking this abstraction would be problematic and I'd be better served by just sticking with the raw power of the SDK.
It turns out this tool already is built on the SDK using the same underlying API calls I'd be writing from scratch.
First example: `DescribeImages`.

This is the magic happening in the code:

```go
resp, err := svc.DescribeImages(&ec2.DescribeImagesInput{
	Owners: []*string{aws.String("self")},
})
if err != nil {
	return nil, err // handle per the plugin's error contract
}
for _, image := range resp.Images {
	d.StreamListItem(ctx, image)
}
```
This is the same SDK I used, but instead of having to build out all the calls, there is a huge library of data already returned.
```go
req, publicImages := client.DescribeImagesRequest(&ec2.DescribeImagesInput{
	Filters: []*ec2.Filter{
		{
			Name:   aws.String("is-public"),
			Values: []*string{aws.String("true")},
		},
	},
})
```
There is no need to reinvent the wheel. Instead of iterating through regions, accounts, and more, Steampipe allows this in plain old SQL.
For example, to gather instances running AMIs created in the last `n` period:

```sql
SELECT
  ec2.instance_id,
  ami.name,
  ami.image_id,
  ami.state,
  ami.image_location,
  ami.creation_date,
  extract(day FROM now() - ami.creation_date) AS creation_age,
  ami.public,
  ami.root_device_name
FROM
  aws_ec2_ami_shared AS ami
  INNER JOIN aws_ec2_instance AS ec2 ON ami.image_id = ec2.image_id
WHERE
  ami.owner_id = '137112412989'
  AND ami.creation_date > now() - INTERVAL '4 week'
```
There are plugins for GitHub, Azure, AWS, and many more.
You can even do cross-provider calls.
Imagine wanting to query a tagged instance and pulling the tag of the work item that approved its release. Join this data with Jira, find all associated users involved in the original request, and you have an idea of the cross-provider data Steampipe could simplify.

Stitching this together by hand is complicated. It would involve at least two SDKs and their unique implementations.
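As a hedged sketch of that idea (the `jira_issue` columns and the `work_item` tag key are my assumptions, not something I tested), a cross-provider query might look like:

```sql
SELECT
  ec2.instance_id,
  ec2.tags ->> 'work_item' AS work_item,
  issue.summary,
  issue.assignee_display_name
FROM
  aws_ec2_instance AS ec2
  INNER JOIN jira_issue AS issue
    ON issue.key = ec2.tags ->> 'work_item'
WHERE
  ec2.tags ->> 'work_item' IS NOT NULL
```

The join condition is plain SQL; no SDK pagination or credential plumbing for either provider.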
I feel this is like Terraform for cloud metadata: a way to provide a consistent experience, with syntax that is comfortable to many, without the need to deal with provider-specific quirks.
To browse the results in a GUI, I ran:

```shell
brew install tableplus
steampipe service start
```

in my terminal.

AWS has lots of ways to get data. AWS Config can aggregate across multiple accounts, SSM can do inventory, and other tools can do much of this.
AWS isn't easy. Doing it right is hard. Security is hard.
Building the expertise to implement and consume all of this can be challenging.
🎉 Mission accomplished!
I think Steampipe is offering a fantastic way to get valuable information out of AWS, Azure, GitHub, and more with a common language that's probably the single most universal development language in existence: SQL.
> One of the goals of Steampipe since we first started envisioning it is that it should be simple to install and use - you should not need to spend hours downloading pre-requisites, fiddling with config files, setting up credentials, or pouring over documentation. We've tried very hard to bring that vision to reality, and hope that it is reflected in Steampipe as well as our plugins.
Providing a CLI with features like this is incredible.

The future is bright, as long as `truncate ec2_instance` doesn't become a thing. 😀
If you want to explore the available schema, check out the thorough docs. There are 212 tables of metadata currently available.

Separately, I hit a `typechecking loop` compiler error that helped me learn a bit more about how the compiler's parsing occurs: the package-level variable named `logger` shadows the imported `logger` package, so the type expression ends up referring back to the variable itself. The quick fix was to simply change `var logger *logger.Logger` to `var log *logger.Logger`.
I've also been leaning on `golangci-lint`.

For instance, this would be flagged as a bad practice (while not applicable for a simple test like this, having a const makes sense in almost all other cases). Some linters worth enabling:

- `lll`: for maximum line length.
- `testpackage`: for emphasizing blackbox testing.
- `gochecknoglobals`: for ensuring global variables aren't used.
- `nlreturn`: for requiring a blank line before a return.
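These can be turned on in a `.golangci.yml`; a minimal sketch (the `line-length` value is just an example, not what I actually used):

```yaml
linters:
  enable:
    - lll
    - testpackage
    - gochecknoglobals
    - nlreturn
linters-settings:
  lll:
    line-length: 120
```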
The `nlreturn` rule is a "nit", but nice for consistency (though I'd like to see it as an autoformatted rule with the fix applied).

Took a swing at creating my own extension pack for Go.
GitHub: sheldonhull/extension-pack-go
This was a good chance to familiarize myself with the ecosystem and simplify sharing a preset group of extensions.
Set up the repo with a `Taskfile.yml` to simplify running tasks in the future.
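A Taskfile for a project like this might look something like the following sketch (the task names and the use of the `vsce` publishing CLI are my assumptions, not the actual file):

```yaml
version: '3'

tasks:
  package:
    desc: Build the .vsix for the extension pack
    cmds:
      - npx vsce package
  publish:
    desc: Publish to the Visual Studio Marketplace
    cmds:
      - npx vsce publish
```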
If frequent updates needed to happen, it would be easy to plug this into GitHub actions with a dispatch event and run on demand or per merge to main.
Here's the marketplace link if you want to see what it looks like: Marketplace - extension-pack-go
I could see this process being improved in the future with GitHub only requirements. At this time, it required me to use my personal Azure DevOps org to configure access and publishing.
Used `strings.Repeat` instead of iteration because I'm lazy. 😀

Configured `golangci-lint` to reduce noise on linting checks.

Use Driftctl to detect drift in your infrastructure. This snippet generates an HTML report to show coverage and drift figures of the target.
For multiple states, you'll need to adapt this to provide more `--from` paths to ensure all state files are used to identify coverage.
```powershell
$S3BucketUri = "terraform-states-$AWS_ACCOUNT_NUMBER/$AWS_REGION/$TERRAFORMMODULE/terraform.tfstate"
$Date = $(Get-Date -Format 'yyyy-MM-dd-HHmmss')
$ArtifactDirectory = (New-Item 'artifacts' -ItemType Directory -Force).FullName

# Replace AWS_PROFILE with your aws profile name if you have multiple profiles.
# Note: a trailing comment after the backtick continuation breaks the command,
# so keep comments on their own lines.
&docker run -t --rm `
    -v ${PWD}:/app:rw `
    -v "$HOME/.driftctl:/root/.driftctl" `
    -v "$HOME/.aws:/root/.aws:ro" `
    -e "AWS_PROFILE=default" `
    cloudskiff/driftctl scan --from "tfstate+s3://$S3BucketUri" --output "html://$ArtifactDirectory/driftctl-report-$Date.html"
```
Optionally, you can adjust this to recursively scan the state files of an entire bucket (say, if using Terragrunt to store states in special key prefixes). Passing `--from "tfstate+s3://mybucket/myprefix"` works without requiring the full path to a single tfstate file, picking up anything matching `**/*.tfstate`.

At this point, I'm still struggling with the proper way to abstract a logging wrapper that calls a logging library. There's enough boilerplate for setup of my preferred defaults in zerolog that I want to include a wrapper to organize this and return the logger.
This tends to look like:
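A minimal sketch of the wrapper shape I mean, using the stdlib `log/slog` as a stand-in for zerolog so the snippet stands alone (the type names and defaults are mine, not settled):

```go
package main

import (
	"log/slog"
	"os"
)

// Logger wraps the underlying logging library so preferred defaults
// live in one place. The embedded *slog.Logger promotes its methods
// onto Logger, which is exactly where my transparency issues crop up.
type Logger struct {
	*slog.Logger
}

// New applies the setup boilerplate once and returns the wrapper.
func New() *Logger {
	handler := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		Level: slog.LevelInfo,
	})
	return &Logger{slog.New(handler)}
}

func main() {
	log := New()
	// Called through the promoted method on the embedded logger.
	log.Info("message", "key", "value")
}
```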
This results in a pretty lengthy call: `logger.Logger.Info().Str("key", "value").Msg("message")`.
I'm also having issues with the embedded logger not returning the correct methods transparently back to the caller.
I've tested with `internal/logger` and `pkg/logger`, with similar issues.
This one I'll have to come back round to.
Tests in the same package use `package packagename`. If you are testing as a consumer might, then you can use `package packagename_test` to access only the exported identifiers.

I also set up `golangci-lint`
and found it challenging when working with multiple modules in subdirectories.
The Go ecosystem is simplest with one repo = one module.
While mono-repos can work, the CI tooling isn't quite as intuitive to setup.
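For what it's worth, newer Go toolchains (1.18+) added workspace files to smooth over exactly this multi-module layout; a sketch with hypothetical module directories:

```
go 1.18

use (
	./tools
	./service
)
```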
VSCode requires experimental support for multiple modules as well at this time.

Slack can be a really effective tool for asynchronous work, but the defaults aren't great out of the box.
I want to give you a few tips on how to use Slack effectively.
Piping through release notifications, work-item updates, alerts and more can help you reduce context switching with other tools, but without proper control you'll likely find it overwhelming.
Use sections to organize your content and customize the level of priority you want to assign to the grouped channels.
This is for paid plans; the guide assumes you are on a company plan with those features.
Reduce noise from busy channels, especially when folks over-use `@here`. Configure settings (especially in automated or busy rooms) to mute `@here` if it is not properly used in the room.

If someone forgets to mention your name with the `@Me` syntax, you can set your name as a keyword to alert on as a backup. I set `sheldon` as a keyword, and it helps ensure I get notified even if the alert, message, or response didn't properly format my name in the message or by the app integrations (very few map to user id properly).
From your settings for the sidebar, enable the *All Unreads* section. This can help you quickly review all channel activity in a single pane, similar to an email inbox.
A couple of basic shortcuts will set you up to use Slack effectively. On Windows, typically replace `cmd` with `ctrl`.
| Keyboard | Action |
| --- | --- |
| `cmd+k` | Quick switcher for channels and conversations. Don't leave anything pinned you don't need to; use this to flip around instead. |
| `cmd+/` | Keyboard shortcut reference panel. |
| `cmd+left` / `cmd+right` | Navigate back or forward, like a web browser, to whatever conversation or channel you were viewing. |
| `cmd+up` | Edit the last message (if you are focused in the textbox). |
| `option+shift+down` | Go to the next unread channel (or use All Unreads). |
Make sure to update your notification window to allow for uninterrupted deep work.