

Experiments With Go Arrays and Slices

Simplicity Over Syntactic Sugar

As I've been learning Go, I've come to see that many of the decisions made to simplify the language removed features that allow more succinct expressions in languages such as Python, PowerShell, and C#. Non-orthogonal features give you many expressive ways to accomplish the same thing, but at a cost, according to Go's design philosophy.

My background is heavily focused on relational databases and set-based work, so as I study programming paradigms separate from any database involvement, I'm realizing there's a fundamental difference in how a database developer and a backend developer look at this. Rather than declarative, set-based syntax, you need to focus much more on iterating through collections and manipulating them.

As I explored my assumptions, I found that even .NET's LINQ expressions abstract the same basic loops and iterations behind simpler syntax rather than performing true set selections. In fact, I've read that LINQ performance is often worse than a simple loop (see this interesting Stack Overflow answer). The catch is that the LINQ expression might be more maintainable in an enterprise environment, at the cost of some degraded performance (excluding some scenarios like deferred execution).

For example, in PowerShell, you can work with arrays in a multitude of ways.

$array[4..10] | ForEach-Object {}
# or
foreach($item in $array[$start..$end]){}

This syntactic sugar provides brevity, but these two approaches, among the many others I can think of, add variety and performance considerations to weigh. Go strips this cognitive load away by giving you only a few ways to do the same thing.

Using For Loop

This example uses just int slices, but I'm trying to understand the options for ranging over collections of structs as well.

When working through these examples for this question, I discovered (thanks to rubber duck debugging) that you can simplify slice selection using newSlice := arr[2:5].
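As a minimal, self-contained illustration of that discovery (using the same sample values as the examples below):

```go
package main

import "fmt"

func main() {
	arr := []int{10, 15, 20, 25, 35, 45, 50}
	// arr[2:5] selects elements at indexes 2, 3, and 4;
	// the high index is exclusive.
	newSlice := arr[2:5]
	fmt.Println(newSlice) // [20 25 35]
}
```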

Simple Loop

As an example: Goplay Link To Run

package main

import "fmt"

func main() {
    startIndex := 2
    itemsToSelect := 3
    arr := []int{10, 15, 20, 25, 35, 45, 50}
    fmt.Printf("starting: arr: %v\n", arr)

    newCollection := []int{}
    fmt.Printf("initialized newCollection: %v\n", newCollection)
    for i := 0; i < itemsToSelect; i++ {
        newCollection = append(newCollection, arr[i+startIndex])
        fmt.Printf("\tnewCollection: %v\n", newCollection)
    }
    fmt.Printf("= newCollection: %v\n", newCollection)
    fmt.Print("expected: 20, 25, 35\n")
}

This would result in:

starting: arr: [10 15 20 25 35 45 50]
initialized newCollection: []
    newCollection: [20]
    newCollection: [20 25]
    newCollection: [20 25 35]
= newCollection: [20 25 35]
expected: 20, 25, 35

Moving Loop to a Function

Assuming there are no more effective selection libraries in Go, I'd write functions for this behavior, such as: Goplay Link To Run.

package main

import "fmt"

func main() {
    startIndex := 2
    itemsToSelect := 3
    arr := []int{10, 15, 20, 25, 35, 45, 50}
    fmt.Printf("starting: arr: %v\n", arr)
    newCollection := GetSubselection(arr, startIndex, itemsToSelect)
    fmt.Printf("GetSubselection returned: %v\n", newCollection)
    fmt.Print("expected: 20, 25, 35\n")
}

func GetSubselection(arr []int, startIndex int, itemsToSelect int) (newSlice []int) {
    fmt.Printf("newSlice: %v\n", newSlice)
    for i := 0; i < itemsToSelect; i++ {
        newSlice = append(newSlice, arr[i+startIndex])
        fmt.Printf("\tnewSlice: %v\n", newSlice)
    }
    fmt.Printf("= newSlice: %v\n", newSlice)
    return newSlice
}

which results in:

starting: arr: [10 15 20 25 35 45 50]
newSlice: []
    newSlice: [20]
    newSlice: [20 25]
    newSlice: [20 25 35]
= newSlice: [20 25 35]
GetSubselection returned: [20 25 35]
expected: 20, 25, 35

Trimming this down further, I found I could use the slice syntax (assuming a consecutive range of values), such as: Goplay Link To Run

func GetSubselection(arr []int, startIndex int, itemsToSelect int) (newSlice []int) {
    fmt.Printf("newSlice: %v\n", newSlice)
    newSlice = arr[startIndex:(startIndex + itemsToSelect)]
    fmt.Printf("\tnewSlice: %v\n", newSlice)
    fmt.Printf("= newSlice: %v\n", newSlice)
    return newSlice
}


The range expression gives you both the index and value, and it works for maps and strings as well.

It turns out you can also work with a subselection of a slice in the range expression.

package main

import "fmt"

func main() {
    startIndex := 2
    itemsToSelect := 3
    arr := []int{10, 15, 20, 25, 35, 45, 50}
    fmt.Printf("starting: arr: %v\n", arr)

    fmt.Printf("Use range to iterate through arr[%d:(%d + %d)]\n", startIndex, startIndex, itemsToSelect)
    for i, v := range arr[startIndex:(startIndex + itemsToSelect)] {
        fmt.Printf("\ti: %d v: %d\n", i, v)
    }
    fmt.Print("expected: 20, 25, 35\n")
}


While the language is simple, understanding some behaviors with slices caught me off-guard.

First, I needed to clarify my terminology. Since I was looking for a subset of an array, slices were the correct choice. A standard array would be used for a fixed-size set that never changes.

Tour On Go says it well with:

An array has a fixed size. A slice, on the other hand, is a dynamically-sized, flexible view into the elements of an array. In practice, slices are much more common than arrays.
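A quick sketch of that distinction: the array's length is baked into its type, while the slice can grow.

```go
package main

import "fmt"

func main() {
	// An array's length is part of its type and cannot change.
	fixed := [3]int{1, 2, 3}

	// A slice is a flexible view over an array and can grow via append.
	flexible := []int{1, 2, 3}
	flexible = append(flexible, 4)

	fmt.Println(len(fixed), len(flexible)) // 3 4
}
```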

For instance, I tried to think about how I would scale performance on a larger collection, so I passed a pointer to my int array. However, I was actually using a slice, so the pointer wasn't needed. A slice header already contains a pointer to its underlying array, so passing a slice doesn't copy the elements the way passing many other types would.

newCollection := GetSubSelection(&arr,2,3)

func GetSubSelection(arr *[]int){ ...
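To see why the pointer is unnecessary, here's a small sketch: a function that receives a plain slice can still modify the caller's elements, because the copied slice header points at the same backing array. (The caveat: an append inside the function may allocate a new backing array, so growth isn't visible to the caller.)

```go
package main

import "fmt"

// double modifies elements in place; the slice header is copied,
// but it still points at the caller's backing array.
func double(nums []int) {
	for i := range nums {
		nums[i] *= 2
	}
}

func main() {
	arr := []int{1, 2, 3}
	double(arr) // no &arr needed
	fmt.Println(arr) // [2 4 6]
}
```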

I think some of these behaviors aren't quite intuitive to a new Gopher, but writing them out helped clarify the behavior a little more.


This is a bit of a rambling about what I learned so I could solidify some of these discoveries by writing them down. #learninpublic

For some great examples, look at:

If you have any insights, feel free to drop a comment here (it's just a GitHub powered comment system, no new account required).

Go R1 Day 22

Day 22 of 100


Using Dash, I read through much of the language specification. Dry reading for sure, but helped a bit in understanding a little more on stuff like arrays, slices, loops, etc.

Nothing profound to add, except to say I don't think I want to write a language specification.

Go R1 Day 21

Day 21 of 100


  • Signed up for, which is a pretty great website for working through progressively harder exercises.
  • Did Hello World to start, as the site requires progressive steps through the exercises.
  • Did a string concatenation exercise as well (Two Fer).

I like the mentor feedback system and the concept of submitting work. After I finish this, it would be good to add myself as a mentor and contribute back to this community. This is a fantastic way to get acclimated to a new language and work through progressively harder exercises to better learn the language usage.

SQL Server Meets AWS Systems Manager

Excited. Have a new solution in the works to deploy Ola Hallengren via SSM Automation runbook across all SQL Server instances with full scheduling and synchronization to S3. Hoping to get the ok to publish this soon, as I haven't seen anything like this built.


  • Building SSM Automation YAML doc from a PS1 file using AST & metadata
  • Download dependencies from s3 automatically
  • Credentials pulled automatically via AWS Parameter Store (could be adapted to Secrets Manager as well)
  • Leverage s5cmd for roughly 40x faster sync performance with no aws-cli required. It's a Go executable. #ilovegolang
  • Deployment of a job that automates flipping instances to FULL or SIMPLE recovery, similar to how RDS does this, for those cases where you can't control the creation scripts and want to flip SIMPLE to FULL for immediate backups.
  • Formatted deployment summary card sent with all properties to Microsoft Teams. #imissslack
  • Management of these docs via terraform.
  • Snippet for setting up an S3 lifecycle policy to automatically clean up old backups. (I prefer Terraform, but this is still good to know for retroactive fixes.)

I'm pretty proud of getting this done, as it replaces Cloudberry, which in my experience has a lot of trouble at scale. I've seen a lot of issues with Cloudberry when dealing with 1,000-3,000 databases on a server.

Once I get things running, I'll see if I can get this shared in full since it's dbatools + Ola Hallengren Backup Solution driven.

Also planning to add a few things, like sending a PagerDuty incident on failure, and other little enhancements to possibly enable better response handling.


Using AWS SDK With Go for EC2 AMI Metrics


The source code for this repo is located here:

What This Is

This is a quick overview of some AWS SDK Go work, but not a detailed tutorial. I'd love feedback from more experienced Go devs as well.

Feel free to submit a PR with tweaks or suggestions, or just comment at the bottom (which is a GitHub issue powered comment system anyway).

Image Age

Good metrics can help drive change. If you identify metrics that help you quantify areas of progress in your DevOps process, you'll have a chance to show the progress made and chart the wins.

Knowing the age of the image underlying your instances could be useful if you wanted to measure how often instances were being built and rebuilt.

I'm a big fan of making instances as immutable as possible, with less reliance on changes applied by configuration management and build oriented pipelines, and more baked into the image itself.

Even if you don't build everything into your image and are just doing "golden images", you'll still benefit from seeing the average age of images used go down. This would represent more continual rebuilds of your infrastructure. Containerization removes a lot of these concerns, but not everyone is in a place to go straight to containerization for all deployments yet.

What Using the SDK Covers

I decided this would be a good chance to use Go as the task is relatively simple and I already know how I'd accomplish this in PowerShell.

If you are also on this journey, maybe you'll find this detail inspiring to help you get some practical application in Go.

There are a few steps that would be required:

  1. Connection & Authorization
  2. Obtain a List of Images
    1. Filtering required
  3. Obtain List of Instances
  4. Match Images to Instances where possible
  5. Produce artifact in file form

Warning: I discovered that the SDK is pretty noisy and probably makes things a bit tougher than plain idiomatic Go.

If you want to learn pointers and dereferencing with Go... you'll be a pro by the time you are done with it 😂

Everyone Gets a Pointers According to SpongeBob

Why This Could Be Useful In Learning More Go

I think this is a pretty great, small, metric-oriented collector to focus on, as it ties in with several areas worth exploring in future versions.

Since the overall logic is simple there's less need to focus on understanding AWS and more on leveraging different Go features.

  1. Version 1: MVP that just produces a JSON artifact
  2. Version 2: Wrap up in a Lambda collector and produce an S3 artifact
  3. Version 3: Persist metrics to Cloudwatch instead as a metric
  4. Version 4: Datadog or Telegraf plugin

From the initial iteration I'll post, there's quite a bit of room for even basic improvement that my quick and dirty solution didn't implement.

  1. Use channels to run parallel sessions to collect multi-region metrics in less time
  2. Using sorting with the structs properly would probably cut down on overhead and execution time dramatically.
  3. Timeseries metrics output for Cloudwatch, Datadog, or Telegraf
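Improvement 1 above can be sketched without any AWS dependency. In this hedged example, fetchRegionCount is a hypothetical stand-in for the real per-region query; the point is the fan-out/fan-in plumbing with goroutines and a channel:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchRegionCount is a hypothetical stand-in for a per-region AWS call.
func fetchRegionCount(region string) int {
	return len(region) // placeholder work
}

type result struct {
	region string
	count  int
}

func main() {
	regions := []string{"us-east-1", "eu-west-1", "ap-southeast-2"}
	results := make(chan result, len(regions))

	// Fan out: one goroutine per region.
	var wg sync.WaitGroup
	for _, r := range regions {
		wg.Add(1)
		go func(region string) {
			defer wg.Done()
			results <- result{region: region, count: fetchRegionCount(region)}
		}(r)
	}
	wg.Wait()
	close(results)

	// Fan in: drain the channel once all workers finish.
	for res := range results {
		fmt.Printf("%s: %d\n", res.region, res.count)
	}
}
```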


  1. Still learning Go. Posting this up and welcoming any pull requests or comments (comments automatically open a GitHub issue).
  2. There is no proper isolation of functions and tests applied. I've determined it's better to produce and get some volume under my belt than to focus on immediately making everything best practice. Once I'm more familiar with proper Go structure, removing logic from main() and more will be important.
  3. This is not a complete walkthrough of all concepts, more a few things I found interesting along the way.

Some Observations & Notes On V1 Attempt


Writing to JSON is pretty straightforward, but what I found interesting was handling null values.

If you don't want the data type's default initialized value to be populated, you need to specify additional attributes in your struct tags to tell the serializer how to handle the data.

For instance, I didn't want a null AmiAge populated as 0, since that would mess up any averages you were trying to calculate.

type ReportAmiAging struct {
    Region             string     `json:"region"`
    InstanceID         string     `json:"instance-id"`
    AmiID              string     `json:"image-id"`
    ImageName          *string    `json:"image-name,omitempty"`
    PlatformDetails    *string    `json:"platform-details,omitempty"`
    InstanceCreateDate *time.Time `json:"instance-create-date"`
    AmiCreateDate      *time.Time `json:"ami-create-date,omitempty"`
    AmiAgeDays         *int       `json:"ami-age-days,omitempty"`
}
In this case, I just set omitempty so the field is dropped entirely when I pass in a nil pointer. For a much more detailed walk-through of this: Go's Emit Empty Explained


Here things got a little confusing, as I wanted to run this concurrently, but I shelved that for v1 to deliver results more quickly.

To initialize a new session, I provided my starting point.

sess, err := session.NewSession(&aws.Config{
    Region: aws.String("eu-west-1"),
})
if err != nil {
    log.Err(err).Msg("failed to initialize session")
}
log.Info().Str("region", *sess.Config.Region).Msg("initialized new session successfully")

Next, I had to gather all the regions. In my scenario, I wanted the flexibility to ignore regions that were not opted into, allowing fewer regions to be covered when this setting was correctly used in AWS.

// Create EC2 service client
client := ec2.New(sess)
regions, err := client.DescribeRegions(&ec2.DescribeRegionsInput{
    AllRegions: aws.Bool(true),
    Filters: []*ec2.Filter{
        {
            Name:   aws.String("opt-in-status"),
            Values: []*string{aws.String("opted-in"), aws.String("opt-in-not-required")},
        },
    },
})
if err != nil {
    log.Err(err).Msg("Failed to parse regions")
}

The filter syntax is pretty ugly. Due to the way the SDK works, you can't just pass in []string{"opted-in", "opt-in-not-required"} and reference it. Instead, you have to use the AWS helper functions to create pointers to the strings and then dereference them later. Deep diving into this further was beyond my time allotted, but it made my first usage feel somewhat clunky.
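To make the pointer ceremony concrete, here's a self-contained sketch of what the SDK's helpers are doing under the hood. strPtr and strSlice are hypothetical stand-ins that mimic aws.String and aws.StringSlice; the real code would use the SDK's own functions.

```go
package main

import "fmt"

// strPtr mimics aws.String: returns a pointer to the given string.
func strPtr(s string) *string { return &s }

// strSlice mimics aws.StringSlice: converts []string into []*string,
// the shape the SDK's filter Values field expects.
func strSlice(vals []string) []*string {
	out := make([]*string, len(vals))
	for i := range vals {
		out[i] = &vals[i]
	}
	return out
}

func main() {
	values := strSlice([]string{"opted-in", "opt-in-not-required"})
	for _, v := range values {
		fmt.Println(*v) // dereference to get the string back
	}
}
```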

After gathering the regions, you'd iterate and create a new client per region, similar to this.

for _, region := range regions.Regions {
    log.Info().Str("region", *region.RegionName).Msg("--> processing region")
    client := ec2.New(sess, &aws.Config{Region: region.RegionName})
    // Do your magic
}

Structured Logging

I've blogged about this before (mostly on microblog).

As a newer gopher, I've found that zerolog is pretty intuitive.

Structured logging is really important for using log tools and getting more value out of your logs later, so I personally like the idea of starting with it from the beginning.

Here you can see how to provide name-value pairs along with the message.

log.Info().Int("result_count", len(respInstances.Reservations)).Dur("duration", time.Since(start)).Msg("\tresults returned for ec2instances")

Using this provided some nice readable console feedback, along with values that a tool like Datadog's log parser could turn into values you could easily make metrics from.

Performance In Searching

From my prior blog post Filtering Results In Go I also talked about this.

The lack of syntactic sugar in Go means this seemed much more verbose than I was expecting.

A few key things I observed here were:

  1. Important to set your default layout for time if you want any consistency.
  2. Sorting algorithms, or even just basic sorting, would likely reduce the overall cost of a search like this (I'd bet pretty dramatically).
  3. Pointers. Everywhere. Coming from dynamic scripting languages like PowerShell and Python, this is a different paradigm. I'm used to isolated functions with less focus on passing in values to be modified directly. In .NET you can pass variables by reference, which is similar in concept, but it's not something I found much use for in scripting. I can see the massive benefits at scale, though, as reusing existing memory allocations via pointers avoids extra allocations and copies. Just have to get used to it!

// GetMatchingImage will search the ami results for a matching id
func GetMatchingImage(imgs []*ec2.Image, search *string) (parsedTime time.Time, imageName string, platformDetails string, err error) {
    layout := time.RFC3339 // "2006-01-02T15:04:05.000Z"
    log.Debug().Msgf("\t\t\tsearching for: %s", *search)
    // Look up the matching image
    for _, i := range imgs {
        log.Trace().Msgf("\t\t\t%s <--> %s", *i.ImageId, *search)
        if strings.EqualFold(*i.ImageId, *search) {
            log.Trace().Msgf("\t\t\t %s == %s", *i.ImageId, *search)

            p, err := time.Parse(layout, *i.CreationDate)
            if err != nil {
                log.Err(err).Msg("\t\t\tfailed to parse date from image i.CreationDate")
            }
            log.Debug().Str("i.CreationDate", *i.CreationDate).Str("parsedTime", p.String()).Msg("\t\t\tami-create-date result")
            return p, *i.Name, *i.PlatformDetails, nil
        }
    }
    return parsedTime, "", "", errors.New("\t\t\tno matching ami found")
}

Multiple Return Properties

While this can be done in PowerShell, I rarely did it in the manner Go does.

amiCreateDate, imageName, platformDetails, err := GetMatchingImage(respPrivateImages.Images, inst.ImageId)
if err != nil {
    log.Err(err).Msg("failure to find ami")
}

Feedback Welcome

As stated, feedback from more experienced Gophers is welcome. Anything for round 2.

Goals for that will be at a minimum:

  1. Use go test to run.
  2. Isolate main and build basic tests for each function.
  3. Decide to wrap up in lambda or plugin.


I asked my daughter (3) how much she loved me. She held up her hands and said: "Five".

I'll take that as a win considering that's all the fingers on that hand. 😂

Leave Me Alone

Free Means You Are the Product

Over time, I've begun to look at products that are free with more judgment. The saying is: "If it's free, you are the product". This often means your data and privacy are compromised as the product.

This has resulted in me looking more favorably at apps I would have dismissed in the past, such as Leave Me Alone.

Leave Me Alone

The notion of buying credits for something I could script, click, or do myself made me use it only sporadically last year. This year, I took the plunge, spent $10, and appreciate the concept and cost.

If you have a lot of tech interaction, you'll have a slew of newsletter and marketing subscriptions coming your way. This noise can drown out your email.

I saw one children's clothing store that got my email from a receipt generate an average of 64 emails a month!

Leave Me Alone simplifies the cleanup process by summarizing the noisiest offenders and offering one-click unsubscribes for any of them.

You can use an automatically generated rating based on ranked mailing-list value, read engagement, number of emails sent monthly, and more.

Take a look, the free start is enough to figure out if you like it.

Other Tools

Combine this type of tool with:

  • Kill The Newsletter
  • Inoreader (RSS Reader)
  • Subscription Score: a really promising tool made by the same folks, but I haven't added it at this time, as the price seems a bit high for this specific feature if I'm already using their app (currently $49 a year). It would be nice if this were a feature provided automatically to those who bought 250 credits or more, since it's powered by data mining the lists users unsubscribe from the most.

With this noise reduced, you'll be more likely to keep up to date. Last tip: add GitHub release notes for Terraform and others as subscriptions in your RSS reader; it might reduce the release noise coming through email and Slack.