
2021

Go R1 Day 70

progress

  • Concurrency section wrap-up with Learn Go With Tests.
  • Reviewed material learned from: Go R1 Day 61
  • Read the material, but didn't write many tests for this section since it was mostly concept oriented. Used the concurrent progress bar example from the uiprogress project to test concurrent UI updates.
  • My last concurrency test case was to launch many concurrent processes for a load test. This didn't leverage goroutines as typically used, since it was calling an executable on the host machine. However, it provided a great use case for something I've done before with DevOps oriented work and showed how to use concurrency as a blocking operation. Once the user was done with the test, ctrl+c killed the active requests and the program exited (see the signal-handling sketch after this list).
  • I need more practice with channels. I only wanted error and stdout content, so I didn't have any need to receive channel output back in a structured way. This is probably an atypical use of concurrency, fitting for an external load test, but not for internal Go code.
  • Still found it pretty cool that I could spin up 500 processes at once, with far less overhead than doing the same in PowerShell.
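
A rough sketch of that blocking pattern (not the exact code from the day; mybinary and its arguments are placeholders): launch each external process in a goroutine, then block until every process exits or ctrl+c cancels them through a context.

package main

import (
    "context"
    "os"
    "os/exec"
    "os/signal"
    "sync"

    "github.com/rs/zerolog/log"
)

func main() {
    // NotifyContext cancels ctx when ctrl+c (SIGINT) arrives, which in turn kills the child processes.
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
    defer stop()

    var wg sync.WaitGroup
    for i := 0; i < 500; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Placeholder command; the real load test called its own binary with generated headers.
            cmd := exec.CommandContext(ctx, "mybinary", "serve")
            if err := cmd.Run(); err != nil {
                log.Error().Err(err).Msg("process exited with error")
            }
        }()
    }
    // Block until every process has exited or ctrl+c cancels them.
    wg.Wait()
}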

{{< admonition type="Note" title="Example Of Doing In PowerShell" open=true >}} Doing this in PowerShell is far more compact, but not as performant.

This is a good example of the difference in using Go for ad hoc tasks. It requires more code and more care with error handling, but it pays off in something that is likely more stable and easier to run across multiple systems as a single binary.

#!/usr/bin/env pwsh
$Server = 'IPADDRESS'
$ServerPort = '3000'
Write-Host 'Load Test Start'

# Launch one thread job per port in the range.
$j = @(4000..4100) | ForEach-Object {
    $c = $_
    Start-ThreadJob -ThrottleLimit 1000 -StreamingHost $Host -InputObject $c -ScriptBlock {
        $RandomPort = $input # port passed in; not currently used by the command
        &mybinary serve --max-retry-count 5 --header "user-id: $(petname)" --header "session-id: $(uuidgen)" "${using:Server}:${using:ServerPort}"
    }
}
$j | Wait-Job | Receive-Job
$j | Stop-Job

I didn't benchmark the total load difference between this and Go, but I'm sure the pwsh threads were a bit more costly, though for this test case the count may not have been large enough to make much difference.

{{< /admonition >}}

Code Examples

This first section is the startup. Key points:

  • main() is the entry point for the program, but doesn't contain the main logic flow. Inspired by Mat Ryer's posts, I now try to keep main as minimal as possible to make automated testing easier. Since run contains the main logic flow, the CLI itself can be exercised in an integration test by exporting it as Run() and calling it from a test file using a blackbox testing approach. A sketch of such a test follows the run function below.
package main

import (
    "bytes"
    "errors"
    "flag"
    "fmt"
    "io"
    "math"
    "os"
    "os/exec"
    "strings"
    "sync"
    "time"

    shellescape "github.com/alessio/shellescape"
    petname "github.com/dustinkirkland/golang-petname"
    "github.com/google/uuid"
    "github.com/pterm/pterm"
    "github.com/rs/zerolog"
    "github.com/rs/zerolog/log"
)

const (
    // exitFail is the exit code if the program
    // fails.
    exitFail = 1

    // desiredPort is the port that the app forwards traffic to.
    desiredPort = 22

    // petNameLength is the number of words to generate for the petname.
    petNameLength = 2

    // startingPort is the starting port for a new connection, and will increment up from there so each connection is unique.
    startingPort = 4000

    // maxRetryCount is the number of times to retry a connection.
    maxRetryCount = 5
)

func main() {
    if err := run(os.Args, os.Stdout); err != nil {
        fmt.Fprintf(os.Stderr, "%s\n", err)
        os.Exit(exitFail)
    }
}

Next, run contains the main logic flow. The goal is that all program logic for exiting and terminating is handled in this single location.

// run handles the arguments being passed in from main, and lets us run tests against the startup of the code much more easily than embedding all the startup logic in main().
// This is based on Mat Ryer's post: https://pace.dev/blog/2020/02/12/why-you-shouldnt-use-func-main-in-golang-by-mat-ryer.html
func run(args []string, stdout io.Writer) error {
    if len(args) == 0 {
        return errors.New("no arguments")
    }
    InitLogger()
    zerolog.SetGlobalLevel(zerolog.InfoLevel)

    debug := flag.Bool("debug", false, "sets log level to debug")
    Count := flag.Int("count", 0, "number of processes to open")
    delaySec := flag.Int("delay", 0, "delay between process creation. Default is 0")
    batchSize := flag.Int("batch", 0, "number of processes to create in each batch. Default is 0 to create all at once")
    Server := flag.String("server", "", "server IP address")
    ServerPort := flag.Int("port", 3000, "server port") //nolint:gomnd

    flag.Parse()
    log.Logger.Info().Int("Count", *Count).
        Int("delaySec", *delaySec).
        Int("batchSize", *batchSize).
        Str("Server", *Server).
        Msg("input parsed")

    log.Logger.Info().
        Int("desiredPort", desiredPort).
        Int("petNameLength", petNameLength).
        Int("startingPort", startingPort).
        Msg("default constants")

    if *debug {
        zerolog.SetGlobalLevel(zerolog.DebugLevel)
    }

    RunTest(*Count, *delaySec, *batchSize, *Server, *ServerPort)
    return nil
}
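
A minimal sketch of that test approach (hypothetical test file; because run() currently parses the global flag set, a fuller integration test would have it accept its own flag.FlagSet):

package main

import (
    "bytes"
    "testing"
)

// TestRunRequiresArgs exercises run() directly instead of main(), which is the
// payoff of keeping main() minimal. This sketch only covers the argument guard.
func TestRunRequiresArgs(t *testing.T) {
    if err := run([]string{}, &bytes.Buffer{}); err == nil {
        t.Error("expected an error when no arguments are provided")
    }
}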

Next, InitLogger initializes the zerolog logger. I don't need multiple configurations right now, so this is just console output to stdout.

// InitLogger sets up the logger magic
// By default this is only configured to do pretty console output.
// JSON structured logs are also possible, but not in my default template layout at this time.
func InitLogger() {
    output := zerolog.ConsoleWriter{Out: os.Stdout, TimeFormat: time.RFC3339}

    output.FormatLevel = func(i interface{}) string {
        return strings.ToUpper(fmt.Sprintf("| %-6s|", i))
    }
    output.FormatMessage = func(i interface{}) string {
        return fmt.Sprintf("%s", i)
    }
    output.FormatFieldName = func(i interface{}) string {
        return fmt.Sprintf("%s:", i)
    }
    output.FormatFieldValue = func(i interface{}) string {
        return strings.ToUpper(fmt.Sprintf("%s", i))
    }
    // Attach the configured console writer after the formatters are set,
    // otherwise the customizations never take effect.
    log.Logger = log.With().Caller().Logger().Output(output)
    log.Info().Msg("logger initialized")
}

Test for the existence of the binary being run in the load test, and exit if it doesn't exist. This would more likely belong in the run function, but I did it here for simplicity in this ad hoc tool.

// TestBinaryExists checks to see if the binary is found in PATH and exits with failure if can't find it.
func TestBinaryExists(binary string) string {
    p, err := exec.LookPath(binary)
    if err != nil {
        log.Logger.Error().Err(err).Str("binary",binary).Msg("binary not found")
        os.Exit(exitFail)
    }

    return p
}

Next, buildCliArgs handles building the argument string slice. I learned to keep each argument as its own slice element, since escaping has some strange behavior if you try to combine too much into a single string, especially with spaces. Best practice is to keep this very simple; a short contrast follows the function below.

// buildCliArgs is an example function of building arguments via string slices.
// The port parameter is accepted for future use but isn't part of the command yet.
func buildCliArgs(Server string, ServerPort int, port int) (command []string) {
    command = append(command, "server")
    command = append(command, "--header")
    command = append(command, fmt.Sprintf(`user-id: %s`, petname.Generate(petNameLength, "-")))
    command = append(command, "--header")
    command = append(command, fmt.Sprintf(`session-id: %s`, uuid.Must(uuid.NewRandom()).String()))
    command = append(command, "--max-retry-count", fmt.Sprintf("%d", maxRetryCount))
    command = append(command, Server+":"+fmt.Sprintf("%d", ServerPort))
    return command
}
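
To illustrate the escaping point (a hypothetical snippet, not from the project): exec.Command does no shell splitting, so a flag and its value combined into one element are passed to the process as a single argument.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Each slice element is passed to the process as-is; there is no shell splitting.
    good := exec.Command("echo", "--header", "user-id: some-pet-name") // flag and value stay separate arguments
    bad := exec.Command("echo", "--header user-id: some-pet-name")     // one combined argument, flag parsing breaks

    fmt.Println(len(good.Args), len(bad.Args)) // 3 2
}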

Finally, a function that runs the tests with some pretty output using pterm. This would probably be better broken up for testing, but again, it was an ad hoc project, so this ended up working decently as I was learning concurrency.

// RunTest is the main test function that calculates the number of batches and then launches the process creation in goroutines.
func RunTest(Count int, delaySec int, batchSize int, Server string, ServerPort int) {
    log.Logger.Info().Msg("RunTest starting")
    if batchSize <= 0 {
        // A batch size of 0 means create everything at once.
        batchSize = Count
    }
    totalBatches := math.Ceil(float64(Count) / float64(batchSize))
    log.Logger.Info().Float64("totalBatches", totalBatches).Msg("batches to run")
    myBinary := TestBinaryExists("binaryname")
    port := startingPort
    var wg sync.WaitGroup

    totals := 0
    p, _ := pterm.DefaultProgressbar.WithTotal(Count).WithTitle("load test runs").Start()

    for i := 0; i < int(totalBatches); i++ {
        log.Debug().Int("i", i).Int("port", port).Msg("batch number")

        for j := 0; j < batchSize; j++ {
            if totals == Count {
                log.Debug().Msg("totals == Count, breaking out of loop")

                break
            }

            totals++
            log.Debug().Int("i", i).Int("totals", totals).Msg("progress")
            cmdargs := buildCliArgs(Server, ServerPort, port)
            wg.Add(1)
            go func() {
                defer wg.Done()
                buf := &bytes.Buffer{}
                cmd := exec.Command(myBinary, cmdargs...)
                cmd.Stdout = buf
                cmd.Stderr = buf
                if err := cmd.Run(); err != nil {
                    log.Logger.Error().Err(err).Bytes("output", buf.Bytes()).Msg("command failed")
                    os.Exit(exitFail)
                }
                log.Logger.Debug().Msgf("command: %v", shellescape.QuoteCommand(cmdargs))
                log.Logger.Debug().Bytes("output", buf.Bytes()).Msg("command output")
            }()

            p.Title = "port: " + fmt.Sprintf("%d", port)
            p.Increment()
            port++
        }
        time.Sleep(time.Second * time.Duration(delaySec))
    }
    // Wait for all launched processes before declaring the run finished.
    wg.Wait()
    p.Title = "load test finished"
    _, _ = p.Stop()
}

Go R1 Day 71

progress

  • Learn Go With Tests -> Using select with channels to wait for multiple goroutines.
  • Of particular interest is this:

Notice how we have to use make when creating a channel; rather than say var ch chan struct{}. When you use var the variable will be initialised with the "zero" value of the type. So for string it is "", int it is 0, etc. For channels the zero value is nil and if you try and send to it with <- it will block forever because you cannot send to nil channels (go-fundamentals-select listed below)

  • Used httptest to create a mock server for faster testing, and included a wrapper around the calls to allow configuring the timeout. This keeps tests running in milliseconds, while the default behavior in a deployment would be 10 seconds or more. A sketch of the pattern follows this list.
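
A minimal sketch of the select pattern from the Learn Go With Tests chapter (the Racer naming and the 10-second default come from that exercise style, not from code in this repo):

package racer

import (
    "fmt"
    "net/http"
    "time"
)

const tenSecondTimeout = 10 * time.Second

// Racer returns whichever URL responds first, using select to wait on
// multiple channels; the configurable timeout keeps tests fast.
func Racer(a, b string) (string, error) {
    return ConfigurableRacer(a, b, tenSecondTimeout)
}

func ConfigurableRacer(a, b string, timeout time.Duration) (string, error) {
    select {
    case <-ping(a):
        return a, nil
    case <-ping(b):
        return b, nil
    case <-time.After(timeout):
        return "", fmt.Errorf("timed out waiting for %s and %s", a, b)
    }
}

// ping must use make; a nil channel would block forever on receive.
func ping(url string) chan struct{} {
    ch := make(chan struct{})
    go func() {
        http.Get(url) //nolint:errcheck // only the timing matters here
        close(ch)
    }()
    return ch
}

In the tests, httptest.NewServer supplies the two URLs and ConfigurableRacer gets a millisecond-level timeout, which is what keeps the suite fast.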

Go R1 Day 68

progress

  • Did exercism.io for gigasecond puzzle.
package gigasecond

// import path for the time package from the standard library
import (
    "time"
)

// gigasecond is one billion (1e9) seconds, which works out to roughly 31.7 years.
const gigasecond = 1000000000

// AddGigasecond adds a gigasecond (one billion seconds) to the provided time input.
func AddGigasecond(t time.Time) time.Time {
    gcDuration := gigasecond * time.Second
    n := t.Add(gcDuration)
    return n
}
  • Learned a bit more about using math.Pow(), converting between floats and ints, and dealing with time.Duration.
  • Tried using math.Pow() to work through the issue, but got mixed up because time.Duration() counts in nanoseconds. Went ahead and just used a constant for the exercise, as I'm not likely to use gigaseconds anytime soon. 😀 A short sketch of the math.Pow() approach follows.
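
A small sketch of what that math.Pow() route could look like (my own illustration, not the exercism solution):

package main

import (
    "fmt"
    "math"
    "time"
)

func main() {
    // math.Pow works in float64, so the result needs a conversion before it
    // can be used as a count of seconds.
    seconds := int64(math.Pow(10, 9))

    // time.Duration is an int64 count of nanoseconds, so multiply by
    // time.Second rather than passing the raw number to time.Duration().
    d := time.Duration(seconds) * time.Second

    t := time.Date(2021, time.January, 1, 0, 0, 0, 0, time.UTC)
    fmt.Println(t.Add(d)) // 2052-09-09 01:46:40 +0000 UTC
}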

Go R1 Day 67

progress

Built functionality in my blog repo to create a new 100DaysOfCode post using Mage. This provides an interactive prompt that automatically tracks the days left and increments the counter as it progresses. The rough flow (sketched after this list):

  • ingest toml configuration
  • unmarshal to struct
  • update struct
  • marshal and write back to the toml configuration file
  • replace matched tokens in file
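
A minimal sketch of that flow, assuming the BurntSushi/toml package and a hypothetical Progress struct (the field names and file name are placeholders, and the token replacement step is omitted):

package main

import (
    "os"

    "github.com/BurntSushi/toml"
)

// Progress is a hypothetical stand-in for the real blog configuration.
type Progress struct {
    Round    int `toml:"round"`
    Day      int `toml:"day"`
    DaysLeft int `toml:"days_left"`
}

func main() {
    // Ingest the toml configuration and unmarshal it into the struct.
    var p Progress
    if _, err := toml.DecodeFile("progress.toml", &p); err != nil {
        panic(err)
    }

    // Update the struct.
    p.Day++
    p.DaysLeft--

    // Marshal and write it back to the toml configuration file.
    f, err := os.Create("progress.toml")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    if err := toml.NewEncoder(f).Encode(p); err != nil {
        panic(err)
    }
}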

Go R1 Day 66

progress

This wasn't specific to Go, but was the first step towards using Go in a distributed test.

Dapr

I had an interesting project today with my first development level effort using Kubernetes. Here's my log of attempting to use Getting started with Dapr | Dapr Docs and getting two Go APIs to talk to each other with it.

First, what is Dapr?

Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. 1 ... Dapr codifies the best practices for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application.

From this, it sounds like Dapr helps solve these issues by abstracting the "building blocks" away from the business logic. Rather than focusing on the implementation-level concern of how one service talks to another, Dapr handles that plumbing.

Instead of relying on a provider-specific key-value store, such as AWS SSM Parameter Store, Dapr abstracts that too.

It's interesting as this concept of abstraction on a service level is something new to me. Good abstractions in software are hard but critical to maintainability long-term. Provider-level abstractions are something on an entirely different scale.

Setup

  • Enable Kubernetes on Docker Desktop.
  • Install Lens: brew install lens
  • Pop this open and Cmd+, to get to settings.
  • Add dapr helm charts: https://dapr.github.io/helm-charts/
  • Connect to local single-node Kubernetes cluster and open the charts section in Lens.
  • Install Dapr charts.
  • Celebrate your mastery of all things Kubernetes.

Master Of Kubernetes

I think I'll achieve the next level when I don't do this in Lens. I'll have to eventually use some cli magic to deploy my changes via helm or level-up to Pulumi. 😀 Until then, I'll count myself as victorious.

A Practical Test

Go R1 Day 65

progress

  • Built mage tasks for go formatting and linting.

Using this approach, you can now drop a magefile.go file into a project and set the following:

// +build mage

package main

import (
    "github.com/magefile/mage/mg"
    "github.com/pterm/pterm"

    // mage:import
    "github.com/sheldonhull/magetools/gotools"
)

Calling this can be done directly now as part of a startup task.

// Init runs multiple tasks to initialize all the requirements for running a project for a new contributor.
func Init() error {
    fancy.IntroScreen(ci.IsCI())
    pterm.Success.Println("running Init()...")
    mg.Deps(Clean, createDirectories)
    if err := (gotools.Golang{}.Init()); err != nil {  // <----- From another package.
        return err
    }

    return nil
}

Additionally, I handled some Windows executable path issues by making sure to include the .exe extension when resolving the tool path.

// if windows detected, add the exe to the binary path
var extension string
if runtime.GOOS == "windows" {
  extension = ".exe"
}
toolPath := filepath.Join("_tools", item+extension)

First Pass With Pulumi

Why

Instead of learning a new domain-specific language that wraps cloud provider APIs, this lets the developer use their preferred programming language, while solving several problems that using the APIs directly doesn't solve.

  • Ensure the deployment captures a state file of the changes made.
  • Workflow around the previews and deployments.
  • Easily automated policy checks and tests.

This can be a really useful tool to bring infrastructure code maintainability directly into the lifecycle of the application.

Among DevOps folks it's subjective whether this would also apply to "Day 0-2" type operations, which typically involve less frequently changed resources such as account settings, VPCs, and other more static infrastructure.

However, with a team experienced with Go or other tooling, I could see that this would provide a way to have much more programmatic control, loops, and other external libraries used, without resorting to the HCL DSL way of doing resource looping and inputs.

First Pass

First impression was very positive!

Basic steps:

  • brew install pulumi
  • pulumi new aws-go
  • Entered name of test stack such as aws-vpc.
  • Copied the VPC snippet from their docs and then plugged in my own tag for naming, which by default wasn't included.
  • Reproduced the example for pulumi.String().1
package main

import (
    "flag"
    "strings"

    petname "github.com/dustinkirkland/golang-petname"
    "github.com/pulumi/pulumi-aws/sdk/v4/go/aws/ec2"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

var (
    words     = flag.Int("words", 2, "The number of words in the pet name")
    separator = flag.String("separator", "-", "The separator between words in the pet name")
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        conf := config.New(ctx, "")
        stage := conf.Require("stage")
        petname := petname.Generate(*words, *separator)
        _, err := ec2.NewVpc(ctx, stage, &ec2.VpcArgs{
            CidrBlock: pulumi.String("10.0.0.0/16"),
            Tags: pulumi.StringMap{
                "Name": pulumi.String(strings.Join([]string{stage, petname}, "-")),
            },
        })
        if err != nil {
            return err
        }

        return nil
    })
}

Positive Observations

  • Running pulumi destroy left the stack in the console for full plan history and auditing. To remove the stack from the web you'd run: pulumi stack rm dev. This is similar to how terraform cloud workspaces work and gives confidence of easier auditing by default.
  • The console experience and browser integration was beautifully done.
  • pulumi preview --emoji provided a very clean and succinct summary of changes.
  • pulumi up also was very clean, and allowed a selection to expand the details as well.
  • Browser for stack provides full metadata detail, resource breakdown, audit history, and more.

Great Console Preview & Interaction Experience

  • The Pulumi docs for Azure DevOps were pretty solid! Full detail and a complete walkthrough. As an experienced PowerShell developer, I was pleasantly surprised by quality PowerShell code that overall was structured well.2

  • Set some values via yaml easily with pulumi config set --path 'stage' 'dev', which results in:

config:
  mystack:stage: dev
  aws:region: myregion

This is then read via:

conf := config.New(ctx, "")
stage := conf.Require("stage")

Things To Improve

  • Missing the benefit of the Terraform module registry.
  • Pulumi Crosswalk sounds pretty awesome to help with this. However, I wasn't able to find the equivalent of a "crosswalk module library" to browse so that part might be a future improvement.

This document link: AWS Virtual Private Cloud (VPC) | Pulumi seemed great as a tutorial, but it wasn't immediately clear how I could use it with Go.

I looked at aws · pkg.go.dev but didn't see any equivalent to the documented awsx package shown in the Node.js library.

Finally, found my answer.

Pulumi Crosswalk for AWS is currently supported only in Node.js (JavaScript or TypeScript) languages. Support for other languages, including Python, is on the future roadmap. Pulumi Crosswalk for AWS | Pulumi

I wish this were called out as a big disclaimer right at the top of the Crosswalk section to make it very clear.


  1. This feels very similar in style to the AWS SDK, which doesn't allow plain string values, but requires pointers to strings and thus wraps the strings with expressions such as aws.String(...).

  2. Subjective, but I noticed boolean parameters instead of switches; switches would slightly simplify the build scripts, but that is more of a "nit" than a critical issue. Using if blocks instead of switch might also clean things up, but overall the script was pretty well written, which seems rare in vendor-provided PowerShell examples. 👏

Go R1 Day 64

progress

Wrote: First Pass with Pulumi

At $work, I'm working primarily with Go developers. This was an exploration of using Go for infrastructure.

Read a bit on CDK for Terraform as well, which seems interesting.

SweetOps Slack Archive

Just wanted to give props to the Cloudposse team led by Erik Osterman @eosterman.

Slack provides a great community chat experience, but there are quite a few problems with using it for Q&A.1 Since the free plan for communities hides content over 10,000 messages, a healthy community will pass this limit quickly.

With all the great conversations, I want to find prior discussions to benefit from topics already covered.

Cloudposse archives the community discussions so they can be searched in the future: SweetOps Archive.

Pro-Tip Search Aliases

If you use Alfred, you can set up an alias for this, or use a Chrome search engine alias. To use a Chrome search engine alias, go to: Search Engines and add a new entry.

  • Search Engine: cloudposse
  • Keyword: cloudposse
  • URL with %s in place of query: https://archive.sweetops.com/search?query=%s

For any future search, just type cloudposse in the search bar, and whatever you type after that will open up in the archive search.

Search Using Alfred

Search Using Chrome Search Engine Alias


  1. I don't think Cloudposse or many others deny that Slack is "inferior" for threaded conversation compared to a tool like Discourse. However, despite it being a "walled garden", it's a lot easier to get engagement there than on a forum, from what I understand. This solution provides a nice middle ground by giving the ease of Slack, while ensuring great conversation is still captured and able to be found.