
2020

Go R1 Day 3

Day 3 of 100

progress

  • Learned about GOROOT, GOPATH and how to configure them
  • Ran into problems with Visual Studio Code reporting:
Failed to find the "go" binary in either GOROOT() or PATH(/usr/bin:/bin:/usr/sbin:/sbin). Check PATH, or Install Go and reload the window.
  • After attempting solutions with various profile files, I set "go.goroot": "/usr/local/opt/go/libexec/" in settings.json, and this resolved the issue (see the settings.json sketch below).
  • Once it recognized this, I ran Go: Current GOPATH from the command palette and it found the correct path.
  • Finally, it reported back feedback showing it recognized the latest version I was running.
  • Initialized a new serverless framework project from the aws-go-mod template using serverless create --template aws-go-mod --path ./sqlserver, and the initial project layout was created.
  • I'm sure this will need to be improved as I go along, but since macOS failed on the go path setup, this resolved my problems for now.
# GO: Make tools work in console sessions
# GOPATH: only one of HOME (macOS/Linux) or USERPROFILE (Windows) is set, so the other expands to nothing
$ENV:GOPATH = "$ENV:HOME$($ENV:USERPROFILE)/go"

if ($PSVersionTable.OS -match 'Darwin') {
    $ENV:GOROOT = "/usr/local/opt/go/libexec"
    # append only the Go bin directory instead of re-embedding the entire PATH
    $ENV:PATH += ":$(go env GOPATH)/bin"
    $ENV:GOBIN = "$(go env GOPATH)/bin"
}
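
For reference, the go.goroot setting mentioned above lives in VS Code's settings.json. A minimal sketch showing just that key (any other settings you already have stay as-is):

{
  "go.goroot": "/usr/local/opt/go/libexec/"
}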

random-microsoft-teams-quirks-01

  • Using cmd+= results in zooming in to 120%, 145%, 170%
  • Using cmd+- results in zooming out to 85%, 70%, 60%

How to Iterate Through A List of Objects with Terraform's for_each function

What I want to do

# create file local.users.yml
users:
  - name: foobar1
    email: foobar1@foobar.com
  - name: foobar2
    email: foobar2@foobar.com
  - name: foobar3
    email: foobar3@foobar.com

Then read it in with locals:

locals {
  users_file         = "local.users.yml"
  users_file_content = fileexists(local.users_file) ? file(local.users_file) : "NoSettingsFileFound: true"
  users_config       = yamldecode(local.users_file_content)
}

What I want to work:

resource "something" {
for_each local.users_config

name = each.key # or even each.value.name
email = each.value.email
}

What I've had to do

I've had challenges iterating through this collection; the only way I've gotten it to work has been to ensure there was a designated key in the yaml structure. This provides a map object in key/value format instead of a collection of plain objects.

This would result in a yaml format like:

users:
  - 'foobar1':
      name: foobar1
      email: foobar1@foobar.com
  - 'foobar2':
      name: foobar2
      email: foobar2@foobar.com
  - 'foobar3':
      name: foobar3
      email: foobar3@foobar.com

This provides the "key" for each entry, allowing Terraform's engine to correctly identify each unique entry. This is important, because without a unique key per resource a plan couldn't run deterministically; Terraform needs the key to correctly compare the previously created resource against the prospective plan.
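
A minimal sketch of consuming that keyed structure (assuming the same locals block as above and reusing the document's hypothetical foobar resource; merge() plus the expansion operator collapses the single-key entries into one map keyed by user name):

locals {
  # each list entry above is a single-key map ('foobar1' => { name, email }),
  # so merging them yields one map that for_each can key on
  users_by_key = merge(local.users_config.users...)
}

resource "foobar" "this" {
  for_each = local.users_by_key

  name  = each.key
  email = each.value.email
}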

Another Way Using Expressions

Iterating through a map has been the main way I've handled this, but I finally ironed out how to use expressions with Terraform so an object list can be the source of a for_each operation. This makes feeding Terraform plans from yaml or other input much easier to work with.

Most of the examples I've seen confused the issue by focusing on very complex flattening or other steps. From this stack overflow answer, I experimented and finally got my expression to work with only a single line.

resource "foobar" "this" {
    for_each = {for user in local.users_config.users: user.name => user}
    name     = each.key
    email    = each.value.email
}

This results in a simple yaml object list being correctly turned into something Terraform can work with, as it defines the unique key in the expression.

simple conditional flag in terraform

Sometimes, you just need a very simple flag for enabled or disabled, or perhaps just a resource to deploy if var.stage == "qa". This works well for a single resource as well as collections if you provide the splat syntax.

resource "aws_ssm_association" "something_i_need_in_testing_only" {
   count = var.stage == "qa" ? 1 : 0
   name = var.name
}
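
If you need to reference the conditional resource elsewhere, the splat syntax keeps that reference safe, since it simply yields an empty list when count is 0. A quick sketch (the output name is made up; association_id is the attribute I'd expect aws_ssm_association to export):

output "qa_association_id" {
  # empty list outside of qa, single-element list in qa
  value = aws_ssm_association.something_i_need_in_testing_only[*].association_id
}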

grave accent

TIL: What I've been calling the backtick 👉 ` 👈 for years is technically the grave accent.

Getting Started with Stream Analytics

Resources

  • If you want a schema reference for the json Application Insights produces: Azure Application Insights Data Model (Microsoft Docs)
  • If you want to visualize the last 90 days of App Insights data with Grafana: Monitor Azure services and applications using Grafana (Microsoft Docs)

The Scenario

Application Insights is integrated into your application and is sending the results to Azure. In my case, it was also being exported to blob storage, which can comprise your entire insights history.

Application Insights has some nice options to visualize data, Grafana included among them. However, the data retention as of this time is still set to 90 days. This means historical reporting is limited, and you'll need to utilize Continuous Export in the Application Insights settings to stream the content out into blob storage if you want to keep data beyond that window.

The process

  1. Install Visual Studio Azure Plugin
  2. Initialize a new Stream Analytics project in Visual Studio
  3. Import some test data
  4. (Optional) If using SQL Server as storage for Stream Analytics, design the schema.
  5. Write your Stream Analytics query, aka the asaql file.
  6. Debug and confirm you are happy with this.
  7. Submit job to Azure (stream from now, or stream and backfill)
  8. Configure Grafana or PowerBI to connect to your data and make management happy with pretty graphs.

Install Visual Studio Azure Plugin

I don't think this would have been a feasible learning process without having run this through Visual Studio, as the web portal doesn't provide such a smooth experience. Highly recommend using Visual Studio for this part.

Learning the ropes through the web interface can be helpful, but if you are exploring the data parsing you need a way to debug and test the results without waiting minutes to simply have a job start. In addition, you need a way to see the parsed results from test data to ensure you are happy with the results.

New Stream Analytics Project

(screenshot: stream analytics project)

Setup test data

Grab some blob exports from your Azure storage and sample a few of the earliest and latest of your json, placing them into a single json file. Put this in the inputs folder of your solution through Windows Explorer. After you've done this, right-click on the input file contained in your project and select Add Local Input. This local input is what you'll use to debug and test without having to wait for the cloud job. You'll be able to preview the content in Visual Studio just like when you run SQL queries and review the results in the grid.

Design SQL Schema

Unique constraints create an index. If you use a unique constraint, you need to be aware of the following info to avoid errors.

When you configure an Azure SQL database as output for a Stream Analytics job, it bulk inserts records into the destination table. In general, Azure Stream Analytics guarantees at least once delivery to the output sink; you can still achieve exactly-once delivery to SQL output when the SQL table has a unique constraint defined. Once unique key constraints are set up on the SQL table and duplicate records are inserted, Azure Stream Analytics removes the duplicates (see Common issues in Stream Analytics and steps to troubleshoot). Using that warning, create any unique constraints with the following syntax to avoid issues.

create table dbo.Example (
    ...
    ,constraint uq_TableName_internal_id_dimension_name
        unique ( internal_id, dimension_name ) with (IGNORE_DUP_KEY = on)
)

Stream Analytics Query

warning "Design Considerations" Pay attention to the limits and also to the fact you aren't writing pure T-SQL in the asaql file. It's a much more limited analytics syntax that requires you to simplify some things you might do in TSQL. It does not support all TSQL features. Stream Analytics Query Language Reference

Take a look at the query examples on how to use cross apply and into to quickly create Sql Server tables.
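
As a rough sketch of the shape of such a query (the input and output names and the json fields here are assumptions based on the Application Insights export format, not a drop-in query): CROSS APPLY with GetArrayElements flattens the metrics array, and INTO points the results at the SQL output.

SELECT
    flat.ArrayValue.name     AS metric_name,
    flat.ArrayValue.value    AS metric_value,
    i.context.data.eventTime AS event_time
INTO
    [sql-output]
FROM
    [blob-input] AS i
CROSS APPLY GetArrayElements(i.metrics) AS flat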

Backfilling Historical Data

When you start the job, the default start job date can be changed. Use a custom date and provide the date of the oldest data you have. For me, this correctly initialized the historical import, resulting in a long-running job that populated all the historical data from 2017 on.

Configure Grafana or PowerBI

Initially I started with Power BI. However, I found out that Grafana 5.1 and later has data source plugins for Azure and Application Insights, along with a dashboard to get you started. I've written on Grafana and InfluxDB in the past and am a huge fan of Grafana. I'd highly suggest you explore that, as it's free, while publishing to a workspace with Power BI can require a subscription that might not be included in your current MSDN or Office 365 membership. YMMV.

Filter Syntax

Filter Syntax Reference

I had to search to find details on the filtering but ended up finding the right syntax for doing partial match searches in the Filter Syntax Reference linked above. This also provides direct links to their ApiExplorer which allows testing and constructing api queries to confirm your syntax.

If you had a custom metric you were grouping by, such as customEvent/name, then the filter to match something like a save action could be:

startswith(customEvent/name, 'Save')

This would match the custom metrics you had saved that provide more granularity, which you'd otherwise have to specify individually like:

customEvent/Name eq 'Save(Customer)'
customEvent/Name eq 'Save(Me)'
customEvent/Name eq 'Save(Time)'
customEvent/Name eq 'Save(Tacos)'

Wrap-up

I only did this one project, so unfortunately I don't have exhaustive notes on this. However, some of the filter syntax and links were helpful to get me jump-started, and hopefully they'll be useful to anyone trying to get up and running like I had to.

setting default open with on macOS

It should be easy to pick a default program to open a file. On macOS, I was surprised at how poor the design was. Seriously, how is this intuitive? Open With > Set this as default. Apparently this only sets it for that individual file, which means every different csv file required me to do this again.

Instead, I had to Get Info > Unlock settings and then choose the default Open With setting, and further select Use this application to open all documents like this.

I enjoy most of my development experience with macOS.

Don't try and tell me that it is the pinnacle of usability though; some of this stuff is just quirky and overcomplicated. In what world should my default behavior be set on a specific file and not the file type?

Assume a role with AWS PowerShell Tools

Assume A Role

I've had some issues in the past working with AWS.Tools PowerShell SDK and correctly assuming credentials.

By default, most of the time it was easier to use a dedicated IAM credential setup for the purpose.

However, as I've wanted to run some scripts across multiple accounts, the need to simplify by assuming a role has been more important.

It's also a better practice than having to manage multiple key rotations in all accounts.

First, as I've had the need to work with more tooling, I'm not using the SDK encrypted json file.

Instead, I'm leveraging the ~/.aws/credentials profile in the standard ini format to ensure my tooling (docker included) can pull credentials correctly.

Configure your file in the standard format.

Setup a [default] profile in your credentials manually or through Initialize-AWSDefaultConfiguration -ProfileName 'my-source-profile-name' -Region 'us-east-1' -ProfileLocation ~/.aws/credentials.

If you don't set this, you'll need to modify the examples provided to include the source profile name.

{{< gist sheldonhull "e73dc7689be62dc7e8946d4ab948728b" "aws-cred-example" >}}

Next, ensure you provide the correct account number for the role you are trying to assume, while the MFA number is going to come from the "home" account you set up. For Invoke-Generate, I use a handy little generator from Install-Module NameIt -Scope CurrentUser -Confirm:$false.

{{< gist sheldonhull "e73dc7689be62dc7e8946d4ab948728b" "aws-sts-assume-role-example.ps1" >}}

Bonus: Use Visual Studio Code Snippets and drop this in your snippet file to quickly configure your credentials in a script with minimal fuss. 🎉

{{< gist sheldonhull "e73dc7689be62dc7e8946d4ab948728b" "vscode-snippet.json" >}}

I think the key area I've missed in the past was providing the mfa and token in my call, or setting this up correctly in the configuration file.
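
For illustration, here's a minimal sketch of that call with AWS.Tools (not the gist above; the account ids, role name, and MFA serial are placeholders, and the [default] profile configured earlier supplies the source credentials):

# requires AWS.Tools.Common and AWS.Tools.SecurityToken
$assumeRoleParams = @{
    RoleArn         = 'arn:aws:iam::123456789012:role/my-role-name'  # role in the target account
    RoleSessionName = 'my-session'                                   # or use Invoke-Generate from NameIt
    SerialNumber    = 'arn:aws:iam::999999999999:mfa/my-user'        # MFA device in the "home" account
    TokenCode       = (Read-Host 'MFA token')
}
$sts = Use-STSRole @assumeRoleParams

# use the temporary credentials for the rest of this session
Set-AWSCredential -AccessKey $sts.Credentials.AccessKeyId `
                  -SecretKey $sts.Credentials.SecretAccessKey `
                  -SessionToken $sts.Credentials.SessionToken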

Temporary Credentials

In the case of needing to generate a temporary credential, say for an environment-variable-based run outside of the SDK tooling, this might also be useful.

It's one example of further reducing risk vectors by only providing a time-limited credential to a tool you might be using (can limit to a smaller time-frame).

{{< gist sheldonhull "e73dc7689be62dc7e8946d4ab948728b" "generate-temporary-credentials.ps1" >}}
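
A minimal sketch of that idea (the MFA serial is a placeholder): create a short-lived session token and expose it as environment variables for tooling outside the SDK.

$session = Get-STSSessionToken -SerialNumber 'arn:aws:iam::999999999999:mfa/my-user' -TokenCode (Read-Host 'MFA token')

# standard AWS environment variables picked up by most tools (docker, terraform, etc.)
$ENV:AWS_ACCESS_KEY_ID     = $session.AccessKeyId
$ENV:AWS_SECRET_ACCESS_KEY = $session.SecretAccessKey
$ENV:AWS_SESSION_TOKEN     = $session.SessionToken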

AWS-Vault

Soon to come: using aws-vault to further improve the security of your AWS SDK credentials by simplifying role assumption and temporary sessions.

I've not ironed out exactly how to deal with some issues with using this great session tool when jumping between various tools such as PowerShell, python, docker, and more, so for now, I'm not able to provide all the insight. Hopefully, I'll add more detail to leveraging this once I get things ironed out.

Leave a comment if this helped you out or if anything was confusing so I can make sure to improve a quick start like this for others. 🌮

Go R1 Day 1

Day 1 of 100

progress

  • Cloned learning-go-with-tests to ensure a nice structured start, even though I've already done hello-world
  • Set up fresh gotools updates and ran golangci-lint through Docker to ensure improved linting options are ready for further tests
  • Fixed the default debug template in VS Code to use ${workspaceFolder} instead of the file directory (a sketch follows below). Strange that it defaulted to the wrong path.
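
A minimal sketch of the adjusted debug configuration in launch.json (the standard Go debug template; only the program path is the relevant change here):

{
  "name": "Launch Package",
  "type": "go",
  "request": "launch",
  "mode": "debug",
  "program": "${workspaceFolder}"
}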