
2021

Helm Is Like Hugo

It turns out Helm is pretty intuitive if you've already been working with something like Hugo, since both are Go template driven.

I was able to convert an entire K8s stack to Helm with a couple hours of work and render everything.

I have this habit of avoiding tools I perceive as complex in an attempt to reduce "another tool" syndrome for the people I work with. Sometimes it's important to keep in mind who is doing the majority of the editing and work, and worry less about the long-term solution than about delivery.

That's always a tough balance since I tend to think outside the scope of a single team due to the style of work I've done. I think I'm slowly getting there. 😀

Go R1 Day 86 - Wrap Up!

Finished!

Done! I've gone above and beyond 100 days, but I'm finding the blogging format takes a lot more effort to keep up when I'm doing a mix of puzzles, courses, and work.

Since my full-time job now includes Go development, I've exceeded this goal and am going to track any future training in a lower-overhead way, such as GitHub issues.

Was It Worth It?

Yes, it was worth it. It helped me break a large amount of learning back down into a daily rhythm of dedicated study. Since I'm doing full-time development and already code a big chunk of the day, I found it hard to document everything all the time.

What would I do differently?

I'd probably minimize the effort of documenting the process itself. While it's great to save notes and articulate things, saving them as part of the git log or an algorithm-style repo would be less trouble. Some of the work also lives in platforms like Leetcode, which aren't easy to extract directly. I'd reduce the overhead and focus on documenting core principles or concepts in a wiki-style format, rather than logging as much.

Using GitHub Issues might work really well too: you could post them to a log in bulk later, and otherwise the CLI-driven creation and kanban-board approach would minimize the overhead. That would be cool too, because you could have bots handle todos, stale items, and other chores for you.

Setup Sourcegraph Locally

I went through the Sourcegraph directions, but hit a few challenges since the majority of the code is behind SSH access with Azure DevOps.

I finally figured out how to do this with multiple repos in one config and no need to embed a token using HTTPS.

Navigate to manage-repos and use the config below.1 Better yet, use "Loading configuration via the file system (declarative config)" from the Sourcegraph docs and persist it locally in case you want to upgrade or rebuild the container.

```json
{
  "url": "ssh://git@ssh.dev.azure.com",
  "repos": [
    "v3/{MYORG}/{PROJECT_NAME}/{REPO}",
    "v3/{MYORG}/{PROJECT_NAME}/{REPO}"
  ]
}
```

For the JSON-based storage, try:

```json
{
  "GITHUB": [],
  "OTHER": [
    {
      "url": "ssh://git@ssh.dev.azure.com",
      "repos": [
        "v3/{MYORG}/{PROJECT_NAME}/{REPO}",
        "v3/{MYORG}/{PROJECT_NAME}/{REPO}"
      ]
    }
  ],
  "PHABRICATOR": []
}
```

To ensure SSH tokens are mounted, follow the directions here: SSH Access for Sourcegraph

```shell
cp -R $HOME/.ssh $HOME/.sourcegraph/config/ssh
docker run -d \
  -e DISABLE_OBSERVABILITY=true \
  -e EXTSVC_CONFIG_FILE=/etc/sourcegraph/extsvc.json \
  --publish 7080:7080 \
  --publish 127.0.0.1:3370:3370 \
  --volume $HOME/.sourcegraph/extsvc.json:/etc/sourcegraph/extsvc.json:delegated \
  --volume $HOME/.sourcegraph/config:/etc/sourcegraph:delegated \
  --volume $HOME/.sourcegraph/data:/var/opt/sourcegraph:delegated \
  sourcegraph/server:3.34.1
```


LSIF For Go

I didn't get this to work yet with my internal repos, but it's worth pinning, as Go module documentation for API docs can be generated for review as well. Change darwin to linux to use the Linux version.

```shell
go install github.com/sourcegraph/lsif-go/cmd/lsif-go@latest
sudo curl -L https://sourcegraph.com/.api/src-cli/src_darwin_amd64 -o /usr/local/bin/sourcegraph
sudo chmod +x /usr/local/bin/sourcegraph
```

{{< admonition type="Tip" title="Docker" open=true >}}

```shell
docker pull sourcegraph/lsif-go:v1.2.0
```

{{< /admonition >}}

Now index the code in the repo:

```shell
lsif-go
sourcegraph_host=http://127.0.0.1:7080
sourcegraph -endpoint=$sourcegraph_host lsif upload
```

  1. I removed --rm from the tutorial. 

Docker Healthchecks for Spinning Up Local Stacks

I've used a few approaches in the past with "wait-for-it" style containers.

I realized there are some great healthcheck features in Docker Compose, so I tried them out, and they worked perfectly for my setup.

This can be a great way to add container health checks in Docker Compose files, or directly in the Dockerfile itself.

```yaml
---
version: '3'

networks:
  backend:
  database:

volumes:
  mysql-data:

services:
  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 1s
      timeout: 3s
      retries: 30
  mysql:
    image: mysql:8.0
    env_file: ../env/.env # or use another path
    volumes:
      - mysql-data:/var/lib/mysql
      # This is the initialization path on first create.
      # Anything under the directory will be run in order (so use sorted naming like 01_init.sql, 02_data.sql, etc.)
      - ../db/mysql/schema/:/docker-entrypoint-initdb.d
    ports:
      - 3306:3306
    networks:
      - database
    healthcheck:
      # test: "/etc/init.d/mysql status"  # didn't work
      # The environment variable here is loaded from the .env file in env_file
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 1s
      timeout: 3s
      retries: 120

  # Example api service that now depends on both redis and mysql being healthy before starting
  api:
    image: api:latest
    env_file: ../env/.env
    ports:
      - 3000:3000
    networks:
      - backend
      - database
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_healthy
```
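For the Dockerfile route mentioned above, here's a minimal sketch of a baked-in HEALTHCHECK (the base image and `/healthz` endpoint are my own assumptions, not from the stack above):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache curl
# Mark the container unhealthy if the (assumed) /healthz endpoint stops responding.
HEALTHCHECK --interval=5s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:3000/healthz || exit 1
```

The compose-level healthcheck overrides this when both are defined, so the Dockerfile version is a good default for images consumed by others.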

Go R1 Day 85


🎉 Finished the Ultimate Syntax course.

Worked on the enumerations concept using iota.

I still find this very confusing in general.

Here's the gist I created.

Go R1 Day 84


Continued Ultimate Syntax (Ardan Labs - Bill Kennedy) and went back through various topics such as:

  • Pointers: One thing mentioned that resonated with me was the confusion around pointers in parameter declarations. I also find it strange that the dereference operator is used to denote a pointer value in the parameter list. I'd expect a pointer to be declared clearly with func (mypointer &int) rather than func (mypointer *int).
  • Literal Structs: Great points on avoiding "type exhaustion" by using literal structs whenever the struct is not reused in multiple locations.
  • Constants: Knowing that there is a parallel typing system for constants with "kind" vs "type" being significant helped me wrap my head around why constants often don't have explicit type definitions in their declaration.

Iota

This is one of the most confusing types I've used.

  • Iota only works in a block declaration.
```go
const (
  a = iota + 1 // 1 (iota starts at 0)
  b            // 2
  c            // 3
)
```

The course also showed using 1 << iota to do bit shifting. This is common in log packages (I'll have to look in the future, as bit shifting is something I've never really done).

Because of the kind system, you can't really make true enumerators with constants.

Best Practices

Don't use named types like type handle int in an effort to create enumerations. While it seems promising, it doesn't offer the protection you'd think, because of "kind" promotion: untyped constants promote to the named type implicitly, which destroys the ability to truly have enumerations in Go.

I've seen stringer used in some articles as well, but I'm not certain yet if it's considered idiomatic to approach enum-like generation this way.

Go R1 Day 83


Revisited Ultimate Syntax (Ardan Labs - Bill Kennedy) and went back through various topics such as:

  • Variables: When to use var vs the short declaration operator := for readability and zero-value initialization.
  • Type Conversions: How two identically structured types aren't the same in Go's eyes once they're named.
  • Pointers: General overview. Mostly stuff I knew, but good basic overview again.

Thermal Throttling Mac Intel Woes

I lost roughly half a day in productivity. CPU hit 100% for an extended period, making it difficult to even use the terminal.

Initially, I thought the culprit was Docker, as it was running some activity with local codespaces and linting tools. Killing Docker did nothing.

Htop pointed to kernel_task as the primary hog, and you can't kill that.

After digging around online, I found mentions of charging on the right side, not the left, to avoid thermal issues causing CPU throttling.

The white charger cable wasn't plugged in. The phone charger was, but the white cable to the laptop charger wasn't.

I was drawing power from the dock, which doesn't provide the same output as the Apple charger (seems to be a common issue).

This Stack Exchange question pointed me back to checking the charging: macos - How to find cause of high kernel_task cpu usage? - Ask Different

I was skeptical of this being the root cause of kernel CPU usage, but once I plugged in the charger, the CPU issue resolved itself within 30 seconds.

This is completely ridiculous. If throttling is occurring, a polished user experience would notify the user of insufficient power from the charger, not silently hammer performance. Additionally, it seems odd how many docking stations I've looked at for my Mac don't provide the minimum required power to sustain heavy usage.

While I still enjoy using the Mac, having 4 cables coming out of it at my desk, compared to my older Lenovo/HP docking station experience, feels subpar.