My blog! (DillonAd)
Jekyll feed: https://codejanitor.dev/feed.xml
Generated: 2025-10-23T05:25:35+00:00

Unable to find Ruby class that definitely exists
2025-01-12
https://codejanitor.dev/blog/2025/01/12/Unable-To-Find-Ruby-Class-That-Defintiely-Exists

Recently I have been learning Ruby on Rails! With that learning come a lot of lessons, and this one was both fun and frustrating for me. While working in the Rails project, I created a new class and RSpec tests for that class.

/app/thing/thingy_doer.rb

class Thing::ThingDoer
# ...
end

/spec/thing/thing_doer_spec.rb

RSpec.describe Thing::ThingDoer do
# ...
end

I ran the tests and got the following error:

NameError:
  uninitialized constant Thing::ThingDoer

Crazy, right? I specified the class name in the test exactly as it is spelled in the class definition, but RSpec can’t find the class. Why?!

The issue is that Rails has a loading convention that maps file names to class names: in current versions, the Zeitwerk autoloader expects the class Thing::ThingDoer to live in a file named thing/thing_doer.rb. Due to the mismatch between the thingy_doer.rb file name and the ThingDoer class name, Rails can’t find the class.
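The convention is mechanical: each underscored path segment is camelized, and segments are joined with ::. Here is a small pure-Ruby sketch of that mapping (my illustration of the idea, not the actual Zeitwerk implementation):

```ruby
# Sketch of how an autoloader derives the expected constant from a file
# path under app/ (illustrative only; real Zeitwerk handles many more cases,
# like acronym inflections).
def expected_constant(path)
  path.sub(/\.rb\z/, '')                                  # drop the extension
      .split('/')                                         # one segment per namespace level
      .map { |segment| segment.split('_').map(&:capitalize).join }
      .join('::')
end

puts expected_constant('thing/thingy_doer.rb') # "Thing::ThingyDoer" -- not Thing::ThingDoer!
puts expected_constant('thing/thing_doer.rb')  # "Thing::ThingDoer"
```

With the typo in place, the loader is looking for Thing::ThingyDoer, so the constant the spec references is never defined.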

The solution is to correct the typo in the file name. Once the file is renamed from thingy_doer.rb to thing_doer.rb, everything works as expected!

5 examples, 0 failures
Dillon Adams
Git-ing Started
2024-01-22
https://codejanitor.dev/blog/2024/01/22/Git-ing-Started

When starting to learn Git, you will see a lot of people claiming that Git is complicated and hard to use. Mostly because Git is complicated and hard to use. You can do a lot of wild and crazy things with Git, but to function on a day-to-day basis, only a few commands are needed for the basic Git workflow.

The Basic Workflow

First we need to clone the repository. This can be an existing repository or a new one that was just created. Cloning copies the repository code down to your computer and sets you up with the latest commit on the default branch (typically named main or master).

git clone <RepositoryURL>

Most workflows won’t allow you to make changes directly to the default branch, so you will have to make your own branch. This is an area where you can make your changes.

git checkout -b <BranchName>

That command is shorthand for creating a branch and then switching to that branch. The long way around is these two commands.

git branch <BranchName>
git checkout <BranchName>

Once your changes are made, you will need to stage them to be committed/saved. You can add files individually, but I tend to just include everything. If you want to add specific directories or files, replace the . in the command with the path to the file or directory.

git add .
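For example, staging only part of the working tree looks like this (a self-contained sketch using a scratch repository and hypothetical file names):

```shell
# Scratch repository to demonstrate staging specific paths.
cd "$(mktemp -d)" && git init -q .
mkdir docs && echo notes > docs/a.txt && echo readme > README.md

git add README.md   # stage a single file
git add docs/       # stage everything under one directory
git status --short  # both paths now show as staged ("A")
```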

Next we want to record our changes. We do this by committing them with a (hopefully) meaningful commit message. If every commit message is the same generic text, it can create a terrible situation when you need to do more advanced things down the road, such as hunting for the commit that introduced a bug.

git commit -m "<Message>"

Last, but certainly not least, we want to persist our changes outside of our computer. We need to push them up to the server that we initially cloned them from. For new branches, we will need to specify the target branch on the server.

git push --set-upstream origin <BranchName>

If the branch has already been pushed, we can use a simple push.

git push

Giving up

Every once in a while, you get to a point with a branch where you just want to reset everything back to the last commit. The specific motivations behind this are wide-ranging, but each is valid and you deserve a solution.

git checkout .
git clean -fdx

The checkout command resets tracked files back to their last committed state. The clean command deletes any newly created files (f), directories (d), and even files that Git is set to ignore (x).
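Since clean deletes files for good, it is worth previewing the damage first with the dry-run flag:

```shell
git clean -ndx   # -n (--dry-run): prints "Would remove ..." for each file, deletes nothing
```

If the list looks right, rerun with -f in place of -n to actually delete.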

Conclusion

There are a lot more features in Git that people have literally written books about. Once you have mastered the basics, I would recommend looking into the more advanced features. I have been using Git for about a decade at this point and I rarely use the more advanced features of this tool, so I highly recommend focusing on the things that you will actually use.

If you end up in a situation where you are stuck, there is a website called Oh Shit, Git!?! that is an excellent resource.

Hopefully this helps you Git going!

Dillon Adams
Versions Are For Humans
2023-07-06
https://codejanitor.dev/blog/2023/07/06/Versions-Are-For-Humans

What does it mean to version a piece of software? I believe it applies a meaningful label indicating the capabilities of the software at that point in time. As software engineers, the most common method of doing this is semantic versioning. Each number in a semantic version conveys a meaning for a human to understand and use as a basis for their decisions.

If these version numbers are meant for humans to derive meaning from, the incrementation of any part of the version should not be automated. Automatically incrementing a version strips it of its meaning. Let’s say that we increment the patch version of a software package each time we merge to the main branch. The issue is that anyone consuming this package has no clue whether the new version contains breaking changes, new features, or just bug fixes.

Version numbers are meant for humans, and they should be set by humans. The engineers making changes to a software package should set the new version number as a part of their change. They are the ones that understand the change best and are the most suited to translate the effects of the change into a semantic version.
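To make the semantics concrete, here is a small hypothetical helper (my sketch, not from any library or from this post’s original code): a human judges the kind of change, and only the arithmetic follows.

```ruby
# Hypothetical helper: a human decides whether a change is :major, :minor,
# or :patch; the version math is the only automated part.
def bump(version, kind)
  major, minor, patch = version.split('.').map(&:to_i)
  case kind
  when :major then "#{major + 1}.0.0"               # breaking change
  when :minor then "#{major}.#{minor + 1}.0"        # backwards-compatible feature
  when :patch then "#{major}.#{minor}.#{patch + 1}" # bug fix only
  end
end

puts bump('1.4.2', :major) # 2.0.0
puts bump('1.4.2', :minor) # 1.5.0
puts bump('1.4.2', :patch) # 1.4.3
```

Note that nothing in the code can decide which branch of the case to take; that judgment is exactly the part that belongs to the engineer.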

Dillon Adams
How To Create A Memory Leak In Golang
2022-01-05
https://codejanitor.dev/blog/2022/01/05/How-To-Create-A-Memory-Leak-In-Go

A while ago I was monitoring a containerized application and noticed an odd pattern in the graph of memory usage. The memory usage would climb steadily until reaching the limit set on the container and then precipitously drop to zero. This pattern repeated as long as the application was running. I had a memory leak. Cue ominous music.

The cause of the memory leak was a fairly simple thing, but due to multiple changes over time in this area of code I had completely missed it. Here is the relevant code:

package main

func main() {
	running := true

	for running {
		defer func() {}()
	}
}

Warning: If you execute this code on your machine, it will max out your CPU and eventually your memory

I had deferred a function inside of a long-running loop. Each time a function is deferred, it gets pushed onto a stack (last in, first out). Once the function containing the defer returns, that stack is emptied and each deferred function is executed one by one. The issue here is that the loop completing an iteration does not trigger the execution of the deferred functions, nor does exiting the loop; only the surrounding function returning does.

In my case, deferred functions would continue to be added to the stack until the memory footprint finally exceeded what was allowed by the container, causing the container to crash. A way around this issue would be to wrap the defer statement in another function like this:

package main

func main() {
	running := true

	for running {
		func() {
			defer func() {}()
		}()
	}
}

Wrapping the defer in an anonymous function causes that function’s deferred calls to be executed on every iteration instead of letting them accumulate indefinitely until the program finally finishes. That being said, the solution for my issue was simply to remove the deferred function from inside the loop.

package main

func main() {
	defer func() {}()

	running := true

	for running {}
}
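To see the timing difference concretely, here is a bounded sketch (my illustration, not the original service code): each iteration’s defer fires when the wrapping anonymous function returns, not when main does.

```go
package main

import "fmt"

// iterationLog performs three iterations, wrapping each defer in an
// anonymous function so the deferred call fires at the end of that
// iteration instead of piling up until the surrounding function returns.
func iterationLog() []string {
	var log []string
	for i := 0; i < 3; i++ {
		func() {
			defer func() { log = append(log, fmt.Sprintf("deferred %d", i)) }()
			log = append(log, fmt.Sprintf("body %d", i))
		}()
	}
	return log
}

func main() {
	for _, line := range iterationLog() {
		fmt.Println(line) // body 0, deferred 0, body 1, deferred 1, body 2, deferred 2
	}
}
```

In the unwrapped version, all three "deferred" entries would appear only after the loop (and the function) had finished; multiply that by millions of iterations and you have the leak.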
Dillon Adams
Making Comments Count
2021-10-12
https://codejanitor.dev/blog/2021/10/12/Making-Comments-Count

A generally accepted practice for writing software is leaving comments. These comments will hopefully be helpful to the next person that reads through the code. My overall goal when I comment code is to either describe the purpose of a method/class/file or the reasoning behind a decision that was made in the code.

Some comments describe what the code is doing verbatim. These work really well in coding tutorials, or when I’m learning a new language and need to remind myself of the function of a particular operator. Outside of those scenarios, I haven’t found them as useful, since they tell me something I already know from reading the code.

// Post-increment x and assign to y
var y = x++;

An exception to my previous statement is when I find myself using a more esoteric operator, but in that case it is also nice to leave an explanation of why the task couldn’t be accomplished with a simpler approach.

My preferred type of comments provide context to future contributors (including myself once I’ve forgotten everything about that piece of code in about a week). These are the comments that save time down the road by helping future contributors avoid pitfalls that have already been found and remedied.

// Using a loop instead of recursion due to intermittent StackOverflowExceptions
//  when processing deeply nested objects
while(condition)
{
    // ...
}

Happy commenting!

Dillon Adams
Watching Other Files With The .NET CLI
2021-03-31
https://codejanitor.dev/blog/2021/03/31/Watching-Other-Files-With-The-.NET-CLI

I ran into an odd scenario recently: I was working on SQL files used to generate data via a C# program I had written. The simple solution was to have the dotnet watch run command watch for changes in the SQL files. The only problem was that the file watcher doesn’t watch SQL files by default. The first option I found was to declare each file individually in the project file, which was not viable given the number of files at hand.

I needed a solution that would pick up the SQL files by recognizing the file extension. Lucky for me, the documentation around the dotnet watch command has grown significantly since the command was first released. All that is required to customize the watcher with a wildcard is a single entry in the project file.

<ItemGroup>
    <Watch Include="**\*.sql" />
</ItemGroup>

Those three lines instructed the file watcher to watch the SQL files in my project, making my day exponentially easier.

Dillon Adams
Testing GitHub Pages Locally
2019-12-05
https://codejanitor.dev/blog/2019/12/05/Testing-GitHub-Pages-Locally

Something has always bothered me about deploying something that I haven’t tested locally. A big manifestation of this is my blog. Since my blog is currently hosted on GitHub Pages, testing it locally requires Jekyll and all of its dependencies. Those aren’t things I typically have installed, so I decided to use a piece of software I always install: Docker.

The whole ordeal managed to come out to a single line:

Linux

docker run --rm -it -v $(pwd):/srv/jekyll -p 4000:4000 jekyll/jekyll jekyll serve --watch

Windows

docker run --rm -it -v "%cd%":/srv/jekyll -p 4000:4000 jekyll/jekyll jekyll serve --watch

Note the double quotes since Windows paths can have spaces

This command should be run in the root of your GitHub Pages project. It will create a Jekyll container that is listening to the current directory for changes and is exposed via port 4000. This way as you edit either blog content or site styling the changes appear as quickly as the site refreshes in your browser.

Dillon Adams
Build/Deploy from the Beginning
2019-11-18
https://codejanitor.dev/blog/2019/11/18/Build-Deploy-from-the-Beginning

When I start a project, I like to follow a few steps to speed things along later. The first is to get the basic project compiling successfully, and the next is to set up the automated build and deployment infrastructure (bonus points if the build/release configuration can be put into version control with the codebase). Some of the benefits of setting up the build/deploy infrastructure early on include automated feedback, easier deployment, and informed design choices.

Automated Feedback

Setting up an automated build/deploy provides an early feedback mechanism for those working on the codebase. This feedback is extraordinarily valuable: it tells you whether the merged code builds and whether all of the new and existing tests pass. While this should all be confirmed locally, people make mistakes and sometimes things slip.

Easier Deployment

Building on the automated feedback, changes can be made to the codebase freely without impacting the ability to deliver the software. A symptom of spending excessive amounts of time developing code is a rough and lengthy process of figuring out how to deploy that code afterwards. By allowing the build/deploy process to grow with the codebase, the changes necessary to build and deploy the product are included as the codebase grows. This allows the person with the greatest understanding of each change to make the corresponding incremental changes to the build/release process.

Informed Design Choices

As the product’s codebase grows, more and more design choices are made, and each of those choices involves trade-offs, incurring opportunity cost. The earlier these choices are made, the greater their impact on the whole system. Having build and deployment infrastructure from the beginning forces decisions involving deployment to be considered as the changes are being made. This way, changes that incur an unacceptable amount of deployment pain can be either avoided or reverted with much less pain, since there aren’t any dependent changes yet.

Conclusion

In the end, the goal is to be able to produce and deliver excellent, quality software with as little pain as possible. Setting up build/deploy infrastructure early on can alleviate the pain of developing a product and then figuring out how to release it.

Dillon Adams
Introduction to the Specification Pattern
2019-08-19
https://codejanitor.dev/blog/2019/08/19/Introduction-To-The-Specification-Pattern

Recently, I have been working my way through the Gang of Four’s Design Patterns book and looking deeper into patterns that I can apply in my day-to-day work. One pattern that isn’t in the book is the Specification pattern. After looking into it, I really gravitated to the idea.

What is a Specification?

A specification is a query represented as a named object. Vague enough? Here is an example in code:

using System;
using System.Linq.Expressions;

public class CurrentStudentsSpec
{
  public Expression<Func<Student, bool>> Expression => 
    s => s.IsActive;
}

Why would/should I use a Specification?

This pattern allows query logic to be consolidated into named objects. Doing so has three distinct advantages:

  • The name of the object allows its intent to be communicated clearly.
  • Consolidating logic into objects reduces duplication and makes it easier to spot future duplication.
  • Isolating business logic allows for simple unit testing.

The big advantage of this approach for me is that I can not only name the logic, but I can compose the specifications to create easily readable code.

To put a cherry on top of all of this, these objects allow for completely isolated unit testing of the logic they contain. This means I don’t have to mock or fake an ORM (Object-Relational Mapper), or worse, stand up a real data store. The isolated nature makes these tests amazingly fast and reliable.

A Concrete Example

Let’s start with a Student class that the previous specification was based on.

using System;

public class Student
{
  public Guid Id { get; set; }
  public string Name { get; set; }
  public double Average { get; set; }
  public bool IsActive { get; set; }
}

The task at hand is to get all students with an average score greater than or equal to 70.0. To accomplish this, we can create the following specification:

using System;
using System.Linq.Expressions;

public class PassingStudentSpec
{
    public Expression<Func<Student, bool>> Expression =>
        s => s.Average >= 70.0;
}

Using this specification, filtering the data becomes extraordinarily readable.

using System.Collections.Generic;
using System.Linq;

public class StudentService
{
  private readonly IEnumerable<Student> _students;

  public StudentService(IEnumerable<Student> students)
  {
    _students = students;
  }

  public IEnumerable<Student> GetPassingStudents()
  {
    var passingStudents = new PassingStudentSpec();
    return _students.Where(passingStudents.Expression.Compile());
  }
}

The alternative being:

using System.Collections.Generic;
using System.Linq;

public class StudentService
{
  private readonly IEnumerable<Student> _students;

  public StudentService(IEnumerable<Student> students)
  {
    _students = students;
  }

  public IEnumerable<Student> GetPassingStudents()
  {
    return _students.Where(s => s.Average >= 70.0);
  }
}

Notice the difference? Almost none, right? In simple use cases, this pattern really ends up being overkill. So let’s add some complexity and let the Specification pattern shine.

using System.Collections.Generic;
using System.Linq;

public class StudentService
{
  private readonly IEnumerable<Student> _students;

  public StudentService(IEnumerable<Student> students)
  {
    _students = students;
  }

  public IEnumerable<Student> GetPassingStudents()
  {
    return _students.Where(s => s.Average >= 70.0);
  }

  public IEnumerable<Student> GetCurrentPassingStudents()
  {
    return _students.Where(s => s.Average >= 70.0 && s.IsActive);
  }
}

Now we have duplication! The risk of these two methods falling out of sync with each other grows each time the code is changed. To mitigate that risk, our specification can be used!

using System.Collections.Generic;
using System.Linq;

public class StudentService
{
  private readonly IEnumerable<Student> _students;

  public StudentService(IEnumerable<Student> students)
  {
    _students = students;
  }

  public IEnumerable<Student> GetPassingStudents()
  {
    var passingStudents = new PassingStudentSpec();
    return _students.Where(passingStudents.Expression.Compile());
  }

  public IEnumerable<Student> GetCurrentPassingStudents()
  {
    var passingStudents = new PassingStudentSpec();
    return _students.Where(passingStudents.Expression.Compile())
                    .Where(s => s.IsActive);
  }
}

In this case, the benefit gained by introducing the specification is that if there is ever a need to change what defines a “passing student” there is one place to change that logic!

Combining Specifications

Let’s continue the previous example and assume that not all business logic can or should be contained in a single statement. As mentioned earlier, part of the power of the Specification pattern is the reusability of the Specifications, and that can make creating specifications of the appropriate specificity quite difficult. The solution is to combine specifications, so that disparate Specifications can be composed to create the business logic that is needed. Combining the Expressions inside the Specifications (at least in C#) is quite an interesting exercise, but it can be complicated, so I ended up writing a library to make combining Specifications easier. With it, the code from the previous example becomes:

using EZSpecification;
using System.Collections.Generic;
using System.Linq;

public class StudentService
{
  private readonly IEnumerable<Student> _students;

  public StudentService(IEnumerable<Student> students)
  {
    _students = students;
  }

  public IEnumerable<Student> GetPassingStudents()
  {
    var passingStudents = new PassingStudentSpec();
    return _students.Where(passingStudents.Expression.Compile());
  }

  public IEnumerable<Student> GetCurrentPassingStudents()
  {
    var passingStudents = new PassingStudentSpec();
    var currentStudents = new CurrentStudentsSpec();
    
    return _students.Where(passingStudents.And(currentStudents));
  }
}

Pitfalls

If a Specification is too focused, it can only be used for that one case. On the other hand, if a specification is too broad, a plethora of specifications will have to be composed to make a meaningful query, which gains nothing except more complexity. Each use case will be different, but finding the right balance between these two extremes can yield massive benefits.

Conclusion

In the end, there are many ways to achieve the same benefits the Specification pattern provides. That being said, I believe the Specification pattern provides the cleanest and most testable solution. With the combination of readability and testability, I truly believe that this pattern can be of great benefit to applications that utilize a lot of query logic.

Resources

Dillon Adams
Random Azure DevOps Build Failures
2019-06-19
https://codejanitor.dev/blog/2019/06/19/Random-Azure-DevOps-Build-Failures

A few days ago, I started seeing some odd failures in one of my Azure DevOps Pipeline YAML builds. The error was:
/azure-pipelines.yml (Line: 1, Col: 1): Unexpected value 'name'

The beginning of the file looked like:

name: $(Build.BuildId)
pool: Default

trigger:
 - master

After looking through the documentation repeatedly, a clue arose when doing a git diff. The output was:

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 3fda759..4ece8de 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -1,5 +1,5 @@
-name: $(Build.BuildId)
+<U+FEFF>name: $(Build.BuildId)

<U+FEFF>?! Someone or something had snuck a byte order mark (BOM) into the beginning of my pipeline file! It turns out that certain text editors in Windows environments will inject this character at the beginning of a file.

Once this was identified as the problem, the solution in this case utilized Visual Studio Code. In the bottom-right corner of the window, the file encoding is listed; for this file it was UTF-8 with BOM. Clicking on the encoding opens the Command Palette and presents options. Selecting either Save with Encoding or Reopen with Encoding, choosing plain UTF-8, and then saving the file gets rid of the offending character.
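Outside of an editor, common Unix tools can spot and strip the BOM as well. A sketch (the sed \xNN escapes are GNU-specific):

```shell
# The UTF-8 BOM is the byte sequence EF BB BF at the very start of the file.
head -c 3 azure-pipelines.yml | od -An -tx1   # prints " ef bb bf" if a BOM is present

# Strip a leading BOM in place (GNU sed).
sed -i '1s/^\xEF\xBB\xBF//' azure-pipelines.yml
```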

After that change was committed and pushed, the build succeeded.

Dillon Adams