Thomas Mayfield bio photo

Thomas Mayfield

Polyglot programmer who loves the weird beautiful chaos of humans building software together. Fitness nerd. Southern kid living in Massachusetts.

The logbook is where I keep less structured, weekly-ish writing: stray thoughts and what I'm building or learning.

I suck at knowing when to quit.

Staying on track working through nand2tetris has been a bear, and I’ve been second-guessing myself as to whether I should push through or rethink where I’m spending my time.

Looking for guidance, I read Seth Godin’s The Dip which was… unhelpful to be charitable about it. I think Godin made a good observation that there are mostly two kinds of situations where work sucks: ones where there’s a payoff at the end and it sucks because getting there is hard work, and ones where it sucks because there’s NO payoff at the end and the whole situation is a dead end. He took about a hundred pages of just-so clear-in-hindsight stories to provide little useful advice for distinguishing the two situations in the moment. Back to square one.

I do think I went into nand2tetris with poorly defined goals as to what I wanted to get out of it. “Fill in knowledge gaps about physical-level computing to OS-level computing” is the best I can elucidate right now. It’s kinda-sorta been doing that, but in a college course kind of way: lots of conceptual learning that’s two or three steps removed from a practical application. I don’t think it’s getting me any closer to really understanding (say) what happens inside a modern laptop or what my Linux server is doing with my application process. There are probably better resources out there if I want to learn about the detailed innards of modern systems. If I push on what I want to get out of this kind of time investment, and keep nudging on that thought until it actually feels true at a deep level, I get something like “deepening my understanding of computing systems so I can build bigger and more useful things”. That’s useful—I can see that this book isn’t really going to directly get me there.

The rest of my worry here is that I’m not just switching tactics, I’m likely putting that entire goal on the back burner for a bit. But maybe that’s what I need to do—my difficulty expending mental energy working through the book could be a sign that I’m overinvested in one part of my life. I’m intellectually comfortable with the fact that “building software” waxes and wanes between vocation and avocation depending on all kinds of stuff in my life. That still doesn’t make it easy to see if I’m quitting something for good reasons.

I think I’m looking for certainty, a clear “ah-ha, yes, this is definitely the right decision” moment. If this were a professional project, I’d make a bet with the information I had, pick a metric to judge success by and set up a reminder to reflect on how the decision went some time later. That’s probably the right thing to do here. Funny how you can’t see that sometimes without spilling a little ink.

First, I’d like to introduce Ripley, one of the reasons my attention is a little scarce at the moment:


A nine-week old puppy is a joyous thing, and will happily hoover up every spare second you have.

Focused time for learning and study will resume probably around the time this little furball starts sleeping through the night without pee breaks. But even before we adopted her, it was pretty clear that my goal of hitting a once-a-week writeup on things I’m learning isn’t going to happen. I do think striving for that cadence has still been a push in the right direction. I’m going to continue tacking towards that goal even if the year-end average is probably going to come out lower than I hoped.

A few reflections from the last few weeks:

I’ve been leaning on Habitica as a way to help myself build daily habits and get through my todo list. It’s been working surprisingly—I’d even say embarrassingly—well. Progression systems and magic pixels, man. The part of me that fed a good chunk of my 20s into World of Warcraft is rolling its eyes in not-surprise.

Prior to doggo adoption, I’d gotten a pretty good streak of spending 20 minutes a day working on whatever my current project is… but found myself having a hard time actually sitting down and writing about it. I think part of the friction here is how this blog is structured. Aside from a couple of about-me pages, it’s a collection of articles. Trying to push my intended log-of-learning writing into this format has wound up making me feel like each entry needs to have a focused point and something to teach others. What I want out of this writing, instead, is just a nudge towards spending time deliberately learning and a bit of the clarity that comes from having to structure my thoughts to write them down. So, I’m going to try splitting this blog into two sections: a collection of articles/essays (which is most of the existing stuff) and a looser stream of thoughts and updates. Should be an interesting experiment to see if it helps shake loose writing more frequently.

Out of the wiring swamp, on to the dizzying but invisible depths of software abstraction.

I was actually a little surprised that there was a full chapter devoted to writing an assembler—it’s just mechanically translating assembly code to machine code, word for word, right? As it turns out, while command translation itself is super straightforward, location labels for branching and variable declarations added a little fun. We wound up with a two-pass design: a first pass to allow for memory address allocation for each variable and label, then a second pass to generate the machine code itself.
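Here’s a toy Ruby sketch of that two-pass structure (very simplified—C-instruction translation and the predefined symbols are elided, so this is just the shape of the idea, not my actual assembler):

```ruby
# Toy two-pass assembler sketch: pass 1 records label addresses,
# pass 2 resolves symbols and emits binary strings for A-instructions.
def assemble(lines)
  symbols = {}
  next_var = 16                      # Hack convention: variables start at RAM[16]

  # Pass 1: map each (LABEL) to the address of the instruction that follows it
  addr = 0
  lines.each do |line|
    if line =~ /^\((\w+)\)$/
      symbols[$1] = addr
    else
      addr += 1
    end
  end

  # Pass 2: translate, allocating addresses for variables on first sight
  lines.reject { |l| l.start_with?("(") }.map do |line|
    if line =~ /^@(\w+)$/
      sym = $1
      value = if sym =~ /^\d+$/
                sym.to_i
              else
                symbols[sym] ||= (next_var += 1) - 1
              end
      format("0%015b", value)        # A-instruction: 0 + 15-bit value
    else
      line                           # C-instruction translation elided
    end
  end
end
```

The nice property of the two passes is that forward references to labels just work: by the time pass 2 runs, every label already has an address.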

The system isn’t self-hosting—that is, we don’t use the tools we’re writing to directly build the next level of software (which would be laborious, since we haven’t built an operating system yet, much less a text editor!). This means we get to use whatever outside-of-Hack language we want to build the assembler. So now instead of fighting with HDL, I’m writing Ruby! I write Ruby most of the day for work and have for the last seven years or so. It’s DARN NICE for text chomping.


Holy crap, we’re done with the hardware part. I built a computer!

Chapter 4

A cool aspect of each chapter’s material being a self-contained abstraction is that the book can skip between levels for pedagogical reasons. So we wound up learning to write some programs in the machine language for our fully-built computer, before the final phase of actually wiring up the complete computer.

Hack assembly language

It’s… definitely for machines. Messing around with the assembly language was pretty important for the next chapter. Without that experience, I don’t think I’d have understood enough of the intent behind how things are accomplished using its limited idioms. Debugging when my CPU wasn’t wired up correctly might have cost me a fair bit more hair!

Two side notes from spelunking with Hack assembly:

  • This page was a super useful companion for dealing with some very picky language stuff.
  • There’s a part where you need to load a 16-bit word that’s all 1s into memory to turn a part of the screen dark. You can actually only load 15-bit words in A-instructions, but the assembler will silently accept constants that are over the size you can express in 15 bits, leading to some serious headscratching.
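That second gotcha boils down to a simple range check the assembler could have done for me. A hypothetical validation helper (not part of the book’s toolchain) makes the limit concrete:

```ruby
MAX_A_CONSTANT = 2**15 - 1   # A-instructions hold only a 15-bit constant

def check_a_constant(n)
  raise ArgumentError, "#{n} won't fit in 15 bits" if n > MAX_A_CONSTANT
  n
end

# The all-ones 16-bit word (0xFFFF, i.e. 65535) can't be loaded directly;
# the usual workaround is a C-instruction computing -1 rather than an
# A-instruction literal.
```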

Chapter 5

Building the CPU and Memory units was the most challenging bit of HDL wiring so far. Breaking everything that needed to happen down into discrete tasks (and being well rested) was key. Definitely went back to pen & paper here to make this work.

wiring the CPU on paper

All that wiring gore boiled down to only 18 lines of HDL to make a simple CPU, using all of the previously built components. Wow.

I did lots of breaking inputs down into binary to make sense of how to connect logical wires. Plotting numbers out as monospaced binary is another useful form of sketching:

writing out RAM addresses in binary

An added level of difficulty: bus indexing works backwards from how my brain thinks, as traditional array indexes go left to right. Bus indexing, on the other hand, goes from the least significant bit to the most significant bit… which is right to left when binary is written out. This must have accounted for at least half the bugs I created.
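The same right-to-left reality shows up in plain Ruby if you pull individual bits out of a number—bit 0 is the rightmost digit when you write the value out:

```ruby
# Bit 0 of a bus is the least significant bit, which sits at the
# *right* end of the number as written in binary.
n = 0b1011_0000_0000_0001   # a 16-bit value

bits = (0..15).map { |i| (n >> i) & 1 }   # bits[0] = LSB

bits[0]    # => 1  (rightmost digit above)
bits[15]   # => 1  (leftmost digit)
bits[14]   # => 0
```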


It’s been three weeks since the entry before this; not exactly the pace I set out for myself at the beginning of the year. I was getting a little bored with writing a single entry for each chapter, but trying to get two chapters worth of work done in a single week and a write up wound up taking much longer.

I’ve also been having a hard time finding the focus to do this particular project after work, so the only real progress happens on weekends. I figured that going back to the blinking lights part of programming would stretch different brain muscles from what I’m using at work, but I think that’s demonstrably false. I’m still having fun, but probably need to either moderate my expectations of what I can do during the week, and/or get more ok with these writeups being progress updates rather than proof of completing milestones.

Oh boy, a clock! In this chapter of nand2tetris, we started teaching our logic circuits about time and, consequently, memory. We’re introduced to a single new primitive, a data flipflop: all it does is output the value of its input one clock tick ago. With that and the array of combinatorial logic gates from previous chapters, we build all the way up to 16-kilobyte RAM chips!
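In code terms, a DFF is about the simplest stateful thing imaginable—here’s a tiny Ruby model of that one-tick delay (my own sketch, not anything from the book):

```ruby
# Minimal model of a data flip-flop: out(t) = in(t - 1).
class DFF
  def initialize
    @state = 0        # whatever was latched on the previous tick
  end

  # One clock tick: emit the previously latched value, latch the new input
  def tick(input)
    out = @state
    @state = input
    out
  end
end

dff = DFF.new
[1, 0, 1].map { |i| dff.tick(i) }   # => [0, 1, 0]
```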

It was a bit disappointing that DFFs are given as primitives here. Though the book says they can be composed from Nand gates just like the rest of the chips we’ve built so far, it would have been neat to see the gory details of how one goes from combinatorial, stateless logic to sequential, time-based logic. Apparently the construction of DFFs is “intricate”, so I get pedagogically why we aren’t asked to implement them. Still, nandandflipflop2tetris just doesn’t have the same ring…

That aside, building memory chips felt less like bit twiddling and more like combining logical components. These chips were easier to get right on the first-ish try without pen and paper; the composing of larger and larger RAM chips felt particularly simple and elegant. It did, however, take a bit for me to shift my thinking about values throughout the system being phased in time: e.g. you set inputs up, then on the next clock tick the outputs react.
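That recursive composition is pleasantly easy to sketch in Ruby (ignoring the clock and the mux/demux wiring—this just shows the addressing structure, with made-up class names):

```ruby
# Toy model of the recursive RAM build: a RAM of size n is eight
# RAMs of size n/8, with the top address bits selecting the part.
# A single register is the base case.
class RAM
  def initialize(size)
    if size == 1
      @value = 0                          # base case: one register
    else
      @parts = Array.new(8) { RAM.new(size / 8) }
      @part_size = size / 8
    end
  end

  def read(addr)
    @parts ? @parts[addr / @part_size].read(addr % @part_size) : @value
  end

  def write(addr, value)
    if @parts
      @parts[addr / @part_size].write(addr % @part_size, value)
    else
      @value = value
    end
  end
end

ram = RAM.new(512)   # RAM512 = 8 x RAM64 = 8 x 8 x RAM8 ...
ram.write(300, 42)
ram.read(300)        # => 42
```

In the actual HDL, `addr / part_size` is the top three address bits feeding a DMux8Way/Mux8Way16 pair—but the shape of the recursion is the same.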

Aside: an HDL syntax thing that I didn’t know is that you can declare a pin connection twice on a gate. Like, if I wanted to hook up a DFF’s output pin to both the chip’s out pin and something else internally, you can do DFF(in=something, out=outb, out=out). The simulator won’t let you connect pins that touch the outside world to internal pins, so you can’t just use out. Go figure.