
The logbook is where I keep less structured, weekly-ish writing: stray thoughts and what I'm building or learning.

Oh boy, a clock! In this chapter of nand2tetris, we started teaching our logic circuits about time and, consequently, memory. We’re introduced to a single new primitive, the data flip-flop (DFF): all it does is output the value its input had one clock tick ago. With that and the array of combinatorial logic gates from previous chapters, we build all the way up to 16K RAM chips!
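To make that jump concrete, here’s roughly what the book’s 1-bit register (the Bit chip) looks like in HDL. This is a sketch of the standard construction with my own internal pin names; note the DFF’s out pin connected twice, a trick I’ll get to below:

```
CHIP Bit {
    IN in, load;
    OUT out;

    PARTS:
    // when load is set, latch the new input; otherwise keep feeding
    // the DFF its own output, so the stored bit persists across ticks
    Mux(a=feedback, b=in, sel=load, out=next);
    DFF(in=next, out=feedback, out=out);
}
```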

It was a bit disappointing that DFFs are given as primitives here. Though the book says they can be composed from Nand gates just like the rest of the chips we’ve built so far, it would have been neat to see the gory details of how one goes from combinatorial, stateless logic to sequential, time-based logic. Apparently the construction of DFFs is “intricate”, so I get pedagogically why we aren’t asked to implement them. Still, nandandflipflop2tetris just doesn’t have the same ring…

That aside, building memory chips felt less like bit twiddling and more like combining logical components. These chips were easier to get right on the first-ish try without pen and paper; composing larger and larger RAM chips felt particularly simple and elegant. It did, however, take me a bit to shift my thinking about how values phase through a system over time: e.g. you set inputs up, then on the next clock tick the outputs react.
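The elegance comes from one repeating pattern: each RAM size is just eight copies of the previous size, with a DMux routing the load bit to the right bank and a Mux picking which bank’s output to read. A sketch of RAM64 along those lines (internal pin names are mine):

```
CHIP RAM64 {
    IN in[16], load, address[6];
    OUT out[16];

    PARTS:
    // the top 3 address bits pick which RAM8 gets the load signal...
    DMux8Way(in=load, sel=address[3..5],
             a=l0, b=l1, c=l2, d=l3, e=l4, f=l5, g=l6, h=l7);
    RAM8(in=in, load=l0, address=address[0..2], out=r0);
    RAM8(in=in, load=l1, address=address[0..2], out=r1);
    RAM8(in=in, load=l2, address=address[0..2], out=r2);
    RAM8(in=in, load=l3, address=address[0..2], out=r3);
    RAM8(in=in, load=l4, address=address[0..2], out=r4);
    RAM8(in=in, load=l5, address=address[0..2], out=r5);
    RAM8(in=in, load=l6, address=address[0..2], out=r6);
    RAM8(in=in, load=l7, address=address[0..2], out=r7);
    // ...and the same bits pick which bank's output we read
    Mux8Way16(a=r0, b=r1, c=r2, d=r3, e=r4, f=r5, g=r6, h=r7,
              sel=address[3..5], out=out);
}
```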

Aside: an HDL syntax thing that I didn’t know is that you can declare a connection on the same pin twice on a gate. Like, if you want to hook up a DFF’s output pin to both the chip’s out pin and something else internally, you can do DFF(in=something, out=outb, out=out). The simulator won’t let you connect pins that touch the outside world to internal pins, so you can’t just use out. Go figure.

Continuing my slow plod through TEoCS (also known as nand2tetris), I’ve now reached the part of building a computer where I’ve built something that… computes.

The Arithmetic Logic Unit chip is the first thing I’ve made that felt like it deserved the label “chip” instead of the mechanical-sounding “gate”. Just like everything else so far, at its heart it’s a bunch of Nand gates with a lot of wires running between them… but it feels like we’ve crossed over from simple logic reified by wires into the real start of a more general-purpose computing machine. The chip has a pair of 16-bit inputs and 6 control bits, which you manipulate to perform various combinations of arithmetic on those inputs: producing a constant value, addition, subtraction, bitwise boolean operations, and negation. We’ll apparently be implementing multiplication and division at a higher level up the stack, but that’s still a lot for some jumped-up simulated silicon.

It was, remarkably, only 14 lines of HDL code to implement, using all the gates I’d built so far.
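For the curious, the wiring goes roughly like this. It’s a sketch of the standard zx/nx/zy/ny/f/no construction from the book’s ALU spec, reconstructed from memory rather than copied from my actual solution:

```
CHIP ALU {
    IN x[16], y[16], zx, nx, zy, ny, f, no;
    OUT out[16], zr, ng;

    PARTS:
    // optionally zero, then optionally negate, the x input
    Mux16(a=x, b=false, sel=zx, out=x1);
    Not16(in=x1, out=notx1);
    Mux16(a=x1, b=notx1, sel=nx, out=x2);
    // same preconditioning for y
    Mux16(a=y, b=false, sel=zy, out=y1);
    Not16(in=y1, out=noty1);
    Mux16(a=y1, b=noty1, sel=ny, out=y2);
    // f picks between addition and bitwise And
    Add16(a=x2, b=y2, out=summed);
    And16(a=x2, b=y2, out=anded);
    Mux16(a=anded, b=summed, sel=f, out=fout);
    // optionally negate the output, peeling off status bits as we go
    Not16(in=fout, out=notfout);
    Mux16(a=fout, b=notfout, sel=no,
          out=out, out[15]=ng, out[0..7]=lo8, out[8..15]=hi8);
    // zr is set only when every output bit is zero
    Or8Way(in=lo8, out=lor);
    Or8Way(in=hi8, out=hor);
    Or(a=lor, b=hor, out=nonzero);
    Not(in=nonzero, out=zr);
}
```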

It would have been pretty easy to rip through this chapter and build the ALU by just blindly implementing the logical operations indicated in the chip’s truth table and control bits, but I wanted to take a step back and work out with pen and paper how the chip actually accomplishes a bunch of this math. It wasn’t obvious at all to me that “ok, negate the x bus’s bits, then add y to that, and then negate the result” comes out to x-y. But it does, and I’ve at least moved past taking it on faith after working a few of these operations out with ink and dead trees. Pretty elegant design!
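For the record, the trick falls out of two’s complement, where flipping all of a number’s bits gives \(\lnot a = -a - 1\):

\[
\lnot(\lnot x + y) \;=\; -\big((-x - 1) + y\big) - 1 \;=\; x - y
\]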

A few tidbits that were helpful for me as I worked through this chapter:

  • You can use the literal values true or false if you want a wire to be always on or off. This eluded me for a bit (heh heh heh).

  • The tests will produce a (relatively) easy-to-read output file as they go. At least on OS X, it’s difficult to see the entirety of this output in the simulator’s UI, but it’s also available in a file in the current project directory with the .out extension. You can eyeball or diff each of its lines against the test script’s .cmp file to start seeing where things have gone wrong.

  • Examining the state of each internal pin of the chip in the hardware simulator is a great debugging technique. I found plenty of bugs by just running the test script, waiting for it to fail on a particular case, then stepping through what each internal pin/bus’s value should have been on paper.

Last week, I cracked open The Elements of Computing Systems and started working through it. It’s a book with a pretty cool idea: it walks you through thirteen projects that all build upon each other to create a complete general-purpose computer. The first chapter starts with transistors and logic gates; the last has Tetris running on an operating system you wrote!

My posts about work towards this year’s learning-and-doing goal have thus far either been documenting a finished project or showcasing immediately usable tips and tricks. Working my way through this book is going to be a little different. I want to document my progress—hopefully without these posts sounding like a third-grade book report—so I’m just going to try and call out what was fun or interesting about the material as I work through it. Maybe a work journal like this will be a useful template for how to write about other longer, ongoing projects, but we’ll see! This is all serving as a proxy for habit changes, which is the most important thing to me.


The first chapter of TEoCS has you building primitive boolean logic gates starting with just a single atom: a Nand gate. You progressively build more and more complicated gates, from the familiar Not/Or/And operators to stuff like multiplexors and n-way versions of the earlier gates.

The book doesn’t come with a soldering iron, so this is all wired up inside a hardware simulator program (which reminds me of being introduced to pointers machine-language-first in CS210). You wire up the gates by writing code in the Hardware Description Language (HDL): basically naming all the chips and the wires that connect them. You can load a chip into the simulator, then change the value of various wires and see what happens. It’s a baroque-looking program, but pretty useful for exploring and debugging:

hardware simulator
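To give a flavor of what HDL looks like, the first couple of derived gates are tiny. These are the classic constructions (sketched here, not verbatim from my solutions):

```
// Not is just a Nand with its inputs tied together
CHIP Not {
    IN in;
    OUT out;
    PARTS:
    Nand(a=in, b=in, out=out);
}

// And is a Nand followed by a Not
CHIP And {
    IN a, b;
    OUT out;
    PARTS:
    Nand(a=a, b=b, out=nandOut);
    Not(in=nandOut, out=out);
}
```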

Delightfully, the material has a set of test scripts that you can use to put your gates through their paces and make sure you’ve wired up everything correctly.

Creating gates like this via code definitely stretched my brain. Sketching them out with pen and paper was really helpful:

circuit sketch

Next week, we teach the circuits to do math!

Though I’m generally terrible at taking regular vacations, last week my wife and I got away to Hawaii for some long overdue R&R. We saw whales at sunset, stood 10 feet away from active lava flows, hiked an extinct volcano caldera, relaxed on the beach, and drank a whole bunch of rum drinks.

Regular learning posts resume next week!

Beach sunset

Botantical garden shore

Caldera floor

Sea turtle

Lava!

Oh, and we got to experience about 38 minutes of sheer mortal terror.

Missile alert

I’ll admit I approached Haskell with a bit of trepidation: the language has a tongue-in-cheek unofficial motto of “avoid success at all costs” and a reputation for being used only by academic ivory-tower types.

Functional Strength

Haskell’s functional programming model is pure as the driven snow: unlike some of the functional languages earlier in the book, there’s no mutable state whatsoever. Period, end of story. Like all the previous functional languages, lists of data are a first-class primitive, and there are a number of tools for slicing and dicing them. The usual map/filter/fold suspects are around, as are a succinct form for defining anonymous functions and some powerful list comprehensions.
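A quick taste of those tools, with some toy examples of my own (not from the book):

```haskell
doubled = map (* 2) [1, 2, 3]              -- [2,4,6]
evens   = filter even [1 .. 10]            -- [2,4,6,8,10]
total   = foldr (+) 0 [1, 2, 3]            -- 6
addOne  = \x -> x + 1                      -- an anonymous function
oddSqrs = [x * x | x <- [1 .. 9], odd x]   -- [1,9,25,49,81]
```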

Lazy evaluation seems to be the default mode in Haskell. Under the hood, every function has a single argument—functions with multiple arguments are split into multiple functions that are then applied to each other. This allows for easy currying—saving functions with partially bound arguments for later full evaluation. These partially applied functions let you do some pretty nifty tricks, like incrementing a list via map (+ 1) [1, 2, 3]. Lazy evaluation also allows functions to be infinite generators without any special invocation or magic.
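Here’s partial application and laziness working together, again in toy examples of my own:

```haskell
increment :: [Int] -> [Int]
increment = map (+ 1)     -- (+ 1) is itself a partially applied function

naturals :: [Int]
naturals = [0 ..]         -- an infinite list; laziness makes this fine

firstFive :: [Int]
firstFive = take 5 (increment naturals)   -- [1,2,3,4,5]
```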

Not My Type

Haskell, while strongly typed, requires surprisingly little type declaration—the type checker can infer much of what it needs from the structure of the code. You can define your own types quite easily, both in terms of other types and recursively. These recursive definitions can lead to some very terse expressions of complex concepts: a one-line tree type definition, for example.
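Here’s roughly what that looks like, with an inferred-type function thrown in (my example, not the book’s):

```haskell
-- a whole binary tree type in one line
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- no type signature needed; GHC infers it from the structure
size Leaf = 0
size (Node left _ right) = size left + 1 + size right
```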

After all that, I have to admit that I don’t totally grok Haskell’s type system. Fighting with type error messages in GHC is what prevented me from doing most of the non-trivial exercises in this chapter. It’s apparent that once you learn to use it well, Haskell’s type system is sophisticated and very powerful—but for me, the learning curve proved greater than my enthusiasm for the language. Perhaps I’m spoiled by more dynamically-typed languages.

Monads (Or, What Is This I Don’t Even)

The book attempts to explain monads initially through an example involving drunken pirates, which is a perfectly good stand-in for my mental state while trying to understand how monads work. The idea—I think—is a generally applicable way to represent state in a language that has no mutable state. Monads seem to act as functions whose return values can work as state accumulators when chained together. Using monads under the covers, the do statement lets you write imperative-looking code, but it actually winds up boiling down to one big chained function invocation. That top-level conceptual understanding is where I stopped—exactly how to implement and practically use a monad was still a mystery to me after a section’s worth of examples.
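The closest I came to a working mental model is chaining computations that might fail with Maybe. This is a toy sketch of my own, so take it with a grain of salt:

```haskell
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- imperative-looking, but do notation desugars to chained binds
quarter :: Int -> Maybe Int
quarter n = do
  h <- halve n        -- the whole chain yields Nothing if this fails
  halve h

-- the same function with the chaining written out explicitly
quarter' :: Int -> Maybe Int
quarter' n = halve n >>= halve
```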

Thanks For All The Lambdas

At the end of this chapter, my trepidation unfortunately seemed justified. Perhaps it was a bit of new language burn-out or perhaps it was Haskell’s vertical learning curve, but I had a hell of a time getting as much out of Haskell as I did out of the other languages the book covered. Frustration with Haskell aside, I had a blast with Seven Languages In Seven Weeks. Covering big swaths of intellectual territory is always fun, and I’ve got a non-trivial desire to start a deep dive into Clojure soon. Time to get back to building things.