I just watched Inventing on Principle from Bret Victor and found the entire talk incredibly interesting. Chris Granger’s post on Light Table led me to this, and shortly after, I found the redesigned Khan Academy CS course which it inspired. Bret touched on something that basically anyone who’s attempted to design anything has implicitly understood: The feedback loop between creator and creation is the most essential part of the process.
I reflected on this and on my own experiences, and decided on a few things:
(1) Making that feedback loop fast enough can dramatically change the design process, not just speed it up proportionally.
I feel that Bret’s video demonstrates this wonderfully. It matches up with something I’ve believed for a while: That a slow, delay-prone process becoming fast enough to be interactive can change the entire way a user relates to it. The change, for me at least, can be as dramatic as the difference between filling out paperwork and having a face-to-face conversation. This metamorphosis is where I see a tool become an extension of the mind.
Toplap probably has something to say on this. They link to a short live coding documentary, Show Us Your Screens. I rather like their quote: “Live coding is not about tools. Algorithms are thoughts. Chainsaws are tools. That’s why algorithms are sometimes harder to notice than chainsaws.”
Live coding perhaps hits many of Bret’s points from the angle of musical performance meeting programming. Since he spoke directly of improvisation, I’d say he was well aware of this connection.
(2) These dynamic, interactive, high-level tools don’t waste computer resources - they trade them.
They trade them for being dynamic, interactive, and high-level - which very often means trading ever-increasing computer resources to earn ever-limited human resources: time, comprehension, and attention.
I don’t look at them as being resource-inefficient. I look at them as being the wrong tool for those situations where I have no spare computer resources to trade. Frankly, those situations are exceedingly rare. (And my degree is in electrical engineering. Most coding I’ve done when acting as an EE guy, I’ve done with the implicit assumption that no other type of situation existed.) Even if I eventually have to produce something for such a situation - say, to target a microcontroller - I still have ever-increasing computer resources at my disposal, and I can use them to great benefit for prototyping.
Limited computer resources restrict an implementation. Limited human resources, like time and attention and comprehension, do the same…
(3) The choice of tools defines what ideas are expressible.
Any Turing-complete language can express a given algorithm, pretty much by definition. However, since that expression can vary greatly in length and in conciseness, the equivalence is really only of theoretical interest to you, a human, with finite time on earth and only so many usable hours per day. (This is close to a point Paul Graham is quite fond of making.)
This same principle goes for all other sorts of expressions and interactions and interfaces, non-Turing-complete included, anytime different tools are capable of producing the same result given enough work. (I can use a text editor to generate music by making PCM samples by hand. I can use a program to make an algorithm to do the same. I can use a program such as Ableton Live to do the same. These all can produce sound, but some of them are a path of insurmountable complexity depending on what sort of sound I want.)
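To make the middle option concrete, here’s a minimal sketch of generating PCM samples with a program, using only Python’s standard library. The pitch, amplitude, and filename are arbitrary choices for illustration, not anything canonical.

```python
import math
import struct
import wave

# Generate one second of a 440 Hz sine wave as 16-bit PCM samples.
SAMPLE_RATE = 44100   # samples per second
FREQ = 440.0          # pitch in Hz (arbitrary choice)
AMPLITUDE = 0.5       # fraction of full scale, leaving some headroom

samples = [
    int(AMPLITUDE * 32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

# Write the samples out as a mono WAV file.
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Typing those 44,100 numbers into a text editor by hand is possible in principle; the loop makes the same idea a dozen lines, and a tool like Ableton Live makes some sounds a single gesture. Same result in theory, wildly different paths in practice.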
In a strict way, the choice of tools defines the minimum size of an expression of an idea, and how comprehensible or unwieldy that expression is. Once this expression hits a certain level of complexity, a couple of paths emerge: it may as well be impossible to implement, or it may cease to be about the idea and instead become an implementation of a set of ad-hoc tools to eventually implement that idea. (Greenspun’s tenth rule, dated as it is, indicates plenty of other people have observed this.)
In a less strict way, the choice of tools also guides how a person expresses an idea; not like a fence, but more like a wind. It guides how that person thinks.
The boundaries that restrict time and effort also draw the lines that divide ideas into possible and impossible. Tools can move those lines. The right tools solve the irrelevant problems, and guide the user into solving relevant problems instead.
Of course, finding the relevant problems can be tricky…
(4) When exploring, you are going to re-implement ideas. Get over it.
(I suppose Mythical Man Month laid claim to something similar decades ago.)
Turning an idea plus a bad implementation into a good implementation, on the whole, is far easier than turning just an idea into any implementation (and pages upon pages of design documentation rarely push it past ‘just an idea’). It’s not an excuse to willingly make bad design decisions - it’s an acknowledgement that a tangible form of an idea does far more to clarify and refine those design decisions than any amount of verbal descriptions and diagrams and discussions. Even if that prototype is scrapped in its entirety, the insight and experience it gives are not.
The flip side of this is: Ideas are fluid, and this is good. Combined with the second point, it’s more along the lines of: Ideas are fluid, provided they already have something to flow from.
A high-level expression with the right set of primitives is a description that translates very readily to other forms. The key here is not which language or tool it is, but that it supports the right vocabulary to express the implementation concisely. ‘Supports’ doesn’t mean that it has all the needed high-level constructs - just that it is sufficiently flexible and concise to build them readily. (If you ‘hide’ higher-level structure inside lower-level details, you’ve added extra complexity. If you abuse higher-level constructs that hide simpler relationships, you’ve done the same. More on that in another post…)
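As a toy illustration of ‘flexible and concise enough to build them readily’: Python has no built-in pipeline construct, but first-class functions let you add one in a couple of lines. The processing steps below are hypothetical placeholders, purely for illustration.

```python
from functools import reduce

# Python doesn't ship a 'pipeline' construct, but first-class functions
# are flexible enough to build one in a couple of lines.
def pipeline(*steps):
    """Compose single-argument functions, applied left to right."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Hypothetical processing steps, just to have something to compose.
def normalize(s):
    return s.strip()

def shout(s):
    return s.upper()

def exclaim(s):
    return s + "!"

process = pipeline(normalize, shout, exclaim)
print(process("  hello world  "))  # -> HELLO WORLD!
```

The point isn’t the pipeline itself; it’s that the missing vocabulary cost a few lines rather than a framework.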
My beloved C language, for instance, gives some freedom to build a lot of constructs, but mainly those constructs that still map closely to assembly language and to hardware. C++ tries a little harder, but I feel like its constructs quickly hit the point of appalling, fragile ugliness. Languages like Lisp, Scheme, Clojure, Scala, and probably Haskell (I don’t know yet; I haven’t attempted to master it) are fairly well unmatched in the flexibility they give you. However, in light of Bret’s video, the way these are all meant to be programmed can still fall quite short.
I love Context Free as well. I like it because its relative speed combined with some marvelous simplicity gives me the ability to quickly put together complex fractalian/mathematical/algorithmic images. Normal behavior when I work with this program is to generate several hundred images in the course of an hour, refining each one from the last. Another big reason it appeals to me is that, due to its simplicity, I could fairly easily take the Context Free description of any of these images and turn it into some other algorithmic representation (such as a recursive function call to draw some primitives, written in something like Processing or openFrameworks or HTML5 Canvas or OpenGL).
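For a sense of what that translation might look like, here’s a minimal sketch of a recursive, Context-Free-style drawing, using Python’s standard turtle module in place of Processing or Canvas. The branch angles, scale factor, and recursion depth are arbitrary values chosen for illustration.

```python
import turtle

def branch(t, length, depth):
    """Draw a branch, then recurse into two smaller, rotated branches --
    roughly what a two-rule Context Free grammar expresses declaratively."""
    if depth == 0 or length < 2:
        return
    t.forward(length)
    for angle in (-25, 25):            # two child branches (arbitrary angles)
        t.left(angle)
        branch(t, length * 0.72, depth - 1)
        t.right(angle)                 # undo the turn for the next child
    t.backward(length)                 # restore position for the caller

t = turtle.Turtle()
t.speed(0)
t.left(90)                             # point the turtle upward
branch(t, 100, 9)
turtle.done()
```

A Context Free grammar states the two rules declaratively and handles the transforms for you; the recursive function is the same idea spelled out imperatively, which is exactly why the translation is easy.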
Later note, circa 2017: Tobbe Gyllebring (@drunkcod) in The Double Edged Sword of Faster Feedback makes some excellent points that I completely missed and that are very relevant to everything here. On the overreliance on fast feedback loops to the exclusion of more deliberate design and analysis, he says, “Running an experiment requires you to have a theory. This is not science. It’s a farce,” which I rather like.