I posted on familycoding.com a few months ago in response to one of Juliet’s posts on the use of equal signs in an “if(…..)” statement. Here is the original post, where I gave her my humble opinion on how to more precisely describe what x=3 is doing in an if(x=3) situation, and why it will inevitably “spaz out” and evaluate to true.
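For anyone who wants to see the pitfall concretely, here is a small sketch of my own (not from the original comment). In C-family languages, x = 3 inside an if assigns 3 to x, and the whole expression evaluates to 3, which counts as true, so the branch always runs. Python happens to make the same slip visible in two ways:

```python
x = 0

# Python refuses the bare slip outright: "if x = 3:" is a SyntaxError,
# whereas C-family languages silently accept it.
try:
    compile("if x = 3:\n    pass", "<example>", "exec")
except SyntaxError:
    print("plain assignment inside an if is refused")

# The walrus operator (Python 3.8+) reproduces the C behavior on purpose:
# (x := 3) assigns 3 to x AND evaluates to 3, which is truthy,
# so this branch runs no matter what x held before.
if (x := 3):
    print("branch taken; x is now", x)
```

The second branch is “true” every time for exactly the reason I tried to explain to Juliet: the condition is testing the value produced by the assignment, not comparing anything.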
Fast forward a few months: I revisited the site, inadvertently reread my comment… and was immediately overcome by shock at what an alien I sounded like. In that short span, I had gone from someone who was constantly appalled and puzzled by how unfathomable the English used in programming explanations always is (case in point), to one of those people who freely speak in such alien syntax, without any recognition that they are doing so.
How did this happen to me? At the time of writing the comment, I had in fact made a mental note to use as much “normal English” as possible, yet I inevitably embedded programming terms, because their usage pins down concepts that would otherwise be muddled by the fluid nature of everyday language. I had been horrified by this before: programming gives rise to sterile, rigid, and very much alien ways of speaking that novices cannot really understand. One of my envisioned future projects was to “demystify” the (English) language of programming through a combination of more visualization and metaphor, yet here I was.
Part of me understands that this is the subtle transformation of mapping from one mental model to another, and that sounding like an alien to myself is perhaps a congratulatory hallmark. But part of me wonders if it truly has to be so. If we can close the gap between programming language and the language we use every day, can we make programming more accessible to the rest of us who are used to speaking but not coding?
To conclude, I want to point to an intriguing post on Software Carpentry by Greg Wilson about the robustness of programming languages:
I’d therefore like to throw out a challenge to programming language designers. Forget about parallelism or the esoteric corner cases of various type systems; instead, focus on robustness. How forgiving is your language? How well do programs written in it work when people make minor mistakes? Or to switch to industrial engineering terminology, what are your language’s tolerances?
The Strong Robustness Measure is the percentage of those programs that correctly reproduce the output of the intended program. The Weak Robustness Measure is the percentage for which the exact location of the error, and the fix required, are reported in terms a novice can understand. (I realize that what a novice understands is ill-defined, but you get the idea.) At a guess, Python’s SRM score is close to 0%; its WRM score is around 20-50%…and that “strict” languages like Java and Haskell do markedly worse on the second (without improving the first).
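Wilson’s Weak Robustness Measure is easiest to see with a concrete slip. The snippet below is my own sketch, not from his post: a one-character typo in a Python program, and the error Python reports for it. Whether that report reads as something a novice can understand is exactly what the WRM is trying to capture.

```python
intended = "print(sum([1, 2, 3]))"   # the program the author meant
mistyped = "pritn(sum([1, 2, 3]))"   # a one-character slip

exec(intended)  # runs as intended

try:
    exec(mistyped)
except NameError as err:
    # Python points at the unknown name; recent versions even
    # suggest a fix ("Did you mean: 'print'?").
    print("NameError:", err)
```

The typo doesn’t silently change the output (which would hurt the Strong measure); it fails with a message that names the offending identifier, which is the kind of tolerance Wilson is asking designers to optimize for.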
Even though his point is about a programming language’s tolerance for mistakes, I believe that at its core it is related to my argument about expressing programming in English. It boils down to this: can you express or construct something that demands to be highly specific, rigid, and predictable in a medium that allows for ambiguity? Or does that violate the very core of its philosophy?