What is obscure?

CASSIUS Did Cicero say anything?

CASCA Yes, he said something in Greek.

CASSIUS What did he say?

CASCA If I told you I understood Greek, I'd be lying. But those who understood him smiled at one another and shook their heads. As for myself, it was Greek to me.

from Act 1, scene 2, Julius Caesar (1599) by Shakespeare

What we cannot understand appears as Greek to us. We all recognize this experience, though from different contexts.

When we have reason to believe something is not gibberish (e.g. by recognizing the letters as Greek), we presume the words are parts of a valid expression. We know that someone who knows Greek would have no trouble understanding the words. Whether he or she would interpret the meaning of the expression as it was intended is another question altogether.

In natural languages, it's possible to be guilty of obscurantism: deliberate attempts to distort the truth. This is not possible in formal languages from the parser's point of view, but it is possible from a human perspective.

An expression has one (and only one) meaning in the context of the grammar. That said, the application might not do what the developer intended it to do. The application does, however, exactly what the code tells it to do.

The compiler follows instructions; it doesn't interpret semantics the way we do. The notion of obscure code, provided that the application executes, is therefore something subjective: a potential conflict between what the code tells the application to do and what the developer predicts it will do from his or her understanding of it.

An excellent blog named Obscure JavaScript collects, as the name suggests, obscure code written in JavaScript. In a blog post, we find this snippet. Consider it:

function sayMessage(title, name, suffix) {
  sayMessage.replay = () => sayMessage(...arguments);
  console.info(`Hello, ${title} ${name} ${suffix}!`);
}

sayMessage('King', 'Bob', 'The Magnificent');
// Hello, King Bob The Magnificent!

sayMessage.replay();
// Hello, King Bob The Magnificent!

Is it accurate to say it is obscure? This code works and does precisely what it is intended to do in relation to the grammar of JavaScript. We may or may not understand this code, but the code in itself cannot be obscure.

JavaScript has closures, which explains why the arguments can be saved for the replay. When a function is called, a lexical environment is established. This environment holds a local state. If we create another function within this environment, the inner function has access to the state of the outer function. A function in JavaScript is an object, and that's why we can write sayMessage.replay. Another source of confusion can be the use of the .info method of the console API. The console API has numerous lesser-known methods. Perhaps we're more accustomed to .log, but the API also includes .info (used here), .error, .dir and others.
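To see the two mechanisms - closures and functions as objects - in isolation, here is a minimal sketch; the names makeGreeter and greet are hypothetical, chosen for illustration:

```javascript
function makeGreeter(name) {
  // Each call to makeGreeter establishes a lexical environment holding `name`.
  function greet() {
    // The inner function closes over `name` from the outer environment.
    return 'Hello, ' + name + '!';
  }
  // A function is an object, so we can attach properties to it.
  greet.language = 'English';
  return greet;
}

var greet = makeGreeter('Bob');
console.log(greet());        // Hello, Bob!
console.log(greet.language); // English
```

The inner function keeps `name` alive after makeGreeter has returned, which is the same mechanism that lets sayMessage.replay reuse the original arguments.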

There is nothing obscure in the code; a more important question is whether this is good code. I believe this is what the author of Obscure JavaScript is aiming for: to uncover parts of JavaScript we experience as obscure - implicitly telling us they're not. Or rather, that we need to understand them even though we would express them differently.

We think things are obscure when we don't understand them. Code we understand, we can express differently - just as, in natural languages, we can express the same meaning 'with other words'. In JavaScript, for instance, we could formulate a pre-ES6 version:

function sayMessage(title, name, suffix) {
  sayMessage.replay = function() {
    var _title = title, _name = name, _suffix = suffix;
    sayMessage(_title, _name, _suffix);
  };
  console.info('Hello, ' + title + ' ' + name + ' ' + suffix + '!');
}

sayMessage('King', 'Bob', 'The Magnificent');
// Hello, King Bob The Magnificent!

sayMessage.replay();
// Hello, King Bob The Magnificent!

One might still argue that JavaScript is obscure. But ask, why is it obscure?

It's, if anything, obscure when compared to other languages. A formal language - in this case, a formal programming language that makes the Web tick - consists of a set of rules. If we abide by the rules, we can predict the outcome. And if we can predict the outcome, there is no magic involved; the behavior is not random. We know this. In the snippet we've considered, the statements and expressions do exactly what we would expect. And if they do, in what way is the code obscure?

Often when I try to grasp a subject matter, I have to pause and return later. Sometimes months pass. But my general experience is that when I return, even though I might have forgotten some things, it is quite easy to get back on track and expand my knowledge. This experience is quite universal, but we need to remind ourselves of it from time to time. What we experience as obscure today won't be tomorrow.

When we understand code we previously thought was obscure, we're in a position to ask the next question: is it good code? Could we rephrase it, and by refactoring make it clearer? Sometimes the answer is yes, on other occasions no. A far more important question is when to use 'obscure' code and solutions. That is, when it's motivated and when to refactor.

We produce meaning from contexts. It's hard, if not impossible, to determine whether the code we use is obscure if we don't specify the context in which it is included. Open Source projects are often written to be read by a large group of people. Following the tradition of the Linux project, it's often said, such projects aim for readability to a higher degree than other projects. In such a context, I guess the tolerance for what many would experience as 'obscure' is lower.

There is something 'academic' in the bad sense, meaning 'forced', about the snippet we've discussed. The example provided by Obscure JavaScript is limited, and not very hard to understand. We must use our imagination to conjure the proper setting for such a snippet. This is one of the main problems with programming textbooks, tutorials and YouTube videos in general. The examples used are always simplistic. But still, we need to speak about code. If we only wrote code, we'd most likely end up as obscurantists when discussing with other developers.

Excursus. On semicolons

There's no such thing as information overload, only bad design.

― Edward Tufte

Using semicolons is about exercising good taste by avoiding border cases and unnecessary complexity.

People have strong opinions about semicolons. I usually avoid discussing this topic because other topics interest me more. In the end, I believe consistency to be more important than whether you make use of semicolons or not. At the same time, however, we should note that consistency is often impossible when avoiding semicolons.

Something I often hear from enemies of semicolons in JavaScript is that it doesn't matter whether we use them or not. This is not true, or at least not always true. Someone who claims that semicolons don't matter is either ignorant of Automatic Semicolon Insertion (ASI) or an involuntary obscurantist.

There are countermeasures (proper settings in, for instance, ESLint or JSHint) that warn us when ASI is a problem. But even then, we can't be sure the chain doesn't break at some point. Errors could arise when we use Babel.js, a minifier, or any other technology that analyzes source code and builds an Abstract Syntax Tree (AST).
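As a sketch of such countermeasures, a minimal ESLint configuration could enable the core rules semi and no-unexpected-multiline, both of which target exactly these semicolon- and ASI-related cases:

```javascript
// .eslintrc.js - a minimal sketch using two of ESLint's core rules.
module.exports = {
  rules: {
    // require a semicolon at the end of every statement
    semi: ['error', 'always'],
    // flag lines where a line break would make ASI parse the code unexpectedly
    'no-unexpected-multiline': 'error'
  }
};
```

This only warns; as noted above, it cannot guarantee that every tool later in the chain interprets the code the way we intended.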

A compiler can't read our minds, and ASI is set up to decide for us, since JavaScript occasionally does need semicolons. JavaScript uses a compiler to interpret our code, but it is not a compiled language. Every time a parser deals with our code, there is a small chance it will break due to the absence of semicolons, since the JavaScript parser uses the semicolon as a delimiter. Clever linting can only warn us about where we most likely want semicolons. It is true that semicolons often don't matter, since ASI will misinterpret our code, relative to our intentions, only in border cases.
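One classic border case is a line break after return. A sketch (the function and property names are made up for illustration):

```javascript
// ASI inserts a semicolon immediately after `return`, so the object
// literal below is parsed as an unreachable block statement instead.
function getConfig() {
  return
  {
    semicolons: true
  };
}

console.log(getConfig()); // undefined, not an object
```

Moving the opening brace onto the same line as return removes the ambiguity.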

There is a difference between ending up with obscure code that we don't quite understand and choices we make that render the code obscure by design. Only if we used no Babel.js, no minifiers, and knew all the rules of ASI by heart would there be real arguments for not using semicolons. Why carry the extra cognitive load just to avoid typing semicolons?

On Stack Overflow, Reddit and Hacker News there are lots of examples of cases where the absence of semicolons in JavaScript makes the code break. This example is from Stack Overflow.

var fn = function () {
  // ...
} // semicolon missing at this line

// then execute some code inside a closure
(function () {
  // ...
})();
According to the author of the post, the compiler would, after the ASI process, interpret the code like this:

var fn = function () {
  // ...
}(function () {
  // ...
})();
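With a semicolon terminating the assignment, the two statements stay separate. A sketch of the repaired version (the executed flag is added for illustration):

```javascript
var executed = false;

var fn = function () {
  // ...
}; // the semicolon ends the assignment statement

// the parenthesized function is now a separate, immediately invoked statement
(function () {
  executed = true;
})();

console.log(typeof fn); // function
console.log(executed);  // true
```

The first function expression is no longer treated as something to call, so fn stays a function and the closure runs on its own.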

Uncertainty is something we want to remove whenever possible. The information overload a semicolon implies is not very large. If, on the other hand, problems arise when we don't use semicolons, we should consider it bad design - because it's not something that is ever supposed to happen.

By not using semicolons we end up in a situation where our tools might fail without proper settings. If we, say, use a good minifier and have semicolons, the minifier thinks for us. But if we don't have semicolons, it doesn't matter how good our minifier is; errors can still emerge, unless we make use of other tools that warn of ASI hazards. Are we absolutely sure our tools are correctly set up? Are we advanced enough to know all the details of AST, ASI and the other important topics we need? If yes, we will have the occasional semicolon injected anyway. And if our aim is consistency, why not always use semicolons?

We avoid ambiguity and obscurity in the eyes of the parser by the use of semicolons. This is why TC39 speaks about 'ASI hazards'. Even though I don't like the idea of a language feature that needs external tools to work properly, I think we should be allowed not to use semicolons. Why break the Web? The reason why the TC39 committee is still discussing 'ASI hazards' in the first place is,

it's a recognition by the committee that the no-semi style is likely to become more hazardous over time, and that the best way to avoid said hazards is to… use semicolons.

If we know the tools we are using, it's possible to avoid the hazards, for the time being. But how about tomorrow? If we programmers are lazy, isn't it more convenient not to bother about a problem and accept a solution that comes in a handy one-character box?

The JavaScript parser will keep using the semicolon as a delimiter (removing it would require a new language). We know that. If there is a solution (the use of semicolons) that creates the possibility of never having to think about semicolons, how parsers and different tools handle them and so on, why choose to make our applications vulnerable to problems - now and tomorrow? Again, by using semicolons in our code we avoid any such problem. We create cognitive space for dealing with other hazards. As Occam would have it, "[t]he simplest solution is most likely the right one."