Thoughts on language features and thinking

2021-05-07

In The Discovery of the Mind, Bruno Snell investigates the connection between the ancient Greek language and Greek thought, emphasizing how deeply the two are intertwined. For a modern person, it’s baffling to learn that the Greeks only had words for a few colors and used metaphors for the rest. A rich vocabulary of colors is a modern invention, one that grew in importance only with the rise of modern science.

Before this multitude of color words existed, people were not unable to speak precisely about the colors of things, but they had to go about it differently. In many ways, this is analogous to the yearly new features of ECMAScript - a language obsessed with, forced by necessity, backward compatibility.

Just like natural language widens or narrows our ability to speak about reality, the design of a programming language widens and limits how one thinks about applications. In the end, design and language features affect how we think about solutions to problems. This, parenthetically, is the main argument for why developers are advised to learn more than one language and paradigm.

In JavaScript, all new language features could, with a slight oversimplification, be viewed either as tools of convenience or as operational tools.

Tools of convenience affect the surface of things, while operational tools change language behavior at a deep level.

Examples of the first type would be the so-called ‘Array extras’ of ES5, methods such as map, filter, reduce, every, and so on (sometimes mistakenly attributed to ES6). These methods make our lives as programmers simpler while providing us with the means to carry out a behavior according to a plan. Characteristically, all features of this kind could be produced with custom-made functionality and often are included in user-made APIs. None of the Array or Object extras are difficult to implement, and most of them already existed in libraries such as Lodash and jQuery.
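
As a minimal sketch of what I mean - the helper name mapArray is my own invention, not part of any library - a user-land version of map takes only a few lines:

// A hand-rolled equivalent of Array.prototype.map, written as a standalone helper.
// The name `mapArray` is hypothetical; libraries such as Lodash expose similar functions.
const mapArray = (fn, xs) => {
  const result = [];
  for (const x of xs) {
    result.push(fn(x));
  }
  return result;
};

console.log(mapArray((x) => x * 2, [1, 2, 3])); // [2, 4, 6]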

Different languages think differently about whether functions such as these, if they’re missing, should be added to the standard library. C has remained small and has avoided expanding its standard library, C++ keeps growing, and Go, influenced by C, is designed to stay small. Sometimes functions in standard libraries disappear; in Python 3, reduce moved out of the built-ins and into the functools module, and JavaScript also has its forgotten lore.

Even though my distinction between tools of convenience and operational tools is artificial - a rhetorical construct - the latter kind of tool has a greater impact on our mental models, on how we think about code, and on what we can do with code.

The behavior corresponding to the class keyword was quite possible before class was introduced in ES6. But because of the ‘shape’ of the code needed to produce it, I think it’s a reasonable hypothesis that thinking in an Object-Oriented paradigm in JavaScript was harder. Introducing class opened gates to ways of thinking about software development previously obscured by clunky syntax and constructs.
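
As an illustration - a minimal sketch, with an invented Counter example rather than code from any real codebase - here is the same behavior expressed first with an ES5 constructor function and prototype, then with the ES6 class keyword:

// ES5: a constructor function with methods attached to the prototype.
function CounterOld(start) {
  this.count = start;
}
CounterOld.prototype.increment = function () {
  this.count += 1;
  return this.count;
};

// ES6: the same behavior, but the 'shape' of the code matches how many of us
// already think about classes from other languages.
class Counter {
  constructor(start) {
    this.count = start;
  }
  increment() {
    this.count += 1;
    return this.count;
  }
}

console.log(new CounterOld(0).increment()); // 1
console.log(new Counter(0).increment()); // 1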

I suspect that the opening of this gate also opened numerous associative gateways in the minds of people used to programming with classes and an Object-Oriented approach in other programming languages.

Dramatically put, perhaps it would even be possible to claim that new paths to the tradition were unlocked. The change, in this image, lowered the cognitive barrier to translating, for instance, Java code found online at GitHub, Stack Overflow, and other places, or Java code used in papers, books, and videos, into working code. It’s hard or impossible to measure how great an impact such things have, but it would be presumptuous and arrogant to claim that they have none.

The advent of arrow functions in ES6 made day-to-day programming easier, while Proxy and Reflect perhaps do not have a common use-case. At the same time, Proxy and Reflect can fundamentally change how the language behaves. Or rather, our perception of this behavior, our cognitive models.
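
A small sketch of what I mean - the logging handler below is an invented example, not a claim about a common real-world use: a Proxy can intercept every property read, something no amount of ordinary user code could do before ES6.

// A Proxy wrapping a plain object, intercepting every property access.
const settings = { theme: "dark", fontSize: 14 };

const tracked = new Proxy(settings, {
  get(target, property, receiver) {
    console.log(`read: ${String(property)}`);
    return Reflect.get(target, property, receiver);
  },
});

console.log(tracked.theme); // logs "read: theme", then "dark"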

Let’s make all this more concrete using two examples of how language may influence the way one thinks about behavior.

For a long time, I held the misconception that map was conceptually about transforming a list of values into another list, by taking a function that transforms an individual value and applying it iteratively.

Before learning the basics of functional programming, my cognitive model of map was boxed into usages such as,

const nums = [1, 2, 3];
const addOne = (x) => x + 1;
const numsTransformed = nums.map(addOne);

Since this is how map works in JavaScript, this is also how I thought about the map operation conceptually.

However, the map of functional programming is - I learned, at some point - about transforming a collection of values into another collection of values. map takes a function transforming an a into a b, and a list of a’s, returning a list of b’s.

What’s interesting about map is not primarily how data is structured but the transformation itself. Since this transformation concentrates on individual values, it’s not that interesting how individual values are modeled as parts of a larger data structure.

If we have a set of integers and a function incrementing a value by one, does it matter - from the point of view of the function increasing an integer by one - whether the integers are structured in an Array, a linked list, or some other data structure?
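
A small sketch to make the point - the { value, next } linked-list encoding below is my own, invented for illustration: the same increment function maps equally well over an Array and over a linked list; only the plumbing differs.

const increment = (x) => x + 1;

// A linked list represented as nested { value, next } objects - a minimal, invented encoding.
const list = { value: 1, next: { value: 2, next: { value: 3, next: null } } };

// map for this linked list: same idea as Array.prototype.map, different plumbing.
const mapList = (fn, node) =>
  node === null ? null : { value: fn(node.value), next: mapList(fn, node.next) };

console.log([1, 2, 3].map(increment)); // [2, 3, 4]
console.log(mapList(increment, list)); // { value: 2, next: { value: 3, next: { value: 4, next: null } } }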

It’s therefore not strange if we want to map the values of an Object just like a JavaScript Array. But the design choice in ES5 to attach map only to Arrays for a long time made this harder for me to see. I was blinded by what the language gave me for free, and I don’t believe I am alone in making this kind of cognitive fallacy. Implementing map for Object is straightforward in JavaScript,

// Extending Object.prototype is done here purely for illustration.
Object.prototype.map = function (fn) {
  // Object.keys visits only the object's own enumerable keys,
  // so inherited properties (including map itself) are skipped.
  return Object.keys(this).reduce(
    (acc, k) => ({ ...acc, [k]: fn(this[k]) }),
    {}
  );
};

const obj = {
  a: 1,
  b: 2,
  c: {
    d: 3,
  },
};
console.log(obj.map((x) => x + 1));

Or, using a more reasonable and performant definition that still returns a new object rather than mutating the original,

Object.prototype.map = function (fn) {
  const o = {};
  // Iterate own enumerable keys only; a plain for...in would also pick up
  // inherited enumerable properties, such as map itself.
  for (const key of Object.keys(this)) {
    o[key] = fn(this[key]);
  }
  return o;
};

In this example, the value of the key c would be mismatched and not result in anything meaningful, but on the other hand, neither would [1, 2, [3]].map((x) => x + 1).

In JavaScript, an iterator allows us to access a collection of values one at a time, for instance using a for ... of loop. By default, we can iterate collections of type Array, Map, Set, and String. Any object implementing [Symbol.iterator], returning an iterator, is iterable.
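
A minimal sketch of the protocol itself - the range object below is an invented example: any object that implements [Symbol.iterator] can be consumed by for ... of and by the spread operator.

// An invented 'range' object made iterable by implementing [Symbol.iterator].
const range = {
  from: 1,
  to: 3,
  *[Symbol.iterator]() {
    for (let i = this.from; i <= this.to; i++) {
      yield i;
    }
  },
};

for (const n of range) {
  console.log(n); // 1, 2, 3
}
console.log([...range]); // [1, 2, 3]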

The iterator interface provides tooling for bending JavaScript to suit a cognitive model of our choice. If we are more interested in the data - the values - of a collection than in how they’re ordered, perhaps we want to treat an Object similarly to an Array. For instance, we could spread an Object into an Array. Of course, this is already possible using Object.keys, Object.values, and Object.entries, but creating uniformity sometimes helps us think.

const obj2 = {
  a: 1,
  b: 2,
  c: {
    d: 3,
  },
};

// Symbol-keyed properties are not picked up by for...in or Object.keys,
// so this does not interfere with ordinary enumeration.
Object.prototype[Symbol.iterator] = function* () {
  const values = Object.values(this);
  for (const v of values) {
    yield v;
  }
};
console.log([...obj2].map((x) => x + 1));

The expressions natively present in a programming language are inextricably linked to how we think with that language. In the world of programming, it’s true that a crude language only permits crude thinking.

A non-crude language is Turing complete, but the expressions a programming language permits natively, or at least without effort, affect how we use it. The lambda calculus is not a language you’d want to build a REST API with.

It’s not very interesting that you could do anything if the means to do so presuppose near-infinite time. Being provided with the tools to solve any computation doesn’t give you the power to solve any computation; by the time we’ve built the Y combinator we’re exhausted and would rather have had a symbol for it.
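
To make the exhaustion concrete, here is a sketch of the Z combinator - the strict-evaluation cousin of the Y combinator - in JavaScript, purely as an illustration of how much ceremony recursion costs when the language doesn’t hand us a name for it:

// The Z combinator: recursion without naming the recursive function itself.
const Z = (f) => ((x) => f((v) => x(x)(v)))((x) => f((v) => x(x)(v)));

// factorial defined without referring to itself by name.
const factorial = Z((recur) => (n) => (n === 0 ? 1 : n * recur(n - 1)));

console.log(factorial(5)); // 120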

It’s easy to underestimate what it means to gain a new cognitive tool. On the other hand, too many tools can hinder us.
