Nomenclature? Old and New

Thinking/musing about a recent thread…

Where the talk drifted a little and ended up at Algol… Wondering about where some computing terms came from?

I always found call by name to be a weird one. Maybe just me. Call by name or call by value - where ‘name’ is really a pointer. (or call by reference in some circles?)

Formal parameters (are there informal parameters?)

I wonder how many started in early architectures that are still used today, even though those architectures are long gone, or have been succeeded by something else?

BCPL, for example, has vectors (rather than arrays, but I think I can see a vaguely mathematical background there)

Anyone think of any other examples? Or any others that just sound a little suspicious these days?

(the tag refers to this: Niklaus Wirth - Wikiquote)

Cheers,

-Gordon


I would expect some naming to come from the mathematical schools of thought, from Turing and Church and so on, and other naming to come from engineering or implementation. Perhaps the former for software and the latter for hardware?

“Core” for example is a description of the ferrite torus that stores one bit. At least, I think so… a cored apple is like that, although the core of the apple isn’t!

And “kernel” - that’s by analogy with soft fruit or nuts, I would think. It’s a shape thing - it’s the middle.

“Tube” is a shape whereas “valve” is a function - but they are different names for the same thing.

There’s a story about the naming of “transistor”… it was, I seem to recall, controversial.


While John von Neumann liked biological terms such as memory and organs, the IBM folks preferred more mechanical-sounding terms such as storage and functional units.

Many terms such as “software” and “firmware” started out as mostly jokes. Saying that “bit” is short for “Binary digiT” seems like a retcon to me, especially given the later and related byte and nibble.

When you move to a different human language things become even more muddled, since the jokes or references are completely lost. It always cracks me up when a talk in Portuguese has the speaker treating with all seriousness terms that would make English speakers smile.

Regarding Algol and call by name, to me “call by reference” would be more intuitive, but, I guess, this hadn’t been invented yet. “Call by name” makes some sense in the light of the rather sophisticated label/switch system of Algol. (Switches are not case clauses, but rather named structures of pointers to named labels, which are also subject to arithmetic and multi-level definitions. Calling a switch then selects the corresponding entry, much like ON GOTO in Basic.) In Algol, these types are just “names”. So it may have been quite intuitive then.
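For anyone who hasn’t met an Algol 60 switch, here is a rough Python sketch of the idea (the label names and strings are invented for illustration; Python has no goto, so functions stand in for the labelled sections):

```python
# Hypothetical sketch of an Algol 60 switch declaration,
#     switch S := L1, L2, L3;
# followed by "goto S[2]". A switch is a named structure of
# jump targets, selected by index, much like ON GOTO in Basic.
def L1(): return "at label L1"
def L2(): return "at label L2"
def L3(): return "at label L3"

S = [L1, L2, L3]       # the switch: an indexable list of targets

i = 2                  # switch designators are 1-indexed in Algol
result = S[i - 1]()    # "goto S[2]" lands at label L2
print(result)
```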

It’s not. The term was popularized by Claude Shannon (no surprise there!) in his classic 1948 paper “A Mathematical Theory of Communication”. From the introduction:

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log2 2^N = N.
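Shannon’s unit check is easy to verify numerically, for an arbitrarily chosen N:

```python
import math

# N two-state devices have 2**N possible states; taking log base 2
# of the state count recovers N, the capacity in bits.
N = 10
states = 2 ** N
bits = math.log2(states)
print(bits)
```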


Well, no, but there is a need to distinguish between what I call the parameters, which are the names to which supplied values will be bound in the function, and arguments, which are the actual values for any particular call. I feel that calling the former “formal parameters” makes this distinction clearer.

Actually the post to which you’re replying sent me to Wikipedia to refresh my memory on call by name, and no, it doesn’t seem to be at all the same thing as call by reference. As I understand it, it works as follows. Consider the following pseudo-code:

int x = 0;
int f():
    x += 1
    return x

int g(h):
    return h + h

print(g(f()))

If f() is passed to g() by value, it will be evaluated once, x will be set to 1, and g() will return 2. But if f() is passed to g() by name, the call f() will be bound to h, the first reference to h will call f() (setting x to 1), the second reference to h will call f() again (setting x to 2) and 3 will be returned.

In other words, it’s more like a macro, but there’s guaranteed to be no collision with variable names. I.e., you could have a different f defined in g() and it would not interfere with the f() you passed in, now known as h.

Or something like that. It’s late here. :-)
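The difference can be sketched in Python by passing a thunk explicitly (the g_by_value/g_by_name pair is invented for this sketch; Python itself always evaluates arguments before the call):

```python
x = 0

def f():
    global x
    x += 1
    return x

def g_by_value(h):   # h is an already-evaluated number
    return h + h

def g_by_name(h):    # h is a thunk; every use re-evaluates it
    return h() + h()

by_value = g_by_value(f())  # f() runs once: x becomes 1, result 2
x = 0                       # reset the counter
by_name = g_by_name(f)      # f() runs twice: 1 + 2 = 3
print(by_value, by_name)
```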

(I think here we need a passing reference to Knuth’s “Man or Boy” compiler test.)

In the 40s, Alan Turing’s (paper) design for the ACE had BURY and UNBURY where we would these days have PUSH and POP, or CALL and RETURN. In the text he speaks of burying and disinterring, which I find much more colourful:

When we wish to start on a subsidiary operation we need only make a note of where we left off the major operation and then apply the first instruction of the subsidiary. When the subsidiary is over we look up the note and continue with the major operation. Each subsidiary operation can end with instructions for this recovery of the note. How is the burying and disinterring of the note to be done? There are of course many ways. One is to keep a list of these notes in one or more standard size delay lines (1024), with the most recent last. The position of the most recent of these will be kept in a fixed TS, and this reference will be modified every time a subsidiary is started or finished. The burying and disinterring processes are fairly elaborate, but there is fortunately no need to repeat the instructions involved, each time, the burying being done through a standard instruction table BURY, and the disinterring by the table UNBURY.
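Turing’s note-keeping scheme is recognisably a return-address stack. A minimal sketch, with a Python list standing in for the delay line of notes (most recent last) and the list’s end playing the role of the fixed TS that points at it:

```python
# Sketch of Turing's BURY/UNBURY: keep a list of notes of where we
# left off, most recent last, and recover them in reverse order.
notes = []

def bury(note):
    notes.append(note)    # record where we left off the major operation

def unbury():
    return notes.pop()    # disinter the most recent note

bury(100)                 # start a subsidiary operation from address 100
bury(200)                 # a nested subsidiary
first = unbury()          # most recent note first: 200
second = unbury()         # then 100
print(first, second)
```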

What I meant was that I was guessing that John Tukey might have come up with “bit” as in “a very little piece” first and then added the “binary digit” explanation. That was a tradition and a popular thing to do still in the 1980s when I encountered this part of nerd culture. People would think up a funny name for their projects, like Lisp or Snobol, and then invent a reasonable sounding explanation for them like LISt Processing or StriNg Oriented and symBOlic Language.


Also “PUSH” and “PULL” (compare 6502 stack instructions).
And then there’s the more recent pair of “append” and “remove”. (To my utter annoyance, Python manages to mix up these era-defining pairs by using “append” and “pop” – it’s like opening a block with “BEGIN”, but closing it with “}”. Luckily, Python uses indentation instead. :wink: )
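The mixed pair in action, for what it’s worth:

```python
# Python's list as a stack: "append" to push, "pop" to pull.
stack = []
stack.append("BEGIN")
stack.append("}")
top = stack.pop()   # last in, first out
print(top)
```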

Regarding call by name, there are also hard and symbolic references. Compare the following Perl script:

#!/usr/bin/perl

use strict;

# disable strict references to use symbolic ones

no strict 'refs';

my $a = "b";
${$a} = 2;   # also: $$a = 2

print $b;    # prints 2

(Notably, this doesn’t work, if you lexically define $b using “my”, but works with a “local” definition.)
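A rough Python analogue of the same trick looks up the target variable’s name as a string in the global namespace (and, paralleling the “my” vs “local” note above, it only works for globals, not for lexically scoped locals):

```python
# Symbolic-reference analogue: the string stored in `a` names the
# variable we assign to, like ${$a} = 2 under no strict 'refs'.
a = "b"
globals()[a] = 2
print(b)   # the global b now exists and holds 2
```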

If nothing else the nomenclature lets Niklaus Wirth make a pretty good joke:

He said he was once asked how to pronounce his name, and replied, “If you call me by name, it is Neeklaws Veert, but if you call me by value, it is Nickle’s Worth”.
