“Should array indices start at 0 or 1? ..."

http://exple.tive.org/blarg/2013/10/22/citation-needed/

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

(read the whole story; the link doesn’t promise much, but it’s worth it)

3 Likes

The real problem is the limit: < MAX or <= MAX, depending on where you start. Back then it was X + offset, where X was the index. We still have to carry large offsets around just for that old idea.
You had one data size, so the increment was always 1, not like today with scaling by 1, 2, 4 or 8. A 32768-word memory then is 4096 words now.
It would be nice to be able to define things like arrays more dynamically, for the amount of memory you have, rather than using malloc for everything:
if max memory is small, max is small;
if max memory is medium, max is bla bla bla.
What if a -1 index returned the defined data size and -2 gave the maximum size?
Ben.
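
To make the < MAX versus <= MAX point concrete, here is a minimal C sketch of the two loop-bound styles over the same storage (the names and the size are made up, not tied to any particular old machine):

```c
enum { MAX = 100 };
static int data[MAX];          /* valid elements: data[0] .. data[MAX - 1] */

/* 0-based loop: exclusive limit, i < MAX */
static void clear_zero_based(void)
{
    for (int i = 0; i < MAX; ++i)
        data[i] = 0;
}

/* 1-based convention on the same storage: inclusive limit, i <= MAX */
static void clear_one_based(void)
{
    for (int i = 1; i <= MAX; ++i)
        data[i - 1] = 0;
}
```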

Interesting tracing back - would not have expected Martin Richards to figure in the story! I would probably have gone for a programmers-were-mathematicians-not-engineers kind of argument, but if the article is right it was much more pragmatic (or brutal) than that.

1 Like

I find it very interesting that Richards uses “monadic” and “dyadic” instead of “unary” and “binary”. Perhaps the Greco-Roman wars are not quite over?

I’m not quite sure I agree with all of the article’s conclusions, however:

But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler. […] Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

If I understand correctly, the author is arguing that the calculation of v!i in BCPL was performed at compile time, not at run time, and this is simply not true. It may have been true in more cases than it is in languages with dynamic allocation, but offsets could still need to be computed at run time in that context (consider the degenerate case of a program implementing its own allocator to manage memory; pointer-based or even index-based data-structure traversals have the same property). And the work done “ahead of time” by the compiler would be exactly the same whether the initial index is zero or one; it would simply emit slightly different instructions in each case.
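
To illustrate that last point (in C rather than BCPL, so purely as a hedged sketch with made-up function names): with a run-time index, the address arithmetic happens at run time under either convention, and the only difference is a constant adjustment the compiler folds into the addressing:

```c
/* Illustrative only: both functions do a run-time address calculation;
 * the choice of base merely changes a constant offset. */
int get_zero_based(int *v, int i) { return v[i];     /* *(v + i)     */ }
int get_one_based (int *v, int i) { return v[i - 1]; /* *(v + i - 1) */ }
```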

All in all, I find the article very interesting and it does expose some wonderful history, but I don’t think the thesis statement — that we use zero-indexed arrays because of compile time on an IBM mainframe — holds water.

4 Likes

The only context which matters, for this story, is the context of what Dr. Martin Richards was using at that time. It doesn’t matter that other implementations could do things differently. What matters is what that implementation was doing.

But … well, this is a fun story, but it’s not ultimately so critical. The specific reasons for the originating example of zero-indexed arrays are an interesting historical footnote; what’s more important are the reasons others took up the usage. Was it just unquestioning copying? Was it because it was really useful? If so, are those reasons valid?

Speaking as someone who has programmed in numerous 1-indexed and 0-indexed languages … there’s no contest. 0-indexed makes life so much easier in so many ways. The reasons for copying the practice are indeed valid.

The blog post’s argument that our incuriosity about the past endangers our agency in shaping the future? Well … maybe, but I think anyone who’s been in software development for a while knows that we absolutely have no idea what we’re doing anyway. It’s nothing but layers upon layers of lava. Everyone thinks the latest new thing will fix all the problems with the previous new thing that didn’t fix anything, and it all just makes things worse and worse. The real reason we’re still using messed-up garbage like C++? Because sticking with just a few lava layers is at least better than adding more and more layers with a track record of making things even more of a nightmare.

My point is … everyone in software development “knows” that if we only got our act together, we could fix everything. It’s just … people with enough experience (like myself) know that we’re definitely wrong. Yeah, I’ve got opinions about what will fix software languages/development. The only difference is that I know my opinions are wrong. No matter how much I think they’re right, the track record of geniuses plenty smarter than me is … not promising.

I notice from his Wikipedia page that Martin Richards has a maths degree, so I might go back to that argument! (But then, so do I, so I might be biased.) For me, Dijkstra’s argument, linked in the article albeit dismissively (“elegance”), holds some water.

Anyhow, still a good read, thanks @jhi!

1 Like

My point is that it was not necessarily true even in the implementation that Martin Richards was using, because whether or not that computation can be folded to a constant at compile time depends on the program being written, not on the compiler. (Of course, modern compilers can do it more often than compilers of that time could, due to having more resources available to devote to optimization.)

I must agree. :wink: And it’s not clear to me at all that 0-based versus 1-based arrays is even an issue of correctness or betterness or incorrectness or worseness. It’s simply … a decision. Now, arbitrary-base arrays may be a different story; but again, if not used judiciously, they’re probably dangerous!

I think this is more than valid when we’re considering ranges. Say we have 4K of memory: the valid range starts at 0 and is limited by (but does not include) 0x1000, the total amount. We may navigate in this space, or address deliberate slices, by applying offsets to the base address and by limiting the extent of a subrange. This is also how bare metal works, either by explicit address arithmetic or by implicit offsets, as with index (or B-) registers.
Mind that when the first low- and mid-level languages came about, everyone in that specific task space was coming from bare metal: those who invented and implemented those languages as well as their users. High-level languages like Algol, on the other hand, were much more abstract in concept (Algol was designed as a general description language, which might or might not have a machine implementation), so traditional mathematical concepts prevailed over anything related to the implementation (which would probably have been considered an impurity of the language) and could be delegated to the compiler.
We may conclude that zero- and one-based indices are pretty much expressions of their respective conceptual spaces.
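
A small C sketch of that base-plus-offset view (the names and the slice type are hypothetical, just to illustrate the idea):

```c
#include <stddef.h>

enum { MEM_WORDS = 0x1000 };                 /* 4K words: addresses 0 .. 0xFFF */
static unsigned mem[MEM_WORDS];

/* A "slice" is nothing more than a base offset and an extent. */
typedef struct { size_t offset, length; } slice;

/* Element k of a slice lives at base + offset + k; valid k is 0 .. length - 1. */
unsigned read_slice(slice s, size_t k)
{
    return mem[s.offset + k];
}
```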

For Wirth’s languages and their derivatives you can choose whatever index range you wish, not just 0 or 1, so just pick what looks best for the particular function you’re coding. Fortran allows this as well, and some old BASICs had the possibility too.
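
C itself offers no such choice, but as a hedged sketch the idea can be emulated with a small (hypothetical) macro that shifts a chosen lower bound back to 0:

```c
/* Hypothetical helper: index 'arr' as if its first valid index were 'lo'. */
#define AT(arr, lo, i) ((arr)[(i) - (lo)])

double rainfall[31];                 /* conceptually days 1 .. 31 of a month */
/* AT(rainfall, 1, 17) refers to the entry for day 17. */
```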

The article also talks about all kinds of features and tools that people keep “inventing” even though they were already used decades ago, and were either abandoned or simply overlooked. Quote: “How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C?” I actually implemented a “foreach” for C, and I’m absolutely sure I’m not the only one. I got it from Perl, though, not directly from BCPL.
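
For what it’s worth, here is a minimal sketch of one way a “foreach” over a fixed-size array can be done as a C macro (just an illustration of the general idea, not the implementation mentioned above):

```c
#include <stdio.h>

/* Walk a pointer over every element of a fixed-size array. */
#define FOREACH(p, array) \
    for ((p) = (array); (p) < (array) + sizeof(array) / sizeof((array)[0]); ++(p))

int main(void)
{
    int nums[] = { 2, 3, 5, 7 };
    int *n;

    FOREACH(n, nums)
        printf("%d\n", *n);

    return 0;
}
```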

Another point from the article that appeals greatly to me: it links Bret Victor’s not-1973 talk, The Future of Programming:

2 Likes