I shall try to go beyond just reporting what we have done and how, and I shall try to formulate as well what we have learned.
…
I am convinced more than ever that this type of work is very difficult, and that every effort to do it with other than the best people is doomed to either failure or moderate success at enormous expense.
Some interesting technical points in here - well worth a read.
if a segment of information, residing in a core page, has to be dumped onto the drum in order to make the core page available for other use, there is no need to return the segment to the same drum page from which it originally came. In fact, this freedom is exploited: among the free drum pages the one with minimum latency time is selected.
A next consequence is the total absence of a drum allocation problem: there is not the slightest reason why, say, a program should occupy consecutive drum pages. In a multiprogramming environment this is very convenient.
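The "minimum latency" selection Dijkstra describes can be sketched as a small scheduling routine. This is a hypothetical reconstruction, not the X8's actual code: the page-to-sector layout, the counts, and the function name are all assumptions, but the core idea is just picking the free drum page whose start will pass under the heads soonest.

```c
#include <stddef.h>

#define DRUM_PAGES 64   /* assumed number of drum pages */
#define SECTORS    64   /* assumed sectors per revolution */

/* Hypothetical sketch: among the free drum pages, pick the one whose
 * start sector is the shortest rotational distance ahead of the drum's
 * current position, i.e. the page with minimum latency. */
int pick_drum_page(const int is_free[DRUM_PAGES], int current_sector)
{
    int best = -1, best_wait = SECTORS + 1;
    for (int p = 0; p < DRUM_PAGES; p++) {
        if (!is_free[p])
            continue;
        int start = p % SECTORS;   /* assumed: page p begins at this sector */
        /* sectors to wait until 'start' comes around, modulo one revolution */
        int wait = (start - current_sector + SECTORS) % SECTORS;
        if (wait < best_wait) {
            best_wait = wait;
            best = p;
        }
    }
    return best;  /* -1 if no drum page is free */
}
```

Because any free page will do, the allocator is free to optimize purely for latency, which is exactly why there is no drum allocation problem to solve.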
Wikipedia notes that the machine introduced the semaphore, a primitive needed for the operating system. Dijkstra’s paper describes sequential processes and even has some pseudocode looking rather like Occam to me, although the whole effort comes 10 years before Tony Hoare’s Communicating Sequential Processes.
That kind of pseudocode (to my eye it looks much like Pascal, but it is probably more Algol-y in origin) is very common in classic algorithm papers. Looking at that takes me back to university…
I’ve just taken the time to read the PDF and it’s full of good stuff. A forerunner to Communicating Sequential Processes? Maybe… It’s also similar to how I implemented multi-tasking in my own little (BCPL-based) operating system - maybe there really are only so many ways to do something…
One thing I noted:
In my experience, I am sorry to say, industrial software makers tend to react to the system with mixed feelings. On the one hand, they are inclined to think that we have done a kind of model job; on the other hand, they express doubts whether the techniques used are applicable outside the sheltered atmosphere of a University and express the opinion that we were successful only because of the modest scope of the whole project.
University vs. the real world, again.
Also - as noted in other threads here: testing! That’s the whole verification stage again, or pre-again - it just shows that testing/verification was an essential part of development even back in the mid 1960s… Even on a system that by today’s standards is no better than a little microprocessor system. In fact, I suspect the entire system could be emulated on a suitable 6502 running at (say) 8MHz. The challenge is the 27-bit memory system though… I’m not sure if it’s 32K bytes or 32K words of 27 bits. I’m suspecting the latter, which would make it more interesting… But for any low-spec 32-bit system it’s fairly trivial.
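If it is 32K words of 27 bits, emulating the store on a 32-bit host is indeed straightforward: each 27-bit word fits comfortably in the low bits of a 32-bit integer, with a mask applied on every store to get the wrap-around behaviour. A quick sketch (the sizes and names are my assumptions, following the 32K-words guess above):

```c
#include <stdint.h>
#include <stddef.h>

#define WORDS     (32 * 1024)        /* assumed: 32K words of core */
#define WORD_MASK ((1u << 27) - 1)   /* keep only the low 27 bits */

/* Each 27-bit word lives in the low 27 bits of a uint32_t;
 * masking on store makes values wrap at 27 bits, as the
 * emulated machine's arithmetic would. */
static uint32_t core[WORDS];

void store_word(size_t addr, uint32_t value)
{
    core[addr] = value & WORD_MASK;
}

uint32_t load_word(size_t addr)
{
    return core[addr];
}
```

That is 32K × 4 bytes = 128 KB of host memory for the whole store - trivial for any 32-bit system, as noted, and only awkward for something like a 6502 where both the word width and the total size exceed what the machine handles natively.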