Cybersyn and Stafford Beer

One problem with command economies is information overload: suppliers and consumers need to be matched, in a complex interdependent web of relationships. Stafford Beer, the Mancunian cybernetician, stepped up to help Chile get things sorted out…

The principal architect of the system was British operations research scientist Stafford Beer, and the system embodied his notions of organisational cybernetics in industrial management. One of its main objectives was to devolve decision-making power within industrial enterprises to their workforce in order to develop self-regulation of factories.

2 Likes

This story has always intrigued me since I first read about it here. I would love to have seen how it played out had they gotten it working (damn meddling CIA).

1 Like

This story amazed me too when I first heard about it a few years ago. It left me with a bitter taste as well.

I have written an article (it’s in Greek though …) about Stafford Beer and his endeavours in Chile helping President Allende. It evolved into a comparison of central planning with Hayek’s theories about information overload and von Mises’ polemic against a socialist society.

Linear programming was invented and first used in the USSR, by Kantorovich, as an early effort towards scientific socialism. Unfortunately it was not matched by comparable computing hardware research in the USSR.

Stafford Beer had also proposed the “algedonic meter” (from the Greek algos, pain, and hedone, pleasure), which was to be supplied to each household as a mass feedback device. Today dozens of social media platforms extract similar feedback via sentiment analysis of users’ posts.

For me, the technical challenges of central planning can be met with today’s hardware. Naively put, the use of every screw, cog, capacitor etc. in the economy must first be identified (via digital blueprints), and then software must automatically decide how much to produce, when, and where to deliver it. I argue that a naive linear system of very sparse equations describing this can be solved with the current state of the art. As an indication: in this post some argue the limit in 2012 was 10^5–10^8 variables: stackoverflow. Of course 10^5 is too little, but 10^8 is getting there, and with tricks and more memory it can be pushed a few orders of magnitude further.
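To make the idea concrete, here is a minimal sketch of the kind of sparse optimisation problem described above, using SciPy’s `linprog` with the HiGHS solver (which accepts sparse constraint matrices). The plants, costs, capacities, and demand figure are all hypothetical toy numbers, not anything from Cybersyn:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# Hypothetical toy model: two plants (variables x0, x1) supply one part.
# Minimise total production cost subject to meeting demand exactly,
# within per-plant capacity limits.
cost = np.array([2.0, 3.0])                 # cost per unit at each plant
A_eq = csr_matrix(np.array([[1.0, 1.0]]))   # x0 + x1 == demand (sparse)
b_eq = np.array([100.0])                    # total demand for the part
bounds = [(0, 80), (0, 80)]                 # plant capacities

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x)  # the cheaper plant runs at capacity: [80. 20.]
```

A real economy-wide model would have millions of variables, but the constraint matrix stays extremely sparse (each part appears in only a handful of equations), which is exactly what modern interior-point and simplex solvers exploit.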

More difficult than building the hardware and software to solve this is deciding whether doing so is in the interests of society.

For: eliminate waste by more efficient production.

Against: the very word “central”. Would it stifle individual creativity? Would a ZX Spectrum or an Altair ever be produced in such a system?

I believe it won’t, and invention will continue: money is a vulgar incentive for pushing the state of the art or inventing new things. It’s prevalent today, though it wasn’t always (remember the laurel wreath as the prize in the ancient Olympic Games). And most researchers work at universities for relatively little, with very few capitalising on their work through patents.

Sorry, my post is quickly becoming political, but my motivation is a society where I have free time to do the things I love while the tedious parts are handled by the computer and the robot, and there is no middleman to take a percentage.

Thanks for reminding us of Stafford Beer.

1 Like

If you like this sort of historical/philosophical/political analysis, you might like the unusual book Red Plenty by Francis Spufford - half novel, half documentary, about the Soviet system in the 50s, and the brief promise that central planning would result in a better outcome than systems seen elsewhere. There was information overload: when it takes more than a year to calculate this year’s targets, the system won’t converge. Here’s something related from our real world:

1 Like

Thanks Eds for the links. I have the book. Sure, there was information overload when doing it semi-manually and mostly on paper in the 50s. The question is: would there still be, with current and new technology?

1 Like

Arguably Google’s original PageRank (1996) was this kind of exercise: trying to bootstrap a coherent set of values for things, when the value of each thing implies the values of some other set of things. I’m sure the compute power is up to the job these days, but I’d expect trouble anyway (“misaligned incentives” and “you get what you measure”).
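The bootstrapping idea can be shown in a few lines: each page’s value is implied by the values of the pages linking to it, and power iteration finds the consistent fixed point. The four-page link graph below is made up for illustration; only the damping factor 0.85 comes from the original PageRank paper:

```python
import numpy as np

# Hypothetical 4-page web: links[i][j] = 1 means page i links to page j.
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

# Column-stochastic transition matrix: each page splits its vote
# evenly among the pages it links to.
M = (links / links.sum(axis=1, keepdims=True)).T
d = 0.85                       # damping factor
n = M.shape[0]
rank = np.full(n, 1.0 / n)     # start from a uniform guess

# Power iteration: repeatedly re-derive each value from the others
# until the whole set of values is self-consistent.
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

print(rank)  # page 2, linked to by all the others, gets the highest rank
```

The same circularity — values defined in terms of other values — is what an economy-wide planning calculation faces, which is why convergence speed matters so much.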

Personally, I rather enjoy utopias such as Banks’ The Culture, where there’s plenty of everything for everyone. And AIs. And yet, still enough trouble to create drama. (See perhaps the previous thread Retrocomputing in SF stories)

But I’d like to steer a course here that’s closer to retrocomputing, and the history of computing, and a little further away from political theory and practice…

1 Like

I agree with what you say. (re: 1st and last paragraphs)

1 Like

I’m now reading this, and its chapter featuring Sergey Lebedev working late with the БЭСМ mainframe contains a clear — yet certainly lyrical — overview of how a valve/tube computer operates. The book is a (very strange) work of fiction, but I’m enjoying it immensely. Other readers might, too.

1 Like