"Frankly it blew my mind": Tron (1982)

Nice piece about 1982’s Tron (“From cyberspace to AI, Steven Lisberger’s 1982 sci-fi classic was way ahead of its time. The team behind it explain how they made a game-changer.”) - some pull quotes below which are as much social as technical… but first here’s a making-of short:

Found that here, where we read

To create the computer animation sequences of Tron, Disney turned to the four leading computer graphics firms of the day: Information International, Inc. of Culver City, California, who owned the Super Foonly F-1 (the fastest PDP-10 computer ever made and the only one of its kind); MAGI of Elmsford, New York; Robert Abel and Associates of California; and Digital Effects of New York City. Bill Kovacs worked on this movie while working for Robert Abel before going on to found Wavefront Technologies. The work was not a collaboration, resulting in very different styles used by the firms.

Also from there, this sequence of excerpts showing the CGI:

Interview giving the Abel & Associates perspective:

On the Foonly F1:

More on III:

From the headline article:

with 40 years’ hindsight, Steven Lisberger’s sci-fi adventure Tron was the shape of things to come: in cinema, in real life, and in virtual life. As a piece of entertainment, it is admittedly no classic, but thematically, Tron anticipates issues we are still grappling with today: artificial intelligence, digital identity, privacy, personal data, the dominance of big tech.

Tron also anticipated the digital future of film-making. It was the first movie to incorporate lengthy sequences of entirely computer-generated imagery (CGI) – a then-unprecedented 15 minutes’ worth. Nobody had seen anything like it. As such, Tron paved the way for the current era of digitally enhanced spectacle, influencing film-makers such as James Cameron, George Lucas, Peter Jackson, Tim Burton, the Wachowskis (The Matrix bears many similarities to Tron) and former Pixar chief John Lasseter, who once said: “Without Tron there would be no Toy Story.”

Tron emerged in the culture at the precise moment when computing power was being liberated from its military-industrial strongholds and put into the hands of the people, and that is also the story in the film.

In late 1979, while developing Tron, Lisberger and his co-writer Bonnie MacBird visited the Palo Alto Research Centre (Parc).

After seeing early video games such as Pong, Lisberger became interested in computer graphics. He met pioneers including Ed Catmull, the future co-founder of Pixar, who was at the New York Institute of Technology, and Phil Mittelman at MIT, whose company MAGI was making 3D tank simulations for the US military. Mittelman showed Lisberger a virtual artist’s mannequin he’d created. “Frankly, it blew my mind,” Lisberger recalls. “This was, like, the magic realms of wizards. As an artist, I thought: ‘We should be dabbling in this stuff.’”

Lisberger was also thinking about computing power being in the hands of the state, collecting citizens’ personal data: tax records, driving licences, and so on. “I’m already in your system. So why is it I don’t have access to myself?” As such, Tron embodies the utopian dream of the early computing era. “It was a story of rebellion and revolution, and founding a new frontier that would enable a new civilisation to take hold.”

Technically, Tron is a mix of live action, old-school animation and CGI. All three elements were challenging. All the special effects were added in post-production, so the actors were performing on blank, entirely black sets.

Tron’s distinctive glowing circuitry was achieved through a technique called backlight animation, which involves making a negative of each frame and hand-painting the glowing areas. There were 75,000 frames to do; more than half a million pieces of artwork.
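Those figures imply some striking per-frame numbers. A quick back-of-envelope check, assuming the standard 24 frames/sec (an assumption – the article doesn’t give the projection rate):

```python
# Back-of-envelope check on the article's backlight-animation figures.
# Assumption (not from the article): 24 frames per second.
frames = 75_000
artwork_pieces = 500_000  # "more than half a million pieces of artwork"

minutes_of_footage = frames / 24 / 60
layers_per_frame = artwork_pieces / frames

print(f"~{minutes_of_footage:.0f} minutes of backlit footage")   # ~52 minutes
print(f"~{layers_per_frame:.1f} hand-painted layers per frame")  # ~6.7
```

So roughly seven separate pieces of artwork per frame, for the better part of an hour of footage.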

Tron’s CGI elements were an entirely separate process. Computer graphics had been used in movies before Tron, but only in brief snippets. In 1973’s Westworld there is a clip of a robot’s-eye pixellated view, for example. Star Wars and Alien both feature 3D wireframe graphics projected on screens. Only a few companies could produce such images, each of which had their own room-sized computer and their own custom-built software.

Tron’s animators had to map out the CGI scenes on graph paper, then calculate the coordinates and angles for each element in each frame. Computer engineers would then input all the numbers manually. And there was no way of seeing the results until the images were printed on to 35mm film and projected in the theatre.
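To get a feel for the volume of manual data entry this implied, here’s an illustrative sketch – the element count and parameters per element are entirely hypothetical numbers, not production figures:

```python
# Illustrative sketch (hypothetical numbers, not production figures) of the
# data-entry volume when every element's position and orientation had to be
# keyed in by hand for every frame.
seconds_of_shot = 10
fps = 24
elements_in_shot = 5       # e.g. light cycles, arena walls, virtual camera
numbers_per_element = 6    # x, y, z position plus three rotation angles

frames = seconds_of_shot * fps
total_numbers = frames * elements_in_shot * numbers_per_element
print(total_numbers)  # 7200 hand-keyed values for one 10-second shot
```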

The spin-off Tron video game actually made more money than the movie. The film was even disqualified from the best special effects Oscar, since using computers was considered “cheating”.


Trivia: The PARC visit was significant to Bonnie MacBird in more than one way: she went on to marry Alan Kay. Allegedly, the user character in Tron was partly inspired by him.

Regarding the graphics: it can’t be overstated how exotic (and difficult to make) smooth gradients were before computer-generated graphics were everywhere.


Tron’s special-effects disqualification bugs me – even though it reflected the man-on-the-street/media view of computers at the time. Computers were there in mid-70’s TV shows like “The Six Million Dollar Man”, where they’d feed the data into the computer and it would “figure it out” for them.

Still, they should admit their ignorance and give Tron some kind of retroactive award.

They’ll need to do it soon, before the movie version of DALL-E comes along and you really can get a computer to do all the work of making a special effect simply by asking.

I asked both Liskov and Kay whether the name of the character CLU has anything to do with the programming language. Neither would confirm nor deny.

There’s also a character called CRAM - microcode control RAM?


I don’t know if we’re talking about the same character, but I remember one called RAM. He was the actuarial program. There was another called Chrome, as I remember, who died in the “ring” game, I think it was called.

I’ve been in contact with Alan Kay over the years, and one thing I’ve noticed is I’ve never seen him respond to “yes or no” questions. He responds more to questions where he can give an explanation about concepts, or can talk about the history of something. I wondered why he wouldn’t respond to such questions, for a while, but came to the conclusion (which I think he later confirmed in a presentation he gave) that he responds best to interesting questions. “Yes or no” isn’t that interesting, even though it might be a burning question on our part. :slight_smile:

A question I’ve had for a while about the movie arose when I read some documentation on the Burroughs B5000, a model that Kay has repeatedly brought up as a great system, one of his favorites. My interest was really piqued when I found out that its operating system was called the Master Control Program (MCP). I thought, wait a minute! That was the villain in Tron…right? I haven’t asked him if his interviews with MacBird were what led to that character name, but I’ve wanted to. :slight_smile: No article I’ve read on this part of the backstory has talked about that.

Incidentally, Unisys, the company Burroughs merged into, has continued selling computer systems based on the B5000 design that Bob Barton came up with. They still call them “MCP systems.”

I’ve kind of wondered if he soured on the movie somewhat. He made the comment that his marriage to MacBird “worked out better than the movie.” He also said that MacBird wrote the script for Tron on a Xerox Alto (I think at his place), which may make it the first movie script edited on a computer. Anyway, he’s also said that the script got whittled down from what she wrote, and so didn’t turn out as well as he thought her version was, something like that. It sounded like she not only picked his brain about computing ideas, but that they might also have collaborated some on her version of the script, though I haven’t heard any history of it confirm that.

He seems to have had an interest in screenwriting at one time. I heard him tell the story once of how he wrote a treatment for Steven Spielberg for “E.T.”, on the part of the story where E.T. begins to communicate with Elliot. He came up with what he thought was a more realistic scenario for the two to begin communicating, but Spielberg threw it out, instead going with “E.T. watches TV, and learns to communicate that way,” which Kay said was not realistic at all.

Re. CLU: the concept of “little computer people” on “the game grid,” like CLU, came out of the computing concept of agents, which had been around by then for almost two decades – programs that would learn about your wants and needs and do work on their own, finding information and resources to serve them.


What really surprised me was to learn that the computer animators of the time didn’t have any automation in their animation process. Their systems were only capable of rendering still frames. So, if you wanted motion, you had to carry out the calculations for them yourself. You could use a computer to do the calculations, but there was no system that would both render and do calculations for motion, which could automatically be fed back into the rendering system to render more frames. The producers had to carry out the motion calculations and renderings as separate steps, which meant that once the motion calculations were done, they had to be manually translated into frames, and then the coordinates for each frame had to be manually keyed into the rendering system. So, data entry was a big part of the process.

This was why the fact that the movie contained 15 minutes of CG animation was considered insane! Given that Disney used a higher frame rate for their animation than other production houses (the typical frame rate for animation is 24/sec., IIRC. So, Disney’s was higher than that), this meant keying in lots and LOTS of coordinates for every second of CG film. And they didn’t have an intermediate step. They didn’t have wireframe animatics or dailies where they could review whether they’d gotten their motion calculations correct, or if the virtual camera was in the right orientation, or moving correctly, or if everything was in the shot. They had to find all that out when the final renderings were done, and they had to view it on a big movie screen! I mean, man, talk about pressure! If someone screwed up, everyone could see it, and it meant doing the process for a few minutes of animation all over again.

There had been rendering systems that could do this by around 1970 –
e.g., “GENESYS” on the TX-2 (1969), or a 3D polygon-based key-frame/skeleton animation system by Marceli Wein and Nestor Burtnyk on a SEL 840A in Canada (ca. 1970).
I guess the real problem was synchronization with real-life frames and controlling multiple layers.

“GENESYS: An Interactive Computer-Mediated Animation System”:

(Mind what we now would call UI controls, like tabbed views or sliding track indicators.)

“Key Frame Animation”:

Mind that there’s a mouse attached to the SEL 840A (featuring 8 kilowords of memory) in 1971!
(The blinking list selections also remind me of the selections in the HAL 9000 status screens in “2001” – I have no idea if there was any influence or a common ancestor.)

Baecker, Ronald M. (1969). Interactive Computer-Mediated Animation. PhD Thesis, MIT.
PDF: http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-061.pdf

Burtnyk, N. & Wein, M. (1976). Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation. Communications of the ACM, 19(10).
PDF: https://dl.acm.org/doi/pdf/10.1145/360349.360357


Good to know. I don’t know why the systems Tron’s producers used could only do still frames, but that’s what they said. One of the technical producers said that if you wanted a light cycle to follow an S-curve, you had to calculate the curve out (they used a separate computer for this), print out the coordinates, and then hand-key the coordinates for each frame. Perhaps they didn’t have enough memory in the systems to do both rendering and calculations for object movement. (Mind you, the frames were at pretty high resolution for the time. I’ve never gotten a clear fix on what resolution they used, but I know the movie was shot on 70 mm film, and a lot of the curves on objects look very smooth, so the frame buffer would’ve been quite large.)
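As a sketch of the kind of per-frame calculation being described, here’s what tabulating a hypothetical S-curve path might look like – the curve, speed, and frame count are all made up for illustration; the point is that every printed row then had to be keyed in by hand:

```python
import math

# Sketch of the per-frame calculation described above: sample a hypothetical
# S-curve (a sine sway along a straight path) once per frame and tabulate
# position plus heading angle -- the numbers that then had to be printed out
# and hand-keyed into the rendering system.
fps = 24
seconds = 2
total_frames = fps * seconds

for frame in range(total_frames):
    t = frame / (total_frames - 1)         # 0.0 .. 1.0 along the path
    x = 100.0 * t                          # forward motion
    y = 20.0 * math.sin(2 * math.pi * t)   # the sideways "S" sway
    # Heading = direction of travel, from the path's derivative.
    dx = 100.0
    dy = 20.0 * 2 * math.pi * math.cos(2 * math.pi * t)
    heading = math.degrees(math.atan2(dy, dx))
    print(f"frame {frame:2d}: x={x:6.2f}  y={y:6.2f}  heading={heading:6.1f}")
```

That’s 48 rows of three numbers for two seconds of one object – before the camera or any other element is accounted for.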

I guess tight, networked integration was not that high on the list; primarily, this was about rendering and compositing.

My point was rather that there existed some amazing graphics software in the late 1960s (e.g., GENESYS did about everything that later packages like Flash did with parametric animations, even with intuitive input), but this was all on experimental, academic systems, and it took a long time until this found its way into commercial software running on commercially available and affordable computers. There’s always a difference between “problem solved” and the solution actually being available. However, companies like III were not nobodies.

I think the most prohibitive part was where they were sending entire scenes, image by image, via telephone line over to the other studio for preview. (What baud rates could you achieve on unoptimized land lines in the early 1980s?)
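For a rough feel for that question, here’s an estimate in which every number is an assumption – the resolution, bit depth, and line rate are guesses for illustration, not production facts:

```python
# Rough, assumption-laden estimate: how long might one preview frame take
# over an early-1980s phone line? All figures below are guesses.
width, height = 512, 512     # assumed preview resolution
bits_per_pixel = 8           # assumed uncompressed greyscale
line_rate_bps = 9600         # a plausible early-1980s line rate

bits = width * height * bits_per_pixel
seconds = bits / line_rate_bps
print(f"~{seconds / 60:.1f} minutes per frame")  # ~3.6 minutes
```

At anything like those rates, a few seconds of footage would tie up the line for hours – hence “prohibitive.”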

Re. the phone lines

I was surprised to learn that the Arpanet and the early internet ran on leased long-distance phone lines, and were communicating at 56Kbps from the beginning, in 1969 for the Arpanet, and 1983 for the internet (or 1979, if you count the early TCP/IP networks at a few universities). At the time, physicists had a high demand for this technology, because they were running models and data analysis on supercomputers, and really wanted a way to get the results remotely, because otherwise, they had to transport tapes across the country.

I’m not saying Disney was using the Arpanet, but maybe something like the physical infrastructure it used.

As I know the story, the network speed had originally been planned at 9.6Kb/s – who would need more? As it happened, there was a British gentleman, a Mr Roger Scantlebury, at the Intergalactic Computer Network Conference, who was there to read a paper on the National Physical Laboratory’s (NPL, Teddington) own packet-switched test network (esp. on the “interface computers”, the British version of IMPs). Mr Scantlebury, who had formally proposed a UK-wide network at 1.5Mb/s (in 1966!), was rather astonished to hear about the comparably modest US plans and found himself in high demand regarding various details of the British ideas. As a consequence, the backbone speed was updated to 56Kb/s.

I had been thinking of doing the network with 9.6 kilobit lines because this was what worked in terms of traffic levels we expected. But the British made me realize that I had not looked at the question seriously. So (…) I started thinking about the possibility of what the phone companies called a 50 kilobit line.
(Larry Roberts quoted in “The Dream Machine” by M.M. Waldrop, p. 276)

The interesting part beyond ARPAnet is how the Roberts quote continues:

The idea was that you bought a very expensive modem that tied twelve lines together. Stepping up to two would be expensive. But I did the numbers over and over for myself and it looked like it would trade off economically: you could get more through the higher-speed lines, so you would not have to buy as many, and you would get better response time. So that was a good idea.
(Larry Roberts, ibidem)

So this was state of the art in the mid-1960s, with lines ready to rent from the telecom companies. – But there’s only so much you can do with normal telephone networks, and it took a while until these were made ready for things like ADSL (ca. mid-1990s).

Remember too that for long-distance calling, analog telephone calls were multiplexed together into a single channel. TTYs at 110 baud worked, for weather and news updates. Telephones are half duplex: you talk or they talk, not both together. Even today fast internet is only between big cities, not down on the farm.

We read from the same source. :slight_smile:

Another surprising thing was to find that the internet was not upgraded to T1 lines until a year or two before I got into college. I experienced the internet a little in 1988 through my roommate who was an older EE student. I didn’t get on it myself until the following year. I remembered hearing talk of how “the internet crashed” sometime before I got there. Indeed, Waldrop talked about that, that in '86/'87 its growth was outstripping its bandwidth, and the situation was getting desperate. So, the NSF upgraded the nodes with T1.

It was a bit interesting to read that even though DARPA developed the internet, it was the NSF that took over its development, up until the network largely went private in the mid-90s. I’m not very familiar with how science is funded, but what I’ve read is that the computer scientists involved with the DARPA research looked down on NSF funding: it was short-term, and not for basic research. However, NSF did an unusual thing in taking on the responsibility of building out the “2nd wave” of internet development (after CSNet). It sounded like they took it on because they saw it as supporting scientists, though the mandate they wrote up for themselves also included “education,” so anything that supported that fell into the same basket.

The 56 kbit/s number is commonly cited, but I believe the correct number is 50 kbit/s.

The IMPs were connected through Bell 303 modems. http://www.bitsavers.org/communications/westernElectric/modems/303_Wideband_Data_Stations_Technical_Reference_Aug66.pdf


Well well, what a find!

Three speed categories are available with this equipment. The highest speed capability is over a “Supergroup” channel which uses the bandwidth of 60 voice circuits, and is a convenient breakdown of bandwidth of telephone carrier systems. On a supergroup facility, a synchronous speed of 230.4 kilobits per second is available. The next lower convenient breakdown is the “group” channel which uses the bandwidth of 12 voice circuits. The 303-type equipment can transmit at a synchronous speed of 50 kilobits per second over group facilities. The group channel can be divided in half and provides the third speed category - 19.2 kilobits per second. Half of the group channel can be used to provide 6 regular voice circuits if desired.
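The quoted breakdown is easier to see as a small table. Note that the rates are set by the analog channel plan (multiples of a ~4 kHz voice circuit), not by powers of two – which is also why the ARPANET backbone figure is properly 50 kbit/s rather than the often-cited 56:

```python
# The Bell 303 channel breakdown from the quoted reference, as a table.
bell_303 = {
    "supergroup (60 voice circuits)": 230_400,  # bits/sec
    "group (12 voice circuits)":       50_000,
    "half group (6 voice circuits)":   19_200,
}
for facility, bps in bell_303.items():
    print(f"{facility:32s} {bps / 1000:6.1f} kbit/s")
```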

A possible source of confusion: Wasn’t NSFNET 56 kbit/s?


I believe it was. The company I worked for in the early 90’s had a 56K line to the NSFNET from their US (Boston) office.

My understanding – based on working for a UK-based ISP in the mid 90’s – was that data comms at that time was delivered via many technologies, but the base rate for a US digital line was 1544Kbits/sec with 24 channels of 56Kb/s (this was a “T1” or DS1 line), split as voice and/or data, so a “frac T1” (fractional T1) was often quoted for the data part.

The UK & Europe used a 2048Kbps ‘carrier’ with 32 channels of 64000 bits/sec. (E1 line)
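The channel arithmetic behind those T1/E1 figures, as I understand it (a DS0 voice channel is 64 kbit/s; the 56 kbit/s usable per T1 channel comes from in-band “robbed-bit” signalling eating one bit in eight):

```python
# Channel arithmetic behind the T1 and E1 line rates quoted above.
DS0 = 64_000              # bits/sec per voice channel

t1 = 24 * DS0 + 8_000     # 24 channels plus 8 kbit/s framing overhead
e1 = 32 * DS0             # 32 channels; framing lives inside channels 0/16

print(t1)  # 1544000  -> "1544Kbits/sec"
print(e1)  # 2048000  -> "2048Kbps"
```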

In the early 90’s we had a “frac T1” in the US office which carried both phone and the 56Kbps Internet line. The US-to-UK office private link (reassuringly expensive in the early 90’s!) was 64Kbps.

Technology was improving all the time but prices didn’t really drop for a long time, so it was really the realm of big tech companies and universities. Home Internet became a thing in the early/mid 90’s (in the UK), but even then it was still expensive until the technology improved.



Now I know what a T1 line actually is!
(I recall a variety of descriptions – as may still be known from various carriers today – but not an accurate specification, or why it would be this particular speed. But I never really had to know; the T1 line was more a mythical beast.)

I recall the process of modems over voice lines stepping up from 300bps to 1200bps to 2400bps to 9600bps to 19200bps to 38,400bps as I was wishlisting modems over the course of nearly a decade without being able to afford one, until I finally got a 38,400bps modem which was on discount because the top speed had become 56kbit/s … and read at the time that the reason it hadn’t quadrupled or doubled in speed again was that 56kbps was the fastest modems on US home lines could go.

The fact that the bundle of high speed lines in the mid-1960’s had a throughput so close to what was the max that a late 1980’s / early 1990’s voiceline modem could put through a home line of the day makes it easy to misremember the number from the mid-1960’s.


Don’t forget some terminals had 1200 baud download and 300 baud upload. Also at that time you had high long-distance rates for the phone – I think it was 50 cents a minute. Ben.