The hassle of generated images and what to do about this (in retro-computing, specifically)

A while back, there was a post asking for help in identifying a computer from an image (since deleted). This is the image in question:

While rather convincing at first glance, it’s obviously an A.I. generated image.
Here, I’ve marked a few areas where things either don’t make sense or are off (like “mutilated” keys, suitable only for AI fingers, or broken perspective):

There may also be some aesthetic hints, like that monitor glow being just a bit too nice.

However, this made me think: in a year or two, there won’t be such dead giveaways that an image was generated, as these algorithms are constantly improving (and these offending details are an annoyance to the respective developers). Meaning, these images will either send well-meaning enthusiasts on a meaningless scavenger hunt, or any such requests will be routinely dismissed with a “well, that’s A.I.”, even if it concerned a real but not that well-known historic artifact, maybe even one of some relevance. (Thus eventually restricting the field to just a handful or so of well-known computers. “I do know a C64C and a C128, but that C65 is obviously fake, because I don’t know it.”)

How are we going to proceed? How are we tackling this problem, especially in the retro-computing community, where there are certainly some obscure artifacts still to be explored? (We even had a few “do you know that machine” puzzles here.)
On the other hand, these algorithms are tuned for generating interesting images with typical traits, which may also generate interest in what they depict whenever they pop up – an interest that is bound to be frustrated once their origin is eventually revealed. (The very nature of historic and often ephemeral photos, which are generally poor in image quality, only aggravates the problem by hiding any dead giveaways in an overall lack of color precision, dynamic range, or resolution.)

Are we going to build an image database of machines that actually existed, so that we can verify any images against it? Is there any interest in such an effort? In any case, if we come up with a viable and practical idea, we should start soon, before the algorithms become too good for those images to give away their nature immediately.
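One low-tech building block for such a verification database could be perceptual hashing: reduce each catalogued photo to a small bit-fingerprint that survives resizing and recompression, then compare candidates by Hamming distance. Here is a minimal sketch in pure Python; the “images” are hypothetical hand-written grayscale grids purely for illustration, and a real system would first decode actual photo files and downscale them (e.g., with Pillow) to a fixed grid:

```python
def average_hash(pixels):
    """Simple perceptual hash: one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 4x4 grayscale thumbnails (0-255); real code would
# downscale catalogued photos to such a grid first.
catalogued = [[200, 200, 30, 30],
              [200, 200, 30, 30],
              [30, 30, 200, 200],
              [30, 30, 200, 200]]

# A re-encoded copy of the "same" photo: pixel values wobble slightly,
# as they would after JPEG recompression.
candidate = [[190, 210, 40, 25],
             [205, 195, 35, 28],
             [25, 35, 195, 205],
             [32, 28, 210, 190]]

distance = hamming(average_hash(catalogued), average_hash(candidate))
print(distance)  # prints 0: a small distance suggests the same underlying image
```

A small Hamming distance (relative to the hash length) would flag the candidate as a likely match against the catalogued original; it wouldn’t by itself prove an uncatalogued image is fake, only that it isn’t a known-genuine one.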


My gut reaction is to treat this as a human problem rather than a technical one.

It’s common when buying or selling locally to specify “no time-wasters” or “sensible offers only.” I think we’re in that kind of territory: it is inconsiderate or even rude to waste other people’s time, and posing badly-formed questions is one way to do that. There’s a common one that keeps coming up, trying to stir up debate about an ambiguous arithmetic expression, which is pointless. An artificial image is something very like that - it has no value other than aesthetic, and we shouldn’t be quizzing ourselves about what it is.

It might be that identify-this-photo questions just become unacceptable. It’s rather like trolling. Perhaps the person raising the question needs to do a bit more groundwork and present what they know in such a way that it shows sincerity and looks different from trolling.

I think I’d react in a similar way if someone posted a snippet of program, asking what it did, and it turned out not to be the product of a human with some level of skill, but the output of an idiot machine. It’s a waste of time, and the offence is in passing it off, wittingly or unwittingly, as a real thing.

Maybe this is a little like the problem of forged currency: sometimes it’s being presented by someone who knows it to be forged, sometimes by someone who hasn’t realised what they are doing. One of those is more naughty than the other, but both are causing a problem.

We try, I think, to surround ourselves with people of good judgement and sincere intentions, and we react negatively when we are fooled. So we do some kind of gatekeeping on who gets to join our circles, in real life and online. Older people with more experience perhaps do this more - and more effectively - than young people, which is why we see much more internet drama in the 18-25 age bracket.


I wouldn’t be that harsh in this instance. It’s certainly an image that may generate genuine interest. If it pops up in the right environment, I may be fooled, too. E.g., let’s assume there’s a blog about radar installations with quite a track record and a post incorporates this image (drawn from the Internet, because this is what we do), among other, genuine images. I may become interested in what that machine is, lurking in the corner of an image. I’ve certainly done so in the past – and I’ve learned quite a bit from this about things that had been foreign to me, before. – So the question is really, will curiosity become a public offense? I wouldn’t want to live in a world where this would be the case.

(On a personal note, I’m rather convinced that AI will plunge us into a dogmatic age, on the one hand by promoting and replicating en masse things that are close to the semantic center of what’s already in the conversation and thus carry some statistical weight, and, on the other hand, by socially outlawing any non-conformant curiosity, because it wastes everybody’s time by sending them on a meaningless scavenger hunt.)

I suspect 95% of all the known computers have valid images online, so having a fake one will not do much harm other than giving new ideas for old computers.
The keyboard is the real giveaway on this being fake: where are all the missing function keys? With 3D printing, we can make the C65 come true :slight_smile:

It’s funny (though perhaps not literally so) that concerns about AI have traditionally been about a machine becoming self-aware and wanting to eradicate humanity, when it turns out the danger is more to do with how humanity uses “AI” as a tool.

What’s even funnier, there are reinforcing feedback loops involved. E.g., Google Image Search seems to be quite confident about this being a PET:

Meaning, what is already quite convincing to us may be even more convincing to the tools we use, as they and the tools generating those images may share the same data and even the same embedding tokens carrying the statistical significance.

I believe there will be simple solutions adopted to these problems as they get worse:

  1. The response to “does anyone know what this weird thing is” will be a polite version of “Well, you know how to use the internet; good luck on your search. There’s a lot of info out there to find and corroborate. Let us know what you find!”.

  2. The response to “I’ve found photographic proof that a desktop personal computer with full keyboard and monitor preceded the Programma 101 by 3 years” can also be a polite version of “Well, extraordinary claims require extraordinary evidence. We’re interested in seeing what further verifiable evidence you can provide to support the incredible claim in the image you’ve provided”.

So, borrowing from the scientific method can help reduce nefarious claims and time-wasters regardless of AI advances, as long as we can keep historical databases safe from tampering (wayback machines, hard copies, etc.).


Sometimes this kind of situation makes me sad. Why would someone try to use an AI image of a retro computer to get attention? Someone may be confused and genuinely interested or curious about a computer they don’t know about, but generating such an image on purpose is a different problem… I hope the fake images don’t flood the retro computing places.


I wasn’t under the impression that this particular question was posted in bad faith.
As I understand it, these images of retro computers are generated en masse as thumbnails, etc., and find their way into image search. And it’s likely that they will be found and reused for illustration purposes, where someone may be fooled and become genuinely interested in what this is (probably as the first person ever; maybe no person was involved at all until this point).

Also, not everybody starts out as a born expert – and newcomers may become frustrated by the lack of support from a community that dismisses them as mere trolls. Self-directed research may become difficult as well, as these models eventually start to drift on their own data in an infinite feedback loop, while burying most of the genuine search results behind thousands of high-ranking hits tailored to SEO requirements. We may become an elitist bunch…


I believe that gatekeepers and elitist people are gaining more influence in this world bit by bit. Luckily we have places like this forum, where we are open to sharing our findings and knowledge without expectations of fame. Maybe we have a good resistance to the influence of social networks; we need to keep and replicate those good vibes. Sometimes I feel this is the last resort for preserving and sharing this little piece of history that is retro computing.
(Sorry for the sudden meditations)


Google Images isn’t saying that this is a PET. The photos on the right are just similar photos, i.e., vintage computers with a display (see the terminal). Same for Pinterest.
You can find the original source by clicking on “find image source”.
I think there’s no AI involved here.

There are several photos of fake or fictional computers out there. Some people are even selling such images so that others can use them, e.g., in a game. It only becomes a problem if someone wants to sell or buy that computer.

While this diverges off-topic, I’m not so sure about this: “similar” has shifted increasingly from a formal criterion towards content and semantics. E.g., while you could use image search in the past to identify an artist, this has become useless for this purpose, yielding similar subjects, instead. So a prevalence of a single make or item indicates a closeness in semantics to me.
(The proposed use case is now: I see a handbag in a photo and want to buy this, so Google Lens will guide me to a shopping site where I can obtain the item.)
Also mind the shift in perspective: while we look at that item in the reference image diagonally from the front right, this spatial relation seems not to be that important for “similarity”, which rather abstracts away from perspective. It even appears that perspective becomes what passes as geometry or shape for some of these images. (E.g., the Wikipedia image of a terminal doesn’t have much formal similarity to the reference image, but the very perspective distortion makes it somewhat similar to a PET. We appear to require the concept of the PET as an intermediary to regard these two images as similar.)
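The shift described above, from formal similarity to semantic similarity, can be made concrete: modern image search ranks results largely by the closeness of learned feature vectors (embeddings), typically measured by cosine similarity, rather than by pixel-level form. A toy sketch in pure Python; the vectors, their dimensions, and the trait labels here are entirely made up for illustration, not how any real embedder actually encodes images:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors:
    dot(u, v) / (|u| * |v|), ranging from -1 to 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings; imagine each dimension loosely encodes a
# semantic trait (say: CRT monitor, integrated keyboard, beige case, handbag).
pet_photo      = [0.9, 0.8, 0.7, 0.0]
terminal_photo = [0.9, 0.6, 0.8, 0.0]  # formally different shape, semantically close
handbag_photo  = [0.0, 0.0, 0.2, 0.9]

print(cosine(pet_photo, terminal_photo))  # high: ranked as "similar"
print(cosine(pet_photo, handbag_photo))   # low: ranked as unrelated
```

Under this measure, a terminal and a PET land close together because they share semantic traits, regardless of viewing angle or outline, which would explain why such searches group them while telling us little about whether either image is genuine.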