So in the previous article, The Daily Beagle invited readers to take a test to see if they could distinguish AI-generated images from real ones. The early results show that AI-generated images do indeed trick people.
For example, for Images A and B, most people voted A as being real:
Image A was, in fact, AI generated. The voting is understandable, as Image A has a far more complex background than Image B, but background complexity isn’t how to distinguish AI-generated images.
AI has advanced such that complex backgrounds are no problem, and to prove it, here’s the original prompt The Daily Beagle used, complete with the four responses:
With the exception of the third image, the other three contain rendering errors. For example, the first man’s fingers are melded:
The second has a ‘weird eye’ subtly embedded in his forehead (very difficult to spot):
And the fourth has a weird foamy fuzz around the hair and eyes:
Look At Detail: AI Can’t Yet Render Subtle Detail Well
So one way to tell is to look closely at the finer details. AI generation struggles with things like fingers, hair and eyes. In the original image of the masked man there is, in fact, a tell, but it’s hard to spot if you’re not intently looking:
The mask’s right strap ‘melds’ with the ear:
To reduce the odds of detection, the image was downscaled from 1024 pixels to 512; even so, you can just barely see it in the original. Many AI-rendered images are plagued with subtle errors like these, and melding is common.
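For illustration, here is a minimal sketch of that kind of downscale using the Pillow library; the filenames and the assumption of a square 1024 × 1024 source are hypothetical. Halving the resolution is often enough to make melded fingers and ears hard to spot at a glance.

```python
# Minimal downscaling sketch (hypothetical filenames, square source assumed).
from PIL import Image

img = Image.open("masked_man_1024.png")                     # hypothetical 1024 x 1024 AI render
smaller = img.resize((512, 512), Image.Resampling.LANCZOS)  # halve each dimension
smaller.save("masked_man_512.png")                          # artifacts are now far less visible
```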
That said, a State actor can brute-force generations to find more convincing selections; it is a numbers game. The Daily Beagle would have tried things like images of injections and injuries, but the cost of brute-forcing exceeds our non-existent budget.
Inconsistent Designs
Another tell is that the man is wearing a non-standard mask:
In contrast, Image B has the simpler, more common surgeon’s mask, a well-known design:
To be fair though, there are a lot of wacky mask designs out there, hence the inclusion of Image D, which is supposedly a real image with a very bizarre mask:
The Eyes Have It
The other tell is the eyes, although this one isn’t foolproof. When producing stock photographs, photographers use bright lighting to light up their subject so it appears more clearly in the photograph.
We can see the bright lighting reflected in the woman’s eyes:
Contrast this with any of the example AI images: none of the eyes appear to reflect bright light; they just seem creepy and dark:
Sometimes AI-rendered eyes do have lighting, but typically it is dull:
Simple backgrounds are often used in stock photographs because they make the image much easier to edit for various advertising purposes, so a simple background isn’t a tell for an AI rendering.
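As a rough, far-from-foolproof heuristic based on the catchlight tell above, you can crop the eye region of a suspect image and check whether it contains any near-white pixels. This is only a sketch: the filename, the crop coordinates and the brightness threshold are all hypothetical and would need adjusting per image.

```python
# Rough catchlight check: does a manually chosen eye region contain any
# near-white pixels? Brightly lit stock photos usually do. Not reliable on its own.
from PIL import Image

img = Image.open("suspect_photo.png").convert("L")  # hypothetical file, converted to greyscale
eye_box = (430, 210, 510, 260)                      # hypothetical (left, top, right, bottom) eye crop
brightest = max(img.crop(eye_box).getdata())        # brightest pixel value, 0-255

print("Brightest pixel in eye region:", brightest)
if brightest < 230:                                 # arbitrary threshold
    print("No obvious catchlight; worth a closer look.")
```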
AI Cannot Yet Render Text In Images
One very safe bet (currently) is to request an image containing text, so the classic ‘hold up a sign saying…’ is still a valid fuzz method!
As AI image generation currently struggles with finer details (it may get better in due time), one thing it has great difficulty with is rendering words: it ignores the prompt and generates nonsense text.
So simply requesting an image with some words written in it will be enough to defeat most commercially available AI image generation for now.
Look For Signs Of Obscuration
Since AI-generated images often show defects, adversaries will attempt to use a variety of tricks to obscure the tells (a short sketch after this list shows how cheaply each can be applied). These include:
Downscaling the image to make details hard to see (be wary of thumbnails)
Blurring all or part of the image
Darkening or obscuring all or part of the image
Cropping the image in unnatural ways.
Or making use of ‘composite’ images, where AI generation edits only the relevant parts of a real photograph (the example shown is poor quality; The Daily Beagle lacks the funds to brute-force a more compelling one, but it was made by the same engine that produced the images above that fooled people)
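To make the list concrete, here is a sketch of how cheaply each trick can be applied with the Pillow library; the filenames and parameter values are hypothetical, and a real adversary would tune them per image. Each trick degrades exactly the fine detail (fingers, hair, eyes) that the earlier tells rely on.

```python
# Sketch of the obscuration tricks listed above (hypothetical filenames and settings).
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("ai_render.png")

thumb = img.resize((128, 128))                        # downscale to a thumbnail
blurred = img.filter(ImageFilter.GaussianBlur(4))     # blur the whole image
darkened = ImageEnhance.Brightness(img).enhance(0.5)  # darken the image
w, h = img.size
cropped = img.crop((0, 0, w, h // 2))                 # crop away the telltale region

for name, out in [("thumb", thumb), ("blurred", blurred),
                  ("darkened", darkened), ("cropped", cropped)]:
    out.save(f"ai_render_{name}.png")
```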
Plot Twist: All Texts Were AI Generated
So in the Writing Test section, readers were asked to determine if texts were ‘real’ or ‘fake’. This was a bit of a trick question!
All of the texts, besides minor editing for consistency (assigning a book name or quote attribution), were AI generated. The reason we did this was to test how the public interprets realness, and the results were surprising!
There are two ways the public could have interpreted it: realness in the sense of ‘truthiness’, or realness in the sense of authenticity (i.e. generated by a human).
Most would likely have thought they were answering the question in terms of authenticity; however, how you determine that is likely to be based on truthiness.
At the time of writing, out of 32 votes, no one thought both of the Of Mice and Men quotes were fake:
And only 3% out of 34 votes correctly identified that both quotes were inauthentic, AI-generated content.
This shows how difficult it actually is.
In this case, the AI was asked to generate one factually accurate quote for a book (which was independently verified with a search), and one made-up quote for the book.
Amusingly, the AI refused to falsely attribute either made-up quote to its given book, omitting the fake reference entirely; we had to add that in ourselves so it wasn’t a dead giveaway.
With a bit more prompt engineering it is likely possible to coerce the AI into including fake references, but that costs time and money for a simple demonstration.
What Is Realness?
The Daily Beagle will hold up our hands to the sleight-of-hand: we used ‘realness’ to mean truthfulness, rather than authenticity. Ironically, the fake quotes had more human involvement than the real ones… or did they?
It poses a deeper philosophical question: if an AI verbatim copy-pastes a human work, is that work still authentic or is it AI generated? If the AI can write factually correct answers, are they still forgeries? If the AI produces an exact replica of a real-world photograph, is that image still fake?
What is the difference between real and fake, dear reader?
Subscribe to get more content from The Daily Beagle.
Share this article?
And leave a comment below with your thoughts.