Generative AI is getting better and better. Just this week, Adobe released Firefly 2, a vast improvement on its earlier AI model, especially when it comes to depicting faces.
That makes it harder than ever to tell if an image is AI-generated. Harder, but not impossible. Here are some things to look for if you’re trying to determine whether an image is created by AI or not.
The most obvious telltale sign that an image is AI-generated is the one you can’t see just by looking at the picture: the metadata.
Metadata is information attached to an image file that records details such as which camera was used to take a photograph, the image resolution, and any copyright information. Metadata often also betrays whether an image was created by AI.
Metadata often survives when an image is uploaded to the internet, though some platforms strip it on upload. If you download the image afresh and inspect its metadata, you can often reveal the source of the image.
To see image metadata in Windows:
- Right-click on the image file and select Properties
- Click the Details tab in the window that opens
To see the metadata on a Mac:
- Right-click the image file
- Select Get Info (for fuller EXIF detail, open the image in Preview and choose Tools > Show Inspector)
On genuine photos, you should find details such as the make and model of the camera, the focal length, and the exposure time. On AI-generated images, that information will be absent.
To be clear, an absence of metadata doesn’t necessarily mean an image is AI-generated. Some photographers and websites remove metadata before sharing images. But if an image contains such information, you can be 99% sure it’s not AI-generated. Or, at least, not entirely AI-generated.
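If you're checking lots of files, this inspection can be scripted. The sketch below uses only the Python standard library to scan a JPEG's segment headers for the APP1 "Exif" block that cameras write. It's a minimal heuristic, not a full metadata parser, and, as noted above, a missing EXIF block doesn't prove an image is AI-generated.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG data contains an APP1 'Exif' segment."""
    # JPEG files start with the SOI marker 0xFFD8.
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        # Each metadata segment starts with 0xFF followed by a marker byte.
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: image data begins, no more metadata
            break
        # Two-byte big-endian length covers the length field plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xE1) segments holding EXIF data begin with 'Exif\0\0'.
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

Pass it the raw bytes of a downloaded file (`has_exif_segment(open("photo.jpg", "rb").read())`) to see at a glance whether camera metadata is present.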
The image filename is another big clue. Images downloaded from Adobe Firefly start with the word Firefly, for instance, while AI-generated images from Midjourney include the creator’s username and the image prompt in the filename. As with metadata, though, filenames are easily changed, so this isn’t a surefire means of determining whether an image is the work of AI.
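A filename check can be sketched the same way. The prefix list below is illustrative, based on the naming pattern described above; services change their naming schemes, and a renamed file defeats the check entirely, so treat any match as a hint rather than proof.

```python
from typing import Optional

# Illustrative naming pattern drawn from the discussion above; this is an
# assumption, and AI services may change how they name downloaded files.
AI_FILENAME_PREFIXES = ("firefly",)

def ai_filename_hint(filename: str) -> Optional[str]:
    """Return the matching prefix if the filename looks AI-generated, else None."""
    name = filename.lower()
    for prefix in AI_FILENAME_PREFIXES:
        if name.startswith(prefix):
            return prefix
    return None
```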
Although generative AI is getting much better at faces, it’s still a problem area, especially when you’ve got lots of faces in one image.
At first glance, the image shown above of a soccer crowd looks like a genuine photo. However, even a relatively cursory look around the crowd reveals oddly distorted faces, such as the ones highlighted above.
This same rule applies to AI-generated images that look like paintings, sketches, or other art forms; mangled faces in a crowd are a telltale sign of AI involvement.
It’s not only faces that often go wrong in AI imagery, but other fine details. The face of the woman in the image above is actually quite convincing, and, again, on first inspection, you might think this is a genuine photo. But zoom in, and you’ll see other details have gone awry.
The text on the books in the background is just a blurry mush, for example. Yes, it’s been made to look like a photo with a shallow depth of field, but the text on those blue books should still be readable.
Look closely at the woman’s wrist, and the bracelet or watch strap she’s wearing is also distorted. It’s often when you zoom in close and start inspecting the detail that the involvement of AI becomes obvious.
AI models are often trained on huge libraries of images, many of which are watermarked by photo agencies or photographers. Unlike us, the AI models can’t easily distinguish a watermark from the main image. So when you ask an AI service to generate an image of, say, a sports car, it might put what looks like a garbled watermark on the image because it thinks that’s what should be there.
The fake watermarks normally appear in the bottom-right of images. They are almost always unreadable—something a true watermark would almost never be!