Just last month, an image of Pope Francis in a stylish, white Balenciaga puffer jacket quickly went viral, convincing thousands that the pontiff was ready for a night on the town. Days before, another series of photos surged across social media, showing the arrest of former president Donald Trump at the hands of riot-gear-clad New York City police officers.
There’s just one catch: The images were completely fake.
These viral sensations were produced by artificial intelligence systems that process a user’s text prompts to create images. While such AI systems — like Midjourney, DALL-E and Stable Diffusion — have been available for a few years, only recently have the images they spit out become convincing enough to trick less-savvy observers. And some experts think that we’re speeding towards a future where it’s virtually impossible to distinguish between real images and AI-generated fakes.
“The systems are already very good,” says James O’Brien, a computer science professor at the University of California, Berkeley. “Even experts, without using specialized tools to analyze the images, can’t necessarily find some glaring, obvious telltale that proves it’s fake. And they’re going to get better.”
O’Brien cautions that it’s important not to rely on visual clues alone to determine if an image is legitimate. Still, for now, there are some signs that eagle-eyed viewers can look out for — among other ways to flag potential visual disinformation without getting fooled.
How Can You Spot Fake AI-Created Images?
Phony photos aren’t exactly a new phenomenon. Photos have been fabricated and manipulated for nearly as long as photography has existed. A famous portrait of U.S. president Abraham Lincoln, for example, is actually a composite created by stitching together an image of Lincoln’s head with (ironically) an engraving of pro-slavery politician John C. Calhoun. AI image generator systems, however, don’t require nearly as much fiddling; they can quickly create realistic-looking photos from a simple text prompt.
Today’s AI tools vary in their ability to generate convincing images. AI systems have specifically struggled to render human hands, producing mangled extremities that sometimes feature too many (or too few) fingers. Spotting these uncanny fingers in images may be one way to flush out the fakes, says Claire Wardle, co-director of the Information Futures Lab at Brown University School of Public Health.
“It’s kind of like a fun exercise, like when you were a kid and would look at those puzzles where you spot problems with an image,” she says. “It’s not going to be very long before these images get so good that those things like the fingers won’t save us. But, for right now, there are still some things that we should be on the lookout for.”
1. Watch for Wonky Fingers and Teeth
Since data sets that train AI systems tend to capture only pieces of hands, the tech often fails to create lifelike human hands. This can lead to images with bulbous hands, stretchy wrists, spindly fingers or too many digits — hallmark signs that an AI-created image is a fake. In the viral image of Pope Francis, his right hand (and the coffee cup it’s holding) looks squashed and warped. Teeth can pose problems, too.
“There’s a structure to your hands,” says O’Brien. “You have five fingers, not four or six or whatever. The [AI] models have trouble with that kind of structure, although the newer ones are getting better at it.”
Indeed, as the tech advances, some AI tools like Midjourney V5 have started to crack the code, at least in certain examples.
2. Be Wary of Overly Smooth Textures
Some AI image generators produce textures that are excessively smooth, or plastic-looking skin with a glossy sheen. This means that a jacket — say, the Pope’s swagged-out coat — may appear too nice, says O’Brien.
“Rather than looking like a material that has some wrinkles in it, it might come out a little too perfect,” he adds.
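That "too perfect" quality can be made concrete: natural photos tend to carry fine-grained noise, while overly smooth AI renderings can show unusually uniform patches. Below is a toy, stdlib-only sketch of that idea (not a real forensic detector, and the threshold and window size are illustrative assumptions): it scores an image by the average pixel variance inside small tiles, where a score near zero flags a suspiciously flat region.

```python
# Toy smoothness heuristic, NOT a production fake-image detector:
# real textures usually have nonzero local variance; "plastic" patches don't.
from statistics import pvariance

def mean_local_variance(gray, window=2):
    """Average pixel variance over non-overlapping window x window tiles
    of a 2D grayscale image (list of rows of 0-255 ints)."""
    scores = []
    h, w = len(gray), len(gray[0])
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            tile = [gray[y + dy][x + dx]
                    for dy in range(window) for dx in range(window)]
            scores.append(pvariance(tile))
    return sum(scores) / len(scores)

# A flat, "too perfect" patch vs. one with natural texture noise:
smooth = [[128] * 4 for _ in range(4)]
noisy = [[120, 140, 125, 135],
         [138, 118, 132, 122],
         [124, 136, 119, 141],
         [133, 121, 139, 117]]
print(mean_local_variance(smooth))  # 0.0 — suspiciously uniform
print(mean_local_variance(noisy))   # clearly nonzero
```

Real forensic tools work on full-resolution images and far subtler statistics, but the principle is the same: measure whether the texture detail you would expect is actually there.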
3. Look For Details That Don't Fit
Perhaps the biggest things to watch out for are inconsistencies in the logic of the image itself. Both O'Brien and Wardle reference a recent series of AI-generated images of a great white shark washed up on a beach, which also went viral on social media.
“The easiest way to understand that that image is fake is to look at the shark carefully,” says O’Brien. “In each picture, you’ll notice that the pattern around the eye is different.”
Other inconsistencies, continues O’Brien, include details like clothing fabric that blends together across different subjects or background patterns that repeat perfectly. But we may be more likely to miss those details if we want to believe the reality that a fake image presents. In a 2021 study co-authored by O’Brien, participants were more likely to buy into an image’s credibility if it aligned with that person’s pre-existing beliefs.
4. Do Your Research
When in doubt, don’t be afraid to cross-reference what you’re seeing with other credible sources.
“The things that we see now, we just have to immediately Google to find other information about it,” says Wardle. “Yes, there are small clues that you can see in the image, but just think and do your research.”
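One way reverse-image-search engines support that kind of cross-referencing is by reducing each image to a compact "perceptual hash" and comparing hashes instead of pixels. The sketch below (a simplified average-hash on tiny hand-built grids; real systems first resize actual images and use longer hashes) shows the core idea: near-identical images produce near-identical hashes, so a small Hamming distance suggests two copies of the same picture.

```python
# Simplified average-hash ("aHash"): each bit records whether a pixel is
# brighter than the image's mean. Similar images -> similar bit strings.
def average_hash(gray):
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

img = [[10, 200], [220, 30]]
tweaked = [[12, 198], [221, 28]]   # slight re-encoding noise
different = [[200, 10], [30, 220]]

print(hamming(average_hash(img), average_hash(tweaked)))    # 0 — likely the same image
print(hamming(average_hash(img), average_hash(different)))  # 4 — different content
```

In practice you would use an off-the-shelf reverse image search rather than roll your own, but knowing that matching is approximate explains why cropped or recompressed fakes can still be traced back to a source.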
The Future of AI-Created Images
O’Brien doesn’t think that the point where we can no longer tell the difference between real photos and AI-generated images lies in some far-flung future. He suspects we may pass that threshold within the next few years.
"That moment might not be as far away as some people might like to think," says O'Brien. "Because the rate of improvement for these systems is getting faster. [...] I think that as a society, we have to get better at accepting that things that we see — imagery — may not reflect reality."
Nonetheless, the future of AI image generation will be bold, if nothing else. Scientists from across the world have already begun using AI systems to recreate images that people have seen based on their brain scans, according to a preprint published on bioRxiv late last year.
“There’s amazing things coming out of this new technology,” adds Wardle. “But it’s a bit like if a faster car comes along; we just have to drive a bit more carefully.”