
Fact check: How can I spot AI-generated images?

Joscha Weber | Kathrin Wesolowski | Thomas Sparrow
April 9, 2023

Midjourney, DALL-E, DeepAI — images created with artificial intelligence tools are flooding social media. Some carry the risk of spreading false information. Which images are real and which are not? Here are a few tips.

Fake picture of President Putin being arrested
This screenshot purports to show the arrest of Russian President Vladimir Putin, but the image is fake

It has never been easier to create images that look shockingly realistic but are actually fake.  

Anyone with an internet connection and access to a tool that uses artificial intelligence (AI) can create photorealistic images within seconds, and they can then spread them on social networks at breakneck speed.   

In the last few days, many of these images have gone viral: Russian President Vladimir Putin apparently being arrested, or Tesla CEO Elon Musk holding hands with General Motors CEO Mary Barra, just to name two examples.  

The problem is that both AI images show events that never happened. Even photographers have published portraits that turn out to be images created with artificial intelligence.

And while some of these images may be funny, they can also pose real dangers in terms of disinformation and propaganda, according to experts consulted by DW.  

Fake photo which allegedly shows Elon Musk and Mary Barra
This AI-generated viral photo purports to show Tesla head Elon Musk with GM CEO Mary Barra. It is fake

An earthquake that never happened

Pictures showing the arrest of politicians like Putin or former US President Donald Trump can be verified fairly quickly by users if they check reputable media sources.

Other images are more difficult to verify, such as those in which the people pictured are less well-known, AI expert Henry Ajder told DW.

One example: a German member of parliament for the far-right AfD party spread an AI-generated image of screaming men on his Instagram account in order to show he was against the arrival of refugees.

And it's not just AI-generated images of people that can spread disinformation, according to Ajder.

He said there have been examples of users creating events that never happened.

This was the case with a severe earthquake that is said to have shaken the Pacific Northwest of the United States and Canada in 2001.

But this earthquake never happened, and the images shared on Reddit were AI-generated.

And this can be a problem, according to Ajder. "If you're generating a landscape scene as opposed to a picture of a human being, it might be harder to spot," he explained.

However, AI tools do make mistakes, even if they are evolving rapidly. Currently, as of April 2023, programs like Midjourney, DALL-E and DeepAI have their glitches, especially with images that show people.

DW's fact-checking team has compiled some suggestions that can help you gauge whether an image is fake. But one initial word of caution: AI tools are developing so rapidly that these tips only reflect the current state of affairs.

Fact check: How to spot AI images

1. Zoom in and look carefully 

Many images generated by AI look real at first glance.

That's why our first suggestion is to look closely at the picture. To do this, find the image in the highest possible resolution and then zoom in on the details.

Enlarging the picture will reveal inconsistencies and errors that may have gone undetected at first glance.
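Any image viewer can do this enlarging for you. Purely as an illustration of what "digital zoom" does under the hood, here is a minimal nearest-neighbour upscale in Python; the tiny pixel grid stands in for a real photo, and none of this is a tool DW itself uses:

```python
def zoom_nearest(pixels, factor):
    """Enlarge a 2D grid of grayscale pixels by an integer factor
    using nearest-neighbour interpolation (a simple digital zoom)."""
    zoomed = []
    for row in pixels:
        # Repeat each pixel `factor` times horizontally ...
        wide_row = [p for p in row for _ in range(factor)]
        # ... and each resulting row `factor` times vertically.
        zoomed.extend([wide_row[:] for _ in range(factor)])
    return zoomed

# A 2x2 stand-in "image" (grayscale values 0-255).
tiny = [[10, 200],
        [60, 120]]

big = zoom_nearest(tiny, 3)  # now 6x6: each detail is three times larger
print(len(big), len(big[0]))  # 6 6
```

Enlarging does not add information, but it spreads each detail over more screen area, which is exactly why inconsistencies like warped jewelry or merged fingers become easier to see.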

2. Find the image source

If you're unsure whether an image is real or generated by AI, try to find its source.  

Comments posted below the picture by other users may reveal where the image was first published.

Or you may carry out a reverse image search. To do this, upload the image to tools like Google Reverse Image Search, TinEye or Yandex, and you may find the original source of the image.

The results of these searches may also show links to fact checks done by reputable media outlets which provide further context.
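Reverse image search engines do not compare raw pixels; they match compact fingerprints that survive resizing and recompression. As a hedged sketch of the idea, here is the simple "average hash" fingerprint on a toy 4x4 grayscale grid (real services like TinEye use far more sophisticated matching):

```python
def average_hash(pixels):
    """Perceptual fingerprint: each pixel becomes 1 if it is
    brighter than the image's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits; a small distance means near-duplicates."""
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200,  30,  30],
            [200, 200,  30,  30],
            [ 30,  30, 200, 200],
            [ 30,  30, 200, 200]]

# A lightly re-encoded copy: pixel values shift slightly,
# but the bright/dark pattern survives.
recompressed = [[190, 210,  25,  40],
                [205, 195,  35,  20],
                [ 40,  25, 190, 210],
                [ 20,  35, 205, 195]]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
```

Because the fingerprint depends only on the coarse bright/dark pattern, a recompressed or resized copy of a viral fake still matches the original upload, which is how these tools trace an image back to its first appearance.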

3. Pay attention to body proportions  

Do the depicted people have correct body proportions?  

It's not uncommon for AI-generated images to show discrepancies when it comes to proportions, with hands being too small or fingers too long, for example. Or the head and feet don't match the rest of the body.  

In this picture, Putin is supposed to have knelt down in front of Xi Jinping.
Putin is supposed to have knelt down in front of Xi Jinping, but a closer look shows that the picture is fake. Image: Twitter/DW

This is the case with the picture above, in which Putin is supposed to have knelt down in front of Chinese President Xi Jinping. The kneeling person's shoe is disproportionately large and wide, and the calf appears elongated. The half-covered head is also very large and does not match the rest of the body in proportion.

Read more about this fake in our dedicated fact check. 

4. Watch out for typical AI errors  

Hands are currently the main source of errors in AI image programs like Midjourney or DALL-E. 

People frequently have a sixth finger, such as the policeman to Putin's left in our picture at the very top.  

The same goes for these pictures of Pope Francis, which you've probably seen.

But did you realize that Pope Francis seems to only have four fingers in the right picture? And did you notice that his fingers on the left are unusually long? These photos are fake. 

Other common errors in AI-generated images include people with far too many teeth, oddly deformed glasses frames, or unrealistically shaped ears, as in the aforementioned fake image of Xi and Putin.

Within a few seconds, image generators such as the Random Face Generator create fake images of people who do not even exist. And even if the images look deceptively genuine, it's worth paying attention to unnatural shapes in ears, eyes or hair, as well as deformations in glasses or earrings, as the generator often makes mistakes. Surfaces that reflect, such as helmet visors, also cause problems for AI programs, sometimes appearing to disintegrate, as in the alleged Putin arrest.

AI expert Henry Ajder warned, however, that newer versions of programs like Midjourney are becoming better at generating hands, which means that users won't be able to rely on spotting these kinds of mistakes much longer.

5. Does the image look artificial and smoothed out?  

The app Midjourney in particular creates many images that seem too good to be true.  

Follow your gut feeling here: Can such a perfect image with flawless people really be real?  

"The faces are too pure, the textiles that are shown are also too harmonious," Andreas Dengel of the German Research Center for AI told DW.  

In many AI images, people's skin is often smooth and free of blemishes, and even their hair and teeth are flawless. This is usually not the case in real life.

Many images also have an artistic, shiny, glittery look that even professional photographers have difficulty achieving in studio photography. 

AI tools often seem to design ideal images that are supposed to be perfect and please as many people as possible.

6. Examine the background 

The background of an image can often reveal whether it was manipulated.  

Here, too, objects can appear deformed; for example, street lamps.

In a few cases, AI programs clone people and objects and use them twice. And it's not uncommon for the background of AI images to be blurred.

But even this blurring can contain errors, like the example above, which purports to show an angry Will Smith at the Oscars. The background is not merely out of focus but appears artificially blurred.
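Judging natural bokeh versus artificial smoothing by eye is hard, so image-forensics tools often quantify local sharpness instead. One common heuristic (an assumption for illustration, not a method DW describes using) is the variance of the Laplacian: sharp regions produce strong, varied responses, while heavily smoothed regions score near zero. A toy sketch:

```python
def laplacian_variance(pixels):
    """Sharpness heuristic: apply a discrete Laplacian to the
    interior pixels and return the variance of the responses.
    Blurred or artificially smoothed regions score close to zero."""
    h, w = len(pixels), len(pixels[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (pixels[y - 1][x] + pixels[y + 1][x]
                   + pixels[y][x - 1] + pixels[y][x + 1]
                   - 4 * pixels[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[  0, 255,   0, 255],
         [255,   0, 255,   0],
         [  0, 255,   0, 255],
         [255,   0, 255,   0]]      # checkerboard: strong edges

smooth = [[100, 100, 100, 100]] * 4  # flat region: no detail at all

print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

Comparing such scores across different regions of a picture is one way forensics tools flag backgrounds that are uniformly smoothed rather than optically out of focus.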

Conclusion

Many AI-generated images can currently still be debunked with a little research. But the technology is getting better, and mistakes are likely to become rarer in the future. Can AI detectors, such as those hosted on platforms like Hugging Face, help us detect manipulation?

Based on our findings, detectors provide clues, but nothing more.

The experts we interviewed tend to advise against their use, saying the tools are not developed enough. Even genuine photos are declared fake and vice versa.

Therefore, in case of doubt, the best thing users can do to distinguish real events from fakes is to use their common sense, rely on reputable media and avoid sharing the pictures.

 

Graphic listing how you can detect fake pictures

Editor's note: Due to legal and journalistic reasons, DW does not currently publish images created with generative AI programs. As an exception, we might show AI images when they are the subject of reporting, for example a review of the capabilities of AI or verification of fake images. In this case, we clearly indicate that the pictures shown are created by AI.

This article was updated on April 11 to include the Random Face Generator.