An informative guide on recognizing AI-generated images, ranging from simulated Pentagon explosions to the Pope donning Balenciaga attire

Combating the alarming rise of AI-generated fake news is crucial. Educate yourself on the key indicators of counterfeit images to avoid falling for hoaxes.

A recent example of misinformation involved a fabricated image portraying a massive explosion near the Pentagon, which rapidly spread across social media platforms. Even multiple verified accounts unwittingly amplified its circulation, leading to further confusion. Compounding the issue, reputable publications also fell victim to this hoax.

However, this incident is not an isolated case of AI deceiving prominent media outlets. Just last month, fake news disseminated AI-generated images depicting a man who allegedly underwent multiple surgeries to resemble Jimin, a member of the Korean boy band BTS.

In an era of rampant misinformation on social media, AI is rendering fake news increasingly challenging to fact-check. As the technology advances, we can expect these instances to multiply. Nonetheless, there are ways to identify AI-generated images and avoid the embarrassment of spreading false information.

Unmasking the Pentagon blast image

Let us closely examine the image featuring the Pentagon explosion. Bellingcat, a renowned online news verification group, has identified several anomalies that can be applied to similar images.

Bellingcat highlighted distortions in the building’s facade, where the fence appears to blend into the crowd barriers. Equally noteworthy, there is a conspicuous absence of other visual evidence, videos, or firsthand witnesses—a highly unusual circumstance for an event of such magnitude.

Tips for detecting AI-generated fake images

Various techniques can help uncover AI-generated images. Consider the following:

  1. Seek on-ground reports: In significant incidents, reporters promptly swarm the location to provide firsthand coverage. In the case of the now-debunked Pentagon blast news, however, no on-ground reporting occurred. While realistic images can be easily produced with tools like Midjourney, DALL-E, and Stable Diffusion, replicating authentic on-ground reporting remains nearly impossible, so its absence is a strong warning sign.
  2. Verify your sources: The presence of a blue verification tick on Twitter no longer guarantees authenticity, as troll accounts spreading misinformation can also obtain one. It is crucial to thoroughly examine the feed of the account in question to ensure a clean track record. Additionally, cross-check the account’s location with the event’s location for consistency.
  3. Utilize reverse image search tools: If the typical indicators of AI-generated images are not readily apparent, conduct a reverse image search to trace the image’s initial source. Tools like Google Lens and TinEye prove helpful in this regard. Once you locate the image’s original appearance, you can verify its authenticity using the aforementioned methods.
  4. Look for distortions: When an image contains multiple subjects or a busy scene, certain objects may blend together or overlap. For example, a building in the background might merge with a lamppost, or a person’s foot might distort into the pavement they are walking on.
  5. Analyze the surroundings: AI possesses only a rudimentary understanding of the appearance of the locations it is trained on. Consequently, much like a human attempting to draw a place from memory, it often fails to capture intricate details accurately. For instance, the viral Pentagon blast image depicts a fabricated location. If a featured location appears familiar, search for landmarks and road signs to validate its authenticity.
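For readers curious how reverse image search (tip 3) can match a picture even after it has been resized or recompressed, the common underlying idea is perceptual hashing: reducing an image to a compact fingerprint of its bright/dark pattern and comparing fingerprints by how many bits differ. Below is a minimal pure-Python sketch of one such fingerprint, the "average hash." The 4×4 brightness grids and function names are illustrative assumptions for this sketch, not the actual implementation of Google Lens or TinEye.

```python
def average_hash(pixels):
    """Simple average hash: 1 where a pixel is brighter than the
    image's mean brightness, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical grayscale thumbnail (0-255 brightness values).
original = [[10, 200, 30, 220],
            [15, 210, 25, 215],
            [12, 205, 28, 225],
            [11, 198, 33, 218]]

# The same picture after mild recompression: values shift slightly,
# but the bright/dark pattern survives, so the hash barely changes.
recompressed = [[14, 195, 35, 210],
                [18, 205, 30, 212],
                [16, 200, 32, 220],
                [13, 192, 38, 214]]

# A completely different picture with an inverted pattern.
unrelated = [[240, 20, 230, 15],
             [235, 25, 228, 18],
             [238, 22, 232, 12],
             [242, 28, 225, 20]]

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))     # 16
```

Real services index billions of such fingerprints, which is why a freshly generated AI image usually has no earlier match online, while a recycled or lightly edited photo is traced back to its first appearance.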
