Sawt Falasteen - Fake AI satellite imagery spurs US-Iran war disinformation

Fake AI satellite imagery spurs US-Iran war disinformation / Photo: ATTA KENARE - AFP


The satellite image posted by an Iranian news outlet looked real: a devastated US base in Qatar. But it was an AI-generated fake, underscoring the accelerating threat of tech-enabled disinformation during wartime.

The rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during major conflicts, a trend that researchers warn carries real-world security implications.

As the US-Israeli war against Iran rages, Tehran Times, a state-aligned English-language daily, posted on X a "before vs. after" image it claimed showed "completely destroyed" US radar equipment at a base in Qatar.

In fact, it was an AI-manipulated version of a year-old Google Earth image of a US base in Bahrain, researchers said.

The subtle visual giveaways included a row of cars parked in identical positions in both the authentic satellite photo and the manipulated image.

Yet the manipulated photo garnered millions of views as it spread across social media in multiple languages, illustrating how users are increasingly failing to distinguish reality from fiction on platforms saturated with AI-generated visuals.
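The parked-car giveaway described above can be checked mechanically: if a region of a "before" image is pixel-for-pixel identical in the "after" image, the latter was very likely derived from the former rather than newly captured. The sketch below is illustrative only, using plain 2-D lists of grayscale values rather than any real satellite product; the function name and the toy images are the author's assumptions, not anything named in the article.

```python
def region_identical_fraction(img_a, img_b, top, left, height, width, tol=0):
    """Fraction of pixels in a rectangular region that match within tol.

    img_a and img_b are equally sized 2-D lists of grayscale values (0-255).
    A fraction of 1.0 over a busy area (e.g. a row of parked cars) in two
    images supposedly taken on different days is a strong manipulation clue.
    """
    total = height * width
    same = sum(
        1
        for r in range(top, top + height)
        for c in range(left, left + width)
        if abs(img_a[r][c] - img_b[r][c]) <= tol
    )
    return same / total


# Toy "before" frame and a fake "after" frame derived from it.
before = [[(r * 7 + c * 13) % 256 for c in range(8)] for r in range(8)]
after = [row[:] for row in before]
for r in range(3):          # simulate "damage" painted onto one corner
    for c in range(3):
        after[r][c] = 0

cars = region_identical_fraction(before, after, 4, 4, 4, 4)    # untouched area
damage = region_identical_fraction(before, after, 0, 0, 3, 3)  # edited area
```

In a real workflow the two frames would first be co-registered, and a tolerance above zero would absorb compression noise; a byte-identical busy region would still stand out.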

Brady Africk, an open-source intelligence researcher, noted an "increase in manipulated satellite imagery" appearing on social media in the wake of major events including the Middle East war.

"Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don't align with reality," Africk told AFP.

"Others appear to be an image manipulated manually, often by superimposing indicators of damage or another change on a satellite image that had no such details to begin with," he said.

- 'Fog of war' -

Information warfare analyst Tal Hagin flagged another AI-generated satellite image purporting to show that Israeli-US jets had targeted the painted silhouette of an aircraft on the ground in Iran, while Tehran seemingly moved real planes elsewhere.

The telltale clues included gibberish coordinates embedded in the fake image, which spread across sites including Instagram, Threads and X.
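Gibberish coordinate overlays like those Hagin flagged can be screened with a simple plausibility check: real latitude/longitude pairs must parse cleanly and fall within valid ranges. The helper below is a hypothetical sketch by the editor, not a tool mentioned in the article, and its format assumptions (decimal degrees with optional N/S/E/W suffixes) are illustrative.

```python
import re

# Decimal-degree pair, e.g. "25.29 N, 51.53 E" or "-33.9, 18.4".
COORD_RE = re.compile(
    r"^\s*(-?\d+(?:\.\d+)?)\s*°?\s*([NS])?[,\s]+(-?\d+(?:\.\d+)?)\s*°?\s*([EW])?\s*$"
)


def plausible_coordinates(text):
    """Return True if text parses as a lat/lon pair within valid ranges."""
    m = COORD_RE.match(text)
    if not m:
        return False  # malformed or gibberish overlay
    lat = float(m.group(1)) * (-1 if m.group(2) == "S" else 1)
    lon = float(m.group(3)) * (-1 if m.group(4) == "W" else 1)
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0
```

A failed parse or an out-of-range value does not prove an image is fake, but it is cheap to run and catches the kind of hallucinated text AI generators often embed.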

AFP detected a SynthID, an invisible watermark meant to identify images created using Google AI.

The fabricated satellite images follow the emergence of imposter OSINT -- or open-source intelligence -- accounts on social media that appear to undermine the work of credible digital investigators.

"Due to the fog of war, it can be very difficult to determine the success of an adversary's strikes. OSINT came as a solution, using public satellite imagery to circumvent the censorship" inside countries like Iran, Hagin said.

"But it's now being preyed upon by disinformation agents," he added.

Reports of fake satellite imagery created or edited using AI also followed the Russia-Ukraine conflict and the four-day war between India and Pakistan last year.

- 'Critical awareness' -

"Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity," Africk said.

"This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets."

In the age of AI, authentic high-resolution satellite imagery collected in real time can give decision-makers vital clues to assess security threats and debunk falsehoods from unverified sources.

During a recent militant attack on Niamey airport in Niger, satellite intelligence company Vantor said it detected images circulating online purporting to show the main civilian terminal on fire.

The company's own satellite imagery helped confirm that the photos were fake, almost certainly generated using AI, Vantor's Tomi Maxted told AFP.

"When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events," Bo Zhao, from the University of Washington, told AFP.

As AI-generated imagery grows increasingly convincing, it is "important for the public to approach such visual content with caution and critical awareness," Zhao said.

P.AbuBaker--SF-PST