By exploiting AI’s ability to mimic authenticity, Israel’s enemies turned digital platforms into battlegrounds for disinformation.
By Hezy Laing
Israel’s adversaries have increasingly weaponized artificial intelligence to discredit the country online.
Deepfakes, doctored battlefield footage, and fabricated social media posts have become central tools in a coordinated campaign amplified by hostile actors.
During the 2025 Israel–Hamas war, false narratives such as the so‑called “Gaza starvation campaign” flooded digital platforms, portraying Israel as deliberately causing famine.
Promoted by Hamas, Palestinian groups, and sympathetic foreign media, these stories were further magnified by bot networks.
The result was a blurring of truth and fiction, erosion of trust, and manipulation of global opinion.
Yet many experts argue that Israel could harness AI defensively to counter these campaigns and strengthen its credibility online.
Sean McGuire (Sequoia Capital, DefenseTech Summit, Tel Aviv)
“The future of information warfare is AI. If Israel doesn’t build its own engines, defensive and offensive, it will be outmaneuvered in a war it can’t see but is already in.”
McGuire emphasizes the need for Israel to develop its own AI systems to detect and neutralize hostile narratives.
He frames information war as invisible but decisive, urging Israel to use AI to expose fakes, verify facts, and maintain credibility against adversaries like Iran.
Mike Sexton (TRENDS Research & Advisory, August 2025)
“The Israel–Iran cyber engagements provide a compelling case study for examining how artificial intelligence can amplify asymmetric strategies, particularly those targeting softer civilian infrastructures.”
Sexton highlights AI’s defensive potential in protecting civilian information channels and infrastructure from Iranian cyber‑propaganda.
He sees AI as a way to blunt asymmetric attacks by rapidly identifying manipulated content and shielding the public from destabilizing falsehoods.
Dr. Jean‑Michel Valantin (Red Team Analysis Society, June 2025)
“The 12‑day war between Israel and Iran revealed a new way of war in the Middle East, where AI is not only a battlefield tool but also a strategic instrument shaping perceptions and narratives.”
Valantin argues that Israel must extend AI use beyond drones and targeting into the cognitive domain.
He suggests configuring AI to manage narratives, counter propaganda, and reinforce strategic messaging—making credibility itself a weapon in modern conflict.
How AI Could Be Configured for Defensive Information Work
Disinformation Detection: Scanning social media and news feeds to flag AI‑generated fakes before they spread.
Fact‑Checking Automation: Cross‑referencing claims with verified sources in real time.
Pattern Recognition: Identifying coordinated bot networks and troll farms amplifying hostile narratives.
Media Literacy Tools: Embedding AI in consumer apps to alert users when content shows signs of manipulation.
Archiving & Verification: Timestamping and authenticating genuine battlefield footage or official statements.
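The last item, archiving and verification, can be illustrated with a minimal sketch: hash a media file and sign a timestamped record so that any later tampering with either the footage or the record is detectable. Everything below (the signing key, function names, and record fields) is a hypothetical illustration, not a description of any existing system:

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key; a real deployment would use a secure key store.
SIGNING_KEY = b"example-secret-key"

def authenticate_footage(raw_bytes: bytes, source: str) -> dict:
    """Produce a tamper-evident, timestamped record for a piece of media."""
    record = {
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "source": source,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(raw_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and the record is unaltered."""
    if hashlib.sha256(raw_bytes).hexdigest() != record["sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

If an adversary later circulates an edited version of the clip, or quietly changes the record's source or timestamp, verification fails, which is the property that makes authenticated archives useful against doctored footage.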
Israel faces persistent disinformation campaigns—from fake battlefield footage to manipulated casualty reports.
Configured properly, AI could help Israel’s institutions, media, and civil society by rapidly debunking false claims (such as the fabricated “downed F‑35” story), tracking hostile networks, and protecting citizens from synthetic media.
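Tracking hostile networks of the kind described above often starts from a simple heuristic before any machine learning is applied: many distinct accounts posting near-identical text within a short time window. A minimal sketch of that heuristic is below; the function name, thresholds, and data format are illustrative assumptions, not any real platform's API:

```python
from collections import defaultdict

def find_coordinated_posts(posts, window_seconds=60, min_accounts=5):
    """Flag clusters of identical messages posted by many accounts in a burst.

    `posts` is a list of (account, text, unix_timestamp) tuples -- a toy
    stand-in for a real social media feed.
    """
    # Group posts by normalized text so trivially identical messages collide.
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))

    clusters = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        accounts = {account for account, _ in entries}
        span = entries[-1][1] - entries[0][1]
        # Many distinct accounts in a tight window suggests coordination.
        if len(accounts) >= min_accounts and span <= window_seconds:
            clusters.append({"text": text, "accounts": sorted(accounts)})
    return clusters
```

Real bot networks paraphrase and stagger their posts, so production systems replace exact text matching with fuzzy similarity and sliding windows, but the underlying signal, synchronized amplification by clusters of accounts, is the same.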
In short: AI can indeed be configured to help Israel fight its information wars—by detecting fakes, verifying facts, and exposing disinformation.





























