Disinformation on U.S.-Iran war takes over the internet

A person stands amid a crowd, holding an orange protest sign.

In a world increasingly connected yet paradoxically prone to division, the digital landscape has become a battleground for truth. That grim reality was starkly evident following the recent U.S.-Israel military strikes against Iran. Even before the dust had settled on the ruins of the Shajareh Tayyebeh school, where a strike killed up to 168 adults and children, a different kind of war was already raging online. Users were busy "engagement farming," a practice in which false or misleading content is spread to gain attention, clicks, and, often, profit. Clips from digital flight simulators were presented as genuine real-time operational footage, while old videos of aerial missile attacks and out-of-context images of warships were repurposed, all to sell a narrative of overwhelming Iranian dominance and military success. Alarmingly, much of this content was edited with artificial intelligence, making it even harder for the average user to distinguish truth from fabrication.

The speed and scale at which these deceptive posts circulated were staggering. According to experts monitoring the situation, the misleading content accumulated hundreds of millions of views across social media platforms in a matter of days. This rapid proliferation highlights the urgent and growing challenge of containing misinformation, especially during international crises, and the sheer volume of engagement generated by these falsehoods underscores the power of social media algorithms to amplify sensational content, regardless of its accuracy.

The flood of viral posts, fueled by users earning cash for viral falsehoods, grew alarming enough that social media giant X (formerly Twitter) was forced to revise its misinformation policies. As of yesterday, X announced that users in its Creator Revenue Sharing program will face suspension if they post AI-generated content depicting armed conflict without clearly labeling it as such. The move, while a step in the right direction, underscores how reactive platform moderation remains in the face of rapidly evolving deceptive tactics, with platforms struggling to keep pace with those looking to exploit global events for personal gain or political influence.

The insidious reach of misinformation has extended beyond social media feeds, permeating even what many once considered reliable sources of information. Shockingly, not even Google searches are safe from misinformation in the current digital climate. This development is particularly concerning because search engines are often the first point of contact for individuals seeking information, especially during fast-moving news cycles. When even these foundational tools can present misleading or inaccurate content, the challenge of discerning truth becomes significantly harder for the average internet user, eroding trust in the very infrastructure of online information retrieval.

The widespread distribution of digital misinformation is not accidental; it is the deliberate output of a complex network of automated bots and "engagement farming" accounts, all operating with a single shared objective: to be the most prominent, loudest, most-clicked account in the digital space. These entities are not seeking casual interaction; their existence is predicated on capturing and monopolizing user attention. Some are driven by a desire for political and social influence, aiming to sway public opinion or destabilize narratives during sensitive geopolitical events. Others are motivated purely by financial incentives, leveraging viral content, however false, to generate revenue through advertising or direct payments from social media platforms. Everyday users repeatedly fall victim to these schemes, susceptible because of confirmation bias, the psychological tendency to favor information that confirms existing beliefs, and a growing reliance on digital news sources for immediate updates. What began as a relatively harmless exchange of memes and clickbait has evolved: engagement farming is no longer just a digital nuisance but a dangerous, politically charged game with real-world consequences for individuals and international relations alike.

What Users Are Seeing as the U.S.-Iran Conflict Rages: A Deluge of Deceit

During intense periods of geopolitical tension, like the ongoing U.S.-Iran conflict, the online environment becomes a breeding ground for specific types of disinformation. Experts observing these patterns explain that recent posts actively engaging in misinformation about this conflict primarily focus on two key strategies: grossly exaggerating the scale of Iranian counterattacks and falsely inflating their success. This narrative aims to project an image of formidable power and effectiveness, regardless of the actual events on the ground. Such tactics are designed to influence public perception, both domestically within Iran and internationally, potentially shaping political discourse and public sentiment about the conflict.

A recent in-depth investigation by Wired meticulously documented hundreds of instances of deceptive content circulating across Elon Musk's social media platform, X. This extensive report revealed a shocking array of misleading footage and manipulated photos, including sophisticated AI-manipulated content, that promoted false claims regarding the actual scale and impact of the attacks. Many of these posts were strategically released in the immediate aftermath of missile strikes, capitalizing on the initial confusion and hunger for information. One particularly striking example was a post that garnered more than 4 million views, falsely claiming to show ballistic missiles dramatically sailing over the city of Dubai. In reality, the footage depicted an entirely different event: an Iranian attack on Tel Aviv that occurred in October 2024. Another highly deceptive post, viewed over 375,000 times, presented a fictitious "before-and-after" image purporting to show the shelled compound of the assassinated Iranian leader, Ali Hosseini Khamenei. These examples highlight the sophisticated nature of the misinformation, blending old footage with new narratives and outright fabrication to create convincing, yet utterly false, portrayals of events.

The Wired investigation uncovered a worrying trend regarding the source of much of this disinformation. It revealed that nearly all of the misleading posts were shared by accounts holding premium subscriber status, easily identifiable by their blue checkmarks. This group notably included state-funded media outlets in Iran. The implication is significant: accounts with enhanced visibility and perceived credibility, often paid for, were actively participating in the spread of false narratives. This phenomenon suggests a deliberate and coordinated effort by various actors, including state-backed entities, to leverage platform features and perceived authority to amplify their deceptive messages. The blue checkmark, once a symbol of verified identity, has in this context become an enabler of widespread misinformation, granting a veneer of legitimacy to otherwise dubious claims.

Adding to the complexity of the misinformation landscape, accounts have once again resorted to a familiar tactic seen in previous military conflicts: passing off video game footage as authentic news clips. This deceptive practice now often incorporates AI-manipulated images, making the fake content even more convincing. For instance, fabricated images of downed F-35 fighter jets, realistically rendered and appearing to be ripped directly from high-fidelity flight simulator games, have been widely shared. These images have been distributed across platforms like TikTok, with some instances even showing clear links to known Russian influence operations, as reported by the BBC. The use of gaming footage is particularly effective because modern video games boast incredibly realistic graphics, blurring the lines between simulated reality and genuine battlefield events. When coupled with AI manipulation, these fakes become almost indistinguishable from real combat footage, tricking millions of unsuspecting viewers and further muddying the information environment.

Beyond simply reusing out-of-context footage and creating misleading content, the BBC’s investigation also documented a disturbing proliferation of videos that were entirely AI-generated. These sophisticated, fabricated videos managed to accumulate nearly 100 million total views, circulating widely across social media. The BBC identified that these clips were often shared by what the outlet describes as notorious "super-spreaders" of disinformation. These individuals or networks are highly effective at amplifying false narratives, leveraging large follower counts and understanding platform algorithms to ensure maximum reach. The sheer volume of views on purely AI-generated content underscores a critical and evolving threat: the ability to create entirely synthetic realities that can powerfully shape public opinion and sow confusion on a massive scale, all while appearing utterly authentic to the casual viewer.

Visuals are a good way for us to process what is going on in war when we can't comprehend the scale of these conflicts.
- Sofia Rubinson, NewsGuard

Further shedding light on this escalating crisis, a comprehensive report published by the reputable misinformation watchdog, NewsGuard, meticulously detailed additional patterns of deception. This report chronicled numerous instances of users sharing viral posts that propagated false claims of targeted military strikes against U.S. and Israeli strongholds. The primary methods employed in these deceptive campaigns included the repurposing of old video footage, presenting images of destruction entirely out of context, or recontextualizing them to fit a false narrative. These tactics exploit the emotionally charged nature of conflict imagery, manipulating viewers into believing events that never occurred or occurred in a completely different setting. The report provided crucial evidence of a coordinated effort to sow confusion and panic, all while pushing a particular geopolitical agenda.

Sofia Rubinson, a senior editor of NewsGuard's Reality Check newsletter and a co-author of the aforementioned report, offered crucial insights into the modus operandi of these misinformation campaigns. She explained that "these videos are posted by anonymous accounts that tend to report on geopolitical conflicts." Rubinson further clarified, "These are accounts that are known to NewsGuard for spreading exaggerated claims, usually from a pro-Iran perspective." This highlights a pattern of ideologically motivated actors deliberately disseminating misleading information to advance a specific narrative. From these initial anonymous posts, the false claims gain traction. As Rubinson describes, other accounts with significantly larger followings then pick up and amplify these fabrications, rapidly spreading them to a much wider audience. This chain reaction demonstrates how a small group of determined disseminators can inject falsehoods into the mainstream, using larger, often unsuspecting, accounts as unwitting conduits for their deceptive content.

A chilling example of this rapid spread of falsehoods occurred just hours after initial reports of the U.S.'s military strikes in Iran. Almost immediately, users on X began widely reposting an image of a sinking naval aircraft carrier, accompanied by claims that it showed a recent, successful attack on the carrier USS Abraham Lincoln in the Arabian Sea, implying a major victory against U.S. forces. The U.S. military's Central Command (CENTCOM) issued a statement refuting the claim that same day, attempting to quell the burgeoning misinformation. NewsGuard's investigation confirmed that the image showed neither the USS Abraham Lincoln nor any recent attack; it depicted the intentional sinking of the USS Oriskany, an obsolete aircraft carrier, nearly two decades ago. Despite the official refutation and easy verifiability, the false claim continued to circulate, shared by unverified "news" accounts and even by prominent figures such as Kenyan parliamentary member Peter Salasya, whose post alone accumulated more than 6 million views, illustrating how quickly and widely disinformation can spread, even when demonstrably false, especially when amplified by influential accounts.

The pattern of using old footage to create new, false narratives continued. Multiple accounts, including Peter Salasya's, shared another video that purportedly showed Israel's Dimona nuclear power plant under a fierce aerial siege. The video, designed to provoke strong emotional responses and suggest a devastating attack, quickly racked up hundreds of thousands of impressions across pages that were either staunchly anti-Israel or strongly pro-Iran. The truth eventually caught up with the deception: an X Community Note, a feature designed to provide context and corrections, now appears below the video on Salasya's page, clarifying that the footage is not from Dimona or any recent event but was taken from a March 2017 attack in Balaklia, Ukraine. The incident underscores how readily such misleading content is consumed and shared, and the vital role of community-driven fact-checking mechanisms, even when they arrive after the initial damage is done.

The comprehensive analysis conducted by NewsGuard revealed the astonishing reach of these deceptive posts. The misinformation watchdog found that content propagating these false claims and repurposed visuals had already collectively garnered at least 21.9 million views across X alone. This figure represents a significant audience exposed to inaccurate and potentially inflammatory information during a sensitive period of international conflict. The sheer volume of views underscores the urgent need for more robust content moderation and more effective strategies to combat the rapid dissemination of falsehoods on major social media platforms, especially when these lies can have tangible impacts on public understanding and geopolitical stability.

The spread of misinformation during times of conflict isn't limited to battlefield reporting; it often extends to inducing fear and anxiety among domestic populations. Posts designed to instill panic about retaliatory attacks have also circulated widely online. One particularly disturbing example included an unverified list of U.S. cities falsely alleged to be top targets for Iranian sleeper cells. What made this piece of disinformation both bizarre and deceptively simple was its presentation: the list appeared to have been crudely written in Apple's Notes app. That low-effort presentation actually contributed to its virality, since it looked like a leaked, authentic document rather than a professionally produced fake. Such posts prey on public fears, leveraging the anonymity and ease of digital sharing to sow widespread alarm and undermine public trust in official information channels.

Disinformation Is Only Going to Get Worse: A Deepening Crisis

The current online misinformation crisis is not a temporary phenomenon but a rapidly worsening situation, a sentiment echoed by numerous experts in the field. This acceleration is largely driven by two intertwined factors: the rapid advancements in generative AI technologies and the increasingly relaxed moderation policies across many major social media platforms. Generative AI has made it easier than ever to create hyper-realistic fake images, videos, and texts, overwhelming the capacity of human moderators and traditional fact-checking methods. Simultaneously, as platforms like X reduce their content moderation efforts, the floodgates open for malicious actors to spread disinformation with less fear of repercussions. This dangerous combination creates a fertile ground for falsehoods to flourish, making the digital environment more treacherous for users seeking reliable information.

NewsGuard researchers, who continuously monitor the spread of online disinformation, have observed a clear and concerning pattern emerge, particularly during periods of breaking news. Over recent months, including critical events such as the U.S.-led capture of Venezuelan leader Nicolas Maduro, they've noted that misinformation surges precisely when major news is unfolding. This pattern indicates that malicious actors are highly adept at exploiting the initial chaos and uncertainty surrounding significant global events. They capitalize on the public's urgent desire for information, injecting false narratives into the vacuum before verified facts can be established. This strategic timing maximizes the impact and reach of disinformation, making it harder to counter once it has taken root in public consciousness.

Sofia Rubinson, from NewsGuard, further elucidated the dynamics at play, explaining that "people now have a shorter window for the lapse between an event occurring and authentic visuals coming out of the media." To put it more bluntly, in today's fast-paced digital world, users are losing their patience. They have become accustomed to an online environment where information, including visual evidence, is usually right at their fingertips, delivered almost instantaneously. This expectation of immediate gratification, coupled with the inevitable delay in verifying and releasing authentic media, creates a critical vulnerability. The void between an event and its verified documentation is a prime target for those looking to spread false information, as they can quickly fill that gap with fabricated content that appears credible, capitalizing on the public's hunger for instant updates.

These brief periods of uncertainty, often referred to as "information voids" or the "fog of war," become exceptionally fertile ground for disinformation bots and engagement farmers. Rubinson emphasizes that during these gaps between initial breaking news reports and the release of confirmed videos or photos, malicious actors strike with precision, injecting fabricated content to shape early narratives. Beyond spreading false news, these voids also threaten to reinforce conspiratorial thinking: when authentic information is delayed, it can fuel the insidious idea that mainstream news outlets are deliberately withholding information from the public, breeding distrust. These circumstances also play directly into a user's confirmation bias, making them more likely to accept information that aligns with their existing beliefs, even if that information is entirely false. This combination of speed, distrust, and psychological susceptibility makes the information void a critical weak point in the defense against online misinformation.

Political conflicts, by their very nature, are particularly ripe environments for the widespread dissemination of such misinformation. The emotional intensity, the competing narratives, and the high stakes involved create an ideal landscape for deceptive content to flourish. This problem is further intensified by active disinformation campaigns waged by both sides of armed conflicts, each seeking to gain an advantage by controlling the narrative. Researchers have also found that a lack of proximity to events significantly contributes to individuals' susceptibility. When people are physically distant from the conflict zone, they lack direct sensory information or personal witnesses to verify claims. This distance makes it considerably easier for them to believe out-of-context images, exaggerated reports, or outright fabricated information, as they have no immediate way to cross-reference or challenge the narratives presented to them online.

"It's an attempt to fill this fog of war," said Sofia Rubinson, encapsulating the psychological imperative behind the consumption and spread of visual misinformation. She elaborates, "It can be very overwhelming for people. They want to make sense of it, and visuals are a good way for us to process what is going on in war when we can't comprehend the scale of these conflicts." This natural human tendency to seek understanding through visual aids is precisely what malicious actors exploit. When the reality of war is too vast and complex to grasp, a simple, dramatic, and often false image or video can provide a seemingly clear explanation, even if it's entirely manufactured. This vulnerability makes the public particularly susceptible to visual disinformation, as it fulfills an innate desire to contextualize and comprehend events that are otherwise beyond immediate experience.

This challenge becomes even more profound as individuals increasingly rely on social media platforms as their sole sources for news. The trend of consuming news primarily through curated feeds and viral posts means that traditional journalistic standards of verification and balanced reporting are often bypassed. Simultaneously, previously reliable fact-checking tools, including what were once straightforward Google searches, are becoming more unreliable. As search engine results and AI-generated summaries are increasingly tainted by misinformation, the very mechanisms designed to help us find accurate information are themselves compromised. This creates a vicious cycle where the tools intended to combat disinformation inadvertently contribute to its spread, leaving users with fewer trusted avenues for truth and deepening the crisis of public trust in information.

AI Is Harming More Than Helping: The Unreliable Fact-Checker

In a perverse twist, artificial intelligence, often touted as a solution for navigating the complexities of the digital age, is actually exacerbating the misinformation problem. AI chatbots and search features have become deeply embedded in how many users interact with real-world crisis events, with a growing number turning to them as real-time fact-checkers. Sofia Rubinson observed this phenomenon firsthand during NewsGuard's analysis, noting that nearly every X post they examined included the same recurring reply: "@Grok is this true?" This demonstrates a widespread, yet misplaced, trust in AI systems to validate breaking news, even from unverified sources. The public's reliance on these nascent technologies for immediate verification highlights a critical vulnerability in the information ecosystem, as users bypass traditional journalistic processes in favor of algorithmic responses that are far from infallible.

However, the reality is that AI assistants and platform chatbots, including X's own Grok, are notoriously unreliable at disseminating and verifying breaking news. Their underlying models are trained on vast datasets that may contain outdated or biased information, and they often struggle with the real-time nuanced understanding required to assess unfolding events. Furthermore, these AI systems have proven to be inconsistent at applying their own platforms' moderation policies, sometimes failing to flag or remove content that violates guidelines. A stark illustration of this unreliability was reported by the BBC, which found that Grok erroneously verified recent AI-generated images that purported to depict Iranian military movements. This error underscores a fundamental flaw: AI systems can be easily manipulated to confirm fabricated narratives, leading to an alarming situation where the very tools meant to help us find truth actively contribute to the spread of falsehoods, inadvertently lending credibility to deepfakes and manipulated content.

The problem isn't confined to chatbots; it extends to broader AI-powered search functions. According to a second report published by NewsGuard on March 3, Google's AI-powered Search Summaries have repeated misleading claims about the U.S.-Iran conflict, even when users run reverse image searches, a technique typically used to verify an image's origin. For example, NewsGuard researchers uploaded a frame from a video that had been shared online with the false claim that it showed the destruction of a CIA outpost in Dubai. Rather than debunking the claim, Google's AI summary erroneously verified the story, stating: "The image shows a fire at a high-rise residential building in Dubai, UAE, reportedly occurring on March 1, 2026, following regional tensions. … Conflicting reports emerged regarding the cause, with some sources mentioning a drone strike and others referring to the building as a specific intelligence facility." The AI-generated summary not only failed to correct the misinformation but added layers of fabricated detail, effectively legitimizing a false narrative. The truth, as NewsGuard confirmed, was far simpler: the video depicted a 2015 residential fire in the city of Sharjah, an entirely unrelated incident. The example is deeply troubling, demonstrating how AI integrated into trusted search engines can amplify disinformation and construct a sophisticated, yet entirely false, understanding of events, undermining the very purpose of information retrieval.
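
Reverse image searching, at its core, means matching a suspect frame against archives of known imagery. As a rough, hypothetical sketch of the underlying idea, the Python snippet below uses perceptual hashing (via the Pillow and imagehash libraries) to check whether a frame from a viral clip closely matches archival footage; the file names are illustrative only, and real fact-checking pipelines are far more involved.

    # Hypothetical sketch: flag recycled footage by comparing a frame from
    # a viral clip against archival images using perceptual hashes.
    # Requires: pip install Pillow imagehash
    from PIL import Image
    import imagehash

    def find_archival_matches(viral_frame: str, archive: list[str],
                              max_distance: int = 8) -> list[str]:
        """Return archive images whose perceptual hash is within
        max_distance bits of the viral frame's hash. Small Hamming
        distances survive re-encoding, resizing, and mild cropping,
        which is how recycled footage is often caught."""
        query = imagehash.phash(Image.open(viral_frame))
        return [path for path in archive
                if query - imagehash.phash(Image.open(path)) <= max_distance]

    # Illustrative file names only: a frame from the "CIA outpost" video
    # versus coverage of the 2015 Sharjah residential fire.
    matches = find_archival_matches("viral_frame.jpg", ["sharjah_2015.jpg"])
    print(matches or "no archival match found")

Commercial reverse image search operates at vastly larger scale, typically with learned visual embeddings rather than simple hashes, but the matching principle is similar: the question is always whether a "new" image already exists somewhere in the record.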

The growing reliance on AI for information, coupled with its demonstrated propensity to spread falsehoods, has prompted security experts to sound urgent alarm bells over what they term "AI information threats." These threats encompass a wide range of malicious activities, including the use of AI tools specifically designed to generate and amplify misleading content at unprecedented scales and speeds. A comprehensive report by the UK Centre for Emerging Technology and Security starkly suggests that this rapidly worsening information environment could pose existential threats. These dangers extend far beyond mere annoyance; they directly imperil public safety, undermine national security by spreading propaganda and sowing discord, and critically erode the foundations of democracy by distorting public discourse and trust in institutions. The report emphasizes that without direct and concerted intervention from governments, tech companies, and civil society, the consequences of uncontrolled AI-driven disinformation could be catastrophic, fundamentally altering the way societies function and make decisions.

Amidst this digital battleground, the struggle for accurate information takes on an even more challenging dimension for those directly affected by the conflict. Civilians and journalists on the ground in Iran are fighting back against a near-total internet blackout. Such shutdowns, often imposed by authoritarian regimes, severely hamper people's ability to communicate with the outside world, share authentic accounts, and access reliable information. Ironically, the blackout followed a massive push by the Trump administration and its ally Elon Musk to provide Starlink internet connections to those on the ground, aiming to circumvent censorship. Yet while legitimate users struggle, bad actors skilled in digital circumvention are still consistently finding their way through the block and back onto sites like X. The contrast highlights a cruel paradox: those seeking to spread truth are silenced, while those intent on spreading falsehoods continue to exploit vulnerabilities, keeping the digital front open for their deceptive campaigns, further isolating affected populations, and distorting global perceptions of the conflict.

Navigating the Digital Minefield: A Call for Critical Awareness

The ongoing U.S.-Iran conflict serves as a powerful and troubling case study in the pervasive and dangerous nature of modern misinformation. The rapid spread of AI-generated content, recycled visuals, and outright fabrications, amplified by engagement farming and lax moderation, paints a grim picture of our information ecosystem. From falsely depicting missile attacks in Dubai to misrepresenting naval operations and nuclear plant sieges, the deliberate distortion of reality has become a weapon in its own right, capable of shaping public opinion and exacerbating tensions.

What makes this situation particularly perilous is the erosion of traditional safeguards. When social media becomes the primary news source for millions, and even trusted search engines falter in their ability to distinguish truth from fiction, the average user is left vulnerable. The human tendencies of confirmation bias and the desire for immediate answers are exploited by those who profit from chaos or seek to manipulate narratives. As AI continues to evolve, the tools for deception will only become more sophisticated, making the fight against misinformation an increasingly complex and urgent challenge.

The policy adjustments by platforms like X, while necessary, often feel like a reactive scramble against an ever-advancing tide of digital deception. The real solution lies not just with the platforms, but with us, the users. Cultivating a habit of critical thinking, questioning sources, cross-referencing information from diverse and reputable outlets, and being wary of emotionally charged viral content are no longer optional skills; they are essential for digital survival. In this new era of information warfare, vigilance and media literacy are our strongest defenses against the insidious power of online falsehoods.

The fight for truth in the digital age is a continuous one, demanding constant adaptation from individuals, tech companies, and governments alike. As the lines between reality and simulation blur, our collective ability to discern what is real will determine not only our understanding of global conflicts but also the very health of our democracies and societies. It is a stark reminder that in the age of AI and instant information, the greatest power lies not in who can shout the loudest, but in who can think the clearest.



from Mashable