
The infrastructure of an information war


93 million views and nearly 10,000 posts between September 1 and 23

Data source: Lists monitored on the FACT platform

Data credits from investigations by: Context.ro, ExpertForum.ro, Watchdog.md, Promo-LEX, PROMPT, DFRLab (Atlantic Council)

Author: Mădălina Voinea

Contributors: Sorin Ioniță, Atilla Biro, Septimius Pârvu, Daniel Timofte, Mihaela Tănase, Iulia Stănoiu

Between September 1 and 23, the networks monitored in the Republic of Moldova generated 9,882 videos, which accumulated 93.1 million views on the TikTok platform. We are talking about a huge flow of mass-distributed content: over 204,000 shares and 1.5 million likes. In a country with only 2.3 million citizens, these figures indicate a propaganda machine operating at full capacity, flooding the digital space of Moldovans 24/7.
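To give a sense of scale, here is a quick back-of-the-envelope calculation based on the figures above; it is a simple illustration, not part of the monitoring pipeline:

```python
# Rough scale of the monitored flow, from the figures cited above.
views = 93_100_000    # total views, September 1-23
videos = 9_882        # videos produced by the monitored networks
population = 2_300_000
days = 23

print(round(views / videos))      # ~9,400 views per video on average
print(round(views / population))  # ~40 views per citizen over the period
print(round(views / days))        # ~4 million views per day
```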

A recurring technique we have noticed is flooding the information flow on a given topic in order to drown out any other type of discourse. This often means that volume replaces the complexity or quality of the messages and videos created. The messages are repetitive, designed to fix the same narratives and conspiracies in the public's mind, but that does not make them any less effective. In the Republic of Moldova, however, we observe a distinctive element: a much higher proportion of content generated with artificial intelligence than what we monitored in Romania. Between September 1 and 23, we monitored 93 accounts specializing in producing manipulative AI content, directed almost exclusively against Maia Sandu and the Action and Solidarity Party (PAS), accounts that we also discussed in a separate investigation.

Patterns observed in viral peaks

Social media data should always be interpreted with caution, taking into account both context and inherent limitations. Our monitoring focuses on accounts with a high degree of potential inauthenticity, accounts that crowd out critical space and public debate by real voices in society, regardless of opinion. Our observations concern amplification mechanisms and their volume, which create the impression of consensus and give disproportionate weight to certain messages.

The most visible phenomenon concerns the daily dynamics of views and shares. In the last seven days monitored, these appear to have decreased significantly. Such a trend can be explained by several scenarios: action by the platform against accounts suspected of inauthenticity to limit views, or the deletion of content that violates TikTok's terms and conditions. At the same time, we cannot rule out a deliberate migration of the infrastructure to accounts that are harder to monitor, voluntary deletion of traces by the operators of these campaigns, or even a strategic move to other social networks. In this sense, what we see is only partial, and our interpretation must remain cautious.

However, if we look at the peak moments, we can identify some trends. For example, September 3 was a day when multiple posts went viral, exploiting an event also covered by the traditional press: the story of a cancer patient who died while his wife was fined for electoral corruption. Videos of the grieving widow, angry villagers, and emotional texts blaming the government for his death circulated widely, fueling a narrative of injustice and abuse by the authorities. Citizens' fear of persecution by the authorities thus became a dominant narrative in the first week of September. This category included numerous messages victimizing and martyrizing Evghenia Guts, the former Bashkan of Gagauzia convicted of electoral corruption. Replicating her story as one of recurrent repression of ordinary citizens is an attempt to turn her case into a collective symbol of "government persecution."

A new peak came on September 12, when the @raportdezastru account posted a video that went viral. In it, a person wearing a T-shirt with the logo of DA – Democrația Acasă (Democracy at Home, an AUR partner party) talks about an allegedly illegal fence erected by neighbors. The posts that followed speculated that the neighbors enjoyed the protection and complicity of the authorities and were thus exempt from the rules applied to ordinary citizens. The narrative quickly amplified: the fence case became a metaphor for what the authors consider the persecution of the DA party, in contrast to the "immunity" of the privileged. It is a classic formula for manipulation: a minor local incident turned into a symbol of systemic injustice.

On September 19, the account @groza5572, which mainly promotes the Patriotic Bloc (pro-Russian), published a video that garnered nearly one million views. This account creates videos that present the Patriotic Bloc as the only force capable of “restoring Moldova” through populist rhetoric that combines promises of social benefits, economic protectionism, and the defense of “traditional values,” while other videos describe a state of alert, corruption, and various accusations against the current government.

Content distribution – What messages do we see amplified by potentially inauthentic accounts?

Between September 1 and 23, most of the political conversation on TikTok from the potentially inauthentic accounts we analyzed was dominated by the topic of local government, which appeared in one in five posts (20.4%). Content in this category focuses on concrete issues such as spending priorities and respect for citizens, and uses everyday problems to paint a picture of leaders who are out of touch with ordinary people and real-life issues.

How is this theme constructed? There are specific events that have been elevated to symbolic status. One example is the incident involving Alexei Buzu, Minister of Labor and Social Protection, whose public statement was turned into evidence of a systemic lack of respect for citizens. The narrative caught on quickly: videos showed people’s reactions at the minister’s house, his subsequent apologies, and public skepticism. Other videos are related to the inefficiency of local investments: images of broken roads, poor-quality asphalt, or “patchwork” repairs that raise the question “where does the money go?”, sometimes directly linked to accusations of corruption. Interestingly, this theme circulates almost as much in Romanian as in Russian, through accounts that recycle video material from real public scandals.

It is important not to confuse legitimate political criticism with amplification operations. Such criticism is part of the debate; what becomes problematic is when the same messages are systematically multiplied by an artificial infrastructure to dominate the agenda, which we are investigating through the networks monitored on TikTok.

With an almost equal share (20.1%), there is the theme of political actors, where Maia Sandu, MPs, ministers, judges, and even European emissaries become characters in a political drama focused on people, not on concrete policies. The dominant narrative line can be summarized as follows: “Europe as a campaign surrogate.” Clips featuring European figures, such as Emmanuel Macron’s statements in support of Maia Sandu, are presented with undertones: convenient “coincidences” before the elections, external applause turned into an internal campaign. Even Maia Sandu’s speech in the European Parliament is reframed as a veiled plea to boost the ruling camp’s chances.

This type of content is personalized, emotional, and contrasts life in Europe with the daily frustrations of citizens. Much of it is produced in Russian, from accounts that we have flagged as potentially inauthentic.

Another consistent category, to which we attributed 18.7% of the posts analyzed, is the topic of electoral processes. Here we have a mix of materials: clips with "experts," recycled content from other platforms, and videos generated with artificial intelligence. In the foreground is the repeated warning that if Maia Sandu and PAS win the elections, Moldova will lose its statehood and be "swallowed" by Romania, a line that echoes the analogies with Georgia that we also observed on Facebook pages monitored during the same period. In the most recent week monitored, alarmism escalated with the conspiratorial promotion of an alleged Romanian plan to invade Transnistria.

Another powerful idea is that the elections are “already rigged.” The videos talk about violations in the pre-campaign phase, the abusive use of administrative resources, and schemes to exclude inconvenient observers. These posts rewrite a familiar narrative: Maia Sandu allegedly “stole” the presidency by falsifying the results and is set to repeat the same scenario in the parliamentary elections.

The content is predominantly in Russian and concentrated around accounts that our assessment has identified as having a high risk of inauthenticity. The messages are repetitive and often of poor quality.

The last two topics carry similar weight in the discussion. The economic and energy theme accounts for 14.6% and presents Moldova as drained of resources by exploding energy costs and factory closures: young people emigrate, industry shuts down, and tens of thousands of people are forced to leave the country, while gas and electricity prices make local production uncompetitive. Another 12.3% relates to the theme of security and war, where Moldova is presented as a neutral state turned into a pawn of the West, complete with NATO exercises, sanctions lists, and the war in Ukraine.

Across the entire set of TikTok posts analyzed, the messages generate an atmosphere of mistrust and urgency. The main propaganda techniques (TTPs) identified are: framing through fear and loss (war, collapse of sovereignty); credibility through figures and unsourced statistics to create the impression that there is solid evidence; repetition of simple labels (“fake president,” “dictatorship,” “Titanic = EU”); selective comparison (with Georgia or Romania) to suggest the inevitability of war and external interference if Moldova continues on the path of European integration.

Case study – Analysis of 10,000 comments from our list, September 1–23

Between September 1 and 23, we analyzed a set of 10,000 comments downloaded from our list of accounts with a high risk of inauthenticity. These include, on the one hand, 93 accounts that produce content with AI, and on the other hand, 151 accounts that recycle real videos, reposting them without much modification.

We noticed that over a third of the comments, specifically 3,442 out of a total of 10,007, were exact duplicates. To be considered a copy, a comment had to appear at least ten times with the same text or the same pattern of emojis. That puts the share of comments with a high potential for coordination at 34%.
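For readers curious how such a check works in practice, here is a minimal sketch of the exact-text part of the rule; the light normalization step is our assumption, and the emoji-pattern variant is omitted:

```python
# Sketch of the duplicate-comment check described above: a comment
# counts as a copy if its text appears at least ten times in the set.
from collections import Counter

def coordinated_share(comments, min_repeats=10):
    # Normalize lightly so trivial whitespace/case variants still match.
    normalized = [" ".join(c.lower().split()) for c in comments]
    counts = Counter(normalized)
    duplicated = sum(n for n in counts.values() if n >= min_repeats)
    return duplicated / len(comments)

# In our data: 3,442 duplicated comments out of 10,007, roughly 34%.
```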

When we compared the two types of inauthentic accounts, we noticed different behaviors, with multiple possible interpretations. In the comments sections of accounts that produce AI content, we found an almost hermetic, exclusively anti-PAS environment, with no nuances or contradictions. In contrast, on the "classic" inauthentic accounts, those that repost real content and pose as fan pages, the discussion was somewhat more mixed: there were both negative comments and positive reactions.

This contrast makes us wonder whether the AI accounts are artificially seeded with bot comments to a greater degree, in order to create the illusion of a uniform and hostile community, a closed ecosystem that amplifies and radicalizes messages. By contrast, accounts that recycle pre-existing content are more likely to be run by small in-house marketing companies, or by professional or passionate posters whose activity resembles inauthentic behavior. Here, a more nuanced discussion of the intentions and tactics involved is needed.

What do we mean? Posting under a pseudonym on the Internet is not illegal. At the same time, this supporter/fan behavior has become a front for disguised political advertising, through accounts created on an assembly line, as we see in both the Republic of Moldova and Romania.

Analysis of comments on 93 accounts that generate content with AI

Analysis of comments on 151 accounts that use real content but that we suspect of inauthentic behavior due to their posting patterns, posting frequency, and political exclusivity.

Different networks, one goal

What connects all these accounts, regardless of the network or technique used, is the common direction of the messages: a predominantly anti-PAS, anti-Maia Sandu, and anti-European Union accession discourse. It is not the messages themselves that are new, as many are recycled or already familiar, but the quantity and consistency with which every micro-incident in Moldovan society is transformed into a politically charged story.

The psychology that these operations seek to exploit is one of permanent alertness and insecurity. All the topics analyzed contain the same type of message: imminent danger, persecution, loss. Overall, we observe an information infrastructure that operates through volume: its strength does not come from the quality of its arguments, but from its ability to rapidly amplify any incident and produce a saturated information environment where the dominant emotion is fear. In such a space, trust, whether in institutions or in democratic processes, becomes fragile and easily destroyed. This conclusion does not only concern the current government, but raises broader questions about the resilience of society in the face of manipulation campaigns. This is not just a matter of verifying facts or combating individual messages, but one that concerns the democratic capacity to manage the pressure of a manipulative information flow.

Moldova has a difficult choice ahead, and we must all realize that election periods are only the tip of the iceberg. The rest of the time, these networks operate almost completely unchecked: NGOs and the press have limited resources, the state has other priorities, and the platforms have long since lost their status as mere intermediaries hosting entertainment content. They now compete directly with television as sources of information, and through inaction they effectively take a position and influence the fate of a country.

The big problem is that we are not adapting quickly enough to the new reality. Neither as a society nor as platforms are we managing to keep pace with the new techniques of manipulation, some for lack of capability and others for lack of will. We cannot even inventory the technology and marketing companies that flood the information space.

And if we don’t do that, public conversation will be artificially dominated by those who develop the best (or best-funded) digital strategies, not by the weight that real issues have in society.

This is where the major anomalies arise: artificial traffic ends up shaping the political opinions and attitudes of people who are disappointed, disillusioned with real life, and ready to swallow pre-packaged solutions from the virtual world.

Methodological notes

Between September 1 and 23, we monitored over 700 accounts on TikTok, of which 498 produced 9,883 videos. These accounts were selected based on their published content and high potential for inauthenticity. The main criteria were the presence of AI-generated content and the use of AI avatars spreading misinformation about the elections, the electoral process, and political parties. For anonymous accounts, we excluded those posting personal content, prioritized those with a high or exclusive frequency of political content, and looked for cumulative conditions: no personal content, exclusive reposting of the same videos, misinformation, and conspiracy-themed content, as sketched below. Any digital monitoring in a field where data collection is imperfect carries a degree of error, however; that is why the entire classification process included manual review and human attribution to validate the automated results and reduce the risk of errors.
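As a rough illustration of how such cumulative conditions can be encoded, here is a minimal sketch; the field names and the political-content threshold are hypothetical, and the real process also relied on manual review:

```python
# Hypothetical sketch of the cumulative selection conditions described
# above. Field names and the 90% threshold are illustrative only; the
# actual classification combined automated signals with manual review.
def high_inauthenticity_risk(account: dict) -> bool:
    if account["posts_personal_content"]:
        return False  # accounts posting personal content were excluded
    return (
        account["uses_ai_avatars"]                      # main criterion
        or account["political_post_ratio"] >= 0.9       # high/exclusive political focus
        or (account["reposts_same_videos"]
            and account["pushes_conspiracy_content"])   # cumulative conditions
    )
```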

We note that we also included channels with a clear pro-Russian orientation, such as @tv.6press, fined 85,000 lei in 2025 by the Audiovisual Council for political advertising in favor of Ilan Șor and the "Pobeda" Bloc, and @primulinmoldova, affiliated with the same oligarch. Although they also post entertainment content, celebrity news, or clickbait, these accounts were monitored exclusively because of the presence of political content.

On the other hand, we excluded the accounts of politicians, influencers, or identifiable individuals who post political content. We focused strictly on accounts with manipulative behavior and high potential for inauthenticity, precisely to avoid including citizens who express their legitimate political opinions, even under pseudonyms.

Another potential problem, or context to consider, is regional contamination, in both directions, between Moldovan accounts posting in Romanian and Russian and accounts from outside the country. We identified such cases: accounts from Belarus such as @maxim0440.6, @liubaz7, or @mashanews (since deleted) entered Moldovan feeds through hashtags about Maia Sandu. Conversely, accounts such as @dratuti45, with overwhelmingly entertainment content and only about 6% of their audience in the Republic of Moldova, were excluded so as not to distort local relevance.

Important: the inclusion of these accounts does not automatically mean that they are part of a single coordinated operation. Propaganda is evolving, and what we are seeing are accounts with inauthentic behavior, but not necessarily directly linked to each other. Coordination is more evident at the level of messages, tactics, and types of narratives.

Methodology

All accounts were collected through scraping, mainly using the FACTory platform, supplemented for the discovery stage with tools such as Exolyt and Apify. We downloaded the posts, automatically transcribed the audio of the videos from September 1–23 into text, and collected the metadata provided by TikTok (play_count = views, digg_count = likes, share_count = shares).
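As an illustration of this step, here is a minimal sketch of aggregating those metadata fields, assuming the scraped posts were saved as one JSON object per line; the file name and layout are our assumptions, not a format of FACTory, Exolyt, or Apify:

```python
# Minimal sketch: summing the TikTok metadata fields named above from
# a scraped dump. "posts.jsonl" and its layout are assumptions.
import json

totals = {"play_count": 0, "digg_count": 0, "share_count": 0}
n_posts = 0
with open("posts.jsonl", encoding="utf-8") as f:
    for line in f:
        post = json.loads(line)
        for field in totals:
            totals[field] += post.get(field, 0)
        n_posts += 1

print(n_posts, totals)  # e.g. ~9,900 videos and ~93.1M plays in our period
```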

For the thematic analysis, we used BERTopic modeling, which generates semantic representations of texts and then groups the data using clustering techniques. Each topic is associated with keywords, and the final interpretation and labeling were done manually.
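For readers who want to reproduce this step, a minimal sketch of the BERTopic pipeline follows; the embedding model, parameters, and input file shown are illustrative choices, not necessarily the exact configuration we used:

```python
# Illustrative BERTopic pipeline: embed transcripts, cluster them, and
# inspect keyword-based topic representations. Model choice and
# parameters are assumptions; final topic labels were assigned manually.
import json
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# "transcripts.jsonl" is a hypothetical file of transcribed videos.
with open("transcripts.jsonl", encoding="utf-8") as f:
    docs = [json.loads(line)["transcript"] for line in f]

# Romanian- and Russian-language content calls for a multilingual model.
embedding_model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

topic_model = BERTopic(embedding_model=embedding_model, min_topic_size=20)
topics, probs = topic_model.fit_transform(docs)

# Keywords per topic; interpretation and labeling were done by hand.
print(topic_model.get_topic_info().head(10))
```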

Limitations: the results depend on data quality, clustering parameters, and pre-processing. In addition, the labels remain human interpretations, not an automatic “objective” result.
