Research with an X Factor
Life is what happens when you’re busy making other plans. That’s certainly been true for Professor Heiko Paulheim and his research team. What began as a project on the regulation of hate speech on Twitter (now X) has turned into an analysis of the proliferation of pro-Russian narratives—thanks to Elon Musk’s crackdown on academic data access.

“You’re always smarter in hindsight—that’s the nature of research,” says Heiko Paulheim with a laugh. Back in early 2022, when Paulheim, who holds the Chair of Data Science at the University of Mannheim, and his team developed the project “Data-driven Analysis of Hate Speech and the Effects of Regulation in Germany,” the plan looked quite different. The four-person team set out to explore how hate speech spreads on social media and whether regulations like Germany’s Network Enforcement Act (NetzDG) actually help curb it. They were particularly interested in how well platforms comply with takedown obligations—and whether the deleted content truly qualifies as hate speech. Their main focus: Twitter.
Then, on October 27, 2022—barely a month after the project was approved—U.S. entrepreneur Elon Musk took over the platform. “Our project officially started in January 2023,” Paulheim recalls. “Just a few months later, X—as Twitter is now called—revoked researchers’ access to essential data.” Until then, academics had been able to obtain all the necessary data free of charge. “Today, access is possible again to some extent—but only at astronomical prices. For us, real-time research on X just isn’t feasible anymore.”
Paulheim and his three co-researchers—Katharina Ludwig (University of Mannheim), Dr. Raphaela Andres (ZEW – Leibniz Centre for European Economic Research), and Dr. Olga Slivko (Erasmus University Rotterdam)—realized that their project could no longer proceed as planned. The lack of data also made it impossible to analyze the posts deleted by X. “Luckily, we saw it coming. The platform had announced its intention to shut off access, so we downloaded as much as we could while it was still possible,” says Paulheim.
Eight dollars a checkmark
Another major change Musk introduced in the first few months after the acquisition was a revision of the “blue check” verification policy. “Previously, the checkmark was reserved for verified accounts,” Paulheim explains. “Since March 2023, anyone has been able to purchase one. Interestingly, posts from verified accounts now receive significantly higher reach than those from non-verified accounts.” This raised a new research question: Could propaganda accounts exploit the system?
Current societal developments offer a telling example—most notably Russia’s invasion of Ukraine. “Eight dollars a month per account isn’t a big investment for Russian actors,” says Paulheim. “It’s plausible they’re using this new policy to amplify their messaging.” The dataset the four researchers are working with contains around eight million posts related to the war, which is why they’ve chosen to focus their analysis on this aspect.

Analyzing social media using AI
“To test our hypothesis, we trained an artificial intelligence model to distinguish between pro-Russian and pro-Ukrainian posts,” Paulheim explains. One training method involved feeding the model posts from accounts clearly aligned with one side or the other. Another relied on spelling conventions as cues: Kiev, for instance, is the Russian spelling of the Ukrainian capital, whereas Ukrainians typically use Kyiv.
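The spelling-cue approach is a form of weak (distant) supervision, and its core idea can be sketched in a few lines. The toy snippet below is illustrative only, not the team's actual pipeline or model: posts are weakly labeled from the Kiev/Kyiv cue, then a tiny Naive Bayes classifier is trained on those labels so it can score posts that carry no explicit cue. The function names and example posts are invented for this sketch.

```python
# Illustrative sketch of distant supervision via spelling cues.
# NOT the researchers' actual pipeline; a toy stand-in for the idea.
import math
from collections import Counter
from typing import List, Optional

def weak_label(post: str) -> Optional[str]:
    """Assign a weak label from spelling conventions; None if no or conflicting cues."""
    text = post.lower()
    has_kiev, has_kyiv = "kiev" in text, "kyiv" in text
    if has_kiev and not has_kyiv:
        return "pro-russian"    # Russian-derived spelling of the capital
    if has_kyiv and not has_kiev:
        return "pro-ukrainian"  # Ukrainian-derived spelling
    return None

class NaiveBayesText:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""
    def fit(self, posts: List[str], labels: List[str]) -> None:
        self.word_counts: dict = {}
        self.doc_counts: Counter = Counter()
        for post, label in zip(posts, labels):
            self.doc_counts[label] += 1
            self.word_counts.setdefault(label, Counter()).update(post.lower().split())
        self.vocab = len({w for c in self.word_counts.values() for w in c})

    def predict(self, post: str) -> str:
        words = post.lower().split()
        total = sum(self.doc_counts.values())
        def score(label: str) -> float:
            counts = self.word_counts[label]
            n = sum(counts.values())
            s = math.log(self.doc_counts[label] / total)  # class prior
            for w in words:
                s += math.log((counts[w] + 1) / (n + self.vocab))  # smoothed likelihood
            return s
        return max(self.word_counts, key=score)

# Toy posts standing in for the ~8 million war-related posts.
posts = [
    "Kiev regime provocations continue unabated",
    "the Kiev regime escalates again",
    "standing with Kyiv, heroic defenders hold the line",
    "Kyiv endures, defenders hold firm",
]
train = [(p, weak_label(p)) for p in posts if weak_label(p) is not None]
clf = NaiveBayesText()
clf.fit([p for p, _ in train], [y for _, y in train])

# A post with no Kiev/Kyiv cue is classified from co-occurring vocabulary.
print(clf.predict("the regime escalates its provocations"))  # → pro-russian
```

The payoff of weak supervision is the final line: once trained, the model generalizes beyond the cue words themselves, classifying posts that never mention the capital at all.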
The result, according to Paulheim, is a well-performing AI model that can classify posts on X based on specific characteristics—such as pro-Russian or pro-Ukrainian narratives—and track changes over time, such as before and after the blue check verification policy shift. And what did the analysis reveal? “Posts containing pro-Russian narratives—and their reposts—have indeed increased significantly. That strongly suggests Russian actors are deliberately buying blue checks to boost their messages,” Paulheim concludes. “The platform’s new policy plays right into their hands.”
Still, he cautions against drawing overly broad conclusions. “Our dataset comes from a very chaotic period right after Musk’s takeover, when a lot of changes were happening at once on X. So it’s difficult to tie our insights to a single cause.” The team’s next step will be to map the share of pro-Russian accounts on X—and the specific pathways through which their content spreads.
For the researchers, developing the AI model was a crucial part of the project. Now they have a tool they can apply to other topics in future research. “If we ever regain access to relevant data, we could use it to study hate speech after all, for example,” says Paulheim. The team also has a dataset on climate change content that they plan to analyze with the same model.
A threat to public discourse?
To underscore the urgency of their work, Paulheim offers the following example: “A few decades ago, people had a daily newspaper subscription and watched the evening news on public television. Those were their sources of information,” he says. “Today—especially for younger generations—it’s the content they see online that shapes opinions. That’s why we need to understand how this content spreads and how platforms influence that process.”
Paulheim admits he feels uneasy about how dramatically their research plans were derailed by Musk’s crackdown on access. “Researchers enjoy a special privilege: We’re allowed to do more with data than most people, because our goal is public knowledge,” he says. “But when individual tech owners use their power to restrict access to that data, it’s not just a problem for science—it’s a problem for society.”
Text: Jessica Scholich / August 2025
The research project is supported by a €158,000 grant from the German Foundation for Peace Research. Launched on January 1, 2023, and scheduled to run until March 31, 2026, the project is headed by Heiko Paulheim, professor of data science at the University of Mannheim.