Denis Volkov on the problems of public opinion surveys and electoral research in Russia, and public opinion in democratic and authoritarian contexts.
This essay was published in Russian in Gorby magazine, № 17, January 2025, pp. 84–89.
“No man is an island, entire of itself;
every man is a piece of the continent, a part of the main.”
John Donne
Public opinion polls have been criticized since their inception, and modern Russia is no exception. The volume and intensity of complaints about sociological research have risen sharply since the start of the special operation. At the same time, in most cases, critical statements about polls rest not on facts and evidence but on assertions, assumptions and speculations that turn out to be unfounded. This situation is an inevitable consequence of the fact that the key speakers on the topic of polls in our country are often people who know about it at best from books – philosophers, economists, psychologists, political strategists, journalists, bloggers. All of them lack the practice of fieldwork and of systematic work with numbers.
The opinions of the experts who actually conduct the surveys are not always sought. We are well acquainted with the genre of criticizing the Levada Center without asking for explanations from the sociologists themselves – the employees of the research organization. We have repeatedly encountered well-known critics of the surveys refusing to have a direct conversation with us. So let us enter into the dispute in absentia.
Critical statements can be roughly divided into three categories:
- accusations of a sharp decline in the quality of surveys after the start of the special operation;
- talk of how the polls no longer reflect reality;
- and claims that polls are impossible in principle under growing authoritarian pressure.
Let’s follow this list.
The challenges of (non)response
One of the main indicators of the quality of the collected data is the reachability of respondents. This indicator shows what proportion of the planned respondents could actually be reached during the survey. It should be emphasized that we are talking about representative surveys, which show the distribution of opinions across the population; for this they must give more or less equal opportunities to participate in the study to representatives of different categories of the population (by gender, age, place of residence, etc.).
A survey sample always has initial parameters – this can be a database of randomly generated phone numbers to be called, or specified points to be visited during a door-to-door survey. The response rate indicates what proportion of respondents was surveyed according to the initial plan. Since not all generated numbers are answered and not every planned address yields an interview, the rest have to be reached using additional numbers or addresses. A gradual decline in response rates is a well-known problem, but there was no significant collapse of this indicator after the start of Russia’s “special military operation” in Ukraine.
The Levada Center uses the generally accepted recommendations of the American Association for Public Opinion Research (AAPOR) to calculate the response rate. It should also be noted that in the United States a response rate of 25–30% of the planned sample is considered the “gold standard” for political polls conducted through personal interviews. In the Levada Center’s monthly national door-to-door survey, which is the source of most of the data we publish, the average response rate was 27% of the planned sample in 2022, and 29% in 2023 and 2024. This is slightly below the average figure for 2021 (31%), but overall these fluctuations are not significant. Sociologist Vladimir Zvonovsky, who analyzed the corresponding figures for VTsIOM and FOM, likewise concluded that the response rate of their polls in 2022–2023 was stable.
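To illustrate how such a figure is arrived at, here is a minimal sketch in the spirit of AAPOR’s Response Rate 1 (completed interviews divided by all eligible and potentially eligible cases). The function name and the field-outcome counts are purely illustrative assumptions, not the Levada Center’s actual calculation or data.

```python
# Minimal sketch of a response-rate calculation in the spirit of AAPOR's RR1.
# The field-outcome counts below are hypothetical, not Levada Center data;
# unknown-eligibility cases are lumped into a single argument for brevity.

def response_rate_rr1(completed, partial, refusals, non_contacts,
                      other_eligible, unknown_eligibility):
    """RR1: completed interviews / (all eligible cases + cases of unknown eligibility)."""
    denominator = (completed + partial + refusals + non_contacts
                   + other_eligible + unknown_eligibility)
    return completed / denominator

rate = response_rate_rr1(completed=1600, partial=50, refusals=900,
                         non_contacts=2700, other_eligible=150,
                         unknown_eligibility=300)
print(f"Response rate: {rate:.0%}")  # ~28%, of the same order as the 27-29% cited above
```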
In telephone surveys the response rate is significantly lower – 9–10%. This is partly due to people’s reluctance to answer calls from unknown numbers (amid the growth of spam and scam calls), and partly a feature of the method itself – not all generated numbers are in service. Importantly, these figures are in line with international standards, and Russia does not stand out from other countries. In 2022 there were reports of a drop in the response rate of Russian telephone surveys. But according to the American researcher Bryn Rosenfeld, who analyzed the openly published telephone surveys of the polling service Russian Field, it is more appropriate to speak of volatility in this indicator than of an established trend [1].
In any case, there is no reason to speak of a sharp drop in the response rate of Russian polls after the start of the special operation. Moreover, telephone and door-to-door polls give a similar picture of mass assessments of what is happening, which gives additional confidence in the quality of the data they provide.
Critics of surveys like to argue that today’s response rates are what they are mainly because people are afraid to talk to sociologists, and that surveys therefore cannot give an accurate picture of public sentiment. However, an analysis of the reasons for nonresponse does not confirm this. As our colleague Ekaterina Kozerenko writes, the main difficulties in implementing the initial sample are not related to the respondents and their motivation [2]. In 70% of cases they arise because the interviewer cannot gain access to the apartment, because no one is home, or because the respondent is physically unable to answer (was drunk, did not speak Russian, was incapacitated). The second most common reason is refusal to open the door for security reasons; this group is quite stable, and its size has not changed significantly since the early 2000s. Motivated refusals by a specific respondent make up 10–11% of total nonresponse and are driven by an unwillingness to waste time on a procedure the person considers pointless, not by fear of answering sensitive questions.
A detailed analysis found no evidence that people were breaking off interviews en masse in surveys about Ukraine and the special operation. Interrupted interviews, according to Kozerenko, account for only 1–1.5% of total nonresponse. This figure does not change over time, and on questions about the “special military operation” interviews were broken off only a few times over the entire survey, which is not statistically significant.
Another indicator of avoidance in answering questions about support for the government or the “special military operation” would be a noticeable increase in the proportion of respondents who find it difficult to answer. But this is not happening either. Of all the known surveys about the “special military operation”, only the “Chronicle Project” of political activist Alexei Minyailo shows a high proportion of respondents who found it difficult to answer the question about support for it, and in that case the effect is achieved artificially – through the specific wording of the question.
As we have written before, there is no evidence for the claim that only supporters of the government take part in polls [3]. All this allows us to say that, from the point of view of methodology, Russian public opinion polls remain a working tool.
Sociology and elections
An important basis for criticism of Russian public opinion polls is the accusation that polling companies failed to correctly reflect the results of certain elections. For example, Grigory Yudin writes in the introduction to his book “Public Opinion” that the most famous blunders occurred during the 2011 State Duma elections and the 2013 Moscow mayoral election, and this largely underpins the pathos of his subsequent “exposé” of polls. The only catch is that in both cases the polls turned out to be quite close to the election results.
Thus, in 2011 the last published pre-election poll by the Levada Center, conducted two weeks before the elections, gave United Russia only 3.5 percentage points more, and Just Russia and Yabloko 4 and 3 percentage points less, respectively, than they received after the votes were counted. (For the other parties the difference between the poll and the result was less than 1 percentage point.) A poll conducted five days before the vote (and therefore one that could not be published) gave United Russia exactly the share it subsequently received, while the LDPR’s final result was overestimated by 3 percentage points and Just Russia’s was underestimated by the same amount.
After the elections, a meeting of our sociologists with renowned experts in electoral statistics, S. Shpilkin and A. Shen, took place at the Levada Center. Based on their mathematical models, they pointed to significant falsifications in the elections. The subject of the discussion was precisely why the polls converged so neatly with the announced election results. Our explanation was that state control over the central media and the marginalization of the political opposition have a much greater impact on public sentiment (and, as a consequence, on the results of polls and elections) than the alleged falsifications. The seminar did not settle who was right; everyone stuck to their own opinion.
An even more interesting situation developed in 2013. Pre-election polls by FOM, VTsIOM and the Levada Center did indeed diverge markedly from the final election results (although VTsIOM’s exit poll was more accurate). The “inaccuracy” of the polls was largely due to the fact that Muscovites’ opinions changed significantly during the campaign, especially in the last one and a half to two weeks before voting day: sympathy for Alexei Navalny steadily grew, while support for Sergei Sobyanin declined. This was clearly demonstrated by the most successful polls – in terms of method and resources expended – the telephone polls by Komkon, which were conducted daily for 10 weeks throughout the campaign and covered 12,000 Muscovites. For comparison, the Levada Center conducted only two polls of Muscovites at that time, totaling 2,000 people, the last of them a week and a half before the elections.
The closest to the voting results was the tenth Komkon poll, which ended two days before the elections: it overestimated Sobyanin’s final result by 5 percentage points and underestimated Navalny’s by 2.5 percentage points. However, if the dynamics of voter sympathies recorded in the previous Komkon waves are extrapolated beyond this final poll, the resulting figures practically coincide with the election results.
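As a rough illustration of that extrapolation logic – with entirely hypothetical wave-by-wave figures, since the essay cites only the gaps of the final wave (Sobyanin overstated by 5 points, Navalny understated by 2.5) – one can fit a linear trend across the polling waves and project it to election day:

```python
# Sketch of the extrapolation argument: fit a linear trend across polling waves
# and project it to election day. The wave-by-wave shares below are hypothetical.
import numpy as np

days_before_vote = np.array([16, 12, 8, 4, 2])       # field dates of the last waves
sobyanin = np.array([62.0, 60.5, 59.5, 58.5, 58.0])  # hypothetical poll shares, %
navalny  = np.array([20.0, 21.5, 23.0, 24.5, 25.0])

def project_to_election_day(days, shares):
    """Least-squares linear trend over the waves, evaluated at day 0 (election day)."""
    slope, intercept = np.polyfit(days, shares, deg=1)
    return intercept  # value of the fitted line at days_before_vote = 0

print(f"Sobyanin projected: {project_to_election_day(days_before_vote, sobyanin):.1f}%")
print(f"Navalny projected:  {project_to_election_day(days_before_vote, navalny):.1f}%")
```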
The campaign headquarters’ own sociology showed a completely different picture. A poll by Navalny’s headquarters, published on August 20, showed the ratio between Navalny and Sobyanin as 25% to 44% (while polls by Komkon and the Levada Center conducted close to that date showed it as 20% to 60%). On this basis, Navalny’s supporters declared a second round inevitable and, in passing, accused the sociologists of failure; these words were quoted many times and are well remembered.
Less well known is the fact that “a source close to Navalny’s headquarters” told Gazeta.Ru on the day the poll was published that “if the data from the latest study is recalculated using the correct sample, the results indicating the inevitability of a second round are not yet confirmed.” Moreover, Leonid Volkov* confirmed in a conversation with the publication at the time that the data were indeed not final [4]. None of this stopped Volkov from using the figures he himself had publicly refuted and promising a second round in his election forecast: “Sobyanin will get 44–47% of the votes, Navalny will get 26–29% of the votes (followed by Melnikov with 10–12%). And this is the second round”. But the polls did not indicate the possibility of a second round, and it did not happen – the polls were not wrong.
“Preference falsification”
Finally, today we increasingly hear talk about how polls are impossible in an authoritarian context: people are afraid to say what they really think. This automatically assumes that in democracies people are free from any pressure and that each person has an honest, consistent and clearly formed opinion about what is happening. But these assumptions are wrong.
The authoritative German pollster Elisabeth Noelle-Neumann reflected extensively on the influence of fear on public opinion in a democratic society in her work on the “spiral of silence.” The contemporary American public opinion researcher Timur Kuran, writing about “preference falsification” (that is, the tendency of people to hide their preferences and adapt to the dominant opinion under social pressure), emphasized that his theory is universal for any political system, from dictatorships to democracies. Russian critics of polls who like to cite Kuran usually omit this thesis.
For Kuran, public opinion is not the sum of private opinions, but the prevalence and degree of acceptance by individuals of publicly voiced assessments and opinions. This is what polls measure. In contrast, private opinions, which people tend to hide, cannot be measured (ethnography or focus groups can give some hint of their existence, but cannot say anything about the prevalence of such opinions, because they are not representative in principle). Yielding to external pressure, people are forced to adapt to the dominant discourse. This helps maintain the existing social order, which, in turn, contributes to the gradual adoption of the dominant opinion by individuals as their own. In his book, Kuran examines these processes in three different contexts – India, communist Eastern Europe, and the United States.
The American researchers Barbara Geddes and John Zaller arrive at a similar conclusion about the formation of public opinion under different political regimes by a different route. Using public opinion polls from Brazil under the military dictatorship, they showed that public opinion there was formed according to the same patterns as in the United States. The authors rely on the position accepted in communication studies that public opinion in modern mass societies is determined by elites who control the agenda of the central media; the difference between democratic and authoritarian contexts lies only in the forms and degree of such control.
They write that in both Brazil and the US, support for government policy is strongest among people who follow the news but have difficulty critically evaluating mainstream media reports. Support is weakest among those who do not follow the news at all, among those who are obviously opposed to the dominant agenda because of their values or opposition views, and among people with a high level of education who can critically evaluate the dominant discourse. Anyone who closely follows the results of polls in Russia knows very well that support for government policy – for example, with regard to the “special military operation” – is distributed according to the same principles.
This is because in any country, whether the United States or Russia, people who purposefully and regularly follow the news and who seek out and compare different points of view make up less than a third of the population. These are the elites, the activists and the informed public; they hold clearly expressed, consistent and coherent political views and opinions. The majority of people in any country take little interest in political issues; their views are poorly formulated and contradictory. Therefore, on issues not directly related to their life experience, they are strongly influenced by the agenda of the central media.
In his later studies, Zaller specified that in a democracy the elites are divided most of the time on most issues, and their disputes spill over into the central media, which ensures a diversity of opinions among ordinary people. Under authoritarianism, public discussion among the elites is almost non-existent, which ensures the dominance of a single agenda in the media – the official one. Opponents of the government are isolated, and their voices go practically unheard. The issue, therefore, is not so much fear as the fact that the majority of the population follows events superficially and uncritically, does not know the alternatives and sees no reason to doubt the picture of the world created by the media.
Characteristically, even in democratic countries there are moments when disputes within the elite cease for a while, and the intra-elite consensus then turns into the almost unconditional public support for government policy that usually only authoritarian regimes can boast of. In the United States, Zaller notes, this could be observed in the initial stages of the Vietnam War (support for the fight against communism) and during the Gulf War (support for the American military and the rapid growth of George Bush Sr.’s approval rating to 89%). And also – we would add – in the first two years of the current Russian-Ukrainian conflict (high levels of support for American aid to Ukraine), before the confrontation between Democrats and Republicans resumed.
Similar principles of public opinion formation in different countries, regardless of their political regime, bring us back to the methodological aspects of polling. As Danielle Lussier, an American sociologist and an attentive reader of Kuran and Zaller, accurately noted, poor-quality polls can be conducted under any regime. One should not speculate about the flaws of public opinion surveys but carefully scrutinize compliance with methodology [5]. It is important that the sample is correctly constructed, the questions are properly formulated, and a sufficient response rate is achieved.
All of the above gives grounds to assert that the surveys of the leading Russian pollsters still reflect public sentiment in our country quite accurately. While we should continue to monitor the quality of the surveys, it is time to move on from claims that polls show nothing to a meaningful analysis of research results – they allow us to learn a great deal about our society.
- [1] Rosenfeld B. Curious What Russians Think about the War? Ask Yourself This before You Read the Polls / Russian Analytical Digest. February 22, 2023. No. 292. https://css.ethz.ch/content/dam/ethz/special-interest/gess/cis/center-for-securities-studies/pdfs/RAD292.pdf
- [2] Kozerenko E. Analysis of unattainability in the “Courier” of the Levada Center / Bulletin of Public Opinion. 2023. No. 1. https://www.levada.ru/cp/wp-content/uploads/2023/08/VOM1-2023.pdf
- [3] Volkov D. Are polls possible in today’s Russia? / February 10, 2023. https://www.levada.ru/2023/02/10/vozmozhny-li-oprosy-v-segodnyashnej-rossii/
- [4] Kuzmenkova O., Pertsev A. Navalny was removed from Odnoklassniki / Gazeta.ru. August 20, 2013. https://www.gazeta.ru/politics/2013/08/20_a_5598529.shtml
- [5] Lussier D. Does the War Make Russian Public Opinion Polling Worthless? / Russia.Post. December 21, 2023. https://russiapost.info/society/public_opinion