Published: 5. 11. 2020
Research prizes

The Czech Academy of Sciences recently announced that nine exceptional scientists had been awarded the DSc. degree. One of them was Pat Lyons from the Institute of Sociology of the Czech Academy of Sciences, for his outstanding work Political Knowledge in the Czech Republic. Read the interview with Pat Lyons, conducted by Filip Lachmann, head of the Press and Publications Department.

Pat Lyons, DSc.

The Czech Academy of Sciences has just awarded you a DSc. degree. What does it mean for you?

It is an honour to be the first person from the field of sociology to receive the DSc. degree in the human and social sciences. Moreover, it is nice that my theoretical and empirical book on political knowledge in the Czech Republic between 1967 and 2014 (free to download as a PDF) was recognised as making a contribution to the scientific understanding of how citizens come to know about politics.

The degree was awarded for your extensive work Political Knowledge in the Czech Republic. Now, a few years after the publication of the book of the same name, would you change any part of it?

No. I think all of the general lessons of the book remain valid. For example, the idealist view that if all citizens had the same level of factual knowledge as experts then democratic politics would be better is incorrect. Being able to recall facts is only one aspect of having political knowledge. This is because other factors such as core beliefs are also important in decision making. This link between beliefs and factual knowledge is evident in public attitudes toward climate change, voting in the Brexit referendum, and differences in opinion toward the pandemic.

Is there any development, regarding the main topic of your work, that has surprised you?

Yes, the way in which information about the current pandemic has been expressed in the media has surprised and shocked me.

What has surprised me is that any person trying to understand the pandemic has been exposed to an enormous number of conflicting facts and interpretations from experts, politicians and media commentators. In part, scientists have been responsible for this problem by promoting the view that securing scientific facts through the development of a vaccine will solve the pandemic problem. The emergence and spread of the COVID-19 disease is primarily a problem of human attitudes and behaviour. The view that if all citizens are given the basic facts and statistics they will do the right thing in protecting themselves and others is naïve. Human behaviour is much more complex. It seems that the results of human and social science research play little role in scientific communication and public policy decision making.

What has shocked me has been the mixed signals given by political leaders; the misrepresentation of the daily pandemic statistics as being accurate measures of the current situation; the manner in which experts who have no expertise in how diseases spread have nonetheless expressed strong views on what should be done; how appropriate survey research options have been wasted, e.g. using self-selected interviewing and testing methods rather than random sampling and in-depth case studies; an over-dependence on sophisticated statistical models that are based on data that are known to be unreliable rather than gather more valid and reliable data; and how the media has focused on the entertainment value of the pandemic with such themes as incompetent leaders, squabbling experts, weird scientific claims, and how best to have a personally “good” pandemic.

In effect, it is very difficult for citizens to be informed because of how the pandemic has been managed by leaders and experts whose general policy is “don't worry, do as we say, we will save you”; whereas the scientific bottom line (as it is currently known) is that citizens must save themselves through effective collective action based on beliefs such as solidarity and collective responsibility.

One conclusion from your book is that expert knowledge also has limits. Does this have implications for science and public policy in a time of pandemic?

Yes. Over the last 15 years there has been growing discussion that scientists' use of standard statistical methods to analyse data is problematic. For example, within medicine and psychology there is a “replication crisis” where the results from most published research are likely to be false because they cannot be independently reproduced (e.g. Ioannidis, 2005). The implication here is that a lot of published scientific knowledge is polluted with unverifiable results. This situation arises because statistical methods are used in a “ritualistic” way by scientists (think here of a cookbook recipe, Jaynes 2003: 492) who are incentivised to present statistically significant results in their papers to ensure publication in prestigious journals, and hence have successful careers (Gigerenzer 2018: 201-202).
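
The scale of this problem can be sketched with the simple arithmetic behind the Ioannidis (2005) argument: the share of statistically significant findings that are actually true (the positive predictive value) depends on the prior odds that a tested hypothesis is true and on statistical power. The following minimal sketch uses illustrative values for both; the numbers are assumptions chosen for the example, not figures from the interview or the cited paper.

```python
# Positive predictive value (PPV) of a "significant" finding, following the
# arithmetic in Ioannidis (2005). The prior odds and power values below are
# illustrative assumptions, not estimates from any particular field.

def ppv(prior_odds, power, alpha=0.05):
    """Share of statistically significant results that reflect true effects."""
    true_positives = power * prior_odds   # true effects that reach significance
    false_positives = alpha               # rate of significant results when the null is true
    return true_positives / (true_positives + false_positives)

for prior_odds in (1.0, 0.25, 0.1):       # 1:1, 1:4, 1:10 true-to-false hypotheses
    for power in (0.8, 0.5, 0.2):         # from well powered to badly underpowered studies
        print(f"prior odds {prior_odds:>4}, power {power:.1f} -> "
              f"PPV = {ppv(prior_odds, power):.2f}")
```

With low prior odds and low power (the last row printed), fewer than a third of significant findings reflect true effects, which is the sense in which most published findings can be false even when every individual test is carried out correctly.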

The general point here is that all knowledge, ranging from citizens’ knowledge of politics to how scientists create knowledge, has an important social basis: facts are not simply discovered, they are created. The degree to which many sciences are now defined by the incorrect use of statistical methods has become critical because the resulting explosion of published knowledge is fragmenting rather than accumulating. Within the social sciences few theories are ever completely disproved, a process that would allow progress to be made. Researchers can be very persistent and creative in finding statistical results that lend support to their favourite theories. If statistical evidence for a theory is lacking, then such a paper is either not submitted to a journal or is rejected by an editor for being “uninteresting”. In fact, the standard version of statistical significance, i.e. null hypothesis significance testing, cannot prove that a theory is true: all that is shown is that an argument of no effects has been rejected, which cannot be interpreted (although this is often done) as demonstrating that the researcher’s preferred theory is true (Jaynes 2003: 135-137, 523-524; Szucs and Ioannidis 2017: 8-9). More formally, the manner in which many studies use statistical significance is a non sequitur, i.e. the conclusions do not logically follow from the premises of the (statistical) argument presented (Branch 2019: 80).
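
A minimal toy simulation makes this non sequitur concrete: the data below are generated so that the preferred theory (“x causes y”) is false by construction, with an unobserved common cause driving both variables, yet a standard significance test rejects the null hypothesis of no association. The variable names and effect sizes are invented for the example.

```python
# Rejecting the null of "no association" does not demonstrate that the
# researcher's preferred theory ("x causes y") is true. Here x has no direct
# effect on y at all; an unobserved common cause z drives both.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1_000

z = rng.normal(size=n)               # unobserved common cause
x = 0.8 * z + rng.normal(size=n)     # "explanatory" variable, driven by z
y = 0.8 * z + rng.normal(size=n)     # outcome, also driven by z (not by x)

result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, p-value = {result.pvalue:.2e}")
# The p-value is tiny, so the null of "no effect" is rejected, yet the theory
# "x causes y" is false by construction: rejection only says the data are
# surprising under "no association", nothing more.
```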

Within the social sciences statistical methods have become more than a method of analysis but less than a theory. Please allow me to explain. Through statistical training, most often at the graduate level, researchers are socialised into defining their research questions only in terms of standard statistical models and of whatever innovations current statistical software offers. Consequently, much research is viewed primarily in terms of what statistical testing allows. Theories are re-written to fit into a statistical model through a process of operationalising key features of a theory using numbers, e.g. answers to survey questions or official data of all types. In short, statistical methods determine what questions are examined and how the resulting (numerical) evidence is analysed. Here method dominates theory. In practice this means that the theory section of an academic article is often little more than a literature review with some minor tweak made to justify originality. The central problem here is that the (inappropriate) use of statistical methods distracts scientists from doing truly original research based on observation and theory, because their thinking is always oriented toward creating an analysable data matrix.

Within the social sciences the use of standard statistical methods has a fundamentally important theoretical consequence: society is viewed in terms of the operation of “social forces” (Abbott 1998: 148). In other words, standard statistical models explore differences across individuals where the goal is to identify associations between key variables (social forces) of interest. This means that social research which uses standard statistical methods treats individuals as objects through which social forces act. Here individuals don't matter, social forces do. As a result, social research produces knowledge about the average citizen, who, in the case of national surveys, frequently does not correspond to any single person interviewed. Consequently, the statistical knowledge generated by the human and social sciences does not refer to humans but to the relationships between social forces that operate across (or above) individuals. Whether one thinks that the human and social sciences should only be about generalities or should be grounded in concrete individual experience, the main point here is that the use of statistical methods comes with strong assumptions that are often not recognised by scientists or by the general public who are informed of the latest research results.
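
A toy simulation can make this point about averages concrete: a regression recovers a positive average association between a “social force” and an outcome even though a sizable minority of simulated individuals experience a negative effect. The effect sizes and the 70/30 split below are assumptions chosen purely for illustration.

```python
# A regression recovers the average relationship between a "social force" and
# an outcome, even when that average describes almost no individual. The
# effect sizes and the 70/30 split are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 10_000

x = rng.normal(size=n)                                         # the "social force" of interest
individual_effect = np.where(rng.random(n) < 0.7, 1.0, -1.5)   # heterogeneous individual effects
y = individual_effect * x + rng.normal(size=n)

fit = stats.linregress(x, y)
print(f"estimated average effect: {fit.slope:.2f}")            # roughly 0.7*1.0 + 0.3*(-1.5) = 0.25
print(f"share of people with a negative effect: {np.mean(individual_effect < 0):.0%}")
# The fitted slope (about 0.25) is a statement about the population average;
# it is not the effect experienced by the roughly 30% of simulated people for
# whom the effect is negative.
```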

Does this matter? I think it does. Within the current pandemic the statistical claim is often made that for most (younger, healthy) people having the COVID-19 disease will not lead to any serious outcomes. Consequently, it is better for all people (except the old and sick) to get the disease and acquire immunity so that life can return to normal as swiftly (and cheaply) as possible. This fits with a specific set of political beliefs where it is argued that COVID-19 policies will change Czech society (presumably for the worse). The “let's get it over and done with” argument is based on a statistical claim made at the level of the Czech population: it says nothing about what will happen in individual cases. What is known is that a condition labelled “long COVID” leaves a proportion of people (according to unpublished UK tracking data, 10% after 21 days and 2% after 90 days) with persistent symptoms such as fatigue and shortness of breath for reasons that are at present unknown (COVID Symptom Study 2020; Tan 2020; Carfì et al. 2020; Townsend et al. 2020).

Here the reader of this article, who is faced with the choice of deciding which pandemic policy is best, experiences first-hand the limits of factual knowledge and the dependence on competing experts and leaders. What to do? Now, as in the past, people will act on the basis of other forms of knowledge, such as personal experience and the application of core beliefs such as freedom, equality, solidarity, and responsibility. Alternatively, in the absence of knowledge or any strong belief there will be apathy, where the default choice is to “free ride” and let others decide. In sum, as I argued in my book on political knowledge, having a factually well-informed citizenry in which all decisions are sensible is a utopia that is as unworldly as the arguments of social scientists who use statistics in a ritualistic manner to make claims about an artificial social world devoid of real people. True knowledge in a time of pandemic must be useful and grounded in life as it is lived and death as it happens.

References

  • Abbott, A. (1998). The Causal Devolution. Sociological Methods & Research, 27(2): 148–181. doi: 10.1177/0049124198027002002
  • Branch, M.N. (2019). The “Reproducibility Crisis:” Might the Methods Used Frequently in Behavior-Analysis Research Help? Perspectives on Behavior Science, 42(1): 77–89. doi: 10.1007/s40614-018-0158-5
  • Carfì, A., Bernabei, R., and F. Landi, for the 'Gemelli Against COVID-19 Post-Acute Care Study Group' at the Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy. (2020). Persistent Symptoms in Patients After Acute COVID-19. Journal of the American Medical Association, 324(6): 603–605. doi: 10.1001/jama.2020.12603
  • COVID Symptom Study (2020). How long does COVID-19 last? June 6. Available at: https://covid.joinzoe.com/post/covid-long-term
  • Gigerenzer, G. (2018). Statistical Rituals: The Replication Delusion and How We Got There. Advances in Methods and Practices in Psychological Science, 1(2): 198–218. doi: 10.1177/2515245918771329
  • Ioannidis, J.P.A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8): e124. doi: 10.1371/journal.pmed.0020124
  • Jaynes, E.T. (2003). Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press. doi: 10.1017/CBO9780511790423
  • Szucs, D. and J.P.A. Ioannidis (2017). When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment. Frontiers in Human Neuroscience, 11: 390. doi: 10.3389/fnhum.2017.00390
  • Tan, L. (2020). A Special Case of COVID-19 with Long Duration of Viral Shedding for 49 Days. medRxiv preprint, posted March 27. doi: 10.1101/2020.03.22.20040071
  • Townsend, L. et al. (2020). Persistent Fatigue Following SARS-CoV-2 Infection is Common and Independent of Severity of Initial Infection. medRxiv preprint, posted July 30. doi: 10.1101/2020.07.29.20164293