Blogpost 4: Grant Allocation and Panel Behavior: Between Universalism and Particularism

By Ulf Sandström
Posted on April 14th 2021

Keywords: nepotism; cronyism; peer-review; research funding; gender.

One defining feature of the scientific community is the meritocratic principle: decisions that affect a career should be based on a person's performance, their merit. Meritocracy is universalistic; the general rule is to treat everyone eligible for a position or a grant by the same standards (Long & Fox, 1995; Merton, 1942).

However, when science meets society and social relations mix with principles, the result is often particularism: scholarly qualifications are played down, and other factors enter the game, such as gender, age, or ethnicity. Other types of social relations, e.g., friendship, collaboration, and sometimes even family ties, can also play a substantial role.

The GRANteD project is, to a large extent, built on the landmark study by WennerÄs and Wold (W&W) (1997). They analyzed the Swedish Medical Research Council (MRC) competition for post-doctoral position grants in terms of two factors in peer review: sexism (gender) and nepotism (understood as a "friendship bonus"). In its original meaning, nepotism denotes granting favors (such as jobs) to relatives, i.e., family relations as the basis for favoritism, but W&W widened the concept to include all types of conflict of interest (CoI).

Ten years later, the Nature study was replicated by Sandström & HÀllsten (S&H) (2008) using data from 2004 from the same organization, the MRC. They found a gender effect of similar size but in the opposite direction (women received 5 % higher grades). However, if there was a "friend" in the panel, an applicant with otherwise average reviewer scores would be lifted higher, and this nepotism factor worked in men's favor, balancing out the gender effect.

Looking at the literature, grant allocation studies appeared in the 1970s, when the Cole brothers published a report on procedures at the U.S. National Science Foundation (NSF), addressing criticism of tendencies interpreted as an "old boys' network" (Cole et al., 1981). The report tested the geographical hypothesis and found that reviewers and panel members did not favor proposals from their own state or region.

In contrast, favoritism toward one's own university came into focus when one of the most prolific scholars of organizational behavior, Jeffrey Pfeffer, and his colleagues examined data from the NSF and four of its social science panels (Pfeffer et al., 1976). The investigation found that the amount of funds received by a particular university correlated with the number of panel members from that university. By having representatives on the panel, a university had a higher probability of receiving grants. The authors proposed Kuhn's paradigm theory (Kuhn, 1970) as the ground for explaining the result. Pfeffer could see differences between disciplines in how they handled uncertainty in decision-making. The less developed the discipline, the harder it is to achieve consensus on research topics. Instead, the funding decisions of panels in less developed disciplines tended to match the representation of universities in the panel.

Pfeffer concluded that in “absence of universalistic standards, particularistic criteria, deriving from existing social relationships, are more likely to influence decision making”. The authors also tried to explain the mechanism for decision-making under these circumstances. They suggested an explanation that involved implicit exchanges: “Each panel member knows that the universities of other panel members have submitted proposals. Panel members will favor the proposals of their fellow panel members’ institutions, expecting, following the norm of reciprocity, that they will do the same.” (p. 239).

There were also other possible mechanisms, but in this blog post we concentrate on implicit exchange.

Sadly, the "science of science" community has not taken up Pfeffer's elegant paper. Cole (1992) mentions it positively in a short discussion, but that is all, even though the paper is among Pfeffer's more cited (roughly in the top quartile of all his papers). Bornmann & Daniel (2007) are an exception, but they use the paper for quite a different purpose.

The blog post author, at that time unaware of the Pfeffer paper, researched how councils worked with proposals and analyzed particularism in peer review. These investigations started at the end of the 1990s and still continue (now within the GRANteD project). One aspect of the investigations contradicts the idea that paradigmatic development and uncertainty could explain the phenomenon of home-university favoritism. Sandström (2012) used project register data from the Swedish Natural Science Research Council (NFR) for 1989–1996 to analyze the correlation between the total grant sum per university and the share of panel members from each university. The results display a very high correlation (Spearman's rho = 0.90). Furthermore, there is reason to believe that disciplinary uncertainty was rather low in the natural sciences; they were paradigmatic in the Kuhnian sense.
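To make the reported correlation concrete, the sketch below shows how such a rank correlation could be computed. It is a minimal illustration, not the original analysis code: the university names and figures are hypothetical placeholders, and only the structure of the calculation (grant totals versus panel-member shares per university) follows the description above.

```python
# Minimal sketch with hypothetical data: rank correlation between the total
# grant sum per university and the share of panel members from that university,
# in the spirit of the analysis described above. All figures are invented.
from scipy.stats import spearmanr

universities = ["Univ A", "Univ B", "Univ C", "Univ D", "Univ E"]
grant_sum_msek = [420, 310, 150, 90, 40]      # total grants per university (hypothetical, MSEK)
panel_share = [0.28, 0.22, 0.18, 0.10, 0.04]  # share of panel members from each university (hypothetical)

rho, p_value = spearmanr(grant_sum_msek, panel_share)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```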

The level of correlation indicates that research councils could largely be considered an extension of the faculty system rather than a complementary organization, which was the original idea of a council (Nybom, 1997). This also holds for the developed discipline par excellence, physics: Sandström (2000) showed that applicants to the NFR physics committee without a panel member from the same university had a significantly lower chance of success.

Other aspects of the investigations confirm and reinforce the mechanism identified by Pfeffer and colleagues. In order to avoid nepotism, the Swedish councils – hopefully as elsewhere – have established CoI protocols. According to these legally binding protocols, panel members with any integrity issue with respect to any of the applicants must report it before the panel meeting. Whether self-reporting of CoI is an efficient detection routine has been questioned in a study by Gallo et al. (2016): substantial potential conflicts were detected using manual methods.

Interestingly, during the review process and before the disclosure of CoI, panel members can reconstruct an applicant's current relations to any person in the panel, e.g., from the applicant's affiliation, department, supervisor, or co-authors. As panels tend to consider themselves the epitome of excellence, connections to any panel member indicate closeness to excellent research and a good research environment, argued the council defenders Billig & Jacobsson (2009).

This latter view was challenged by Sandström (2012), who showed that panel members of the Swedish Medical Research Council were, at best, on a par – measured by bibliometric performance – with the average Swedish researcher, while Sweden overall had a low average performance level. A similar finding was reported by Abrams (1991), who showed that members of the ecology panel at the US NSF were less influential, citation-wise, than the median grantee.

As the studies mentioned show, the reporting of applicant–panelist relations did not prevent nepotism. Applicants with a connection to panel members have an advantage even if the panel member, as stipulated in the protocol mentioned above, 'leaves the room' when the related application is discussed. Similar results appear in a study from South Korea (Jang et al., 2017): evaluators had "a tendency to give relatively high scores to research proposals submitted by the alumni of the same universities as their alma mater".

Furthermore, recent studies indicate that favoritism is a significant factor in the scoring of applications (see Abdoul et al., 2012). A Canadian study confirmed that CoI strongly affects scoring and explained it in these terms: "One possible reason is that reviewers vote favorably for applicants from the same institution, even if they have never met them and would therefore not be in conflict" (Tamblyn et al., 2018).

Several Chinese studies have investigated the mechanism of 'Guanxi' (interpersonal relationships), i.e., non-family relations used to establish connections for specific purposes, e.g., connections to the research bureaucracy in order to obtain funding (Zhang et al., 2020). Fisman et al. (2018) focus on connections that play a central role in Chinese society, so-called hometown ties, which are observable to researchers. Although the connected persons had considerably lower bibliometric performance (measured by the h-index), hometown favoritism seems to have had large effects on research funding.

Another aspect of panel peer review is the following: for some applicants, panel members have direct and personal information; for others, they have nothing but the written information. This is, in short, a general restriction on the fairness of peer review. At the same time, it makes the typical situation of a panel member understandable: he or she has perhaps one or two candidates for whom it is possible to make a case in the panel deliberations. Panels consist of a few experts in different research lines who are often "both better informed and more biased about the quality of projects in their own area" (Li, 2017; cf. Wang & Sandström, 2015). However, if there are no candidates from their own area of research, the home-university aspect becomes the bottom line, especially if they have at least network information to rely on.

The practice at Swedish universities of giving monetary incentives to faculty who serve on council committees illustrates the harsh competition for resources in Sweden, where extramural sources account for almost 60 % of research funding. Universities can no longer expect that councils will pay back in proportion to their size. The fact that universities use incentives also speaks to the researchers: panel membership adds to their reputation, and their performance in the committee might be evaluated on the basis of the number of grants brought home to their university.

How does nepotism relate to gender effects in research funding? The literature is sparse on that question; basically, it starts with W&W (1997). Out of their 114 applicants, 14 had a reviewer affiliation or CoI recorded in the protocol. Ten of these were in the Microbiology panel; the remaining four were distributed over four other panels, so five panels had nothing in the protocol. All four applicants outside Microbiology were male, they were given a higher competence grade than expected (based on performance), and they were granted. Nepotism counts. The Microbiology panel was more problematic: if one panel member reports with meticulous accuracy while others do not, the numbers become strange due to over-reporting. Women did not benefit from CoI, but men were lifted. About the same result was found by S&H (2008). To repeat: while women had a bonus (5 % higher grades), the nepotism factor worked strongly in favor of men, and the results more or less evened out.

 

Abdoul H, Perrey C, Tubach F, Amiel P, Durand-Zaleski I, Alberti C (2012). Non-Financial Conflicts of Interest in Academic Grant Evaluation: A Qualitative Study of Multiple Stakeholders in France. PLoS ONE 7(4): e35247. doi:10.1371/journal.pone.0035247

Abrams PA (1991). The predictive ability of peer review of grant proposals: the case of ecology and the U.S. National Science Foundation. Social Studies of Science 21: 111–132.

Bellow A (2005). In Praise of Nepotism: A Natural History. Doubleday: New York.

Billig H & Jacobsson C (2009). The Swedish Research Council welcomes debate on openness and competition. Swedish Medical Journal, 109(39), debate section.

Bornmann L & Daniel HD (2007). Convergent validation of peer review decisions using the h index: Extent of and reasons for type I and type II errors. Journal of Informetrics 1(3):204-213.

Cole S, Cole JR, Simon GA (1981). Chance and consensus in peer review. Science 214(4523): 881–886.

Cole, S (1992). Making Science: Between Nature and Society. Harvard University Press: Cambridge.

Fisman R, Shi J, Wang YX, Xu R (2018). Social Ties and Favoritism in Chinese Science. Journal of Political Economy 126 (3): 1134-1171

Gallo SA, Lemaster M, Glisson SR (2016). Frequency and type of Conflicts of Interest in the peer review of basic Biomedical research funding applications: self-reporting versus manual detection. Science & Engineering Ethics 22: 189–197.

Jang D, Doh S, Kang GM, Han DS (2017). Impact of Alumni Connections on Peer Review Ratings and Selection Success Rate in National Research. Science Technology & Human Values 42 (1): 116-143.

Kuhn, TS (1970). The Structure of Scientific Revolutions. 2nd ed. Chicago: Univ Chicago Press.

Li D (2017). Expertise versus bias in evaluation: evidence from the NIH. American Economic Journal: Applied Economics 9:60 92.

Long JS & Fox MF (1995). Scientific careers: Universalism and particularism. Annual Review of Sociology 21: 45–71.

Merton R (1942). The normative structure of science.  In: RK Merton, The sociology of science. University of Chicago Press 1973.

Moed H (2005). Citation Analysis in Research Evaluation. Springer: Dordrecht.

Nybom T (1997). Kunskap Politik SamhÀlle: essÀer om kunskapssyn, universitet och forskningspolitik 1900–2000 [Knowledge, Politics, Society: essays on views of knowledge, universities, and research policy 1900–2000]. Stockholm: Arete.

Pfeffer J, Salanick GR & Leblebici, H (1976). The Effect of Uncertainty on the Use of Social Influence in Organizational Decision Making. Administrative Science Quarterly 21 (2): 227-245.

Sandström U (2000). ForskningsrÄden, politiken och rÀttvisan: studier kring kollegial styrning av forskning [Research councils, politics and justice: studies on collegial governance of research]. Stockholm. www.sister.se

Sandström U (2012). En granskning av granskarna: hur bra Àr beredningsgrupperna? [Vetting the panel members: how good are the review panels?]. Forskning om forskning 2/2012 (revised version 2015).

Sandström U & HÀllsten M (2008). Persistent nepotism in peer-review. Scientometrics 74(2): 175–189.

Tamblyn R, Girard N, Qian CJ, Hanley J (2018). Assessment of potential bias in research grant peer review in Canada. CMAJ 190: E489–E499. doi:10.1503/cmaj.170901

Viner N, Powell P, Green R (2004). Institutionalized biases in the award of research grants: a preliminary analysis revisiting the principle of accumulative advantage. Research Policy 33: 443–454.

Wang Q & Sandström U (2015). Defining the role of cognitive distance in the peer review process with an explorative study of a grant scheme in infection biology. Research Evaluation 24, 3: 271-281.

WennerÄs C & Wold A (1997). Nepotism and sexism in peer review. Nature 387 (6631): 341-343.

Zhang GP, Xiong LB, Wang X, Dong JN, Duan HB (2020). Artificial selection versus natural selection: Which causes the Matthew effect of science funding allocation in China? Science and Public Policy 47 (3): 434–445