A manifesto for fair and equitable research funding in ecology
Linton Winder‡, Simon Hodge§
‡ Toi Ohomai Institute of Technology, Rotorua, New Zealand
§ BHU Future Farming Centre, Lincoln, New Zealand

Abstract

The way in which research funding is allocated by both governmental and non-governmental research agencies needs to be revamped to avoid bias and encourage innovation. Known biases in allocation of funding include those driven by gender, race, institution size, geographic location and interdisciplinary study. We also contend that, although the peer-review process appears fair, its flaws work against funding innovative science. We propose an unbiased process that combines the use of short proposals, blinded review and a lottery to allocate funding.

Keywords

Funding, research, grants, lottery, bias, ballot, fairness

Introduction

Research scientists spend substantial amounts of time preparing research grants for submission. For example, Herbert et al. (2013) report that in 2012 the Australian National Health and Medical Research Council alone received 3727 applications. Each proposal took an average of 38 working days to prepare, with an estimated effort equivalent to 550 working years at a cost of AU$66 million. While this effort is to some extent inevitable given the funds available and the competitive nature of gaining grants, it can be a frustrating process, particularly for new researchers and for those marginalised by biases that reduce their chances of success (see below).

We believe that the way funds are distributed requires a complete overhaul to make the process both fair and equitable, and to drive innovation and ambition. Innovation and ambition are constrained by a risk-averse approach to awarding grants that maintains the status quo. This overhaul is necessary for two main reasons:

1. The process is biased

There is substantial evidence that bias in many forms impacts on the grant-review process, leading to outcomes that are neither fair nor in the interest of science itself. Known negative biases include those driven by gender (Pohlhaus 2011; Tricco et al. 2017), race (Ginther et al. 2011), institution size (Murray et al. 2016), geographic location (Wahls 2016), and interdisciplinary study (Bromham et al. 2016). Additionally, Boudreau et al. (2016) found a tendency for reviewers with a shorter intellectual distance from an application to provide harsher evaluations, even when the applications were highly innovative. Other factors can also influence decision-making, such as the bias towards the perceived value of high-income-country research (Harris et al. 2017).

Known positive biases include the distribution of funding to a disproportionate minority of investigators (Wahls 2017). The tendency to reward eminence or track record by giving them more weight in funding decisions is, as Vazire (2017) points out, “… like giving Usain Bolt a 10 meter head start in his next race”. This results in elite scientists over-attracting resources (Ma et al. 2015; Szell and Sinatra 2015), with a bias towards an orthodoxy where well-established laboratories are favoured over those that are small or newly established (Alberts 1985; Alberts et al. 2014; Daniels 2015), even when established research units might deliver reduced productivity over time (Lorsch 2015).

Such positive and negative biases in the funding process do not mean that poor science is being funded, but they do result in a culture where some researchers are marginalised and innovation is stifled. In a newspaper interview, the Nobel Prize winner, Sir James Black (who created medicines for the prevention of heart attacks and stomach ulcers), stated that “… the process of peer review is the enemy of scientific creativity… it tends to reinforce the majority research program by withdrawing funding from its competitors” (see Labini 2016).

2. Peer review is not sufficiently reliable for decision-making purposes

Putting scientists into a room to rank, debate and award research funding appears to be a fair process. Yet there is substantial evidence that this process has weaknesses (Ballabeni et al. 2016; Cicchetti 1991; Fang and Casadevall 2009; Nicholson and Ioannidis 2012; Staats 2014; Danthi et al. 2015; Gallo et al. 2014). Decisions regarding the funding of grant applications can be influenced by small differences in the scores given by reviewers (Box 1). Evidence of poor precision when making judgements (Kaplan et al. 2008) and the impact of expertise on evaluation (Gallo et al. 2016) compromise the process further, whilst panels’ reliance on bibliometric measures of success (impact factor, etc.) tends to deliver risk-averse funding decisions (Stephan 2017).

When the grant-awarding process has been systematically evaluated, there appears to be little evidence that peer-review scores reliably predict the subsequent success of research projects (Danthi et al. 2014; Kaltman et al. 2014; Danthi et al. 2015). Although a study by Li and Agha (2015) indicated that panel rankings predicted good research outcomes (in terms of publication record), a subsequent reanalysis by Fang et al. (2016) found no such effect. Even if panel discussion rather than ranking is used to evaluate proposals, there is little evidence that the reliability of peer review is improved (Fogelholm et al. 2012). In fact, committees are often forced into choices where essentially arbitrary decisions are made from a pool of applications that are all considered more or less equally fundable (Powell 2010).

Such observations might be uncomfortable for the scientific establishment because the peer-review process for grant allocation is embedded in our collective research culture. Our trust in a small panel of scientists to evaluate and assess often large numbers of varied proposals is not warranted. This problem is magnified by the use of ‘introducing members’ (where one or two panel members are asked to evaluate an application fully), which further restricts scrutiny and is likely to introduce personal bias. Ballabeni et al. (2016) found strong support for a process that distributes funds equitably among individuals and establishments, preventing marginalisation of smaller or new research groups and thus avoiding the “… incumbency advantage” that favours “… insiders and the familiar” over the unknown.

There must be a better way

There has been recent interest in introducing a lottery when allocating research funding, which, if combined with blinded review, could result in a much fairer, transparent and, importantly, unbiased system (Avin 2017; Fang and Casadevall 2016a, b, c; Solans-Domenech 2017). A transparent process could allow applicants to be confident that they have been treated fairly (Gurwitz et al. 2014), and these modifications could “… increase the number of funded investigators and harness a greater diversity of tools, perspectives and creative ideas” (Fang et al. 2016a). It would also give all those within the scientific community confidence that “… if my idea is good enough, I’m good enough”.

We propose that the essential components for such a funding system are as follows (a minimal sketch of the end-to-end workflow appears after the list):

a) Using concept notes (i.e., expressions of interest) to reduce the time spent preparing full grant applications (Barnett et al. 2015).

b) Assessing anonymised (blinded) concept notes so that the scientific idea is paramount, reducing the possibility of bias.

c) Evaluating a concept note using a simple binary 'meritorious/non-meritorious' system, with the objective of eliminating "... infeasible, poorly conceived, unoriginal or otherwise flawed applications" (Fang and Casadevall 2016a, b, c). The assessment is based on the validity and novelty of the concept. All proposals that meet this threshold advance.

d) Requesting a full application following this triage.

e) Screening the full proposal for budget, facilities, methodology, etc. to ensure that it is viable with an opportunity to correct any aspect of the proposal. This process is "peer support", not "peer review", and is focussed on supporting the researcher to deliver their "meritorious" research idea.

f) If the proposal cannot be advanced, feedback is given to promote progression at the next funding round. This is a technical evaluation and is not ranked.

g) Once screened and approved, forwarding a full proposal to a lottery. All "meritorious", full proposals are included.

h) Additional features could include limiting the number of applications that an individual researcher can have within the assessment system, and providing robust feedback for both the concept notes and the full applications. Separate calls for established and new entrant funding could also be considered.
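
To make the flow of this proposal concrete, the sketch below simulates the workflow end to end. It is an illustration under stated assumptions only: the concept-note triage and technical screen are represented as simple boolean flags, and names such as allocate_funding, meritorious and technically_viable are hypothetical placeholders rather than part of any funding body's system.

```python
import random

def allocate_funding(concept_notes, budget_slots, seed=None):
    """Illustrative sketch: blinded binary triage (steps b-c), technical
    screening or 'peer support' of full proposals (steps d-f), then a lottery
    among all remaining proposals (step g). The flags stand in for panel
    judgements made on anonymised material."""
    rng = random.Random(seed)

    # Steps b-c: blinded triage of anonymised concept notes, judged only
    # meritorious / non-meritorious; every meritorious idea advances.
    meritorious = [n for n in concept_notes if n["meritorious"]]

    # Steps d-f: full applications are screened for budget, facilities and
    # methodology, with feedback and revision rather than ranking.
    viable = [n for n in meritorious if n["technically_viable"]]

    # Step g: every screened, meritorious proposal enters the lottery with
    # an equal chance of being drawn.
    rng.shuffle(viable)
    return viable[:budget_slots]

# Toy usage: four anonymised concept notes, funding available for two projects.
notes = [
    {"id": "A", "meritorious": True,  "technically_viable": True},
    {"id": "B", "meritorious": True,  "technically_viable": True},
    {"id": "C", "meritorious": False, "technically_viable": True},
    {"id": "D", "meritorious": True,  "technically_viable": True},
]
print([n["id"] for n in allocate_funding(notes, budget_slots=2, seed=1)])
```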

Some of these components are already widely implemented (e.g., concept notes and new-researcher awards) but they are not adopted holistically. Such a scheme would be fairer to all applicants and would be a catalyst for new thinking. All researchers could be confident that the forces of bias are minimised and that the decisions made are less risk-averse. Such a randomisation approach would broaden the range of institutions and researchers that gain funding (Box 2). Those with established track records would still have an advantage; they have the platform to submit well-argued and innovative applications from a position of strength. However, in this system their advantage is implicit, as it is the idea itself that is assessed rather than a cyclical, self-fulfilling grading of the proposal based on previous track record.

Conclusion

Although our proposed approach would not lead to an increase in funded projects, it could provide a platform for research funding to be more equitably distributed. We are not arguing that the current system results in poor science. Rather, we argue that systems should encourage scientists to submit their ideas in the knowledge that they have a genuine chance of gaining funding, where bias, prejudice, and caution are minimised. This would encourage innovation and the development of new ideas. While no system of allocating research funds can be perfect, such an impartial and transparent funding system should be welcomed by the scientific community.

Author contribution

Linton Winder conceptualised the idea, prepared the first draft of the main text and edited text in Box 1 and Box 2. Simon Hodge conducted all the analyses in Boxes 1 and 2, prepared the first drafts of these sections and edited the main text. The authors contributed equally (50:50) to this paper.

| Authors | Contribution | ACI |
| --- | --- | --- |
| LW | 0.5 | 1.000 |
| SH | 0.5 | 1.000 |

Acknowledgements

The authors are grateful to a UK research funding agency that kindly provided data used in Box 1 and Box 2. The authors would also like to thank two referees, the Subject Editor and the Editor in Chief for helping us to substantially improve this paper.

References

  • Anon (2016) Data kindly provided by a UK funding organisation made available via a freedom of information request. Data was provided on the basis that its source would be anonymised.
  • Avin S (2017) Research funding is a gamble so let’s give out money by lottery. blogs.lse.ac.uk/impactofsocialsciences/2017/03/28/research-funding-is-a-gamble-so-lets-give-out-money-by-lottery/
  • Barnett AG, Herbert DL, Campbell M, Daly N, Roberts JA, Mudge A, Graves N (2015) Streamlined research funding using short proposals and accelerated peer review: an observational study. BMC Health Services Research 15: 55. https://doi.org/10.1186/s12913-015-0721-7
  • Boudreau KJ, Guinan EC, Lakhani KR, Riedl C (2016) Looking across and looking beyond the knowledge frontier: intellectual distance, novelty, and resource allocation in science. Management Science. https://doi.org/10.1287/mnsc.2015.2285
  • Cicchetti DV (1991) The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences 14: 119–135. https://doi.org/10.1017/S0140525X00065675
  • Danthi N, Wu CO, Shi P, Lauer M (2014) Percentile ranking and citation impact of a large cohort of National Heart, Lung, and Blood Institute–funded cardiovascular R01 grants. Circulation Research 114: 600–606. https://doi.org/10.1161/CIRCRESAHA.114.302656
  • Danthi NS, Wu CO, DiMichele DM, Hoots WK, Lauer MS (2015) Citation impact of NHLBI R01 grants funded through the American Recovery and Reinvestment Act as compared to R01 grants funded through a standard payline. Circulation Research 116: 784–788. https://doi.org/10.1161/CIRCRESAHA.116.305894
  • Fogelholm M, Leppinen S, Auvinen A, Raitanen J, Nuutinen A, Väänänen K (2012) Panel discussion does not improve reliability of peer review for medical research grant proposals. Journal of Clinical Epidemiology 65: 47–52. https://doi.org/10.1016/j.jclinepi.2011.05.001
  • Gallo SA, Carpenter AS, Irwin D, McPartland CD, Travis J, Reynders S, Thompson LA, Glisson SR (2014) The validation of peer review through research impact measures and the implications for funding strategies. PLoS ONE 9: e106474.
  • Harris M, Marti J, Watt H, Bhatti Y, Macinko J, Darzi AW (2017) Explicit bias toward high-income-country research: a randomized, blinded, crossover experiment of English clinicians. Health Affairs 36(11). https://doi.org/10.1377/hlthaff.2017.0773
  • Kaltman JR, Evans FJ, Danthi NS, Wu CO, DiMichele DM, Lauer MS (2014) Prior publication productivity, grant percentile ranking, and topic-normalized citation impact of NHLBI cardiovascular R01 grants. Circulation Research 115: 617–624. https://doi.org/10.1161/CIRCRESAHA.115.304766
  • Nicholson JM, Ioannidis JP (2012) Research grants: conform and be funded. Nature 492: 34–36.
  • Solans-Domenech M, Guillamon I, Ribera A, Ferreira-Gonzalez I, Carrion C, Permanyer-Miralda G, Pons JMV (2017) Blinding applicants in a first-stage peer-review process of biomedical research grants: An observational study. Research Evaluation 26: 181–189. https://doi.org/10.1093/reseval/rvx021
  • Staats C (2014) State of the science: implicit bias review. Kirwan Institute.
  • Tricco AC, Thomas SM, Antony J, Rios P, Robson R, Pattani R, Ghassemi M, Sullivan S, Selvaratnam I, Tannenbaum C, Straus SE (2017) Strategies to prevent or reduce gender bias in peer review of research grants: a rapid scoping review. PLoS ONE 12(1): e0169718. https://doi.org/10.1371/journal.pone.0169718
  • Wahls WP (2016) Biases in grant proposal success rates, funding rates and award sizes affect the geographical distribution of funding for biomedical research. PeerJ 4: e1917. https://doi.org/10.7717/peerj.1917

Appendix 1

Box 1: Sensitivity of the scoring process for funding decisions

Grant applications to an anonymous United Kingdom funding body (UKFB) are scored by reviewers from 0 to 7, and a mean score is obtained (Anon 2016). We obtained these funding data from eight panel meetings, encompassing 905 grant applications. Across panels, the highest scores ranged from 5.8 to 6.6 and the lowest from 1.0 to 2.9. In total, 192 applications (21.2%) were funded (Table 1). Individual reviewers gave scores to one decimal place, yet in each panel the difference between the lowest-scoring funded application and the highest-scoring unfunded application appeared only at the second decimal place or beyond (0.0001–0.07; Table 1). These small differences suggest that a small variation in the score given by any one reviewer, or inter-application variation in reviewers, could dictate whether an application was funded or not.

Table 1. Summary of grant applications, score evaluation, and numbers funded by an anonymous United Kingdom funding body from eight panels.

| Panel | Applications | Funded applications | Funded (%) | Highest score | Lowest score | Lowest score funded | Lowest rank funded | Break in funding sequence | Applications above last funded | Score difference (lowest funded – highest not funded) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 103 | 23 | 22.3 | 5.8 | 2.6 | 5.00 | 31 | Yes | 8 | 0.01 |
| 2 | 83 | 22 | 26.5 | 6.2 | 2.0 | 5.25 | 33 | Yes | 11 | 0.05 |
| 3 | 97 | 19 | 19.6 | 5.8 | 2.9 | 5.23 | 20 | Yes | 1 | 0.01 |
| 4 | 118 | 26 | 22.0 | 6.5 | 2.8 | 5.14 | 46 | Yes | 20 | 0.01 |
| 5 | 129 | 27 | 20.9 | 5.9 | 2.0 | 5.10 | 34 | Yes | 7 | 0.01 |
| 6 | 94 | 22 | 23.4 | 6.6 | 2.9 | 4.84 | 58 | Yes | 36 | 0.01 |
| 7 | 131 | 24 | 18.3 | 6.3 | 1.0 | 5.00 | 46 | Yes | 22 | 0.0001 |
| 8 | 150 | 29 | 19.3 | 6.3 | 2.6 | 5.32 | 41 | Yes | 12 | 0.07 |

To assess how small, random changes to scores influenced the likelihood of funding, we ran a simulation (40 runs) on the Panel 8 data. To each score, we added a random number drawn from a normal distribution with mean = zero and standard deviation (SD) ranging from 0.01 to 0.35. After adjusting the scores and re-ranking, we assessed the proportion of simulations in which all the originally funded applications were ‘re-funded’. There was a clear negative relationship between the consistency of funding and the score-change SD; with an adjustment SD > 0.25, we recorded no occasions on which all the previous awardees received funding (Figure 1). With SD = 0.25, the average score change was approximately ± 0.2 marks, which represented 3% (0.2 ÷ 6.6 × 100) of the highest score awarded (Table 1). This suggests that even small changes in scores consistently alter the pool of applicants awarded funding.
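
The simulation can be reproduced in outline with the short sketch below. It is a minimal illustration, assuming only a list of mean panel scores; the scores generated here are synthetic stand-ins rather than the UKFB data, and refunding_rate is a hypothetical helper name.

```python
import random

def refunding_rate(scores, n_funded, sd, n_runs=40, seed=0):
    """Add Normal(0, sd) noise to every mean panel score, re-rank, and return
    the proportion of runs in which all originally funded applications
    (the top n_funded scores) would be funded again."""
    rng = random.Random(seed)
    rank = lambda vals: sorted(range(len(vals)), key=vals.__getitem__, reverse=True)
    original_top = set(rank(scores)[:n_funded])
    hits = 0
    for _ in range(n_runs):
        noisy = [s + rng.gauss(0, sd) for s in scores]
        hits += set(rank(noisy)[:n_funded]) == original_top
    return hits / n_runs

# Toy usage: 150 synthetic scores between 1.0 and 6.3 (cf. Panel 8), 29 funded.
gen = random.Random(42)
toy_scores = [round(gen.uniform(1.0, 6.3), 1) for _ in range(150)]
for sd in (0.01, 0.10, 0.25, 0.35):
    print(f"SD = {sd:.2f}: all re-funded in {refunding_rate(toy_scores, 29, sd):.0%} of runs")
```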

Figure 1.

Results of adding a small random number to funding application scores for Panel 8 of the anonymous United Kingdom funding data: percentage (%) of 40 simulations that resulted in all previously funded applications being awarded funding.

Box 2. What if funding bodies played dice? Sensitivity of the funding process to varying random selection

Given that applications to the UKFB that were considered ‘fundable’ but not actually funded were separated from successful applications by very small score differences (Table 1), the process is clearly sensitive to even small fluctuations in the marks awarded by evaluators (Box 1). So, rather than funding applications strictly on the basis of their final score, what if various rules were applied so that all more or less equally scored applications had a chance of being funded?

It appears that some rule other than simply funding the top-scoring applications was already applied to these data: for each panel, when the applications were ranked by score, a break in the funding sequence occurred. This indicates that some applications, although scoring highly, were overlooked in favour of applications with lower scores (Table 1). In one panel (Panel 6), the lowest-scoring funded application was ranked 58th out of 94 applications when only 22 projects were funded; 36 applications scored more highly than this application and yet were not funded.

What happens when we introduce more randomness to the decision process so that all applications in the ‘fundable zone’ have a chance of receiving funding? In Panel 8 of the data, there were 150 applications, of which 147 were deemed fundable after an initial screening process and 29 were actually funded. Using a bootstrap-resampling process (1000 iterations), we applied a series of selection rules to these data, each using a different degree of randomness to select ‘winners’ from the 147 fundable applications. We then ascertained how many previously unfunded applications would now be funded in each case (Table 2). Naturally, if all 29 funded grants were selected at random from all those considered fundable, a high percentage (80.1%) of previously unfunded grants would be funded (Scenario 1). However, even when we applied much stricter rules, such as awarding all applications with a score ≥ 6 (only six applications) and then selecting the remainder at random from those with scores ≥ 5, nearly half (44.8%) of the applications now awarded funding had not received funding in the actual funding round (Table 2; Scenario 6).
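
A minimal sketch of this resampling exercise is given below, assuming synthetic scores rather than the actual Panel 8 data. The function and rule names (pct_new_awardees, rule_scenario_6) are illustrative, and the thresholds follow Scenario 6 of Table 2.

```python
import random

def pct_new_awardees(scored, actually_funded, rule, n_awards=29, n_iter=1000, seed=0):
    """Repeatedly draw a set of award winners under a selection 'rule' and
    return the mean percentage of winners that were not funded in the actual
    round. 'scored' is a list of (id, score) pairs; 'rule' returns the ids
    funded outright and the pool eligible for the random draw."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        outright, pool = rule(scored)
        drawn = rng.sample(pool, max(0, n_awards - len(outright)))
        winners = set(outright) | set(drawn)
        total += 100 * len(winners - actually_funded) / n_awards
    return total / n_iter

def rule_scenario_6(scored):
    # Scenario 6: fund every application scoring >= 6 outright, then draw the
    # remaining awards at random from applications scoring >= 5 but < 6.
    outright = [i for i, s in scored if s >= 6]
    pool = [i for i, s in scored if 5 <= s < 6]
    return outright, pool

# Toy usage: 147 synthetic fundable applications; the 29 top scores stand in
# for the applications actually funded in the real round.
gen = random.Random(7)
scored = [(i, round(gen.uniform(2.0, 6.3), 2)) for i in range(147)]
actually_funded = {i for i, _ in sorted(scored, key=lambda t: t[1], reverse=True)[:29]}
print(f"{pct_new_awardees(scored, actually_funded, rule_scenario_6):.1f}% new awardees")
```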

Table 2. The effect of using different random selection rules on the inclusion of new awardees, relative to those actually funded, in Panel 8 of the anonymous United Kingdom funding body data.

| Scenario | Selection rule | New grants funded (%) (95% bootstrap CI) |
| --- | --- | --- |
| 1 | 29 funded at random from all 147 fundable applications | 80.1 (86.2–75.9) |
| 2 | 29 funded at random from Top 50 scores | 41.9 (48.3–34.5) |
| 3 | 29 funded at random from those with score ≥ 5 (n = 59) | 50.9 (55.2–44.8) |
| 4 | Top 10 scores all funded. 19 selected at random from remainder (n = 137) | 56.2 (58.6–51.7) |
| 5 | Score ≥ 6 all funded. 23 funded at random from remainder (n = 141) | 66.4 (72.4–62.1) |
| 6 | Score ≥ 6 all funded. 23 funded at random from those with score ≥ 5 (n = 53) | 44.8 (51.7–37.9) |