Corresponding author: Linton Winder ( lintonwinder@yahoo.co.uk ) Academic editor: Corey Bradshaw
© 2017 Linton Winder, Simon Hodge.
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Winder L, Hodge S (2017) A manifesto for fair and equitable research funding in ecology. Rethinking Ecology 2: 47-56. https://doi.org/10.3897/rethinkingecology.2.21798
The way in which research funding is allocated by both governmental and non-governmental research agencies needs to be revamped to avoid bias and encourage innovation. Known biases in the allocation of funding include those driven by gender, race, institution size, geographic location and interdisciplinary study. We also contend that peer review appears to offer a fair process, but that its flaws work against funding innovative science. We propose an unbiased process that combines the use of short proposals, blinded review and a lottery to allocate funding.
Keywords: Funding, research, grants, lottery, bias, ballot, fairness
Research scientists spend substantial amounts of time preparing research grants for submission. For example,
We believe that the way funds are distributed requires a complete overhaul to make the process both fair and equitable, and to drive innovation and ambition. Innovation and ambition are constrained by a risk-averse approach to awarding grants that maintains the status quo. This overhaul is necessary for two main reasons:
There is substantial evidence that bias in many forms impacts on the grant-review process, leading to outcomes that are neither fair nor in the interest of science itself. Known negative biases include those driven by gender (Pohlhaus 2011;
Known positive biases include the distribution of funding to a disproportionate minority of investigators (
Such positive and negative biases in the funding process do not mean that poor science is being funded, but they do result in a culture where some researchers are marginalised and innovation is stifled. In a newspaper interview, the Nobel Prize winner Sir James Black (who created medicines for the prevention of heart attacks and stomach ulcers) stated that “… the process of peer review is the enemy of scientific creativity… it tends to reinforce the majority research program by withdrawing funding from its competitors” (see
Putting scientists into a room to rank, debate and award research funding appears to be a fair process. Yet there is substantial evidence that this process has weaknesses (
When the grant-awarding process has been systematically evaluated, there appears to be little evidence that peer-review scores reliably predict the subsequent success of research projects (
Such observations might be uncomfortable for the scientific establishment because the peer-review process for grant allocation is embedded in our collective research culture. Our trust in a small panel of scientists being able to evaluate and assess often large numbers of varied proposals is not warranted. This problem is magnified by the use of ‘introducing members’ (where one or two panel members are asked to evaluate an application fully), which further restricts scrutiny and is likely to introduce personal bias.
There has been recent interest in introducing a lottery when allocating research funding, which, if combined with blinded review, could result in a much fairer, transparent and, importantly, unbiased system (
We propose that the essential components for such a funding system are as follows:
a) Using concept notes (i.e., expressions of interest) to reduce the time spent preparing full grant applications (
b) Assessing anonymised (blinded) concept notes so that the scientific idea is paramount, reducing the possibility of bias.
c) Evaluating a concept note using a simple binary 'meritorious/non-meritorious' system, with the objective of eliminating "... infeasible, poorly conceived, unoriginal or otherwise flawed applications" (
d) Requesting a full application following this triage.
e) Screening the full proposal for budget, facilities, methodology, etc., to ensure that it is viable, with an opportunity to correct any aspect of the proposal. This process is "peer support", not "peer review", and is focussed on supporting the researcher to deliver their "meritorious" research idea.
f) If the proposal cannot be advanced, feedback is given to promote progression at the next funding round. This is a technical evaluation and is not ranked.
g) Once screened and approved, forwarding the full proposal to a lottery; all "meritorious" full proposals are included (a simple sketch of this selection step follows the list).
h) Additional features could include limiting the number of applications that an individual researcher can have within the assessment system, and providing robust feedback for both the concept notes and the full applications. Separate calls for established and new entrant funding could also be considered.
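To make the mechanics of steps (b), (e) and (g) concrete, the following is a minimal sketch (in Python) of how blinded binary triage, viability screening and a uniform lottery could be chained together. The record fields (`meritorious`, `budget_ok`, `methods_ok`) and the function names are hypothetical assumptions for illustration only, not a description of any funder's existing system.

```python
import random

# Hypothetical illustration of the proposed pipeline; field names and the
# two-stage structure are assumptions, not an existing funder's scheme.

def triage_concept_notes(concept_notes):
    """Blinded binary triage (step c): keep only notes judged 'meritorious'."""
    return [note for note in concept_notes if note["meritorious"]]

def screen_full_proposal(proposal):
    """'Peer support' screening (step e): check the proposal is technically viable."""
    return proposal["budget_ok"] and proposal["methods_ok"]

def allocate_by_lottery(full_proposals, n_awards, seed=None):
    """Draw awards uniformly at random from all screened, meritorious proposals (step g)."""
    rng = random.Random(seed)
    eligible = [p for p in full_proposals if screen_full_proposal(p)]
    return rng.sample(eligible, min(n_awards, len(eligible)))

# Example: three full proposals competing for two awards.
pool = [
    {"id": "A", "budget_ok": True, "methods_ok": True},
    {"id": "B", "budget_ok": True, "methods_ok": True},
    {"id": "C", "budget_ok": True, "methods_ok": False},  # returned with feedback (step f)
]
print([p["id"] for p in allocate_by_lottery(pool, n_awards=2, seed=1)])
```

Because every eligible proposal enters the draw with equal probability, an applicant's identity, institution or track record cannot influence the final selection.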
Some of these components are already widely implemented (e.g., using concept notes and new researcher awards), but they have not been adopted holistically. Such a scheme would be fairer to all applicants and be a catalyst for new thinking. All researchers could be confident that the forces of bias are minimised and that decisions made are less risk-averse. Such a randomisation approach would increase the number of institutions and individuals, and broaden the range of researchers, that gain funding (Box 2). Those with established track records will still have an advantage; they have the platform to submit well-argued and innovative applications from a position of strength. However, in this system their advantage is implicit, as it is the idea itself that is assessed rather than the cyclical, self-fulfilling grading of a proposal based on the applicant's previous track record.
Although our proposed approach would not lead to an increase in funded projects, it could provide a platform for research funding to be more equitably distributed. We are not arguing that the current system results in poor science. Rather, we argue that systems should encourage scientists to submit their ideas in the knowledge that they have a genuine chance of gaining funding, where bias, prejudice, and caution are minimised. This would encourage innovation and the development of new ideas. While no system of allocating research funds can be perfect, such an impartial and transparent funding system should be welcomed by the scientific community.
Linton Winder conceptualised the idea, prepared the first draft of the main text and edited text in Box 1 and Box 2. Simon Hodge conducted all the analyses in Boxes 1 and 2, prepared the first drafts of these sections and edited the main text. The authors contributed equally (50:50) to this paper.
Authors | Contribution | ACI |
---|---|---|
LW | 0.5 | 1.000 |
SH | 0.5 | 1.000 |
The authors are grateful to a UK research funding agency that kindly provided data used in Box 1 and Box 2. The authors would also like to thank two referees, the Subject Editor and the Editor in Chief for helping us to substantially improve this paper.
Box 1: Sensitivity of the scoring process for funding decisions
Grant applications to an anonymous United Kingdom funding body (UKFB) are scored by reviewers from 0 to 7, and a mean score is obtained (
Summary of grant applications, score evaluation, and numbers funded by an anonymous United Kingdom funding body from eight panels.
Panel | Applications | Funded | Applications Funded (%) | Highest score | Lowest score | Lowest score funded | Lowest rank funded | Break in funding sequence | Applications above last funded | Score difference: (lowest funded – highest not funded) |
1 | 103 | 23 | 22.3 | 5.8 | 2.6 | 5.00 | 31 | Yes | 8 | 0.01 |
2 | 83 | 22 | 26.5 | 6.2 | 2.0 | 5.25 | 33 | Yes | 11 | 0.05 |
3 | 97 | 19 | 19.6 | 5.8 | 2.9 | 5.23 | 20 | Yes | 1 | 0.01 |
4 | 118 | 26 | 22.0 | 6.5 | 2.8 | 5.14 | 46 | Yes | 20 | 0.01 |
5 | 129 | 27 | 20.9 | 5.9 | 2.0 | 5.10 | 34 | Yes | 7 | 0.01 |
6 | 94 | 22 | 23.4 | 6.6 | 2.9 | 4.84 | 58 | Yes | 36 | 0.01 |
7 | 131 | 24 | 18.3 | 6.3 | 1.0 | 5.00 | 46 | Yes | 22 | 0.0001 |
8 | 150 | 29 | 19.3 | 6.3 | 2.6 | 5.32 | 41 | Yes | 12 | 0.07 |
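As a rough indication of how the derived columns in the table could be obtained from a panel's raw scores, a minimal sketch follows. The input format (a list of score/funded pairs) is an assumption, since the underlying application-level data are not reproduced here, and the final column is computed under one possible reading of the heading (the gap between the lowest funded score and the best-scoring unfunded application).

```python
def summarise_panel(applications):
    """applications: list of (mean_score, was_funded) tuples for one panel.
    The input format is assumed; the raw application data are not shown here."""
    ranked = sorted(applications, key=lambda a: a[0], reverse=True)
    funded_scores = [s for s, f in ranked if f]
    unfunded_scores = [s for s, f in ranked if not f]

    lowest_funded_score = min(funded_scores)
    # 1-based rank of the lowest-ranked funded application
    lowest_rank_funded = max(i + 1 for i, (_, f) in enumerate(ranked) if f)
    # Unfunded applications ranked above the last funded application
    above_last_funded = lowest_rank_funded - len(funded_scores)

    return {
        "applications": len(ranked),
        "funded": len(funded_scores),
        "percent_funded": round(100 * len(funded_scores) / len(ranked), 1),
        "highest_score": max(s for s, _ in ranked),
        "lowest_score": min(s for s, _ in ranked),
        "lowest_score_funded": lowest_funded_score,
        "lowest_rank_funded": lowest_rank_funded,
        "break_in_funding_sequence": above_last_funded > 0,
        "applications_above_last_funded": above_last_funded,
        # One reading of the final column: gap between the lowest funded score
        # and the best-scoring unfunded application.
        "score_difference": round(abs(lowest_funded_score - max(unfunded_scores)), 4),
    }
```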
To assess how small, random changes to scores influenced the likelihood of funding, we ran a simulation (40 runs) on the Panel 8 data. To each score, we added a random number drawn from a normal distribution with mean = zero and standard deviation (SD) ranging from 0.01 to 0.35. After adjusting the scores and re-ranking, we assessed the proportion of simulations that resulted in all the originally funded applications being ‘re-funded’. There was a clear negative relationship between the consistency of funding and the SD of the score change; with an adjustment SD > 0.25, we recorded no occasions on which all of the previous awardees received funding (Figure
Results of adding a small random number to funding application scores for Panel 8 of the anonymous United Kingdom funding data: percentage (%) of 40 simulations that resulted in all previously funded applications being awarded funding.
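For readers who wish to reproduce the logic of this exercise, the sketch below implements the same idea on placeholder data. The Panel 8 scores themselves are not reproduced in this paper, so the example uses simulated scores and treats the originally funded set as the top 29 by score, which is a simplifying assumption.

```python
import random

def refunded_fraction(scores, n_funded, sd, n_runs=40, seed=42):
    """Proportion of runs in which every originally funded application
    (here approximated as the top `n_funded` scores) remains in the
    funded set after adding Normal(0, sd) noise to every score."""
    rng = random.Random(seed)
    rank = lambda vals: sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
    original = set(rank(scores)[:n_funded])
    hits = 0
    for _ in range(n_runs):
        noisy = [s + rng.gauss(0, sd) for s in scores]
        hits += set(rank(noisy)[:n_funded]) == original
    return hits / n_runs

# Placeholder scores standing in for the 150 Panel 8 applications.
rng = random.Random(0)
scores = [round(rng.uniform(2.5, 6.5), 2) for _ in range(150)]
for sd in (0.01, 0.05, 0.15, 0.25, 0.35):
    print(f"SD = {sd}: all re-funded in {refunded_fraction(scores, 29, sd):.0%} of runs")
```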
Box 2: What if funding bodies played dice? Sensitivity of the funding process to varying random selection
Given that applications to the UKFB that were considered ‘fundable’, but not actually funded, were separated by very small score differences from those that were successful (Table
It appears that some rule other than simply funding the top-scoring applications was applied to these data; in each year, when the applications were ranked by score, a break in the funding sequence occurred. This indicates that some applications, although scoring highly, were overlooked in favour of applications with lower scores (Table
What happens when we introduce more randomness to the decision process so that all applications in the ‘fundable zone’ have a chance of receiving funding? In Panel 8 of the data, there were 150 applications, of which 147 were deemed fundable after an initial screening process, and 29 were actually funded. Using a bootstrap-resampling process (1000 iterations), we applied a series of selection rules to these data, each using a different degree of randomness to select ‘winners’ from the 147 fundable applications. We then ascertained how many previously unfunded applications would now be funded in each case (Table
The effect of using different random selection rules on the inclusion of new awardees, relative to those previously funded, for Panel 8 of the anonymous United Kingdom funding body data.
Scenario | Selection rule | New grants funded (%) (95% bootstrap CI)
1 | 29 funded at random from all 147 fundable applications | 80.1 (75.9–86.2)
2 | 29 funded at random from Top 50 scores | 41.9 (34.5–48.3)
3 | 29 funded at random from those with score ≥ 5 (n = 59) | 50.9 (44.8–55.2)
4 | Top 10 scores all funded. 19 selected at random from remainder (n = 137) | 56.2 (51.7–58.6)
5 | Score ≥ 6 all funded. 23 funded at random from remainder (n = 141) | 66.4 (62.1–72.4)
6 | Score ≥ 6 all funded. 23 funded at random from those with score ≥ 5 (n = 53) | 44.8 (37.9–51.7)
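To illustrate the resampling exercise summarised in the table above, the sketch below re-creates two of the scenarios on placeholder data. The 147 'fundable' scores are simulated and the originally funded set is approximated by the 29 top-scoring applications, so the scenario definitions are assumptions for illustration and the output will not reproduce the percentages in the table.

```python
import random

def new_awardee_percent(scores, originally_funded, rule, n_awards=29,
                        n_iter=1000, seed=1):
    """Mean percentage of awards going to previously unfunded applications,
    where `rule` returns (guaranteed winners, pool for the random draw)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        guaranteed, pool = rule(scores)
        draws = rng.sample(pool, max(0, n_awards - len(guaranteed)))
        winners = set(guaranteed) | set(draws)
        total += 100 * len(winners - originally_funded) / n_awards
    return total / n_iter

# Scenario 1: all 29 awards drawn at random from every fundable application.
def scenario_1(scores):
    return [], list(range(len(scores)))

# Scenario 5: applications scoring >= 6 funded automatically; remaining awards
# drawn at random from the rest.
def scenario_5(scores):
    guaranteed = [i for i, s in enumerate(scores) if s >= 6]
    remainder = [i for i, s in enumerate(scores) if s < 6]
    return guaranteed, remainder

# Placeholder scores standing in for the 147 fundable Panel 8 applications.
rng = random.Random(2)
scores = [round(rng.uniform(2.0, 6.5), 2) for _ in range(147)]
funded = set(sorted(range(147), key=lambda i: scores[i], reverse=True)[:29])
print(new_awardee_percent(scores, funded, scenario_1))
print(new_awardee_percent(scores, funded, scenario_5))
```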