Questions and answers from the Ideas Grants 2024 peer reviewer webinar. Recorded 16 July 2024. A PDF version is available to download.

Speakers

  • Dr Julie Glover, Executive Director, Ideas Grants, Research Foundations, NHMRC
  • Dr Dev Sinha, Director, Ideas Grants, Research Foundations, NHMRC
  • Katie Hotchkis, Assistant Director, Ideas Grants, Research Foundations, NHMRC
  • Professor Stuart Berzins, Ideas Grants Peer Review Mentor
  • Associate Professor Nadeem Kaakoush, Ideas Grants Peer Review Mentor
  • Dr Nicole Lawrence, Ideas Grants Peer Review Mentor
  • Professor Louisa Jorm, Ideas Grants Peer Review Mentor

Questions answered during the Q&A session

Q1a: Comment sharing

Question: When we are completing our assessments, will we be able to see other peer reviewers’ comments?

Dr Dev Sinha: You can view other reviewers' comments after you finish your assessments. There will be a four-week peer review period, during which you should enter your scores and comments in the system. After that, you will see the comments on the same application from the other four reviewers. You can then review them and raise any issues with us if you find something unclear, inaccurate, or inappropriate.

Q1b: Re-reviewing after comment sharing

Question: Will we be able to re-review a grant if there is a disagreement in scoring?

Dr Dev Sinha: There will be no opportunity to re-review; you will only be asked to review the comments from other peer reviewers on the applications you assessed.

Q1c: Reporting comments to NHMRC during the comment review process

Question: Once we get to see other comments as a reviewer and if we notice comments that are inappropriate, what is the actual action from NHMRC?

Dr Dev Sinha: If you have any concerns about the appropriateness of other reviewers' comments, please inform your secretariat. We will review them and contact the reviewer in question if necessary. Often the comments are simply the result of transcription or typographical errors; sometimes they contain other inconsistencies that need to be corrected or require more detail. We will evaluate the comments according to our policy and conduct a risk assessment before deciding whether or not to remove the assessment. Removing an assessment is a very rare measure, but that is the ultimate step we take to address those issues.

Dr Julie Glover: Removing a comment, score, or assessment is a serious action that we only take as a last resort. Our first step is to try to resolve the issue with the peer reviewer. We also check for outliers: if your score differs significantly from the others without a clear explanation in your comments, we will follow up with you and ask you about it. This does not mean that we expect everyone to score the same; we appreciate that different perspectives and expertise may lead to different scores. Ultimately, we need to make sure that the comments follow peer review policies and that the applicant can understand the feedback they receive from their peer reviewers.

Q2: Comment writing

Question: If we are not confident in all aspects of the grant, how do we write comments that are meaningful or specific for the applicant?

Dr Nicole Lawrence: When writing comments for the applicant, you do not need to provide scientific validation for their proposal; rather, you need to provide a clear, evidence-based rationale for your score against each criterion. You should explain which aspects of the proposal were strong or weak, and how they aligned with or diverged from the expectations of the Ideas Grants Scheme. Your comments should help the applicant understand the rationale behind your score, be it a 5 or a 6, and how they could improve their proposal in the future.

Q3: Clinical trial or cohort study applications

Question: For applications that appear to be primarily clinical trials or cohort studies, do these need to be flagged to the secretariat as potentially ineligible, or have the applications already been screened for eligibility prior to review?

Dr Dev Sinha: The Ideas Grants Scheme is not intended to support a proposal where a clinical trial or a cohort study is the primary objective. However, it is not an eligibility criterion. Ultimately, the onus is on the applicant team to address the objective and assessment criteria for the Ideas Grants Scheme in order to be competitive for funding.

Regarding eligibility issues in general, let me briefly explain the process and our role. If you spot any eligibility concerns, please inform your secretariat and proceed to evaluate the application against the Ideas Grants assessment criteria. We adhere to due process and investigate every concern you raise, but applicants also have the right to appeal and the right to receive a fair review of their applications; therefore, it is imperative that you continue to review the application after you have raised the issue with NHMRC.

Q4a: Distribution of scores

Question: I assume there is no expectation that any individual reviewer will necessarily have a "normal" spread/distribution of scores across their allocated applications (e.g. you may have a skewed distribution with nothing but 5, 6 and 7)? Is there any post-scoring normalisation process across reviewers? If one particular reviewer has consistently higher or lower scores than the median/mean, can this be corrected?

Dr Julie Glover: The Peer Review Analysis Committee (PRAC) examined various issues related to the scoring process, such as normalisation and variability of the scores. Based on analysis of the first two years of the Investigator and Ideas Grant schemes, the PRAC report concluded that peer review is variable.

We observed that some assessors tend to score high and some tend to score low, but there are not many assessors who consistently deviate from the other peer reviewers on the same applications. PRAC discussed whether we should normalise or rescale the scores, and this was a major topic of debate over several meetings. However, they decided that it was not clear this would improve the outcome, and they preferred us and the peer reviewers to rely on internal consistency. Our Peer Review Mentors have shared some good examples of how they achieve internal consistency in their scoring and how they score according to the score descriptors. Please refer to those score descriptors, as this is our current approach to ensuring as much consistency as possible.

We are monitoring this closely, and we analyse the data from year to year to see if there are any other indicators we can use to enhance consistency. We would also welcome your feedback. The other thing I should mention is that, statistically, because you are reviewing a relatively small number of applications, you will not get a normal distribution. You may get all good applications, or you may get all poor applications; the main thing is to score according to the score descriptors.

[Additional commentary by NHMRC: Link to PRAC report: Peer Review Analysis Committee]

Q4b: Seven-point scale

Question: Whether we score a 5 or a 6 has a big impact, but many times it is unclear to me; the application in front of me is borderline. Why is scoring restricted to a few integers?

Dr Dev Sinha: We have consulted extensively with our advisory committees and PRAC about the seven-point scale we currently use, which is aligned with international standards. We have observed that the scores tend to cluster around certain points, such as five or six for high-quality applications. If we were to use a zero to 100 scale, the clustering would still occur, but with larger numbers, such as 70 or 80. This would not reflect a significant difference in quality, but rather an artificial inflation of the range within which this variability occurs. For example, the difference between a 65 and a 75 could well be equivalent to the difference between a five and a six on the current scale. We are always open to feedback from experts in this field, such as statisticians, and we regularly review our scoring data and the distribution of scores.

Dr Julie Glover: Just to emphasise, that has definitely been a source of great discussion particularly for PRAC.

Q5: Validation of content in applications

Question: How much validation of the content described in applications does an average reviewer do (for example, reading prior published work that describes key findings or technologies that the applicants refer to but do not describe in detail within the grant)?

Professor Stuart Berzins: The application should contain enough information for the reviewer to make a judgement. Sometimes I might check something if I have doubts about a citation or a claim the applicant makes, but I do not usually look up prior work that they refer to but do not explain in the grant. It is the responsibility of the applicant to provide the relevant details.

Q6: Assessing applications outside of research area

Question: If a reviewer is given an application to assess that they then realise falls outside their area of competence to review authoritatively, how should this be managed by the reviewer?

Professor Stuart Berzins: You will lose a lot of applications due to conflicts of interest, and they are often in the areas you know really well. If reviewers only assessed applications they knew everything about, it would be hard to find enough reviewers for every grant. There are multiple assessors for each grant, so they can balance each other's strengths and weaknesses. Unless the application is completely outside your area and you cannot understand the concepts at all, try to assess it against the criteria. You should be able to judge whether the applicants have made a good case, whether there is an important problem, whether the concepts are sound, whether the outcomes are significant, and so on. These things do not require a lot of knowledge in that specific area. The applicant also has a responsibility to make the application clear and understandable to the wider scientific community.

Professor Louisa Jorm: I agree with Stuart. There is a document that you can download on the key characteristics of good Ideas Grant applications. Good Ideas Grant applications should be clear and accessible to reviewers from different fields. You should be able to assess the quality of the application based on the criteria, even if you are not a specialist in that area. The application should have a strong rationale, a clear problem, sound concepts, and significant outcomes. These are general aspects that do not require a lot of background knowledge. The applicant should also make their application easy to understand for the wider scientific community.

[Additional commentary by NHMRC: The document mentioned can be accessed via the Ideas Grants 2024 opportunity on GrantConnect (GO6844).]

Q7: Order of reading applications

Question: How would the panel recommend the order of reading Ideas Grant applications? I personally started from the first page and finished at the last page. However, I have also heard that people start from the last page (capability statement) to get a sense of applicants' track records first.

Associate Professor Nadeem Kaakoush: I suggest you use a process that allows you to review all the applications fairly. For me, I read the first page and the last page first, then the rest of the research proposal. The first page shows me the aim of the project, the last page shows me the capability of the applicants, and the rest shows me the methods and strategies. However, I also review all the applications again after I finish reading them, to avoid any bias on my side.

Dr Nicole Lawrence: I agree with Nadeem that having a process is important, and I think you need to find what works for you and the grant scheme. For the first few applications, I spend more time figuring out where the information is and how to evaluate the criteria. I then use this process for the rest of the applications and review them faster. Your process may vary depending on your preferences and the research field.

Dr Dev Sinha: Thank you, Nicole and Nadeem, for your practical advice. I would also like to add something from the NHMRC office perspective. When you are doing your assessments, please check the personnel involved in the proposals again, even if you have already done that at the CoI stage. If you notice any conflicts at this stage, please let us know as soon as possible so we can assign another reviewer in time.

Q8: Time limits to assess applications

Question: Do you assign a time limit to review all your grant applications?

Dr Nicole Lawrence: The time it takes to review each application varies. I do not have a fixed time limit, but I always try to allocate enough time for each one. I also try to review the applications close together, rather than spreading them out too much. This helps me stay focused and consistent in my process.

Q9: Score sharing

Question: How can I see the scores of other reviewers? I think my score should matter more if I have more expertise than someone who gives a low score. The current system seems to favour negative scores over positive ones. Can we change this?

Dr Julie Glover: We have certainly been discussing internally the pros and cons of sharing scores as well as comments. One of our concerns is that we really want to encourage people to use the full range of scores; we want reviewers to look individually at the criteria and score accordingly. That was one reason we are wary of sharing scores. The other is that we really want your independent scores, so we do not want to drive people towards the mean. As long as your score is justified, we are comfortable with that.

It is also quite technically challenging to implement in the system, but it is something we are talking about internally, and we would really value your feedback on that in the peer reviewer survey.

Q10a: Track record vs Capability assessment

Question: How do I write a capability assessment without reference to track record? What reasons would cause a reviewer to deduct points from this section?

Associate Professor Nadeem Kaakoush: The capability assessment should focus on the ability of the applicants to perform the proposed work, not their career achievements. For example, the quality and impact of their publications, the amount of funding they have received, or the extent of their collaborations are not relevant. What matters is whether they have the skills and expertise to carry out the specific methods and techniques required for the project. The main reason I would deduct points from this section is if the applicants lack the necessary competence in a key aspect of the proposal.

Dr Nicole Lawrence: I would just like to take the opportunity to do some myth busting here as well. Having a diverse team with different skills and experience is important for the Ideas Grant. The scheme also intends to support the involvement of early career researchers. The main thing is to have a team that can achieve the objectives of the grant, not just the most senior people in the field.

Q10b: Gender Balance and Capability assessment

Question: How should we consider gender balance in the team capability assessment, given that we are not supposed to pay attention to the gender of the applicants? I usually avoid commenting on this aspect even when some teams are not diverse. What is the best practice for this?

Dr Dev Sinha: The main factor for the Capability criterion is whether the team has the appropriate people with the relevant skills, experience, collaborations, and infrastructure to conduct the proposed research. Other aspects, such as gender balance, are of secondary importance compared to the team's ability to achieve the research objectives. There is a bit of a common-sense approach there as well. For instance, you cannot expect gender balance in a single CI application.

Dr Julie Glover: We have a section in the Peer Review Guidelines that helps you to understand your own biases as well. You should be aware of your biases, whether they are conscious or unconscious, and whether they are related to gender, ethnicity, institution, or research discipline. We also provide some guidance from the Declaration on Research Assessment and an implicit association test. These can help you to identify your biases and to ensure that you assess each application fairly.

[Additional commentary by NHMRC: Section 4.3.6.1 Mitigating Bias in Peer Review from the Ideas Grants 2024 Peer Review Guidelines on GrantConnect (GO6844) is being referred to. NHMRC also recommends the San Francisco Declaration on Research Assessment (DoRA) guidance on Rethinking Research Assessment.]

Q10c: Early/Mid-career researchers and Capability assessment

Question: The Ideas Grant scheme aims to support innovation and postdoctoral or early/mid-career researchers, but some reviewers seem to judge capability based on traditional track record measures. How can we ensure fairness and transparency in this assessment category and avoid penalising younger researchers?

Professor Stuart Berzins: The score descriptors indicate that the traditional track record measures are not the criteria for scoring Capability. The main question is: how capable is the team of delivering the outcomes or achieving significant results for this project? You need to consider the team composition, the balance of skills, and the relevance of expertise for this project.

It is reasonable to use previous publications and collaborations as supporting evidence for capability, but they are not sufficient or decisive. It is crucial to avoid being biased by the impact factor or prestige of the publications. It is really important that peer reviewers do not fall into the habit of being persuaded by the fact that someone has had a couple of Nature papers in the last 12 months.

Associate Professor Nadeem Kaakoush: As a practical suggestion, I would advise you to alert NHMRC if you notice any comments that unfairly focus on the track record of an applicant based on their career stage.

Dr Nicole Lawrence: I also want to emphasise that a strong track record is not the only criterion for success in this scheme. An application has to demonstrate a high-quality research proposal, with clear and feasible aims, a well-justified budget, and a sound methodology. These aspects of grant writing require skills and experience that may not be fully developed in early career researchers, even if they have an excellent PhD record. I think this is one of the factors that contributes to the lower success rate of younger applicants compared to more senior researchers in this round.

Q11: Transformative research

Question: Transformative research is a major pillar of the Ideas Grants Scheme. Can we perhaps elaborate on the underlying principles and talk about what constitutes transformative research?

Dr Julie Glover: Transformative research, or innovative research, is not the same in every discipline. Sometimes, innovation means using a technique from one area in another area, which can be transformative and very innovative. It is not always about creating a new gadget or a widget, but rather a new approach.

Professor Stuart Berzins: I think transformative research is about changing the way people think or apply knowledge in a certain field. For example, if I have to choose between two applications, one that confirms what is already suspected but not formally confirmed and another that challenges the existing paradigm, I would prefer the latter. I would look for the potential impact of the research beyond the immediate outcomes.

Professor Louisa Jorm: I agree that there are going to be differences according to the discipline. But the applicants themselves have to clearly articulate how their research is different and how it will change the practice or knowledge in their field. They have to persuade you as the peer reviewers, rather than expecting you to benchmark their research against the field.

Q12: Significance section

Question: I find the Significance section the most difficult to score, as most applications rate a 5 or a 6. How do we differentiate applications in this section?

Professor Stuart Berzins: The specifics of the Significance section may vary depending on the application, but the key is to have a consistent and fair assessment method for all applications. You should use the same criteria for the first and the last application you review. As Nicole said earlier, it may take some time to calibrate your scoring system and decide whether an application deserves a 5 or a 6. The literal meaning of the words ‘very good’, ‘good’, or ‘exceptional’ is less important than the relative ranking of the applications. The best advice I can give is to have a clear and transparent assessment approach that applies to different applications equally. This will ensure fairness to all applicants and avoid bias.

Dr Nicole Lawrence: The applicants need to explain clearly how their application is innovative and significant, especially if their research areas are not familiar to you. You should not have to do extra work to understand their claims. If they fail to communicate their value proposition, then you cannot give them a high score. You can only score them based on what you can understand from their application.

Q13: Comparison to Grant Review Panels

Question: I have noticed that some assessors provide very brief or superficial comments on the applications they review. I wonder how this affects the fairness and quality of the peer review process, especially without a panel discussion where assessors have to present and justify their views to their colleagues. How do the outcomes of the current scheme compare to the previous ones where panels were involved? Do the five reviewers assigned to each application have enough expertise and motivation to provide a balanced and rigorous assessment?

Dr Julie Glover: In 2019, we conducted an analysis on the role of panel discussions in the peer review process. We found that the panel members tended to follow the scores of the two primary reviewers who presented their opinions on the application. Often, the rest of the panel did not have the relevant expertise to evaluate the application or to act as primary or secondary reviewers.

Another finding was that the panel discussions usually resulted in lower scores for the applications, rather than higher ones. We have been moving towards a more application-centric approach, where we match the peer reviewers to the specific applications based on their suitability and expertise. This has also helped to reduce the workload and burden on the peer reviewers.

We acknowledge that there is variation in the quality of the comments provided by the peer reviewers, but we also recognise that it is a challenging task. That is why we are trying to offer more guidance and support for the peer reviewers. We know that peer review is not only done for us, but also for journals, institutions, and other organisations. We are open to any suggestions or collaborations on how to improve the quality and commitment of peer review.

One thing to keep in mind when writing your comments as a peer reviewer is that you do not know the outcome of the application. Even if you score it highly, you do not know how the other four reviewers will score it. Our funding cut-off is very competitive, and some very good applications with positive comments may not get funded because they are just below the threshold.

[Additional commentary by NHMRC: The work referred to here was published by NHMRC as a CEO communique]

Associate Professor Nadeem Kaakoush: I was part of the 2019 Ideas Grant Review Panel and, from personal experience, I can say that I do not recall any application actually increasing in score.

Q14: Budgets

Question: You are saying that we can recommend only reductions to items in budgets, and by implication that we can't or should not recommend increases. Is this correct?

Dr Dev Sinha: If you think the budget is insufficient to complete the work, it is important to point it out. We cannot create a budget that is higher than what was requested, so it then becomes a question of the feasibility of completing the project within the proposed budget. However, keep in mind that applicants may not disclose all their sources of funding, nor are they required to, but they have to justify every item in their budget. That is the balance you have to strike as a reviewer, and if you find that there is no explanation for a low budget, please flag it in your budget comments.

Professor Stuart Berzins: The question of whether to recommend a budget increase is related to the quality of the application itself. If the applicants have not planned their budget realistically and have not accounted for the necessary resources and salaries for the work, it shows a lack of understanding of the project requirements and a weakness in the grant proposal. As a reviewer, I do not scrutinise every budget item individually, but I look for obvious discrepancies or inconsistencies. For example, if the applicants are requesting an unreasonable amount of money for a certain item that I know the market price of, I will comment on that. I do not go through the budget line by line and compare prices; I focus on the most significant aspects of the budget.

Dr Nicole Lawrence: I look for the rationale behind the budget items. I do not have the expertise to verify the accuracy of the costs across different applications. I want to see how they have broken down the expenses, and how they have explained the relevance of each item to the project. I apply the same criteria to the consumables and the salaries. Are they providing a clear justification for the amount they are requesting? I do not attempt to calculate the numbers myself, I just assess the quality of the justification they have provided.

Q15: Interdependent aims

Question: How do you evaluate applications that have interdependent aims? I think this is a sign of a weak proposal and an inexperienced team, because if one aim fails, the whole project fails. Is this consistent with the score descriptors?

Professor Louisa Jorm: I do not think that is a fair application of the score descriptors, because what we want to see is how the applicants have identified and managed the potential risks. It is not realistic to assume that interdependent aims will always fail or succeed together. You need to look at the specific nature and likelihood of the risk, and at how the applicants have planned to address it. We are looking for projects that are ambitious, innovative, and risky, so there will always be some uncertainty. The question is: are the applicants honest about it, and do they have a mitigation strategy?

Q16: Conflicts of interests – Associate Investigators

Question: I am collaborating with some AIs and have not yet published with them. I am concerned that reviewing applications where they are involved may introduce bias due to conflict of interest. How can NHMRC reassure me that my doubts about conflict of interest will not affect my review?

Dr Dev Sinha: If you have any doubt about whether you have a conflict of interest, please seek a ruling. We take our rulings very seriously and apply the policy in the context of each situation, so please provide as much information as possible.

Dr Julie Glover: Different people have varying levels of comfort with assessing applications that involve their collaborators. Our main goal is to ensure a fair and unbiased peer review process. If you are uncertain about your conflict-of-interest status, please let us know and provide relevant information. We will evaluate each case according to our policy and give you feedback if needed. However, an Associate Investigator does not receive any salary or funding from the grant. We have to balance our CoI policy so that we do not exclude too many reviewers who have collaborative ties, which we encourage. We regularly review our guidelines on this matter and try to set them at a reasonable level. But it also depends on your personal judgement about whether you can review an application impartially, and you need to inform us through the appropriate channel.

Questions unanswered during the Q&A session due to time constraints

Q17: Conflict of Interest and Suitability on previously assessed applications

Question: Many of the applications that I have received for Conflict of Interest and Suitability assessment are applications that I assessed last year. What is your view of this?

Answer: As the matching between peer reviewers and applications improves, this may become a more common occurrence. You will need to disregard what was previously submitted and focus on what the applicants have put forward this year, scoring against the assessment criteria.

Q18: Mandatory training

Question: Do you intend to make training mandatory for peer reviewers to ensure consistency?

Answer: Not at this stage, but it is something we may look into for future rounds as we review our policies and processes. We welcome any feedback you may have on our 2024 training process in the peer reviewer survey.

Q19: Feedback on comments

Question: Will we be able to see our own comments when we are reviewing the other reviewers' comments, and will we get feedback on our comments, as it would be a great learning opportunity?

Answer: Not at this stage, but it is something we may also look into implementing in future rounds.
