Evaluation Resources Frequently Asked Questions

The external links from this webpage provide additional information that is consistent with the intended purpose of this site. NIGMS cannot attest to the accuracy or accessibility of a nonfederal site.


The information below is meant to provide guidance for those thinking about evaluation and is based on questions received from the training community. It is not intended to apply to every type of evaluation need and is not a comprehensive list of all questions related to evaluation. Rather, it can be a starting point for those considering different elements of evaluation. Additional information will be added periodically. Relevant resources are listed after some questions. These are free resources, unless otherwise noted. Please contact us if you have any other questions: https://www.nigms.nih.gov/about/Pages/Staff-Contacts.aspx#twd

Survey Development

Evaluation determines the extent to which a program has achieved its goals or outcomes. It may utilize an assessment as a tool to measure aspects of the evaluation or include research questions that the team wants to answer. There are several types of evaluations, and choosing the best method for your program requires understanding the differences between them; a potential resource is listed below.

Relevant Resource:

Survey length varies depending on the needs of the program or evaluation; however, surveys should not be longer than necessary, to reduce the burden on participants and to increase the likelihood of completion.

In addition to considering the number of questions to include in a survey, consider how much time the survey will take to complete. For example, a survey with many straightforward, binary-type questions may take less time to complete than a survey with fewer multiple-choice or short-answer questions. An overly lengthy survey may lead to lower response rates. The following resources may be of use in constructing survey questions.

Relevant Resource:

Programs must consider institutional review board (IRB) protocols when designing incentives, as some IRB protocols include guidelines on survey incentives.

Response rates may be improved with incentives; however, offering large incentives could be considered coercive. Generally, it is important to acknowledge participants for the time they spend completing the survey while avoiding incentives so large that participants feel obligated to complete the survey for a monetary reward they would not have received otherwise.

Relevant Resource:

Open response, or open-ended, questions on a survey allow respondents to explain their answers, provide examples, and expand on their thinking. Such questions should be clear in wording and in design. It is recommended that survey developers keep each question to a single topic; multi-part questions can complicate and delay analysis.

Survey developers should consider the additional time that will be needed to analyze open responses. Responses can be qualitatively coded either manually or using licensed software such as NVivo and ATLAS.ti. It is helpful to develop a codebook in which codes are linked to the various metrics and constructs of a program. Open-response questions can reveal prominent themes and topics.
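
As a rough illustration of codebook-based coding, the sketch below tags a few invented open responses with themes drawn from a small keyword codebook. The codes, keywords, and responses are hypothetical examples, and real qualitative coding typically relies on iterative codebook development and human judgment rather than simple keyword matching.

```python
from collections import Counter

# Hypothetical codebook: each code is linked to keywords suggestive of that theme.
codebook = {
    "mentorship": ["mentor", "advisor", "guidance"],
    "career_development": ["career", "job", "internship"],
    "program_logistics": ["schedule", "travel", "stipend"],
}

def tag_response(text):
    """Return the set of codes whose keywords appear in an open response."""
    lowered = text.lower()
    return {code for code, keywords in codebook.items()
            if any(kw in lowered for kw in keywords)}

# Invented example responses.
responses = [
    "My mentor gave helpful guidance on choosing a career path.",
    "The travel stipend arrived late, which complicated my schedule.",
]

# Tally how often each theme appears across all responses.
theme_counts = Counter(code for r in responses for code in tag_response(r))
print(theme_counts)
```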

Relevant Resource:

Sample Sizes

Small sample sizes may create some difficulties and limitations in program evaluation efforts. Anticipating a small sample size during the planning phase can help teams choose appropriate statistical analyses, reporting approaches, and strategies for addressing imbalance. Before implementing a survey, consider the minimum sample sizes that will be needed to answer questions of interest, and determine whether the survey population will yield the necessary number of responses.

Relevant Resource:

Although a larger sample size may more accurately reflect the intended population, larger samples may require more data cleaning and additional stratification to identify the criteria of interest.
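
Before fielding a survey, a standard power calculation can give a rough sense of the minimum number of respondents needed to detect a difference between two groups (for example, program participants versus a comparison group). The sketch below uses Python's statsmodels package; the effect size, significance level, and power values are illustrative assumptions, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs -- replace with values appropriate to your program and questions.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium standardized effect (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"Roughly {n_per_group:.0f} respondents needed per group")
```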

Personnel

Hiring an external evaluator may streamline the evaluation effort. Consider the costs of hiring an evaluator and what may be allowable through your grant funds versus university or institutional resources. Evaluation is considered an "allowable cost" for many grants, and funds within the budget may be used to defray the cost of the evaluation. Institutional support is expected to contribute toward the cost of evaluation. Please consult the relevant notice of funding opportunity for guidelines and contact your grants management specialist and/or program director with questions.

To conduct an evaluation, it is important that the evaluator has knowledge of the program goals, allowing for an effective mapping of the evaluation to the goals and topics of importance. The program goals should be clearly defined and measurable to facilitate their use in an evaluation. Program teams are expected to meet with the evaluator(s) to discuss goals and to ensure the evaluator has a thorough understanding of the program. It is imperative that the evaluator/evaluation team protect sensitive information.

An evaluator external to the program may bring greater expertise and less bias to an evaluation; however, it is not always necessary to hire an external evaluator. Program teams may consider working with evaluators internal to their institution who are not involved in implementing their program and are thus "external" to the program while remaining local to the institutional environment.

Programs should also consider costs of working with external evaluators and talk to their grants management specialists and/or program directors with questions.

For ideas on how to locate an evaluator or other evaluation professionals, see the resource below.

Relevant Resource:

Formative evaluations can be done on-site. Conducting a formative evaluation is a way to gain data that can be used, for example, when applying for grants, conducting an institutional self-assessment, or refining aspects of the program. However, working with external evaluators* to assess program outcomes lends independence to the findings and helps avoid potential bias (and the appearance of bias) in evaluations.

*This can be a person who is external to the program being evaluated while still being on-site. For example, staff from a different college at the same institution.

Rubrics and Metrics

NIGMS does not provide rubrics for evaluation because every program is unique in its goals and implementation techniques. However, a wide variety of resources and validated instruments exist that programs can use, if appropriate. Depending on the goals of the program, the measures and rubrics needed to aid the evaluation will vary.

The NRMN measures library provides examples of survey items, scales, and other types of measures that program directors and evaluators may find useful when assessing the efficacy of interventions in the STEM fields. Other general evaluation tools can be found on the Better Evaluation website.

Relevant Resource:

Some examples of how to incorporate psychosocial measures into your evaluation are included in the Diversity Program Consortium's Hallmarks of Success. Additional tools to help measure psychosocial outcomes can be found in the NRMN measures library, and some additional survey questions that can serve as examples are available in the Diversity Program Consortium surveys.

There are several factors to consider when designing a survey, and selecting appropriate survey scales is an important aspect of the process. Different measures will require different scales. A common scale for use in surveys is the Likert scale. When possible, consider using validated measures and their associated scales. Consistent scales throughout a survey may lead to more straightforward analyses. When constructing your own measures, discuss the scales and different options (e.g., including an N/A or neutral option, scales with odd numbers of responses) with your evaluator/evaluation team. The resources below outline several considerations for various point scales.
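
As one small example of how scale choices carry through to analysis, the sketch below converts hypothetical Likert-type responses to numbers and computes a composite score while treating an "N/A" option as missing. The item names, response labels, and 5-point scale are assumptions made only for illustration; requires the pandas package.

```python
import pandas as pd

# Assumed 5-point agreement scale; "N/A" is intentionally left unmapped so it becomes missing.
scale = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

# Invented responses to two hypothetical survey items.
responses = pd.DataFrame({
    "q1_belonging": ["Agree", "Strongly agree", "N/A"],
    "q2_confidence": ["Neutral", "Agree", "Disagree"],
})

# Map labels to numbers; labels not in the scale (e.g., "N/A") become NaN.
numeric = responses.apply(lambda col: col.map(scale))

# Composite score per respondent: mean of the items that were answered.
numeric["composite"] = numeric.mean(axis=1, skipna=True)
print(numeric)
```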

Depending on the program goals, adding demographic categories that best capture the surveyed sample's diversity may be advised. Respondents should be provided the option to select "none of the above" or "prefer not to respond" to demographic questions. It may be necessary to allow respondents to choose multiple responses for demographic questions. Handle gender and sexual orientation questions respectfully, and consider the privacy of respondents and the power dynamics of who is asking for data from whom. Use discretion when determining what information is needed, especially when asking questions that may decrease respondents' feelings of safety or reduce the response rate.

Program Related/NIH

The goals and associated evaluations of each program vary. Each program is unique and creates training and mentoring activities for different populations; thus, direct comparison between programs may be problematic.

Evaluations help program directors and institutions measure progress toward their goals, as well as find potential areas for improvement. In addition, including evaluation data is useful for reporting and when applying for future grant awards.

Data from evaluations may also be used in manuscripts if the program staff are interested in writing about their outcomes, recommended practices, or novel program ideas.

Note: programs must receive the proper clearance through their institutional review board (IRB) for evaluations, particularly for those that may result in the public release of data.

It is important to use common measures when comparing across cohorts and years.

Common measures can prove useful if an evaluator is interested in looking at longitudinal or cross-site analyses. Using the same measures over time can help to measure progress in a more standard way. However, evaluators should not feel the need to maintain common measures if the measures are outdated or no longer relevant (e.g., a question about a seminar that is no longer offered).

Developing common measures takes time and discussion between the evaluation team and the implementation team and should be undertaken before evaluation begins.

There are many factors to consider when planning to share data across institutions. The team should first become familiar with the institutional review board (IRB) policies at their institution and at any partner or collaborating institutions with whom they want to share or compare data. Depending on local regulations and standards, the teams may be able to apply for a blanket IRB agreement among the participating institutions.

An NIH-funded study being conducted at more than one U.S. site involving non-exempt human subjects research may be subject to the NIH Single IRB policy and/or the revised Common Rule (rCR) cooperative research provision (§46.114). For more information, visit: https://grants.nih.gov/policy/humansubjects/single-irb-policy-multi-site-research.htm

If multiple institutions plan to share data, the teams should develop guidelines for proper use, storage, and access to the shared data through a Data Sharing Agreement (or similar document). View the Diversity Program Consortium Data Sharing Agreement for one example.

Relevant Resources

NIGMS does not provide guidelines or rubrics for use in evaluations, both to encourage creativity and independence in implementation and because all programs are unique. Programs should develop their evaluation based on their own program needs and interests. Program goals can be used to guide evaluation questions. Some examples of evaluation standards, effective practices, and measures can be found in the resources on this page; however, these resources are developed by outside sources and are not endorsed by NIGMS.

No. Training grants prepare individuals for careers in the biomedical research workforce by developing and implementing evidence-informed educational practices including didactic, research, mentoring, and career development elements. While funded programs are expected to conduct ongoing program evaluations and assessments to monitor the effectiveness of the training and mentoring activities, training grant funds are not intended to support Human Subjects Research (see additional information on Human Subjects Research from NIH and HHS).

If an investigator wishes to conduct Human Subjects Research involving the trainees supported by the training program as research study participants, they must:

Applicants are encouraged to reach out to the Scientific/Research Contact listed in the funding announcement if there are any questions.

The taxonomy of trainee pathways may vary depending on the population and reporting needs. Evaluators can reference literature in their field to learn taxonomy standards. It is suggested that evaluation teams define, at the outset, the terms that will be used, and that they use clear and consistent taxonomy throughout the survey administration and data analysis processes.

Relevant Resources

  • Evolution of a Functional Taxonomy of Career Pathways for Biomedical Trainees. Mathur A, Brandt P, Chalkley R, Daniel L, Labosky P, Stayart C, Meyers F. Journal of Clinical and Translational Science. 2018 Apr;2(2):63-65. https://doi.org/10.1017/cts.2018.22

Sometimes, external forces necessitate changes to proposed plans, and program teams can face these changes thoughtfully. Changes to implementation and unexpected training outcomes can be described in progress reports and proposal renewals. Conducting a well-planned evaluation and subsequent analysis can help to determine why teams might see outcomes that differ from expectations. If program goals are not being met, well-designed evaluations should help to determine where to make refinements.

NIGMS does not provide guidelines or rubrics for use in evaluations because each program is unique in terms of goals, context, student populations, etc. Teams who are interested in developing an evaluation can reference existing evaluation tools, such as those listed on this site. The examples included on this site are not endorsed by NIGMS; rather, they are provided as references and resources teams can use when developing their evaluations.
