Enhance Program Evaluation Capacity


Listed below are the details of the projects funded under NOT-GM-20-020, NOT-GM-21-024, and NOT-GM-22-013:


Title: Administrative Supplement to Enhance IRACDA Program Evaluation Capacity
Principal Investigators:
Claire Moore and Mitch McVey, Tufts University Boston
One of the goals of the IRACDA program is to “provide a resource to motivate the next generation of scientists at partner institutions and to promote linkages between research intensive institutions (RIIs) and partner institutions that can lead to further collaborations in research and teaching.” To date, IRACDA leaders have collected a range of data serving as evidence that this goal is being accomplished, most of which have focused on the number of courses taught by trainees and the number of students from the partner institutions who conducted mentored research at the RIIs. Better classifying and documenting the broad range of impacts being made at the partner institutions is necessary for the sustainability of the program. In addition, better equipping IRACDA stakeholders at both the RII and partner institutions to collect a wider range of data consistently and accurately will enable the broader family of programs to describe the unique nature of these impacts over time. Consistent and accurate descriptive-level data are also essential for conducting any deeper studies of partner impacts that are higher in the evidence hierarchy, e.g., case-control studies, cohort studies, etc. We envision these developments having large impacts not only on our IRACDA and partner institutions, but also on the institutions associated with other IRACDAs and on other training programs, such as the Bridges to the Doctorate and Bridges to the Baccalaureate programs, that likely face similar challenges in documenting the nature of the partnerships between participating institutions.

Title: Cal State LA Enhance Evaluation Capacity
Principal Investigators:
Linda M Tunstad, Kim-Lien Thi Dinh, Errol V Mathias, Khuloud Sweimeh, California State University Los Angeles
We propose to improve the evaluation infrastructure of the Cal State LA MORE Programs. This project will create a robust database management system for all the MORE programs at Cal State LA by consolidating and validating the data in previous and current trainee files and tracking academic and career progress. The University will provide the Access relational database software, which will be modified with guidance from Institutional Effectiveness to create a common database to store, manage, change, search, and extract student information from the multiple MORE programs. A catalog of validated evaluation instruments will be created and stored within the University Qualtrics system, available to researchers. We will work with various institutional stakeholders to create a campus culture of regularly documenting and tracking student data from MORE-affiliated programs in the University (through the Office of Institutional Effectiveness and the GET system) and Alumni Association databases. These enriched databases will be much more useful to the training programs and to pertinent parts of the university.

Title: Predoctoral Training Program in Biological Data Science at Brown University
Principal Investigators:
Sohini Ramachandran, Bjorn Sandstede, Eliezer Upfal, Zhijin Jean Wu, Brown University
The objective of the Predoctoral Training Program in Biological Data Science at Brown University is to turn “I-shaped” predoctoral students — with strength in one discipline — into “pi-shaped” Biological Data Scientists with two core strengths: (1) generating and analyzing biological data, and (2) developing theoretical models for and testable hypotheses regarding biological processes. The current evaluation plan for this training program is centered on metrics related to trainees’ success, such as the number of publications, presentations, and grant submissions; these metrics are readily quantifiable and wholly appropriate given the resources available. At annual meetings between program leaders, faculty advisors, department chairs, and preceptors, each trainee’s course performance and development are discussed. With the augmented institutional capacity enabled by this award, we will expand our evaluative efforts in three substantial ways: 1) evaluation of preceptors and inclusion of the preceptor perspective, 2) development of a more in-depth, psychometrically sound inventory for trainees and preceptors, and 3) utilization of qualitative methods to complement quantitative methods so that context-specific and/or unexpected issues can be tracked. These three evaluative facets will allow the experiences and interpretations of the trainees and preceptors to be explored more deeply.

Title: UNC-Duke Collaborative Clinical Pharmacology Postdoctoral Training Program-Evaluation Supplement
Principal Investigators:
Kim L R Brouwer, Michael Cohen-Wolkowiez, Paul B Watkins, Univ of North Carolina Chapel Hill
The University of North Carolina at Chapel Hill (UNC) in collaboration with Duke University has successful NIH-funded T32 training programs in adult and pediatric clinical pharmacology that build on our exceptional research environment with world-class programs in drug discovery, drug development, pharmacogenomics, and drug safety. There are 9 additional NIGMS-funded pre- and postdoctoral training programs at UNC that span multiple units (Schools of Pharmacy, Medicine, Public Health, Dentistry, and the College of Arts & Sciences). The goal of this supplement was to facilitate the development of an evidence-based assessment process and a comprehensive evaluation platform to strengthen the effectiveness of our program evaluation tools. Through engagement with various stakeholders (e.g., employers, trainees), core skills that trainees need and important characteristics of successful T32 programs were identified. Related program evaluation resources (e.g., evaluation basics, logic models, focus group scripts, core and modular surveys) were developed to help each program maximize evaluation efficacy. These easily accessible materials (https://tarheels.live/t32programevaluation/) can be used for systematic, evidence-based assessment to help ensure that clinical pharmacology and biomedical training programs remain contemporary and aligned with stakeholder needs.

Title: Maximizing Opportunities for Research Excellence - Evaluation Training Supplement
Principal Investigators:
Elizabeth S Watkins, Jason K Sello, University of California, San Francisco
The diversification of American scientific leadership is essential to the enrichment of both the talent engaged in and the quality of scientific research produced by the biomedical enterprise in the United States. This Administrative Supplement to Enhance Program Evaluation Capacity will develop an evaluation plan that a) determines how URM students' educational experiences and scientific and organizational identity formation contribute positively to their achievement, commitment, and career choice and b) provides guidance for institutions in designing and implementing graduate education and support programs. This plan will be achieved in two stages. First, we will design a set of instruments that measures a) students' positive and supportive experiences within their core education and co-curricular experiences, b) their individual scientific identity, and c) their organizational identity alignment. Second, we will implement the measures longitudinally and prospectively to establish the relationship between these measures and students' achievement, commitment/degree completion, and career choice. These interventions are grounded in a positive, “anti-deficit” framework that prompts research and evaluation questions about how URM learners persist and successfully navigate their educational journey. Informed by theories from sociology, psychology, gender studies, and education, the anti-deficit framework will be used to develop this evaluation plan that measures graduate programs' ability to create a supportive environment for URM students and to enhance those students' interests in academic careers.

Title: Transdisciplinary Predoctoral Training in Biomedical Science and Engineering
Principal Investigators:
Clarissa A Henry, Lucy Liaw, University of Maine Orono
The current T32 award supports an innovative, evidence-based training program that includes a co-mentorship framework, transdisciplinary research opportunities, an experiential professional rotation in an industry or other non-academic setting, and multiple supports to increase the diversity of trainees. The proposed comprehensive evaluation will collect quantitative and qualitative data from current students, graduates, faculty, and partners. Information will be collected through multiple methods, including surveys, semi-structured interviews, and focus groups. Data will address a broad range of topics, including (1) the quality and breadth of the training students receive, (2) the scholarly opportunities provided to trainees, and (3) the supports and barriers trainees experience, including faculty engagement and overall program operations. By also incorporating a sustainability component for continued annual evaluation following the completion of this supplement, this project will help inform the ongoing development of this and other graduate training efforts.

Title: University of Chicago Initiative for Maximizing Student Development (IMSD)
Principal Investigator:
Nancy B Schwartz, University of Chicago
The University of Chicago (UChicago) Initiative for Maximizing Student Development (IMSD) is designed to provide research training and educational opportunities for newly admitted PhD graduate students from groups underrepresented (UR) in the biomedical and behavioral sciences and for non-UR students who would benefit from IMSD opportunities. The Program is tiered to focus intervention and developmental activities to match the background preparation, research experience, and learning styles of individual students as well as their stage of progression in graduate school. While the current IMSD has leveraged the most relevant existing evaluation resources and tools to meet the goals of assessing the effectiveness of training our IMSD Scholars, recent literature has emphasized the importance of considering psychosocial factors (Williams et al., CBE Life Sci 2017) and improving the environment in which the students are trained and learning (Puritty et al., Science, 2017); the latter aspect was brought to our attention by the IMSD students. Thus, in the second cycle of the IMSD, we propose to enhance both our interventions and our evaluation capacity to monitor and assess these additional institutional inputs. Beyond typical surveys and data-gathering activities that focus on easily identified quantitative metrics of trainee-only progress through graduate school, we currently have no comprehensive evaluation capacity for these additional interventions. This proposal is designed to develop the requisite evaluation capacity through: i) products and processes to implement expanded assessments; ii) training of biosciences leaders to sustain and refine this endeavor; and iii) modification and adaptation of these products and processes for other T32 programs, thus institutionalizing these good practices.

Title: Training Program in Cellular and Molecular Biology
Principal Investigator:
Heather L True-Krob, Washington University
The objective of the Cellular and Molecular Biology Umbrella Training Program (T32 GM007067) in the Division of Biology and Biomedical Sciences (DBBS) at Washington University is to provide rigorous, interdisciplinary training in cell and molecular biology to a diverse cohort of students, enabling them to pursue careers at the vanguard of scientific research, education, and outreach. Its educational mission aims to ground students in the basic concepts and methodologies of cell and molecular biology and to train them to think critically and to write and speak effectively. In this effort, our guiding philosophy is to extend all successful program elements to as many students as possible in order to maximize the training of all our students and thus their future impact on society. To support the training objectives and continuous program improvement, DBBS aims to enhance capacity for data-driven program evaluation. DBBS collects comprehensive data at each stage of student training to track the progress of students in 12 DBBS interdisciplinary bioscience Ph.D. programs, including those supported by the parent T32 grant and other NIH-funded T32 grants at Washington University in St. Louis. An administrative supplement from NIGMS will allow DBBS to 1) mine our data to perform a current-state baseline analysis of our programs, 2) build technical and operational capacity for sustainable longitudinal program evaluation and continuous improvement, and 3) disseminate our evaluation model and baseline data to the bioscience Ph.D. training community. With a new curriculum launching in the fall, the time is right to perform a current-state baseline analysis of existing program evaluation data and to evaluate the effects of the new training curriculum. The activities proposed will build the technical and operational capacity to evaluate the impacts of the new Ph.D. training curriculum and enable data-driven decision-making for continuous curriculum improvement.

Title: T32 Predoctoral training grant in genetics
Principal Investigator:
Kelly A Dyer, University of Georgia
The mentoring relationship lies at the core of all training. The objective of the Supplement is to improve the assessment of mentoring, specifically in NIGMS training programs, but also in other programs at the University of Georgia. The proposed research will develop mentoring assessments, automate the generation of reports, and provide recommendations for improvements. Programs currently evaluate many aspects of their faculty, incoming students, curriculum, training objectives, and student outcomes, but mentoring per se is not explicitly evaluated. Since the mentoring relationship is critical to the experiences of students, evaluating this relationship is especially important. Assessment of mentoring will allow students and faculty to gain insight into their mentoring performance relative to others, allowing them to modify practices or seek training. In addition, assessment allows general strengths and weaknesses to be identified at the program level and acted upon. To facilitate appropriate responses to assessment reports, the proposed work will also develop recommendations based on the assessment outcomes to be used by individuals and programs to improve their mentoring relationships. The overarching goal is to generate tools that enable mentoring assessment, generate actionable reports, and propose interventions that can be utilized iteratively, leading to long-term, sustained improvement of the mentoring relationships experienced by students in the NIGMS training programs and beyond. Such improvement will lead to better outcomes for students of all backgrounds.

Title: Enhancing biomedical research training program evaluation capacity
Principal Investigator:
Lawrence F Brass, University of Pennsylvania
Evaluating training program efficacy and using that information to improve trainee success has become more important than ever for biomedical research training programs. In this proposal, the 6 NIGMS-funded T32 training programs for PhD and MD/PhD predoctoral students at the University of Pennsylvania, including the MSTP program directed by the P.I., will work together to put into place an improved infrastructure for program evaluation built around short- and long-term metrics for success. Our goals are to: 1) improve our ability to assess the impact of courses, workshops, and skills training programs, 2) develop sustainable tools for tracking short- and long-term outcomes, and 3) identify predictors of trainee success during and after completion of the training program. Reaching these goals will improve our ability to answer critical questions: What has been the impact of each training program? How well do students supported by our NIGMS T32 grants fare, compared with colleagues who were not supported, in achieving sustainable careers consistent with the goals of each training program? In other words, does being in our training programs make a difference?

Title: UCLA NIGMS T32 Program Assessment
Principal Investigator:
Jorge Torres, University of California Los Angeles
UCLA’s Center for the Advancement of Teaching (CAT) supports faculty, administrators, postdoctoral scholars, and graduate students in serving its mission of fostering and championing effective teaching. Within CAT sits a dedicated, centrally located Center for Educational Assessment (CEA) unit that regularly collaborates with faculty and campus partners by drawing on its expertise in pedagogy, educational assessment, and curricular research. CEA has a history of participating in the assessment of various graduate student and postdoctoral programs at UCLA, including the campus-wide Teaching Assistant Training Program, the CIRTL graduate student Teaching-as-Research Program, the Nanosystems Engineering Research Center, and the NIH IRACDA program for postdoctoral scholars. With this experience, CAT is optimally positioned and well-equipped to house a centralized assessment resource dedicated to evaluating NIH-funded training programs, starting with the Cellular and Molecular Biology (CMB) Training Program as a model. UCLA NIH-funded training programs need a common platform for assessment. Over the last decade, UCLA has built extensive infrastructure and developed innovative programs for graduate student and postdoctoral scholar career development training. However, this growth raises a significant need to assess training program goals and outcomes to inform program improvement. At any given time, UCLA has over 40 NIH-funded training grants but lacks a dedicated system and standardized process for training program evaluation. Consequently, programs are currently addressing this need on an ad hoc basis. The goal of this supplement is to develop a centralized assessment resource with the capacity to collect, analyze, and archive participant data and programmatic outcomes applicable to most, if not all, training programs at UCLA. CEA's work will include the development of standard assessment resources and the creation of an online dashboard to share assessment results. The effectiveness of this assessment process will be evaluated by building a base of sample results available on the dashboard, monitoring ongoing activities, and tracking accomplishments and research merit.

Title: Molecular, Cellular, & Developmental Dynamics PhD Program
Principal Investigator:
David L Van Vactor, Harvard Medical School
Molecular, Cellular and Developmental Dynamics (MCD2) is a NIGMS-supported training program at Harvard Medical School (HMS) devoted to the deep mechanistic investigation of fundamental biological phenomena. MCD2 offers students a rigorous intellectual preparation for independent careers in scientific discovery. The program’s breadth teaches trainees to perceive and interrogate a problem with a multi-dimensional approach, integrating insights from a combination of cutting-edge tools. We aim to foster creative and precise thinkers capable of unraveling the most challenging questions in the life sciences. As we begin a process of self-evaluation and optimization of our T32 program over the next two years, we wish to create an extensive set of assessment tools to objectively determine the efficacy and long-term impact of our programming, curriculum, and professional development components. This effort will build the capacity for us to perform annual cycles of performance analysis to support iterative improvements in our program well into the future. In addition to instruments designed to assess our core training objectives and program components, such as our courses in experimental design, quantitative and computational analysis, and scientific communication, we also propose to design instruments to thoroughly assess our Responsible Conduct of Research (RCR) curriculum, thus helping us to continually optimize this important and mandatory training resource. Over the next year, we plan to design and assemble this comprehensive evaluation toolkit with the help of external consultants with expertise in program evaluation, together with students from the Harvard Graduate School of Education; this will bring together expertise in the social sciences, education, and evaluation to complement our own existing strengths and knowledge. Once piloted with the MCD2 cohort, the new instruments will then be tested on all the other NIGMS-funded T32 cohorts at HMS. We believe that all of these programs at HMS would benefit greatly from the adaptation and application of such evaluation tools.

Title: University of Rochester Medical Scientist Training Program
Principal Investigator:
M Kerry O'Banion, University of Rochester
There is a great societal need for physician-scientists who work at the interface of clinical patient care and scientific investigation. The MSTP at the University of Rochester equips future physician-scientists with an in-depth understanding of medicine and a rigorous research experience, allowing them to pursue careers that fill important positions in academic medicine and translate research discoveries to improve health. The purpose of the proposed project is to collaboratively develop an evaluation framework that deepens the understanding of the University of Rochester Medical Scientist Training Program’s successes through different periods of training and at transition points, and that identifies opportunities for continuous improvement, using systematically collected evidence to guide decision-making. To develop a plan that is sustainable and implementable, the project will apply concepts of evaluation capacity building, which includes engaging people in organizations to look at their practices and processes through a purposeful and systematic lens, guided by specific questions, to collect and analyze credible evidence that answers continuous improvement questions. The project aims to reach the following objectives:

  • Complete a written evaluation framework of the UR Medical Scientist Training Program that can be used for both formative and summative evaluation;
  • Enhance Evaluation Capacity Building (ECB) of the internal program team to implement the evaluation plan with support and adequate resources;
  • Curate and develop companion evaluation capacity building activities, tools and templates that are accessible and available to other MD-PhD and NIGMS predoctoral programs.

Development of the evaluation framework will occur through a partnership between the faculty and staff of the University of Rochester’s Medical Scientist Training Program and a program evaluation team from the Warner School of Education and Human Development at the University of Rochester. The process and products developed in this supplement will not only benefit the MSTP but will also be shared with graduate program directors and all NIH-funded graduate training programs at the University of Rochester School of Medicine & Dentistry.

Title: Training in Pharmacological Sciences
Principal Investigator:
Joey V Barnett, Vanderbilt University
The Nashville biomedical research community enjoys a network of NIGMS-funded training programs that includes several T32 grants as well as an Initiative for Maximizing Student Development (IMSD) Program at Vanderbilt University (VU), a Research Initiative for Scientific Enhancement (RISE) Program at Meharry Medical College, a Postbaccalaureate Research Education Program at VU, and Maximizing Access to Research Careers (MARC) Programs at Tennessee State University, VU, and Fisk University. We used this supplement to support collaborative activities across these institutions in order to increase competency in program evaluation and build institutional evaluation capacity. We created a series of three facilitated workshops linked with intervening activities designed to foster skill development and reflection on program structure and goals, responsive to the results of a needs assessment survey. The workshops guided the development and deployment of tools to support assessment and evaluation using a collaborative, team-based structure. The program of study developed supported the following deliverables:

  • Build capacity, by using a train-the-trainer model, to develop a community of expert colleagues who will support programmatic evaluation.
  • Provide an opportunity to re-imagine and re-focus components of graduate training.
  • Develop an understanding of and experience with competency-based assessment.
  • Familiarize participants with evaluation instruments and tools, such as logic models.
  • Create a bank of online resources to support evaluation.
  • Design post-award activities that will support the sustainability of the effort.

We gathered over 60 participants across four institutions. Evaluation results suggest that participants gained new knowledge and confidence in designing and implementing program evaluation. A guest lecture series, open to both program participants and faculty members at all institutions, was aligned with the training workshops and provided in-depth presentations of current practices and theory, including competency-based assessment and evaluation instruments and tools. The Evaluation 101 website was built to support the workshops and serve as an open resource. After the final workshop, all participants were surveyed to gather information regarding interest in and preferences for continued engagement on topics related to graduate training and evaluation. Together, the activities and outcomes supported by this supplement have fostered the creation of an interactive community of educators with the shared goal of improving the assessment and outcomes of biomedical training programs.

Title: Developing Capacity to Evaluate Training Programs via Development of Human, Institutional and Social Capital
Principal Investigator:
Joshua Nathaniel Leonard, Northwestern University
The interdisciplinary Biotechnology Predoctoral Training Program at Northwestern University supports core activities for training a select group of students, nucleates the biotechnology community, and provides many training opportunities. The program addresses the interdisciplinary nature of biotechnology training through course work, research, experiential learning, industrial internships, seminars, and other opportunities that broaden the students' training in and understanding of modern biotechnology. The goal of the proposed supplement project is to create, develop, and test evaluation capabilities for this and all NIGMS T32 predoctoral training programs at Northwestern, with the long-term objective of initiating campus-wide improvement in the evaluation and assessment of graduate training in biomedical research. This goal will be addressed through the following aims: (1) Develop evaluation skills of training grant directors and key personnel. We will create and implement in-person and online training activities, along with online resources, that will teach training program leadership how to create program-specific logic models for program design and evaluation. This training is expected to lead to greater PI confidence about evaluation and more positive attitudes about its value. (2) Develop, test, and disseminate policies, procedures, and standards for training program evaluation. We will expand and systematize existing evaluation infrastructure for NIGMS T32s at Northwestern to support robust and effective evaluation. Creation of this infrastructure will provide low-cost, sustainable support for evaluation data acquisition and storage, unify evaluation across programs, and lower the “activation” barrier for conducting high-quality evaluation. (3) Implement activities for training grant directors and key personnel to share best practices in evaluation. We will develop a collaborative learning community of NIGMS PIs/PDs, key faculty, and administrators, and will develop supportive networks throughout the university administration for training program evaluation. The proposed creation of a community of practice around evaluation will provide motivation and support for training grant directors via peer support and sharing of best practices. These activities will impact not only the 167 trainees and 114 preceptors participating in the four NIGMS training programs but will also create the foundation for institution-wide evaluation policies and practices for training programs.

Title: Chemistry-Biology Interface Training Program
Principal Investigator:
Helen E Blackwell, University of Wisconsin-Madison
The Chemistry-Biology Interface Training (CBIT) Program is a unique graduate training program at the University of Wisconsin–Madison centered on an appreciation of the blending of chemical and biological methods to solve biomedical problems. The mission of the CBIT Program is to train graduate students to address research problems that transcend the traditional boundaries of chemistry and biology using innovative and rigorous approaches, to provide professional development skills and career guidance to maximize the impact of their training, and to develop an inclusive and supportive community of engaged scholars working at the chemistry-biology interface. The outcome of the UW–Madison CBIT program will be a diverse pool of expertly trained scientists who have the broad base of technical and professional skills necessary for them to contribute substantively to the biomedical workforce. With this request for supplemental funds to enhance program evaluation capacity, we will develop assessment instruments to facilitate a transition from our current, largely qualitative program assessments to more rigorous, quantitative assessments of our program components. We will (1) establish a working group of campus NIGMS-funded T32 representatives that will work in coordination with UW–Madison program evaluation experts to both create a quantitative tool for evaluating training program efficacy and implement this tool across all NIGMS-funded T32s, (2) expand assessment to all UW–Madison T32s, and (3) disseminate the results of our effort to campus leadership and NIH. This work will benefit not only CBIT but also other T32 programs at the University of Wisconsin–Madison and beyond.

Title: Training in Biomolecular Pharmacology
Principal Investigator:
David H Farb, Boston University Medical Campus
The Program in Biomolecular Pharmacology at Boston University is designed to train predoctoral students in pharmacological research and discovery, combining the fundamental principles of pharmacology with novel approaches drawn from bioinformatics, biophysics, biochemistry, biomedical engineering, chemistry, genetics, molecular medicine, and neuroscience. Trainees undertake an integrated curriculum that emphasizes translation of rigorous basic science research and acquisition of scientific skills for careers in academia, government, or the private sector. The T32 program has a long and successful track record of placing graduates into scientific positions, the majority of them in industry and pharma. Nevertheless, it is critically important for the program to undergo continuous and longitudinal self-evaluation and assessment. This is particularly important in the current employment climate, where non-traditional career paths present challenges for developing unique training criteria and opportunities. We propose to strengthen our T32 evaluation in terms of improved: 1) evaluation infrastructure; 2) evaluation analysis, methods, and workflows; 3) metrics for student, mentor, and program outcomes; and 4) PI cross-program discussions to explore “Common Metrics” linked to different training strategies, triggering insights about what is working and what innovative ideas are worth testing in our programs to improve outcomes. By enhancing our evaluation capacity, our T32 program will be positioned to implement strategic initiatives that prepare our trainees to become highly productive members of the rapidly evolving biomedical workforce.

Title: Training Program in Quantitative Biology and Physiology
Principal Investigator:
John A White, Boston University (Charles River Campus)
Boston University’s Charles River Campus has three NIGMS T32s in distinct scientific areas (Quantitative Biology and Physiology, Bioinformatics/Computational Biology, Synthetic Biology and Biotechnology). While evaluation and assessment have long been expected of training programs, the BU NIGMS faculty directors recognize that a deeper understanding of evaluation methods, and more effective use of the tools provided by the University, are needed. This supplement catalyzes their commitment to establishing a common framework that will continuously inform student-centered training. This framework will include developing common instruments, shared programming, integration of workforce feedback, and faculty ownership of data-driven decision making and mentoring. We expect that the work of these training programs will establish a foundation of metrics and outcomes that will also support the development of future NIGMS programs across biomedical disciplines on our Charles River Campus.


Title: Training Program in Biochemistry, Cell and Developmental Biology
Principal Investigator:
Lawrence H. Boise, Emory University
We propose to expand our initial competency-based assessment instrument for the Biochemistry, Cell, and Developmental Biology (BCDB) training program to effectively evaluate student outcomes across all NIGMS supported training programs at Emory. To enhance the evaluation capacity of our original instrument, we propose two changes. First, we will convert the assessment tool to one that can be used on a digital platform (e.g., Qualtrics) for more efficient data capture and storage across a larger scale of students. The second change will be to expand the use of the tool to better capture the effectiveness of the first two years of the program in helping students develop competencies required for PhD scientists. Developing this evaluation capacity will not only allow us to better assess the value of the BCDB training experience, but it will also allow us to compare student and program outcomes across the BCDB T32 training program, institutional graduate program, and other NIGMS training programs at the institution. This evaluation approach will allow the NIGMS-supported programs to evaluate the strengths and weaknesses in their current approaches and provide a means to efficiently share approaches that are successful in individual programs across all programs that support the training of PhD scientists at the institution.

Title: Graduate Training in Genetics
Principal Investigator:
Karen Guillemin, University of Oregon
Supplemental funding is requested by the University of Oregon Graduate Training in Genetics T32 program to significantly expand institutional evaluation capacity for all of our research training programs. The primary objective of the parent award is to prepare scientists for productive careers at the forefront of modern genetics, whether in academic, corporate, or governmental contexts. The Genetics T32 program aims to (i) train students to become creative, rigorous, and experimentally skilled scientists with a deep and broad understanding of gene function and heredity; (ii) teach students to communicate science effectively to the lay public, professional colleagues, and students in the classroom; and (iii) prepare students to bring this expertise into the workforce by offering diverse opportunities to develop professional skills. The planned redesign of existing evaluation activities, as well as the creation of new survey instruments to assess career preparation and diversity, equity, and inclusion, will allow us to easily compare outcomes across all of our research training programs and identify opportunities to leverage high-impact activities, address areas of need, and transparently share trainee outcomes within UO and externally to potential applicants.

Title: Development of Research and Writing Skills (DRAWS): A tool for broader assessment to enhance research and research training
Principal Investigator:
Viswanathan Krishnan, California State University Fresno
The overall goal of this supplement is to enhance and develop the DRAWS approach as a tool to broadly understand the research and research training activities across the master's student population and their respective faculty mentors. The specific aims are (a) To enhance and develop an online assessment tool based on the DRAWS that could be readily implemented across the master's student population securely, (b) To deploy an experimental design that would increase the sample size of the student population with controls to measure effectiveness between different sub-groups of students, (c) To develop an interactive dashboard of information using visual analytics, and (d) To make the methods and tools developed available to other training programs. This will be the first interdisciplinary effort among the College of Science and Mathematics (CSM), the School of Education, and the Office of Institutional Effectiveness (OIE) to develop a comprehensive assessment tool.

Title: Improving Data Collection Infrastructure to Enhance Evaluation Capacity of Graduate School
Principal Investigator:
Blake Nicquet, University of Texas Health Science Center
In its current state, the formative evaluation for our PREP and IMSD programs focuses on the academic progression of trainees and on data related to mentorship and the non-cognitive factors that predict success during training. Going forward, we anticipate expanding our evaluation assessments to examine the efficacy of the activities and interventions being applied within the programs over the long term (10 years and beyond). Moreover, the IMSD and PREP programs are currently the only ones that utilize the non-cognitive aspects of the database. With the update of the IMPACT infrastructure, the development of new forms (accessed from mobile devices) and the capacity to extract data from these forms directly into IMPACT, the GSBS would be poised to expand the data mining potential available to training directors as they prepare tables and track data related to the submission and renewals of T32 grants.

Title: Virginia Commonwealth University Postbaccalaureate Research Education Program
Principal Investigator:
Joyce Lloyd, Virginia Commonwealth University
This proposal employs a strength-based collaborative participatory approach to engage training program stakeholders (fellows/research mentors) in enhancing evaluative capacity and involvement in the research mentoring experience assessment (RME) process. The two objectives of this project are to develop a series of four stakeholders’ evaluation capacity training workshops and to involve the stakeholders in the process of generating the RME assessment instrument. Four faculty workshops will be offered: 1. Evaluation Basics, 2. Developing a Logic Model, 3. Evaluation Assessment Development, and 4. Utilization of Evaluation Findings. The four fellows’ workshops are: 1. Evaluation Basics, 2. Program Evaluation and Training Experience, 3. Engaging in Evaluation, and 4. Reflective Assessment. Following the workshops, the two groups will assist in designing instrumentation for RME assessment. The nominal group technique and Delphi procedure will be utilized to develop separate RME mentor and fellow assessments. After validation, the instruments will be used for all CoHD training programs. Project process and outcome evaluations will be conducted and a mixed-method approach will be used to obtain quantitative and qualitative data from various sources. Triangulation will be done to integrate both types of data for deeper insights and interpretations of the overall effect. The developed workshop materials and instrument and results of this ECB training and stakeholder engagement model will be disseminated through websites, newsletters, presentations and publication.

Title: Development of Evaluation Plan, Tools, and Protocols for the Sacramento State RISE Program
Principal Investigator:
Mary McCarthy Hintz, California State University, Sacramento
The Sacramento State RISE Program (RISE) team proposes to work with the Sacramento State Institute for Social Research (ISR) to build an evaluative infrastructure with two primary objectives: (1) to develop measurement tools, processes, and a long-term evaluation strategy that can be sustainably implemented by RISE staff after the end of this grant, and (2) to align and integrate the evaluation activities from other federally-funded programs aimed at UR students enrolled in STEM majors at Sacramento State so the measurement tools, data, and learning can be shared and leveraged across these programs.

Title: Enhancing PREP Evaluation Capacity using a Culturally Responsive, Equitable, and Sustainable Approach
Principal Investigator:
Edward Smith, Virginia Polytechnic Institute and State University
Culturally responsive evaluation (CRE) involves creating relationships of reciprocity with participants and adapting to their specific set of needs in a participatory manner. CRE places culture and the community of focus at the center of the evaluation, supporting community empowerment, and has a goal of social justice. CRE encompasses the experience of the students, the staff, and other stakeholders involved in PREP, creating an environment where beliefs, values, and behaviors can be socially adapted within a group. Maximizing student development while implementing available tools and resources has been one of the key factors in the continued success of PREP. The proposed app will allow the PREP program to account for more student reflection and continuous feedback loops between the students and their peers, and between the students and the project team. The app will aid in moving the existing evaluation beyond a snapshot once or twice a year, and position the program to address pressing needs, remove issues of third-party access and sharing that exist with the current use of Poll Everywhere, and center the margins of URM populations.

Title: Training in Biotechnology: Emphasis in Protein Chemistry
Principal Investigator:
John W Peters, Washington State University
The Washington State University (WSU) Protein Biotechnology Training Program proposes to synergize existing evaluation activity by developing new evaluation tools to measure (a) student-centered, (b) formative, and (c) quantitative dimensions of evaluation. This expanded mix of evaluations, including the use of the Net Promoter Score, will better direct specified program improvement initiatives. Grant funds would create additional in-house evaluation expertise and tools by providing access to research-proven platforms and evaluation consultants. Tools with proven effectiveness will be disseminated to other WSU entities and NIGMS programs. Institutional support exists to sustain the expertise and capacity created with grant funding.
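For readers unfamiliar with it, the Net Promoter Score mentioned above reduces 0–10 "how likely are you to recommend" survey responses to a single figure: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). The sketch below is for illustration only and is not part of the WSU proposal:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10, detractors 0-6, passives 7-8. The result is the
    percentage of promoters minus the percentage of detractors, so it
    ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)


# Example: two promoters (10, 9), two passives (8, 7), two detractors (6, 3)
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # -> 0.0
```

Passives (7–8) count toward the denominator but neither add to nor subtract from the score, which is why they appear only implicitly above.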

Title: Molecular Biophysics Training Program
Principal Investigator:
Ning Zheng, University of Washington
Based on discussions among the relevant T32 program directors, the MBTP Co-Directors have identified a major gap in program evaluation in the local community and have taken the lead in coordinating the activities outlined below, which will enhance evaluation capacity. By providing effective evaluation guidelines, these activities will help T32 directors adjust their ongoing evaluation planning and implementation strategies. By offering evaluation templates and adaptable evaluation modules, the proposed activities will help expand and standardize the evaluation toolbox available to each program. These tools can be adopted in their entirety, or incorporated as modules into existing evaluation methods by individual training programs. Last but not least, the proposed activities include a sample evaluation of a particular training program, which will set an example for all training programs going forward. A comparison between the original program evaluation plan and the sample evaluation outcomes will highlight differences and gaps that might be relevant to all other T32 programs. Overall, these activities are aimed at both leveraging the expertise of an internal educational assessment office on the UW campus and producing sustainable improvement of program evaluation capacity for all predoctoral T32 training programs across the institution.


Title: Expanding program evaluation capacity and enhancing training programs through alumni perspectives
Principal Investigators:
Steven Russell Lentz and Pamela Geyer, University of Iowa
The mission of the Iowa MSTP is to promote the development of inquisitive, creative and rigorous scientists, whose leadership and discoveries advance the science of medicine and its translation into clinical practice. At Iowa, the MSTP has led innovation in T32 training that includes building a unique culture that weaves program evaluation into program design. These efforts have benefitted from the recent recruitment of a dedicated faculty program evaluator (Dr. Hoffmann) who has developed methods to assess effectiveness of training, including annual surveys, targeted interviews, and focus groups that assess special topics or cohort experiences within the MSTP. Our current program evaluation model leans heavily on insights from current students, staff and faculty, and is well-suited to evaluate the internal workings of the program. We now propose to develop a platform to explore what happens after the MSTP by integrating a different population into our program evaluation strategy – our program alumni. This evaluation platform will be piloted with our MSTP alumni with the expectation that it will expand our institutional evaluation capacity not only for the MSTP but also for other T32-funded training programs on campus.

Title: Developing Integrated Evaluation Capacity for DSU RISE Programs
Principal Investigator:
Karl Meletti, Delaware State University
The proposed evaluation integration process will include three phases. Phase 1 will be a four-month planning and development process in which an integrated mixed methods evaluation process and tools will be created. The INBRE Director of Evaluation, the ISTEP graduate student, and the ISTEP Project Director will share evaluation tools and processes and develop an integrated evaluation plan. Phase 2 is a four-month period of piloting the tools and processes. During this phase the ISTEP Project Director will train the graduate student on piloting procedures. The evaluation tools and processes will be reviewed by U-RISE staff and mentors, and their feedback and suggestions will guide finalization. The data collected during the piloting phase will be used to evaluate the current U-RISE program. The ISTEP Project Director and graduate student will work with the U-RISE program staff to create a program improvement plan based on the evaluation findings, establishing an actionable feedback process. The final phase, Phase 3, is a four-month process of revising and finalizing tools and processes and developing procedural manuals. Processes to seamlessly integrate the U-RISE evaluation and reporting requirements will be described and mechanisms to sustain the evaluation processes will be established. An internal data sharing process will be established so that the robust data produced through the ISTEP process continues to be shared at the faculty, department, College, and institutional levels. During Phase 3 the ISTEP process will be assessed to determine effectiveness. A summative evaluation of ISTEP goals will be conducted and will include evidence to support the assessment of each goal.

Title: Expanding Biomedical Science Training Program Evaluation Capacity through Interactive Modules Supplement to Initiative for Maximizing Student Development (IMSD) Program
Principal Investigator:
Fred Pereira, Baylor College of Medicine
To build evaluation capacity among training program personnel engaged in evaluation, we propose leveraging existing resources (e.g., eLearning course software) and expertise at BCM to create five interactive, online modules, suitable for use as an asynchronous, “build your own” short course or for blended instruction. Module one will introduce educational program evaluation, providing foundational knowledge related to planning; alignment of outcomes, actions, and assessments; culturally responsive practices; use of findings for improvement; and generation of evidence to support dissemination and replication. Four complementary, topic-specific modules will then allow users to develop knowledge most closely related to their program’s specific evaluation needs (e.g., small sample sizes). After developing module outlines with an advisory committee of expert evaluators and training program personnel, the supplement request project director will work with a BCM instructional designer to build the modules in the eLearning course software, Articulate Rise 360, for which we have an institutional license. Modules will be piloted with 24 training program volunteers and revised based on their survey and focus group feedback. Resources from BCM programs will be gathered into an ongoing collection of evaluation materials. Modules and resources will be available free-of-charge internally and on the external BCM site.