Leslie Goodyear, PhD, brings more than 16 years of experience evaluating educational projects and programs at local, regional, national, and international levels. Goodyear, the President-Elect of the American Evaluation Association and Associate Editor of the American Journal of Evaluation, has conducted evaluations and evaluation capacity building in formal and informal educational settings, including afterschool, youth civic engagement, HIV prevention, youth development, and human services programs, with a recent focus on STEM programs in informal settings. In this blog post, she describes a free online guide designed to promote strong, productive partnerships between Principal Investigators and Evaluators.
Do you work in a nonprofit or government agency? Do you provide services to the public—education, healthcare, counseling, housing, you name it? If so, you’ve encountered evaluation. If you’re a funder, chances are you’ve commissioned evaluations of the projects you’ve funded. If you’re a stakeholder—a board member, a parent, or even a concerned community member—you’ve looked to find out whether your program has been evaluated and what the results were. Face it: evaluation is everywhere these days.
With evaluation here to stay, what can organizations and project leaders do to ensure that it’s done well and that results are used? What do project leaders need to know to work with evaluators? How can all involved stay on track so the right information is collected, the right people are involved, and the right information is presented to the right audiences so that they can make the right decisions?
Evaluation can be a daunting task for program designers and implementers. It’s usually required by the funder, it can seem like the evaluator speaks another language, and the stakes for the program can seem very high. Evaluators face their own challenges: they often work within tight budgets and tight timeframes, yet are expected to deliver both rigor and relevance, along with evidence of programmatic impact. With all this and more in the mix, it’s no surprise that tension can mount and miscommunication can drive animosity and stress.
As the head of evaluation for the ITEST Learning Resource Center and as a program officer at the National Science Foundation, I saw dysfunctional relationships between Principal Investigators (PIs) and their evaluators contribute to missed deadlines, missed opportunities, and frustration on all sides. As an evaluator, I am deeply invested in building evaluators’ capacity to communicate their work and in helping program staff understand the value of evaluation and what it brings to their programs. I was concerned that these dysfunctional relationships would thwart the potential of evaluation to provide vital information for program staff to make decisions and demonstrate the value of their programs.
To help strengthen PI/Evaluator collaborations, I've spent quite a bit of time doing what I called “evaluation marriage counseling” for PI/Evaluator pairs. Through these “counseling sessions,” I learned that evaluation relationships are not so different from any other relationships. Expectations aren’t always made clear, communication often breaks down, and, more than anything else, all relationships need care and feeding.
As a program officer at NSF, I had the chance to help shape and create a new resource that supports project PIs and evaluators in forming strong working relationships. My colleague Rick Bonney of the Cornell Lab of Ornithology and I developed a guide—now available online—to working with evaluators, written by PIs, for PIs. Although the Guide was designed for the Informal Science Education community, the lessons translate to just about any situation in which program staff are working with evaluators.
The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects walks readers through the stages of an evaluation and points out potential stumbling blocks. For each stage, the Guide describes the kinds of conversations PIs/project leaders and their evaluators should have and how to work together to ensure that the evaluation is used to inform decision making and demonstrate program impact. I hope that the Guide will help "save some evaluation marriages" and contribute to creating strong, functional relationships that enable PIs and evaluators to focus on the important work of examining outcomes and strengthening programs. Do you have questions about the Guide? Please feel free to contact me.
- Read a blog post from the AEA365 website titled Leslie Goodyear on The Importance of Asking "Stupid Questions" in Qualitative Evaluation.
- Check out a recent book that Leslie co-edited: Qualitative Inquiry in Evaluation: From Theory to Practice.
- Learn more about our Research and Evaluation work.
- Explore the ITEST Learning Resource Center website, now known as STELAR.