OPEN Member Spotlight:
How long have you been a member of OPEN, and how have you been involved?
Since 2001. I was the events chair in 2017 and part of 2018, and I coordinated the 2017 OPEN conference. I also serve on the Nonprofit and Foundation TIG leadership council of AEA; its coffee break webinars are an easy and informative professional development opportunity.
[I belong to OPEN for the] ongoing connections, collaboration, and professional development. I’ve met people through OPEN that I collaborate with on projects. As a consultant, it is important to stay connected with evaluation-related conversations. I am on social media and gather information there, but nothing replaces in-person connections with fellow evaluators.
What first got you interested in evaluation? What’s your favorite thing about being an evaluator?
When I was living in San Francisco, we were planning our move to Portland. I posted my skill set and what I was looking for on CNRG (a listserv for the nonprofit community). Someone from Education Northwest reached out and offered to do an informational interview when I came to town. At Education Northwest, I was the Oregon schools evaluator for a five-year project focused on creating high learning communities in schools that were failing. Our evaluation team developed a rubric and then tools to measure the dimensions within that rubric. It was my first time doing site visits, and I loved it!
Helping people see how accessible program evaluation can be, and the lightbulb moment when they realize they can use data to learn (it’s not a burden!). [My evaluation hero is] Michael Quinn Patton. He spoke at an OPEN conference years ago and it completely changed my approach.
What is a highlight from a recent evaluation you’ve done?
I am currently working with Meyer Memorial Trust’s Affordable Housing Initiative on a cross-site program evaluation of their manufactured home repair program. The year 1 program summary report was released in summer 2018, and the final report will be completed in fall 2019.
What is a memorable moment from a favorite evaluation?
I received a distraught call from a client after I submitted their year 1 evaluation report: “The data are wrong. We cannot share this with our funder until it’s corrected.” First, I learned never, ever email a report. Always present it in person first. Second, I learned how to help a client turn what they view as ‘negative findings’ into a learning opportunity.
Overall, the evaluation report was positive. The 16 cohort members felt communications from my client were poor, while my client insisted they were communicating with everyone. This was an opportunity for me to help with overall program design. We created a one-page improvement plan, illustrating how data were used to improve their program communications. We measured it again one year later, and results were positive. And my client was thrilled: because they shared their ‘negative data’ along with an improvement plan, the funder gave them more money for a different project.
This memorable moment changed how I share results, how I discuss challenges and ways to address them, and, overall, how I ensure clients understand that evaluation is not a PR activity.
What are the most exciting areas of growth and learning in evaluation?
Program evaluation has been becoming more accessible than it was in the past. With the growth of data visualization in our reports and plans (thanks, Stephanie Evergreen!), we are becoming more mindful not just of the process of doing the evaluation, but of how results are communicated and used. This means being more thoughtful about how we present the information. I typically do two reports: first, a technical report in the traditional format, which generally remains an internal document; and second, a stakeholder report that my clients can share publicly. The stakeholder report leverages data visualization to ensure the information is clearly communicated.
There’s also more interest in evaluation capacity building, a passion of mine. More and more nonprofit professionals want to learn how they can do program evaluation internally. This presents a great opportunity for our field to look at how we can appropriately train non-evaluators to do basic program evaluation.