Admission time: like many of us in Library Land, I am still figuring out the best ways to measure program outcomes. Marking attendance is relatively easy (although to be fair, sometimes the teens do move around a lot, which can make them tricky to count). It’s a bit harder to identify the changes I want to see as a result of my program, and then accurately measure those changes.
The Programming Guidelines ask us to “Engage in youth-driven, evidence-based evaluation and outcome measurement.” I’m not quite there yet. As I mentioned in my post about our weekly drop-in, we’ve been working with participants in that program to identify priorities, and now we’re moving towards evaluations that will measure whether those priorities are being met. But it’s still a work in progress.
What I have gotten better at is working with community partners to create evaluations for programs. For example, we regularly collaborate with Year Up to build their students’ information and digital literacy skills. Before each workshop, we meet with Year Up staff to make sure that we’ll be teaching the skills they want participants to gain. Collaborating with partners on our evaluations and learning from them about their own evaluation methods has made a huge difference in the quality of our evaluations overall.
At Year Up, I give the students pre- and post-tests to see how much our classes are moving the needle on desired skills and knowledge. We send Year Up staff an early draft of the tests (the same questions for both) and incorporate their feedback into the final evaluation tool. Seems foolproof, right?