Admission time: like many of us in Library Land, I am still figuring out the best ways to measure program outcomes. Marking attendance is relatively easy (although to be fair, sometimes the teens move around a lot, which can make them tricky to count). It’s a bit harder to identify the changes I want to see as a result of my program, and then accurately measure those changes.

The Programming Guidelines ask us to “Engage in youth-driven, evidence-based evaluation and outcome measurement.” I’m not quite there yet. As I mentioned in my post about our weekly drop-in, we’ve been working with participants in that program to identify priorities, and now we’re moving towards evaluations that will measure whether those priorities are being met. But it’s still a work in progress.

What I have gotten better at is working with community partners to create evaluations for programs. For example, we regularly collaborate with Year Up to build their students’ information and digital literacy skills. Before each workshop, we meet with Year Up staff to make sure that we’ll be teaching the skills they want participants to gain. Collaborating with partners on our evaluations and learning from them about their own evaluation methods has made a huge difference in the quality of our evaluations overall.

At Year Up, I give the students pre- and post-tests to see how much our classes are moving the needle on desired skills and knowledge. We send Year Up staff an early draft of the tests (same questions for both) and incorporate their feedback in the final evaluation tool. Seems foolproof, right?

[Graph: Year Up data]

Well, here’s a graph I made from the results of an earlier incarnation of those pre- and post-tests. Can you spot the problem(s)?

Library jargon. Words like “catalog” and “keywords” muddied the results, because (especially before the workshop) students didn’t really know what those words meant. My vague question about whether “all the world’s knowledge” is available via Google wasn’t great either. Students figured that the answer was probably “no”–because of course librarians hate Google. (I don’t, honest!) As I phrased it, the question didn’t measure the movement I saw in their understanding of WHY a lot of the world’s best info isn’t available on Google. (Which as we all know is about money, honey.)

This wasn’t the best evaluation tool, and the next time I created a survey for Year Up I drastically rewrote the questions. But that’s okay! This survey still measured some outcomes–e.g., a huge increase in library resource knowledge among participants. And I learned some pitfalls to avoid next time.

I’m a big fan of giving myself permission to fail, and I take myself up on it a lot–especially when it comes to measuring outcomes. The important thing is to learn and adjust, and get better data next time.

About Hayden Bass

Hayden Bass is a Teen Services Librarian in Seattle. She chairs YALSA's Programming Guidelines Taskforce and is a member of the 2015 Printz Committee.
