Reach Your Goals: Evaluation (Part 1)
Okay, you have what you think is a great program. You’re excited, and so are your stakeholders. You set objectives (Five Program Pitfalls to Avoid), and they’re SMART ones (Set Program Objectives the Smart Way). But your objectives won’t mean anything if you have no way to tell whether you’ve met them. That’s where evaluation comes in.
Evaluation can seem scary. It brings back memories of sitting for high-school exams in a room full of sweaty, nervous teenagers. When you evaluate a program, you open yourself up to the possibility that you might not be as effective as you imagine. Relax. This evaluation is going to be fun. After all, the focus here is on what you’re doing right, not what you’re doing wrong.
Over the next two posts, we’ll look at four different methods of evaluation. It’s up to you to decide what will work best for your particular program.
Depending on your program’s goals, tracking your audience or participants can be a useful evaluation tool. At its most basic, audience tracking is simply recording the number of participants in each program. If your numbers increase over time, that’s a good sign that you’re doing something right. You can also track the number of participants from certain communities by having participants fill out a short demographic questionnaire when they register. (Just be sure that you describe the different demographics as respectfully and inclusively as possible when writing the questionnaire.) Or you could look at program participants as a percentage of a larger audience — for example, what percentage of people staying at the campground came to the fireside Ranger Talk?
Audience tracking is usually a pretty easy method. Count participants, record the number in a spreadsheet, and you’re done. It doesn’t require any specialized knowledge on your end. However, audience tracking should rarely (if ever) be the only evaluation method you use. Increased numbers tell you that your program is doing something right, and they look impressive on a grant application — but they don’t tell you what is going right. For that, you have to dig a little deeper.
Participant surveys are everywhere now! Shop at certain stores or eat at certain restaurants, and you get a survey code on your receipt. Go to a website, fill out its survey about your customer experience, and be entered to win a gift card.
You can also use surveys in your program to check on your participants’ experience. There are a few ways you can incorporate surveys into your program:
- Quick verbal survey of participants at the end of the program: for example, asking a class of 2nd graders to point to the program station they enjoyed the most
- Paper survey, handed out at the end of the program with the expectation that people will return it as they leave
- Online survey on a tablet or computer at your program site, with the expectation that people will fill it out before leaving
- Online survey emailed out to participants after the program
Obviously, these different methods require different collection tactics on your end. If you’re using a verbal survey, you’ll have to remember and record the responses. If you use a paper survey, someone will have to manually input the data. Online surveys will compile data for you, but your participants might not all be comfortable with that or have access to a computer at home.
No matter what, don’t expect a 100% return rate. Even if you ask people to fill out a paper survey before they leave, not everyone will (though this is the most reliable method). People are even less likely to complete a survey once they’ve gotten home; return rates decrease dramatically over time.
You can also send out surveys a while after your program to determine if it had any long-term impact. I do this with volunteer trainings — about two months after the training, I ask volunteers to identify (via online survey) strategies from the training that they’ve implemented. This helps me to check on the practical usefulness of the training I conducted.
Depending on your program, you may choose to survey different stakeholders. For school programs, we ask the class’s teacher to complete a survey that asks about the program’s relevance to school curriculum and their overall experience of visiting the museum. For programs in development, I ask all participants (even 4th graders) to complete a quick survey about what they learned. You will need to decide who is best equipped to give you the type of information you need. Keep in mind, though: surveys only measure the participant’s perception of the program, which can differ from your perception or goals.
These two evaluation methods require the least input from the program facilitator during the program itself. In the next post, we’ll look at two methods that demand more of the facilitator but delve deeper into the participants’ experience.