Why Impact Measurement is about more than just results

Results! Where are the Results?!

I’ve heard this question more times than I can count. It’s practically a cliché at this point, and more than a little stress-inducing. But I can’t hold it against the executives, board members, and leaders who have asked it over the years, because, truthfully, impact measurement is widely misunderstood.

When we talk about impact measurement (or measurement and evaluation), most people think about looking backward: summarizing what happened, tallying up numbers. They think of results, specifically the end results.

But here’s the thing: impact measurement isn’t just about showing what happened at the end. It’s also about learning, adapting, and improving along the way. I like to think of it as the R&D of the social impact world: a tool we use to test, refine, and ensure our initiatives are making the difference we intend.

Obviously, we can’t just dream up a social impact initiative, run with it at full speed, and expect to achieve everything we ever hoped for. We wouldn’t launch a new company product without research, testing, and iteration, so why do we expect to launch social impact initiatives without the same rigor?

Here’s how different types of evaluation can help throughout the social impact life cycle:

 

Developmental Evaluation

Developmental Evaluation (DE) helps you figure out where to even begin when developing a program or strategy. It helps you navigate uncertainty and complexity, clarifying what you should be considering and which direction a strategy should take.

If you have a $500,000 grant budget, how do you know where that money is best spent? Is it better to invest in early education or in mental health? Is it better to make short-term grants or offer multi-year funding? If you want to design a new employee engagement program, how do you know what employees actually want? Developmental Evaluation helps you make informed decisions about which new strategies to take on; it helps social impact leaders determine what they should be doing in the first place.

Formative Evaluation

You’ve just started a new program or initiative and are learning as you go. You have an idea of where you are headed, but the details are still coming together. Think of formative evaluation as your mid-course correction tool. It happens while a program or initiative is still being developed or is in its infancy, providing feedback to fine-tune it before full rollout. The goal is to increase efficiency and effectiveness, helping you adjust before it’s too late.

If you decide to run a pilot program or phase for a new initiative, formative evaluation is your tool for understanding what worked and what didn’t.

Process Evaluation

Process evaluation is all about execution. It looks at things like participation, program fidelity, and whether things are going according to plan. If you’re running a program and want to know whether it’s being implemented as intended, process evaluation will give you the insights you need. Often, process evaluation yields what we think of as traditional “KPIs”: the number of attendees, participation rates, and satisfaction and Net Promoter Scores.

Summative Evaluation

Summative evaluation happens at the end of a program and measures whether that specific program met its goals and objectives. While typically used for individual programs, it can also be applied to an entire strategy or portfolio over time.

Impact Evaluation

Also known as Outcome Evaluation, this is the big-picture assessment—the one that looks at long-term, broad effects of an initiative across different stakeholders. This is what most people think of when they ask, “Where are the results?”  It examines whether an initiative is achieving its intended outcomes within a specific timeframe, but also goes beyond, to understand both positive and negative, intended and unintended consequences. It can help guide strategic decisions for future investments.


Shifting the Conversation on Impact Measurement

The real power of impact measurement isn’t just about answering the question “Did it work?” It’s about constantly learning, adapting, and making sure that what we’re doing is actually driving the change we want to see.

So next time someone asks, “Where are the results?”—let’s reframe the question.

What happened? For whom? What matters? Because if we’re only looking at the “results” and ignoring the opportunity for learning and adaptation, we’re missing the real chance to drive meaningful, sustainable impact.
