Measuring Impact: What Funders Actually Want to See in Your Evaluation Plan
The evaluation section of a grant proposal causes more anxiety than any other. Nonprofits worry they need expensive external evaluators, randomized controlled trials, or PhD-level research designs. The reality is much more accessible — most funders simply want to see that you have thought carefully about how you will know whether your program is working.
What Funders Actually Expect
For most foundation and government grants under $500,000, funders expect:
- a clear statement of what outcomes you intend to achieve
- specific indicators you will track to measure progress
- a description of how and when you will collect data
- a plan for using data to improve your program
- honest acknowledgment of limitations in your approach
They do not expect perfection. They expect intentionality and honesty.
Outputs vs. Outcomes vs. Impact
Understanding these distinctions is critical. Outputs are what you produce — the direct, countable products of your activities (workshops delivered, people served, materials distributed). Outcomes are the changes that result — shifts in knowledge, behavior, condition, or status among your participants. Impact is the long-term, systemic change your work contributes to, often in combination with other factors.
Most funders want to see outcome-level measurement. They want to know not just that you served 200 people, but that those 200 people experienced meaningful change as a result.
Designing Practical Measurement
Start with your logic model (if you have one) and identify the most important outcomes at each stage. Then ask: what would we observe if this outcome were occurring? That observable change becomes your indicator.
For each indicator, determine the simplest reliable way to measure it. Pre- and post-surveys are the most common tool for knowledge and attitude changes. Administrative data (attendance records, completion rates, employment records) works well for behavioral outcomes. Interviews and focus groups capture nuanced qualitative changes that surveys miss.
The Evaluation Budget
A reasonable evaluation budget is 5-10% of total project costs for most grants. On a $100,000 project, for example, that means setting aside $5,000 to $10,000. This covers survey design and administration, data entry and analysis, and reporting. For grants over $500,000 or those requiring external evaluation, budget 10-15% and identify a qualified evaluator in your proposal.
For smaller grants, internal evaluation conducted by program staff is perfectly acceptable. Just be transparent about this approach and acknowledge the limitations of self-evaluation (primarily, potential bias).
Common Mistakes in Evaluation Plans
Avoid promising to measure things you cannot actually track. Do not claim you will demonstrate "long-term impact" in a one-year grant period — that is not credible. Do not list dozens of indicators; focus on three to five that truly matter. And never present evaluation as an afterthought tacked onto the end of your proposal — weave it throughout your project description to show that learning and improvement are central to your approach.
Using Data for Continuous Improvement
The best evaluation plans are not just about proving impact to funders — they are about learning what works so you can do better. Build feedback loops into your program design. Review data quarterly. Make mid-course corrections when the data suggests your approach is not working. Then report these adaptations to funders as evidence of organizational learning and responsiveness.
Funders are increasingly interested in "developmental evaluation" — the practice of using real-time data to adapt programs in complex, changing environments. If your work operates in such an environment, framing your evaluation this way can be a strength rather than a weakness.