Practical Paths to Better Impact Measurement
- Aug 19
When we talk about impact measurement, it’s easy to think about numbers, charts, and annual reports. But as the recent Making Change Visible: Insights into Effective Impact Measurement webinar revealed, the real power of measurement lies in the stories it tells, the learning it fosters, and the trust it builds between funders and grantees.
Hosted by Philanthropy Ireland, the session was chaired by Hazel, who reminded participants that philanthropy is as much about reflection as it is about giving: “Impact isn’t only about what happens out there in communities, it’s also about how we evolve as funders.”
The heart of the discussion came from a case study presented by Jen Riley, Chief Impact Officer at SmartyGrants, and Kate Randall from the Community Broadcasting Foundation (CBF), an Australian funder supporting over 400 grantees, including First Nations, multicultural, LGBTQIA+ media, and youth broadcasters.
The CBF Journey: From Counting Outputs to Capturing Change
Before 2022, CBF’s reporting leaned heavily on outputs: how many grants were awarded, the size of the audiences reached, the number of stations funded. Occasionally, powerful stories would surface, such as a youth-led broadcast tackling local misinformation or a First Nations radio station amplifying community voices, but there was no consistent way to capture and compare these stories across the portfolio.
The turning point came with a strategic restructure, reducing nine funding programmes to three and introducing a shift toward outcomes and impact reporting. This meant asking new questions: What difference are we making? Who is experiencing that change? How do we know?
Implementing this was no small feat. As Jen explained, “CBF wanted better insight without making reporting feel like a second full-time job for grantees.” Many grantees were volunteer-run, with varying levels of data literacy. The solution was a flexible approach: letting grantees select up to four priority outcomes, align them with CBF’s framework, and keep their own metrics at first.
Workshops on “theory of change” helped identify crucial “middle outcomes”: the tangible shifts between immediate outputs and long-term goals. Advisory committees simplified the language and added well-being outcomes, ensuring the framework reflected the sector’s values. Over 12 months, Kate and her team developed a Monitoring, Evaluation, and Learning (MEL) framework that trimmed 200+ indicators down to fewer than 100 and produced seven core evaluation questions, some focused on unexpected outcomes and broader contributions.
For Those Already Implementing Impact Measurement
If you have systems in place, the CBF story offers practical ways to go deeper:
- Identify “middle outcomes”: Capture the meaningful shifts that happen before big-picture change.
- Align grantee and funder objectives: Let grantees set their own priorities first.
- Iterate continually: Review frameworks regularly, streamlining where possible.
- Blend data types: Mix metrics with narrative case studies for richer insights.
- Share data back: Give grantees access to dashboards or reports for transparency and shared learning.
For Those Considering Impact Measurement for the First Time
Starting from scratch can feel daunting, but the advice from the webinar was clear:
- Start with purpose: Decide why you’re measuring and who will use the findings.
- Begin small: Choose a handful of indicators you can manage well.
- Minimise reporting burden: Keep requirements light, especially for smaller organisations.
- Use neutral language: Avoid terms like “success indicators”; plain English phrasing reduces pressure.
- Build capacity: Offer training, resources, and peer support as part of your grantmaking offering.
MEL for Funders: Looking in the Mirror
The session closed with a reminder that monitoring and evaluation isn’t just for grantees; it’s equally vital for funders. Hazel emphasised: “We should be asking ourselves the same hard questions we ask of others.”
Key practices include:
- Assess applicant experience: Measure clarity of guidelines, perceived fairness, and accessibility.
- Use real-time feedback: Dashboards and check-ins can flag issues early.
- Ask reflective questions: What worked? What didn’t? What was unexpected?
- Foster a culture of curiosity: Treat data as a learning tool, not a compliance checkbox.
By embedding these approaches, grantmakers can strengthen relationships, enhance transparency, and ensure that every euro invested creates measurable, meaningful change.
You can view the recordings here.