Funders’ reporting and evaluation systems are rarely loved: they are more often regarded as compliance or ‘policing’. But apparently not so for the Inter-American Foundation: IAF received better feedback from its grantees on its reporting and evaluation system than any of the ~300 other foundations analysed by the Center for Effective Philanthropy (CEP). In both 2011 and 2014, IAF asked CEP to solicit anonymous feedback from our grantees about various aspects of our work, as CEP has done for many other foundations. CEP’s questions include this:
“How helpful was participating in the foundation’s reporting/evaluation process in strengthening the organization/program funded by the grant?”
Both times, IAF got the highest scores CEP has seen for this question. Furthermore, IAF comes top on this metric by some margin. Respondents can answer from 1 (“not at all helpful”) to 7 (“extremely helpful”), and in 2014, IAF scored 6.00; the funders ranked second and third on this question scored 5.80 and 5.72.
As explained in an earlier post, IAF’s model is unusual in being more engaged with grantees than most funders are: it visits each grantee several times and provides an ‘evaluator’ who visits biannually to help set up the data system and verify the data which grantees submit. IAF also has an unusually broad set of metrics, and lets grantees choose which ones to use.
The process helps
The main finding from Giving Evidence’s research is that the reporting and evaluation system is part of the intervention. Whilst that may sound obvious, it’s not how funders typically conceptualise reporting and evaluation: more often ‘the funder’s intervention’ is the money and perhaps some other support, and the reporting and evaluation are seen as separate from that, existing only to understand its effect.
Yet IAF’s grantees seem to gain some considerable benefits via the reporting and evaluation system. One is data: many of the grantees said that they didn’t have a data management system at all before IAF’s involvement, and that the requirement to report, and help getting a sensible data collection system, meant that they are now able to make data-informed decisions. All the grantees we interviewed (nine) gave at least 7 out of 10 for the value to them of the data they gained.
“Before, we would have gone without collecting these data. We did not think it was important. But today, yes, we would do it independently of a funder’s requirement.”
Financial audit is the component of the reporting and evaluation system most highly valued by grantees. This too is surprising, since it’s not normally even considered part of reporting and evaluation. It fits with other findings that IAF’s system helps grantees to gain:
- Capacity: grantees learn to collect, handle, interpret, present and use data. This is particularly important for organizations with the least developed management and analytical skills, some of which had not previously collected data at all.
- Confidence / courage: in their ability to collect data, and that their data are accurate and complete. Some grantees find this useful in their dealings with other organizations, such as other funders.
- Credibility: with their beneficiaries / communities, and with other organizations. Terms like ‘accountability’ and ‘transparency’ were used frequently.
These benefits are most prized by grantees which are earlier on the learning curve. We played a ‘game’ with interviewees in which they were given notional money and could either spend it on the various parts of the reporting and evaluation process or keep it for programmes: most (6 of 9) would rather have the reporting and evaluation process than the equivalent money for their programmes. Given the unpopularity of most funders’ reporting processes, this seemed remarkably high.
The reporting framework and metrics themselves did not prove terribly popular or influential, but the evaluators’ visits did, being integral to building grantees’ knowledge and skills:
“Her role is not to find errors. She is here to help us grow stronger and improve.”
“The recommendations made by the IAF, we also apply them to all our projects. It helps us improve our administrative systems. When the project ends, we will continue with these practices…”
“The evaluator’s … observations and criticisms… are useful for us to improve and see things that we don’t see on our own.”
Implications for IAF and other funders
A high-touch reporting and evaluation process may be useful when dealing with small grassroots organizations. Some grassroots organizations reported being so unskilled with data – and demonstrated this in numerical exercises during our interviews – that we would question the accuracy, meaning or usefulness of data they report to funders if they are not given support. Conversely, organizations which are more sophisticated and already further up the learning curve gain less from a high-touch process; some may need less support and some may need none. It may be wise to segment grantees by the extent and type of support they need.
The full case study is published here. IAF and Giving Evidence hope that this is useful to other funders, and also encourages other funders to share data about their own performance and findings in order to improve the practice of grant-making across the field.
Caroline Fiennes is Director of Giving Evidence. Twitter: @carolinefiennes