Workshop on Strategies to Improve the Effectiveness of Evaluation of Nonprofit Social Service Providers

U.S. Department of State Third Annual Conference on Program Evaluation - Evaluating Partnerships Track
Washington, DC
June 8, 2010

MR. SRIKANTIAH: Good morning, everyone.

I'd like to welcome you all to our second workshop under the theme "Evaluating Partnerships," the title being "Strategies to Improve Effectiveness of Evaluation of Nonprofit Social Service Providers," and we have three excellent panelists to discuss this topic.

The first is Dr. Kathryn Newcomer. She is the Director of the Trachtenberg School of Public Policy and Public Administration at George Washington University, where she is also the Co-Director of the Midge Smith Center for Evaluation Effectiveness.

Dr. Newcomer teaches public and nonprofit program evaluation, research design, and applied statistics.

She also conducts research for government agencies and nonprofit organizations on performance measurement and program evaluation, and has conducted training and given lectures on evaluation throughout the world.

Our second speaker today is Dr. Laila El Baradei. Dr. El Baradei is Associate Dean for the School of Global Affairs and Public Policy and Visiting Professor of Public Administration at the American University in Cairo, Egypt.

She is also a tenured professor of public administration at the Faculty of Economics and Political Science at Cairo University. Some of her areas of teaching include strategic management, development management, and organizational behavior. Over the years, Dr. El Baradei has also provided consultancy services to a number of organizations, including the World Bank, USAID, UNDP, and the Center of Development and Research in Bonn, Germany.

And our third speaker is Dr. Sandra Garcia. She is an Assistant Professor at the School of Government at the University of Los Andes in Colombia, where she teaches policy evaluation, quantitative methods, poverty and inequality, and public policy.

Dr. Garcia conducts research on program and impact evaluation, as well as social policy, particularly child and family policy. She has published articles in the Journal of Development Effectiveness and recently collaborated on the design of the impact evaluation of a large-scale conditional cash transfer program in Colombia.

We have a very accomplished panel today, and I'd like to start with Dr. Newcomer, who will provide an overview of the issues facing evaluation in a development context.

Thank you.

[ View slide presentation - Newcomer ]
[ View slide presentation - El Baradei ]
[ View slide presentation - Garcia ]

DR. NEWCOMER: Thank you very much, and I'm going to -- this is really going to be speed talking, because we don't have very long.

After seeing what happened at the last session, I'm going to have to talk really fast, but I have to say this: given where we are, Laila and I became very good friends because we had a Fulbright together back in 2002 to 2004, and we have remained extremely close professional colleagues ever since, and Sandra and I were able to start working together because of an Open Society grant, and we have continued, so this research project is really a fruit of cross-national collaboration for us.

So I'm just delighted to be here, but we want to know something really quickly. How many of you work for the Federal Government? Raise your hand. Okay. How many are professional evaluators? How many have done work in other countries in development? Okay. There you go. We were very curious. Okay. Okay.

So today, we are going to, very quickly, as quickly as we can talk, talk about the context, expectations, and practice of performance measurement, monitoring, and evaluation -- but more the former -- for nonprofit service providers in other countries, and we also are going to provide some suggestions about how we might improve this operation.

So I'm going to kind of set the context first. I have actually been here, three blocks from here, for 29 years, working particularly with the Federal Government and nonprofits in our country most of the time, and really been -- sort of had a front row seat on what's been going on.

So I'm going to set the context for this a little bit. As I'm sure all of you know, with the Government Performance and Results Act, there are requirements, of course, for performance reporting, and then, during the George W. Bush administration, the Office of Management and Budget put more teeth in the act with the PART process, the Program Assessment Rating Tool, which is gone, but a key thing was it really raised the visibility of evaluation and performance.

Now, as you will hear later, when Shelley Metzenbaum comes and speaks, the current administration is very interested in both performance measurement and evaluation, and transparency -- I want to come back to that -- but instead of PART, they are trying to enhance evaluation capacity in the agencies and stressing use.

"Use" is a key word you'll probably hear Shelley say quite often in her talk, because there is the perception, which I share, that there is a huge infrastructure for performance reporting and evaluation in the Federal Government, but not necessarily used for learning, and by the way, my work with dozens and dozens, probably hundreds, of nonprofits in the U.S. over the last 20 years shows basically the same thing. There's a lot of rhetoric, a lot of talk about the necessity, for example, for outcomes assessment and so on, but more talk than action on that.

The World Bank, of course, is also talking a lot about impact evaluation all the time, and the necessity for performance reporting pretty much everywhere, and in the United States, you have managing for results at the state level and city level, and as a matter of fact, the performance folks in OMB -- well, she'll probably mention that -- are very enamored with this notion of transferring this to the Federal Government, for example, and then, of course, since 1996, United Way has been saying to all of these nonprofits, you shall report on outcomes, and so again, there's been a lot of push on that, and then, of course, the talk about the need for credible evidence, evidence-based policy and management practice, etcetera.

So now let me talk a little bit about what I've noted, what my knowledge has led me to say about what I think we have in terms of supply, and the first point is I think that the demands for evidence exceed the capacity.

Whether you're a nonprofit in Dallas, Omaha, or Cairo, the capacity issue is extremely problematic in terms of really being able to get to outcomes, and the notion of use that the Zients and Metzenbaum team at OMB have noticed about the Federal Government is not only a problem for the Federal Government.

The notion of reporting for accountability to get the numbers, feed the beast, some people call it, is not necessarily always conducive to people being able to analyze trends, learn, figure out how to make real-time or even partial real-time improvements in what they're doing.

There are very few studies demonstrating that government or services are absolutely better off than they were before we started more performance measurement and evaluation, and by the way, I am an evaluator and I do that, so I'm not trying to say that what we do doesn't matter, but it's very difficult to make the case that the benefits always outweigh the costs, and in the development context -- and I'm sure that most of you know this way more than I do, but my colleagues can certainly testify -- the issues of reliability in data collection and the validity of measures are even more important than they are in Dallas and Omaha.

I want to just distinguish that we're really not talking so much about specific evaluation studies as we are talking about what people call either performance measurement or monitoring, which are routine, consistent measures of inputs, outputs, and maybe outcomes, but what I want to emphasize is that both involve both measurement and judgment, and there's a lot of judgment that needs to go on. This is not simple.

As I always say, it's much easier for me to teach somebody how to run a chi square on some data than it is to figure out what are reliable, relevant indicators of virtually anything, and what we all know is that measurement does not ensure use. That is, that even though you may have ongoing measurement, that doesn't mean that you go to what some people call performance management.

That is, utilizing the data for real-time decision-making or even planning, for example, in planning portfolios of grantees in foundations or at the local level.

You know, ideally, I think we all hope that performance measures will focus on mission-driven accomplishments and achievement and so on, but we also want relevant measures and then valid and reliable data, because if data are not credible, believe me, it's very difficult to move on to utilizing those measures in decision-making.

I think we all know that, and so we all hope for good, effective performance management, and I have to just throw this in, that we all believe in the theory underlying what we all do, and that's why it's important: we want to see improvement of the services delivered, and really, we do want to see better outcomes for those that are served by programs. The point is that simply measuring on an ongoing basis can answer some questions, like how many people are served, and maybe even how satisfied they are, but it's very difficult to get on to really having thorough knowledge of whether they are better off in the long run, two years down, five years down.

Are the children that are helped by the NGOs that we're going to talk about today better off than they were because of the services?

This is just my own little rendition: I think that the kind of learning that's going to arise from these kinds of endeavors is really dependent upon iterative, long-term investment in thinking and improvements in terms of how we capture information, how we use it, how we make sure that there is time for people to deliberate on what the data mean and so on, and I think that it takes a long time for us to even learn whether or not we're measuring the right thing for internal purposes, as well as for planning and accountability and so on.


So from my research and practice over the last 20-some years, what do I think are some of the typical barriers? We then looked to see how these were playing out in Egypt and Colombia.

Lack of clarity regarding the espoused and then actual use of data, and concern about who bears the burden and the cost. What I have found, in particular, is that when you talk about measurement, you really need to think about the organizational culture in a nonprofit or a foundation or a funder; that is, the level of comfort with it -- for example, is there consistent, persistent leadership support of this notion of learning from evaluation and measurement?

Do you have clarity in communications vertically within an organization and horizontally with grantees and other partners? What about work force stability? I know that that is a problem in many nonprofits, very small, poor nonprofits. There's really a large turnover, and so that's difficult even if you do make a commitment to training and capacity-building. Level of comfort with quantitative analyses, or perhaps even skepticism.

Receptivity to learning, and then sometimes a lack of clarity in terms of the theory of what we're doing, and the ability to achieve longer-term outcomes because of the complexity.

So we thought, well, what about looking at how the experience that we have with nonprofits in the U.S. plays out in nonprofits abroad? We chose to look at only nonprofits dealing with children's services, because we wanted to keep a little bit more focus. I have also talked to folks here at Save The Children and at UNICEF to see the funders' perspective, as well, and I'm going to come back at the end with just a few comments about some recommendations that we're going to make, but I'm going to turn this over to my dear colleague, Laila.

DR. EL BARADEI: Good morning.

So the next part of the presentation is about NGOs in Egypt providing services to children, and the outline for this section of the presentation will follow the four research questions that we were interested in trying to answer: basically, how these nonprofit service providers practice performance reporting, how they use the performance data, the types of performance measurement tools that they use and the perceived usefulness of these tools, and then, more interestingly, the recommendations that they offer to funders about how to improve the whole process of monitoring and evaluation and the kind of work that they do.

Basically, we focused on a sample of six nonprofit organizations providing services to children in Egypt. The basic features of these six NGOs were that they were all targeting children at risk, and by children at risk, they included children living in deprivation and extreme poverty conditions, homeless children, or what we refer to in Egypt as street children, abused children, and children with disabilities.

They were all working in slum areas in Cairo, the capital city, and they were all receiving funding from a variety of sources, whether international funding agencies like USAID, DANIDA, or JICA, or national funding agencies or private sector organizations.

The average number of funders per NGO was about five different funders, and they reported that the different funders required different formatting for their monitoring and evaluation reports.

They said that they were capable of meeting the reporting requirements, and some of the NGOs said they had no choice but to report -- it's the requirement, so there's nothing else they can do about it -- and they perceived the reporting to be quite accurate.

Some of the challenges that they mentioned as regards reporting were that donors may micro-manage the process.

For example, when they are focusing on financial disbursement issues, a donor may send a representative to photocopy each and every receipt in the organization, which is quite a waste of time and energy, or some donors do not state their reporting requests from the beginning; they may add other requirements or additional measures or indicators later on, which is perceived to be a little bit burdensome.

All six NPOs are required to set performance targets, and they perceive these as helpful, and the benefits perceived include avoiding past mistakes, measuring results, and allocating funds. They report trends in data collection, but the time-frame varies from one NGO to another. They perceive the performance data to be useful in making day-to-day decisions and also in some strategic decisions.

For example, one of the NGOs started out by trying to move children from hazardous jobs to less hazardous jobs, but after implementing the project for a while, they found out that families send their children to work starting at the age of six; 48 percent of the targeted group of children were in the age bracket of 6 to 12.

So they found it was illegal to try to send these children to any kind of training.

They changed the whole objective of the project to trying to get these children back to school, and this led to further complicated issues: finding out that the children were not even registered, they didn't have birth certificates, and trying to convince the parents to issue birth certificates for them. So they said that reporting on their performance helped them in redefining even the project objectives.

The performance data are also perceived to lead to other benefits, like reporting on what they do to government officials, or marketing their services and what they do, and attracting further donor funds.

Some of the NGOs even managed to win awards for what they were doing, one of these being the Alwan wa Awtar, or Colors and Strings, NGO in Egypt that managed to win the Michelle Obama Coming Up Taller award six months ago here in Washington, and basically, part of that may be attributed to its ability to keep a record of what it's doing and have an updated web page available for everyone to be able to check on it and see what's happening.

Reasons for not being completely satisfied with their performance reporting included the fact that it's burdensome sometimes, because donors overemphasize quantification issues.

They specifically mentioned that USAID is always insisting on having percentages for citizen satisfaction with the project that the NGO is trying to deliver, and they think that this does not capture the whole essence of what they are doing; teaching children arts and crafts, for example, cannot be translated into figures and numbers all the time.

Also they mentioned, very interestingly, that donors may sometimes impose requirements not directly related to the main mission of the organization: they are providing services to children, but the donors may require them, in their reporting, to collect information about the percentage of women who participate in the election process, because the donors have other political agendas that they are also trying to cater to.

As for the current practice in the use of performance data, usually the field workers collect the data, the project managers prepare the reports, and exceptionally, because it's such a problem for them to follow the reporting requirements, the NGOs may hire external consultants to get the job done and pay them through the project money, and finally, the general managers review the report.

All six NPOs use log frames, or logical framework analysis, in their reporting and in their attempts to stay focused, but they mentioned that there is a need to make sure it doesn't get too complicated -- it has to be in a simple format -- and when asked about what they perceive to be the most useful measurement indicators, they more or less gave examples related to outcomes rather than outputs.

For example, the number of children who become literate and are granted certification by the government. They said this is the most important thing that really measures what we are here for.

The interesting part is when we asked them about recommendations for how to improve the whole process of monitoring and evaluation, and I think they came up with very informative suggestions.

They asked for more training to be provided to non-governmental organizations, or NPOs, to help them understand what monitoring and evaluation tools are all about, to help them in preparing the reports, and to help them learn how to use the measures and the data that they collect.

They asked for standardizing the data collection and reporting methods. Maybe this is what an earlier presentation in this room was talking about related to the Paris Declaration and the need for harmonizing donors' procedures. So this is something that is required.

And they asked for simpler tools. They had some creative ideas for how to actually document qualitatively, and not just quantitatively, what they are there for, like, in one organization, preparing portfolios of all that the child learns during his or her experience there.

They asked for more flexibility in catering to people's needs, not for donors coming in with their own agendas and not listening to what the people actually require.

They also asked for adapting the monitoring and evaluation methodology, if possible, to what is actually applied within the non-governmental organization, and for understanding that the performance measure sometimes does not capture the essence of the activity; there is more there that is not always captured by the monitoring and evaluation tool.

Basically, there is a need for more qualitative assessment in parallel to the quantitative assessment to get the whole picture.

Now Sandra will continue with what's happening in Colombia.

Thank you.

DR. GARCIA: Good morning.

So I'm going to present our preliminary results from the field work in Colombia, which was parallel to Laila's work in Egypt, following the same research questions regarding the practice in performance measurement, the use of those measurements in the decision�'making process, and also some recommendations from the providers.

So we also interviewed six nonprofit organizations in Colombia, following the same protocol of interviews that Laila used in Egypt.

The six NGOs are serving children, very vulnerable children, either because they're in extreme poverty or at risk of abuse and neglect, or because they belong to families displaced by the internal conflict that we have in Colombia.

All of them provide services in Bogota and also at the national level, and one of them offers services in Cali, which is another big city in Colombia.

You have the list of the NGOs that are included in our study from Colombia, and in terms of the type of services, they go from temporary child protection to very high-quality early education for these extremely vulnerable children to nutrition services or reading services for children.

In terms of the current practices on reporting, what we found is that funders require performance reports in different formats, as in Egypt, and we also found that there is a bias towards extremely detailed inputs -- accountability on financials, or how money is spent -- and less on outputs or outcomes, and the NPOs themselves are asking for more capacity to measure and report on outcomes, because that would allow them to know how they're doing in terms of achieving their objectives rather than the detail of how their activities are being carried out.

In terms of the challenges that they perceive in the practice of performance measurement, one of the most important barriers is difficulty in data processing and systematization, and also in aggregating the data.

So they have very detailed measures for the children, for example, on their nutritional status or even cognitive development measures, but they don't have the ability to aggregate those data and analyze them in a more refined way to make some important decisions.

As in the case of Egypt, another big challenge is the overload of work.

So it's a very cumbersome process, not only of measurement but also of reporting, and they claim that this is in addition to the real work of providing the service.

So as Kathy will show you at the end, there is a big challenge in terms of capacity of these organizations.

They also report difficulty in measuring some key outcomes.

For example, entrepreneurship: one of the organizations tries to promote income-generating projects for these families, and they want to measure the way that people are changing their minds toward promoting their own income-generating projects, and that type of outcome is very difficult to measure.

In terms of how they use these data, they definitely use it in their day�'to�'day management.

For example, they monitor the cost per child in order to improve efficiency or they also monitor the dropout rates from their own projects in order to see how the design of their projects is adequate to meet the needs of these �'�' of these people.

They also use it for strategic management. So they really use these data to revise their strategies and redesign some of their projects.

Also, as in the case of Egypt, they use these data all the time to market success and to help with fund-raising.

Actually, the picture is one example from one of the organizations, Salvadori (phonetic), that provides nutritional services, of how they market success using all the data they collect.

So in sum, they do feel or they do perceive that performance measurement is a fundamental tool.

However, they are not very satisfied with the reporting process itself, and they perceive that there is sometimes a disconnection between the reporting requirements and the performance measurement itself.

So sometimes what is required to report is disconnected from their own mission, and sometimes they don't feel that what they're reporting helps them learn in order to improve their organization.

And there is also a second important challenge: they're required to report insistently on outputs and inputs and not so much on outcomes, and they do want to report on outcomes, even if they're not required to.

In terms of the practices, it's very similar to Egypt. So it's more like a team process of data collection.

It varies by NGO: larger NGOs are able to have their own research department, but in smaller NGOs, the actual field workers are in charge of collecting data and reporting it to the higher level.

So there is a lot of heterogeneity in the process of data collection and the production of these reports, and funders sometimes are not aware of these capacity barriers to developing these processes.

In terms of the measurement tools, most, but not all, of them -- as opposed to Egypt -- use logic modeling or logic frameworks as a tool for planning and measurement.

The ones that do see a lot of benefits from doing this, like guaranteeing continuous improvement, doing strategic planning, and obviously identifying needs and developing measurement indicators.

However, some of them also recognize that they do logic modeling because it is a requirement from the funders, in particular from the international funders.

So sometimes, again, there is a disconnection between what they really think that they need to do in order to improve and then what they are required to do in order to get the funding.

I'm going to just mention a couple of recommendations, because Kathy is going to sum up on those.

Definitely, one of the most important recommendations that they would love to give to funders is to help increase their capacity to collect and analyze data in order to improve their daily management.

Also, a better alignment between the measurements and the objectives, the goals of the organization. That was a recommendation that came up in all of the interviews.

Another important recommendation that they have is to involve all the stakeholders, including the beneficiaries, in the process, so that data is accurately collected and then used, which is a very important part.

And last, we were also interested in knowing whether they are using, or are interested in using, some impact evaluation methods to know whether or not their activities have a real impact on the beneficiaries, and actually, a positive surprise is that two out of those six organizations are right now running impact evaluations, in collaboration with United States universities, and there is like a wave of moving towards impact evaluations so that they can make decisions about expanding their programs or changing them totally.

So that's like a new wave in evaluation in NGOs.

So now Kathy is going to briefly sum up.

DR. NEWCOMER: Thank you.

The three of us have thought about what we've learned and come up with seven recommendations, and I'm going to kind of title this as reverse accountability. That is, it's accountability from the donors to the grantees. So these are all about donors.

You don't have this, by the way, on your PowerPoints. Sorry.

Donors should give service providers more discretion in how they characterize outcomes attributable to their work.

Number two, donors should recognize the value of qualitative assessments of outcomes such as success stories and critical incidents to supplement quantitative data requirements.

Third, donor reporting requirements should be leaner and more focused on achievement of the specific mission-driven goals of the providers. So in other words, data requirements should be aligned with the goals of the service providers.

Four, donors should streamline and simplify reporting and formatting requirements, and permit service providers to provide similarly formatted reports to the multiple donors that they report to.

Five, donors should encourage and reward providers for involving their stakeholders in framing monitoring and evaluation processes.

Six, donors should fund training in performance data collection, analysis, reporting, and use to support monitoring and evaluation in the field.

And lastly, donors should encourage and reward providers for performing monitoring and evaluation in-house, as opposed to out-sourcing these activities, thereby enhancing use and learning within the organizations.

As a footnote, UNICEF reported that, just in January, they changed to provide more flexibility in their reporting requirements, and Save The Children said, not as a defense but as somewhat of an explanation, that some of their requirements are because of the people who fund them.

So it's a complex situation for accountability.

Thank you.

MR. SRIKANTIAH: Thank you very much.

We now have some time for a couple of questions and comments. We are recording this session, so please push the button on the microphone in front of you. If you do not have a microphone in front of you, I will pass one to you. Thank you.

QUESTION: Hi. I have a question about that issue of allowing more flexibility in terms of reporting requirements and outputs versus inputs and outcomes. As a donor -- and I'm here at the Department of State -- one of the things that we do with our grantees is that we're moving towards this idea of common performance indicators, because we have accountability, as well, to the taxpayer and to other people within our section, and so it's hard for us to allow flexibility when we have to compare apples to apples.

So if we're looking at a public information campaign, for example, we need to make sure that people are measuring both outputs and outcomes the same way.

So how would you recommend -- although I'm also sensitive to the issues that you raise -- how would you recommend that we remain accountable and use this for our performance measurements while acknowledging the difficulties that many grantees, especially really local international grantees, have with this?

DR. NEWCOMER: Well, just briefly, I would suggest that there be a few very common indicators, and then you allow the grantees to supplement with additional ones, but try to keep it to very few and maybe have some collaboration so that there aren't widely different common indicators across funders.

What do you think?

DR. EL BARADEI: And maybe while using even the logical framework analysis, there is a section for assumptions, so you know, if these assumptions do not materialize, there should be another way out, a Plan B that they are allowed to pursue, and this will give them more flexibility.

MR. SRIKANTIAH: Next question? Next comment?

Speakers, are there any additional comments that you wanted to be able to make from your kind of expeditious presentations? We have a couple of minutes for you to expand on any ideas or thoughts.

DR. EL BARADEI: My own impression about this kind of research -- it was a learning experience for me in Egypt to go and visit these non-governmental organizations working with children in slum areas in Egypt, and it was an eye-opener about what poverty is all about.

It's not just low income, but it has a lot of other implications for the life of the child, the family, their ethical norms and values. Everything else is affected, and not all of that can be captured through a monitoring and evaluation system.

It's a very tough thing to try to achieve a complete measure of something that may not be easily measurable. It's not just the number of children who go to the classes, or even who get the certificate, as a measure of the outcome; it's more than that.

It's more about the impact on the family and on the community, and the more we try to include qualitative measures besides our quantitative measures, the more comprehensive a picture we will be able to get.

MR. SRIKANTIAH: Question in the back.

QUESTION: I think this is more of a comment. My name is Andy Blum. I work with the U.S. Institute of Peace grant program, so we're sort of a small donor, but we are on the donor side, and this perspective of sort of the donor incentives I think is really important.

And you know, donors do -- donors can't sort of implement those policies that you describe; you know, they have constraints that they are working under. And one of the things that we've done in the peace-building field is create a space where donors and implementers can talk collectively about these problems, because the individual implementers find it very hard to go to a donor and say, you know, this is what we would like you to do, but collectively, you know, maybe we can sort of start to meet our interests and meet the interests that you're identifying, as well. So it's just really a comment. I sort of would like your -- maybe your thoughts on that.

DR. EL BARADEI: Yes, I think this would be an excellent idea, allowing more dialogue between donors and recipients. It's not just a matter of commitment; as in the Paris Declaration, there is always this commitment to more dialogue and more involvement of different groups of stakeholders, and the talk is there, but the actual implementation, the work, is what's lacking currently.

And the more we can come up with ideas for facilitating this kind of dialogue and learning from the experience -- donors being accepting of recipients' feedback into the process and integrating it into their own procedures and systems to improve how they get things done -- this is actually an excellent idea.

QUESTION: It seems that part of the issue is a change of mind-set. There's a tendency to look at monitoring and evaluation as an overlay on what the organization really does, and I think that, ideally, what you want is for service providers to recognize that monitoring and evaluation is an organic part of what they do.

It helps them to deliver their services more effectively, and, if you have benchmarks, it also allows you to keep your partners engaged, because you can demonstrate that you are making progress. So I think part of it is a change in mind-set, as well.

DR. NEWCOMER: That's an excellent point, and that was one of the points we were making about trying to move towards capacity building as opposed to contracting out: if somebody is just paid to come and do it at you, you know, once a year or whenever, it does not become an organic part of how you see managing your organization, absolutely.

And I think it's interesting, the point about use -- I actually was working with the U.S. Department of Education, talking about how teachers and school systems use data and the need for more training there.

This is like universal. It's not just poor NGOs serving children in Colombia and Egypt but all over.

We assume that everybody knows how to use data.

Oh, yeah, you know, that's really easy, everybody's just going to -- because, you know, if you provide it, they will come, they will use it, kind of a thing; that, of course, everybody is just going to automatically know how to interpret trends and whatever, and that isn't the case.

And so it's not just teaching people how to input data into an Excel spreadsheet but what kinds of comparisons to make, and so on. You don't have to have a graduate course to do this, but it takes some time, and it takes commitment from an organization to say, yes, we do think it's so important that we're going to have our staff devote the time and, as a collective, we're going to treat this as a group effort.

MR. SRIKANTIAH: We have one minute left for another question.

QUESTION: Hi. My name is Pashanik (phonetic) Amenu, a program project design officer at USAID. And I'm curious if you have any good examples of incentives to build into a contracting mechanism, RFP or RFA, that would encourage NGOs to actually do that monitoring and evaluation in-house versus, down the road, hiring somebody.

DR. EL BARADEI: When we ask the NGOs to do it in-house, they have to receive training on what it is all about.

We shouldn't assume that they will naturally be able to develop their own logical frameworks and their own balanced scorecards. They have to learn the process: how to understand it, how to collect the data, how to report on it, and, more importantly, how to use it. Then they'll be able to do it in-house, and there will no longer be a need to hire an external consultant who goes through the whole process and gets the things on paper done while they don't actually make much use of it.

It's just a report that they shelve and give a copy of to the donor agency to be in compliance with its requirements, and that's it.

So they need to get actual training that will lead to a change in their behavior and actually build that capacity in collecting, reporting, and using the data they gather.

DR. GARCIA: I would add that helping to build the infrastructure would be a good incentive. So if, for example, basic software or basic instruments were developed to collect the data in-house, and that stays in place as infrastructure for months or years, that would be a good incentive, too.

MR. SRIKANTIAH: Thank you. It looks like we're out of time.

I'd like to thank our three speakers, and thank everyone today.