2013/United Nations Development Programme (UNDP), Volume I
Preface
The Multilateral Organisation Performance Assessment Network (MOPAN) is a network of donor countries with a common interest in assessing the organisational effectiveness of multilateral organisations. MOPAN was established in 2002 in response to international fora on aid effectiveness and calls for greater donor harmonisation and coordination.
Today, MOPAN is made up of 16 donor countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Ireland, The Netherlands, Norway, Republic of Korea, Spain, Sweden, Switzerland and the United Kingdom. For more information on MOPAN and to access previous MOPAN reports, please visit the MOPAN website (www.mopanonline.org).
Each year MOPAN carries out assessments of several multilateral organisations based on criteria agreed by MOPAN members. Its approach has evolved over the years and, since 2010, has been based on a survey of key stakeholders and a review of documents of multilateral organisations. MOPAN assessments provide a snapshot of four dimensions of organisational effectiveness (strategic management, operational management, relationship management, and knowledge management). In 2012, MOPAN piloted a new component that examines an organisation’s development results in addition to its organisational effectiveness.
MOPAN 2012
In 2012, MOPAN assessed six multilateral organisations: the African Development Bank (AfDB), GAVI Alliance (formerly the Global Alliance for Vaccines and Immunisation), the Joint United Nations Programme on HIV/AIDS (UNAIDS), the United Nations Development Programme (UNDP), the United Nations Children’s Fund (UNICEF), and the World Bank.
MOPAN Institutional Leads liaised with the multilateral organisations throughout the assessment and reporting process. MOPAN Country Leads monitored the process in each country and ensured the success of the survey.
| Multilateral Organisation | MOPAN Institutional Leads | Institutional Co-Leads |
| --- | --- | --- |
| African Development Bank (AfDB) | Canada | Switzerland and the United Kingdom |
| Global Alliance for Vaccines and Immunisation (GAVI) | France | Spain and Sweden |
| Joint United Nations Programme on HIV/AIDS (UNAIDS) | Finland | France |
| United Nations Children’s Fund (UNICEF) | Austria | Spain |
| United Nations Development Programme (UNDP) | Norway | Switzerland and Sweden |
| World Bank (IBRD/IDA) | Australia | The Netherlands |
| Countries | MOPAN Country Leads |
| --- | --- |
| Cambodia | Germany and Spain |
| Democratic Republic of Congo | France and Republic of Korea |
| Ghana | Canada and Denmark |
| Honduras | Switzerland |
| Morocco | France and Belgium |
| Niger | Switzerland and France |
| Nigeria | The United Kingdom and Finland |
| Philippines | Australia and Spain |
| Zimbabwe | Sweden and France |
Acknowledgements
We thank all participants in the MOPAN 2012 assessment of UNDP. UNDP’s senior management and staff made valuable contributions throughout the assessment and document review processes and provided lists of their direct partners to be surveyed. Survey respondents contributed useful insights and time to respond to the survey. The MOPAN Institutional Leads, Norway, Switzerland and Sweden, liaised with UNDP throughout the assessment and reporting process. The MOPAN Country Leads oversaw the process in the field and ensured the success of the survey. Consultants in each country provided vital in-country support by following up with country-level survey respondents to enhance survey response rates.
Roles of Authors and the MOPAN Secretariat
The MOPAN Secretariat, led by Ireland in 2012 and co-chaired by Germany, worked in close cooperation with the MOPAN Technical Working Group to launch and manage the survey. MOPAN developed the Key Performance and Micro-indicators, designed the survey methodology, coordinated the development of lists of survey respondents, and approved the final survey questionnaire. MOPAN also directed the design of the approach to document review. MOPAN oversaw the design, structure, tone, and content of the reports.
Universalia and Epinion developed the survey instrument and carried out the survey and analysis. Universalia carried out the document review and wrote the reports.
Epinion is a leading consulting firm in Denmark that analyses and evaluates data to support decision making. It conducts specially designed studies for public and private organisations based on data collected from an organisation’s employees, members, customers, partners, and other sources. Epinion has 75 employees and 200 interviewers. Website: www.epinion.dk
Universalia Management Group is a Canadian consulting firm established in 1980 that specialises in evaluation and monitoring for international development. Universalia has made significant contributions to identifying best practices and developing tools in the fields of organisational assessment; planning, monitoring, and evaluation; results-based management; and capacity building. Website: www.universalia.com.
Acronyms
AAA - Accra Agenda for Action
AOC - Agenda for Organisational Change
CAS - Country Assistance Strategy
COMPAS - Common Performance Assessment System
DaO - Delivering as One
DRF - Development Results Framework
HDR - Human Development Report
IDA - International Development Association
IEG - Independent Evaluation Group
KPI - Key Performance Indicator
M&E - Monitoring and Evaluation
MDGs - Millennium Development Goals
MI - Micro Indicator
MOPAN - Multilateral Organisation Performance Assessment Network
MRF - Management Results Framework
NGO - Non-governmental organization
ODA - Official Development Assistance
OECD-DAC - Organisation for Economic Co-operation and Development – Development Assistance Committee
PBA - Programme-based approach
PIU - Project Implementation Unit
RBM - Results-Based Management
ROAR - Results-Oriented Annual Report
UNBOA - United Nations Board of Auditors
UNDAF - United Nations Development Assistance Framework
UNDG - United Nations Development Group
UNEG - United Nations Evaluation Group
UNFPA - United Nations Population Fund
UNICEF - United Nations Children’s Fund
Contents
1. Introduction
1.1 MOPAN
2. MOPAN Methodology – 2012
2.1 Overview
3. Main Findings: Practices and Systems that Enable the Achievement of Results
3.1 Introduction
3.3.1 Overview
4. Main Findings: Development Results Component
4.1 Overview
4.2.1 Overview
4.3 Evidence of Extent of Contribution and Relevance at Country Level
4.3.1 Overview
5. Conclusion
Figures
Figure 2.1 Respondent Rating Scale
Figure 2.2 UNDP - Distribution of Responses (n=262) on all Questions Related to Micro-Indicators
Figure 2.3 Number of Survey Respondents for UNDP by Country and Respondent Group
Figure 2.4 MOPAN Ranges and Descriptions
Figure 3.1 Overall Ratings of UNDP Organisational Effectiveness by Respondent Group
Figure 3.2 Overall Ratings on Key Performance Indicators (mean scores, all respondents and document review ratings)
Figure 3.3 Quadrant I: Strategic Management, Survey and Document Review Ratings
Figure 3.4 Quadrant I: Strategic Management, Mean Scores by Respondent Group
Figure 3.5 KPI 1: Providing Direction for Results, Ratings of Micro-Indicators
Figure 3.6 KPI 2: Corporate Focus on Results, Ratings of Micro-Indicators
Figure 3.7 KPI 3: Focus on Thematic Priorities, Ratings of Micro-Indicators
Figure 3.8 KPI 4: Country Focus on Results, Ratings of Micro-Indicators
Figure 3.9 Quadrant II: Operational Management, Survey and Document Review Ratings
Figure 3.10 Quadrant II: Operational Management, Mean Scores by Respondent Group
Figure 3.11 KPI 5: Resource Allocation Decisions, Ratings of Micro-Indicators
Figure 3.12 KPI 6: Linking Aid Management to Performance, Ratings of Micro-Indicators
Figure 3.13 KPI 7: Financial Accountability, Ratings of Micro-Indicators
Figure 3.14 KPI 8: Using Performance Information, Ratings of Micro-Indicators
Figure 3.15 KPI 9: Managing Human Resources, Ratings of Micro-Indicators
Figure 3.16 KPI 10: Performance-oriented Programming, Ratings of Micro-Indicators
Figure 3.17 KPI 11: Delegating Authority, Ratings of Micro-Indicators
Figure 3.18 Quadrant III: Relationship Management, Survey and Document Review Ratings
Figure 3.19 Quadrant III: Relationship Management, Mean Scores by Respondent Group
Figure 3.20 KPI 12: Supporting National Plans, Ratings of Micro-Indicators
Figure 3.21 KPI 13: Adjusting Procedures, Ratings of Micro-Indicators
Figure 3.22 KPI 14: Using Country Systems, Ratings of Micro-Indicators
Figure 3.23 KPI 15: Contributing to Policy Dialogue, Ratings of Micro-Indicators
Figure 3.24 KPI 16: Harmonising Procedures, Ratings of Micro-Indicators
Figure 3.25 Quadrant IV: Knowledge Management, Survey and Document Review Ratings
Figure 3.26 Quadrant IV: Knowledge Management, Mean Scores by Respondent Group
Figure 3.27 KPI 17: Evaluating External Results, Ratings of Micro-Indicators
Figure 3.28 KPI 18: Presenting Performance Information, Ratings of Micro-Indicators
Figure 3.29 KPI 19: Disseminating Lessons Learned, Ratings of Micro-Indicators
Figure 4.1 Development Results Component – Overall Ratings
Figure 4.2 KPI A: Evidence of Extent of Progress toward Organisation-Wide Outcomes, Ratings
Figure 4.3 KPI B: Evidence of Extent of Contribution to Country-Level Goals and Priorities, Rating
Figure 4.4 UNDP Stakeholder Survey - Cambodia, Mean Scores by UNDP Country Programme Outcome
Figure 4.5 UNDP Stakeholder Survey – DRC, Mean Scores by UNDP Country Programme Outcome
Figure 4.6 UNDP Stakeholder Survey – Ghana, Mean Scores by UNDP Country Programme Outcome in Ghana
Figure 4.7 UNDP Stakeholder Survey - Honduras, Mean Scores by Country Programme Outcomes
Figure 4.8 UNDP Stakeholder Survey - Philippines, Mean Scores by Country Programme Outcomes
Figure 4.9 KPI C: Evidence of Extent of Contribution to Relevant MDGs, Overall Rating and Criteria
Figure 4.10 KPI D: Relevance of Objectives and Programme of Work to Stakeholders, Overall Rating and Survey Mean Scores by Country
Executive Summary
This report presents the results of an assessment of the United Nations Development Programme (UNDP) conducted by the Multilateral Organisation Performance Assessment Network (MOPAN). MOPAN assesses the organisational effectiveness of multilateral organisations based on a survey of stakeholders, a review of documents, and interviews with headquarters-based staff. In past years, MOPAN has not assessed an organisation’s development results, but is testing a component on this with four organisations in this year’s assessment.[1]
UNDP’s mission is to assist countries “to achieve sustainable human development”. Its work is demand-driven and focused on country priorities. Its Strategic Plan for 2008-2011,[2] which was extended to 2013, defines four focus areas based on the highest levels of demand: achieving the MDGs and reducing human poverty, fostering democratic governance, supporting crisis prevention and recovery, and managing energy and the environment for sustainable development. As highlighted in the midterm review of the Strategic Plan,[3] over the period of the current plan UNDP has had to respond to differing donor demands and shifts in the development landscape (e.g. the financial, food, and fuel crises; the rise to prominence of climate change issues; and an increased focus on achieving the Millennium Development Goals).
In 2011, UNDP initiated the Agenda for Organizational Change (AOC), a reform process that aims to: enhance organisational effectiveness; improve internal governance; strengthen leadership, culture and behaviour; and ensure effective programme delivery.
MOPAN assessed UNDP based on information collected through a survey of key stakeholders, a review of documents, and interviews with UNDP staff. The survey targeted MOPAN donors at headquarters, as well as MOPAN donors and UNDP direct partners in nine countries (i.e., Cambodia, the Democratic Republic of Congo, Ghana, Honduras, the Philippines, Morocco, Niger, Nigeria, and Zimbabwe). In total, 262 respondents participated in the survey. The document review examined publicly available corporate documents as well as country programming documents from surveyed countries. Interviews were conducted with UNDP staff to help identify missing documents for review and to provide information to triangulate data.
MOPAN assessments provide a snapshot of four dimensions of organisational effectiveness (strategic management, operational management, relationship management, and knowledge management). The main findings of the 2012 assessment of UNDP in these performance areas and in a pilot component on development results are summarised below.
Strategic Management
In the area of strategic management, MOPAN established criteria to determine if a multilateral organisation has strategies and systems in place that reflect good practice in managing for results. Overall, the 2012 assessment found that:
• Survey respondents consider that UNDP provides adequate leadership on results management. The document review found weaknesses in UNDP’s corporate results frameworks, particularly in the selection of indicators and establishment of results chains. UNDP recognises these problems and is working to address them.
• UNDP performs adequately in mainstreaming its cross-cutting priorities, and generally includes these in its country strategies. It received strong ratings from survey respondents and the document review for mainstreaming gender equality, but mixed ratings on mainstreaming capacity development, South-South cooperation, and human rights-based approaches.
• Direct partners considered UNDP very strong in linking its expected results at the country level with national development strategies and UNDAFs. There is room for improvement, however, in the formulation of results statements and in the inclusion of sufficient performance indicators within its country programming strategies.
Operational Management
In operational management, MOPAN established criteria to determine if a multilateral organisation manages its operations in a way that supports accountability for results and the use of information on performance. Overall, the 2012 assessment found that:
• UNDP received some of its highest ratings in the area of financial accountability due to its external and internal audit practices, anti-corruption policy, and systems to address financial irregularities.
• UNDP is seen as strong in delegating a significant amount of decision-making responsibility to country-level managers.
• UNDP’s use of performance information to improve its policies and strategies, as well as its country programmes, is considered adequate and the organisation is taking steps to improve its management of poorly performing programmes.
• UNDP is seen to allocate resources in accordance with its publicly available criteria.
• UNDP has made progress in results-based budgeting but its reporting does not yet fully reflect the improvements in UNDP’s resource planning system.
• UNDP’s use of targets to monitor project/programme implementation received mixed ratings. While survey respondents rated it as strong, the document review noted inconsistencies in this area.
• Survey respondents consider UNDP adequate in conducting benefit/impact analyses of new initiatives prior to approval. The assessment team noted, however, that procedures for screening projects according to social and environmental criteria prior to approval have only recently been formalised and are thus not yet institutionalised.
• UNDP’s management of human resources is considered adequate by surveyed stakeholders. The document review considered the results-focused nature of its performance assessment system for senior staff to be inadequate and had insufficient data to assess other aspects of human resource management.
Relationship Management
In relationship management, MOPAN established criteria to determine if a multilateral organisation is engaging with its partners at the country level in ways that contribute to aid effectiveness. Overall, the 2012 assessment found that:
• UNDP is seen to perform a key role and to be very strong in its coordination of the UN development system at the country level. The organisation demonstrates its commitment to harmonising procedures by participating in joint missions and strengthening national capacity through coordinated support.
• UNDP is considered strong in developing funding proposals in concert with national governments and direct partners. However, the length of time required for some of its administrative procedures is seen to negatively affect implementation.
• UNDP is perceived to make positive contributions to policy dialogue and to respect the views of partners.
• Direct partners and donors in-country consider UNDP adequate in its use of country systems, although many lacked sufficient knowledge to respond. While insufficient documentation constrained the document review, available evidence indicates that UNDP could increase its efforts to use national systems and avoid parallel implementation structures.
Knowledge Management
In knowledge management, MOPAN established criteria to determine if a multilateral organisation has reporting mechanisms and learning strategies that facilitate the sharing of information inside the organisation and with the development community. Overall, the 2012 assessment found that:
• UNDP’s Evaluation Office is independent and has established practices to ensure the quality of its evaluations. UNDP is generally seen to involve beneficiaries and partners in evaluation processes and to use evaluation findings to inform decision making. While UNDP has a policy to ensure sufficient evaluation coverage of its programming activities, sustained efforts are needed to ensure compliance.
• UNDP’s use of performance information to report on results is considered adequate by surveyed stakeholders. The document review rated UNDP weak in this area and noted room for improvement in: reporting on the achievement of outcomes, presenting performance information using measurable indicators, reporting on changes to policies or strategies based on performance information, and reporting on how lessons learned and best practices are used to improve programming.
• UNDP is considered adequate in disseminating lessons learned both within and outside the organisation. Its use of online communities of practice was noted as an effective means of sharing information.
Development Results Component
The 2012 pilot assessment focused on the evidence of UNDP’s contributions to development results.
• Evidence of the extent of progress towards organisation-wide results: MOPAN donors at headquarters consider UNDP to be making progress in achieving results in its four focus areas. Corporate reports, however, present incomplete results data across years. While there is documentary evidence of UNDP’s progress towards planned lower level results, evidence is limited for its overall contribution to higher-level change.
• Evidence of extent of contribution to country-level goals and priorities: In four of the five countries sampled, surveyed stakeholders perceive UNDP to be adequately contributing to its expected outcomes. The country-level reports sampled, however, do not provide clear evidence of UNDP’s results; this is due, in part, to the paucity of indicators with which to measure national outcomes or UNDP’s contribution to them.
• Evidence of extent of contribution to relevant MDGs: In-country donors and direct partners consider UNDP to adequately contribute to national efforts to achieve the MDGs. UNDP’s results at country level are linked to the MDGs, and UNDP has a unique role in working with governments in the monitoring of and reporting on MDGs. The reports consulted, however, do not sufficiently describe UNDP’s contribution in support of country efforts to achieve MDGs.
• Relevance of objectives and programmes of work to stakeholders: Surveyed stakeholders in-country considered UNDP strong overall for responding to key national development priorities, and adequate for providing innovative solutions to help address challenges, and adapting its work to the changing needs of partner countries.
Trends since 2009
This is the second time that UNDP has been assessed by MOPAN using its Common Approach methodology. In 2012, UNDP was recognised by respondents for contributing to policy dialogue as well as for supporting national plans and a country focus on results. In 2009, UNDP’s strongest ratings were related to its decentralised decision making and contribution to policy dialogue. Its lowest ratings concerned its dissemination of lessons learned and its use of country systems. In both 2009 and 2012, survey respondents highlighted UNDP’s role in coordinating with governments and other UN agencies as the organisation’s greatest strength, and cited a high level of bureaucracy as a major organisational challenge.
Conclusions
UNDP is recognised for its coordination role in the United Nations system.
UNDP’s coordination role within the United Nations system was seen as a key organisational strength in both the 2009 and 2012 MOPAN assessments. This was reflected in comments to open-ended questions in the 2012 assessment, in which survey respondents highlighted UNDP’s roles as Chair of the United Nations Development Group (UNDG), lead agency for the Millennium Development Goals (MDGs), publisher of the Human Development Reports (HDRs), and coordinator of the Delivering as One (DaO) initiative within countries.
Direct partners value UNDP and gave it strong ratings in most of the indicators assessed; however, they continued to identify UNDP’s bureaucracy and administrative inefficiencies as a key area for improvement.
Direct partners were very positive in their assessment of UNDP’s practices and systems; they rated UNDP strong on its overall organisational effectiveness and on all but two key performance indicators (UNDP’s adjustment of procedures to local conditions and capacities, and use of country systems). In comments to open-ended questions, the area for improvement cited most frequently by respondents was UNDP’s heavy bureaucracy related to decision-making, human resources, planning processes, and its systems for managing funds. This was also identified as a major organisational challenge in 2009.
UNDP is considered strong in mainstreaming gender; its integration of other cross-cutting priorities received mixed ratings.
The survey and document review commended UNDP for its practices and systems to mainstream gender equality, recognising the significant improvements made following an evaluation in 2005. UNDP was considered adequate in mainstreaming South-South cooperation. It received mixed ratings on its integration of other cross-cutting priorities. Surveyed stakeholders considered UNDP strong in integrating capacity development and human rights-based approaches (HRBA) in its programming. The document review rated UNDP adequate in integrating capacity development and inadequate in HRBA. Due to political sensitivities and changing Executive Board directives, HRBA is not an explicit cross-cutting theme for UNDP; its policies in this area do not provide comprehensive guidance, and the organisation lacks the accountability mechanisms to ensure mainstreaming.
UNDP has sound financial accountability systems.
UNDP has strong systems in place for financial audits, strong policies for anti-corruption and risk management, and procedures to address financial irregularities.
UNDP’s Evaluation Office works independently and has strong mechanisms to ensure the quality of evaluations.
UNDP’s evaluation policy, revised in 2011, establishes the Evaluation Office’s independence and ensures that all evaluations have a management response. The organisation has also recently put in place mechanisms to improve quality assurance processes for evaluations at global, regional and country levels, and is considered by stakeholders to adequately use evaluation findings in its decisions on programming, policy and strategy. Continued work is required, however, to ensure compliance with the planned evaluation coverage at the country level.
UNDP’s commitment to management for results has not yet translated into perceived or documented changes in the practices assessed by MOPAN.
This MOPAN assessment provides a snapshot of UNDP’s organisational effectiveness in the early stages of implementation of its ambitious Agenda for Organizational Change. The assessment found that positive changes in systems and practices have already resulted from this process, and that others are well underway. UNDP is clearly committed to results-based management across the organisation and is working to improve corporate and country level planning, monitoring, and reporting on results by: formulating results statements with increased clarity and precision; making links between outputs and outcomes more explicit; identifying measurable indicators and milestones; and consistently defining and providing data on baselines and targets. It has worked closely with donors in this process. UNDP has also taken steps to better link resources to results, improve its management of staff performance, and enhance the design of its projects and programmes.
Since the initiation of the Agenda for Organizational Change, however, reports to the Board do not always provide sufficient detail on progress in the different components of this agenda.
UNDP faces the challenge of developing robust results frameworks while remaining responsive to country priorities and demands.
This challenge is reflected in the assessment ratings, in which UNDP is seen as strong in aligning its work with national development priorities, but less successful in designing frameworks that clearly link results at all levels (project, programme, sector, country, and corporate results), in developing precise indicators to track these results, and in aggregating results achieved.
Development Results Component
Surveyed stakeholders generally hold positive views of UNDP’s achievement of results.
Donors at headquarters believe UNDP is making strong progress in achieving its goals in its corporate focus areas, and stakeholders at the country level find UNDP to be relevant and performing fairly well in contributing to the MDGs and to its expected outcomes.
UNDP’s reporting on results achieved remains an area where further attention is required.
UNDP’s current reporting at corporate and country level does not yet provide a holistic picture of organisational performance. Although evaluations and reports point to the success of individual projects and programmes, documentary evidence of UNDP’s achievement of corporate and country level outcomes is limited and inconclusive. However, UNDP is working to improve its reporting on results.
Overall MOPAN Ratings of UNDP
The chart below shows the ratings on the 19 key performance indicators that MOPAN used to assess UNDP in 2012. These indicators were designed to measure organisational effectiveness (practices and systems), not development results on the ground. UNDP received ratings of adequate or strong on all 19 key performance indicators assessed by survey respondents. Ratings ranged from weak to strong on the 16 key performance indicators assessed in the document review.
1. Introduction
1.1 MOPAN
This report presents the results of an assessment of the United Nations Development Programme (UNDP) that was conducted in 2012 by the Multilateral Organisation Performance Assessment Network (MOPAN). In 2012 MOPAN assessed six multilateral organisations: the African Development Bank (AfDB), GAVI Alliance (formerly the Global Alliance for Vaccines and Immunisation), the Joint United Nations Programme on HIV/AIDS (UNAIDS), the United Nations Development Programme (UNDP), the United Nations Children’s Fund (UNICEF), and the World Bank.
Background
MOPAN was established in 2002 in response to international fora on aid effectiveness and calls for greater donor harmonisation and coordination. The purpose of the network is to share information and experience in assessing the performance of multilateral organisations. MOPAN supports the commitments adopted by the international community to improve the impact and effectiveness of aid as reflected in the Paris Declaration on Aid Effectiveness, the Accra Agenda for Action, and the Busan High Level Forum. MOPAN’s processes and instruments embody the principles of local ownership, alignment and harmonisation of practices, and results-based management (RBM).
MOPAN provides a joint approach (known as the Common Approach) to assess the organisational effectiveness of multilateral organisations. The approach was derived from existing bilateral assessment tools and complements and draws on other assessment processes for development organisations – such as the bi-annual Survey on Monitoring the Paris Declaration on Aid Effectiveness and annual reports of the Common Performance Assessment System (COMPAS) published by the multilateral development banks. In the long term, MOPAN hopes that this approach will replace or reduce the need for other assessment approaches by bilateral donors.
MOPAN assesses four dimensions of organisational effectiveness
MOPAN has defined organisational effectiveness as the extent to which a multilateral organisation is organised to contribute to development and/or humanitarian results in the countries or territories where it operates.
Based on a survey of stakeholders and a review of documents, MOPAN assessments provide a snapshot of a multilateral organisation’s effectiveness in four dimensions:
• Developing strategies and plans that reflect good practices in managing for development results (strategic management)
• Managing operations by results to support accountability for results and the use of information on performance (operational management)
• Engaging in relationships with direct partners and donors at the country level in ways that contribute to aid effectiveness and that are aligned with the principles of the Paris Declaration (relationship management)
• Developing reporting mechanisms and learning strategies that facilitate the sharing of knowledge and information inside the organisation and with the development community (knowledge management).
In 2012, MOPAN also piloted a new component to assess a multilateral organisation’s contributions to development results. This component was tested with four of the six organisations assessed this year (the AfDB, UNDP, UNICEF, and the World Bank).
Purpose of MOPAN assessments
MOPAN assessments are intended to:
• Generate relevant, credible and robust information MOPAN members can use to meet their domestic accountability requirements and fulfil their responsibilities and obligations as bilateral donors
• Provide an evidence base for MOPAN members, multilateral organisations and direct partners to discuss organisational effectiveness and in doing so, build better understanding and improve organisational effectiveness and learning over time
• Support dialogue between MOPAN members, multilateral organisations and their partners, with a specific focus on improving organisational effectiveness over time, both at country and headquarters level.
The MOPAN methodology is evolving in response to what is being learned from year to year, and to accommodate multilateral organisations with different mandates. For example, the indicators and approach for the 2012 MOPAN review of a global fund and organisations with significant humanitarian programming were adapted to reflect the reality of these organisations.
1.2 Profile of UNDP
UNDP was established in 1965 by the United Nations General Assembly as the consolidation of two existing entities: the Expanded Programme of Technical Assistance and the United Nations Special Fund.
Mission and mandate
UNDP’s ultimate goal is to generate positive and transformative changes in people’s lives and to offer them choices and opportunities. More specifically, the organisation’s mission statement, defined by Executive Board decision 96/29 in 1996, is “to help countries in their efforts to achieve sustainable human development by assisting them to build their capacity to design and carry out development programmes in poverty eradication, employment creation and sustainable livelihoods, the empowerment of women and the protection and regeneration of the environment, giving first priority to poverty eradication.”
UNDP is mandated by the UN General Assembly to fulfil two roles. The first is to provide policy and technical support to programme countries to strengthen their capacities to design and implement development strategies that reflect their local context, national priorities, and internationally agreed development goals. The second is to support the promotion of coordination, efficiency and effectiveness of the United Nations system as a whole at the country level, notably by managing Resident Coordinators who lead UN country teams and coordinate development operations. The organisation also serves as a global knowledge network, sharing development best practices and solutions among countries.
Structure and Governance
UNDP maintains a wide field-presence, with offices in 129 countries and operations in an additional 48. Headquartered in New York City, the organisation also ensures its global presence through six regional centres and five representation offices. These offices provide advisory services to UNDP’s in-country staff based on global applied research, develop partnerships, and promote regional capacity building initiatives. In addition, UNDP is home to the Special Unit for South-South Cooperation (SU/SSC) and is affiliated with the United Nations Capital Development Fund (UNCDF) and United Nations Volunteers (UNV).
UNDP is governed by an Executive Board composed of representatives from 36 member states who serve on a rotating basis. Responsibilities of the Executive Board include approving UNDP’s programmes, monitoring its performance, deciding on administrative and financial plans and budgets, and reporting to the UN Economic and Social Council annually. The Administrator, appointed by the United Nations Secretary-General in consultation with the Executive Board, is the head of UNDP and is responsible for providing direction and control.
Strategy and Services
UNDP’s work is founded on the principle that countries should own and lead development interventions. Therefore, its work is driven by country demand. Informed by the United Nations tenets of universality and impartiality, UNDP’s approach to human development focuses on supporting national capacity and avoiding political conditionality. The organisation acts as a ‘provider of last resort’ in crisis settings and, when specifically asked, in cases where national capacity is lacking.
The UNDP Strategic Plan, 2008-2011: Accelerating global progress on human development, which followed the organisation’s Multiyear funding framework, 2004-2007, was developed to provide information on UNDP’s aspirations, concrete objectives and specific targets. It is UNDP’s first strategic document that includes both a development results framework and an institutional results framework (the latter includes sections on management results and UN coordination results). In 2009, UNDP’s Executive Board extended the Strategic Plan to 2013 to align it with the Quadrennial Comprehensive Policy Review (QCPR) of operational activities for development of the United Nations system.
UNDP’s Strategic Plan identifies four focus areas based on high demand for support from programme countries and on UNDP’s strengths and comparative advantage. These are: i) achieving the MDGs and reducing human poverty; ii) fostering democratic governance; iii) supporting crisis prevention and recovery; and iv) managing energy and the environment for sustainable development. In addition, the Strategic Plan explicitly identifies three cross-cutting priorities to be mainstreamed throughout the organisation’s four areas of focus (i.e., capacity development, gender equality, and South-South cooperation).
Finances
UNDP is funded entirely from voluntary contributions provided by bilateral and multilateral partners and programme countries. Contributions in 2011 totalled $4.83 billion, which represents a decrease of $0.18 billion from 2010. Regular core resources, which are non-earmarked funds, accounted for 20 per cent of this total.
The global economic crisis and volatile exchange rates continue to pose challenges to the predictability of UNDP’s core funding. To ensure the organisation’s strategic direction and sustainability, UNDP’s Executive Board, at its 2012 annual session, reiterated the need for predictable and stable funding of UNDP’s core resource base and encouraged traditional donors to at least maintain their contribution levels and new donors to increase theirs.
Reform process
In 2011, UNDP embarked on a reform process known as the Agenda for Organizational Change (AOC) to “lift its performance from good to great”. The objectives of this organisational change initiative are to: i) improve governance within UNDP; ii) enhance organisational effectiveness; iii) lift leadership, culture, and behaviour; and iv) ensure effective programme delivery. Many of the changes taking place as part of this initiative are meant to strengthen the organisation’s results focus and to benefit the preparation of the next Strategic Plan (2014-2017). In particular, the AOC aims to improve UNDP’s strategic planning and priority-setting, organisational performance and results, programme quality and focus, as well as results and resource linkages.
For additional information, please consult the UNDP website: www.undp.org.
1.3 Previous Assessments
UNDP was previously assessed by MOPAN surveys in 2004, 2007, and 2009. As the MOPAN Common Approach came into existence in 2009, only the 2009 assessment is based on the MOPAN standardised questions. Nevertheless, it is useful to highlight a few key findings from all three surveys.
MOPAN Survey 2004
In this survey of three multilateral organisations, MOPAN donors in 10 countries were asked about performance at the country level. UNDP was perceived as “doing a good job” overall and as playing a crucial role in development throughout the world, due to its operational activities in a vast network of countries and its coordinating function within the UN development system.
Policy dialogue, capacity-building activities, and support to non-state actors were regarded as positive dimensions of its relationships with national partners. Support to national policies and strategies and alignment with these were also perceived to be key areas of UNDP contribution. However, UNDP was criticised for not positioning itself on controversial issues. The assessment also noted that further progress could be made in fostering country ownership and ensuring the lasting impact of its capacity development efforts. Information sharing with inter-agency partners, harmonisation, and responsiveness to donors were identified as areas with room for significant improvement. In particular, harmonisation of UNDP’s procedures with other donors was found to be quite slow.
While respondents had mixed views about UNDP’s performance with respect to its interagency coordination, they welcomed the introduction of new arrangements, such as the UNDAF and UN House, and noted improvements in the quality of Resident Coordinator staffing.
MOPAN Survey 2007
The 2007 MOPAN survey assessed three multilateral organisations’ behaviours towards national stakeholders (i.e., governments, NGOs, and the private sector) and other international development agencies based on the perceptions of MOPAN donors stationed in 10 countries.
Though UNDP’s contribution to policy dialogue was viewed as positive overall, perceived areas of improvement for the organisation included its avoidance of politically sensitive issues and its excessive focus on coordination as opposed to substantive policy inputs. In terms of advocacy, UNDP fared well, with respondents highlighting its support to national campaigns. Mixed views were expressed by survey respondents with regard to capacity development. Whereas the quality of its technical advice was assessed positively, the organisation was criticised for often being directly responsible for project management, as this was seen to limit capacity development and ownership by national partners. However, UNDP was considered to be supportive of national poverty reduction strategies.
With regard to UNDP’s interagency relationships, respondents appreciated the organisation’s information-sharing, but noted the potential for improvement in the briefings given on visiting missions. UNDP was also considered a key partner in aid coordination, especially of interagency working groups. Its performance was seen to vary considerably across the surveyed countries, however. Though UNDP was regarded as an active contributor to both local donor harmonisation initiatives and harmonisation within the UN system, respondents indicated that UNDP could increase its participation in joint programming and field missions.
MOPAN Common Approach 2009
UNDP was assessed in 2009 at an institutional and country level, using the MOPAN Common Approach standardised survey questions. Respondents included MOPAN donors at headquarters, as well as direct partners and MOPAN donors based in nine countries. UNDP was one of four organisations assessed.
Survey respondents identified UNDP’s role in coordinating governments and other UN agencies as its greatest organisational strength. Decentralised decision making and contribution to policy dialogue were also highlighted as key areas of strength. However, the perceived wide scope of UNDP’s mandate and high level of bureaucracy were cited as challenges for the organisation. Respondents rated UNDP as inadequate for its dissemination of lessons learned, its use of parallel implementation units (PIUs), and its limited use of national financial reporting procedures. The assessment recognised that UNDP had undergone organisational change efforts to improve its coherence, focus, accountability and transparency, and that these were leading to progress.
2. MOPAN Methodology – 2012
2.1 Overview
Background
MOPAN continues to refine its assessment framework. In 2009, the MOPAN Common Approach replaced the Annual MOPAN Survey, which had been conducted since 2003. The Common Approach is broader and deeper than the previous surveys and includes the following components:
• Expanded survey – The MOPAN survey now brings in the views of direct partners or clients of multilateral organisations, peer organisations (or other relevant stakeholder groups), and donors, that is, MOPAN members at both headquarters and country level.
• Document review – Since 2010, survey data are complemented by a review of documents prepared by the multilateral organisations being assessed and other sources.
• Interviews – In 2012, MOPAN complemented survey data and document review with consultations and interviews at the headquarters of multilateral organisations assessed.
In 2012 MOPAN also tested a new component to assess the results of multilateral organisations.[4]
As MOPAN’s methodology has changed significantly in the last three years, comparisons between this year’s assessments and previous assessments should take this into consideration.
The following is a summary of the MOPAN methodology in 2012.[5]
MOPAN 2012
In 2012, MOPAN assessed the effectiveness of six multilateral organisations: the African Development Bank (AfDB), GAVI Alliance (formerly the Global Alliance for Vaccines and Immunisation), the Joint United Nations Programme on HIV/AIDS (UNAIDS), the United Nations Development Programme (UNDP), the United Nations Children’s Fund (UNICEF), and the World Bank. The assessment was conducted in Cambodia, Democratic Republic of Congo, Ghana, Honduras, Philippines, Morocco, Niger, Nigeria, and Zimbabwe.[6]
The MOPAN Common Approach examines organisational systems, practices, and behaviours that MOPAN believes are important for aid effectiveness and that are likely to contribute to results at the country level. It groups these organisational capacities in four areas of performance: strategic management, operational management, relationship management, and knowledge management.
Key Performance Indicators and Micro-indicators – Within each performance area, organisational effectiveness is described using key performance indicators (KPIs) that are measured with a series of micro-indicators (MIs).
The micro-indicators are assessed using data from a survey and document review. The survey collects perception data from a variety of stakeholders (see Section 2.2) and the review of documents relies on a set of criteria that provide a basis for the assessment of each micro-indicator (see Section 2.3). However, not all micro-indicators are assessed by both the survey and the document review; consequently, some charts do not show survey scores and document review scores for each KPI or MI.
UNDP was assessed using 19 KPIs and 69 MIs. The full list of MIs assessed is provided in Volume II, Appendix V (KPI and MI Data by Quadrant).
2.2 Survey
To gather diverse perspectives on the multilateral organisations being assessed, MOPAN generally seeks the perceptions of the following primary respondent groups:
• Donor Headquarters Oversight (HQ): Professional staff, working for a MOPAN donor government, who share responsibility for overseeing/observing a multilateral organisation at the institutional level. These respondents may be based at the donor’s permanent mission to the multilateral organisation or in the donor capital.
• Donor Country Office Oversight (CO): Individuals who work for a MOPAN donor government and are in a position that shares responsibility for overseeing/observing a multilateral organisation at the country level.
• Direct Partners (DP): Individuals who work for a national partner organisation (government or civil society) in a developing country. Respondents are usually professional staff from organisations that receive some sort of direct transfer from the multilateral organisation or that have direct interaction with it at country level (this could take the form of financial assistance, technical assistance, policy advice, equipment, supplies, etc.).
MOPAN donor respondents are chosen by MOPAN member countries. Other respondents are identified by the multilateral organisation being assessed.
The survey is customised for each organisation assessed and can be completed online in English, French, or Spanish or offline (paper, email, or interview) in these same languages. See Volume II (Appendix II) for the survey. Individual responses to the survey are confidential to the independent consultants managing the online survey or collecting data offline in the field.
Respondent Ratings – Survey respondents are presented with statements describing an organisational practice, system, or behaviour and asked to rate the organisation’s performance on a scale of 1 to 6 as shown below.
Figure 2.1 Respondent Rating Scale
Score | Rating | Definition
1 | Very Weak | The multilateral organisation does not have this system in place and this is a source of concern.
2 | Weak | The multilateral organisation has this system but there are important deficiencies.
3 | Inadequate | The multilateral organisation’s system in this area has deficiencies that make it less than acceptable.
4 | Adequate | The multilateral organisation’s system is acceptable in this area.
5 | Strong | The multilateral organisation’s system is more than acceptable, yet without being “best practice” in this area.
6 | Very Strong | The multilateral organisation’s system is “best practice” in this area.
In some cases, not all survey questions are answered, either because: 1) the individual chose not to answer, or 2) the question was not asked of that individual. In these cases, mean scores are calculated using the actual number of people responding to the question. As noted in the methodology (Volume II, Appendix I), ‘don’t know’ survey responses are not factored into the calculation of mean scores. However, when the proportion of respondents answering ‘don’t know’ is considered notable for a micro-indicator, this is indicated in the report.
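The treatment of ‘don’t know’ and missing responses described above can be sketched in a few lines. This is an illustrative example only, not MOPAN’s actual tooling; the “DK” sentinel and the sample ratings are invented for illustration.

```python
# Illustrative sketch only (not MOPAN's code): computing a mean score for one
# survey question while excluding 'don't know' and missing responses.
# The "DK" sentinel and the sample ratings are assumptions for illustration.

DONT_KNOW = "DK"

def mean_score(responses):
    """Mean of numeric ratings (1-6); 'don't know' and missing answers are dropped."""
    valid = [r for r in responses if isinstance(r, int)]
    return sum(valid) / len(valid) if valid else None

def dont_know_share(responses):
    """Proportion of actual answers that were 'don't know' (reported separately)."""
    answered = [r for r in responses if r is not None]
    return answered.count(DONT_KNOW) / len(answered) if answered else 0.0

ratings = [4, 5, DONT_KNOW, 3, None, 6, DONT_KNOW]  # None = question not answered
print(mean_score(ratings))                 # 4.5  (mean of 4, 5, 3, 6)
print(round(dont_know_share(ratings), 2))  # 0.33 (2 of 6 answers were 'don't know')
```

Note that the ‘don’t know’ proportion is kept alongside the mean rather than folded into it, mirroring how the report flags micro-indicators with notable ‘don’t know’ levels.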
The distribution of responses on the six rating choices plus ‘don’t know’ are summarised by respondent group across all UNDP survey questions in Figure 2.2.
Figure 2.2 UNDP - Distribution of Responses (n=262) on all Questions Related to Micro-Indicators
While there were responses in all six possible choices, relatively few responses overall were at the ‘weak’ end of the scale. One-fifth of responses from donors in-country were ‘don’t know,’ which is slightly higher than the level of ‘don’t know’ responses provided by other groups. Direct partners responded more positively than other groups overall, with nearly one-quarter of their responses within the ‘very strong’ category.
Survey Response Rate
MOPAN aims to achieve a 70 per cent response rate from donors at headquarters and a 50 per cent response rate among the population of respondents in each of the survey countries (i.e., donors in-country and direct partners). The number of respondents targeted in each category (i.e., the total population) and the actual response rates for the UNDP survey are presented in Figure 2.3 below. The response rates for all categories of respondents exceeded the targets. While there are variations in the response rates by category and location of respondents, UNDP survey results reflect the views of 262 respondents.
Figure 2.3 Number of Survey Respondents for UNDP by Country and Respondent Group
Converting Individual Scores to Group Ratings
As noted above, individuals respond to survey questions on a six-point scale where a rating of “1” is considered a judgment of “very weak” up to a rating of “6” intended to represent a judgment of “very strong.” A mean score is calculated for each respondent group (e.g., donors at HQ). Since mean scores are not necessarily whole numbers (from 1 to 6) MOPAN assigns numerical ranges and descriptive ratings for each range (from very weak to very strong) as shown below.
Figure 2.4 MOPAN Ranges and Descriptions
Range of the mean scores | Rating
1 to 1.49 | Very Weak
1.50 to 2.49 | Weak
2.50 to 3.49 | Inadequate
3.50 to 4.49 | Adequate
4.50 to 5.49 | Strong
5.50 to 6.00 | Very Strong
The ranges are represented to two decimal places, which is simply the result of a mathematical calculation and should not be interpreted as representing a high degree of precision. The ratings applied to the various KPIs should be viewed as indicative judgments rather than precise measurements.
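The ranges in Figure 2.4 amount to a simple threshold lookup. The sketch below is a hypothetical illustration of that mapping, assuming mean scores between 1.00 and 6.00 and half-open boundaries at each .50; it is not code used by MOPAN.

```python
# Hypothetical mapping from a group's mean score to the MOPAN descriptive
# rating, using the range boundaries from Figure 2.4.

def rating_for(mean):
    """Return the descriptive rating for a mean score in [1.00, 6.00]."""
    if mean < 1.50:
        return "Very Weak"
    if mean < 2.50:
        return "Weak"
    if mean < 3.50:
        return "Inadequate"
    if mean < 4.50:
        return "Adequate"
    if mean < 5.50:
        return "Strong"
    return "Very Strong"

print(rating_for(4.12))  # Adequate
print(rating_for(5.50))  # Very Strong
```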
Data Analysis
First level survey data analysis includes calculations of mean scores, medians, standard deviations, and frequencies (including analysis of ‘don’t know’ and missing responses), as well as content analysis of open-ended questions. The ‘don’t know’ responses are removed from the calculation of mean scores, but the proportion of respondents choosing ‘don’t know’ is retained as potentially useful data.
A weighting scheme is applied to ensure that no single respondent group or country is under-represented in the analysis. The weighting is intended to correct for variation in the number of individuals in each respondent group, the number of countries where the survey took place, and the numbers of donors in-country, direct partners, and other respondent groups within each country. Weighted figures are carefully reviewed and analysed before inclusion in the multilateral organisation reports.
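The intent of such weighting can be illustrated with a minimal sketch: averaging per-group means, rather than pooling all individual responses, keeps a large respondent group from dominating the result. The group names and scores below are invented, and MOPAN’s actual scheme also balances across countries.

```python
# Minimal illustration of group weighting (invented data, not MOPAN's scheme):
# each respondent group contributes equally, regardless of how many
# individuals it contains.

def weighted_mean(groups):
    """Average of per-group means, so group size does not drive the result."""
    group_means = [sum(scores) / len(scores) for scores in groups.values() if scores]
    return sum(group_means) / len(group_means)

responses = {
    "donors_in_country": [4, 5, 4],                 # 3 respondents, mean 4.33
    "direct_partners":   [5, 6, 5, 6, 5, 6, 5, 6],  # 8 respondents, mean 5.50
}
print(round(weighted_mean(responses), 2))  # 4.92 (a pooled mean would be ~5.18)
```

Here the three in-country donors carry the same weight as the eight direct partners, which is the behaviour the weighting scheme is designed to produce.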
Second level analysis examines differences in the responses among categories of respondents and other variables. When significant differences are found, these are noted in the report.[7]
For a full description of survey data analysis see Volume II, Appendix I.
2.3 Document Review
The document review considers three types of documents: multilateral organisation documents, identified with the help of the organisation; internal and external reviews of the organisation’s performance, found on the organisation’s website or provided by the organisation; and external assessments such as the Survey on Monitoring the Paris Declaration, the Common Performance Assessment (COMPAS) report, and previous MOPAN surveys.
Ratings for key performance indicators (KPIs) are based on the ratings for the component micro-indicators in each KPI. For each micro-indicator, a set of criteria is established which, taken together, are thought to represent good practice in that area. The criteria are based on existing standards and guidelines (for example, UNEG or OECD-DAC guidelines), on MOPAN identification of key aspects to consider, and on the input of subject-matter specialists. The rating on any micro-indicator depends on the number of criteria met by the organisation. In cases where the micro-indicator ratings for one KPI are highly divergent, this is noted in the report.
While the document review assesses most micro-indicators, it does not assign a rating to all of them (when criteria have not been established). Consequently, some charts do not show document review scores for each KPI or MI. Documents are also used to aid in the understanding of the context in which the multilateral organisations work.
The document review and survey use the same list of micro-indicators, but some questions in the document review are worded differently from those in the survey. The document review and survey also use the same rating scale, but scores are presented separately on each chart in the report to show their degree of convergence or divergence.
2.4 Interviews
As of 2012, interviews are conducted at the headquarters of multilateral organisations with individuals who are knowledgeable in areas that relate to the MOPAN assessment.
Interviewees are asked to provide knowledge, insight, and contextual information that could assist the MOPAN Assessment Team in analysing document review data, and to identify other relevant documents for the Assessment Team to consider. This helps ensure that the Assessment Team has all the appropriate and necessary documents, enhances the Team’s ability to triangulate data from various sources, and assists the Assessment Team in the analysis of the key performance indicators by providing contextual information.
Interviews are conducted with a small number of staff who work in the primary units that relate to areas of the MOPAN assessment. Interviewees are identified by the multilateral organisation in conjunction with the Assessment Team and MOPAN. An interview guide is prepared and interviewees are advised of the content areas beforehand.
Data gathered during interviews is used to understand the context in which the agency is working, as well as how decisions are made. In the event that survey data present a picture that is very different from the document review, information from interviews can help clarify how the multilateral organisation approached a certain issue.
2.5 Basis for Judgment
From 2003 to 2009, the basis for judgment in MOPAN assessments was the perceptions of survey respondents. With the introduction of the document review in 2010 and interviews in 2012, judgments now draw on a variety of sources that can be compared and triangulated.
To the extent possible, the assessment standards and criteria are tailored to reflect the nature and operating environment of the multilateral organisations under review.
The MOPAN approach uses multiple data sources and data collection methods to validate findings. This helps reduce bias and detect errors or anomalies.
The MOPAN reports gain trustworthiness through the multiple reviews and validation processes that are carried out by members of the network and by the multilateral organisations themselves.
2.6 Reporting
Institutional Reports
Individual institutional reports are produced for each multilateral organisation assessed. The results of the document review are presented alongside the survey results and discussed in light of the perception-based scores and interviews in order to further substantiate and contextualise the overall findings. For those agencies that were evaluated in 2009, a brief analysis of trends is included.
Country Data Summaries
A summary of survey results is produced for each multilateral organisation in each of the countries surveyed where sufficient survey data exists. Country Data Summaries provide feedback to those who participated in the MOPAN assessment and provide input for a dialogue process. They are not published and are shared only with individuals who attend the country workshop on the MOPAN assessment findings, which takes place in the first quarter of the year following the assessment.
2.7 Strengths and Limitations of Methodology
MOPAN continues to improve its methodology based on the experience of each year of implementation. The following strengths and limitations should be considered when reading MOPAN’s report on UNDP.
Strengths
• The MOPAN Common Approach is based on the core elements of existing bilateral assessment tools. In the long term, the intent is to replace or reduce the need for other assessment approaches by bilateral donors.
• It seeks perceptual information from different perspectives: MOPAN donors (at headquarters and in-country), direct partners/clients of multilateral organisations, peer organisations, and other relevant stakeholders. This is in line with donor commitments to the Paris Declaration on Aid Effectiveness and the Accra Agenda for Action regarding harmonisation, partner voice, and mutual accountability.
• It complements perceptual data with document review and interviews, thus using multiple sources of data. This should enhance the analysis, provide a basis for discussion of agency effectiveness, and increase the validity of the assessment through triangulation of data.
• The reports undergo a validation process, including multiple reviews by MOPAN members and review by the multilateral organisation being assessed.
• MOPAN strives for consistency across its survey questions and document review for each of the multilateral organisations, while allowing for customisation to account for differences between types of multilateral organisations.
Limitations
MOPAN Framework
• The countries are selected based on established MOPAN criteria and comprise only a small proportion of each institution’s operations, thus limiting broader generalisations.
• The Common Approach indicators were designed for multilateral organisations that have operations in the field. For organisations that have limited field presence or that have regional structures in addition to headquarters and country operations, there have been some modifications made in the data collection method and there will be a need for greater nuance in the analysis of the data.
Data sources
• The MOPAN Common Approach asks MOPAN members and the organisations assessed to select the most appropriate individuals to complete the survey. While MOPAN sometimes discusses the selection with the organisation being assessed, it has no means of determining whether the most knowledgeable and qualified individuals are those that complete the survey.
• The document review component works within the confines of an organisation’s disclosure policy. In some cases, low document review ratings may be due to unavailability of organisational documents that meet the MOPAN criteria (some of which require a sample of a type of document, such as country plans, or require certain aspects to be documented explicitly). When information is insufficient to make a rating, this is noted in the charts.
Data Collection Instruments
• Three issues potentially affect survey responses. First, the survey instrument is long and a fatigue factor may affect responses and rates of response. Second, respondents may not have the knowledge to respond to all the questions (e.g., survey questions referring to internal operations of the organisation, such as financial accountability and delegation of decision-making, seem difficult for many respondents, who frequently answer ‘don’t know.’) Third, a large number of ‘don’t know’ responses may imply that respondents did not understand certain questions.
• The rating choices provided in the MOPAN survey may not be used consistently by all respondents, especially across the many cultures involved in the MOPAN assessment. One potential limitation is ‘central tendency bias’ (i.e., a tendency in respondents to avoid extremes on a scale). Cultural differences may also contribute to this bias as respondents in some cultures may be unwilling to criticise or too eager to praise.
• Because one of MOPAN’s intentions is to merge previously existing assessment tools into one, and to forestall the development of others, the survey instrument remains quite long.
Data Analysis
• While the document review can serve to evaluate the contents of a document, it cannot assess the extent to which the spirit of that document has been implemented within the organisation (unless implementation is documented elsewhere).
• Mean scores are used in the MOPAN reports to provide central tendency values of the survey results. The mean has the advantage of being the most commonly understood measure of central tendency; however, it is sensitive to extreme scores (outliers), particularly when population samples are small. The assessment team therefore also reviewed the median and standard deviations in analysing the survey results. Volume II, Appendix V provides the standard deviations for each survey question.
Basis for Judgment
• Although MOPAN uses recognised standards and criteria for what constitutes good practice for a multilateral organisation, such criteria do not exist for all MOPAN indicators. As a result, many of the criteria used in reviewing document content were developed by MOPAN in the course of the assessment process. The criteria are a work in progress and should not be considered definitive standards.
• The Common Approach assessment produces numerical scores or ratings that appear to have a high degree of precision, yet can only provide general indications of how an organisation is doing and a basis for discussion among MOPAN members, the multilateral organisation, and other stakeholders, including direct partners.
Despite some limitations, the Assessment Team believes that the MOPAN reports generally provide a reasonable picture of systems associated with the organisational effectiveness of multilateral organisations.
2.8 Testing a New Component in 2012: Assessing Development Results
Overview
Until 2012, MOPAN assessments focused on the organisational effectiveness of multilateral organisations by examining organisational practices, systems, and behaviours that MOPAN believes are important for managing to achieve development results. In 2012, MOPAN tested a component to assess how multilateral organisations report on development results achieved – with four of the six organisations assessed: AfDB, UNDP, UNICEF, and the World Bank.[8]
Sub-Components
• KPI A – Evidence of extent of progress towards organisation-wide outcomes[9]
• KPI B – Evidence of extent of contributions to country-level goals and priorities
• KPI C – Evidence of extent of contributions to relevant MDGs
• KPI D – Relevance of objectives and programme of work to stakeholders.
The assessments at the institutional/organisational level (KPI A) and at the country level (KPI B) are separated due to differences in focus, scope and reporting on results at these two levels.
KPIs B, C, and D all focus on the country level and are applied in five countries. Each multilateral organisation is asked to identify the countries where it is likely to have the best data on results.
KPI D assesses relevance as a measure of the extent to which surveyed stakeholders consider the multilateral organisation supports country priorities and meets the changing needs of direct partners and the target population.
Methodology
Various types of qualitative and quantitative data are sought to answer a set of questions about the multilateral organisation’s performance in the indicators noted above. Data are collected using three different methods: document review, stakeholder survey, and interviews with staff at HQ and, if feasible, in country offices.
• The document review draws largely on organisational performance reports and country level performance reports and evaluations.
• The stakeholder survey asks donor and direct partner respondents to rate the organisation’s achievement of planned results and the relevance of its activities at the country level. The questions are tailored, as required, to each organisation.
• Interviews are conducted to identify reliable data, identify key staff to consult in country offices, and to help contextualise the analysis of results.
Analysis of institutional level data focuses on the extent to which planned results from the strategic period were achieved, based largely on performance reports and thematic evaluations. Analysis of country level data focuses on the organisation’s contribution to results in the sample of countries selected for the MOPAN assessment.
The judgment of an organisation’s performance on each KPI draws on a set of criteria. The Assessment Team uses a “best fit approach,” a criteria-referenced basis for judgment that is suitable when criteria are multi-dimensional, there is a mix of both qualitative and quantitative data, and it is not possible to calculate a simple sum of the data points.[10]
The ratings reflect the Assessment Team’s judgment and analysis of data from all sources. The ratings are qualitative in nature and defined according to a 4-point scale: strong, adequate, inadequate, and weak. As with the 6-point scale used in the survey, a rating of “strong” signals that the organisation is approaching best practice, while a rating of “weak” signals that the organisation still has important limitations in demonstrating progress towards its stated results, and particularly its contributions to development outcomes.
Limitations to the Methodology
The methodology was designed to draw on the evidence of results achieved, as presented in the reports of a multilateral organisation. However, there is a critical difference between assessing the actual results achieved on the ground and assessing the evidence of results in the organisation’s reports to its key stakeholders. This is a limitation that is inherent in the current approach.
MOPAN will review the experience with this pilot component and make adjustments in the methodology, as required.
3. Main Findings: Practices and Systems that Enable the Achievement of Results
3.1 Introduction
This chapter presents the findings of the 2012 MOPAN assessment of the practices and systems of UNDP. Findings are based on respondent survey data and document review.
• Section 3.2 presents overall ratings on the performance of UNDP and summarises respondent views on its primary strengths and areas for improvement
• Section 3.3 provides findings on each of the four areas of performance (strategic, operational, relationship, and knowledge management).
3.2 Overall Ratings
This section provides a summary of overall ratings. It includes: survey respondent ratings of UNDP’s overall organisational effectiveness, survey respondent views on UNDP’s strengths and areas for improvement, and survey and document review ratings for all key performance indicators.
Survey ratings of UNDP’s organisational effectiveness
MOPAN has defined “organisational effectiveness” as the extent to which a multilateral organisation is organised to support direct partners in producing and delivering expected results. Respondents were asked the question: “How would you rate the overall organisational effectiveness of UNDP?” As shown in Figure 3.1, direct partners provided a greater percentage of high ratings (5 and 6) than other respondent groups.
Figure 3.1 Overall Ratings of UNDP Organisational Effectiveness by Respondent Group
Respondents’ Views on UNDP’s Strengths and Areas for Improvement
The survey included two open-ended questions that asked respondents to identify UNDP’s greatest strengths and areas for improvement. All 262 respondents provided responses to both questions. Respondent comments that touched on more than one theme were coded under multiple categories.
Survey respondents considered UNDP’s greatest organisational strengths to be its technical expertise and knowledge, as well as its coordination and leadership functions/capacities within the United Nations system. Many respondents also commented positively on UNDP’s ability to work well with a multiplicity of national partners.
UNDP was commended by 24 per cent of respondents (62) for its strong technical expertise. In particular, respondents noted that UNDP’s knowledge of partner countries enables it to better address capacity building and other issues at the country level. In addition, respondents indicated that they valued UNDP staff for their diverse professional backgrounds and expertise, which enables them to provide skilled support to implementing partners. UNDP is also recognised for being able to tap into a worldwide network of external experts.
UNDP’s role as the focal point of the United Nations system was perceived by 21 per cent of respondents (55) as a strength. Respondents highlighted UNDP’s roles as Chair of the United Nations Development Group (UNDG), lead agency for the Millennium Development Goals (MDGs), publisher of the Human Development Reports (HDR), and coordinator of the Delivering as One initiative within countries.
According to 19 per cent of respondents, UNDP’s coordination and collaboration with partners, such as state actors and civil society organisations, also constitutes a key strength. UNDP is seen as working closely with partners and as creating synergies among them.
Survey Respondent Comments on UNDP Strengths
“Its greatest strength lies in its promotion of national ownership of programmes by national institutions and civil society.” (Direct partner)
“UNDP is present in several sectors and subsectors and thereby has the opportunity to have a more holistic approach in supporting the partner country in achieving its development goals. UNDP appears to be sectorally technically strong as well as able to access a large global resource base to enable qualified support and interventions.” (Donor in-country)
“Thanks to its extensive network of Country Offices and to the coordination role of the Resident Representative in most of the Offices, UNDP can interact with the institutions of partner countries in a timely manner even in a context of political/natural/economic disruptions.” (Donor at headquarters)
Respondents suggested that UNDP’s bureaucracy and administrative procedures could be more efficient. Some also voiced the need for UNDP to further sharpen its organisational focus and improve monitoring, evaluation and reporting on results. UNDP’s coordination and collaboration with partners – an area that some respondents consider to be a strength – was also mentioned as an area for improvement.
More than one-quarter of respondents (76) identified UNDP’s heavy bureaucracy and the resulting inefficiencies as the organisation’s key area for improvement. This concern was expressed in particular by direct partner respondents, 88 per cent of whom (61) noted it as an organisational weakness. The respondents highlighted issues related to UNDP’s decision-making, human resources, planning processes, and its systems for managing funds.
UNDP’s organisational focus and orientation was also seen by 16 per cent of survey respondents (43) as requiring further improvement. Donors at headquarters and in-country commented on UNDP’s lack of specialised research in its priority areas, and highlighted the need for the organisation to use its comparative advantage to develop a more in-depth and focused programme. Its current programming was described as broad and spread out over too many sectors, limiting its impact.
Interestingly, UNDP’s coordination and collaboration with partners – an area identified above as a strength – was also mentioned by 16 per cent of survey respondents (41) as being one of the organisation’s weaknesses. Respondents, especially donors in-country and direct partners, noted the need for UNDP to improve its communication and information sharing with counterparts, especially with those outside of the UN system. The process for selecting implementing partners and UNDP’s cooperation strategy with them were also mentioned as areas requiring attention.
Finally, a further 16 per cent of respondents (41) felt that UNDP’s capacity to monitor, evaluate, and report on results should be improved. Donors at headquarters and direct partners were most concerned. Comments focused on the need for UNDP’s reporting to better communicate and demonstrate contribution to results at the organisation-wide, regional and country levels, and for impact evaluations to be conducted on projects and programmes.
Survey Respondent Comments on UNDP Areas for Improvement
“The links between the Headquarters and the Country Offices could be made more efficient. Human Resources Management is also a serious issue (lack of career plans/progression path to build up a knowledgeable and stable workforce, etc.) that hinders the long-term functioning of operations.” (Donor at headquarters)
“UNDP appears to have too many activities happening at the same time without the needed resources available to finish or follow up with the activities. Some self-reflection is needed in order to adjust the actual engagement level without compromising the quality of the service and support given.” (Donor in-country)
“UNDP needs to look at the ‘replicability’, the ‘continuity’ and ‘up-scaling’ of projects/programmes they fund, especially after the end of UNDP funding. UNDP projects/programmes should have an ‘impact evaluation’ component, which should be shared with the implementing agencies and if possible with the beneficiaries themselves and the public.” (Direct partner)
“Bureaucratic procedures, especially between the local management and HQ in NY, are slow and apparently cumbersome and inflexible. A tendency to take on tasks and responsibilities without the necessary increase in staffing.” (Direct partner)
Overall Ratings of Key Performance Indicators
Figure 3.2 below presents scores from the document review and the survey on key performance indicators (KPIs) in the MOPAN 2012 assessment of UNDP. The white bar presents the survey score, while the black square presents the document review score. For example, on the first indicator, “providing direction for results”, UNDP received a score of 4.26 (adequate) in the survey and a score of 5 (strong) in the document review. In the overall ratings from the survey and document review, UNDP was seen to perform adequately or better on the majority of key performance indicators. UNDP received scores of adequate or better on all 19 KPIs assessed in the survey, and on 10 of the 16 KPIs assessed in the document review.[11]
The survey and document review ratings differed on 12 KPIs: the document review rated 7 of these lower than survey respondents did, and the remaining 5 higher. The reasons for these differences are discussed in the following sections.
Figure 3.2 Overall Ratings on Key Performance Indicators (mean scores, all respondents and document review ratings)1
1 The document review for KPI 14 was designed to draw on data from the 2011 Survey on Monitoring the Paris Declaration. The white diamond indicates that there was insufficient data to provide ratings on two of the three MIs in this key performance indicator.
3.3 UNDP’s Performance in Strategic, Operational, Relationship, and Knowledge Management
3.3.1 Overview
This section presents the results of the 2012 Common Approach assessment of UNDP in four performance areas (quadrants): Strategic, Operational, Relationship, and Knowledge Management.
The following sections (3.3.2 to 3.3.5) provide the overall survey and document review ratings for the KPIs in each quadrant, the mean scores by respondent group, and findings based on an analysis of survey and document review ratings in each quadrant.
When there were notably divergent ratings between survey respondent groups or between the survey results and document review ratings, these are noted and the information gleaned from interviews with staff is integrated when it has a bearing on the analysis. Where statistically significant differences among categories of respondents were found, these differences are noted.
The survey data for each KPI and MI by quadrant are presented in Volume II, Appendix V. The document review ratings are presented in Volume II, Appendix VI.
3.3.2 Strategic Management
Survey responses and the document review presented a mixed picture of UNDP’s performance in strategic management. While overall survey scores were adequate or better, there were differences between respondent groups. The document review identified room for improvement in UNDP’s corporate focus on results; UNDP is aware of the challenges in this area and is engaged in addressing them.
Figure 3.3 presents the overall survey and document review ratings for the four KPIs in the strategic management quadrant. As shown in Figure 3.4, donors and direct partners had divergent views: direct partners rated UNDP as strong on all KPIs they assessed, while donors at headquarters and in-country rated it adequate.
The document review noted deficiencies in the organisation’s corporate results frameworks (e.g., results chain and indicators) but also recognised its considerable and on-going efforts to strengthen results systems and practices (such as the creation of a peer review mechanism to aid in the development of the next results framework) and efforts to improve performance management as part of its Agenda for Organizational Change.
Overall, UNDP was rated strong by survey respondents and adequate by document review for integrating cross-cutting priorities into its work, with some variation in ratings across the different priority areas.
Figure 3.3 Quadrant I: Strategic Management, Survey and Document Review Ratings
Figure 3.4 shows the mean scores for the four KPIs for all survey respondents, and by category of respondent.
Figure 3.4 Quadrant I: Strategic Management, Mean Scores by Respondent Group
KPI 1: Providing Direction for Results
Finding 1: Survey respondents rated UNDP’s executive management as adequate in providing direction for the achievement of external/beneficiary focused results. The document review rated UNDP as strong in making key documents available to the public.
Overall, survey respondents rated UNDP as adequate on the three MIs in this KPI: the organisation was considered to support a focus on results and on beneficiaries, to demonstrate leadership on results management, and to make key documents available to the public. The document review, which only rated the MI on availability of documents, gave UNDP a score of strong.
Figure 3.5 KPI 1: Providing Direction for Results, Ratings of Micro-Indicators
MI 1.1 – Value system supports results-orientation and direct partner focus
Survey respondents were asked whether UNDP’s institutional culture reinforces a focus on results and whether it is direct-partner focused. On both counts, direct partners rated UNDP more positively than other respondent groups, and these differences are statistically significant. UNDP was rated as adequate overall.
MI 1.2 – Leadership on results management
MOPAN donors at headquarters were the only respondent group asked about the extent to which senior management demonstrates leadership on results management. More than 70 per cent rated UNDP as adequate or above on this indicator.
MI 1.3 – Key documents available to the public
The majority of survey respondents (77 per cent) rated UNDP as adequate or above in making key documents readily available to the public.
The document review rated UNDP strong on this MI. On its website, UNDP provides an information disclosure policy, as well as all key documents, including Executive Board decisions, reports on sessions, and evaluations. Moreover, to improve transparency, UNDP shares data on its global website on the expenditures incurred in each of its projects (in accordance with the standard of the International Aid Transparency Initiative). Improvements could be made, however, to UNDP’s French and Spanish websites: many French and Spanish translations of documents are available only through the English website.
KPI 2: Corporate Focus on Results
Finding 2: UNDP was perceived by survey respondents as adequately focusing on results. The document review noted that UNDP’s results frameworks are less than adequate but that it is undertaking reforms to improve this.
MOPAN donors at headquarters were the only respondent group asked to assess UNDP’s performance on this KPI. They rated UNDP as adequate for operationalising its mandate in an organisation-wide strategy but had mixed views on UNDP’s application of the principles of results-based management.
The document review ratings varied from weak to strong. The 2012 MOPAN assessment was conducted at a time when UNDP is undertaking significant reforms in its approach to planning, monitoring and reporting on results. Although weaknesses were identified in UNDP’s results frameworks, particularly in terms of the selection of indicators and establishment of results chains, survey data and documents suggest that the organisation has recognised most problems and is committed to strengthening its results focus and to developing robust frameworks for the next Strategic Plan (2014-2017).
Figure 3.6 KPI 2: Corporate Focus on Results, Ratings of Micro-Indicators
MI 2.1 – Organisational strategy based on clear mandate
MOPAN donors at headquarters were asked to rate UNDP on the clarity of its mandate and on whether its organisation-wide strategy (the Strategic Plan covering the 2008-2013 period) is in alignment with this mandate. The majority of respondents (72 per cent) rated it adequate or above. The document review rated UNDP as adequate on this MI.
Since its creation as the consolidation of the Expanded Programme of Technical Assistance and the United Nations Special Fund in 1965, UNDP’s mandate has been revised by General Assembly resolutions and Executive Board decisions to ensure its continued relevance. Notably, in the late 1990s, UNDP’s Executive Board approved a mission statement for the organisation as well as 20 guiding principles to focus its interventions.
The document review found that UNDP’s 2008-2013 Strategic Plan is aligned with its mission statement. The Strategic Plan identifies four priority focus areas that are based on historical demand and UNDP’s comparative advantage: poverty reduction, democratic governance, crisis prevention and recovery, and environment and sustainable development. It also describes how each of these will be implemented in the 2008-2013 programming cycle.
Over the period of the current plan, UNDP has had to evolve and respond to shifts in the development landscape (e.g., financial, food and fuel crises; the rise to prominence of climate change issues, and the global consensus to increase efforts to achieve the Millennium Development Goals) and to different stakeholder demands.[12] It has made some mid-term adjustments to reflect these shifts in landscape and is working to sharpen its focus on key priorities for 2012-2013 and for the next strategic cycle (2014-2017).
MI 2.2 – Organisational policy on results management
MOPAN donors at headquarters were the only respondent group consulted on the extent to which UNDP ensures the application of results management across the organisation. Although they rated UNDP as adequate overall on this MI, there were mixed views; 46 per cent of respondents rated UNDP as inadequate or below while 48 per cent perceived its performance as adequate or above.
The document review rated UNDP adequate for promoting an organisation-wide policy on results management. The addendum to UNDP’s Strategic Plan describes the nature of its engagement with results-based management (RBM) and specifies how it is integrated at different levels within the organisation. In addition, UNDP has established guidelines for results management and provides staff responsible for managing programmes and projects with relevant training.
In 2011, UNDP also embarked on an ambitious reform process, known as the Agenda for Organizational Change (AOC), which promises to further strengthen the organisation’s results focus. It has made significant improvements in its corporate strategic planning system with the introduction of an Annual Business Plan that serves as the foundation for integrated work plans at both headquarters and country level. It has also improved its monitoring and reporting system by redesigning the country-level results reporting tool (the Results Oriented Annual Report) with more precise indicators intended to improve the aggregation of performance data, and by establishing a system to scan corporate performance in relation to delivering on the Strategic Plan.
MI 2.3 – Plans and strategies contain results frameworks
This MI was only assessed through document review, which rated UNDP as inadequate. UNDP’s Development Results Framework (DRF) and Management Results Framework (MRF), originally presented in the addendum to the June 2008 Strategic Plan, were revised following the Mid-Term Review in 2011.[13] Despite efforts to strengthen these frameworks, elements of the amended DRF and MRF need improvement.
While the DRF includes both outcome and output level statements, these are not linked to each other in the framework: outcomes are presented in a section on UNDP’s four focus/programming areas, while outputs are presented in a separate section on development effectiveness. The latter section does not focus on UNDP’s products and services directly, but rather on enabling factors which can improve programming (e.g., quality of country programming, networking and knowledge on practices, and integration of gender equality concerns). In addition, the outcome statements in the DRF often incorporate many ideas (composite outcomes) and mix ends and means, which can render them unclear.
These issues are not unknown to UNDP. In fact, the organisation’s Executive Board, upon receiving the Mid-Term Review at its 2011 annual session, emphasised the need for more robust results frameworks and requested that UNDP develop frameworks for 2014-2017 that include the following elements: outputs, outcomes and impacts.[14] UNDP has since actively engaged in preparing the organisation’s future results frameworks, consulting with Executive Board members, organising workshops, and establishing a peer review group to provide advice and feedback for the design of the next DRF.
MI 2.4 – Results frameworks link outputs to final outcomes/impacts
This MI was assessed only through the document review, which rated UNDP as weak for the results chains in its frameworks. As stated in the evidence for the previous MI, outputs and outcomes are presented in UNDP’s DRF, but these results statements are not related. In addition, impact statements are missing from the DRF. In the MRF, only output statements are presented. Hence, there are no results chains to speak of in UNDP’s current results frameworks.
UNDP is committed to improving its results chains for the 2014-2017 Strategic Plan. Following a workshop on results chains organised in collaboration with Norway in November 2011, which involved results-based management and evaluation experts from all regions as well as senior staff members from other UN funds and programmes, UNDP drafted results chains for four outcomes from its current Strategic Plan and sent these examples to its peer review group for their input. The example results chains are indicative of promising change for the quality of UNDP’s next results frameworks.
MI 2.5 – Plans and strategies contain performance indicators
This MI was assessed only through the document review, which rated UNDP as weak for the quality of the performance indicators presented in its corporate results frameworks. The majority of indicators in the DRF provide an insufficient basis to assess corresponding results statements, lack specificity on what is to be measured, and lack baseline values and targets. In addition, the four output indicators (presented in the section on UNDP’s four focus areas) are not associated with any output statements and serve more to compile information for UNDP’s reporting exercise than to track achievement of outputs.
The MRF includes output indicators that are, for the most part, relevant, monitorable and clear, but it does not include outcome indicators or outcome level results.
KPI 3: Focus on Thematic Priorities
Finding 3: UNDP’s mainstreaming of gender equality was rated strong by survey respondents and the document review. The other three cross-cutting priorities (capacity development, South-South cooperation, and human rights based approaches) received mixed ratings.
The MOPAN Common Approach examined UNDP’s practices and systems related to the integration of four cross-cutting thematic priorities: gender equality, capacity development, South-South cooperation, and human rights-based approaches.
While the first three are explicitly mentioned as cross-cutting priorities in UNDP’s Strategic Plan, human rights-based approaches are not. The Strategic Plan indicates that UNDP’s work on human rights is responsive to demand from programming countries.
Overall, survey respondents rated UNDP as adequate or strong for mainstreaming each of the four priorities in its work, while the document review ratings ranged from weak to strong.
Figure 3.7 KPI 3: Focus on Thematic Priorities, Ratings of Micro-Indicators
MI 3.1 – Gender equality
In the survey, UNDP received an overall rating of strong for the extent to which it mainstreams gender equality in its work. A little more than half of respondents (51 per cent) rated UNDP as strong or very strong, 34 per cent as adequate, and 8 per cent as inadequate or weak. Direct partners were more positive in their responses, while MOPAN donors in-country were less so. These differences are statistically significant.
In line with the survey, the document review rated UNDP as strong for its gender mainstreaming efforts. Following a negative evaluation in 2005, UNDP made substantial changes to its organisational systems and practices on gender. Its Gender Equality Strategy (2008-2013) delineates roles and responsibilities for gender mainstreaming at the corporate, regional and country levels. In 2008, UNDP introduced gender considerations in its country office Results-Oriented Annual Reports (ROARs) to track whether and how each programme outcome contributes to gender equality and women's empowerment. In 2009, it instituted a gender marker to track allocations and expenditures for gender equality results within its financial management system – which has been touted by the UN Secretary General as a best practice to be replicated by other organisations. The 2011 review of UNDP’s Gender Equality Strategy praised UNDP’s accountability mechanisms for mainstreaming gender as being robust. However, further progress is needed to achieve gender parity in UNDP’s middle and senior level management, to ensure more even leadership from senior managers, and to establish Gender Focal Teams within all country offices.
MI 3.2 – Capacity development
A majority of survey respondents (55 per cent) rated UNDP as strong or very strong for mainstreaming capacity development in its work. The document review rated the organisation as adequate on this MI.
Capacity development is described in UNDP’s Strategic Plan as the organisation’s overarching service or contribution to programming countries and as a priority to be pursued across all four focus areas. The organisation has strong guidance for capacity development and a range of tools for its staff, for instance on conducting capacity assessments and on measuring capacity development results. Inspired by its successful experience with the gender marker, UNDP launched a capacity development tracker in 2011 to assess the extent to which capacity development is integrated into its project planning.[15]
Although UNDP has invested significant resources over the last twenty years and made a leading contribution to advancing knowledge on effective capacity development, the Evaluation of UNDP Contribution to Strengthening National Capacities (2010) indicates that capacity development advisory teams within UNDP Regional Service Centres are small relative to demand for their support, meaning that they are unable to play a full and substantive role across all countries. Additionally, UNDP has been slow to act on the recommendations of this evaluation.[16]
MI 3.3 – South-South cooperation
The majority of respondents (65 per cent) rated UNDP as adequate or above on the extent to which it mainstreams South-South cooperation (SSC) in its work and 19 per cent answered ‘don’t know’.
The document review rated UNDP as adequate on this MI. The Fourth Cooperation Framework for South-South Cooperation, which is prepared by UNDP’s Special Unit for South-South Cooperation, provides strategic direction and policy tools for all partners interested in SSC, and details UNDP country offices’ operational activities for this cross-cutting priority. The document, which has been extended until 2013, also defines SSC institutional and development results to be achieved by UNDP’s Special Unit. However, the organisation has yet to deliver on its promise to prepare a corporate strategy on SSC that elaborates on specific roles, lines of accountability, and responsibilities for joint results. Moreover, the UN Joint Inspection Unit’s evaluation of South-South and Triangular Cooperation in the United Nations System (2011) indicates that the presence of staff from UNDP’s Special Unit within Regional Service Centres is insufficient to ensure appropriate regional and country coverage, and that there is a mismatch between the Special Unit’s resources and its mandate. An evaluation of the South-South Programme by UNDP’s Evaluation Office is scheduled to be presented to the Executive Board in 2013 and could demonstrate whether progress has been made on these issues.
MI 3.4 – Human rights-based approaches
Overall, survey respondents rated UNDP as strong in applying human rights-based approaches in its work (77 per cent rated UNDP as adequate or above and only 11 per cent rated it as inadequate or below). Direct partners were more positive than other respondent groups and this difference is statistically significant.
Strictly adhering to MOPAN criteria, the document review rated UNDP as inadequate on this MI. UNDP issued a policy document on Integrating Human Rights with Sustainable Human Development in 1998 and a practice note on Human Rights in UNDP in 2005. However, these documents do not clearly define roles and responsibilities for integrating the human rights dimension in UNDP’s work. Although UNDP has a Global Human Rights Strengthening Programme (GHRSP) dedicated to mainstreaming human rights in UNDP’s policies, programmes and processes, and a tripartite agreement with the Office of the High Commissioner for Human Rights (OHCHR) and the International Coordinating Committee of National Human Rights Institutions, there is a lack of accountability mechanisms for integrating human rights-based approaches within the organisation. The GHRSP 2011 annual report[17] indicates that UNDP intends to establish a mainstreaming forum on human rights and use it as an opportunity to reconsider its accountability framework and potentially to make changes to its performance appraisal system, its results reporting at the country level, the responsibilities of the Resident Coordinator, and/or staff training.
Due to political sensitivities and changing directives from UNDP’s Executive Board, human rights-based approaches are not an official cross-cutting priority for the organisation in its current Strategic Plan. In its communication with the MOPAN Assessment Team, UNDP noted that the upcoming Strategic Plan for 2014-2017 represents an opportunity to firmly anchor human rights-based approaches in the organisation and to ensure that human rights mainstreaming is given the same emphasis as the other cross-cutting priorities.
KPI 4: Country Focus on Results
Finding 4: Survey respondents rated UNDP’s country programme documents as strong for their results focus. The document review rated UNDP as adequate, noting areas for improvement in the formulation of indicators and statements of expected results.
Survey questions on this KPI were asked only of MOPAN donors in-country and direct partners. UNDP was perceived as strong overall but direct partners responded more positively than donors on each MI, and the differences are statistically significant.
The document review rated UNDP adequate overall on this KPI. It considered UNDP’s country frameworks strong in aligning expected results with national development strategies and in integrating cross-cutting results, but inadequate in linking results and defining indicators at different levels.
Figure 3.8 KPI 4: Country Focus on Results, Ratings of Micro-Indicators
MI 4.1 – Frameworks link results at project, programme, sector and country levels
MOPAN donors in-country and direct partners were asked whether UNDP's country programme documents (CPDs/CPAPs) link results from project, sector and country levels. They rated UNDP as strong overall.
The document review rated UNDP as inadequate on this MI. UNDP’s country programme documents contain statements of expected results at output and outcome levels that are also generally referred to in project documents. While these results statements are linked to UNDP’s focus areas, they are not linked to the outcomes in UNDP’s corporate development results framework (DRF), which complicates aggregation of results at the corporate level. In addition, the results statements in country programme documents are not always appropriate to their results level and the link between output and outcomes is not always logical.
MI 4.2 – Frameworks include indicators at project, programme, sector and country levels
Overall, survey respondents rated UNDP as strong on the indicators included in its results frameworks at national, sectoral, and project/programme levels (46 per cent rated UNDP strong or very strong).
The document review rated UNDP as inadequate on this MI. Output level indicators were for the most part found to be monitorable, relevant and clear, though sources of data and data collection methods were never specified. However, few country programme documents (CPDs or CPAPs) reviewed include outcome level indicators. Consequently, indicators provide an insufficient basis to assess UNDP’s achievement of planned results.
MI 4.3 – Expected results consistent with national development strategies and UNDAF
On this MI, 51 per cent of survey respondents rated UNDP strong or very strong for including statements of expected results that are consistent with national development strategies in its country programme documents (CPDs or CPAPs).
The document review found UNDP very strong in ensuring consistency between its expected results and national development priorities. All UNDP country strategies explicitly show how UNDP’s expected results contribute to those included in the UNDAF.
MI 4.4 – Expected results developed in consultation with direct partners/beneficiaries
This micro-indicator was assessed only by survey. Respondents were asked whether UNDP consults with direct partners to develop its expected results. More than half (59 per cent) rated UNDP strong or very strong.
MI 4.5 – Results for cross-cutting priorities included in results frameworks
UNDP obtained an overall score of strong in the survey for including results related to cross-cutting priorities such as gender and capacity development within its country programme documents (CPDs or CPAPs). The majority of survey respondents (52 per cent) rated UNDP as strong or very strong.
The review of documents rated UNDP as strong for its inclusion of cross-cutting priorities in its country strategies. All country programme documents (CPDs or CPAPs) reviewed refer to cross-cutting priorities. However, they do not always identify all four thematic priorities assessed by MOPAN (gender equality, capacity development, South-South cooperation, and human rights-based approaches) or articulate these in their results frameworks. This was especially the case for South-South cooperation and human rights-based approaches.
3.3.3 Operational Management
UNDP is considered strong in most aspects of financial accountability and in delegating decision making to the country level. There is room for improvement in other areas of operational management, most notably in linking aid management to performance through results-based budgeting, managing human resources, and making programmes more performance oriented (e.g., by setting targets).
Figure 3.9 below shows the overall survey and document review ratings for the seven KPIs in the operational management quadrant.
Overall, survey respondents rated UNDP’s performance in areas related to operational management as adequate, except for UNDP’s financial accountability, which they highlighted as strong. The document review provided mixed ratings that varied from weak to strong. There was sound evidence that UNDP’s external and internal audit mechanisms provide independent and credible information to detect financial and operational irregularities, and that performance information is adequately used to revise UNDP’s policies and strategies. UNDP’s commitment to delegating authority at the country level was also considered strong, in line with the original UN General Assembly resolution which established UNDP’s country programming. However, implementation of guidelines for impact assessments prior to the initiation of projects and programmes was observed to be at an early stage, and the indicators and targets used to track project implementation and the achievement of results were often weak.
Figure 3.9 Quadrant II: Operational Management, Survey and Document Review Ratings
Figure 3.10 shows the mean scores for the KPIs for all survey respondents, and by respondent groups.
Figure 3.10 Quadrant II: Operational Management, Mean Scores by Respondent Group
KPI 5: Resource Allocation Decisions
Finding 5: UNDP was considered adequate overall for allocating resources in line with established criteria.
Survey respondents and the document review rated UNDP’s resource allocation performance as adequate. There was insufficient data to assess the predictability of UNDP’s disbursements.
Figure 3.11 KPI 5: Resource Allocation Decisions, Ratings of Micro-Indicators
MI 5.1 – Criteria for allocating resources publicly available
Survey respondents were asked whether UNDP makes readily available its criteria for allocating resources. Though the majority (66 per cent) rated UNDP as adequate or above on this MI, nearly one-quarter (23 per cent) rated it as inadequate or weak. Direct partners responded more positively than donors in-country, with this difference being statistically significant.
In line with the survey data, the document review rated UNDP as adequate on this MI.
In 1995, UNDP’s Executive Board introduced a new three-tier target system (TRAC) to allocate core resources at the country level.[18] The system was reviewed and revised in 1999, 2002 and 2007, and is expected to be modified in 2013 to increase flexibility and responsiveness to the programming needs of countries, as well as to factor in considerations related to the organisation’s next Strategic Plan (2014-2017), upcoming integrated budget, and ongoing reform process (Agenda for Organizational Change). Documents on UNDP’s website (in the section on the Executive Board) provide information on the criteria used to determine the distribution of UNDP’s regular programme resources, but the formula for allocating resources is not publicly available. Internally, UNDP’s Office of Planning & Budgeting calculates the allocation of core resources in June of every year, in accordance with the criteria in effect, and posts the information on UNDP’s intranet for each Country Office to view and acknowledge. Upon acknowledgment from each Country Office, the resources are then released.
MI 5.2 – Resource allocations conform to criteria
The majority of survey respondents (69 per cent) rated UNDP as adequate or above for allocating its resources according to the established criteria, though 12 per cent rated it as inadequate or below.
MI 5.3 – Resources released according to agreed schedules
Given insufficient documentary evidence, UNDP was not rated on whether it releases planned resources according to agreed schedules.
KPI 6: Linking Support to Performance
Finding 6: UNDP has made progress in results-based budgeting, and this is evident in the views of survey respondents. Its reporting, however, does not yet fully reflect the improvements in its resource planning system.
Overall, UNDP was perceived by survey respondents as adequately linking its resources to results. In contrast, the document review rated UNDP’s performance as inadequate, as there is room for improvement in UNDP’s reporting.
Figure 3.12 KPI 6: Linking Aid Management to Performance, Ratings of Micro-Indicators
MI 6.1 – Allocations linked to expected results
UNDP was rated adequate overall by survey respondents for linking budget allocations to expected results: 58 per cent rated UNDP as adequate or above and 24 per cent as inadequate or below. Direct partners were more positive in their responses than MOPAN donors at headquarters or in-country and the differences are statistically significant.
The document review assigned UNDP a rating of inadequate on this MI. While the organisation tracks budget and expenditures from activities through to outcomes in its internal enterprise resource planning system (Atlas), the organisation-wide budget documents presented to UNDP’s Executive Board do not yet provide cost information on expected development results. Jointly with UNICEF and UNFPA, however, UNDP has been implementing a harmonised cost classification model and adapting its reporting practices to results-based budgeting (RBB). As part of this process, UNDP published its first Institutional Budget for 2012-2013 (2011), a document that follows a results-based budgeting format in which management and UN coordination outputs are costed. The Institutional Budget is an important step towards UNDP’s goal of presenting a consolidated budget in 2014 that integrates resources with development and management results.
MI 6.2 – Disbursements linked to reported results
MOPAN donors at headquarters were asked whether UNDP’s results reports provide information on the amounts disbursed to achieve these results. Though half of respondents (50 per cent) rated UNDP as adequate or above, 31 per cent rated it as inadequate or below.
In the document review, UNDP was rated inadequate for linking disbursement information to reported results. Although UNDP’s Annual Report to the Administrator: Performance and Results has for the last three years included expenditure figures for all corporate outcomes, no discussion or information is provided on the variances that exist between planned and incurred costs, or between expected and achieved organisation-wide results.
KPI 7: Financial Accountability
Finding 7: Survey respondents considered UNDP’s policies and processes for financial accountability to be strong. The document review rated UNDP strong or very strong on nearly all MIs.
Survey respondents rated UNDP strong on five MIs and adequate on two. Their level of familiarity seemed low, however, with ‘don’t know’ responses ranging on average from 25 to 34 per cent on all MIs except for one pertaining exclusively to UNDP’s internal financial audit processes.
The review of documents generally provided strong or very strong ratings on the MIs related to financial accountability. UNDP was found to have clear and well-defined practices and systems in place to identify irregularities and risks, and to implement corrective actions.
Figure 3.13 KPI 7: Financial Accountability, Ratings of Micro-Indicators
MI 7.1 – External financial audits performed across the organisation
MOPAN donors at headquarters, the only respondent group consulted, were asked two questions on this MI. They rated UNDP as strong for conducting external financial audits that meet international standards, but only adequate for these audits meeting their needs. Both questions generated a high proportion of ‘don’t know’ responses (respectively 29 and 23 per cent).
UNDP was rated very strong on this MI based on a review of documents. External audits of UNDP’s financial statements are conducted by the United Nations Board of Auditors (UNBOA) on a biennial basis. The audit reports include a letter of transmittal and certification from the Chair of the UNBOA confirming that the audit has been performed in accordance with the United Nations System Accounting Standards.
MI 7.2 – External financial audits performed at the regional, country, or project level
Donors in-country and direct partners were asked whether UNDP's regional or country-level operations are appropriately audited by an external body. More than half of respondents (59 per cent) rated UNDP as adequate or above, and only 7 per cent as inadequate or below. However, 34 per cent of respondents responded ‘don’t know’. Direct partners responded more positively than donors in-country, and the difference is statistically significant.
The document review rated UNDP as adequate on this MI. External audit reports of UNDP’s financial transactions and operations are performed by the UNBOA at the institutional level, with some coverage at the regional and country levels. Aggregate findings from country audits, however, are mentioned only briefly in reports to support recommendations made at the institutional level. The sampling methodology used to determine which regional or country offices are audited is not explained.
Internally, UNDP’s Office of Internal Audit is responsible for providing oversight services (including financial audit) for all programmes, operations and activities undertaken by UNDP headquarters, country offices, regional service centres, liaison offices, and offices in any other location. UNDP’s internal audits ensure ample coverage. These reports have not been available to the public, but in June 2012 UNDP’s Executive Board decided that the organisation will share all internal audit reports issued after December 2012 on its website. Executive summaries of all previous internal audits will also be disclosed.
MI 7.3 – Policy on anti-corruption
MOPAN donors at headquarters rated UNDP as strong for having an appropriate policy on anti-corruption: 44 per cent rated it strong or very strong, 29 per cent as adequate, 2 per cent as inadequate, and 25 per cent responded ‘don’t know’.
In the document review, UNDP was rated very strong for its policy and guidelines on anti-corruption. The UNDP Anti-Fraud Policy (2011), which replaced the UNDP Fraud Policy Statement issued in 2005, provides information on complaint mechanisms and on ‘whistle blower’ protection for staff reporting on fraud. It also explains the roles, responsibilities and accountabilities of the Administrator, managers, staff, and contractors. The policy commits managers, through a consultative process, to identify and assess the risk of fraud in programmes and project areas in accordance with UNDP’s Enterprise Risk Management (ERM) Framework (2010). The policy also refers to additional relevant documents for combating fraud, such as the Programme and Operations Policies and Procedures (2011) section on procurement fraud and corrupt practices.
MI 7.4 – Systems for immediate measures against irregularities
Survey respondents rated UNDP as strong overall for appropriately following up on financial irregularities, including fraud and corruption: 36 per cent rated it strong or very strong, 24 per cent as adequate, and 8 per cent as inadequate or weak. ‘Don’t know’ responses ranged from 27 to 38 per cent among respondent groups. Donors in-country were less positive than the other respondent groups and the differences are statistically significant.
UNDP’s systems for taking immediate measures against irregularities were rated very strong by the document review. Together, the financial regulations and rules of the United Nations and of UNDP provide detailed guidance on irregularities to be investigated by internal and external audits. Executive Board decisions also define procedures that UNDP must follow in responding to irregularities identified through these audits (e.g., requesting that it prioritise recommendations and set an expected time frame for their implementation). The organisation not only tracks implementation of audit recommendations through its internal web-based dashboard, but provides this information to its Executive Board through three reports: the Office of Audit and Investigations (OAI)’s Annual Report on Internal Audit and Investigations, UNDP’s Report on the implementation of the recommendation of the Board of Auditors, and UNBOA’s Financial Report and Audited Financial Statements for the Biennium. UNDP’s Bureau of Management also plays a quality assurance function for both internal and external audits, identifying systemic issues and overseeing implementation within UNDP.
MI 7.5 – Internal financial audit processes provide credible information
MOPAN donors at headquarters perceived UNDP’s internal financial audits to be strong in providing credible information to its governing bodies: half rated UNDP as strong or very strong, 31 per cent as adequate, and 4 per cent as weak.
UNDP’s internal audit processes were rated very strong based on documentary evidence. UNDP Financial Regulations and Rules (2012) designate the organisation’s Office of Audit and Investigations (OAI) as the body responsible for conducting “independent, objective assurance and advisory activities” in accordance with the international standards developed by the Institute of Internal Auditors. The OAI’s charter further defines its responsibilities and scope of work, and emphasizes its structural independence from programming, highlighting that the OAI must report and is accountable to the Executive Board.
MI 7.6 – Effective procurement and contract management processes[19]
UNDP’s procurement and contract management processes for the provision of services were rated adequate or above by 57 per cent of survey respondents, inadequate or below by 11 per cent, and 32 per cent responded ‘don’t know’. Direct partners responded more positively than donors in-country and at headquarters, with these differences being statistically significant.
MI 7.7 – Strategies for risk management
MOPAN donors at headquarters were asked whether UNDP has appropriate strategies and plans for risk management. A little more than half of respondents (52 per cent) rated the organisation as adequate or above, 17 per cent as inadequate or below, and 31 per cent provided a ‘don’t know’ response.
Based on a review of documents, UNDP was rated strong for its strategies on risk management. The organisation adopted an Enterprise Risk Management (ERM) Policy in 2008, and subsequently enhanced its ERM framework in 2010. At the corporate level, an ERM committee, chaired by UNDP’s Associate Administrator and consisting of Deputy Directors from all bureaus, meets quarterly to review the application of risk management within the organisation, to identify key corporate risks, and to determine actions to be taken. Structurally, UNDP’s risk management system also includes an ERM secretariat that supports the ERM committee in assessing organisation-wide risks and in implementing corrective actions, as well as risk focal points within most units who coordinate efforts to strengthen risk management. Units at all levels (corporate, departmental, regional, and country) must maintain risk logs and integrate key risk management results into their work plans. Through an online system, risks are escalated following regular reporting lines to ensure that decisions are made and actions taken on high level risks. The Joint Inspection Unit’s (JIU) 2010 Review of Enterprise Risk Management in the United Nations identified UNDP as one of the leading agencies in ERM implementation within the United Nations system, though it noted that the implementation was relatively new and would require time to become fully integrated into organisational processes and culture.
KPI 8: Using Performance Information
Finding 8: UNDP was considered adequate overall for its use of performance information. The document review considered UNDP’s system for tracking the implementation of evaluation recommendations as strong, while survey respondents rated it strong for its use of performance data to plan new interventions at the country level.
UNDP was rated as adequate by survey respondents on three MIs, and as strong on one. The document review found evidence that UNDP has systems and practices in place to promote use of performance information to adjust or initiate programming and policies. While examples of use exist, these are not always well documented and therefore not easily traced.
Figure 3.14 KPI 8: Using Performance Information, Ratings of Micro-Indicators
MI 8.1 – Using information to revise and adjust policies
Nearly two-thirds (65 per cent) of donors at headquarters surveyed rated UNDP as adequate or above for using project/programme, sector and country information on performance to revise corporate policies; 15 per cent rated it inadequate or below; 21 per cent responded ‘don’t know’.
The document review examined UNDP’s use of performance information to revise and adjust policies and programmes, and provided a rating of adequate. Information on UNDP’s organisation-wide performance is available from a range of sources, including reports presented to the Executive Board on audits, evaluations, and implementation of the Strategic Plan. Thematic evaluations in particular have resulted in noticeable shifts in policies and strategies. One example is the 2008-2013 Gender Equality Strategy, which was developed in response to an evaluation on gender mainstreaming that identified the need for greater leadership and commitment from the organisation’s senior management. Another example of an organisation-wide policy change is the development of UNDP’s Enterprise Risk Management Policy (2007), which responded to a recommendation in UNBOA’s external audit for the 2004-2005 biennium.
There is also some limited evidence that UNDP’s reporting on its performance and results in relation to the 2008-2013 Strategic Plan is leading to changes in programming: in 2011, UNDP revised its results frameworks and reduced the number of programmatic outcomes from 34 to 25 due in part to evidence of low demand for certain outcomes.[20]
MI 8.2 – Using information for planning new interventions
Donors in-country and direct partners were asked whether UNDP, in consultation with the government, uses information on its projects/programmes or initiatives to plan new areas of cooperation at the country level. While the overall rating for this MI was strong, donors were less positive and the difference is statistically significant.
The document review rated UNDP adequate on this MI. While UNDP has clear procedures for ensuring that performance information is used to inform the design of new programmes, there is little evidence of how this has shaped specific programmes.
At the country level, UNDP produces a range of reports to monitor and evaluate its achievement of outcomes: Results-Oriented Annual Reports, Country Programme Performance Summaries, Outcome Evaluations, and Assessment of Development Results (ADRs). The Programme and Operations Policies and Procedures (2011) manual specifies that UNDP Country Offices must consider these documents as well as Annual Reviews of the UNDAF and project evaluations to inform the design phase of a new country programme. In fact, the manual also specifies that an ADR or Country Programme Performance Summary (in the absence of the former) must be presented as an accompanying document when a draft Country Programme Document is submitted for approval by the Executive Board.
MI 8.3 – Proactive management of poorly performing initiatives
Country level respondents (donors in-country and direct partners) were asked whether UNDP's poorly performing programmes and projects are subject to proactive management. Their views were mixed: 46 per cent rated UNDP adequate or above, 23 per cent rated it inadequate or below, and 30 per cent answered ‘don’t know’. Overall, direct partners were more positive than donors in-country and the difference is statistically significant.
Based on the review of documents, UNDP was found to be adequate in its proactive management of poorly performing programmes, projects and initiatives. UNDP has developed a range of tools and procedures to track and review its performance at the project and country programme level. Examples include the Balanced Scorecard, the annual review of country programmes (which is part of the UNDAF Annual Review process), the Results-Oriented Annual Reports (ROARs), as well as evaluations conducted at the country level. In addition, quarterly review meetings between regional bureaus and UNDP’s Bureau of Management discuss the performance of country offices, including their programming. Information and specific instructions from these meetings are relayed to country offices on a quarterly or biennial basis. Another review mechanism, which is in the process of being institutionalised across the organisation, is the Country Office Scan. This exercise examines country-specific performance and identifies capacities of country programmes that demand attention. As this is a new practice, it is too early to assess its impact on improving the performance of poorly performing programmes.
MI 8.4 – Evaluation recommendations are acted upon
MOPAN donors at headquarters, the only respondent group surveyed on this MI, rated UNDP adequate for appropriately tracking the implementation of evaluation recommendations reported to its governing bodies.
In the document review, UNDP was rated strong on this MI. UNDP’s Evaluation Policy (2011) stipulates that all evaluations must have a management response. UNDP’s Evaluation Office is responsible for maintaining a system that records these responses and that tracks the follow-up actions taken by managers on those recommendations they have agreed to. The system is accessible on UNDP’s Evaluation Resource Center website and is open to the public. In addition, UNDP presents information regarding evaluation management responses to its Executive Board through two reports: the Annual Report on Evaluation, which provides an overview of evaluation compliance (i.e., it identifies which evaluations have received a management response), and through the Annual Report of the Administrator, which is mandated by Executive Board decision 2011/3 to include information on the implementation status of management responses.[21]
KPI 9: Managing Human Resources
Finding 9: Survey respondents provided an overall rating of adequate but also signalled limited knowledge about certain human resource practices. The document review found it difficult to assess UNDP’s management of human resources due to lack of information.
Results from the assessment were mixed, with respondents rating UNDP adequate on its human resources management and the document review providing an overall score of inadequate based on a single MI.
Given that many of UNDP’s documents regarding human resources are internal, it was difficult to get a clear sense of how the organisation’s human resources systems and approaches operate in practice and of what human resources management reforms are planned under UNDP’s Agenda for Organizational Change.
Figure 3.15 KPI 9: Managing Human Resources, Ratings of Micro-Indicators
MI 9.1 – Results-focused performance assessment systems for senior staff
This MI was assessed only through document review, which rated UNDP as inadequate. UNDP’s Results and Competency Assessment (RCA) Policy and Procedures (2012) applies to UNDP staff below the Assistant Secretary-General level (i.e., including senior professionals in the United Nations P and D staff categories). This policy requires that staff define concrete, achievable, and measurable results/goals that are aligned with those of the organisation.
The performance of UNDP’s Administrator and Associate Administrator[22] is assessed through a different mechanism. They sign Senior Managers’ Compacts with the Secretary-General on an annual basis. The compacts set specific programmatic objectives and managerial targets, and outline clear roles and responsibilities for senior officials. They also serve as the basis for assessment by the UN Secretariat’s Management Performance Board.
The UNDP Financial Report and Audited Financial Statements for the Biennium ended 31 December 2009 and Report of the Board of Auditors (2010) noted issues related to compliance with the RCA system. For example, it observed that performance appraisals were not always prepared and completed within the deadlines specified by the RCA guidelines, and that they were not examined by UNDP’s Career Review Group. In response to issues raised by the auditors, UNDP indicated that it suspended the Career Review Group (CRG) for 2011 and that it set up a “CRG-light” mechanism in 2012 to review only exceptional cases that needed to be brought to the attention of senior managers. UNDP also indicated that it regularly monitors compliance on performance appraisals and sends emails with statistics to all senior managers. However, no reports on compliance are readily available.
MI 9.2 – Transparent system to manage staff performance
MOPAN donors at headquarters rated UNDP adequate overall on this MI. Almost half (44 per cent) rated UNDP adequate or above, 19 per cent rated it inadequate or below, and 38 per cent responded ‘don’t know’. This was the highest rate of ‘don’t know’ responses observed for all survey questions related to UNDP’s practices and systems (i.e., excluding the pilot results component).
The document review was constrained by the unavailability of data/documents to assess this MI and therefore did not provide a rating.
MI 9.3 – Staff rotation adequate for the development of effective partnerships
Donors in-country and direct partners were asked whether UNDP keeps deployed international staff in country offices for a sufficient time to maintain effective partnerships at country level. The majority (76 per cent) rated the organisation as adequate or above. Direct partners were more positive than donors in-country and the difference is statistically significant.
KPI 10: Performance-oriented Programming
Finding 10: Although survey respondents viewed the performance orientation of UNDP’s country programming processes as adequate, documentary evidence indicated there is significant room for improvement.
Overall, survey respondents at the country level rated UNDP as adequate for using processes that are results-oriented in its programming. Direct partners were more favourable in their responses than donors in-country.
The document review noted that UNDP is in the process of strengthening benefit/impact analyses to inform programme and project design at the country level, but that the poor formulation of indicators and targets in project planning documents is problematic for tracking UNDP’s performance.
Figure 3.16 KPI 10: Performance-oriented Programming, Ratings of Micro-Indicators
MI 10.1 – New initiatives subject to benefit/impact analysis
Donors in-country were asked whether UNDP subjects new programming initiatives to impact analysis. The majority of respondents (60 per cent) rated UNDP as adequate or above, 18 per cent as inadequate or below, and 22 per cent answered ‘don’t know’.
Adhering to MOPAN criteria, the document review rated UNDP inadequate in subjecting new initiatives to benefit or impact analyses before their approval. However, this should improve once the recently approved Environmental and Social Screening Procedure for UNDP Projects becomes fully institutionalised. The procedure provides specific instructions on the benefit/impact assessment process and identifies staff who are to receive and provide training.
MI 10.2 – Milestones/targets set to monitor implementation
Donors in-country and direct partners were asked whether the targets that UNDP sets enable monitoring of progress in project/programme implementation at the country level. Nearly half of respondents (47 per cent) rated the organisation as strong or very strong, 37 per cent as adequate, and 8 per cent as inadequate or below. Direct partners were more positive than donors in-country and the difference is statistically significant.
The document review gave UNDP a rating of inadequate for setting milestones to track the progress of project implementation. Although more than half of Annual Work Plans (AWPs) sampled included milestones, many of these were vague and did not constitute an appropriate basis for measuring progress in project implementation. For the most part, dates for milestone achievement were provided, but indicators to measure successful milestone completion often lacked associated baseline and target values.
KPI 11: Delegating Authority
Finding 11: While survey respondents rated UNDP as adequate overall for its delegation of decision-making authority to country offices, the document review considered it strong.
Direct partners and donors in-country were consulted on UNDP’s delegation of authority to the country level. Overall, they rated one MI as adequate and the other as strong.
The document review confirmed that UNDP’s policies and practices for delegating authority are strong, which is in keeping with its decentralised structure.
Figure 3.17 KPI 11: Delegating Authority, Ratings of Micro-Indicators
MI 11.1 – Country offices have sufficient authority to manage activities
Donors in-country and direct partners were asked whether UNDP country offices have sufficient delegated authority to manage activities at the country level. Half of respondents rated the organisation as strong or very strong, 28 per cent as adequate, and 11 per cent as inadequate or below. Direct partners were more positive than donors in-country and the difference is statistically significant.
UNDP was rated as strong for its delegation of authority regarding key management and operations on the basis of the documentary evidence reviewed. The UN General Assembly resolution 2688(XXV) established the key parameters of UNDP’s country programming in 1970. This resolution states that, “[t]here should be the maximum possible delegation of authority to the resident director”. This perspective is upheld in UNDP’s Accountability Framework and Oversight Policy (2008) which presents six principles that guide accountability within UNDP, one of which pertains to the ‘formal and consistent delegation of authority’. The MOPAN Assessment Team found that UNDP documents, in particular its Programme and Operations Policies and Procedures (2011) manual, clearly delineate the extent to which decisions regarding changes in projects or programming can be made at the local level and provide a high level of autonomy for Resident Representatives/Directors.
MI 11.2 – New programmes/projects can be approved locally within a budget cap
Donors in-country and direct partners were asked whether UNDP has adequately decentralised its project approval processes to local levels within a budget cap. Although the majority (59 per cent) rated UNDP as adequate or above, a considerable proportion (28 per cent) answered ‘don’t know’.
As for the previous MI, UNDP was rated strong by the document review. Once a Country Programme Document (CPD) has been approved by UNDP’s Executive Board, the authority to approve projects at the country level is delegated to the resident representative by the director of the regional bureau.[23] The resident representative, in partnership with the programme country government, has the authority to adjust the CPAP to changing circumstances in the country, as long as the country programme’s overall framework (which has been approved by the Executive Board) does not change. This authority can be removed if so warranted by Executive Board decisions. In terms of a budget cap, the Programme and Operations Policies and Procedures (2011) manual indicates that project approvals cannot exceed the UNDP non-emergency core resources released by headquarters for the years during which projects are active.
3.3.4 Relationship Management
Overall, UNDP is seen to provide strong coordination of the UN development system at the country level, contribute positively to policy dialogue, and support national plans. Its administrative procedures, however, are seen to negatively impact implementation.
Figure 3.18 below shows the overall survey ratings and document review scores for the five KPIs in the relationship management quadrant.
In the area of harmonisation, UNDP’s efforts are mostly recognised as good practice, with ratings of strong on most indicators. UNDP’s role in supporting the UN development system at the country level was particularly noted by both survey respondents and document review. In its efforts to support alignment and country ownership, UNDP is seen to be respectful of national counterparts and a valued contributor to policy dialogue. Its lengthy administrative procedures, however, are seen to be a barrier to implementation and limited data makes it difficult to appreciate UNDP’s evolution with regard to use of country systems.
Figure 3.18 Quadrant III: Relationship Management, Survey and Document Review Ratings1
1 The document review for KPI 14 was designed to draw on data from the 2011 Survey on Monitoring the Paris Declaration. The white diamond indicates that the data required for the assessment was unavailable for UNDP.
Figure 3.19 shows the mean scores for the five KPIs for all survey respondents, and by respondent groups.
Figure 3.19 Quadrant III: Relationship Management, Mean Scores by Respondent Group
KPI 12: Supporting National Plans
Finding 12: Surveyed stakeholders considered UNDP strong in its support of national plans.
UNDP’s Strategic Plan identifies national ownership as a foundational dimension of its work, indicating that development strategies must reflect national circumstances, capacities and aspirations. This KPI was assessed by survey only.
Figure 3.20 KPI 12: Supporting National Plans, Ratings of Micro-Indicators
MI 12.1 – Funding proposals developed with national government or direct partners
Donors in-country and direct partners were asked whether UNDP supports funding proposals designed and developed by the national government or other direct partners. The majority of respondents (53 per cent) rated the organisation as strong or very strong. Among direct partners, 68 per cent rated UNDP’s performance as strong or very strong. Donors in-country were less positive than direct partners and this difference is statistically significant.
KPI 13: Adjusting Procedures
Finding 13: UNDP was perceived as adequate for adjusting its procedures to local conditions and capacities. However, survey respondents had mixed views on the appropriateness of the time required to complete UNDP procedures at the country level.
The MIs in this KPI were assessed only by survey, and only by direct partners and donors in-country.
Both respondent groups rated UNDP as adequate for adjusting procedures to local conditions and capacities. However, they had differing views on the burden imposed by UNDP’s procedures, suggesting room for further improvement to ensure efficient processes across all countries. In their survey comments, direct partners identified bureaucratic inefficiencies as the area where UNDP needs to improve most.
Figure 3.21 KPI 13: Adjusting Procedures, Ratings of Micro-Indicators
MI 13.1 – Procedures easily understood and completed by partners
Donors in-country and direct partners rated UNDP as adequate for its use of procedures which can be easily understood and completed by partners.
MI 13.2 – Length of time for procedures does not affect implementation
Survey respondents had mixed opinions on whether the length of time it takes to complete UNDP procedures affects implementation. Overall, UNDP was rated adequate or above by 43 per cent of respondents, and inadequate or below by 43 per cent. Donors in-country rated this MI lower than direct partners and the difference is statistically significant.
MI 13.3 – Ability to respond quickly to changing circumstances
UNDP was rated as adequate overall for responding quickly to changing circumstances: 63 per cent of respondents rated UNDP as adequate or above and 26 per cent as inadequate or below.
MI 13.4 – Flexibility in implementation of projects/programmes
When asked whether UNDP adjusts its implementation as learning occurs, the majority of direct partners and donors in-country (70 per cent) rated the organisation as adequate or above, and 18 per cent as inadequate or below.
KPI 14: Using Country Systems
Finding 14: Donors in-country and direct partners rated UNDP’s use of country systems as adequate. The document review found room for improvement in UNDP’s use of country financial systems.
Donors in-country and direct partners rated UNDP as adequate overall for its use of country systems and promotion of mutual accountability. However, a high proportion responded ‘don’t know’: on average, 32 per cent of donors in-country and 29 per cent of direct partners.
The document review found UNDP inadequate in its use of country financial systems. It was not able to rate two of the MIs due to lack of data.
Figure 3.22 KPI 14: Using Country Systems, Ratings of Micro-Indicators1
1 The document review for this KPI was designed to draw on data from the 2011 Survey on Monitoring the Paris Declaration. White diamonds indicate that the data required for the assessment was unavailable for UNDP.
MI 14.1 – ODA disbursements/ support recorded in annual budget
There was insufficient data for the document review to rate UNDP on the percentage of its aid flows to government that are reported in national budgets (Paris Declaration Indicator 3).
As indicated in UNDP’s Response to the 2011 Survey on Monitoring the Paris Declaration, the Paris Declaration only requires countries to provide budgetary data on the United Nations as a whole and not on individual agencies. Of the 74 UNDP partner countries that participated in the 2011 Survey on Monitoring the Paris Declaration, only 13 provided data disaggregated for UNDP. This represents an insufficient basis to accurately assess UNDP’s global performance in this area.
It should also be noted that the United Nations Development Group (UNDG), in its report on the 2011 monitoring survey,[24] indicates that the target for Indicator 3 is difficult for UN agencies to achieve, as the indicator itself is not a direct measure of their performance. In practice, it appears that not all country governments record official development assistance (ODA) in their budgets, even when the United Nations has provided this information on time. Poor communication channels between government entities can also prevent ODA information formalised with sector or line ministries from being relayed to the ministry of finance, thus impeding its inclusion in the national budget.
MI 14.2 – Use of country financial systems
Donors in-country and direct partners were asked whether UNDP uses country financial systems (i.e., public financial management and procurement) as a first option for its operations where appropriate. This MI yielded a high percentage of ‘don’t know’ responses (30 per cent), with a little less than half of respondents (49 per cent) rating UNDP as adequate or above, and 22 per cent as inadequate or below.
In the document review, UNDP was rated as inadequate for its use of country financial and procurement systems (Paris Declaration Indicators 5a and 5b). UNDP’s Response to the 2011
Survey on Monitoring the Paris Declaration reveals that 24 per cent of the organisation’s disbursements to government were channelled through the country’s public financial management system, and that 13 per cent of its assistance to government used country procurement systems. It defended these low numbers by noting that many countries request UNDP’s procurement services even when their own institutional capacity is considered sufficient, and that UNDP, which is bound by financial regulations and rules approved by its Executive Board, usually does not disburse funds directly to the treasury of a country.
The MOPAN Assessment Team recognises that one of UNDP’s value-added roles is to strengthen the policy and institutional mechanisms of countries to manage ODA. However, little data was found on UNDP’s performance in strengthening country financial systems at a global aggregate level. The terms of reference (TOR) for the Evaluation of UNDP Contributions to Strengthening National Capacities (2010) state that “there continue to be shortfalls in using country systems and capacities”.
MI 14.3 – Use of country non-financial systems
Donors in-country and direct partners rated UNDP as adequate overall for using countries’ non-financial systems (e.g., monitoring and evaluation) as a first option for its operations: 47 per cent rated UNDP adequate or stronger, 19 per cent inadequate or weaker, and 35 per cent responded ‘don’t know’.
MI 14.4 – Parallel implementation structures avoided
Due to the absence of baseline data, the document review could not rate UNDP on the extent to which it has reduced its use of parallel implementation units since 2005 (Paris Declaration Indicator 6). Data disaggregated from the OECD survey on monitoring the Paris Declaration and available only for 2010 indicates that UNDP had 261 project implementation units (PIUs) operating parallel to government structures during that year (an average of four per country).
Though this number appears high, UNDP noted in its response that bilateral donors often ask it to establish PIUs to manage donor funds on behalf of governments in situations of conflict and fragility so as to diminish fiduciary risk during transition periods.
UNDP does not appear to have internal monitoring data to track the number of PIUs in place and the purpose they serve. In 2004, a UNDP human development viewpoint document[25] noted that the argument that PIUs are needed because of weak national capacities is a circular one and that, in cases where PIUs are established, there should be an explicit exit strategy with effective safeguards.
MI 14.5 – Promotion of mutual accountability in its partnerships
In the survey, UNDP was rated as adequate overall for encouraging mutual accountability in its partnerships for their commitments to the Paris Declaration and subsequent aid effectiveness agreements. More than half of respondents (59 per cent) rated UNDP as adequate or above, 15 per cent rated it as inadequate or below, and 27 per cent answered ‘don’t know’. Donors in-country were less positive than direct partners (providing an overall response of adequate instead of strong), and this difference is statistically significant.
KPI 15: Contributing to Policy Dialogue
Finding 15: UNDP’s contribution to policy dialogue and respect for the views of partners was recognised as strong by all survey respondent groups.
Overall UNDP was rated strong for its contribution to policy dialogue. On both MIs in this KPI, donors in-country were less positive than other respondent groups, rating UNDP as adequate, and the differences are statistically significant.
Figure 3.23 KPI 15: Contributing to Policy Dialogue, Ratings of Micro-Indicators
MI 15.1 – Reputation for valuable input to policy dialogue
Survey respondents rated UNDP’s performance as strong in providing valuable inputs to policy dialogue: 85 per cent provided ratings of adequate or above.
MI 15.2 – Policy dialogue respects partner views
UNDP’s respect for the views of its partners during policy dialogue was rated as strong overall: 71 per cent of donors at headquarters, 48 per cent of donors in-country and 72 per cent of direct partners responded within the strong or very strong categories.
KPI 16: Harmonising Procedures
Finding 16: UNDP was recognised as adequate overall by survey respondents in harmonising procedures at the country level, and as strong for its coordination of the UN system. The document review highlighted UNDP’s participation in joint missions, provision of coordinated technical cooperation, and management of the Resident Coordinator System.
Only donors in-country and direct partners were consulted on UNDP’s performance with regard to harmonisation at the country level. They rated UNDP adequate on two MIs, and strong on one. In all cases, direct partners responded more positively than donors in-country and this difference is statistically significant. The document review scores ranged from adequate to very strong.
Figure 3.24 KPI 16: Harmonising Procedures, Ratings of Micro-Indicators
MI 16.1 – Participation in joint missions
The document review rated UNDP as strong for its participation in joint missions (Paris Declaration Indicator 10). UNDP’s Response to the 2011 Survey on Monitoring the Paris Declaration shows that 42 per cent of its missions in 2010 were joint, which is above the global percentage target set by the Paris Declaration (40 per cent). Although this figure is for 2010 only, it is clear that UNDP is committed to joint efforts, as evident in its Balanced Scorecard, which has an indicator to track joint programming at the country level.
MI 16.2 – Technical cooperation disbursed through coordinated programmes
The majority of survey respondents (79 per cent) rated UNDP as adequate or above for disbursing technical assistance through coordinated programmes in support of capacity development.
The document review rated UNDP strong for strengthening national capacity through coordinated support (Paris Declaration Indicator 4). UNDP’s Response to the 2011 Survey on Monitoring the Paris Declaration indicates that 73 per cent of the organisation’s technical cooperation is coordinated, which is well above the target of 50 per cent.
MI 16.3 – ODA disbursements/support for government-led PBAs
Survey respondents were asked whether UNDP participates in programme-based approaches (other than through budget support). More than three-quarters (78 per cent) rated UNDP adequate or above on this MI.
The document review rated UNDP as adequate on this MI. In 2010, the proportion of UNDP’s aid to country governments in the form of programme-based approaches was 53 per cent, which is below the target set by the Paris Declaration (66 per cent). However, by setting up multi-donor trust funds and other joint funding arrangements for stakeholders, UNDP increases the use of common procedures and arrangements in countries. Further, these funding arrangements enable the coordination of aid delivery to national programmes and sectors.
MI 16.4 – Coordination of the UN development system at the country level
Overall, survey respondents perceived UNDP as strong in facilitating coordination of the UN development system at the country level. More than half of respondents (57 per cent) rated the organisation as strong or very strong, 23 per cent as adequate, and 14 per cent as inadequate or below.
The document review rated UNDP very strong on this MI. UNDP’s coordinating function for the UN development system at the country level is embodied in its role as the manager of the Resident Coordinator System. This role and its associated responsibilities have been defined and reaffirmed through UN General Assembly resolutions 34/213 and 62/208 and through a series of UNDG documents. UNDP outputs for UN coordination are defined in its revised institutional results framework and are costed in its Institutional Budget Estimates for 2012-2013 (2011).
UNDG developed a Management and Accountability System (2008) to strengthen the Resident Coordinator System and a related implementation plan (2009) to be operationalised by different actors in the UN system, including UNDP. As of 2012, UNDP reports that it has completed all actions for which it was responsible according to this plan, but that the implementation of commitments by other agencies has been slow, which has limited overall progress.
The Report of the Secretary-General on the Functioning of the Resident Coordinator System, Including Costs and Benefits (2011) and an independent evaluation on the functioning of the UN Resident Coordinator system prepared for ECOSOC in 2012 identified some serious constraints and challenges for the Resident Coordinator System (such as the Resident Coordinators’ limited formal authority and the inherent opposition between the horizontal nature of the Resident Coordinators’ work and the vertical systems and processes that link UN entities to their respective headquarters). However, these are not in UNDP’s power to change as the manager of the system.
3.3.5 Knowledge Management
UNDP was assessed positively for its dissemination of lessons learned and considered strong for its evaluation of external results. The document review noted considerable room for improvement in UNDP’s performance reporting to provide stakeholders with a clear picture of its progress towards results over time.
Figure 3.25 below presents the overall survey and document review ratings for the three KPIs in the knowledge management quadrant.
Overall, survey respondents rated UNDP adequate for its knowledge management performance. The document review ratings ranged from inadequate to strong.
While the document review identified UNDP’s results reporting as an area requiring improvement, it should be noted that UNDP has shown strong commitment in recent years to improve its performance reporting and results frameworks. As noted in the discussion on Strategic Management, UNDP has developed a peer review mechanism to help with the design of its results frameworks for the next Strategic Plan (2014-2017) and is engaged in regular consultations with its Board to improve the quality of its results measurement and reporting.
Figure 3.25 Quadrant IV: Knowledge Management, Survey and Document Review Ratings
Figure 3.26 shows the mean scores for the three KPIs for all survey respondents, and by respondent groups.
Figure 3.26 Quadrant IV: Knowledge Management, Mean Scores by Respondent Group
KPI 17: Evaluating External Results
Finding 17: UNDP was perceived as adequate in most aspects of evaluation and strong in the independence and quality of evaluations. Evaluation coverage is an area for improvement.
Survey respondents rated UNDP adequate or better on the MIs in this KPI and highlighted the independence of its evaluation unit as a strength. The document review corroborated this view based on the recent revision of UNDP’s evaluation policy and noted recent improvements in the organisation’s quality assurance processes for decentralised evaluations.
Figure 3.27 KPI 17: Evaluating External Results, Ratings of Micro-Indicators
MI 17.1 – Independent evaluation unit
MOPAN donors at headquarters were asked whether UNDP ensures the independence of its evaluation unit. Nearly two-thirds (65 per cent) rated the organisation as strong or very strong, and 19 per cent as adequate.
On the basis of documentary evidence reviewed, UNDP’s evaluation unit was rated very strong. As the custodian of the evaluation function within the organisation, UNDP’s Evaluation Office is responsible for conducting independent evaluations; setting standards related to the planning, execution and use of decentralised evaluations; assessing the quality of decentralised evaluation reports; and disseminating knowledge on evaluation methodologies and best practice. It is also responsible for reviewing UNDP’s evaluation policy, the latest version of which was adopted in 2011. This latest policy, in fact, strengthens the Evaluation Office’s independence; the Evaluation Office submits annual costed programmes of work and reports on evaluation directly to the Executive Board. Its Director has the final say on the contents of all evaluation reports issued by the Evaluation Office.
MI 17.2 – Sufficient evaluation coverage of programming activities
UNDP’s evaluation coverage was assessed by document review only and rated as adequate. UNDP’s Handbook on Planning, Monitoring and Evaluating Development Results (2009) and its addendum (2011) specify which types of evaluations are mandatory and provide some indications on how to prioritise evaluations of projects and programmes. The handbook also mentions that programming units at the country, regional, and global levels should subject 20 to 30 per cent of their entire portfolio over the programming cycle to independent evaluation. Each year, UNDP is also required to submit a costed evaluation plan to the Executive Board for all new country, regional, global, thematic and South-South programmes. Once approved, programme units are required to complete all evaluations listed in their evaluation plan. All evaluation plans are disclosed to the public on the Evaluation Office’s Evaluation Resource Center (ERC) website. However, not all country offices assessed by MOPAN in 2012 appeared to comply with their evaluation plans; some countries had not completed scheduled independent evaluations.
MI 17.3 – Quality of evaluations
The document review rated UNDP strong for ensuring the quality of its evaluations. UNDP’s Evaluation Policy (2011) specifies how multiple actors within the organisation are involved in strengthening evaluation practices and reporting. For example, the Evaluation Office is responsible for setting standards, sharing information on methodologies, and assessing the quality of reports. Directors of Regional Bureaus, on the other hand, must ensure that country programmes are evaluable and that country offices prepare quality evaluation plans which they then implement.
In 2011, UNDP’s Evaluation Office developed and implemented a quality assurance mechanism to review independent decentralised evaluations at the global, regional and country levels using tools and criteria that draw on the United Nations Evaluation Group’s Norms for Evaluation in the UN System (2005) and on UNDP’s evaluation guidelines.[26] Out of 135 evaluations assessed in 2011, the Evaluation Office noted that 19 per cent were ‘satisfactory’, 44 per cent were ‘moderately satisfactory’ and 36 per cent were ‘moderately unsatisfactory’ or ‘unsatisfactory’.[27]
At the central level, the Evaluation Office has quality assurance processes for the independent evaluations it conducts. Evaluation inception reports and drafts of evaluation reports are reviewed by its senior evaluators and by an independent external advisory panel that reports directly to the head of the Evaluation Office. The panel consists of three leading authorities on development effectiveness, global development issues of relevance to the study, and development evaluation.
MI 17.4 – Use of evaluation findings to inform decisions
MOPAN donors at headquarters perceived UNDP’s use of evaluation findings in its decisions on programming, policy and strategy to be adequate: 71 per cent rated it adequate or above, and 15 per cent as inadequate.
MI 17.5 – Beneficiaries and direct partners involved in evaluations
Donors in-country and direct partners were asked whether UNDP involves its direct partners and beneficiaries in evaluations of its projects or programmes. Three-quarters of survey respondents rated UNDP adequate or above, and 15 per cent as inadequate or below. Direct partners were more positive in their responses than donors in-country, and the difference is statistically significant.
KPI 18: Presenting Performance Information
Finding 18: Although survey respondents perceived UNDP’s reporting on performance to be adequate overall, the document review provided mixed ratings.
MOPAN donors at headquarters were the only respondent group surveyed on this KPI. Overall, they considered UNDP to adequately report on its performance in achieving outcomes, delivering on the Strategic Plan, and meeting Paris Declaration targets. In contrast, the document review concluded that some aspects of reporting were adequate but there are areas where significant improvements are needed, such as in reporting on outcomes and using data from indicators to report on performance.
Figure 3.28 KPI 18: Presenting Performance Information, Ratings of Micro-Indicators
MI 18.1 – Reports on achievement of outcomes
MOPAN donors at headquarters were asked whether reports submitted by UNDP to its Executive Board provide clear measures of achievement of outcomes. Their views were split: 48 per cent rated UNDP as adequate or above and 42 per cent rated it inadequate or below.
Strictly adhering to the MOPAN criteria, the document review rated UNDP as weak in reporting on outcomes. The Annual Report of the Administrator on the Strategic Plan is UNDP’s primary tool for reporting to the Executive Board on its organisation-wide progress in achieving the development and institutional results defined in its Strategic Plan. These reports, which have been in existence since 2009, focus on a select number of development outcomes for more in-depth analysis. The objective is to have each outcome from UNDP’s development results framework covered by the end of the strategic planning cycle (i.e., 2013). Within the annexes of the report, information on the achievement of institutional and development effectiveness outputs is also presented.
A significant shortcoming of these reports is that they do not provide an analysis of how UNDP’s products and services are leading to development results at the outcome level. Moreover, partial reporting on outcomes (i.e., focusing on only a few outcomes per year and on each of these only once over the strategic planning cycle) presents a fragmented view of UNDP’s contributions to results; it is not possible to measure or monitor UNDP’s progress across years.
UNDP has shown strong commitment in recent years to improve its performance reporting and results frameworks, and should be lauded for these efforts. As mentioned in KPI 2, the organisation has developed a peer review mechanism to help with the design of its results frameworks for the next Strategic Plan (2014-2017) and is engaged in regular consultations with its Board to improve the quality of its results measurement and reporting.
MI 18.2 – Reports on performance using data obtained from measuring indicators[28]
Based on a review of the most recent Annual Report of the Administrator on the Strategic Plan,[29] UNDP’s use of measurable indicators to report on performance is considered weak. UNDP does provide data on output indicators for its expected management results and development effectiveness results (which refer to factors such as the quality of UNDP’s results reporting, knowledge management and cross-cutting themes), but presents little data on performance indicators from a programming standpoint. Information on programming-related indicators covers only one year and a few select outcomes. In 2012, UNDP introduced reporting on the type of engagement of its interventions using what it identifies as an output indicator. This indicator uses country outcomes as a measure of organisation-wide output contribution. The latter is assessed according to four dimensions: awareness, policy, implementation, or resilience. The information provided by this indicator remains broad and is notably lacking in baseline and target values.
UNDP’s most recent Annual Report of the Administrator on the Strategic Plan does indicate that a fuller analysis of output, outcome and related indicators will be conducted for the cumulative review of the Strategic Plan scheduled for 2013, and that findings from this analysis will feed into the preparation of the organisation’s Strategic Plan for the 2014-2017 period. The report’s annexes also appropriately comment that:
Changes to the structure and quality expectations of an annual reporting process cannot deliver immediate gains with regard to substantive improvements in results reporting, and consequently continuous commitment to a results culture are essential to improve results monitoring, data collection, and strategic results analysis that inform management decisions. (p.4)
MI 18.3 – Reports against corporate strategy, including expected results
UNDP was perceived by MOPAN donors at headquarters as adequately reporting against its corporate strategy. More than two-thirds (69 per cent) rated UNDP as adequate or above, and 21 per cent as inadequate or below.
UNDP’s reporting against its organisation-wide strategy, including management and development results, was rated inadequate based on the document review. Recent Annual Reports of the Administrator on the Strategic Plan (i.e., 2009, 2010, and 2011) refer to statements of expected results listed in the development results framework (DRF) and management results framework (MRF). While results in the MRF are addressed in full in tables in the annexes of the report (with attention to progress against targets), reporting on programmatic results from the DRF focuses on a limited number of outcomes, as noted in MI 18.1 above. A brief narrative in each report provides examples of contributions made and challenges faced at the country level. However, the reports do not provide explanations of differences observed between planned and actual development results. UNDP’s current reporting therefore does not allow stakeholders to fully understand UNDP’s progress and performance in contributing to its intended results.
MI 18.4 – Reports on Paris Declaration commitments use indicators and country targets
MOPAN donors at headquarters were asked whether UNDP reports to its governing body on performance in relation to its Paris Declaration commitments. Slightly less than half of respondents (46 per cent) rated UNDP as adequate or above and 21 per cent as inadequate or below. One-third responded ‘don’t know’.
UNDP was found adequate in reporting on the Paris Declaration indicators. In 2005, the United Nations Development Group was the official signatory to the Paris Declaration for the United Nations system. As a member of UNDG, UNDP participated in the 2006, 2008 and 2011 surveys on monitoring the Paris Declaration conducted by the Organisation for Economic Co-operation and Development (OECD). In these surveys, UNDP’s performance data is aggregated with that of all other UN agencies, funds and programmes. In 2011, however, UNDP also produced an individual response on its implementation of the Paris Declaration using disaggregated data from the 2011 OECD survey. UNDP should be commended for producing this agency-specific response, even though the absence of data from previous years makes it difficult to assess its progress in achieving the targets on aid effectiveness set by the Paris Declaration.
Both UNDG and UNDP stressed in their responses to the 2011 Survey on Monitoring the Paris Declaration that official development assistance – which is at the very heart of the Paris Declaration – is but one component of development effectiveness.[30]
MI 18.5 – Reports on adjustments to policies/strategies based on performance information
This MI was assessed through document review only, which scored UNDP as adequate.
As noted in MI 8.1, there is evidence that UNDP uses performance information from a range of sources to revise and adjust its policies and strategies. These changes are reported to the Executive Board on an annual basis through documents such as the Annual Report of the Administrator on the Strategic Plan and the Report of the Office of Audit and Investigation and the associated management responses. However, information on the changes themselves is often not very detailed. For example, information in UNDP’s official reporting to the Executive Board on its current reform process, known as the Agenda for Organizational Change, is quite succinct in enumerating activities accomplished, but does not sufficiently describe what was done or what these activities achieved.
MI 18.6 – Reports on programming adjustments based on performance information
UNDP was rated adequate on the basis of the documentary evidence available.
UNDP’s Programme and Operations Policies and Procedures (2011) manual indicates that UNDP must actively participate in the UNDAF annual review – a process which involves UN agencies, government and direct partners – to assess its yearly achievement of targets at the output level and to ensure its contribution to UNDAF/national results. As part of the review process and to inform discussions, UNDP prepares project progress reports that indicate whether revisions related to results frameworks, cost estimates, and annual targets are needed. The annual UNDAF review meeting verifies that recommendations from the previous review were acted upon by concerned parties, and new suggestions and recommendations are made based on current evidence. The conclusions and recommendations from the UNDAF Annual Review are supposed to lead to revisions in UNDP’s Country Programme Action Plan and to project Annual Work Plans, as necessary. This information then feeds into UNDP country offices’ Results-Oriented Annual Reports (ROARs) and into other formal reports specifically required by direct partners and donors. It should be noted, however, that ROARs are strictly internal UNDP documents.[31]
KPI 19: Disseminating Lessons Learned
Finding 19: MOPAN donors at headquarters rated UNDP as adequate for disseminating lessons learned. The document review noted positive changes in UNDP’s knowledge management practices.
UNDP was considered by MOPAN donors at headquarters as adequately reporting on and integrating lessons learned. The document review observed that UNDP is committed to solidifying its role as a provider of knowledge on human development, but noted room for improvement in UNDP’s reporting on how lessons learned and best practices are transforming the organisation’s programming.
Figure 3.29 KPI 19: Disseminating Lessons Learned, Ratings of Micro-Indicators
MI 19.1 – Reports on lessons learned based on performance information
MOPAN donors at headquarters found UNDP to adequately identify and disseminate lessons learned from performance information. The majority of survey respondents (75 per cent) rated UNDP as adequate or strong.
UNDP was rated strong by the document review for sharing information on lessons learned based on performance data. UNDP’s Knowledge Strategy[32] commits the organisation to strengthening learning and to sharing best practices across countries, regions, and focus areas. A key component of the strategy was the establishment of UNDP’s Knowledge Management Group, which is responsible for fostering collaboration on learning and for disseminating knowledge both within and outside UNDP. The organisation has also implemented online networks or communities of practice to connect UNDP staff and partners with experts across the globe; the Teamworks platform covers thematic areas for the whole Strategic Plan, while Cap-Net is specific to capacity development. Although UNDP is contributing to knowledge and expertise on international development, further improvements are needed to systematize integration of lessons learned at all levels and to clearly report on changes being made on the basis of emerging knowledge.
MI 19.2 – Lessons shared at all levels of the organisation
MOPAN donors at headquarters were asked whether UNDP provides opportunities at all levels of the organisation to share lessons from practical experience. The majority of survey respondents (67 per cent) rated UNDP as adequate or above. One-quarter of respondents answered ‘don’t know’.
4. Main Findings: Development Results Component
4.1 Overview
This section presents the results of the 2012 Common Approach assessment of UNDP in the pilot component to assess the development results of multilateral organisations. It includes four key performance areas:
• Evidence of extent of progress towards organisation-wide outcomes (KPI A)
• Assessment of country-level results and relevance:
- Evidence of extent of contribution to country-level goals and priorities (KPI B)
- Evidence of extent of contribution to relevant Millennium Development Goals (KPI C)
- Relevance of objectives and programme of work to stakeholders (KPI D)
Figure 4.1 provides a snapshot of the findings of this pilot. Please note that this report applied a simplified 4-point scale that uses the same “traffic light” colours used elsewhere in the report.
As noted in Section 2.8 of this report, the scale was simplified to reflect the methodological approach used in the pilot of the development results component – in which various sources of data are considered together when rating the organisation’s performance on each KPI. The methodology is explained in Volume II, Appendix I.
Figure 4.1 Development Results Component – Overall Ratings
4.2 Progress towards Organisation-wide Results
4.2.1 Overview
This section presents the results of the assessment of UNDP’s progress towards organisation-wide results. KPI A suggests that an effective organisation should demonstrate progress towards organisation-wide, institutional outcomes.[33] These are usually related to the organisation’s strategic objectives. The assessment draws primarily on the evidence that the organisation has available on its results, particularly on its contributions to development outcomes.
UNDP’s results and reporting at organisation-wide level[34]
UNDP’s Strategic Plan for 2008-2011, extended to 2013, is organised around four focus areas: a) achieving the MDGs and reducing human poverty; b) fostering democratic governance; c) supporting crisis prevention and recovery; and d) managing energy and the environment for sustainable development.
UNDP communicates annually on its programming results at an organisation-wide level through its Annual Report of the Administrator on the Strategic Plan: Performance and Results. As noted in MI 18.1, each year this report provides an in-depth analysis of a sub-set of outcomes from the corporate development results framework. The aim is to have all outcomes from UNDP’s development results framework covered by the end of the strategic planning cycle (i.e., 2013).
The Annual Report of the Administrator has undergone considerable revisions since 2009 in response to demands from UNDP’s Executive Board, and with the objective of improving qualitative and quantitative evidence. The most recent report (covering UNDP’s performance in 2011 and issued in June 2012) includes indicator data for the outcomes assessed in depth and a new section on “Development Effectiveness” that reports on the quality of country programming, cross-practice integration and knowledge, capacity development, gender equality and South-South cooperation.
The Mid-Term Review of UNDP’s Strategic Plan (which was consolidated with the Annual Report of the Administrator for 2010) presented a revised Development Results Framework (DRF) that reduced the number of outcomes to 25 (from the 34 adopted in 2008). The Annual Report of the Administrator for 2011 presents performance information based on this revised results framework.
Organisation-wide performance information on UNDP is also available in thematic evaluations presented to the Executive Board. Thirteen such evaluations were produced from 2008 to 2011, on such topics as disaster prevention and recovery, local governance, and environment and energy.
Data used for this assessment
The assessment of KPI A is based on survey data from donors at headquarters and documentation made available by the organisation. This included the Annual Report of the Administrator for 2011, the Mid-Term Review of the UNDP Strategic Plan and Annual Report of the Administrator for 2010, and the latest major thematic evaluations (2008-2011). In reviewing the latest annual report, attention was paid to the following elements: the quality of the evidence presented to substantiate the results achieved, including use of baselines and targets, and descriptions or analysis of contribution. The mid-term review and evaluations were reviewed to find complementary evidence and help validate reported achievements, with the understanding that some evaluations covered a period that preceded that of the Strategic Plan.
4.2.2 KPI A: Evidence of Extent of Progress towards Organisation-Wide Outcomes
UNDP’s role is unique in that it provides products and services which primarily support the development of national capacity. In fact, UNDP’s Strategic Plan for the current programming cycle identifies capacity development as the organisation’s overarching contribution and as the “how” of the programming that it does. Capacity changes are often at the heart of the outcomes articulated by UNDP and generally relate to the performance, stability and adaptability of national institutions.[35] Capacity development is a multidimensional, systems concept. At once a process and a product, its results are harder to measure, aggregate and report on.
Overall Assessment
Figure 4.2 shows the overall rating for this KPI based on the review of UNDP’s contributions to development outcomes in its four focus areas – as expressed in UNDP corporate reports[36] and as indicated by surveyed donors at headquarters. The headings show the criteria that MOPAN used to assess each focus area and determine the overall rating (criteria met are indicated in blue). The last column provides the mean survey scores, based on the same 6-point scale that was used in the assessment of UNDP’s practices and systems.
Figure 4.2 KPI A: Evidence of Extent of Progress toward Organisation-Wide Outcomes, Ratings
Evidence of extent of UNDP’s contributions
Finding 20: Corporate performance reports do not yet provide a holistic account of UNDP performance as an organisation. UNDP recognises this and is taking steps to improve.
Over the past two years, and particularly since the Mid-Term Review of the Strategic Plan, UNDP has introduced a number of changes in the way it reports on performance. Nevertheless, in the context of greater donor demand for evidence of development results, the assessment noted some challenges in UNDP reporting, many of which are being addressed by UNDP through a peer review mechanism to improve the organisation’s results frameworks and regular consultations with its Executive Board (see MIs 2.3, 18.1, 18.2 and 18.3 for further details). These include:
1. Partial information: Over the period of the current Strategic Plan, UNDP developed 34 outcomes, which it later reduced to 25, within its four focus areas. However, there is no one corporate report that presents information on the progress of all outcomes over the whole period (e.g., the annual report in 2009 reported on six outcomes, the 2010 report on eight, and the 2011 report on nine). Thus, UNDP’s key constituencies, including MOPAN donors, rely on episodic reporting that does not provide comprehensive and cumulative progress results.
2. Unit of analysis: UNDP reports primarily use “number of countries” as the unit of analysis for change. Corporate outcomes, for example, are often measured through indicators that reference the number of countries adopting policies or practices or showing evidence of improvement in certain areas. While this unit of analysis has some validity given the role UNDP plays in advising and supporting countries, it does not provide enough information to understand the significance or depth of changes that are occurring at the country level (see sidebar).
Example of Outcome Results and Unit of Analysis
Outcome 2.2: Electoral laws, processes and institutions strengthen inclusive participation and professional electoral administration
Indicator: Number of countries which have increased the percentage of eligible voters included in voter registries
Result: 17 of 20 supported countries (85 per cent)
Indicator: Number of countries where electoral management bodies have adopted measures to advance gender equality
Result: 12 of 20 supported countries (60 per cent)
(Adapted from Table 1, 2011 Results for Nine In-depth Outcomes, Annual Report of the Administrator for 2011)
3. Theory of change. A theory of change can help explain the intended links from outputs to immediate outcomes and towards development results. In its lessons learned on the 2011 reporting cycle, UNDP noted its difficulties in presenting theories of change and using them in its reporting. It cited two reasons for this: the absence of such theories in the current Strategic Plan, and the nature of UNDP reporting by calendar year rather than over the life of an intervention.
According to the Annual Report of the Administrator for 2011, UNDP has developed four output dimensions to help situate UNDP’s products and services within its achievement of higher level development results: awareness raising (including convening and brokering), policy (including planning, budgeting, assessments and policy advice), implementation (including implementation of pilots and inclusive development), and resilience. It is not yet clear if or how these dimensions will help UNDP explain its contributions to higher level results.[37]
4. Lack of baselines and targets at both output and outcome levels. Baselines and targets provide reference points for judging progress over time. In the absence of these for both outputs and outcomes, UNDP provides little information on its progress over any given period, and no notion of how far it has come in an outcome area.
Finding 21: Evidence from reports and evaluations indicates that UNDP’s projects and programmes have had some success in achieving their results and contributing to development outcomes.
According to the Annual Report of the Administrator for 2011, more than half of the countries supported by UNDP had experienced positive change in the nine outcomes that were reported on in depth (Table 1 of the UNDP report). Since the performance data provided in the report covers only a subset of the full list of outcomes to which UNDP contributes, it is not possible to extrapolate or generalise more broadly on UNDP’s organisation-wide performance based on these results. In addition, as noted in the previous finding, the report does not provide a measure of progress over time.
UNDP carried out independent evaluations of some projects and programmes, as well as thematic areas, from 2008-2011.[38] Findings and recommendations of independent evaluations and surveys related to UNDP’s contribution to the respective expected outcomes are summarised in the Annual Report of the Administrator.[39] Overall, the 2011 report stated that results were mixed in terms of the effectiveness in achieving programme goals. While some Assessment of Development Results (ADR) reports noted “significant contributions to development results,” others identified areas of intervention that had not yielded expected results, due to, for example, overly ambitious targets or UNDP’s secondary role in the programme area.
In 2012, the Canadian International Development Agency (CIDA) reviewed UNDP’s development effectiveness by conducting a meta-synthesis of a sample of evaluation studies published between 2009 and 2011. Most of the studies were either country programme evaluations or global thematic evaluations. The meta-synthesis findings were generally positive: 66 per cent of UNDP evaluations were considered to demonstrate satisfactory or highly satisfactory achievement of development objectives and expected results, and 83 per cent of evaluations were found to be satisfactory or better in reporting on positive benefits for targeted groups.[40]
Finding 22: MOPAN donors at headquarters rated UNDP strong for its progress towards its goals in three of its four focus areas. In the reports reviewed, evidence of UNDP contributions to outcomes varies and is still largely associated with immediate outcomes.
According to the Annual Report of the Administrator for 2011, the bulk of UNDP’s development expenditures over the 2008-2011 period was directed to two focus areas: democratic governance (36 per cent) and achieving the MDGs and reducing human poverty (31 per cent). Approximately 22 per cent was allocated to crisis prevention and recovery, and 11 per cent to energy and environment.
The following sub-sections comment on progress in each of the four focus areas, drawing on the survey results from MOPAN members at headquarters and data provided in the Annual Report of the Administrator for 2011.
Poverty and Achievement of MDGs
When asked about the organisation’s progress towards its goal of supporting countries in achieving the MDGs, MOPAN members at headquarters rated UNDP strong. The Annual Report of the Administrator for 2011 provided in-depth information on two of the seven outcomes in this focus area.
UNDP has informed government strategic and action planning by providing technical advice, guidelines and other tools (e.g., the MDG Acceleration Framework). An annex to the Annual Report of the Administrator for 2011 noted that, “UNDP has contributed to raising awareness to accelerate the national achievement of MDGs, particularly by supporting governments to develop clearly defined targets and indicators, prepare a detailed costing plan, and produce regularly-issued MDG reports.”[41] However, the report also indicated that the improvement of plans and strategies has not necessarily translated into more sustainable income, employment, and improved social protection for vulnerable groups, which is the desired higher level change articulated in one of its outcome statements (Outcome 1.2).[42]
In addition, data on the implementation side of UNDP’s work, which includes piloting initiatives, is mixed. On the one hand, the Annual Report of the Administrator for 2011 suggested that some pilot initiatives that were scaled up eventually benefitted millions of people.[43] On the other hand, it also noted that innovative and successful local-level and pilot projects (including those addressing sustainable income, employment and social protection) are not consistently scaled up.
Democratic Governance
MOPAN members at headquarters rated UNDP strong for its progress in supporting countries in fostering democratic governance. The Annual Report of the Administrator for 2011 provided in-depth analysis of three of the nine outcomes in UNDP’s democratic governance focus area (related to UNDP’s support for electoral systems, justice systems, and women’s empowerment).
In the section on UNDP’s election support (GOV 2.2), the report provided some data linking outputs to immediate outcomes. Its election-related deliverables were defined as contributions in policy/planning and awareness raising, including convening groups and providing a space for participation. The combination of policy support and awareness raising appears to reflect UNDP’s assumption that both democratic participation and the development of structures and policies around elections are crucial for achieving lasting democratic change.
Examples highlighted in the report focus on UNDP’s work in setting up voter registration systems, improving civil registries, election awareness-raising and training (for example, of women and indigenous groups to increase their numbers as voters and candidates). UNDP’s outputs were linked to some reported contributions: increased numbers of eligible voters, greater participation in electoral processes (including by female voters), and improvements in the functioning of electoral systems, although the extent of UNDP’s contribution to these is less certain.
UNDP carried out an evaluation of its contribution to strengthening electoral systems and processes in 2012, which underlined its effectiveness in delivering technical assistance to governments.[44] Other reports and ADRs point to longer term challenges in ensuring sustainable electoral structures and institutions.
Crisis Prevention and Recovery
UNDP’s crisis prevention and recovery focus area addresses disaster risk reduction, as well as recovery and support to countries in transition, conflict or other special situations. UNDP’s support of governments in the area of crisis prevention and recovery was rated strong by survey respondents. The 2011 annual report provided in-depth analysis of two of the six UNDP outcomes in this focus area. The strongest evidence of results over the period related to improved justice and citizen security (CPR 3.5). In summarising UNDP’s contribution in this area, the report highlighted the achievement of the following results: products and services provided to re-establish justice and security services in the aftermath of crises; strengthened national capacity to improve the responsiveness and accountability of justice and security institutions; strengthened legal and penal capacities; and community empowerment and citizen security. All of these results are in line with the type of support outlined in UNDP’s strategic vision on assistance in crisis-affected countries.[45]
Energy and the Environment
From 2008 to 2011, UNDP’s portfolio in this focus area increased by 19 per cent, and its work related to climate change increased by 300 per cent.
The survey asked separate questions on energy and environment. On the first question, which focused on supporting countries in managing energy for sustainable development, 54 per cent of respondents rated UNDP as adequate or above, and 29 per cent answered ‘don’t know’. In response to the second question, which focused on supporting countries in managing the environment for sustainable development, 67 per cent of donors rated UNDP as adequate or above and 21 per cent answered ‘don’t know’.
The Annual Report of the Administrator for 2011 highlighted two outcomes in this focus area: development plans and programmes integrate environmentally sustainable solutions, and governments and communities have the capacity to adapt to climate change. Both outcomes rely on UNDP’s expertise in budgeting, assessments, planning and convening, policy guidance, and training. The report noted the adoption of initiatives, strategies, and plans intended to address environmental sustainability, climate change adaptation, and access to environmental services. Higher level development results were specified in some areas (e.g., environmental protection of land, access to low-emission and climate-resilient energy solutions) but not others, and there was limited evidence of how UNDP initiatives contributed to these.
4.3 Evidence of Extent of Contribution and Relevance at Country Level
4.3.1 Overview
This section presents the assessment of evidence of UNDP’s contributions to country level results and its relevance to stakeholders. By separating the KPI at the organisation-wide level from KPIs at the country level, MOPAN recognises the demand-driven nature of many of the activities of a multilateral organisation and the key role that is played by its country programming or strategy document, where expected results at the highest level (outcomes and impact) reflect a shared responsibility between the multilateral organisation and the partner country.
UNDP’s results and reporting at country level[46]
UNDP’s work at the country level is based on a five-year programming cycle which is aligned with the cycle of the United Nations Development Assistance Framework (UNDAF). UNDP plans its work at country level through a number of key documents, including the Country Programme Document (CPD) and the Country Programme Action Plan (CPAP). These country programming documents include frameworks of expected results that are usually linked to UNDAF outcomes and related Millennium Development Goals.
UNDP communicates its results through the Results Oriented Annual Report (ROAR) and Country Programme Performance Summary; however, these are internal documents and were not systematically available for this assessment. The Assessment of Development Results (ADR) reports, which are independent evaluations conducted by UNDP’s Evaluation Office and the only publicly available documents on the performance of UNDP’s country programmes, aim to provide an assessment of outcomes and UNDP’s contribution to them.[47] However, ADR coverage is limited: according to the Evaluation Office website, only 51 countries have been the subject of an ADR evaluation since 2002, and the target of 15 to 18 ADR reports per year set forth in the Guidelines for an Assessment of Development Results (2009) has yet to be reached.
UNDP also publishes reports on programme and project evaluations (conducted at mid-term and at the end of the programme cycle) and independent outcome evaluations conducted at the end of a programme cycle that are designed to assess how results contribute to changes in development conditions.[48] However, this is not systematic, and the selection of programmes or projects to be evaluated is based on country offices’ strategic decisions and their evaluation plans.
Data used for this assessment
For this pilot, the country-level assessment is based on data from a sample of five of the nine MOPAN survey countries (Cambodia, DRC, Ghana, Honduras and the Philippines), which were proposed by UNDP based on the availability of results data.
In the document review, the Assessment Team reviewed results information provided from the most recent programming cycle in these countries (e.g., CPD, CPAP, ADR, outcome evaluations of country programmes, other external reviews, assessments or evaluations carried out during the programme cycle). ADRs were only available for four countries.[49] Attention was paid to the following elements: quality of the results statements; the relevance of indicators, baselines and targets; the strength of the link between results statements and results achieved; the quality of evidence presented to substantiate the results achieved, including an assessment of contribution; and the overall performance story.
In the survey, respondents were asked questions that were tailored to each of the five countries. Interviews with senior UNDP country office staff also informed the analysis of context and ensured that the Assessment Team had the right documentation.
4.3.2 KPI B: Evidence of Extent of Contribution to Country-Level Goals and Priorities
This KPI indicates that an effective organisation would demonstrate contributions to country-level goals and priorities. The assessment reviewed survey data from in-country stakeholders (donors and direct partners) and documentation provided by five UNDP country offices following interviews with senior country office staff.
Overall assessment
Figure 4.3 shows the overall rating for this KPI based on the review of UNDP’s contribution to country level goals and priorities – as expressed in UNDP reports and as indicated by surveyed stakeholders. It also shows the criteria that MOPAN used to assess each country and determine the overall rating (criteria met are indicated in blue).
Figure 4.3 KPI B: Evidence of Extent of Contribution to Country-Level Goals and Priorities, Rating
Evidence of Extent of UNDP’s Contributions
Finding 23: UNDP’s reports on results provide a mixed picture of outcome achievement and there are notable gaps in data.
UNDP reporting at the country level provides considerable evidence that its programming is relevant to government priorities, but often does not provide a clear picture of how UNDP is contributing to development results on the ground and of the scale and magnitude of that contribution. This may be due to several factors related to UNDP's practices and systems that limit its ability to present a full picture of results at a higher level.
Theory of change – As evident in the sample of five countries reviewed for this assessment, UNDP offers a wide range of products and services across its focus areas in response to government priorities. It does not, however, offer a theory of change for each focus area within that country context. Country reports provide few examples of programming areas where the links between outputs and higher-level national results are based on an underlying theory that could be tested.
Insufficient detailed data on outputs and outcomes – Detailed information on results at the country level is often not available, and the link between outputs and the achievement of higher level outcomes is insufficiently demonstrated. Greater detail may be available in internal reports, such as the ROARs.
Clarity of outputs and outcomes – UNDP works in close collaboration with government, and at times this creates confusion between UNDP’s outputs and government outputs. In addition, outputs and outcomes are sometimes mislabelled in UNDP frameworks. Outputs are often framed as strengthened capacities of organisations – a level of change that goes beyond the provision of UNDP’s direct products and services. While most indicators are quantitative, they do not always measure something that is relevant to the desired result.
In many cases, outcome statements refer to more than one development result, creating confusion as to how the outcome might be measured. For example, working to achieve poverty reduction and environmental sustainability requires multiple indicators, presumably moving in the same direction, to demonstrate progress. In some countries, results-oriented planning was first used only in the most recent programme cycle.
Measuring success of pilot projects – UNDP uses its leveraging role and strong collaboration with government to establish pilot programmes or projects. However, such pilots often lack the parameters or targets needed to measure their success, which limits the possibilities for scaling up and replicating them across regions or countries.
Finding 24: Assessing the contribution of UNDP’s efforts to build national capacity is a challenge and is limited by the long-term nature of desired results.
UNDP works to improve the capacity of governments and other actors in a country by providing policy and planning advice, support on advocacy, research and data collection products, training, and pilot programmes.
One of the challenges observed in the UNDP programme documents reviewed was a lack of specific outputs, baselines and targets for these capacity development efforts. This makes it difficult to determine the extent to which outputs were achieved and completed, and for UNDP to report on its results within a reporting period.
Assessing the contribution of UNDP’s capacity development efforts is also limited by the scope and long-term nature of UNDP’s desired results with governments. For example, convening conflicting parties in favour of a peace process can take many years and involves establishing the processes and goodwill needed for dialogue and training. In Ghana, the effects of training on conflict resolution and transformation were not yet apparent in evaluations.[50] In the DRC, initiatives supporting conflict resolution had not yet led to any appreciable change. These are just a few examples of long-term processes in support of government priorities.
Evidence of Results Achieved, by Country
The following assessment of results achieved by country is drawn from both the survey results and an analysis of documentation. Specific examples are given of results where the UNDP country office provided particularly strong documented evidence that a result was achieved. Since UNDP country offices define their programming outcomes in response to national priorities, survey respondents’ views are shown in separate tables below.
Cambodia
The majority of MOPAN donors in-country and direct partners rated UNDP Cambodia as adequate in all programming areas. However, there was a high proportion of ‘don’t know’ responses from MOPAN donors in-country on outcomes B3 and B1 (40 and 60 per cent respectively).
During the 2006-2010 programming period, the largest proportion of resources (approximately 46 per cent) was assigned to strengthening democratic local governance, and 16 per cent to reinforcing democratic institutions at the national level. This reflects the imperative of Cambodia’s transition to peace and the need for a participatory system of democratic governance.
UNDP reported significant results in Cambodia in the democratic governance focus area, as described in the sidebar. In particular, two UNDP projects with the Ministry of Interior helped to develop the structures and systems of decentralised local governance in Cambodia. A number of results under the Project to Support Democratic Development through Decentralization and Deconcentration (PSDD) improved the administrative and financial structures at the sub-national level. The second project, Democratic and Decentralized Local Governance (DDLG), established 24 local government associations, thus encouraging greater partnership and accountability of local government.
It is not clear, however, how these two projects contributed to higher level development results that have benefitted local communities, and the poor in particular. It may be too early to assess higher level outcomes at this stage. There is evidence nonetheless that these types of decentralised governance structures provide a sustainable platform for future development at local levels in Cambodia.
Deconcentration and Decentralisation in Cambodia

UNDP Outcome as identified in the CPAP (2006-2010): Increased efficiency and effectiveness of the public administration and decentralised governance structures to deliver basic services.

UNDP Intended Outputs:
- key roles and functions of sub-national government bodies in delivery of public services identified and translated into legal instruments;
- comprehensive fiscal decentralisation strategy developed, including development of a legal framework for taxation and sources of revenue;
- capacities of communes for decentralised planning, management and delivery of public goods and services further strengthened.

What UNDP Achieved (with the Ministry of the Interior):
- delivered policy analysis for the Organic Law outlining roles and responsibilities for decentralisation and deconcentration;
- drafted legal documents related to the Organic Law and supporting guidelines and studies on the implications of the law, in particular to identify functions for transfer to the new sub-national administrations;
- designed a training programme and guidelines for government staff in anticipation of regulations soon to be issued on sub-national planning and budgeting;
- designed systems for data management and monitoring, and produced manuals and guidelines governing the procedures to be applied at the sub-national level, primarily by the Commune Councils and more recently by the districts;
- developed and is implementing a gender mainstreaming strategy.

Development Results: While stakeholders have commended the establishment of decentralised local governance structures, there is little evidence of whether these structures are more efficient and effective or have led to the delivery of improved basic services.

Sources: UNDP (2010) Cambodia ADR; UNDP-EC (2011) Terminal Evaluation Strengthening Democratic and Decentralized Local Governance in Cambodia (DDLG); Osana International Inc (2010) Final Evaluation of “Project to Support Democratic Development through Decentralization and Deconcentration (PSDD)”
Democratic Republic of Congo
When asked about UNDP DRC’s effectiveness in achieving its expected outcomes, 51 per cent of MOPAN donors in-country and direct partners rated UNDP inadequate or below, and 42 per cent provided ratings of adequate or above.
Figure 4.5 UNDP Stakeholder Survey – DRC, Mean Scores by UNDP Country Programme Outcome
The assessment considered evidence from UNDP DRC’s 2008-2012 programming cycle. The Country Programme Action Plan (CPAP), which was revised in 2010, includes eight outcomes and covers programming in three of UNDP’s four focus areas: democratic governance, poverty reduction and MDG achievement, and energy and environment. One Assessment of Development Results (ADR) was completed for DRC in 2012, during the final year of the country programme cycle.
During the 2008-2012 programming period, the largest proportion of resources[51] was assigned to the Good Governance Support Programme (approximately 61 per cent) and to the Programme against Poverty in Support of the MDGs (roughly 26 per cent).
The implementation of UNDP’s programme in DRC has been challenging. Continued violence in the country and a multiplicity of competing development needs have led to a highly diversified country programme.
For the assessment of results in DRC, the MOPAN team was able to review complete Results-Oriented Annual Reports (ROARs) for 2009 and 2010, and a partial ROAR (narrative section) for 2011. These documents present information on progress made in achieving indicator targets; the information provided, however, is not always relevant to the corresponding indicators. Although most of the outcomes are reported to be on track according to the self-reported data in the ROARs, the Outcome Evaluation of UNDP Country Programme 2008-2012 (2012) concluded that UNDP had made only partial progress towards its country-level goals and that many outcomes were not on track to meet objectives. The evaluation noted that UNDP had made progress in strengthening national capacities for the attainment of the Millennium Development Goals (B6) with the preparation of the Growth and Poverty Reduction Strategy Paper (GPRSP 2).
Other positive results, albeit at lower levels, were noted by the 2012 ADR (see sidebar). For example, the support to the Cour des Comptes (Court of Auditors) is seen to be a well-targeted and efficient programme that contributes to making the public administration more efficient and reliable (B4). Although UNDP’s programme did not meet its targets at the outcome level, there are lower-level interventions that were successful.
DRC Public Finance Management Reform

UNDP Expected Outcome: The legal, regulatory and ethical bases of a modernised, transparent administration are put in place, and key ministry structures, workforce, and jobs at the national and provincial levels are made efficient.

UNDP Intended Output (intermediate level result): Capacity of the Cour des Comptes and Parliament to supervise is reinforced.

What UNDP Achieved (with the Cour des Comptes):
- supported the publication of annual reports and specialised reports which were submitted to Parliament;
- convened meetings between the Assembly, the Senate and the Cour des Comptes in order to build mutual understanding;
- trained staff at central and provincial levels on the government’s Public Investment Programme;
- provided support for needs assessment and legal support.

Development Results: Financial information provided by the Cour des Comptes assists Parliament in understanding finance and helps in the overall work of the government.

Sources: UNDP (2012) DRC ADR; UNDP (2011) Outcome Evaluation UNDP Country Programme: DRC 2008-2012
Ghana
When asked about UNDP Ghana’s effectiveness in achieving its expected outcomes, the majority of survey respondents (76 per cent) rated UNDP adequate or above. UNDP’s contributions to increasing attention to the MDGs in Ghana's national development planning frameworks and policies were rated strong or very strong by 77 per cent of respondents.
Figure 4.6 UNDP Stakeholder Survey – Ghana, Mean Scores by UNDP Country Programme Outcome in Ghana
UNDP Ghana’s thematic programming has undergone some changes over the last two programme cycles. The governance and environment thematic areas have remained largely unchanged, except for the incorporation of elements of crisis prevention and recovery. In the 2006-2010 programming cycle, the poverty reduction thematic area was split into two: strategic/economic policy (upstream work) and sustainable livelihoods and employment (downstream projects that serve to inform policy).
During the period, 31 per cent of programme resources were dedicated to achieving the MDGs and reducing poverty, 27 per cent to energy and the environment, and 20 per cent to fostering democratic governance.
UNDP has reported results in the strengthening of democratic governance outcomes through support for the National Architecture for Peace, which was credited with reducing conflict during the 2008 elections in Ghana. However, evaluations have noted that the longer-term sustainability of this initiative remains a question.
UNDP Support to Democratic Governance in Ghana

UNDP Expected Outcome: Enhanced mechanism put in place for conflict prevention, management and resolution
Baseline: No formal institutions for managing conflicts in the country
Target: Establish institutions for managing conflicts at all levels of society

What UNDP Achieved:
- established the National Peace Council and Regional Peace Advisory Councils and provided the architecture for the councils;
- provided training in conflict prevention and peace education for the peace councils as well as women and youth;
- connected the different implementing partners into action around specific issues, for example, managing the potential for pre-election violence in the Northern Region.

Development Results: The National Peace Council and Regional Peace Advisory Councils were established. They were credited with having prevented or contained conflicts during the 2008 elections and act as an early warning system on conflict in the country.

Sources: UNDP (2011) Ghana ADR; CDA (2009) The Conflict Prevention and Resolution Portfolio of UNDP Ghana Evaluation Report
Honduras
The assessment focused on UNDP’s planned results expressed in the 2007-2011 Country Programme Document and Country Programme Action Plan. In broad terms, the programme focused on access to water and sanitation and education; citizen security; reform of state institutions; strengthening democracy; strengthening capacity for rural development; improved capacities for prevention and treatment of HIV/AIDS; and integrated environmental policy and management. Most of these areas provide continuity from the previous programming cycle, reflecting UNDP’s core strategy of long-term capacity development of public institutions. The planned five-year investment was USD 116 million, including USD 3.8 million from UNDP’s regular resources (only 3 per cent of the total funds for the country programme).[52]
When asked about UNDP Honduras’ contributions to achieving its expected outcomes, the majority of respondents (51 per cent) rated UNDP as strong or very strong. There was a high level of ‘don’t know’ responses (60 per cent) from in-country donors on UNDP’s contribution to strengthening the capacity of the Honduran government to improve land-use planning systems.
Figure 4.7 UNDP Stakeholder Survey - Honduras, Mean Scores by Country Programme Outcomes
The political crisis in Honduras in 2009 had important ramifications for UNDP and several of its initiatives with the government were interrupted. A significant increase in violence and the effects of the financial crisis required UNDP to adjust strategies and planned outcomes.
For the assessment of results in Honduras, the MOPAN assessment was able to review the Results-Oriented Annual Reports (ROARs) for 2010 and 2011. Most of the outputs were being completed as planned and outcome areas were reported to be on track as of 2011, with indicators showing positive change (even though outcomes have not been achieved).[53]
The UNDP outcome area focusing on democratic governance was evaluated in 2010. The evaluation, together with UNDP’s self-reported data in the ROAR, indicates that UNDP has contributed to inclusive public policies (specifically focused on gender equality and on the reported use of UNDP’s knowledge products).
The outcome evaluation provided evidence of policies adopted, plans completed, and new indicators included in government plans and programmes. Thus, there is evidence of use of the products and services that UNDP provides, but limited evidence of progress towards more far-reaching change. (For example, if gender-sensitive budgets have been approved in state institutions, what has that represented in terms of budget re-allocations and service provision?)
Some examples of the results that UNDP helped to achieve in democratic governance are illustrated in the sidebar. However, UNDP’s work in other areas of democratic governance (e.g., to modernise political parties and strengthen networks of civil society organisations) has had fewer tangible effects. Although results are in progress, it is difficult to discern from the available documentation what UNDP’s contributions are and to understand why and how it engages in certain areas but not others.
UNDP Support for Democratic Governance in Honduras

UNDP CPAP Outcome 3.3: Strengthened representative democracy at local and national level for the implementation of public policies
The national civil registry office had not included up to 30 per cent of the population in the Department of Gracias a Dios, inhabited primarily by the Miskito indigenous group. UNDP provided assistance for the development of a national strategy and plan to reduce under-registration. As of 2011, under-registration had been reduced by 25 per cent.

UNDP CPAP Outcome 1.4: Population, with emphasis on excluded groups, advance towards universal and equitable access to water and sanitation
A national and local policy was approved and governance of water and sanitation was addressed in 36 municipalities. New and improved water and sanitation systems are benefiting almost 3,000 families.

UNDP CPAP Outcome 2.3: Small rural producer groups (agricultural and non-agricultural) have improved their productivity, access to markets, and competitiveness
More than 2,600 producer groups were organised in 128 municipalities by 2011, benefitting 43,217 people. These groups developed seven production chains (coffee, basic grains, dairy, and others), generating more than 5,000 full-time jobs.

Note: This data reflects the work of a project implemented by UNDP, IFAD, the Government of Honduras, and the Central American Bank for Economic Integration.
The Philippines
The majority of survey respondents (79 per cent) rated UNDP adequate or above for contributing to the achievement of its expected outcomes. Notably, 81 per cent of survey respondents gave ratings of strong or very strong for UNDP’s contribution to country goal B1.
Figure 4.8 UNDP Stakeholder Survey - Philippines, Mean Scores by Country Programme Outcomes
The assessment examined results in UNDP Philippines’s last completed programming cycle (2005-2009), which was extended to 2011. The outcomes mirror those in the previous country programme (2002-2005) and the current country programme (2012-2016).
Core resources were reduced radically at the beginning of the programme cycle. Consequently, the MDG and governance areas were confined to limited amounts of core resources. The environment and crisis prevention practice areas generated significant non-core resources and comprised the bulk of the overall resources available to the country programme. In 2007, there was a significant decline in mobilisation of non-core resources and in delivery on planned expenditures.
UNDP Support of Pro-Poor Budgeting in the Philippines

UNDP Expected Outcome: To improve the policy and planning framework to more extensively incorporate effective, people-centred approaches to development planning, budgeting and monitoring, with a special focus on women, children and vulnerable groups.

UNDP’s products and services included support for statistical systems, the convening of like-minded groups to develop a pro-poor budget, and the publication of pro-poor research and analysis in favour of the MDGs.

Development Results: Five billion Philippine pesos were reallocated in favour of health, education, agriculture and the environment. The Department of Budget Management amended its guidelines and procedures to favour the MDGs. The poverty indicators and data collected at the municipal and household level assist in developing targeted pro-poor assistance programmes.

Sources: UNDP (2009) The Philippines ADR; Outcome Evaluation Achieving the MDGs and Reducing Human Poverty Programme, United Nations Development Programme – Philippines
UNDP Philippines made progress towards outcomes in support of poverty reduction and the achievement of the MDGs in the most recent programming cycle.
UNDP has made strong links between these two outcomes in its programming (see sidebar).
The 2009 ADR and Outcome Evaluation in 2008 noted that these particular outcomes led to concrete changes in the way the government of the Philippines carries out its budgeting and decision making in favour of the poor.
4.3.3 KPI C: Evidence of Extent of Contributions to Relevant MDGs
This KPI recognises that multilateral organisations have made commitments to the Millennium Development Goals (MDGs) and assumes that they explicitly articulate or make links to the MDGs to which they contribute at the country level. The MDGs are collective, global targets that have, in many cases, been adapted by partner countries in defining their priorities. While partner countries are responsible for making progress toward the MDGs, multilateral organisations ensure that their aid, knowledge, and other types of support facilitate achievement of these goals.
Links between UNDP’s Objectives and the MDGs
UNDP has a mandate for monitoring development progress globally and a special interest in the MDGs. Its tasks include: supporting coordination of the UN and development stakeholders, providing technical and financial support for the preparation of MDG country monitoring reports, and forging closer collaboration within UN Country Teams.
Through its focus area specifically dedicated to the achievement of the MDGs (i.e., achieving the MDGs and reducing human poverty), UNDP assists countries in formulating, implementing and monitoring national development strategies based on the MDGs.
Overall Assessment
Figure 4.9 shows the overall rating for this KPI based on the review of UNDP’s contribution to relevant MDGs – as expressed in UNDP reports and as indicated by stakeholders surveyed. It also shows the criteria that MOPAN used to assess each country and determine the overall rating (criteria met are highlighted in blue).
Figure 4.9 KPI C: Evidence of Extent of Contribution to Relevant MDGs, Overall Rating and Criteria
Evidence of Extent of UNDP’s Contributions
Finding 25: UNDP plays a facilitating and monitoring role in favour of the MDGs across most countries.
UNDP is mandated to monitor progress in achieving MDGs with partners and to help direct resources where they are most needed. In the countries where it works, UNDP advocates for the MDGs and uses tools such as the Human Development Reports (HDRs) and national MDG reports to encourage and support country efforts to scale up activities to achieve the MDGs.
As a trusted government partner, UNDP is well placed to provide advice on aligning national priorities and planning processes with the MDGs, and to coordinate with UN Country Teams in leading advocacy in favour of these goals. UNDP provides governments with tools such as the MDG Acceleration Framework and MDG Breakthrough Initiative to heighten focus on lagging goals, and with policy studies and reports to drive discussion on solutions to the most pressing gaps.
Most ADRs reviewed note that UNDP has advocated for the Millennium Declaration and the MDG agenda, developed institutional capacities at the national and local level for MDG-based development strategies, and helped to monitor and report on progress. UNDP’s support to MDG advocacy in the Philippines, summarised in KPI B, provides an example of re-allocation of government budget resources to MDG areas.
Finding 26: UNDP’s programming documents provide linkages to MDGs, but the reports often lack evidence of UNDP’s contributions to the country’s progress towards the MDGs.
UNDP’s role in contributing to the MDGs is not restricted to a monitoring/advocacy function. In the countries sampled for this assessment, UNDP developed projects and programmes to respond to MDG 1 on poverty eradication, MDG 3 on gender equality and the empowerment of women, MDG 6 on combating HIV/AIDS, and MDG 7 on environmental sustainability.
Both CPAPs and UNDAFs, which are agreed between UN agencies and government counterparts at the country level, delineate programming priorities and create linkages to MDGs. All UNDP country offices make explicit links to the MDGs in their planning frameworks, but UNDP’s contribution to immediate outcomes (or to the country MDG goals or targets) tends to be missing or insufficient. This makes it challenging to assess UNDP’s contribution to national efforts to meet MDG targets as captured in this KPI.
Overview of Survey Data, by Country
Finding 27: Surveyed stakeholders rated UNDP adequate for its contributions to national MDGs.
Cambodia
When asked about UNDP’s contributions to MDGs in Cambodia, the majority of respondents (68 per cent) rated UNDP adequate or above. The highest mean score was for UNDP’s contributions related to Goal 9 (an MDG specific to Cambodia that focuses on reducing casualties from land mines and explosive remnants of war), though this question also generated a high percentage of ‘don’t know’ responses (42 per cent). One-third of respondents were unfamiliar with UNDP’s work in relation to Goal 7 (ensure environmental sustainability).
DRC
Respondents in the DRC had mixed views on UNDP’s contributions to MDGs: 48 per cent rated UNDP as adequate or above, and 39 per cent as inadequate or below. The highest mean scores were for UNDP’s contributions to Goal 3 (promote gender equality and empower women) and Goal 6 (combat HIV/AIDS, malaria, and other diseases). A high rate of ‘don’t know’ responses (39 per cent) was recorded for Goal 7 on environmental sustainability.
Ghana
UNDP was rated adequate or above by 84 per cent of survey respondents for its contributions to national efforts in achieving the MDGs in Ghana. Respondents rated UNDP strong for its contributions to MDG 1 (eradicate extreme poverty and hunger) and very strong for its contributions to MDG 3 (promote gender equality and empower women).
Honduras
When asked about UNDP Honduras’ contributions to MDGs at the country level, the majority of respondents (86 per cent) rated UNDP adequate or above. The highest mean scores were recorded for Goal 1 (eradicate extreme poverty and hunger) and Goal 3 (promote gender equality and empower women). However, 40 per cent of donors in-country rated UNDP’s contributions to Goal 7 on environmental sustainability as inadequate.
Philippines
The majority of respondents (70 per cent) rated UNDP Philippines as adequate or above for contributing to national efforts to achieve the MDGs. UNDP’s work in relation to Goal 3 (promote gender equality and empower women) was rated highest.
4.3.4 KPI D: Relevance of Objectives and Programme of Work to Stakeholders
For this KPI, MOPAN assessed relevance primarily as a measure of the extent to which a multilateral organisation supports country priorities and meets the changing needs of direct partners and the target population. The assessment is based exclusively on survey data obtained from direct partners and MOPAN donors in-country in the five countries selected for the pilot component.
Overall Assessment
Across the five countries surveyed, UNDP was considered strong overall for responding to key national development priorities, and adequate for providing innovative solutions to help address development challenges and adapting its work to the changing needs of partner countries.
Respondents in Honduras and the Philippines were generally more positive about UNDP’s relevance than respondents in Cambodia, the DRC, and Ghana.
Figure 4.10 shows the overall assessment rating and the mean scores on the three survey questions on which the assessment is based.
Figure 4.10 KPI D: Relevance of Objectives and Programme of Work to Stakeholders, Overall Rating and Survey Mean Scores by Country
5. Conclusion
This conclusion departs from the specific ratings of the MOPAN assessment and looks at the major messages that can contribute to dialogue between MOPAN, UNDP and its partners. It draws on the survey findings and principal observations of the assessment of UNDP’s practices and systems (Key Performance Indicators 1 – 19) and the assessment of UNDP’s results (Key Performance Indicators A-D).
UNDP is recognised for its coordination role in the United Nations system.
UNDP’s coordination role within the United Nations system was seen as a key organisational strength in both the 2009 and 2012 MOPAN assessments. This was reflected in comments to open-ended questions in the 2012 assessment, in which survey respondents highlighted UNDP’s roles as Chair of the United Nations Development Group (UNDG), lead agency for the Millennium Development Goals (MDGs), publisher of the Human Development Reports (HDRs), and coordinator of the Delivering as One (DaO) initiative within countries.
Direct partners value UNDP and gave it strong ratings in most of the indicators assessed; however, they continued to identify UNDP’s bureaucracy and administrative inefficiencies as a key area for improvement.
Direct partners were very positive in their assessment of UNDP’s practices and systems; they rated UNDP strong on its overall organisational effectiveness and on all but two key performance indicators (UNDP’s adjustment of procedures to local conditions and capacities, and use of country systems). In comments to open-ended questions, the area for improvement cited most frequently by respondents was UNDP’s heavy bureaucracy related to decision-making, human resources, planning processes, and its systems for managing funds. This was also identified as a major organisational challenge in 2009.
UNDP is considered strong in mainstreaming gender; its integration of other cross-cutting priorities received mixed ratings.
The survey and document review commended UNDP for its practices and systems to mainstream gender equality, recognising the significant improvements made following an evaluation in 2005. UNDP was considered adequate in mainstreaming South-South cooperation. It received mixed ratings on its integration of other cross-cutting priorities. Surveyed stakeholders considered UNDP strong in integrating capacity development and human rights-based approaches (HRBA) in its programming. The document review rated UNDP adequate in integrating capacity development and inadequate in HRBA. Due to political sensitivities and changing Executive Board directives, HRBA is not an explicit cross-cutting theme for UNDP; its policies in this area do not provide comprehensive guidance, and the organisation lacks the accountability mechanisms to ensure mainstreaming.
UNDP has sound financial accountability systems.
UNDP has strong systems in place for financial audits, strong policies for anti-corruption and risk management, and procedures to address financial irregularities.
UNDP’s Evaluation Office works independently and has strong mechanisms to ensure the quality of evaluations.
UNDP’s evaluation policy, revised in 2011, establishes the Evaluation Office’s independence and ensures that all evaluations have a management response. The organisation has also recently put in place mechanisms to improve quality assurance processes for evaluations at global, regional and country levels, and is considered by stakeholders to adequately use evaluation findings in its decisions on programming, policy and strategy. Continued work is required, however, to ensure compliance with the planned evaluation coverage at the country level.
UNDP’s commitment to management for results has not yet translated into perceived or documented changes in the practices assessed by MOPAN.
This MOPAN assessment provides a snapshot of UNDP’s organisational effectiveness in the early stages of implementation of its ambitious Agenda for Organizational Change. The assessment found that positive changes in systems and practices have already resulted from this process, and that others are well underway. UNDP is clearly committed to results-based management across the organisation and is working to improve corporate and country level planning, monitoring, and reporting on results by: formulating results statements with increased clarity and precision; making links between outputs and outcomes more explicit; identifying measurable indicators and milestones; and consistently defining and providing data on baselines and targets. It has worked closely with donors in this process. UNDP has also taken steps to better link resources to results, improve its management of staff performance, and enhance the design of its projects and programmes.
Since the initiation of the Agenda for Organizational Change, however, reports to the Board do not always provide sufficient detail on progress in the different components of this agenda.
UNDP faces the challenge of developing robust results frameworks while remaining responsive to country priorities and demands.
This challenge is reflected in the assessment ratings, in which UNDP is seen as strong in aligning its work with national development priorities, but less successful in designing frameworks that clearly link results at all levels (project, programme, sector, country, and corporate results), in developing precise indicators to track these results, and in aggregating results achieved.
Development Results Component
Surveyed stakeholders generally hold positive views of UNDP’s achievement of results.
Donors at headquarters believe UNDP is making strong progress in achieving its goals in its corporate focus areas, and stakeholders at the country level find UNDP to be relevant and performing fairly well in contributing to the MDGs and to its expected outcomes.
UNDP’s reporting on results achieved remains an area where further attention is required.
UNDP’s current reporting at corporate and country level does not yet provide a holistic picture of organisational performance. Although evaluations and reports point to the success of individual projects and programmes, documentary evidence of UNDP’s achievement of corporate and country level outcomes is limited and inconclusive. However, UNDP is working to improve its reporting on results.
[1] The AfDB, UNDP, UNICEF, and the World Bank
[2] UNDP Strategic Plan, 2008-2011: Accelerating Global Progress on Human Development (2008) http://web.undp.org/execbrd/adv2008-annual.shtml
[3] Midterm Review of the UNDP Strategic Plan and Annual Report of the Administrator (2011)
[4] This component was tested in 2012 with the African Development Bank, UNICEF, UNDP, and the World Bank.
[5] The full methodology is presented in Volume II, Appendix I.
[6] MOPAN criteria for country selection include: multilateral organisation presence in-country, presence and availability of MOPAN members, no recent inclusion in the survey, the need for geographical spread, and a mix of low income and middle income countries (middle income countries being subdivided into lower middle and upper middle).
[7] The normal convention for statistical significance was adopted (p≤.05).
[8] These organisations were selected because they were assessed by MOPAN in 2009. The 2009 assessment focused on organisational effectiveness and was based only on survey data.
[9] Different organisations use different terms to refer to their planned results – they may be called goals, objectives, outcomes, etc.
[10] The “best fit” approach is used in public sector institutions (see Ofsted, 2011: Criteria for making judgements).
[11] According to the agreed methodology (see section 2.3), the document review considered most of the MIs, but was asked to provide MI ratings in only 16 of the 19 KPIs. In some cases, however, there was insufficient data to provide a rating (e.g., KPI 14).
[12] Midterm Review of the UNDP Strategic Plan and Annual Report of the Administrator (2011), http://web.undp.org/execbrd/adv2011-annual.shtml
[13] Throughout this report, the assessment of UNDP’s DRF and MRF is based on the revised versions as presented in the annexes of the Mid-Term Review of the UNDP Strategic Plan and Annual Report of the Administrator (2011).
[14] Executive Board Decision 2011/14: Midterm review of the UNDP Strategic Plan.
[15] For the launch year, UNDP set coverage for the CD tracker at 25 per cent of all projects active in 2011 in each country, starting with those that began after January 2010.
[16] In June 2012, 9 of the 14 key actions planned in the management response were overdue, and 8 had not yet been initiated. However, as of September 2012, UNDP had revised the timelines in its management response and initiated work on all previously unaddressed recommendations.
[17] Implementation of UNDP Global Human Rights Strengthening Programme (GHRSP; 2008-2013) 2011 Annual Report
[18] Executive Board Decision 95/23: Successor Programming Arrangements (1995)
[19] For this MI, respondents were explicitly asked not to consider those procurement and contract management processes which use partner country systems.
[20] Annexes to the Midterm Review of the Strategic Plan and Annual Report of the Administrator: Performance and Results for 2010 (2011)
[21] The annexes of the Annual Report of the Administrator: Performance and Results 2011 indicate that 95 per cent of evaluations have management responses. The document also reports that 62 per cent of actions identified in management responses are completed or on-going for independent evaluations conducted between 2006 and 2011, and that 65 per cent of actions listed in management responses are completed or on-going for decentralised evaluations conducted between 2008 and 2011.
[22] Who are at the level of Assistant Secretary-General and Under Secretary-General, respectively
[23] Programme and Operations Policies and Procedures (2011)
[24] United Nations Results Report: 2011 Survey on Monitoring the Paris Declaration (2011)
[25] UNDP (2004). Project Implementation Units: An exceptional option only with effective safeguards.
[26] UNDP Evaluation Office Quality Assessment System for Decentralized Evaluation Reports (2012)
[27] Annual report on evaluation in UNDP 2011 (2012)
[28] Please refer to MI 2.5 for an assessment of the quality of indicators presented in UNDP’s results frameworks.
[29] Annual Report of the Administrator on the Strategic Plan: Performance and Results for 2011 (2012)
[30] Policy and institution building work performed by UN agencies on issues other than aid (e.g., trade, relations, migration, foreign direct investment, climate change, and conflict and fragility) can also have a catalytic effect on the development of countries. Hence, the Paris Declaration indicators, with their narrow focus on aid, are less suited to multilateral Member State organisations like the United Nations than they are to donors.
[31] On an exceptional basis, complete ROARs for two countries were shared with the MOPAN Assessment Team for the pilot results component (chapter 4).
[32] This strategy was initially designed to cover the period 2009-2011, but has been extended to 2013, in accordance with the extension of the Strategic Plan.
[33] Each organisation may use a different term to refer to this level of results.
[34] Please refer to chapter 3 of this report, particularly KPIs 2, 17 and 18, for an analysis of UNDP’s results frameworks and its evaluation and reporting practices.
[35] UNDP. (2010). Measuring Capacity.
[36] A list of documents consulted is provided in Vol. II, Appendix VIII.
[37] Higher level results refer to intermediate outcomes and impacts.
[38] It should be noted that the evaluations are not fully representative of the breadth of UNDP’s work.
[39] These are presented under UNDP’s reporting on Indicator 3 of the Development Results Framework (see Annex II-iii).
[40] CIDA (2012) Development Effectiveness Review of the United Nations Development Programme (UNDP)
[41] UNDP, Annexes to the Annual Report of the Administrator on the Strategic Plan: Performance and Results for 2011, par. 90
[42] UNDP, Annexes to the Annual Report of the Administrator on the Strategic Plan: Performance and Results for 2011, par. 90
[43] See for example the reference to an innovative pilot in China that sought to develop entrepreneurship among the rural population, eventually reaching 1 million farmers annually. (UNDP, Annexes to the Annual Report of the Administrator on the Strategic Plan: Performance and Results for 2011, par. 24)
[44] This report was unavailable during document review but was cited in the 2011 Annual Report of the Administrator
[45] UNDP Strategic Vision on Crisis-affected Countries. DP/2007/20/Rev.1
[46] Please refer to the sections in chapter 3 on KPIs 2, 4, 17 and 18 for the analysis of UNDP’s results-based systems and practices.
[47] UNDP. (2009). Guidelines for an assessment of development results (ADR).
[48] UNDP. (2002). Guidelines for Outcome Evaluators.
[49] While UNDP’s self-reporting is deemed confidential, UNDP agreed to provide the ROAR for Honduras as there was no recent ADR that commented on UNDP’s performance in that country. Subsequent to circulation of the draft report, UNDP also provided the ROAR for the DRC.
[50] In its communication with the MOPAN Assessment Team, the UNDP Country Office in Ghana noted that “Conflict prevention programmes always face this challenge since their success can only be measured by the absence of a conflict. Besides the attribution problem this brings up, it should also be noted that evaluations rarely look at what “did not happen” and prefer to look for development change.”
[51] It is important to note that this does not include the Global Fund to Fight AIDS, Tuberculosis and Malaria and the Common Humanitarian Funds, as they are not included in the CPD.
[52] During this period, UNDP was the principal recipient of funds from the Global Fund to Fight AIDS, Tuberculosis and Malaria. It implemented GEF projects and projects funded by other multilateral organisations such as IFAD and the European Union.
[53] The one exception is in the area of decentralisation, where the outcome is not expected to be achieved within the planned timeframe.