Minutes of the U.S. Advisory Commission on Public Diplomacy July 2010 Official Meeting

Remarks
Meeting Location: International Foundation for Electoral Systems (IFES)
Washington, DC
July 20, 2010


COMMISSIONERS PRESENT:

  • Bill Hybl, Chairman
  • Lyndon Olson
  • Penne Korth Peacock
  • Lezlee Westine
  • Jay Snyder
  • John Osborn

 


State Department Members Present:

  • Rick Ruth, Director, Office of Policy and Evaluation, Bureau of Educational and Cultural Affairs
  • Walter Douglas, Executive Assistant, Office of the Under Secretary for Public Diplomacy and Public Affairs
  • Robin Silver, Chief of Evaluation Division, Bureau of Educational and Cultural Affairs
  • Cherreka Montgomery, Director, Evaluation Measurement Unit, Office of the Under Secretary for Public Diplomacy and Public Affairs
  • Duncan MacInnes, Acting Coordinator, Bureau of International Information Programs
  • Daniel Sreebny, Senior Media Advisor, Office of the Under Secretary for Public Diplomacy and Public Affairs

 


P R O C E E D I N G S

CHAIRMAN HYBL: And I know I join with them in thanking all of you for being here. We have all members of the Commission here except one. I'd like to introduce Ambassador Lyndon Olson, our vice chair, who served as the United States Ambassador to Sweden, and Ambassador Penne Korth. Ambassador Korth was the Ambassador to Mauritania.

AMBASSADOR KORTH: Like delicious.

CHAIRMAN HYBL: Like Mauritius.

(Laughter.)

CHAIRMAN HYBL: And thank you for being here. Jay Snyder, a member who is from New York. Lezlee Westine will be with us shortly, and John Osborn from Pennsylvania; also our executive director, Carl Chan.

We'd like to thank Judith McHale, the Under Secretary of State for Public Diplomacy, for her support and cooperation with the Commission, and we have her executive assistant, Walter Douglas, with us today. And, Walter, we appreciate you and the members of your staff and Judith's staff being with us. And I'm going to pass this mike over, because this one works.

STATEMENT OF WALTER DOUGLAS

MR. DOUGLAS: Well, Mr. Chairman, thank you very much for that fine introduction, and good to see the members of the Commission. And, Mr. Chairman, members of the Commission, my colleagues and I are pleased to appear today at this meeting. This morning's topic, "Measurement of U.S. Government Public Diplomacy Efforts," is one of great importance to us all. It's something we talk about quite a bit and are always perfecting.

Under Secretary for Public Diplomacy and Public Affairs Judith McHale is firmly committed to this task. As she said in her May 13th testimony before the Senate Foreign Relations Committee, an important lesson of recent years is that we must do a better job of thinking and planning strategically. With a clear mission and a steady eye on long-term global goals, accompanied by careful assessment of programs, personnel and expenditures, we can craft proactive, purposeful and integrated programs that further U.S. policy interests and resonate with foreign publics.

The commitment that she made in front of the Senate is also fully reflected in our department's strategic framework for public diplomacy, which I hope all of you have had a chance to read. It really reflects a lot of our thinking going forward, and the strategic framework provides a clear mission statement for public diplomacy. Let me read it to you, because I think it's very important to hear this.

"To support the achievement of U.S. foreign policy goals and objectives, advance national interest and enhance national security by informing and influencing foreign publics and by expanding and strengthening the relationship between the people and Government of the United States and the citizens of the rest of the world." On the subject of evaluation and measurement, the strategic framework notes that in the past, "Tools for evaluating short and long term impact have not been uniformly used or built into planning," and then provided the following clear objectives.

"To fully utilize evaluation assessment tools to measure public diplomacy's impact, we are going to improve tracking and reporting mechanisms and incorporate measurement into all public diplomacy plans. We use evaluation measurement results to help focus public diplomacy plans on activities designed to maximize impact. And, finally, it will track short term outputs to ensure successful program management and to build baseline data and longitudinal data sets to understand long term impact on opinions and attitudes." This is a heck of a commitment we are making in our strategic framework as we go forward.

Under the strategic direction of Under Secretary McHale, I would emphasize the importance of quantifiable information on the effect, reach and efficiencies of overseas public diplomacy programs. Some of the most comprehensive reviews of public diplomacy programs are taking place in support of the under secretary's strategic framework. This work must be done and it is being done with more resources provided for this than ever before.

I will now pass the microphone to my colleagues, the experts who coordinate closely on our evaluation and measurement challenges, activities, achievements and future plans. Over there we've got Rick Ruth. He's the director of the Office of Policy and Evaluation in ECA, the Bureau of Educational and Cultural Affairs.

We have Duncan MacInnes, the acting coordinator for the Bureau of International Information Programs, or IIP. And, on my right, I have Cherreka Montgomery. Cherreka is the director of the Evaluation and Measurement Unit, or EMU, of the Under Secretary's Office of Policy, Planning and Resources, or PPR, or its nickname, "Ripper."

Thank you very much and I'll turn it over to my colleagues or back to you, if you'd like.

STATEMENT OF RICK RUTH

MR. RUTH: First of all, I want to thank you, Mr. Chairman, and all the Commissioners for this opportunity.

I want to thank everybody who's come. It's very encouraging to see an excellent turnout. I'm glad to see familiar faces: Ambassador Korth, Jay Snyder. Jay and I have some common history which shows that public diplomacy is not for the faint of heart. It's a serious undertaking, and we will talk about that a little bit today.

Obviously, all the commissioners know, I think better than some of us do, the importance of evaluation and performance measurement from their various and varied experience in the private sector and the corporate world, and the NGO and philanthropic world. Performance measurement and evaluation were essential and understood there long before they came to the Federal Government.

It's been in the government for a while, but again, as Walter indicated, it isn't everywhere in a proper and systematic way, and that's something also worth talking about today. But it is gaining traction, with the White House, the Office of Management and Budget and senior leadership at the Department always pushing for better, more actionable, more timely quantitative information about public diplomacy and exchanges. The exchanges and public diplomacy evaluations and performance measurement we conduct take place in a much larger context.

And I think it's worth pointing out that the material produced by Cherreka, by Dr. Silver to my left, and others doesn't exist in isolation. There have always been, of course, reports back from our ambassadors, from our missions in the field. Whenever ECA gives a grant to any organization to conduct an exchange, the language of that grant requires that they undertake various kinds of surveying and measurement of the impact of the program.

We have had various databases over the years. We track numbers. We also, of course, meet with the participants, both in the field and when they return, so there's a larger universe of information that we draw on for policymakers and program managers.

The part that is done by the professional evaluators, perhaps, I would say, comes at the tip of that pyramid. It is something that goes back, and I won't scare you by mentioning the 1990s. I'm not going to give you any kind of long history, but I do want to mention that it first came to the Federal Government through the Government Performance and Results Act, known as GPRA, and it goes back to what we call the “GPRA Wars” of the 1990s.

This goes back to the time of the United States Information Agency and there was genuinely a passionate debate about the evaluation of public diplomacy, because there was a camp, a very passionate and articulate camp, which said you can't evaluate public diplomacy. Not only can't you do it, you shouldn't even try to do it, because it's like pulling the petals off of a flower to see how it grows.

If you try to quantify the interactions that we have all around the world in public diplomacy, you profoundly misunderstand the quicksilver nature of the human mind and human interactions. It will never work. And you're a Philistine and a bean counter if you try to do it. Naturally, there was a counter argument which said, essentially: oh, I see. So you're telling the American taxpayer, give us your money and trust us because we're professionals. We're doing good work with it, and no actual accountability is required.

Well, we know who won that argument. We know who was supposed to win that argument, because in fact you can measure public diplomacy and you can measure the impact of exchanges. You just have to decide to do it and you have to commit the resources to do it. And, as Walter mentioned, there are more resources now being applied by Under Secretary McHale to the general area of performance measurement and evaluation than ever before in the history of public diplomacy, and we'll see some ways in which that can be done.

The Bureau of Educational and Cultural Affairs bit the bullet about 10 or so years ago now, and created the State Department's first full-time office of evaluation staffed by professional evaluators, which I am not, by the way. This has paid enormous dividends for the bureau, because in Congress, at OMB, in the senior leadership of the State Department and elsewhere, there has been a steady recognition that ECA has adopted the culture of measurement, that we have committed ourselves to steady, constant performance measurement of the impact of our exchanges, and this has been recognized.

Now, I don't believe anybody can draw a one-to-one connection between our evaluation activities and ECA's budget appropriation. But the fact that OMB and Hill staff year in and year out have made a point of recognizing ECA's evaluation commitment in all their budget documents, I think, shows that there is a connection and there is an understanding that Congress and the Administration have greater confidence in applying new funding or increased funding to those activities where you can say: "We've measured it. Here are the results." It's not simply qualitative, but quantitative as well.

Qualitative is still very important. One good story still can convey more than a ream of data. Qualitative or anecdotal is necessary, but it's not sufficient any longer. OMB and Congress want to see quantitative information now, as well. When you hear some of the things that we're going to tell you, a lot of you are going to say: "I knew that." And, it's true. You did know that, because a lot of what we do, which is true of a lot of research in social science and other fields, is confirm the intuitive.

Now, that's not a waste of money, because, as I was saying before about quantitative information, the day is gone when someone such as myself could go up to Congress and say we need more money for this. We need millions of dollars. Trust me. It works. I'm a professional. I've been doing it for years. I can assure you this is effective.

That doesn't fly at all. So even with those things which seem to be quite clear and straightforward, what you need to be able to show is that if you conduct an exchange like this, for example, 87% of the high school participants from predominantly Muslim countries will have this kind of reaction, this increased understanding, this better appreciation of these aspects of the United States and its society. And if you increase the amount, if you double it or triple it, you will still have 87%.

That's the way the research and exchanges work. So even though, from your own experience, you can say that of course you would expect this kind of exchange or person-to-person experience to have a beneficial effect, in fact you do need to document it. You do need to nail it down with quantitative information; and, in the process, you discover a great many surprises and a great many interesting bits of information, which are very useful to senior program managers and policymakers, and, I think, to a broader interested public.

What we have at ECA, then, is several things. First of all, our mantra is evaluation for use, meaning it's a very fine thing for the experts to talk to each other. But what we need in the end is for actionable, timely information to go up the chain to the policymakers, so that policymaker, that Assistant Secretary, that Under Secretary can say: this is information that I can use. I can make decisions about allocation of resources, about programs, about topics based on this data.

That's what we're looking for, always keeping in mind evaluation for use. We do this in two primary ways. First of all, ECA conducts major, independent evaluations. By independent I mean that we put these proposals out for bid to private sector firms who compete for the contract and they conduct the surveys. They do the focus groups. They crunch the data.

We review it, of course. We work with them, but we want that extra credibility, that extra independence that comes from not assessing ourselves in-house. These are lengthy evaluations. They often take 18 months or so from start to finish. They run approximately $500,000 to $700,000, which is fairly big money in U.S. Government evaluation terms -- not in the private sector. And they produce at the end a tremendous wealth of data and information, and a great deal of quantitative information about the impact and effectiveness of our programs.

Because these are so lengthy, though, we also have a briefer way of gathering information. We have an in-house, web-based survey system, which we developed, called E-GOALS. Everything has to begin with 'e' now, of course. And this is generally done in the standard fashion with a pre-program survey, an immediate post-program survey and then an 8- to 12-month follow-on survey for the participants in a particular exchange program.
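
To make the pre/post/follow-on design concrete, here is a minimal sketch of how such three-wave survey data can be tabulated. The field names and the 1-5 scale are hypothetical illustrations, not the actual E-GOALS schema.

```python
# Hypothetical sketch of a pre/post/follow-up survey comparison, in the
# spirit of the three-wave E-GOALS design described above. Field names
# and the 1-5 scale are illustrative, not the real E-GOALS schema.

from statistics import mean

# Each record: participant id -> score on a 1-5 "understanding of U.S." item
pre       = {"p01": 2, "p02": 3, "p03": 2, "p04": 4}
post      = {"p01": 4, "p02": 4, "p03": 3, "p04": 5}  # immediately after program
follow_up = {"p01": 4, "p02": 3, "p03": 3}            # 8-12 months later; some attrition

def avg_change(before: dict, after: dict) -> float:
    """Mean change for participants who answered both waves."""
    common = before.keys() & after.keys()
    return mean(after[p] - before[p] for p in common)

print(f"Immediate gain: {avg_change(pre, post):+.2f} points")
print(f"Sustained gain: {avg_change(pre, follow_up):+.2f} points (follow-up respondents only)")
```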

Clearly, it doesn't give you the wealth of data of a major evaluation. It doesn't give you the longitudinal information over time, but it does give policymakers very quick and very interesting information on the immediate impact of these programs, and that's often very helpful as well. And we have done E-GOALS surveys for six or seven years now.

We are then developing a body of data from multiple sources which does allow us to do more extensive analysis of the impact of our programs. We also do, of course, custom surveys and activities. The Under Secretary each year has a global public affairs conference, which is highly useful to all parts of public diplomacy. And we work with Walter's office and Cherreka's office to produce surveys for that kind of purpose as well, so there's much more inside the system.

We work very closely with the Under Secretary's office and others in preparing each year the budget request for public diplomacy, because we want to include in there as much performance measurement, as much quantitative data as possible, to show both OMB and Congress that we are committed to the culture of measurement.

And I'll close with one final thing which may seem slightly extraneous, and that is to mention our office of alumni affairs. We began this in 2004. We have over a million ECA alumni around the world. More than 340 of our participants have gone on to become either the head of state or head of government of their country.

In fact, the current leaders of approximately one-fourth of all the member states of the United Nations are our alumni. We have over 40 Nobel laureates, in addition of course to thousands of individuals who are community leaders, school leaders, leaders in their professions. At this point, of course, since we have many high school exchanges, we've got tens of thousands of participants who are still very young.

And that's an important demographic for us, and we do track them. This is one way in which the alumni office helps out. Now that we have committed to an alumni website and a universal alumni database, when we try to do our longitudinal surveys and ask where these high school students are five years later, or where these urban planners are ten years later, we often have much better success in identifying these people and being able to contact them and survey them, because we have the parallel track of our alumni engagement office. And I'm going to stop there. Thank you very much.

 

STATEMENT OF DUNCAN MACINNES

MR. MACINNES: Thank you, Rick. Thank you, Mr. Chairman, for giving me the opportunity to speak to you and the board.

Rick has given us a nice overview of the history, which is good, because he has lived it. ECA has certainly been in the lead on this, and I give them full credit for that. We in IIP looked at one time at trying to duplicate what they did, but were unable to come up with a financial way to do that, because our flow-through funding for programs is so small.

$750,000 would be a significant portion of all the moneys we have to do programs. So we can't actually tax a program, like Fulbright, and say a certain percentage -- two percent -- will be used for evaluation. To get around that, we luckily have been working very closely with the Under Secretary's office.

Ripper is set up with Cherreka, and the Under Secretary has funded that office to do our surveys and our evaluations and our metrics for us, although we actually interact in a collaborative way. But they have the lead, and Cherreka will talk a bit, I think, about some of the things that we're doing on that side.

I want to just take a step back a bit and talk a little bit about the different approaches one has to take in doing metrics on different kinds of PD programs. Rick talked about the ECA programs, which have participants, which have a limited universe. There are a certain number of people coming to the United States, paid for by the U.S. Government. You can actually talk to them and meet them.

We have the problem of having an audience of 6.8 billion people, which is the world. And when we reach out electronically to that world, it is really hard to measure the impact sometimes. So we've come up with a framework that talks about a three-part look at metrics. There's reach, which is how many people are you actually reaching. You know. You have a website and you get five million hits a month, and you know you have five million people visiting (inaudible), visiting your website.

That's reach. It doesn't tell you whether they changed their minds or had much interaction, so we have something called engagement -- how long do they stay on that website? What do they look at? Do they actually take a look? Do they participate in a little survey we have? Do they make a comment on an article? Do they refer it to someone else? That's engagement. And then we have credibility, and that is: do they see us as a credible voice, and do they actually recommend us to others?

Do they refer to us on their website? Do they take our material? Do they steal our material and use it, which happens all the time? (Inaudible), you know. And so those three parts help us look at how we have to measure, because we measure not just one thing, we measure many things. Particularly, we have the daunting task of measuring these new media things such as Facebook and Twitter and the Web.

We have a very, very strong and good Web analytics system that gives us so much data at the end of every month on our web statistics that you can actually look at any specific piece and see how many people looked at it. It breaks it down for overseas versus U.S. visitors. We struggle with the fact that we have not been able to use cookies, which track visitors, so you can't tell if the same person came back again later, because you don't collect cookies. We are working on trying to get that ability, which will let us do better metrics on our web usage.
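
As a rough illustration of the reach-versus-engagement distinction described above, here is a minimal sketch over hypothetical web log records; the fields, thresholds and numbers are invented, not IIP's actual analytics package.

```python
# Generic sketch of reach vs. engagement computed over hypothetical web
# log records. Without cookies, "reach" here can only count visits, not
# unique returning visitors -- the limitation mentioned above.

from collections import defaultdict

# (visit_id, page, seconds_on_page, commented)
log = [
    ("v1", "/article-a", 30,  False),
    ("v1", "/article-b", 240, True),
    ("v2", "/article-a", 5,   False),
    ("v3", "/article-a", 90,  False),
]

visits = defaultdict(list)
for visit_id, page, secs, commented in log:
    visits[visit_id].append((page, secs, commented))

reach = len(visits)  # number of visits (not unique people, absent cookies)
avg_time = sum(s for v in visits.values() for _, s, _ in v) / reach
engaged = sum(1 for v in visits.values()
              if sum(s for _, s, _ in v) > 60 or any(c for _, _, c in v))

print(f"Reach: {reach} visits")
print(f"Average time per visit: {avg_time:.0f}s")
print(f"Engaged visits (>60s or commented): {engaged}")
```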

We also have a web survey tool for our website that asks customers what they think about our website and gives us the metrics back on that. We have the other daunting task that we run the U.S. Government's website that carries six of the major languages, and we have to be able to test and measure how people see our material in language.

And so this year -- or last year -- working with Cherreka, we did a survey using (inaudible) on our Arabic language material. You know, how is it received? Is it credible in terms of its translation? Are people using it? And that small survey came out with some fairly positive leads. It gave us some directions on where we need to have better editorial control over the quality, make sure the quality stays at a high level, and ways to actually engage better with our audiences in Arabic.

But we also have to do that, and we have not done so, in Chinese, in Spanish and French and Russian and Persian. So that complicates it a bit. I think that in many ways the need for metrics is extremely important when we're trying to deal with these large audiences, because it's hard to get a handle on the scope: you have a Facebook page with 150,000 participants. How do you measure whether you're making an impact on those people? And, again, we're working with Cherreka on a number of electronic ways of measuring audiences' reactions. And audience analysis is really important for us.

We have reached out to DoD, to BBG and others to get their take on audiences, because it's very hard to know audiences. We need to know a lot more than the very simple thing; you might say, okay, 30% of Egyptians have access to the Internet. We need to know that Egyptians between the ages of 15 and 25 not only have access to the Internet,

but that they go to these five sites as their major source of information, that they spend this much time on them, that they look at video and they don't read anything. We need to actually have granularity, because that's what we need to actually reach those people. And we're working very diligently on that. It's hard, but luckily I think Cherreka has got a study on that she might talk about, and we're working closely with them on that, and it's very important for us.

And I think I'll turn it over to Cherreka, who can wrap it up, because she's really the lead, at least for IIP, on this.

STATEMENT OF CHERREKA MONTGOMERY

MS. MONTGOMERY: Can do. Good morning, Mr. Chairman and other honorable commissioners.

Thank you so much for this opportunity to share the progress that we've made over the years to evaluate and measure our overseas public diplomacy efforts. I just want to reiterate what we've already said, and particularly what Walter has really emphasized, and that is the current strong emphasis, leadership and policy commitment, through the allocation of public diplomacy resources, to evaluation and performance measurement.

We are really at a historic moment now under the leadership of Under Secretary Judith McHale, given the amount of money and the commitment and dedication across the board, across all bureaus of public diplomacy, to performance measurement and evaluation. We are using evaluation and metrics data in our policy approach, our budgeting and also our communication efforts to constantly emphasize and improve program efficiencies, and also the impact and support of our larger strategic public diplomacy objectives.

I just want to walk you back in time, just a little bit, if you'll bear with me, on our journey to incorporate a culture of measurement in our overseas public diplomacy programs. This journey began in September of 2005, and this is when the Office of Policy, Planning and Resources for the Under Secretary put initial resources in by hiring, in fact, me, to develop a set of public diplomacy performance measures.

At that time, in fiscal years 2005 and also back in 2004, our overseas missions, through the Department of State's larger strategic planning process called the mission strategic plans, the MSPs, now MSRPs, were producing more than 898 different performance measures. When we analyzed that data we found that more than 75% of those performance measures were outputs, really just showing that public diplomacy officers overseas were very busy -- and they are -- in producing things. But we really lacked a strategic focus on impact.

We weren't giving guidance to the field or even teaching or training them how to even begin the process of measuring, let alone quantifying, the impact of our reach. Also at that time, the Office of Management and Budget came in and actually did what I call an audit of all the overseas efforts. This was independent of the Bureau of Educational and Cultural Affairs and, really, it found that our overseas public diplomacy strategy and performance measurement tools were quite lacking; and they gave us a score of not performing.

Through a rigorous analysis of measures, by working in consultation and having conversations with our regional bureaus and select overseas missions, and by developing larger models, we were able to turn that around. We went from 898 performance measures to a core standardized set of 15 performance measures. They are still in use today.

We have added to those measures to really support Under Secretary McHale's strategic framework, so now we went from 15 to 21; and, as a result, we actually improved our OMB rating. We went from a not performing category -- which at the time was called "results not demonstrated" -- to performing.

I think that's a huge accomplishment, showing how public diplomacy has responded, both to the advice of the Advisory Commission, GAO and others, about the need to really incorporate a culture of measurement. However, through extensive consultations with OMB and some of our congressional friends, we soon learned that possessing a core set of 15 performance measures without the data to support those measures makes the measures meaningless.

And, in fact, Rick and I had a conversation at the time with our OMB examiner, who basically told us that we had one year to collect data to support these 15 measures. Again, let me just emphasize that this had never been done before; as others, and I think some of our academic colleagues, have rightly mentioned, it's very difficult to try to isolate what contributes to the impact of our overseas public diplomacy efforts.

We are not the only players in the field of public diplomacy or communications in our host countries; and, also, there was the real challenge of figuring out an appropriate methodology to measure the impact of, and collect data on, our overseas public diplomacy programs. When we started this journey, other than at the Bureau of Educational and Cultural Affairs, there was no methodology.

It's very difficult to apply performance measurement and program evaluation approaches used for domestic policies to measure foreign policy, let alone public diplomacy overseas. Based on our core metrics development process, going from 898 down to 15, we realized that there are basically three important elements in any effort to collect and measure overseas public diplomacy impact.

The first element is the need for standardization. We started the standardization process in June of 2006, and by that I mean making sure that we have standardized survey questions. When we began this journey, public diplomacy was not standardized at all -- hence, 898 measures.

Every year we were creating new approaches and new ways to measure, whether qualitative or quantitative. So we really wanted to step back and say: to really make this sustainable and make sure that we actually have a lasting culture of measurement, standardization was key in terms of survey development and design and our data variables, making sure that our metrics are all aligned across all of our evaluations and assessments. But let me also step back and say that standardization also means standard operating procedures. And one of the things that we set out to do was to minimize the burden imposed.

Our overseas staff are enormously busy and challenged right now with doing the work of public diplomacy; and what we wanted to do was set up an independent institution, an outside institution such as the Evaluation and Measurement Unit or the Bureau of Educational and Cultural Affairs' evaluation office, to reduce the burden imposed -- to reduce the time, burden and commitment that our PAOs would have in figuring out what an outcome performance measure is. You know, how do you create that? How do you quickly respond to the Department's annual strategic planning process?

We also wanted to emphasize the need for alignment, making sure that evaluation and data collection efforts support the Under Secretary's goals as well as the larger goals of the Department of State and the strategic objectives of public diplomacy. And to make sure that all of our public diplomacy evaluation and assessment efforts are aligned, we have a core public diplomacy performance logic model, and also our core metrics.

We sort of mandate that all of our evaluations and performance assessments, and particularly our program evaluations, must support at least two of our outcome performance measures. Output data collections are what we do every day; and, quite frankly, they're very simple to do. You just count. It's easier to do than to measure the impact.

The third important area is communication, making sure that we as evaluators communicate the importance of program evaluation and performance metrics -- why it's so important to collect this data, the utility of this data. Lots of times in the field we get questions from our colleagues who say, well, we're collecting all this data, but it's just being stored in Washington; you're not using it. So it's really important that we let them know that we are using this to fulfill our congressional requirements and our own data requirements, and that we are incorporating this data into our budgetary requests every year.

Also, I would say that under communication lies training. And this is what I really want to emphasize: to sustain the performance management approach we have today, we have to make sure there is constant training of our officers and our Foreign Service National staff at our overseas missions on what program evaluation is, what performance assessment is, what the basic principles are, what those core components are -- whether or not they themselves, using their own discretionary resources, want to perform qualitative research, hold focus groups, or create their own ad hoc surveys to assess quick programs.

We do this by working in partnership with the Foreign Service Institute, FSI. Actually, in our summer months we make an effort weekly to train all of our incoming cultural affairs officers, our public affairs officers, our information officers, as well as our Foreign Service National staff, our FSNs. In the summer of 2006 we set out on one of the most ambitious paths to date, focused on creating and launching two unique performance measurement data collection efforts.

Today, I'm going to talk about a number of those, including some of our more recent program evaluations, but I want to start with our Public Diplomacy Impact Project, and I'll end with our online output performance assessment database system called the Mission Activity Tracker.

The Public Diplomacy Impact Project, commonly known as PDI, is the first ever study to assess the aggregate impact of our overseas PD efforts. When I say the "aggregate impact," I mean across the board, every public diplomacy programmatic tool that's available to a U.S. mission. This includes educational and cultural exchanges. It includes all of our speakers programs, all of our America.gov websites, all of our e-journals, even looking at our public affairs programs, like our foreign media tours and our journalist training.

All those things are assessed through the PDI project. We initially launched this project in 2007. Its original mandate was to collect the baseline data to support six of our core public diplomacy outcome measures and four of the outcome measures that support the Bureau of International Information Programs, IIP. We also, under PDI, perform audience analysis in select study countries. By this I mean that we've created some standardized survey instruments that not only give us the metrics data, but also give us insights on where we need to go with public diplomacy.

I think one of the lessons learned, and I think that the Under Secretary has already incorporated this, is the need to reach out to new and emerging audiences, making sure that public diplomacy is strategically engaging youth and also strategically engaging other critical audiences in some of our select countries, using a mixed-method approach combining both qualitative and quantitative methodologies and rigorous statistical testing based on foreign audiences' responses to these standardized survey questions.

Just to back up a little bit: while PDI is quite robust, in 2007, because it was the first time we ever did this, we had a global sample of roughly 1,800. And while that sample was insufficient to show large opinion or attitudinal changes in these host countries, we felt that it was sufficient to give us our outcome data for our baselines to support our core outcome measures. In 2009 we launched this again.

We actually quadrupled our study sample, which I'm very happy to say, and this I think shows the growth and the support of public diplomacy behind performance assessments. We actually took our study up to over 6,500. In the PDI approach we actually hold three focus groups in every study location, one looking at positive change and the use of new media and social media, particularly given the emphasis on our digital campaign efforts sponsored by the Bureau of International Information Programs; and we also hold a focus group among our foreign audiences to look at the key drivers of either positive or negative opinions towards the United States.

And through that we actually tease out what words are used, what sentences are used; and I can tell you, just reflecting back, that we did this in 2007, as you might imagine, in seven countries where we traveled. There were some consistent themes. The first theme was that the U.S. appears not to be supportive of what it means by democracy and human rights, and this was largely due to the Iraq war. There was also great emphasis on climate change and the Kyoto Treaty.

Going back more recently, in 2009 -- and we collected this data between February of 2009 and October of 2009, I will say, and also in 2009 we did this work in three largely Muslim-populated countries -- we found widespread, cautious optimism and hopefulness in support of President Obama's tenure as President of the United States.

We're looking at the change in U.S. engagement and partnership with other countries; and, also, we found in places like India, because of the growth in their economy, a new sort of sense of nationalistic views on how they should be treated by the United States in terms of science and technology.

So, again, these are all the insights that we use and bring back to Washington along with our quantitative data that supports our outcome performance measures. PDI, in terms of the research, is essentially an experimental-design control group study. Its sample is based on individuals who have participated in PD programs within the past five years. One of the lessons learned from 2007 is that we need to make sure that we appropriately weight that sample to reflect youth, to make sure that we have youth -- you know, those between the ages of 13 and 16 who may sometimes go through our English language training programs -- and also what we're calling our young adults, those up to the age of 25.

That's our treatment pool. We also look at what we're calling our control group: those foreign audience members in the same host country who have never, ever participated in a Department of State-sponsored public diplomacy program. This is critical to overcome the common research challenge of defining what public diplomacy has influenced; and, I think, some of our academic partners and others have rightly identified this.

And, as you know, how do you measure the impact of public diplomacy? You know, is it contribution or is it attribution? One of the things we've found through PDI is that we can, through statistical testing, demonstrate that public diplomacy contributes to the achievement of our foreign policy goals and our national security objectives through repeat engagement over time. But I think attribution is still something we cannot quite claim just yet, because there are many other U.S. Government agencies who are doing similar work. And I just want to leave it there.

Going back to, you know, how we develop our data for outcome measures: we use a series of composite indices built to support each and every one of our outcome performance measures individually. What we do is strategically design the survey instruments to make sure, when we're looking at questions such as favorability or understanding, that we have multiple survey questions asked of every respondent, both the treatment group and the control group, in every location. Again, this goes back to standardization and alignment.

Once we collect this data, we actually go through a series of statistical tests to make sure that the construct of this index, or these indices, is measuring towards the objective of the outcome, and to make sure that we can actually detect meaningful significance between those two groups, which we have done through PDI. It is important to note that while PDI focuses on the aggregate level of program effectiveness and standards across all overseas programs, our public diplomacy offices, particularly both ECA and the Office of Policy, Planning and Resources, also conduct in-depth program evaluations.
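
A minimal sketch of the composite-index-and-significance-testing idea: several survey items are averaged into one index per respondent, and the treatment and control groups are then compared. The numbers are invented, and a plain Welch t-test stands in for whatever battery of tests PDI actually uses.

```python
# Sketch of a composite favorability index compared across treatment
# (PD program participants) and control (non-participants), with a
# Welch t statistic for the group difference. Data are invented; the
# actual PDI instruments and tests are surely more elaborate.

from statistics import mean, stdev
from math import sqrt

def composite(responses: list[list[int]]) -> list[float]:
    """Average several 1-5 survey items into one index per respondent."""
    return [mean(items) for items in responses]

treatment = composite([[4, 5, 4], [5, 4, 4], [3, 4, 5], [4, 4, 4]])
control   = composite([[3, 2, 3], [4, 3, 3], [2, 3, 2], [3, 3, 4]])

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

print(f"Treatment index: {mean(treatment):.2f}")
print(f"Control index:   {mean(control):.2f}")
print(f"Welch t = {welch_t(treatment, control):.2f}")
```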

And, as Rick said, correctly, what we define as independent is that we actually hire outside experts to work with us to help us collect this data. And this is really important to make sure that the data we present to you, as well as the data we present to Congress, is something that is pure, that does reflect industry best standards in terms of social science research, and also opinion and attitudinal survey research as well. In this year alone, Ripper and its Evaluation and Measurement Unit, which I head, has launched two historic program evaluations.

I just want to pause and talk about those for a second. The first one is the U.S. Speaker and Specialist Program evaluation. This is the first ever independent evaluation of one of the Bureau of International Information Programs' flagship programs. It's underway right now. We're working with an outside company to do this work. To talk about the methodology: it is again a mixed approach. We're using both focus groups as well as surveys, quantitative surveys.

I think the other thing we've brought in this year that I do want to reflect on is that, in addition to the focus groups and the qualitative and quantitative work, we're performing an across-the-board needs assessment and gap analysis. It's really important, and this is really, I think, a credit to Under Secretary McHale, to figure out what we're doing now that's effective, where we're not effective, but also where we need to be.

I think this gets us to where we're moving more and more, and that is target audience analysis. With the gap assessment and needs assessment, by doing this work in every country, we're actually figuring out what those demographic profiles are, those key audiences whom public diplomacy, being a statecraft of the 21st century, needs to engage in the future. And we are doing this with the U.S. Speaker and Specialist Program.

Also, we have launched, as Duncan said, something which I think is the most innovative thing we've ever done, and this is the electronic media engagement evaluation. This evaluation effort is designed in the first year to perform market research and audience analysis on the use of Web 2.0 and social media tools, trying to identify the most influential topics and phrases used in select digital ecospheres.

Basically, we're looking at how these key foreign audiences are discussing topics and issues that are relevant to public diplomacy. We're also not stopping there. We're going a step further and actually doing sentiment analysis. We're actually coding those conversations and figuring out whether they are positive or negative. And I think this is where, again, we're just reflecting what Under Secretary McHale has brought to us, and that is the need for public diplomacy to go more and more into digital campaign mode. And this is where we are.
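
As a toy illustration of the sentiment-coding step, the sketch below codes public posts positive, negative or neutral with a simple lexicon lookup; the word lists and posts are invented, and the contractors' actual multilingual tooling is certainly more sophisticated.

```python
# Toy lexicon-based sentiment coding of crawled public posts, standing in
# for the far more sophisticated multilingual analysis described above.

POSITIVE = {"hope", "partnership", "welcome", "progress"}
NEGATIVE = {"distrust", "occupation", "hypocrisy", "boycott"}

def code_post(text: str) -> str:
    """Label a post by counting positive vs. negative lexicon hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = [
    "new partnership brings hope for students",
    "another speech, same hypocrisy and distrust",
]
for p in posts:
    print(f"{code_post(p):8s} | {p}")
```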

And as Duncan rightly said, in this evaluation, one of the things that we are assessing is when U.S. missions or the Bureau of International Information Programs launch a Web 2.0 or social media effort, which we define in this evaluation as a campaign. And the way that this particular evaluation is set up is that we assume there are already conversations taking place, without the intervention of public diplomacy, about some topic.

And we'll assess that, whether it's negative or positive. But then we want to see, once public diplomacy intervenes, what's that bump, and we want to measure that bump. And we don't stop there. We also want to know how long that bump lasts. And when the campaign is over and those conversations go back to what may be their normal level, have they made a difference?

Do we see a difference in how they're discussing issues, whether it's climate change or any other issue that's relevant to us? And I like to think that we're actually -- you know, GAO in the past has repeatedly said public diplomacy must adopt a campaign-style approach. This is our first time actually trying to adopt a campaign-style approach in our program evaluation, so we'll see. We're working with some leading companies and organizations who have experience doing survey analysis and opinion research in these countries, and we're doing this as phase one, in which we're just doing market research and audience analysis.
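
To make the "bump" idea concrete, here is a minimal sketch comparing average daily mention volume before, during and after a hypothetical campaign; the counts are invented for illustration only.

```python
# Minimal sketch of measuring a campaign "bump": compare average daily
# mention volume before, during, and after a PD digital campaign, and
# check whether any lift persists once the campaign ends.

from statistics import mean

daily_mentions = [12, 10, 14, 11,   # baseline days (pre-campaign)
                  35, 42, 38, 40,   # campaign days
                  22, 18, 16, 13]   # post-campaign days

pre, during, post = daily_mentions[:4], daily_mentions[4:8], daily_mentions[8:]

baseline = mean(pre)
bump = mean(during) - baseline
residual = mean(post) - baseline    # does any lift persist afterward?

print(f"Baseline:      {baseline:.1f} mentions/day")
print(f"Campaign bump: {bump:+.1f} mentions/day")
print(f"Post-campaign: {residual:+.1f} mentions/day above baseline")
```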

In phase two, which kicks off in October, we're doing a closer examination of the current reach, impact and engagement levels of all the IIP programs available in fiscal year 2011. I think this is where we're really going to pick up a lot of their video opportunities, and also some of the other things they're doing with America.gov. And I'm really proud to say that not only have we done these two major evaluations, which are currently underway, but we have also recently designed a program evaluation for American Centers, which is again something responding to OMB and GAO calls.

We have developed a very robust evaluation plan, which we are geared to launch in fiscal year 2011. We will likely start small, because it's really impossible at our current budgetary levels to assess every single American Center, and we're definitely looking at developing and measuring just a few in 2011.

The program evaluations I named, again, use standardized approaches. We are talking about survey questions. We are talking about quantifying the data, qualitative research and, again, needs assessments and gap analysis. This again is our attempt to bring private sector best practices into our performance measurement and evaluation; and, also, just to emphasize that output data, which is standardized across the board, is what we collect.

But we want a strong emphasis on outcomes. We want to know which programs are working, which programs are not working. What aspects of our PD efforts overseas are having an impact? And I think that, you know, just for me, what I'm most proud of is that, with the collective support of our public diplomacy offices at the Department of State and certainly Under Secretary McHale, what we now have is a very robust data set on our key audiences, which never existed before.

We never went out and really, across the board, collected data, at least demographic data, on the audiences we are engaging. And now we have that information, and I think that really sets the stage for us to do longitudinal studies, actually to measure the long-term trends in terms of impact of our public diplomacy programs. While we're not perfect, I think we certainly have set up something that we can build on and that is quite sustainable.

Lastly, I do want to point out that we do have this global output performance measurement system called the Mission Activity Tracker. I just want to pause now and say I apologize for not being able to show you the dashboard, but I might be able to do it before I leave today. MAT, as we fondly refer to it, is designed to track the frequency and reach of foreign audiences engaged by public diplomacy activities.

MAT collects data on audiences reached -- the numbers of audience members reached and the audience types that we engage. We actually have a whole series of 13 demographic areas on which we collect data about the type of audience engaged. We have actually incorporated performance-based budgeting into the Mission Activity Tracker. We are collecting activity-based costing data, which I think is really phenomenal for us to really get into performance-based budgeting.

The data that's collected in MAT is largely PD activities; and I think of this in terms of a tree. Every Washington bureau for public diplomacy offers a host of programs. The Bureau of Educational and Cultural Affairs has, I think, about 168 programs in MAT. The Bureau of International Information Programs has about 75 programs, and we actually have the Bureau of Public Affairs' programs in there as well.

Every program, when it's used by our missions overseas, in my mind reflects an activity, which on a tree is a branch and then its leaves. The Mission Activity Tracker counts every branch and leaf, so it's the yield for every Washington program, and this is something that's quite historic. Not only does it track this information in terms of audience reach and the type of audience reached, but we ask -- and it's standardized -- that every public affairs officer who enters an activity into this database align it to the broader Department of State strategic goals, and even to the Under Secretary's strategic goals.

And so we have -- don't quote me, but we have all of the mission strategic plan goals in the Mission Activity Tracker, so we can tell you, for example, when we're working with our economic colleagues, who are very interested in things like intellectual property rights, or when we work with global health, we can actually tell you how many public diplomacy activities are affiliated with, or were reported against, those strategic goals.
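
The bureau-program-activity "tree" and its alignment to strategic goals could be modeled along the following lines. This is an illustrative sketch only; the names, numbers and structure are invented, not the real MAT schema.

```python
# Illustrative model of the Mission Activity Tracker "tree": bureaus
# offer programs; posts report activities (the leaves), each tagged to
# strategic goals. All names and figures here are invented.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Activity:
    post: str                 # reporting mission, e.g. "Embassy Cairo"
    program: str              # Washington program used
    bureau: str               # owning bureau: ECA, IIP, PA
    audience_reached: int
    goals: list = field(default_factory=list)

activities = [
    Activity("Embassy Cairo", "U.S. Speakers", "IIP", 120, ["Muslim Engagement"]),
    Activity("Embassy Delhi", "Fulbright", "ECA", 40, ["Science & Technology"]),
    Activity("Embassy Cairo", "Alumni Event", "ECA", 85, ["Muslim Engagement"]),
]

# "Yield" per Washington program, and activity counts per strategic goal
yield_by_program = Counter()
by_goal = Counter()
for a in activities:
    yield_by_program[a.program] += a.audience_reached
    by_goal.update(a.goals)

print("Audience reached by program:", dict(yield_by_program))
print("Activities per strategic goal:", dict(by_goal))
```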

We even have in the Mission Activity Tracker all of the elements of the Under Secretary's framework, along with the President's strategic priorities such as New Beginnings on Muslim engagement. We're going into Ramadan -- Ramadan is next month -- and we actually have questions in there to track the number of activities that posts are using overseas that are tied to things such as New Beginnings, Cairo Plus, or Muslim engagement.

Launched globally in 2007 to all of our overseas posts, we have, to date, over 76,000 reported public diplomacy activities used to engage foreign audiences in the host countries in which we work. The Mission Activity Tracker, at its core an output system, also gives posts the opportunity to subjectively say what the outcomes associated with these activities were.

And so, as I wrap up here, I think what I have presented is that the Mission Activity Tracker gives us a quantifiable count -- outputs -- of all the things we're doing overseas. But, at the same time, we're not stopping there, because we know that at times our public affairs officers are able, through their own sort of awareness -- whether it's the press conference that they organized or the symposium or the alumni event that they held in country -- to see positive change: that people are actually taking information and skills gained from that particular PD activity and applying it in their local institutions. That, to me, means impact.

And, through this global database, they're actually able to report on those impacts. If I ever have an opportunity to come back, I will show you the MAT dashboard that has all the strategic themes and the outcomes and the outputs as well. Thank you so much for your time.

CHAIRMAN HYBL: Mr. Walter, any further (inaudible)?

MR. DOUGLAS: I think they said it all.

(Laughter.)

CHAIRMAN HYBL: Wow! That's a lot of stuff.

(Laughter.)

CHAIRMAN HYBL: How transparent and how accessible is the data that you all collect, and who sees that data? Who has access to that data and the assumptions that underlie the interpretations of that data? Is that held within the bureau or is it public information?

And then one follow-up question is I'm kind of interested in the bump. I'm not very good at Twitter, but I'm great at Facebook, and I'm curious about the kind of things you would do. I mean there are 72,000 things we could probably talk about, but I'm curious about how you discern what's going on.

Say, let's just use something, at least if you don't mind, that I'm familiar with. And how you would affect a bump and how you would interpret the bump, and that's just a single shot sort of thing. But I'm curious as to how you do it.

MR. DOUGLAS: While they're passing the microphone down to Cherreka, I'll just say very quickly it's a straightforward answer for ECA. All the results from our evaluations and/or their executive summaries are always posted on our public website. We've conducted over 50 major evaluations thus far, and they're all available to the public on our website, and that is part of our commitment to transparency.

CHAIRMAN HYBL: Is that data itself available? I mean the hard data that underlie the answers?

MR. DOUGLAS: The internal raw data is not available for every single one. On some of them it is. Some of them have an executive summary, which always gives the percentages and the numerical data for the findings. Some of the ones where we have thus far been posting the entire result do have all the charts and graphs and all the data -- well, not raw data, but the final data.

CHAIRMAN HYBL: We were out at USC about two months ago and we had a conversation about the availability of the data and the things you work on. If there are folks in academia who want to examine it or do some kind of critical analysis, would they have enough data to make a judgment -- to second-guess, or to complement or supplement assumptions that you made?

SPEAKER: Just one thing, I wouldn't say that we shouldn't start (inaudible) as Rick said; and, also, we have an inbox where we get requests.

CHAIRMAN HYBL: Can you all hear her?

SPEAKER: I'm sorry. We have an e-mail inbox and we get requests a lot of time from people in academia and people in NGOs and foundations. And I provide them not only with our evaluations, but sort of talk with them, move them through the process. And we try to meet with them as well in different fora. So we're quite open about all the information that we do have and we're glad to work with them.

MS. MONTGOMERY: Our PDI data set is available if someone were to request it. I don't think we ever received, to date, a request for the data files. The one thing we do is that we do not collect personally identifiable information. So I think when the request comes in I would just share it with our lawyers, and I think everything there falls under the Freedom of Information Act, as well as the things in mission activity tracker. And so it would be available.

Right now, all of our data is accessible to all public affairs officers. We have a performance measurement portal, which unfortunately is only housed on our intranet; otherwise, I would have shown you that. It has all of our -- we have country synopses from 2009 of every country where we collected PDI 2009 data. We have the MAT dashboard. We have our evaluations. One page is our evaluation protocols. I want it to be a one-stop shop, as my way of communicating out to the field.

Going back to your third point about Facebook, you're right. It is very interesting. It's fascinating. It's challenging. What we're doing now is actually a two-part approach. Maybe it's three parts, maybe four. I'm thinking as I'm speaking. The first thing we're doing is that we're not covering, obviously, everything, so we're looking at certain countries.

So, for example, we're doing this work. We have a team going out to Zambia, a team going out to Algeria. I have a team right now down in Argentina. We're doing this work in select countries, and what we're doing in those countries is two things. One is that we launch online surveys to actually assess, sort of, you know, where people stand today.

What is the current use of Web 2.0 tools? And one of the things I want to make sure of is that we don't assume that Twitter is everything globally; maybe in certain markets there is an emerging competitor to Twitter among youth -- a boy age 15 or a girl age 21 may have different preferences. And I think that information is really essential for us to bring back to IIP.

We are also doing a needs assessment of all of our posts, to sort of say, you know, what are the technological needs. You know, since we're pushing from Washington, you must do Web 2.0, you must do social media -- what are their needs? Do they have all the right tools at their fingertips now? And, if not, what are the things that we in Washington need to support them with?

The third thing that we're doing is we're working with some really smart technological partners, one who has done work with the interagency in the past, who has this incredible telescope that they can point to different regions. And we pointed this telescope into select foreign language spheres, so we're sort of doing conversation crawling.

And, by the way, when I say conversation crawling, this is everything that's public, things that are not password protected. Okay. So it's everything that's available to anybody. We're not going behind the wall. We're not looking at private conversations that are locked down behind passwords. And we're crawling in various languages. We're crawling in Farsi. We're crawling in Chinese. We're crawling in Russian, and we're just asking, you know, where are the conversations taking place right now? And this is how we're getting at the bump, and this is pretty much through blogs, through chats, through things posted online.

And what we found to date -- and we have a report coming out in a few months -- is that sometimes we assume that the topics that are really important to us in America are the same important topics people are talking about overseas. And we found that sometimes that's not exactly parallel; but sometimes, with the things that we think are really, really important, they started having those conversations weeks prior to our big launch.

But we're finding the way that they're phrasing this information is really important, and I think this also goes back to one of the initiatives out of the Under Secretary's office: we have something called the U.S. Marketing College. We bring in private sector experts who constantly sort of reinforce for us the message that we have to be careful about the messages we use and think outside the box.

CHAIRMAN HYBL: Excuse me, real quick.

MS. MONTGOMERY: Sorry. Yes.

CHAIRMAN HYBL: Do we create any of those sites and those pages? I know Facebook, but let's say you're in Argentina, or wherever you are, someplace in South America. And, you know, we know that that's getting to be a hot spot. And we're kind of interested in what people think about certain things, and we're also interested in having a point of view expressed about certain things.

Do we create those sites, and then, let's say, target 18- through 25-year-olds, whatever? Do we have the data to know how to influence that, to create a world for them to respond to, and to get their responses to questions we ask? And do we use that technology?

MR. MACINNES: Well, I mean, we do. And there's two things. Facebook is a walled garden, you know, in the sense that you can't Google into it. But inherent in all these tools is that if you have a Facebook site, you can slice and dice your audience, and it will tell you age groups. It will tell you where they're from.

You can get a profile of your users, but we're kind of stuck using the Facebook tools that they provide, because they don't give us the underlying data. I mean, this is an interesting side discussion: we have major platforms like Facebook, which control all their own data, and academics have a problem with this too. They cannot get the data to actually do analysis on what's going on.

They do provide, for people that have Facebook sites, a lot of ways to get information on who's coming to your site. We also use these features a lot. We have comment fields where people can comment. We have voting fields where they can vote on an issue, or we can say, what do you think of this issue -- 1, 2, 3 -- and they vote.

One of the things is we have some global Facebook sites, like the America.gov site, which is for the whole world. But every embassy has its own -- or most embassies have their own -- Facebook pages that deal with issues that are local, because all good public diplomacy, like all politics, is local.

You know, people in Australia want to know about certain things that affect their lives, and it's quite different from what you might be talking to Indonesians about. But what we do is we have our policies that we're interested in promoting and discussing, and we at IIP produce material that people can use on their Facebook pages to stimulate those discussions, on issues as varied as, you know, tolerance and anti-violence, global warming, or climate change.

And so we provide those people overseas -- who are very busy and have to run a Facebook page -- with prepackaged materials they can put up and use on their site. It promotes U.S. policy or understanding of U.S. views on things. But it is harder to measure, because unlike the open Web, where you can actually use Web analytics and crawlers and go in and gather a lot of data, we can't crawl Facebook.
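Again by way of illustration -- a minimal sketch of that kind of "slice and dice" audience breakdown, assuming a hypothetical CSV export of page-insight rows; the file name and column names are illustrative, since the raw data itself stays inside the platform:

```python
# Sketch: tally a page's audience by age bracket and country from an
# exported insights file (one row per fan in this toy example).
import csv
from collections import Counter

# Hypothetical stand-in for a real export downloaded from the platform.
SAMPLE = """age_bracket,country
18-24,Argentina
18-24,Argentina
25-34,Chile
18-24,Indonesia
35-44,Argentina
"""

with open("page_insights_export.csv", "w", encoding="utf-8") as f:
    f.write(SAMPLE)

def audience_profile(path):
    """Tally the page audience by age bracket and by country."""
    by_age, by_country = Counter(), Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_age[row["age_bracket"]] += 1
            by_country[row["country"]] += 1
    return by_age, by_country

ages, countries = audience_profile("page_insights_export.csv")
print("Top age brackets:", ages.most_common(3))  # e.g. [('18-24', 3), ...]
print("Top countries:", countries.most_common(3))
```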

CHAIRMAN HYBL: John?

COMMISSIONER OSBORN: Thanks. Thanks to everybody. A very impressive range of activities. A couple of you alluded to the imperative and the difficulty of really measuring influence and impact, and I am interested in teasing out of you a little bit more color and perspective on that, and a couple of thoughts. Specifically, how do you think you're doing on these things? And, if you had more resources, could you do more?

In two different respects: measuring public diplomacy's impact or influence as distinct from our foreign policy, and as distinct from other actors, the private sector and other folks; and then, secondly, in measuring impact, I guess it depends on the audience, doesn't it? I mean, you could measure a positive bounce by the degree of further enthusiasm among people who are inclined to be favorable toward us.

If their enthusiasm and their support jump up. Or you could look at the lowest common denominator and try to evaluate how effective we are at blunting our biggest critics, the people who would do us harm. So I'm wondering if those kinds of audience differentiation, those different baskets of populations, are already included in what you guys are doing, or if you think they might be, and, if so, how.

MS. MONTGOMERY: Thank you so much for your question. I'll just start with the last question.

We are able to do that, and we can actually do audience segmentation, which I think is the beauty of the demographic data that we collect. But we also incorporate what I call professional profiles.

I mean, traditionally, public diplomacy has a strategy of engaging the elites in our host countries, and whether they're academics or journalists, or civil society leaders or business or traditional leaders, we're able to actually track their opinions as well. So I think, really, in my view, both the PDI and the other datasets that we have across the board for PDI evaluations are quite robust, particularly given the short time we have been able to collect them.

What we found in 2009 -- and I think this is why it's really important that we do these independent assessments -- concerns even some of the countries where you have broad, you know, national countrywide surveys that say that in places like Turkey, or places like Morocco and Indonesia, people just have widespread low favorability ratings and a lack of understanding of the United States.

If you create a sample -- and this is what I think we've learned -- a sample that really defines public diplomacy's sphere of influence, the picture looks different. What we use today is this control group model: people that we have repeatedly engaged historically over time, and we ask their opinions. Their opinions in our case are the best baseline, because we have never done a survey of their opinions before. Also, no one at the embassy knows who we're going to survey, so they can't bias the opinions that way. And we actually measure the difference.
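Purely as an illustration of that arithmetic -- a minimal sketch of the treatment-versus-control comparison, with made-up index scores standing in for PDI data:

```python
# Sketch: mean index scores for a PD-engaged ("treatment") sample versus a
# countrywide ("control") sample, and the difference between them.
from statistics import mean

pd_engaged = [7.1, 6.8, 8.0, 7.5, 6.9]   # hypothetical scores, 0-10 scale
countrywide = [4.2, 5.1, 3.8, 4.9, 4.5]  # hypothetical control scores

treatment_mean = mean(pd_engaged)
control_mean = mean(countrywide)
print(f"PD audience mean: {treatment_mean:.2f}")
print(f"Control mean:     {control_mean:.2f}")
print(f"Difference:       {treatment_mean - control_mean:+.2f}")
```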

I mean, in these countries we found that our PD audiences actually score higher on our indices regarding understanding, and even favorability. And this is why it's also important that you have this mixed method: our focus groups actually tell you that while they understand where we're coming from, they will express disagreement around certain aspects of our policy, which then goes back to your first question. And that's the biggest challenge, because every country -- and I leave it to my colleagues here, who are more expert than I am --

Every country has a unique strategic objective. And so with the foreign policies of each U.S. mission, you know, while some may be aligned, there are nuances, and I think that's the challenge. A few months ago -- I believe it was last year -- GAO came in to speak with us. And they said, you know, "Listen. You have to create PDI so that it's more specific on these policies." But we develop these instruments in consultation with our regional bureaus and with our posts, and we don't want to ask any question that will somehow harm the bilateral relationship, particularly given that our PD audiences are elites.

We're dealing with members of parliament, leading journalists. And I think this is an area of growth for us: how do we structure survey questions in the future? And if we had more resources, what I would envision is that we would be able to do something like PDI on a quarterly basis, so that you could customize it either by region, or by posts that are aligned in some way around a foreign policy objective, and, therefore, track it across one fiscal year.

Right now with PDI it takes us about, you know, 18 months to actually launch the data collection and to analyze the data as well. I don't know if Rick, or Duncan, or anyone wants to add anything.

CHAIRMAN HYBL: Just one quick one.

MS. MONTGOMERY: Yes.

CHAIRMAN HYBL: Do you envision going below the elites when you're collecting?

MS. MONTGOMERY: Yes. Yes, and in fact we're already teed up to do some of that in fiscal year 2011. That is, you know, maintaining the traditional public diplomacy treatment group, but also looking at where the gaps are. In some of the more strategic countries we're looking at, we're taking a look at compatible datasets -- whether they're from BBC or from Gallup, which ask some questions similar to ours -- but also at doing our own sort of random countrywide sampling of audiences that we think are most influential, or could be most influential, or are most likely to be the next leaders of a country where PD needs to engage. And I think this just goes back to larger target audience analyses.

CHAIRMAN HYBL: Commissioner Snyder?

COMMISSIONER SNYDER: Thank you. Thank you, Rick.

First of all, thank you all for wonderful presentations. They were detailed and very in-depth, and I think they've given us -- at least they've given me -- a lot of insight into what's going on in terms of evaluation. Two quick questions.

One, Cherreka, you've talked about the mission activity tracker, and you've talked about all the PD officers and PAOs and inputs. Do we allow people who are not technically defined as public diplomacy practitioners to input into the activity tracker?

MS. MONTGOMERY: No. One of the things we wanted to do -- and this is why MAT was designed the way it was -- was to track the use of PD resources overseas.

We get lots of requests from USAID and some other government partners or agencies to insert data, but the way that it's set up, one, is that you have to be a Department of State employee, and it is only available on our OpenNet intranet system. And sometimes we have management officers who've been designated by the DCM or someone else at post to do some PD work.

As long as the public affairs section at that mission okays it, that management officer, who works for the Department of State, can enter activities into the mission activity tracker. But we do have a checks-and-balances system, and that is that only the PAO at that mission is the approver. So anyone at a post essentially could enter an activity, but only the public affairs section -- the public affairs officer, who we deem to be the expert on PD at that post -- can approve an activity, which is then released for use in Washington. I don't know if that makes sense.

CHAIRMAN HYBL: I want to mention that everybody can see everything once it's posted.

MR. DOUGLAS: I was just going to add on to what Cherreka said about the mission activity tracker: once something is posted by any mission in the tracker, it is then visible to everybody in the State Department. Anyone can log on as a guest user and see all the material.

So, if you're a public affairs officer in Ecuador and your new Ambassador wants to do more with social inclusion and underserved youth, you could go on the mission activity tracker and see if anybody else in the world has been doing that kind of activity and learn from their experience. And this kind of global visibility was built in at the outset of MAT, and, honestly, it may sound like the most logical thing in the world.

But it was quite shocking at the time, particularly to a great many of our Defense Department colleagues, who are very accustomed to everything being segmented -- this belongs to Special Operations Command, this is a U.S. Army site, and so forth and so on. I mean, they do wonderful work there, but the idea that you would make this available to everyone universally was quite novel at the time, and we're glad to see it's becoming more of a custom now.

STATEMENT OF DANIEL SREEBNY

MR. SREEBNY: Dan Sreebny. I'm with the Under Secretary's office, and I think on our team I was the one most recently overseas, most recently as the director of the regional (inaudible) for that PA. (Inaudible) a spring break in Baghdad for a while as PAO.

And just one aspect of MAT that's very important: if you were running public diplomacy programs that included the involvement of, obviously, the Ambassador and the deputy chief of mission, but also -- say, a speaker program -- people throughout the embassy at every level, we would input that into MAT. So it's not just what public diplomacy officers are doing, but what public diplomacy is doing, largely with the post involving everybody.

But at the same time, you know, my colleagues in the commercial, agriculture and political sections -- every kind -- they're all out doing important work, which we didn't put in MAT, because the powers that be in Washington would come to me and want to know what public diplomacy is doing: we want to see; we gave you these resources; what are the activities?

And then, using these new ways to measure impact and outcomes, we don't want to muddy the waters by including lots of traditional diplomatic work that we can't take credit for and that wasn't part of our strategy. And so we included work by people outside the section that was done as part of the public diplomacy activities and program of the mission writ large, but we didn't include other activities that they were doing separately.

CHAIRMAN HYBL: Commissioner Westine.

COMMISSIONER WESTINE: Thank you. First of all, let me just totally agree: it's been such an impressive presentation, with just outstanding evaluation measurements.

I'm curious. This may be an unfair question, but I'm curious. How effective do you think the feedback from these evaluations is in driving changes in our programs? For example, are there some areas that have really grown because of this feedback, or other areas that you've had to eliminate? And you're smiling -- is it a bad, unfair question? Or can I have your thoughts on the issue?

MR. DOUGLAS: No, it's a wonderful question. Let me say a couple things about it while Cherreka thinks.

(Laughter.)

MR. DOUGLAS: There have been a number of direct impacts on policy and programs based on evaluations. Let me just give you a couple of examples. One is our very successful youth exchange program right now at the high school level with predominantly Muslim countries: we adopted that particular pattern based on an existing model which we had used with the former Soviet Union. Because clearly -- and this was a post-9/11 decision -- the sentiment was universal that we needed to engage more with younger audiences and underserved audiences in Arab and Muslim countries; the question was what was the proper program model to use.

Because the model of the Future Leaders Exchange program -- the FLEX program with the former Soviet Union -- had been underway for a number of years and had been evaluated and measured, we felt confident in adopting that model and going to Congress and asking for significantly increased funding, because we had a proven model to go on. In terms of expanding English language programming, Under Secretary McHale has rightly put a very, very high priority on expanding English language programming in every possible way.

And much of that, again, is sort of confirming the intuitive, but she is buttressed by study after study, which shows the tremendous impact of English language skills in almost incalculable directions: in terms of individual self-esteem, access to American materials, confidence in material produced by the U.S. Government, and marketability and workforce capacity in other countries.

And, if you will, a general favorable attitude towards the United States for providing the opportunity for young people and others to acquire highly desirable English language skills. It's one of those rare issues where exactly what we want to give other countries is exactly what they want to receive: English language skills.

And I would mention also -- it's a little bit dated now, but when Under Secretary Hughes was in office she commissioned a study of an Arabic-language monthly publication, and at the end of the process she decided to terminate funding for it -- not because it wasn't a quality product. I think this is important to say. In fact, it was a very interesting product. It had won a number of awards for design and layout and graphics and so forth. And our public affairs officers found it very useful.

Public affairs officers are always in need of tangible things to take with them to schools and institutions and so forth. But because the publication was produced under a sense of great urgency -- again, after 9/11, when everybody was trying to do everything they could think of that might possibly be of use -- it had no defined measures or goals, or market impact targets, set for it. And in an environment where an Under Secretary of State for Public Diplomacy wakes up every morning to face unlimited challenges and very limited resources, she had to make a decision to put the funding against a program where she could measure results. Whereas, with this particular magazine, she was not able -- and would not have been able -- to measure specific results.

STATEMENT OF ROBIN SILVER

MS. SILVER: I'd like to just add one thing to that. For ECA, when we think of all the program evaluations that we do, we try to look at programs that have been around for a while to get a sense of whether they're effective and, if not, how we can make improvements.

We also look at the priorities of the political leadership, the Administration, and the Under Secretary, and we try to be at the cutting edge in terms of our methodologies. So, going forward, we've got new evaluations coming up of our Humphrey Program, which works in development; a new evaluation looking at some of our programs by gender; and another evaluation looking at the different English language programs that ECA is carrying out.

So, for us, this is actually our mission. This is what we do and we're really happy to be doing that and working closely with Cherreka's office on a number of these things.

CHAIRMAN HYBL: Thank you.

MS. MONTGOMERY: Thank you, Mr. Chairman.

I would like to ask, because you're collecting all this data and you've got it all input: do you have a template for the embassies, given that they have lots of other things to do besides public diplomacy?

Is there a list of things that you specifically ask them to answer, and how often do they do that? Do they do it when something has happened or an event has occurred? Or do you ask on a quarterly basis?

MS. SILVER: Yes. The mission activity tracker --

MS. MONTGOMERY: The mission activity tracker is that standardized template that we ask them to use. We have even created a supporting tool called the public diplomacy implementation plan, which may change, but it's basically unique to each mission.

I won't call it strategic, but it's bringing back sort of the institutional analysis that public diplomacy once had many years ago, so they can actually customize it and make it relevant for the current fiscal year. We also have the performance measurement portal, where we have listed some of our feedback surveys, which, by the way, we're using right now to support our new communications efforts and our plan in Pakistan.

But our posts are on a (inaudible) staff. And every time that they have a major PD program or cultural program, they want to collect what I think is fundamental data on satisfaction: Do they think the content was accurate? Was it interesting? Was the event successful? They can administer these feedback surveys, and what I offer is that they can actually just collect the surveys and scan them back to my office, and we'll crunch the data on their behalf.

But I think this goes back to training. In the future, with more and more training, posts will be able, if they have the time and the initial resources, to go out and take on these efforts themselves. But right now I don't think they are quite at that point.

CHAIRMAN HYBL: Good. Thank you.

As this is an open hearing, we would encourage the 40 or so of you that are here: if you have any questions you would like to address to the panel members, that would be great. We would ask that you stand and identify yourselves so we know who you are. And the reason we ask you to stand is that these mikes above you pick up your comments. So, please, yeah -- send them back.

MR. FURMAN: My name is Justin Furman. I'm a communications officer in Pakistan. I have a question for Mr. Ruth.

You had mentioned how one of these commissions (inaudible). My question is, what are the sorts of success that we're looking for (inaudible)? In Pakistan we obviously have gone through this (inaudible). But, ultimately, (inaudible).

MR. RUTH: Okay. Thank you very much. There are a number of definitions of success. A few of them, for educational and cultural affairs, would be, for example, whether or not there has been change.

That's the first and foremost thing to look for. There's been some discussion of this, and pretty much everyone probably wouldn't be here if they weren't familiar with outputs versus outcomes; we're looking for outcomes. I mean, it is important to track outputs -- to know how many young people you've reached and, one way or another, how many government officials you've reached with a particular message.

But what we're specifically looking for in our evaluations is what is now different because of the exchange experience, and we want to know that those things are directly attributable to the exchange experience: you came to the United States or you went through a particular program, and because of what you (inaudible), the skills you acquired, you did the following.

Now, that kind of change can come in a variety of ways. It can be individual change: you are a professor or a teacher, and you begin to change the way you teach a certain class or change the curriculum material you use. Or it can be institutional change, which obviously we're very keen on identifying. If you are a parliamentarian, for example -- we have programs for parliamentarians -- then you go back and introduce legislation based on what you saw in the United States.

One example, totally separate from any current political issue: foreign visitors to the United States are often very impressed by the Americans with Disabilities Act. And, indeed, if a parliamentarian goes back home and introduces legislation on disabilities in their country, that is directly attributable to the exchange program.

We are also looking for the (inaudible) from the NGO sector, such as the creation of an NGO to fight domestic violence, or something to do with HIV/AIDS and the stigma of AIDS in the workplace, to counter that kind of activity. We find very often that visitors to the United States are impressed with the initiative, if you will, taken by American citizens to address problems that they see without waiting for the government to take action.

They take action in civil society, and the creation of an NGO or a community organization is something that happens quite frequently. That's striking, because it takes a great deal of work and effort to do that in any country, and to see that level of activity and be able to explain it by an exchange experience is always something we look for. But we also look for amplification: how was the exchange experience spread, beyond just conversation with friends and families?

There were articles written, (inaudible) on television, where there are speakers (inaudible) professional conferences, and so forth. So those are the kinds of things that we're looking for. We also look more specifically at individual programs, because of course they deal with particular groups, like students or teachers or professionals.

We also ask questions about the specific topic of the exchange, but I am speaking more broadly of the kinds of results that would encompass all exchanges.

CHAIRMAN HYBL: Yes?

COMMISSIONER OSBORN: Just one more. On dealing with large audiences, were you talking basically about a global audience or a regional audience that you're reaching electronically through the Web and other means? You're not seeking that level of engagement necessarily, because you can't get it when you're dealing with large numbers.

What you're often trying to do is set the narrative of a conversation on the right path, which means getting disinformation and misinformation out of the conversation; and, oftentimes, success for our people that are doing Web-based things is when people understand that our policies are what they really are, as opposed to what somebody else is defining them to be. And those are hard to measure, but you can measure it through (analysis) and through textual analysis, when you see that the misinformation is disappearing and that the context for how we're discussing it falls within a narrative that gives us basically an advantage in having that conversation.

CHAIRMAN HYBL: Yes.

SPEAKER: Hi. I'm Rhonda Zaharna, American University. I have a question. Information approaches are the U.S. forte when it comes to communicating, but about 70% of the world is more relationship focused. Do you have research tools to measure relationships -- relationship quality, direction, strength? And also, on a networking approach: are you pulling in network measures, social network analysis?

MS. MONTGOMERY: Yes. Thank you. I neglected to mention that in this year's evaluations -- in the two that remain, both the U.S. speakers and our electronic media engagement -- we have incorporated social network analysis.

So we take a look at the relationships formed: who are the most influential persons within those communities? And as to the reason we're tracking it -- I don't have any data just yet, but through the public diplomacy impact project and through all of our focus groups, what we found is that person-to-person engagement really matters, and I think you hit the right point. (Inaudible) overseas, in so many communities and cultures overseas.
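As an illustration only -- a minimal sketch of that kind of social network analysis, using the third-party networkx graph library, with made-up engagement ties standing in for PDI data:

```python
# Sketch: build a graph of who engages whom and rank the most influential
# persons by betweenness centrality (who bridges conversations).
import networkx as nx  # third-party library (pip install networkx)

# Hypothetical engagement ties among members of a PD audience community.
ties = [("amina", "boris"), ("amina", "chen"), ("boris", "chen"),
        ("chen", "dmitri"), ("chen", "elena"), ("dmitri", "elena")]

G = nx.Graph(ties)
influence = nx.betweenness_centrality(G)
for person, score in sorted(influence.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{person}: {score:.3f}")  # highest scores bridge the most paths
```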

It is that person-to-person contact. It is that relationship -- having the ability to have a conversation and to have that dialogue. And one of the things that comes through constantly in PDI is this desire of our audiences to feel appreciated and to feel as though we're treating them with respect.

And that is, I think, what we try to incorporate in public diplomacy, but it takes time, particularly when you have policies that these key audiences may sometimes disagree with. But if you have an opportunity to have that conversation, to have an exchange of ideas -- particularly when you have citizens from that country travel to the U.S. and then go back and share their lessons and their experiences -- it helps us to really foster closer relationships.

MS. SILVER: Could I say one thing? In our program evaluations, actually, we focus on those types of things: relationships; professional and personal collaborations between organizations and institutions; as well as networking. But, again, we look for a result. The people who participated in the exchange programs, the people they might have met here, the institutions that they worked with -- they go back, and we want to see what happens next. So it's a little more concrete, because we are looking at our programs, and they are quite defined in terms of the areas and disciplines they're working in.

SPEAKER: And one thing I'll add is that your question relates to a very important aspect, I think, of PDI and (inaudible).

We need the work that ECA and PA and we are doing in terms of looking at specific programs, both at the moment and, very importantly, over time. We have to go back and study them again, because between what works in 2010 and 2012 there may be a new environment, and there may be something that makes Facebook and Twitter obsolete. And you have to learn it all over again, you know.

And so we need to do that, but at the same time there is our relationship in public diplomacy with the people with whom we want to engage in the field -- not just to inform and influence them, but to have them, even if they disagree with us, come back to talk to us about why they disagree with us. And we listen to (inaudible) is a mixture of all of these programs into (inaudible) and personal engagement.

So it's somebody who does the English language program and the English Access Microscholarship Program, and then comes to a few talks or comes here as a speaker, and perhaps becomes a Fulbrighter. It's this variety of activities within a relationship, and PDI allows us to look at the relationship through all these programs and activities and ask: in terms of your understanding of the U.S., your engagement with the U.S., the networks you build, and your opinions of the U.S., what is the impact?

CHAIRMAN HYBL: Are there questions? Anyone? Members of the Commission? If not, I would like to thank the panelists today and those in the Department. Certainly, as a Commission we are supportive of the Under Secretary and the good work that you all are doing in this area. We hope that we'll have an opportunity to meet again as we go through the process of doing some of the same work that you are.

For those of you who are here representing the public and particular institutions, thank you for being with us this morning. We want to thank IFES, the International Foundation for Electoral Systems, for hosting this, and Carl Chan, our executive director. Most of all, we wish you great success in evaluation; you're working in a world that's never going to be easy.

Thank you all for being here.

(Applause.)

(The meeting was adjourned.)