Minutes of the U.S. Advisory Commission on Public Diplomacy September 2010 Official Meeting

Washington, DC
September 28, 2010

Commissioners Present:


  • Bill Hybl, Chairman
  • Lyndon Olson, Vice Chairman
  • Penne Korth Peacock
  • Jay Snyder, Commissioner
  • John Osborn, Commissioner


Presenters from the LBJ School of Public Affairs
University of Texas at Austin:


  • Dr. Ken Matwiczak
  • Ms. Amanda Dillon
  • Ms. Katherine Zackel



MR. HYBL: On behalf of the Commission, we want to welcome all of you here this afternoon. I’m Bill Hybl –– I don’t think I have any sound. Okay. –– Chairman of the U.S. Advisory Commission on Public Diplomacy. With me today are the Vice Chairman, Lyndon Olson, Ambassador Penne Korth Peacock, Commissioner Jay Snyder and Commissioner John Osborn.

The Commission is a bipartisan panel created by Congress, clear back in 1948 to formulate and recommend policies and programs that carry out U.S. Government public diplomacy. The Commission’s focus for 2009 and 2010 has been on efforts to measure U.S. public diplomacy programs and initiatives.

Measuring the effectiveness of public diplomacy programs is difficult but critical to promoting U.S. foreign policies abroad, managing programs effectively, and indicating to the public diplomacy professionals what works and certainly what is valued. The Department of State has long been engaged in evaluation efforts and we have met with some of its staff to understand those particular efforts. We commend these efforts, but we also believe more could be done.

To this end, the Commission entered into a partnership agreement last year with the University of Texas, Lyndon B. Johnson School of Public Affairs. This collaboration took the form of a two-semester policy research project involving approximately 15 LBJ School graduate students and one professor. The Commission believes that the LBJ School was best equipped to undertake this cooperative project due to its outstanding track record in the field of public policy research.

The Commission asked the LBJ School to do three particular things. Number one, develop a public diplomacy model for assessing performance, a quantitative tool that can be used to determine the effectiveness of the Department of State public diplomacy initiatives vis-a-vis stated goals and objectives. Number two, review the Department of State’s current public diplomacy evaluation efforts. And number three, make recommendations for future evaluation efforts.

I’m not going to go into too many of the details, as Professor Matwiczak can. Professor Ken Matwiczak is here with us today with two of his associates to brief us on the report, copies of which hopefully all of you have at this juncture, including the DVD.

I would ask you hold any questions to Ken or to members of the Commission until we have finished this particular presentation.

Ken, you have the floor.

MR. MATWICZAK: Thank you, sir. Good afternoon, everybody, my name is Ken Matwiczak and I’m a senior lecturer at the LBJ School of Public Affairs, University of Texas at Austin, and it’s our pleasure to be here and tell you a little bit about the work we’ve been working on for about the last year or so.

With me are two of our recent graduates from the LBJ School, Amanda Dillon and Katherine Zackel, and they’ll also be sharing the presentation this afternoon.

So with that, what we’ll be talking about as we go through is a little bit of background of how we came to be, how the project came to be, what we did throughout the project, how we went about our work, what we came up with, a notional model for measuring public diplomacy efforts, etc., and then a demonstration of how that model might be worked or might be used. You’ve got a copy of our handouts and our slides, also on your chair, and a copy of the handout that Amanda will be using later on. So with that, we’ll get started.

Affectionately, for those that don’t want to butcher my name, around the LBJ School I go by Dr. Mat or Professor Mat; it saves everybody a lot of trouble. And for those of you that have been watching the news at all, the University of Texas at Austin campus is open again. For those of you that are in the dark, there was a shooting at the main library on the University of Texas campus. The shooter came in with an AK-47, shot a few things up, and didn’t hurt anybody but himself –– he shot himself. There was only the one shooter; that’s all we know and that’s all anybody knows, and we’ve talked to people back on campus, too. But the LBJ School is on the far, diagonally opposite side of campus, so all our people are safe up there.

What we’d like to do is spend, hopefully an hour that’s not going to be too boring, this afternoon, explaining what we did and what we came up with as a result of our work. And what it’s good for and what you can do with it from this point on. And then demonstrate how it might be used by the Department of State.

To begin with –– oops –– it’s like the Gecko with the ––

In the fall of 2008, David Firestein, who is an alum of the LBJ School of Public Affairs, came back to the school. He was, at that time, the Deputy Executive Director of the Advisory Commission on Public Diplomacy, and he was familiar with the curriculum and the type of work that we do. We have two master’s degrees at the LBJ School, the master of public affairs and the master of global policy studies. Each of those curricula requires the students to take the two-semester policy research project course that the Chairman referred to.

That policy research project course is taken by first-year public affairs students and second-year global policy studies students. In that course we take a team of anywhere from 10 to 16 students, and they become a consulting team for a real-world client on a real project, a real question of importance or a real issue. We contract with that client to answer their questions and perform certain services for them, and we work under the sponsorship of that client. Well, Mr. Firestein was familiar with the work that we could do and the type of research that the students were capable of doing, so he suggested that we might want to consider engaging with the Commission to come up with a way to measure the success of public diplomacy efforts, which really was in kind of a state of disarray at the time. There’s no real hard way to do that. So we thought we’d take that project on, and over the next spring we negotiated with the Commission –– with Mr. Chan, the Executive Director, and with the Commissioners –– on what exactly we could do and what they were expecting out of the project.

Here’s what we agreed to try to do, as Chairman Hybl pointed out. In the end, what we wanted to come up with was a model that could be used for measuring public diplomacy efforts. Now why did we get involved with this, in addition to having a lot of work going on with public diplomacy? Some of you may not be familiar with how extensive our work in public diplomacy is. Our former dean, Jim Steinberg, is the current Deputy Secretary of State, so we have that linkage to the public diplomacy world there. But in addition to that, part of the work I do as a senior lecturer, and the research I do, is on performance measurement, and we’ve done several projects in the past on measuring performance in government entities and on how you go about organizing that work in the interest of governance, in the interest of communicating with the public, etc. So there was a mutual attraction between public diplomacy measurement and our past work with performance measurement. What we wanted to do, then, was leverage the work we had done before and show how it could be applied to public diplomacy efforts.

So August and September 2009 is when the project actually started. We enrolled 16 students in the course; one of them stayed only a semester because she had to go off and do some other work, so we ended up with 15 students following through two semesters. These project courses are typically run, managed and conducted by the students themselves, under the guidance and supervision of a faculty member. So they actually do all the planning of the work that needs to be done. We bring in 15 students from varied backgrounds. In this case, I think most of them could spell public diplomacy when they started the course; I’m not sure how much more they knew about it, so we had to do some training –– training on performance measurement, evaluation, etc.

And to explain how we went through that initial phase of the project, the research we did, until we came up with a model, I’d like to ask Katherine Zackel to talk you through that portion of the project.

MS. ZACKEL: At the beginning of the project, we came up with an original timeline for our research. First, during the period of September through January, we were going to begin background research using Department of State sources and also an extensive literature review. We also planned to develop and implement a preliminary survey and a later in-depth survey to get feedback from public diplomacy professionals.

We planned trips to embassies to do case studies and work with the public affairs officers in the field. We also planned to begin developing a conceptual model to measure the effectiveness of public diplomacy.

From January to July, we planned to further develop the model, to collect data from the Department of State to populate the model and also go out into the field and test the model and then get feedback from those public affairs officers that we would be working with. And then finally we would present a final conceptual model and report to the Commission.

We faced several challenges in meeting this original timeline. Our greatest challenge was that we began work in late August of 2009 and we did not receive funding until March of 2010. We also did not receive access to any kind of comprehensive data from the State Department regarding current or past public diplomacy programs. We also received just limited access to the Department of State personnel within Washington, D.C. and in the field. Because of these challenges, it was difficult to survey professionals, collect data, receive feedback, or even study the current measurement tools that were out there.

We did receive support and guidance from the Commission and the Commission staff. We also met with Cherreka Montgomery, the director of the Evaluation and Measurement Unit, and with Rick Ruth and Robin Silver from the ECA Office of Policy and Evaluation, and we were also able to rely on some personal contacts. Some of the students had interned in embassies or in Washington, D.C. for the Department of State, and others had professional contacts from previous experiences.

Because of these challenges, our approach was modified. We interviewed several public diplomacy professionals. Karen Hughes, the former Under Secretary, came to speak with our class and give us context regarding the strategic goals of public diplomacy programs. We also met with Carl Chan, Gerald McGlauflin and David Firestein of the Commission staff. Commissioners Peacock and Osborn both came to our class to speak with us and answer any questions we had. And finally, the two diplomats in residence at the University of Texas, Craig Engle and Bill Stewart, were able to give us their perspective from the field.

Performing the survey was difficult because we didn’t have access to Department of State personnel. We did reach out to those personal contacts that I mentioned earlier, and we also reached out to academia. We found academics who previously served for the Department of State and we also found some who were focusing their research on public diplomacy. So we were able to get the survey out, but not to as many people as we originally had hoped.

We did do an extensive literature review, relying on publicly available sources and we did do an extensive review of public diplomacy activities, primarily using embassy websites.

Our analysis of the current measurement tools that are used by the Department of State showed a lack of coordination and a lack of a unified effort. There seemed to be some duplication of efforts, yet at the same time, there was not consistency in measurement across time, across locations or across different types of public diplomacy programs.

We also found that a lot of these measurement tools focus on outputs, rather than outcomes. Outputs tend to be more descriptive data, such as how many people attended a given public diplomacy event, or how many events occurred within a given month at a given embassy. We focused more on outcomes, which we found somewhat lacking in current measurement, and which relate to the actual strategic goals of public diplomacy. In the end we found no strong correlation between the public diplomacy programs that go on in the field, the cost of those programs and their effectiveness in meeting strategic goals of the United States.

Through our research we developed a conceptual model, called the public diplomacy model for assessing programs. We began by developing three outcomes. These were understanding, influence and favorability. Understanding is the foreign audiences’ and foreign publics’ ability to comprehend United States policies and United States culture. Influence is our power to effect change in those foreign publics and favorability is our ability to seek their approval or admiration.

We developed metrics to help measure these outcomes and several suboutcomes and we used these metrics to design a model to be able to link the programs that we do out in the field to the strategic goals of the United States.

And to go into more detail about the model, I’m going to turn it back to Dr. Matwiczak.

MR. MATWICZAK: Much of the work that Katherine talked about was self-motivated, self-generated. Because of the limited access we had to Department of State resources, and because the funding came so late in the project, we weren’t able to travel and we weren’t able to conduct surveys at the scale we intended. So the model that we’re about to talk you through is a notional model. It is what could be used, or it could provide a framework, for further work in measuring public diplomacy efforts. The three outcomes –– an outcome being the so-what, what effect did these efforts have, as opposed to how many of them did we do –– are directed toward measuring those effects. Even these three outcomes are suggested based on the research, the interviews and the survey results that we got back during the initial phase of the project. So we used this as a basis for developing a model for measuring performance. From these we developed metrics, etc., that again are notional –– ways we think some of these efforts might be measured.

For those of you that are familiar with multicriteria decision-making models, the framework for measuring performance that we adopted is essentially a multi-attribute utility (MAU) modeling approach. For those of you that aren’t familiar with it, don’t worry about the big words; I’ll try to explain our thinking and our rationale for the type of model that we put together.

So what I want to do for the next few minutes is spend some time talking about what the model is, what it looks like, what elements are in the model, and how it can be used in the strategic planning process to measure performance and make decisions. When I finish my talk, I will turn it over to Amanda, who will actually bring up the spreadsheet that we use and show you how simple it is to put in measures of performance that you’re interested in, set various parameters in the model, interpret the outcome, and use the model for decision making.

So with that: the model itself is based on a simple Excel spreadsheet. We wanted to come up with something portable, accessible and easily understood by all potential users. We didn’t want to use any proprietary software; we wanted this to be publicly available so that almost anybody at any level within the public diplomacy world would be able to use this model and do their own decision making, whether at the embassy level in the field or at the State Department level for strategic decisionmaking. So we kept it simple, using an Excel spreadsheet.

The basic organization of the model within this spreadsheet, then –– and you have a framework on your seat there –– is a hierarchical structure, a tree structure, with top-level outcome measures that are broken down into subsequently more detailed levels below them. Amanda will demonstrate that a little bit when she explains how the model might be used. The first page of the second handout, not the PowerPoint one, is a breakout of one element of one of the outcomes: the understanding outcome, how well the general public in the target area understands the message that we’re trying to communicate with the PD efforts. From there we broke it down into smaller suboutcome measures, which we then broke down into the physical things that will inform those suboutcomes. So basically it’s a hierarchical model of broad outcomes, broken down successively into smaller measurable bites.

We take those measurements and translate them onto a common scale, so that we can compare how we are doing with respect to generating an understanding of the information we’re trying to disseminate versus how much influence it is having, measured, as the Chairman mentioned, vis-a-vis some strategic goals. Our success in accomplishing these strategic goals is all measured on the same scale –– a common scoring scale, if you will –– and it’s all measured against the expectations established during the strategic planning process. Because we have an easily communicated hierarchical structure, with measures broken down into subsequently smaller measures that are visible to everybody and scored on a common scale, it becomes an easy tool for communicating strategic goals, how we measure success against those goals, etc.

We’ve used this approach to measuring performance in several organizations in Texas state government: one with a local river authority in Austin, the LCRA; another with one section of the Texas comptroller’s office; and another project with our local metropolitan transit authority. We’ve also used this approach in trying to model the performance of the budgeting system within the State of Texas government. That one didn’t go so well yet; they’re still working on it. But this approach has been used, and the biggest benefit that those users got out of it was right here. The biggest benefit of the model is getting the strategic planners talking to each other, saying what is important, what are our priorities, how do we know when we’re successful, define successful, etc. If you can accomplish this, the rest is easy and falls straight out.

So it’s a simple spreadsheet model with a hierarchy. What does this model have in it? Well, as mentioned, the hierarchy: we take the outcomes and break them out into subsequent measures. The types of measures that we’re talking about, the outcomes, are the so-whats or the effects –– increased understanding of the U.S., increased influence of the U.S. Government on the opinions of the elites in the foreign country that we’re targeting, etc. So the so-whats are the outcomes, the impacts. Depending on the outcome that we’re looking at, we may want to target our efforts at the general populace, at the elite decision makers of the country, or even at the government itself. A public diplomacy effort may have any of those targets, and the option to include or exclude any target audience is built into the model.

Then the policy area of interest. Are we trying to influence their opinions about environmental issues in the country? Security issues? Or just educate them about U.S. culture in general –– cultural issues or cultural areas? What are the policy areas of interest that we’re trying to influence? And how do we measure whether or not those efforts are successful? Is it just how many people attend, or is it a combination of how many people attend and how many people subsequently change their opinion about us, via a survey? We heard testimony in the previous Commission hearing in July from the offices within the State Department that are doing this type of work. Again, there is no unified framework, no unified structure. This is a vehicle for getting those people to talk to each other also. So these are the elements that make up the model: the outcomes, who we are targeting, the message we’re trying to communicate, and how we know whether or not that message is successful.

Within the model, then, the decisionmakers can establish priorities on every single one of these elements. Is understanding much more important than influencing somebody in this country? And you can change this priority depending on what you’re trying to measure, whether it’s comparing programs at the mission level, comparing embassy efforts at the regional level, or, at the national level, comparing regions and the success of PD efforts within the regions. So depending on what your goals are, you can establish different priorities for all of these different elements of the model. And because it’s a spreadsheet, you can change these priorities to do "what if" analysis, what we call sensitivity analysis. What if this were a higher priority? Would that influence the decision I’m going to make about this program later on? So we can do some sensitivity analysis.
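
The kind of "what if" sensitivity analysis described here can be sketched in a few lines. This is an illustrative Python sketch of the idea, not the LBJ School's actual Excel model; the program scores and priority values below are invented.

```python
def composite(scores, priorities):
    """Weighted 0-100 composite: each outcome's 0-100 score is weighted
    by its 0-10 priority, then divided by the total priority."""
    total = sum(priorities.values())
    return sum(priorities[k] * scores[k] for k in scores) / total

# A hypothetical program scored on two outcomes:
program = {"understanding": 70, "influence": 40}

base_case = {"understanding": 5, "influence": 5}
what_if = {"understanding": 8, "influence": 2}  # what if understanding matters more?

print(composite(program, base_case))  # 55.0
print(composite(program, what_if))    # 64.0 -- shifting the priority changes the score
```

Rerunning the composite under different priority sets shows immediately whether a ranking of programs is robust to the decision maker's weighting, which is the point of the sensitivity analysis described above.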

It is incumbent upon the decisionmakers to sit down and define what they expect out of a public diplomacy effort. Force the decision makers to think: how do I know I’m successful? What do I consider to be a success? At the ideal level, I’d be perfectly happy if we achieved this level of performance and got this result. A hundred percent of the population that we surveyed after we did this radio broadcast totally understand, from survey results, what the U.S. is trying to accomplish in Afghanistan –– that might be the ideal level. Then again, on the opposite end of the spectrum, I would be thoroughly upset if we didn’t reach at least this percent of the population. So what’s the least acceptable, the minimum level? What we’re going to do, then, is measure the success of these efforts relative to some ideal standard, or relative to some minimally acceptable level of performance. The model allows you to simply put that in there, change your standards if you like, and watch the effect of changing those expectations too.
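
Scoring against an ideal and a minimally acceptable level can be illustrated with simple linear interpolation onto the common scale. This is only a sketch of the concept; the report's spreadsheet may scale differently, and the function and parameter names are ours.

```python
def score(raw, minimum, ideal):
    """Map a raw measurement onto the model's common 0-100 scale.

    `minimum` is the minimally acceptable level (scores 0) and `ideal`
    is the ideal level of performance (scores 100); values in between
    are interpolated linearly and values outside are clamped.
    """
    if ideal == minimum:
        raise ValueError("ideal and minimum levels must differ")
    pct = (raw - minimum) / (ideal - minimum)
    return round(100 * max(0.0, min(1.0, pct)), 1)

# e.g. 60% of surveyed attendees showed improved understanding, against
# a minimally acceptable 20% and an ideal 100%:
print(score(60, 20, 100))  # 50.0
```

Changing the `minimum` or `ideal` standard and rescoring is exactly the "change your expectations and watch the effect" exercise described above.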

The final element that we can include in the model is some characterization of how much risk the decisionmaker is willing to take, vis-a-vis committing additional resources, or just inherent in the effort itself –– it might be a riskier effort to write an editorial than to send out a broadcast in a particular area of the world. How much risk is that decisionmaker willing to take to accomplish that level of performance? And again, this is changeable. Some people are very averse to taking risk with respect to certain efforts; others are willing to take a lot of risk. Put yourself in the situation of Donald Trump spending money, versus me or you spending the same amount of money. We’d be a little more afraid than Donald Trump might be to take a risk on dropping $100,000 for a particular effort. So the idea of taking a risk to achieve a level of performance is incorporated into the model explicitly, and the decisionmaker has to address it explicitly also.
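
In multi-attribute utility terms, this risk attitude is usually captured by the curvature of a utility function applied to the performance fraction. Here is a hedged sketch using the common exponential form; the parameter name and sign convention are ours, not the report's.

```python
import math

def risk_adjusted(pct, risk_tolerance):
    """Apply an exponential utility curve to a 0-1 performance fraction.

    risk_tolerance > 0 gives a concave (risk-averse) curve, < 0 a convex
    (risk-seeking) curve, and None a linear (risk-neutral) one.
    """
    if risk_tolerance is None or abs(risk_tolerance) < 1e-9:
        return pct  # risk neutral: utility equals performance
    r = risk_tolerance
    return (1 - math.exp(-pct / r)) / (1 - math.exp(-1 / r))

# A risk-averse decision maker credits middling performance more
# generously than a risk seeker does:
print(round(risk_adjusted(0.5, 0.5), 3))   # 0.731 (concave, above 0.5)
print(round(risk_adjusted(0.5, -0.5), 3))  # 0.269 (convex, below 0.5)
```

The same measured result thus scores differently for different decision makers, which is how a model can make risk behavior explicit rather than leaving it implicit in the discussion.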

So, an example of the kinds of things we put into the model. Katherine introduced the idea that we came up with three generic outcomes –– our idea of what three outcomes might be used to measure successful public diplomacy efforts –– and they’re not locked in concrete. How did we come up with these? Well, through our research we found out that the Evaluation and Measurement Unit at the Department of State has an ongoing effort called the Public Diplomacy Impact Project, in which they go out to several countries around the world and measure the impact of a particular public diplomacy effort. But again, they’re very broad measures of impact, and they measure it in six areas, based on surveys and follow-up surveys. Our three outcomes represent a subset of the six areas that the Evaluation and Measurement Unit considers outcomes of successful public diplomacy. So again, this is a subset, based on work that’s already being done. It can be expanded, it can be reduced, it can be changed, however you want to work it.

At the next level in the hierarchy, then, is who these efforts are targeting. And this is on that spreadsheet you have in front of you, the hierarchy spreadsheet. Who are we targeting these efforts to? The one on your top page is the understanding outcome, broken down by target audience; in this case the page that you have is the general population. We’re targeting these efforts at how well the general population understands something. And what is it that we’re trying to get them to understand? Something in these areas: their understanding of U.S. culture, of our foreign policy, our security policy, or something else. Again, these are notional. They are measures and areas developed as a result of the interviews we conducted, the surveys we conducted, etc. They may change depending on the decision maker, the Department’s goals, etc.

So we’re trying to influence, in the example you have, the general public’s understanding of one of these –– I forget which it is in the spreadsheet, foreign policy, I think, or culture, that’s broken out. Well then, how do we know they understand U.S. culture better? We break it down to the next level, where we actually have the performance measures. And in our case, in contrast to some of the efforts that are already ongoing in the Department of State, Katherine referred to those as output measures. There is the MAT, the mission activity tracker effort going on currently at the embassy level, which feeds data back up into the Department of State, reporting: here’s what we’re doing in our public affairs work at the embassy level. We had these kinds of events, we had this many of those events, and at these particular events we had this many people attend of this type. You can see those are output measures. Those are bean counts: we had this many people. They don’t tell you, so what? What happened as a result of 100 people coming to this particular event? So what we were looking for, and what we’re proposing with some of our metrics, are changes in attitudes, measured by surveys, interviews or percent participation; or if you were looking at security policy, was there a change in the number of anti-U.S. demonstrations in the street, or in participation in them? So we’re looking for deltas –– changes in percentages and changes in participation –– rather than bean counts of how many people participated. Because it’s awfully hard to track millions of dollars per head at a particular embassy party or an art exhibit in some foreign country.
But what we’re trying to do is establish a link and say, if I spend this much money, then I can track that down through the hierarchy and back up through the hierarchy and say, I had this event at this embassy and I measured how successful it was and I can track that all the way back up to its overall impact on Department of State’s public diplomacy effort, back up the hierarchical chain that you have there. So these are notional, again, suggested ways of measuring the performance there.

Once we have these performance measures defined, we go back and establish what’s most important to us. Yes, it’s going to be iterative –– we might establish a priority and say this is the most important as we go through it, but then go back and ask: all right, what is our most important outcome? Is it more important that they understand us, or that we are able to influence that general public as a result of our public diplomacy efforts? Again, it has to be set by the decision makers as part of the strategic planning process –– the strategic goals at the Department level and at every level in between, all the way down to the embassy and the mission level.

We’ll establish priorities: they pick a priority on this 0 to 10 scale. Ten is absolutely must have, got to have this, extremely important; zero is I don’t care, absolutely no importance. I mentioned earlier that we could eliminate some of the target audiences simply by establishing a zero priority. So we can target our efforts and say this is absolutely of no meaning to us, give it a zero priority, and that eliminates it from the model directly. Again, they can establish how important a particular measure is. Remember, as everybody knows, if everything is top priority, priority ten, then nothing is top priority. So that’s part of the decision making process –– the discussion going on around the table and up and down the chain of command.
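
The way a zero priority drops an element out of the model follows naturally if the 0-to-10 priorities are normalized into weights. A minimal sketch of that idea, with illustrative names that are ours rather than the spreadsheet's:

```python
def normalize_priorities(priorities):
    """Turn 0-10 priorities into weights that sum to 1.

    An element given priority 0 gets weight 0 and so drops out of any
    weighted score computed from these weights.
    """
    total = sum(priorities.values())
    if total == 0:
        raise ValueError("at least one element must have a nonzero priority")
    return {name: p / total for name, p in priorities.items()}

weights = normalize_priorities(
    {"understanding": 8, "influence": 10, "favorability": 2}
)
print(weights)  # {'understanding': 0.4, 'influence': 0.5, 'favorability': 0.1}
```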

Once they’ve got all this hierarchy built and all these measures put in place –– saying this is how we determine whether or not we’re successful at influencing the public’s understanding, breaking it all the way down to the measures –– then they go down to each bottom-level measure, the metric level, and ask: all right, what do I expect out of this effort? We’re going to put on a concert in Zimbabwe and we’re going to keep track of who comes, etc. What do I consider a successful way of influencing their understanding of U.S. culture? Is it a before-and-after survey that I do? How do I measure that? What would be the ideal level of performance? If I change 100% of the concert-goers’ attitudes, is that ideal? Is that realistic? So we go through and define realistic expectations with respect to how the results of that effort should be measured. And it’s scored on a 0 to 100 scale, because we’re all familiar with grades –– we all got our grades in school on 0 to 100, people can relate to a 0 to 100 scale –– 100 being the highest score you can possibly achieve, zero being absolutely nothing.

So we’ll ask the decisionmakers, again, to go through, establish their measures, prioritize the measures and define their expectations with respect to those measures. And then go back again and ask: how much risk am I willing to take to achieve that level of performance, the ideal level of performance? There are ways in the model to characterize their risk behavior. Those of you familiar with multi-attribute utility decision making theory know that we can break risk behavior down: someone willing to take a risk to achieve high levels of performance is a risk taker; someone less concerned with reaching high levels of performance and more worried about the low end of the performance spectrum –– about protecting what we can do at the least cost, the least drain on resources –– is more risk averse; and risk neutral means we don’t particularly care either way.

So the decisionmakers, the people sitting around the table, have to come up with these elements and they can put them into the structure that we provided. Amanda, again, will show you how we go about doing that. What do you get out? When you put all these things in and you push the button, well, not really push the button, it’s being calculated as you put the things in, so it’s dynamically changing as you put all these elements into the spreadsheet model. What do we get out of it? We get out, at every level in the model, a performance score on a 0 to 100 scale. So we know how we did on this 0 to 100 scale in affecting the understanding of the general population with respect to U.S. culture. Do they understand U.S. culture? We can compare how successful we were to where we wanted to be in our ability to influence them. So at the very top level, understanding, influence and favorability rating of the U.S., we can find out how successful each of these outcomes was, relative to where we wanted to be, and at each subsequent level, all the way down to the lowest performance metric. This is what we wanted our survey to say the change in attitude was as a result of that concert. Did we achieve that, on a 0 to 100 scale? And at each level down, we can attribute how successful we were at the higher level by looking at the subordinate measures. So if we did not do real well at the very top, we can go down to the next level and say, where did we fall apart? Was it in our ability to influence their understanding of U.S. culture, or their understanding of U.S. foreign policy? Where were we really weak? And we can focus then, if it was foreign policy, focus on foreign policy, and break foreign policy down.
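The roll-up described here, where each level's 0-100 score is built from its subordinate measures, can be sketched as a priority-weighted average applied recursively. This is a sketch of the general idea only; the report's model lives in an Excel spreadsheet and its exact weighting scheme may differ, and the scores and priorities below are invented.

```python
def roll_up(node):
    """Score a node in the hierarchy: a leaf carries its own 0-100
    score; a parent is the priority-weighted average of its children.
    A priority of 0 drops an element from the sum entirely."""
    if "score" in node:  # leaf metric
        return node["score"]
    total = sum(child["priority"] for child in node["children"])
    return sum(child["priority"] * roll_up(child)
               for child in node["children"]) / total

# Invented scores for an "understanding of U.S. culture" branch:
culture = {"children": [
    {"priority": 10, "score": 80},  # cultural events
    {"priority": 7,  "score": 40},  # entertainment media
    {"priority": 5,  "score": 60},  # school visits
]}
print(round(roll_up(culture), 1))  # → 62.7
```

Because the same rule applies at every level, a weak top-level score can be traced down branch by branch to the subordinate measure that dragged it down, which is the audit trail discussed next.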

Was it the method or means we used to disseminate the message? Was it the broadcast, was it the media, the journal print media? Or where was the weakness there? And then break it down further into what actually failed in this particular area. And we can start focusing and making strategic decisions based on that breakdown of where we were weak versus where we were strong in the model.

The advantage of a hierarchical structure of a model, that’s what we just walked you through right there. What that provides you then, in the model, the physical model, is an audit trail of results. What did we do well, what did we not do so well? And it’s all there for everybody to see, at all levels that you’re using the model at, from the decision makers on down. And it gives the opportunity for decision makers to address those particular areas. So we have an audit trail of where we expended the resources and what the results of expending those resources in that particular area were. And we put that on a graph, because people are visual to begin with. So we use a graph, and Amanda will show you some graphs, it’s in the handout that you have, and show you how we can take those graphs and break them down into subsequent levels and apportion blame, or credit, if you will, at each level in the hierarchy. So we take all these numbers, we give you the numerical results, we can do that in a table if you want, but visually it makes more sense when you’re sitting around a conference table and having a discussion about goals, objectives, etc.

So how can we use this? If you accept this idea of a hierarchical structure broken down into subordinate elements, each of which is measured relative to some expectation on level of performance, what can we do with that? Well, the obvious thing is, put it up there and say, where do we want to go with the strategic planning process? What do we consider successful? If we spend this much money on public diplomacy, how do we know it’s money well spent? And how are we going to determine whether or not we are successful in that? So it gets the decision makers talking at all levels to define, what are our goals? What are our objectives? How do we measure whether or not we’ve accomplished those goals? The objectives from your planning process should be SMART: specific, measurable, attainable, realistic and time-driven. So how do we know when we’ve achieved those goals? And on these objectives, how do we measure to what extent we have accomplished them? Again, you can see the hierarchical structure evolving or developing there.

We can compare them, and so this model, this structure, this framework can be used at all levels. You can sit down in a mission or an embassy and compare programs. Where are we strong, where are we weak? Or within a single mission you can compare across several periods. We changed the amount of resources we allocated to this effort, we put more into this: did that result in a change in our performance, or in the outcomes that we achieved as a result of those efforts? So we can take a single program and track it over several years, or you can take a single program and compare it over multiple missions or multiple countries. Say we’re spending $10 million in Zimbabwe, $15 million in Namibia and $40 million in South Africa. How are we doing, relative to where we expected to be as a result of those efforts? We can compare those three or four countries on there. So you can use it at all different types of levels.

Again, we’ve already mentioned –– identified strong and weak results, and we can do a "what if". What if this wasn’t the number one priority? What if I made this the number one priority, how does that affect my interpretation of the results and our efforts? It changes your graphs, but you have those changes in front of you now, the old one next to the new one, and you can see the resulting impact. What did it affect in the chain, or in the hierarchy, and how did it influence my overall measure of performance and my ability to accomplish the goals? So we can do "what if" analyses.
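A "what if" run is just the same roll-up recomputed with different priorities. Under the assumption of priority-weighted averaging (the report's exact formula may differ), swapping which element is the number one priority looks like this, with invented scores:

```python
def weighted(children):
    """Priority-weighted average of (score, priority) pairs, 0-100."""
    total = sum(priority for _, priority in children)
    return sum(score * priority for score, priority in children) / total

baseline = [(80, 10), (40, 5), (60, 5)]   # current priorities
what_if  = [(80, 5), (40, 10), (60, 5)]   # what if #2 were #1?
print(round(weighted(baseline), 1))  # → 65.0
print(round(weighted(what_if), 1))   # → 55.0
```

The ten-point swing comes entirely from the reweighting, not from any change in performance, which is exactly the kind of sensitivity the speaker describes putting in front of the decision makers.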

But what’s really nice about this is, one of the things that we did with the state agencies and the other government entities involves this –– we can take your budget: how much money did we spend on that? What was the change in the overall score, if you will, as a result of adding these resources to it? Was this money well spent? We spent an additional million here, and we got this much change in the accomplishment of our goals. But we spent a million dollars over here and we only got this much in the overall attainment of our goals. So it helps the decision makers, again, identify where our money is being well spent. Now that’s not explicit in the model yet, but the ability to do that is there in the model.
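The simple version of that cost-effectiveness idea, change in the overall score per additional dollar spent, might be sketched as below. This is the "keep it simple" gross-expenditures reading mentioned a moment later, with invented figures, not anything explicit in the model.

```python
def score_per_million(before, after, extra_spend_millions):
    """Change in the 0-100 score bought per additional $1M
    (gross expenditures only; no cost-of-money adjustments)."""
    return (after - before) / extra_spend_millions

# Two hypothetical programs, each given an extra $1M:
print(score_per_million(55, 70, 1.0))  # → 15.0  (money well spent)
print(score_per_million(55, 58, 1.0))  # → 3.0   (much less so)
```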

By the way, this is also drawing from some work done by Accenture, and they have a book on it called “Public Sector Value” or “Public Service Value Models.” This is where we first got introduced to this idea and this concept. They had us test their model out with these public entities, and there were some weaknesses and fallacies that we found in it, and we’ve since adapted it to this approach, where we use relative scoring scales, on a 0 to 100 scale. Their method of doing cost effectiveness would blow most accountants away, though. They try to bring in the cost of money over time and just an unbelievable number of factors in the cost effectiveness. But you can do that in this model if you want, or you can keep it simple with gross expenditures.

So how do we go about using the model? That’s a basic layout of what the model structure is, the elements of the model, what we’re trying to accomplish with the model and how it might be used. What I’d like to do is turn it over to Amanda now and have her show you what we can do with the model, how the user can actually input things into the model and change them in the model.

MS. DILLON: Thank you. So in order to use this, as Dr. Mat has already pointed out, you have to identify each of these components within the model in order to decide, I guess, how to most effectively use it.

So first you have to come up with your outcomes. Now we’ve selected influence, understanding and favorability, but these can be changed if the State Department decides that this isn’t necessarily the direction they want to move in. Second is the audience. We selected the audiences of members of foreign governments, elites and the general population. Now, suboutcomes are a breakdown of how the outcomes are achieved. For example, under understanding, the three suboutcomes are dissemination, or the spreading of information; reception, or the audience actually receiving the information; and finally comprehension, whether or not they understand what they received.

Next is the policy areas. This component consists of the type of information and the areas in which public diplomacy works, and these can vary from foreign policy to environmental policy, all the way down to U.S. culture.

Next, as Dr. Mat explained, is the priority. This is the importance of a component and, as he explained, not everything can be a priority of ten. And if a program or a particular project needs to be cancelled, you can enter a zero in that particular priority and it will cancel out that element of the spreadsheet and nullify it.

Finally, the programs and metrics. This is the quantifiable, measurable information that you’re trying to track. Now, we couldn’t access any of this particular quantifiable, measurable information, so we came up with our own, a close approximation of what’s actually used. So located in the handout are the example metrics I’ll be using. On the second page we’ll be looking at how well the State Department is doing at increasing understanding of U.S. culture through dissemination of information. And these three metrics are cultural events, the relative change in the number of events open to the general population from the previous fiscal year; entertainment media, the relative change in the number of television and radio shows portraying U.S. culture; and school visits, which measures dissemination to youth. This is the number of schools the embassy visits for the official purpose of presenting U.S. culture. Now, performance levels need to be established for each of these particular metrics, and these measurements set the bounds. So if you wanted 20 school visits as your ideal measurement, this translates to 100, or a perfect score, whereas five could translate to a zero.

Risk is the next section. This is the willingness to commit significant resources for an undetermined outcome. We used a negative 10 to positive 10 scale, where closer to negative 10 is risk averse, near 0 is risk neutral and closer to positive 10 is risk-loving, or risk accepting.
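One conventional way to encode that -10 to +10 risk attitude, drawn from multi-attribute utility theory rather than from the report's actual spreadsheet, is an exponential utility curve over the normalized score: concave for a risk-averse decision maker, linear for risk neutral, convex for risk loving. The scaling choice below is an assumption for illustration.

```python
import math

def risk_adjusted_score(score, risk):
    """Apply an exponential utility curve to a 0-100 score.
    risk = -10 is strongly risk averse (concave curve), 0 is
    neutral (linear), +10 is strongly risk loving (convex)."""
    x = score / 100.0
    a = -risk  # averse -> a > 0 (concave); loving -> a < 0 (convex)
    if abs(a) < 1e-9:
        return float(score)  # risk neutral: utility equals the score
    return 100.0 * (1 - math.exp(-a * x)) / (1 - math.exp(-a))

# A middling raw score of 50 is valued very differently:
print(round(risk_adjusted_score(50, -10), 1))  # averse: far above 50
print(round(risk_adjusted_score(50, 10), 1))   # loving: far below 50
```

A risk-averse curve rewards safely clearing the low end of the performance range; a risk-loving curve only pays off near the ideal, matching the behaviors Dr. Matwiczak described earlier.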

Our example is, as I said, to increase understanding of U.S. culture among the general population through the dissemination of information. And as you can see, it seems that this particular ambassador is more risk averse, or risk neutral, with dissemination and more risk-loving with reception.

Now if you look at the very last page of the handout, you’ll see how the spreadsheet represents itself graphically -- hold this up. Now here’s where you can actually visually see the hierarchy. So in order to measure the dissemination of cultural information to the general population, we look at school visits, entertainment media and cultural events. Then to see their understanding of culture we have comprehension, reception and dissemination, the dissemination section of which is made up of this particular graph. And finally, this is the understanding of the general population, and this is made up of culture and policy. Culture is made up of this particular section.

So let’s say the ambassador to Cairo wants to focus his public diplomacy efforts on the youth. Now this would involve disseminating information particularly through schools. So we would change school visits to a priority of 10, and entertainment media, which is also generally youth-driven, to a 7, and let’s say he’s willing to commit significant resources, so we’ll raise the risk level to a 7 on the school visits and a 7 on entertainment media. Now if we look at the current performance that the embassy has in these areas, the performance isn’t too good, and as a result we can see that from year one to year three we do have some growth. But because they’ve committed significant resources and made school visits a top priority without actually changing the number of visits, we can see that the dissemination of information to the general population –– scroll this down a little bit –– the dissemination of information is practically nonexistent in year (inaudible) and some in year three. And as a result, you can tell from the difference between the graphs how understanding, especially in the cultural arena, decreased as a result of the same amount of work going into something that had double the effort and double the resources.

Now, let’s say the ambassador believes that he has the same priorities, the same performance, but he thinks that having cultural events is a particular security threat, and so 100 is not a good ideal. He thinks it should be closer to 40, with the same minimum of 10. Now without changing the performance level, say the embassy does the exact same in this three-year period that they did in the past three-year period, with the same amount of priority on everything else, we can see that because of the cultural events we’ve managed to increase this section a little bit, which then increases the dissemination slightly and increases the understanding of the general population about U.S. culture a little bit. Now, this tool is incredibly flexible, and it is most important on the strategic planning and evaluation side of public diplomacy efforts.

Now, the PD map is useful for evaluation and planning, and it’s flexible enough to fit any office and to measure the effectiveness of public diplomacy in achieving any outcomes that want to be measured. Now, the thing that everyone needs to be careful about is that while this is incredibly easy to manipulate, it is most effective when used properly in the strategic planning process.

I’m going to pass it back to Dr. Mat.

MR. MATWICZAK: Okay. That’s just a quick overview of how simple the model is to use. That was one element of the model in the report. We have built a complete model for our notion of the understanding outcome; we assembled that and it’s in appendix D of the report. You can do likewise for the favorability outcome and for the influence outcome, depending on the (inaudible) makers, but the understanding model provides the framework for you to understand how these things are calculated.

This is done in an Excel spreadsheet, so all the formulas that we used in calculating these are based on expected value, and the formulas are visible. They’re available to anybody, and you can change any of the parameters. That’s a danger, again, but once you know what you’re trying to do with the model and how to use the model, you can do what you want with those formulas. There’s nothing protected, copyrighted, or otherwise hidden from view in this model.

What we’re hoping that you get out of this is that it is a simple structure, a flexible structure that can be used in the strategic planning process. So, it’s a notional model, it’s our thoughts based on our background and the background work we’ve done on the research and some of the other efforts going on in performance measurement within the government sector of how public diplomacy could be measured, and provides a structure for accomplishing that. Again, it’s Excel, that means anybody can use it, anybody who is familiar with spreadsheets at all can use it, at any level. You can change what the elements of that model are and the results are easy for everybody to understand. All those graphs that Amanda showed are on a 0 to 100 scale. And you can change what those graphs look like by changing your priorities, changing your risks, changing what you determine is your level of expectation for ideal performance, etc. And it provides, because it’s a spreadsheet that’s already in place, an excellent tool for doing decision making in "what if" and sensitivity analysis.

The structure is there, you can build on it. You can copy/paste these worksheets into another one, change the names in the blocks to what you want them to be, change the importance and priorities of the model by simply changing those columns on the spreadsheets that you have in front of you that are labeled ideal, or least desirable, or risk, or priority, within the scales that we’ve established. And if you don’t like those scales, you can go to the equations and formula and change the formula to adapt to a new set of scales. But again, you might not want to do that because it’s tailored to expected value decision making and we want to try and keep everything in the same scale. But the structure is there. Anybody that wants to use it can go play with it, try it, use it in different organizations, not just public diplomacy. But the main benefit of this approach is to get the decision maker to sit down around the table and talk about what do we expect to accomplish by doing this? Where do we want to go, how do we want to go about doing it, what do we expect as a result of doing this effort, whether it be public diplomacy or whatever, and how do we know how well we are doing in achieving that level of expectation.

Now, does 0 to 100 mean anything? No, it only means something relative to where we want it to be, where we expect to be. Change the expectations, sure, you’re going to change the score, result there. But you have a common scale, a common way of comparing efforts across the programs, across regions, across areas, et cetera.

So that is our notion of a way to measure public diplomacy efforts, the success of public diplomacy efforts. And with that, chairman?

MR. HYBL: Your glasses? Those yours Dr. Mat?

MR. MATWICZAK: Yes. I’m lost without those, sir.

MR. HYBL: On behalf of the Commission, I’d like to thank you, Dr. Mat, Katherine, Amanda, thank you very much for being with us. Questions or comments by members of the Commission? Ambassador Olson?

MR. LYNDON OLSON: What –– by not having access to State, not having the kind of access I think you all desire, and by not being able to retrieve certain data, are you confident that the assumptions that underlie the report are not compromised? What would be the kind of data you would want from the State Department that you couldn’t get, period?

MR. MATWICZAK: In the model, in the notional model we built, the kind of data we wanted is built into the model. Now, whether or not the State Department actually has the capability to collect that type of data for those elements –– some of it they do, through their public diplomacy impact efforts in some countries. Some of it is available through mission activity trackers that are also being reported up through the MCs on their efforts. But the kind of data that we think needs to be incorporated into decision making is in the model itself. Again, it’s notional, it’s our thinking of it. Now, not having access to the actual data, we got snippets here and there, we had conversations with the director of the EMU, Therca Montgomery, and we’ve had conversations with the ECA Bureau about how they measure things. So we’re confident that the types of measures we came up with, although not the actual data, would be doable by the State Department and are the type of data you want to incorporate into the planning process. Does that answer your question?

MR. LYNDON OLSON: Yes, I think so, good.

MR. HYBL: Ambassador Peacock, any thoughts or questions?

AMBASSADOR PEACOCK: Yes. First of all, I’d like to congratulate you and your students –– they’re graduates now –-

MR. MATWICZAK: Thank God for that ––

AMBASSADOR PEACOCK: –– they’re gone now. On what you’ve done. What I’m wondering is, with so much paper in Washington, as there is always, if we could have a condensed version of this.

MR. MATWICZAK: The report?

AMBASSADOR PEACOCK: Of the report. I mean, this is marvelous work, it’s been done over the last year with not as much cooperation as you might have liked, but could there be an abbreviated version to catch the attention of some persons at the State Department that then could be moved to the next level, when in fact they find how thorough ––

MR. MATWICZAK: Sure. We can talk to Mr. Chan about how we can accomplish that. We could do something, come up with something like that sure.

MR. HYBL: Commissioner Osborn?

MR. OSBORN: Professor, thanks, add my thanks to you and the students. I don’t want to put words in your mouth –– I guess I’m interested in if you have a broader policy perspective, aside from the model itself, is it –– is it –– how would you summarize the lessons in this? Is it that the Department would be well served to consider some kind of quantitative basis for program evaluation? Is it –– you know, you did mention, of course, the essentiality of the planning process, but how would you sort of summarize a prescription that comes out of your work in developing the model?

MR. MATWICZAK: How much do you want me to dance? We went to –– I took the students to participate in a conference on public diplomacy efforts that was hosted by Arizona State University, in Phoenix, in the spring. And so it was a meeting of the minds of academics and some practitioners and former practitioners in the PD world. We gave similar presentations –– the model hadn’t been finished up by that time –– but based on the feedback that we got from people in attendance at that presentation, the last meeting of the Commission, our work with the EMU, the evaluation measurement unit, and the ECA Bureau, the Educational and Cultural Affairs Bureau, and the work that’s being done –– what would I suggest be the next step? I think one of the biggest things they need to do is sit down and get the decision makers in one room and say, what are we trying to accomplish? The Under Secretary has come up with her strategic goals. I think we need to take those strategic goals that Under Secretary McHale has come up with and sit down around the table at this point and say, how do we know we’ve met that goal? And start talking about what is success, how do we define success, where are we willing to devote the resources to achieve that level of success? Based on our conversations throughout the year with the people that we had access to in the Department, and the measurement efforts that we did see, I don’t think there is a unified effort, or a unified approach, to public diplomacy and to evaluating the success of public diplomacy. There’s just not a strategic push in that direction. The Secretary has her level, but there’s a disconnect from that level on down to the embassy, or the implementation level. Something is not getting communicated in between, and I think the next effort should start at the top and say, this is what we’re going to do as part of the strategic planning process.

MR. HYBL: Thank you. Commissioner Snyder?

MR. SNYDER: Again, thank you for your work, Dr. Mat, it’s an enormous undertaking and we really do appreciate it. I have a question about the subjectivity of the qualitative versus the quantitative portion of your report. In the qualitative portion of the report, you know, there’s an enormous amount of subjectivity, and you talked about how you can’t rate everything a 10. Did you at all address in your studies how to address the qualitative side of public diplomacy? Especially when it can sometimes extend not only over a year or two, but five, ten, fifteen years, such as some of our exchange and Fulbright programs.

MR. MATWICZAK: Well, that’s both one of the benefits and one of the downfalls of this type of approach. That’s why I frowned a little bit when you started asking the question, the qualitative side, because in the decision analysis world, in the multi-attribute utility, multi-criteria decision world, a lot of people pooh-pooh this approach because it is so subjective and so qualitative. But that’s not where your question ended up. What I think I hear you saying is, all right, we’ve got numbers to measure everything. Our initial charter was to come up with a way to quantify success in public diplomacy efforts. So when we have a qualitative or subjective outcome, if you will, of those efforts, how can we quantify that? We’re hoping that by incorporating it in the model, saying, well, okay, how good is your gut feel, if you will, we have a way to translate that gut feel, or that subjective impression, into some quantitative measure. And if you look at it in that context and somebody doesn’t agree with it, okay, well, what don’t you like about it? You think it’s a different scale? Change it and let’s see.

MR. SNYDER: But if you’re running something where your hope is to have numerous people inputting data, then you have this enormous amount of subjectivity that sort of creates a –– how great a variation, and how does one address that variation, and in turn, how does that skew your data? Or do you have any of that?

MR. MATWICZAK: No, it’s –

MR. SNYDER: It’s still a little too early for that?

MR. MATWICZAK: I think what we tried to do is get around that with this idea of ideal and minimum levels of performance, ideal performance and substandard performance, by defining those ahead of time at the decision maker level. Now that would be passed down, I would think, from maybe the Department level to the Bureau, and from the Bureau, here’s my expectations, down to the region, and from the regions to the missions, or to the actual public affairs officers on the ground. Say, we expect this kind of outcome –– if you’re going to do this kind of work, here’s what we’d expect to happen as a result of it. So at the program level, if you will, everybody would be assessed against the same standard. So the subjectivity would come in on how well you think you accomplished what you set out to do, relative to the standard. That’s where the subjectivity comes in, in determining how well I did and how well I think I did, and yeah, I might boost it up a little bit. But if I took that result up to the decision maker level and said, here’s my report and here’s how I think I performed, they put it in their next-level model and say, oh, come on, you’ve got to be kidding me. You’ve only got this many people showing up and the survey results show this, and you think you did that well? What if, in fact, we think you did this? Does that affect the overall performance of your organization, et cetera? So the sensitivity analysis provides a vehicle for addressing that kind of subjectivity. Does that answer your question?


MR. HYBL: Any –– by the Commission, any other –– any further comments or questions? If not, is there a motion to accept the report which was contracted for in 2009 from the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin?

AMBASSADOR PEACOCK: Mr. Chairman, I move that we accept the report.

MR. HYBL: Is there a second?

MR. OSBORN: Second.

MR. HYBL: Is there discussion? All in favor, indicate by saying “Aye.”


MR. HYBL: Opposed, “No.” Thank you and thank you for the report. We would now certainly extend to any of you that have joined us today the opportunity to ask questions of Dr. Mat, certainly his assistants are here, we appreciate the work that they have done and we would only ask that you identify yourself so we know what to call you. Please, any questions, comments? Yes? By the way, the speaker will pick it up. It’s really kind of neat in here.

MR. DAN SREEBNY: It’s wonderful technology. Dan Sreebny, I have the pleasure of doing work for public diplomacy at the Department of State. I first want to thank you and your team for an excellent and very thought-provoking report. I’m delighted that our colleagues who are the leaders at the State Department on evaluation of public diplomacy met with you, visited you, worked with you, and, of course, a lot of the research that has been done is available on the web site, so I was a little confused by the comments about not having access to the research. It also suggests that they do coordinate much more closely with each other than the audience might have gathered.

I have two related questions that I would appreciate your help on, because I see intriguing value in this approach for strategic planning for public diplomacy and for ensuring that we don’t get so caught up in doing the individual programs that we forget to ask, what are we trying to actually accomplish? I have a little more difficulty with it as an evaluation process, and I guess the two questions are –– one is, this looks at discrete programs and activities, and of course we have to do each program and activity discretely, but public diplomacy is not one discrete activity and then another discrete activity. It’s the strategic use of a variety of tools and channels to reach audiences and achieve goals, both foreign policy goals and public diplomacy goals that support them. How would you handle that approach, building between the discrete acts and over time, as was mentioned, so you see what is the change in knowledge, influence, and I would suggest also interaction with the United States, not just the government, but individuals, over time for an individual who has been on an exchange program, is on the speaker program, has gone to concerts, been to interviews, and the buildup through all that?

The second related is the challenge that I know many of us face between causality and correlation, to show, not what was the outcome, but what impact the specific public diplomacy efforts had on an outcome when there are so many outside factors, disasters, wars, opportunities, sudden cultural groups that offer to come to a country if you can help them –– a variety of things that are not part of what is actually done, but in terms of the outcomes, both desired and real, are a very important part. So how do you factor that in?

MR. MATWICZAK: Thanks, excellent questions and we’ve heard them before, but they are things that we had to wrestle with as we went through the project. Let me address your first question first about public diplomacy efforts not necessarily being discrete activities, et cetera.

I think the framework for the model as it’s currently constructed provides you the opportunity to look at programs, if you will, to say we have this program that consists of these type of efforts and look at it as a whole and figure out what you expect to get as a result of this set of programs or this discrete set of efforts. But in addition to that, it allows you to dissect this program, the interaction of these efforts into the individual discrete pieces, if you will. But at some level you can just roll it into a larger ball called a program and say, let’s do this program here and let’s do this program here. This program might consist of these type of efforts, this program might consist of these type of efforts, we don’t care what happens at the individual level, but we care about what happens at –– as a result of this particular set of efforts. So the structure is there for doing that.

On your second question, about all the outside influences that impact the results of public diplomacy efforts, that question came up constantly throughout our research, and everybody we talked to said, how are you going to isolate the results? You can’t. It’s almost impossible. We can’t say whether or not a Michael Jackson concert in Cairo had any influence on their understanding of U.S. culture. Maybe it was positive, maybe it was negative, but that wasn’t an overt effort by the State Department. The media is there, we have television, we have the print media, et cetera, all influencing their understanding of our culture, for example, and it’s not an overt effort by the State Department to change your opinion. How do you isolate that? You can’t. All we can suggest that you might want to do is capture the context in which your efforts are taking place, and report that up the chain. Say you have results showing that we really didn’t influence them, we didn’t change their opinion about U.S. culture very much. What else was going on around there at that time? Well, they had an embassy bombing at that time, and we had the counteraction by the U.S., we sent the Marines out there to clean out this neighborhood after the embassy bombing. That might have had some influence on how they look at our culture, that we’re just nothing but animalistic, or something to that effect. So you can’t segregate them. It’s impossible to separate the outside influences, but you need to capture them. It’s part of the planning process and part of the reporting process: what were the things going on outside that might have influenced it? And I think having those available as part of your decision making process is important, and the model provides a way of helping you identify what might have been, or what are, important outside influences that we need to look for.

What caused this to change so much? We put this much money into it; why did it go negative? What else was going on? We teach that to our students in statistical analysis courses and in other courses, too. So the numbers are never the answer by themselves. Quantitative numbers are never the answer; they are information to inform the decision and to form the answer. I hope that answers your question.

MR. DAN SREEBNY: It does. I should also note that I was remiss in failing to mention that your excellent research work coincided with Under Secretary McHale’s leadership in creating, and now beginning to implement, a strategic framework that does provide priorities, and they have already taken steps to align resources with foreign policy and public diplomacy priorities.

MR. MATWICZAK: And I think that’s to the Under Secretary’s credit and to the influence of the Commission in the work that the Commission was doing, too.

MR. HYBL: Mr. Sreebny, we’d like to thank you for being here as one of the distinguished professionals in public diplomacy in our country.

Yes? Other questions?

MS. DEBBIE TRENT: Hi. I’m a former USIA employee and now I’m back in school trying to finish up my doctorate on U.S. public diplomacy toward Lebanon. And one of the concepts that I’m working on is measuring the effectiveness of public diplomacy toward Lebanon.

MR. HYBL: I’m sorry, I didn’t understand –– I can’t hear you.

MR. MATWICZAK: If you move towards the little saucers ––

UNIDENTIFIED SPEAKER: Flying saucer. And speak louder.

MS. DEBBIE TRENT: I am a former USIA employee, academic exchanges and other exchange programs and I’ve also been studying public diplomacy. Is this better?

MR. HYBL: No, you’re fine now.

MS. DEBBIE TRENT: Because I can’t hear myself, other than this. And I wanted to commend the research team for doing a mixed methods study and approach to measuring the effectiveness of public diplomacy. It’s something that I wrestled with as a program manager and I wrestled with it so much that I decided to leave USIA and go back to school and try and figure out, well, how do we really –– how should we do effective public diplomacy?

And one of the Commission members talked about the problem of subjectivity in measurement. It is a problem, but it’s also just a fact of analyzing a communication process. Dr. Matwiczak emphasized the importance of the communication that using this model, this approach, encourages among the program officers and among the people who are making decisions about what kinds of public diplomacy programs to conduct. That’s one of the most important qualitative efforts that needs to go on in measuring public diplomacy, or any kind of program to increase mutual understanding between cultures. So the numbers are important, and the bean counting is a fact of all of our lives, but it’s the qualitative work –– engaging all the stakeholders in each discrete program and then overall –– that can help us get rich data that takes in all the context, like the embassy bombing, or whatever it is that’s going on at the time. And I think this model does a pretty good job of at least encouraging all that stakeholder analysis in coming up with the metrics.

MR. HYBL: Good, thank you. Other questions or comments? Yes, sir?

MR. TAYLOR: Yes, my name is Adrian Taylor with the Bureau of African Affairs, Public Diplomacy, Public Affairs, and I was wondering ––

UNIDENTIFIED SPEAKER: Department of State?

MR. TAYLOR: Yes. I was wondering, when it comes to evaluating different programs, did you all address the kind of prevailing models that might exist, with respect to the different options? And if you didn’t, could you say a little bit about the different frameworks that exist for doing the kinds of evaluations and assessments that you all have in mind? Is there a general standard that already exists out there in the literature or in the practitioner domain?

MR. HYBL: Thank you.

MR. MATWICZAK: If I understand your question correctly, is this work being done in other arenas? I’m not sure –– the public diplomacy arena or outside it, or ––

MR. TAYLOR: Well, particularly in public diplomacy because I’m imagining that like this is one of many other models of evaluation that may or may not exist and I was wondering if you spoke to that in your review of literature or if you can talk about that a little bit.

MR. MATWICZAK: Just to briefly go over what we did find out –– thank you for the question. It’s good to keep us honest, too, that we actually did our homework on that.

We talked to several research organizations, academic ones, about the type of work we were doing and about their efforts to measure public diplomacy efforts, especially USC. There’s a new research entity –– I forget the name of it –– opening at Harvard’s Kennedy School of Government that focuses on public diplomacy; Under Secretary McHale was up there in the spring to open that center, and we spoke with them. It turns out that there is no unified effort to measure public diplomacy.

The closest we could find to measuring public diplomacy results was within the State Department itself: the efforts of Trinca Montgomery in the Evaluation and Measurement Unit. I spent several hours with her talking about her work with the PDIs, the public diplomacy impact reports –– which are very expensive and yield very general responses –– and about her work with the mission activity tracker. When we say we couldn’t get access to the data: the MAT, the mission activity tracker, is available on the State Department intranet. Outsiders can’t have access to it, and we couldn’t get it until we had a contract, which didn’t come until March –– way too late for processing. So I got a look at it via Mr. Chan, in his office, sitting at his desk and looking at the computer, but it was nothing we could actually use in our report. So the Evaluation and Measurement Unit, in the Office of Policy, Planning, and Resources within the State Department, is doing good work, and that’s her charge. But she’s working in isolation. The bureaus are doing their own thing, if anything at all. In ECA, Educational and Cultural Affairs, Dr. Robin Silver is doing performance measurement, but she’s focused solely on ECA programs and the implementation of those programs. Much of the work she’s doing informed what we did here. But we got her on board too late in the process as well –– it was spring again –– so we couldn’t engage her throughout the entire project, though we did try to take advantage of the type of work she’s doing.

Again, that’s all we could find in the Department that’s being done to measure public diplomacy. The two people, Trinca Montgomery and Robin Silver, know of each other, but they don’t share data, they don’t exchange information, they don’t talk to each other.

MR. TAYLOR: Just sort of to add, I do think that Under Secretary McHale is making a concerted effort to integrate those programs as part of her strategic outline that she came out with a few months ago.

MR. MATWICZAK: Yes, sir.

MR. TAYLOR: Part of that is to integrate these different assessment areas and then have them cooperate with each other. I’m not saying it’s happening yet, but that was part of her plan.

MR. MATWICZAK: And let me interject that this model we’re proposing might provide a framework for doing that, or for having that conversation.

MR. TAYLOR: Right.

MR. HYBL: Yes?

MS. PILON: I’m Juliana Pilon, the Director of the Center for Culture and Security at the Institute of World Politics, which is a school –– we give master’s degrees. And I wanted to ask, although I think I probably know the answer –– I assume you did not discuss with the Department of Defense how they evaluate theirs –– right. But USAID also evaluates its public diplomacy programs, and I mean public diplomacy rather than their other programs. So I wondered if you had talked to them. The reason I say this is that I actually did interview them, about three years ago, prior to writing my book –– it’s called “Why America Is Such a Hard Sell,” and it is on Amazon, so I can’t make any money on it; it sells for very little. That was one of the issues I was curious about, and the public affairs director at the time told me that they do evaluate some of their programs. So you may want to discuss with them. But that was not –-

MR. MATWICZAK: No, USAID was not on our radar. We were working with the Department of State.

MS. PILON: Oh, okay, fair enough.

MR. MATWICZAK: That’s who we were contracted with ––

MR. MCGLAUFLIN: If I could interject for a second ––

MR. HYBL: And identify yourself, please.

MR. MCGLAUFLIN: I’m sorry –– Gerald McGlauflin, the senior advisor to the Commission, a member of the staff. If I could address that: USAID has repeatedly told us –– repeatedly told me –– that they do not conduct public diplomacy as it is defined by the Department of State. All their efforts are designed to get publicity for their own programs; they are not part of a comprehensive public diplomacy strategy and fall outside the scope of (inaudible).

MS. PILON: I should probably note that my discussion with them was off the record.

MR. HYBL: Oh, thank you. Any further thoughts or questions? I want to make sure everyone has a chance here. If not, thank you for being here, and thank you again to the LBJ School at the University of Texas. I speak on behalf of the entire Commission: we appreciate your being here this afternoon and your interest in public diplomacy.

I certainly want to thank Carl Chan, the Executive Director; Gerald McGlauflin; and IFES for being a great host. We didn’t have to show badges to come in. Thank you very much.

(Whereupon the meeting ended.)

* * * * *