Minutes of the U.S. Advisory Commission on Public Diplomacy April 2010 Official Meeting

Remarks
Los Angeles, CA
April 23, 2010


 

Meeting Location:
The University of Southern California Center on Public Diplomacy
Annenberg School for Communication and Journalism
University of Southern California

 

 

P R O C E E D I N G S

MR. SEIB: I'm particularly happy to greet the Vice Chairman of the Commission, Lyndon Olson. He and I are the only people here who don't have an accent when we talk. Lyndon's from Waco, Texas, and he and I were in Texas for many years simultaneously.

One of the pleasures of doing this job as Director of the Center on Public Diplomacy is having a dean who genuinely cares about public diplomacy and knows a lot about it. Sometimes I wish he knew a little less, so I could tell him things, but he's really a master of the field. He played an important part related to public diplomacy in the Obama Administration's transition process at the end of -- following the campaign and up to the Inauguration, and Ernie Wilson, go ahead.

MR. WILSON: Thank you, Phil. I want to welcome everybody to sunny California. I can't say it's hot in sunny California, but at least it's sunny California.

It really gives me a lot of personal pleasure and professional pleasure to welcome the members of the Commission and staff and others who have come to talk about this really, really important issue.

As I think everyone around the table knows, this is an issue that we've been engaged with for some time. I'm very proud to say that we created the first Master's Program in Public Diplomacy. We have a blog which I think draws a lot of attention around the world. I'm delighted that when I travel -- I was recently in Abu Dhabi and Dubai -- people pull me aside and say, I know you're supposed to be talking about this media thing, but could you set aside some time to talk about public diplomacy?

And none of this is automatic. It came about initially through the very hard work of my predecessor Geoffrey Cowan, who used to be, by the way, the head of Voice of America before he became Dean of the Annenberg School. He has been a stalwart supporter of public diplomacy as a discipline and, to his great credit, he said, "This is not just something that Americans do." He said, "The Brits have a public diplomacy policy toward the Nigerians. The Nigerians have a public diplomacy policy toward the Senegalese," etcetera, etcetera, etcetera.

So we're trying to develop a discipline, a way of creating talented, skilled people who are experts in public diplomacy to advance their own goals, whether those are NGO goals or governmental goals or private sector goals.

We also believe that we have tremendous opportunities and really obligations to be of assistance to our colleagues in Washington. We have worked -- we worked very, very closely with the previous Administration, with the Bush Administration on public diplomacy issues.

One of the first meetings I had, one of the first telephone calls I got when I became dean was from Karen Hughes and she said, "When are you next coming to Washington?" And that continued dialogue and, as you know, the center won one of the first Benjamin Franklin Awards that was given by the State Department and we're very proud of that.

I like to think we enjoy good relationships with the current Administration, as well, and I hope we will enjoy good relationships with future Administrations in Washington.

It is a real honor for us to be able to host you. This is an important institution. You are doing important work at a very critical moment, I think, in the foreign policy design of the Obama Administration.

I also want to thank Phil Seib for his vision, his hard work, his accent, and other things -- Phil and I went to high school in Washington, so we have another sort of connection there as well -- and his great stature. He and others have worked so hard to make the Center on Public Diplomacy a real center for the world.

So with that, Vice Chairman Olson, again I welcome you and your colleagues to the Annenberg School and I look forward to a very robust and very interesting conversation.

MR. OLSON: Thank you, Dean. Thank you very much. Before I -- I have about a page and a half of prepared remarks just because it's a little more orderly, but I'd like to introduce my colleague Jay Snyder from New York who's with us today and our staff, our Executive Director Carl Chan, and Gerry McLoughlin, who's the Senior Advisor to the Commission.

On behalf of the entire Commission, let me express our thanks to you and to the University of Southern California Center on Public Diplomacy for hosting us today.

A brief word or two about who we are. The United States Advisory Commission on Public Diplomacy is a seven-member bipartisan presidential panel. Its mission, stipulated in the authorizing legislation, is to appraise the United States Government's public diplomacy efforts.

For our purposes, this essentially means the United States Government's activities intended to understand, inform, and influence foreign audiences in support of U.S. foreign policy objectives. These activities are primarily lodged with the United States Department of State, but other U.S. Government departments and agencies, most notably the Defense Department, also engage in public diplomacy activities that fall within the purview of our Commission's mandate.

On the basis of its examination and evaluation of the United States Government public diplomacy operations and programs, the Commission makes recommendations to the President, to the Congress, and to the Secretary of State, usually in the form of written reports.

Our topic this morning is United States Government public diplomacy and how its programs can be measured. A program, of course, is a tool to achieve a goal, to realize a strategy.

The Annenberg School's Dean, Ernie Wilson, and the Director of the USC Center on Public Diplomacy, Phil Seib, will address the question of the new public diplomacy strategy. Bob Banks from the Department of State will then speak on measuring public diplomacy programs and how we can tell if they're realizing any strategy we create.

I would ask you to hold your questions until the end of the program and why don't we go ahead and begin this morning? Also, before I forget it, all of our proceedings -- the proceedings here should be on our website following in the next few days. So if you want to go back and dig in, it'll be there.

I'd like to recognize our first speaker and I assume is it you and Phil?

MR. SEIB: Ernie will go first.

MR. OLSON: Ernie will go first, then you, and then Bob. Okay. Great.

MR. WILSON: Very briefly. I did not come with prepared remarks, but I want to thank you all and thank the University of Southern California for having us.

I'm fortunate. This is my third visit here. I was able to come here and discuss a previous report that the Commission had written in 2004 and then again in 2006 as part of another open commission meeting.

I think the program that you've developed here -- and I know it started under Jack, who I have enormous admiration for, and whose work at VOA was extraordinary -- putting together an officially-degreed program in public diplomacy is something that we as a country, and I think a lot of other countries around the world, are in great need of, and so I want to give you my personal appreciation for having us here and for the work that you've been doing.

Thank you.

MR. OLSON: Thank you so much.

MR. WILSON: Well, again, I welcome this opportunity to re-engage in a conversation that I haven't had the opportunity to really be as fully involved in since I stopped working on the transition back in December or so.

But let me just provide a bit of context. I will end up on the impact and measurement question. Remember that great movie, The Graduate, where the uncle comes in at one point and tells Dustin Hoffman what the idealistic young man should study in the future: "Plastics."

So my version of that will be impact measurement and data mining. Equally boring, but I'm going to try to sort of set that within a context, based on some of the things that we're able to see out here in Los Angeles. Maybe I'll just mention that very briefly.

As someone who grew up inside the Beltway and worked on Capitol Hill and worked on the Supreme Court and worked in the White House and agencies and so forth, there is a view that you get in Washington that's different from the view that one gets 3,000 miles away from Washington and so again I really applaud the Commission for coming out to visit with those of us out here in the hinterlands and listening to our perspectives.

And one of the things that was so striking for me as someone who's worked in the bowels of a lot of these agencies is that often in Washington you come up with an idea and the response is that's sort of a good idea but the Principal Deputy Assistant Secretary won't like that or you have another idea and they say, well, but, you know, the Chief Deputy Staff to the Subcommittee on that won't fly there.

When I come up with these ideas or Phil comes up with these ideas in L.A., the response is that's pretty cool, let's try that, and I think there really is a sense of innovation and openness and transparency and entrepreneurship that we have here on the West Coast, especially in L.A., to a certain extent up in San Francisco, that in my years in government I didn't find to the same extent, and so I would offer that to you, our sort of outside the box thinking.

But going back to inside the Beltway, let me just say a few words about sort of the relationship between public diplomacy, more soft power issues, and the other extreme, hard power issues, and whether they can be brought together in smart power ways.

This is something I know that the Secretary is very interested in. She talked about this before going into the State Department. She testified on this for her confirmation hearings and so I do want to say a bit about that and that is that for those of us who do work on public diplomacy and who work on soft power kinds of issues, we're deeply committed to it. It's something that we believe strongly will have an impact on war and peace, on life and death.

This is a very, very serious issue for those of us who believe deeply in it. At the same time, there's sometimes a tendency to believe that the goodness of our intention and the goodness of our work will automatically be recognized by those on the Hill, those in publics beyond our classrooms or beyond the work that we do on a day to day basis, and institutionally and in budgetary terms we know that's not always the case.

As I think even Secretary Gates has said, there are more members of the military bands and more lawyers in the Defense Department than there are diplomats in the State Department, not to mention those who work in the more public diplomacy field.

So I think one of the challenges for those of us who really care about public diplomacy is both to make the argument for the enormous importance of public diplomacy in a world that is becoming more distributed and networked -- the traditional power hierarchies are flattening out, and new media forms are coming to the fore -- and also to recognize that it's just one half of a broader policy agenda which includes hard power. It's impolite to say, but hard power means coercing people to do what you want them to do, which involves guns and bombs and rockets and unpleasant things like that.

But part of what I would urge us to think about moving forward is how we achieve an intelligent combination of public diplomacy and soft power on the one hand, and how that integrates with hard power on the other. I think this is a classic moment, one of those inflection moments when that is possible, and it's possible in part because of -- I'll start off on the hard power side -- the deep commitment of Secretary Gates, who has probably been the most forthright proponent of soft power and public diplomacy of any Secretary of Defense that we've had, even to the point of at least beginning to mention budgetary reallocations, which, of course, are the lifeblood of all of these activities and programs.

So again, I want to underscore and urge, I guess, urge us to think about the ways in which traditional national security hard power issues integrate with the important soft power instruments that we have, whether it's public broadcasting, etcetera, and this, having worked on the National Security Council, with a man named Richard A. Clarke and his directorate, I have seen this in action, but we need to do more of it at the interagency level, so we have a full government wide notion of public diplomacy.

The second point I would urge is that we find even more robust ways to bring in non-governmental voices as channels for articulating America's interests. I don't want to say as instruments of American foreign policy. I don't want to say as articulating American values because it's happening anyway.

This is something that Anne-Marie Slaughter, who's the Director of Policy Planning at the State Department, has written a lot about: the importance of networks -- global networks, governmental, corporate, and nonprofit. So that's the second point that I would like to make.

The third one is a revelation I had that I'd just like to share with you. Several -- about a month and a half, two months ago, The Economist had a cover story on data mining. I took it with me on three airplane rides and was bored to tears every time I read it. I just couldn't -- I would rather read the Escape Hatch that you get in the back of the seat rather than finish this cover story on data mining, boring, boring, boring.

Two Fridays ago, I had a meeting up in Santa Monica with a start-up company called Demand Media. You might want to visit their website, very cool stuff. If there's interest, I can describe it for you in detail, and then the following Monday, with a group called Evolve 24.

The main point is that what they do is go out and search 32,000 web pages a day, plus millions of tweets, on a daily basis. They can tell you where those tweets are coming from. They can tell you something about the demographics of the individual. They can tell you whether the person is saying, I love Iran and I hate the United States, or I hate Iran and I love the United States. They can chart that in microseconds, and so when you go to these presentations you see the words, the concepts, the evaluations moving across the screen in microseconds.

It sort of freaked me out a little bit that this kind of capability is there, and I mention it because we're talking about measurement today. What they are able to do for Procter & Gamble and what they're able to do for Exxon and other companies is this: when those companies launch a public relations or advertising effort, they can measure it. Let's say you have two ad campaigns, A and B. A gets launched. What happens to the tweets, the blogging, the blogosphere? Very possibly it goes way up.

Strategy B comes forward. What happens to the tweets? Nobody tweets about it. No one on the blogs says anything about it. So if you're a policymaker or a corporate executive, what does that suggest to you? You move your money from Strategy B to Strategy A.
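To make that concrete, here is a minimal sketch of the kind of A/B comparison being described: mentions of two campaigns, already tagged with a sentiment score, compared by volume and average tone. The sample data, field names, and scores are invented for illustration; they are not drawn from Demand Media, Evolve 24, or any State Department system.

```python
# Minimal sketch of the A/B comparison described above. The mentions and their
# sentiment scores are hypothetical; a real pipeline would pull them from a
# social-media stream or a vendor's monitoring service.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Mention:
    campaign: str      # "A" or "B"
    sentiment: float   # -1.0 (hostile) through +1.0 (favorable)

mentions = [
    Mention("A", 0.6), Mention("A", 0.3), Mention("A", -0.1), Mention("A", 0.8),
    Mention("B", 0.1),
]

def summarize(campaign):
    """Return (volume, average sentiment) for one campaign."""
    scores = [m.sentiment for m in mentions if m.campaign == campaign]
    return len(scores), (mean(scores) if scores else 0.0)

for label in ("A", "B"):
    volume, tone = summarize(label)
    print(f"Campaign {label}: {volume} mentions, average sentiment {tone:+.2f}")

# A planner would shift resources toward the campaign that generates both more
# conversation and more favorable tone -- Strategy A in this toy example.
```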

I met with the CEOs of the top five public relations companies in New York a couple of weeks ago -- Weber Shandwick and Hill & Knowlton among them -- and one of the biggest things they were concerned about was exactly this issue of measurement, and the term they used, which I will leave you with, is: what is the premium that one gains from public relations initiatives in the corporate sense?

In terms of what we do, the question is: what is the premium that we bring to the table in public diplomacy? What's the value added? What difference does it make? Well, we all know in our heart of hearts that it makes a difference. We all know that over time, over a generation, these wonderful exchange programs and so on make a difference. But I simply want to suggest to you that there are new technologies, which have only become available in the last 12 months or so, that will allow us to measure the impact, or at least begin to understand the impact, in different ways, in addition to the traditional in-depth interviews, etcetera.

So let me stop here and say that in all of these things, the Center on Public Diplomacy and the Master's Program in Public Diplomacy and the Annenberg School in general, as well as our college and university, are very much looking forward to talking further with you about these and other issues, and I'll turn it over to my boss, Phil Seib.

MR. SEIB: Oh, okay. Thank you very much, and again, Commissioners, thank you for being here.

What I've put together today is a short list of issues to consider, to give us some context when we talk about measurement. Some of this material is drawn from a speech I gave at Chatham House in London last month, which I think you have copies of. The overall goal I have in mind is to bring public diplomacy more into the center of U.S. foreign policy.

I think a case could be made now that it's on the outside. It's kind of a satellite in distant orbit. I think that's a bad thing. So I want to run through a few of the issues that I think we might consider and measurement is really an important part of all these.

First is broadcasting. That's the elephant in the room, and it's a costly elephant because he ain't just eating peanuts. Something has to be done about Alhurra. It's an embarrassment on many levels, and the whole approach to broadcasting needs to be reconsidered.

Alhurra has so far cost over $620 million. Its audience is actually declining. Shibley Telhami, who does some of the best polling in the Middle East, found a few years ago that it commanded two percent of the audience. His most recent poll, about a year and a half ago, put it at .5 percent of the audience, and when you start putting dollars to people, that's awfully expensive to buy those few viewers.

The channel name itself is insulting. Alhurra means "the Free One," and what I've heard frequently -- and I get to the Middle East four or five times a year -- is, who are you to tell us that your channel is the free one and ours are not? So there's a good start right there: the name ought to be changed.

Further, another problem, and this goes directly to measurement: Alhurra's public response to these kinds of concerns has been consistently misleading, because they regularly cite not their audience but their audience reach. They say, well, we can now reach up to 26.7 million people in the Middle East. Well, that's like saying that CBS is watched by 300 million people every day because, sure, CBS can reach every household in the country. That doesn't mean everybody watches it. Alhurra can reach 26.7 million people, but so what? That's a matter of technology more than anything else.

Now I'm not saying that broadcasting should not be continued but I think competing, trying to compete head to head with Al Jazeera or, for that matter, with BBC Arabic is foolish. What I would recommend is an emphasis on co-production for a U.S.-run channel that could take product that is co-produced with production companies in the region or as content for indigenous channels within the region. So that's one issue to consider.

Related to that, the second issue is new media. There needs to be a strategy for this. Actually, in the Secretary's office, Alec Ross has a very good command of this topic, but within the public diplomacy establishment, it seems to me, there's a recognition that it's important but less of a strategy.

How much emphasis do you place on new media? Where does it work and where does it not work, given things like Internet penetration and so on? What kind of content should there be? What planners of public diplomacy policy need to be doing, I think, is anticipating what public diplomacy 3.0 and 4.0 are going to be like, because they're going to be upon us very quickly.

I know Jim Glassman was very proud, when he was the Under Secretary, of adopting public diplomacy 2.0, but 2.0 is just about over and we should be looking ahead.

A third point: virtual states and diaspora populations. Let me give you an example of what I mean by a virtual state. Let's take Pakistan. Is Pakistan today the land mass northwest of India, or is it something else?

Well, I would argue it's much more than that. There are a million Pakistanis in the United Kingdom. There are more than a million Pakistanis in Saudi Arabia. There are clusters of Pakistanis throughout the world, and U.S. public diplomacy needs to be built on the recognition that traditional borders -- the borders around that land mass northwest of India, for example -- are largely irrelevant; that is, diaspora clusters need to become prime audiences for public diplomacy efforts.

The reason for that -- and I can go on at greater length about this -- is that these folks, if you're a Pakistani in London or a Pakistani in Toronto, let's say, can stay in touch with the home country through the Internet, through cell phones. It's not like the old days when, say, at the beginning of the 20th century, immigrants came to the United States. Once they got here, even though they might have lived in, say, a Russian community or a Polish community within New York City, they were here and they didn't really have that kind of connection back home. So they would readily assimilate.

With these virtual states, those Pakistanis in London or in Toronto might feel that they're not assimilating, they don't need to assimilate, they're still part of Pakistan. So the point is the public diplomacy strategy designed for Pakistan has to recognize the virtual Pakistan, not just the physical one that's in South Asia.

Fourth point. Listening. This is one of the pillars of public diplomacy, but I think this has to be done more comprehensively and systematically.

One thing that I think is done pretty poorly in Washington, except maybe by the CIA, is content analysis of global media. Public diplomacy cannot be a monologue. We have to determine what publics want and need, not just try to sell a benign portrayal of the United States and one of the best ways to get a sense of what's going on is sophisticated content analysis and just what Ernie was talking about.

The computer tools are out there now that let you do quick, thorough, and very valuable analysis of everything from newspapers to tweets, and that has to be done in a comprehensive way, so we know what people are thinking. You can't just say, hey, everybody, America's great, and if you're a Muslim and you live in the United States, you're happy, and that's it. No. You have to know what the publics are interested in.
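As one illustration of the kind of quick content analysis being described, here is a minimal sketch that counts how often a few chosen themes appear across a set of texts. The sample texts and theme keywords are invented for illustration; they are not an actual corpus or coding scheme.

```python
# Minimal sketch of automated content analysis: count theme keywords across
# media texts. The texts and theme lists below are invented for illustration.
import re
from collections import Counter

texts = {
    "daily_paper": "Students praise new exchange scholarships; visa delays draw anger.",
    "weekly_blog": "Another column on visa delays, and more visa complaints from readers.",
}

themes = {
    "exchanges": {"exchange", "exchanges", "scholarship", "scholarships"},
    "visas": {"visa", "visas"},
}

def theme_counts(text):
    """Count how many words in the text fall under each theme."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter({theme: 0 for theme in themes})
    for word in words:
        for theme, keywords in themes.items():
            if word in keywords:
                counts[theme] += 1
    return counts

for source, text in texts.items():
    print(source, dict(theme_counts(text)))
```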

The fifth point. Terrorism. Good public diplomacy is good antiterrorism, and I don't think we should be shy about that. I noticed Under Secretary McHale the other day was saying that she wanted to calibrate public diplomacy a bit so it wasn't seen as merely being an antiterrorism or counterterrorism effort.

But if you look at terrorism as a pyramid, at the very top you've got the real bad guys, like bin Laden, who are going to have to be dealt with fairly assertively. As you go farther down the pyramid, the closer you get to the base, the people become more numerous, younger, and less committed to violence. Those are the people you have to reach, and public diplomacy is a way to do that. Obviously we can go into further detail on that.

Related to that, the sixth point, young people. In addition to this implicit antiterrorism effort, there's a need to better connect with the rising generation of leaders in government, business, professions, and the arts around the world.

International visitors programs are key to this, and they should be significantly expanded, not cut back. I know there's a tendency to make them victims of budget cuts. I think that's a terrible mistake, and this should go both ways. McHarry Guitar can certainly speak to this from the private sector side -- reaching out to the private sectors in various countries and bringing those people over here.

The seventh point. There are only eight. I'm almost done. I think U.S. public diplomacy should be less Middle East-centric. Russia, Latin America, sub-Saharan Africa and elsewhere need more attention. There's a great article in the current issue of the Atlantic Monthly. I think it's called "The Empire Builders." It's about China and Africa, and Africa is going to just slip away if we don't pay more attention to it.

I'm going to Russia next week, in fact, to talk to people there about both journalism and public diplomacy, and I think we really need to have a greater focus on places in the world beyond the Middle East. That's an important region, there's no doubt about that, but we can't be so intently focused on it.

And the last point is just that we should never forget that public diplomacy is ultimately about hope. It should be a manifestation of hope, of shared aspirations, shared dreams, and shared respect.

In other words, there has to be an altruistic underpinning of public diplomacy. There has to be that sense of mission. Ernie was mentioning just how easy it is for things to get so tied up in bureaucratic snafus. That's probably inevitable to a certain point, but what you cannot lose is this driving purpose of hope as part of what's done.

So there is a quick agenda of some things we might talk about. I think measurement works into quite a few of those, and so, Mr. Chairman, I'll turn it back to you.

MR. OLSON: Thank you, Phil. Bob.

MR. BANKS: Thank you. Good morning, everybody. I'll be talking about PD Evaluation and I'm going to be using a PowerPoint. Sorry about that if you're an anti-PowerPointite, but I guess since I've been here I've gone over to the dark side insofar as that.

You'll also notice a different accent from what's preceded you.

I want to make a disclaimer first before I start and that is that I'm not a social scientist. I'm not a statistician. I'm not an expert in the field of public diplomacy. I'm a Foreign Service officer, a field officer, a practitioner.

That said, over the last eight or nine months I spent a lot of time reading through the literature on public diplomacy evaluation and researching this topic, and I think I can tell you with some confidence that I now know more about this topic than probably 99.9 percent of the people on this earth. So even though I'm not an expert, I don't think there are a lot of experts out there, frankly.

As near as I can determine, there's exactly one extant peer-reviewed article on evaluating public diplomacy. I think there are several reasons for that. One is that PD itself is a relatively new field. The term only came into existence in the late '60s. It has not had a long academic career. It's not been an academic discipline for very long. The program here is, so far as I know, the first and only at the Master's level, and I think it started in 2004. So it's still new, so the research hasn't quite caught up to the topic, and, finally, evaluating PD is hard.

How do I know that? Because every single writer who's ever written on PD evaluation says so. So it must be hard. Why is it hard? One reason is that the results you're measuring are frequently long term, and we don't have the sustainability, we don't have the funds -- there are currents in our foreign policy that prevent us from tracking these things over the long term. It's hard to track people from exchange programs over the long term. It takes a lot of effort, a lot of time, a lot of money.

We're often measuring concepts that are intangible, like perceptions or attitudes. Those things are difficult to get at without getting on the ground, doing interviews, doing focus groups, which again are time-consuming and costly. With PD evaluation we always face the attribution problem. How do you get from the PD program to the PD result, especially if the result is only seen three or four years after the program? How much happens in life between the program and the result? How can you connect the PD program to the result and tune out everything that happened in between that might have made that result happen?

If you're tracking elites using interviews, using focus groups, it's hard to do, you know. The higher up a person is, the harder it is to get them to sit down for a lengthy interview or participate in a focus group. Occasionally you will get that to happen, but it's not often, especially at the very highest levels.

We've heard some people refer to the isn't-it-obvious syndrome without saying "isn't it obvious." For many years, before PD evaluation became a buzzword, if you asked why we were doing these exchange programs, the answer was, well, isn't it obvious? People go on exchange programs, they come back, and they're converted; they have a change of attitude, a change of belief about the United States. We don't need to evaluate this program. We know it works. In our heart, we know it's a good thing to do. But all exchange programs are different. They produce different results. Sometimes you can get a better result by doing it in a different way, and the only way you can tell how to do that is through measurement.

Evaluation is expensive. Good evaluation frequently takes between one and two years and costs several hundred thousand dollars. When we started doing evaluations, we did not have any useful baselines. Once we decided what it was we wanted to measure, we had to go back and find those baselines. What was the data that we needed? How were we going to get that data and where was it going to come from?

Another syndrome that sometimes intrudes here in terms of evaluation is, you know, don't step on my program. Frequently in an evaluation or an inspection, you have a group from outside your unit come in and tell you, you shouldn't be doing this, you should be doing this, and, you know, you sense that, well, you know, I'm on the ground, I'm running this program, I'm better positioned to know, but here's this outside group coming in telling me what, you know, I need to be doing.

Lack of continuity. You have different Administrations coming in, one after the other, different priorities, different programs, different emphases. At the post level, the mission level, PAOs change every three years. The programs change every three years. You know, where is the continuity in terms of the measurement of evaluation?

Today, a lot of the issues that we face are multilateral issues. They're not bilateral issues. Frequently, we're dealing with these on a coalition or an interagency basis. As soon as you go outside your own entity, you're talking different languages. The different languages translate into evaluation, as well. You know, how do you get everybody on the same page in terms of, you know, evaluation, what it is that you're going to measure, how you're going to do it, what metrics you use, etcetera? This is going to be an increasing problem as we face a lot of these sort of global issues together.

And the proliferation of new media, I think, is going to be a challenge to everyone. I've talked to a couple of evaluators in the last two days in the Department of State. You know, this one's a tough one. You know, it's going to be difficult to get a handle on it.

I want to mention, though, that our bureau has just launched a three-year study of social media to get at exactly the kinds of things that Phil was talking about and they've started the process now and we'll see what emerges from it, but it's a good start, I think.

It's hard, but it's important. Why is it important? It helps you allocate your resources, justify your budget requests to Congress, you can motivate employees and make your programs better, and it can moderate expectations.

A couple of years ago, in 2003 there was an article by Newt Gingrich called "The Rogue State Department" in which he complained that in the run-up to the Iraq War 95 percent of the Turks were opposed to U.S. action in Iraq and he said this was a complete failure of U.S. communication policy, the 95 percent.

The question here in terms of expectations: can public diplomacy in the country where you have 95 percent of the population change that figure? How many more millions or billions would you have to spend to move that 95 to 90 or whatever?

So evaluation is a tool, I think, to adjust our expectations as to what actually public diplomacy can accomplish.

If public diplomacy is effective, we can show it to be effective through evaluation. Maybe we might be more inclined to rely on public diplomacy than on hard power.

And lastly, creating this evidentiary base for the effectiveness of public diplomacy should allow public diplomacy and public diplomats to take a seat at the table in discussions, the famous take-offs and landings.

What's required for a successful PD evaluation regime in any organization, and specifically here? Leadership. I think you always start there. It needs to be systematic -- built into the organization at a systems level. You need to give it enough resources. The standard in the private sector is that eight to 10 percent of the budget of any program should go toward evaluating that program -- eight to 10 percent.

A few years ago during the process when OMB was measuring State Department programs, we had a budget of .43 percent for evaluation. The Gates Foundation has a budget of 15 percent for evaluation. So you see what importance they place on that.

The public diplomacy objectives need to be connected back to the organization's objectives. That's essential, especially in terms of making clear and clarifying the mission.

Acting on data. If you're not going to act on the data, don't bother collecting the data. You're wasting your money and you're wasting your time.

And then communicating results, which is sometimes a lost art. You get all the data; you need to know to whom that data should be presented and how it should be presented to be most effective. So these are some of the keys.

Now, looking briefly at evaluation at the State Department, there are two evaluation units that deal with public diplomacy. One is in our bureau. It's called the Evaluation and Measurement Unit, or EMU. It was stood up in 2008. They're looking at measuring things like information outreach, social media, and post-level or mission-level activities.

The Educational and Cultural Affairs Bureau also has its own Evaluation Unit. It's been in operation for 10 to 12 years. They look at exchanges and cultural programming. They've done approximately 54 complete and full evaluations.

In 2006, the ECA Evaluation Unit was given a PART Score of 98 percent which was the highest score for evaluation in the entire United States Government. So it's a unit that has a good reputation for evaluation. They use both quantitative and qualitative approaches.

The State Department's efforts at evaluation stem from a piece of legislation called the Government Performance and Results Act of 1993, which led in 2002 to OMB's issuing the PART process. The GPRA, as we call it, mandates that every agency in the government have a mission statement, strategic plans, annual plans, and performance plans, and so they all now do.

OMB created the PART, which is the Program Assessment Rating Tool, in 2002, and its goal was to measure a thousand U.S. programs. It's based on a questionnaire that is filled out and then assessed and they look at four things: program purpose and design, strategic planning, management, and, the last one, results.

And these are the scoring categories when PART looks at your program. They run from "effective" down to "results not demonstrated," which means they don't have enough data because you're not doing enough of the things they require to generate data. That's the worst score you can get.

What did PART say about the Department of State? When the Department of State decided to look into this issue of measurement, they went to all the mission performance plans -- these are the ones that the posts develop -- and they counted up all the different performance measures on all these different plans worldwide, and they came up with a total of 898. If you have 898 performance measures, you don't have any performance measures.

And three-quarters of them were output measures, not outcome measures -- what we put into the program rather than what we got out of the program.

We also didn't have the ability to track the connection between budget dollars and programs, to show a direct link between how much we're spending on a program and its effectiveness. As a result, on the first go-around we got "results not demonstrated," meaning they simply did not have enough of the required information to make a judgment about whether public diplomacy was performing or not.

So what did the State Department do in the face of this sort of negative assessment by OMB? They created an Office of Policy, Planning, and Resources in 2004. They reduced the performance measures from 898 to 15. They brought their score up to the performing category in '07. As I mentioned, ECA got the highest rating in the entire USG, 98 percent. They issued the U.S. National Strategy for Public Diplomacy and Strategic Communication under Karen Hughes in 2007. They established a logic model, which is a standard for how you design and measure programmatic outputs, and they started to launch a number of programs designed to get the data they did not have, based on the OMB requirements. And in 2008, they stood up the EMU, the Evaluation and Measurement Unit, in our office.

So what are we measuring now? What are the 15? These are the performance measures for public diplomacy as mandated and agreed to by OMB. This is what we now measure, and because a lot of programmatic results cannot be determined on a long-term basis, we have to find ways to get intermediate or short-term results. That's why you see most of them being annual rather than long term.

Some outputs remain because we need to know the extent of the reach and what it is that we're putting out there. It helps you to -- even though it doesn't give you the results, it helps you to indicate the level of activity which is another indicator toward results.

So OMB says, okay, this is how public diplomacy should now be measured, and we're looking again at short-term changes in perception, intermediate behavior changes, and then, over the long term, all those intermediate behavior changes ultimately resulting in institutional changes.

So the OMB says, okay, this is what you measure, now go get us some data on all of those 15 things because the data didn't exist before. So what did the department do? They went out and devised ways of getting the data to meet the requirements of the OMB.

One of the first things they did was put together something called -- I'll remember the title -- the Project Impact program.

Essentially what they were trying to do is find out what impact public diplomacy has had on a discrete group of people and then compare that with people who are in the same demographic but who did not participate in the public diplomacy program.

So they took this global sample starting in 2007. Again, these were the parts of the performance measures that they were looking to find data for. This is what they're measuring for, and they started in these countries and territories and, like I said, they're comparing the attitudes and behaviors of people who have participated in PD programs versus the same demographic characteristics but of non-participants.

They did this through interviews, focus groups, surveys, and using in-country data collection firms. So each post gave them 500 contacts. They were in these sort of demographic groups, youth, community leaders, business, religious, etcetera, etcetera, randomly selected from within the 500. Then they did the testing of the two groups.

The results were positive across the board. Having looked through a number of these, I would say that in almost all cases the people who have participated in public diplomacy programs had a deeper understanding of the United States in the areas measured, had more favorable attitudes towards the U.S., and were less likely to hold anti-American views, the first time that any public diplomacy measurement actually had been able to show the impact of the program.

In places where you would anticipate wide anti-American sentiment, the numbers were low. I don't want to sugarcoat it -- if you had an 80/20 split, it didn't make it 80/20 the other way -- but the number of people who held anti-American views, while still high, was lower for PD participants than for non-PD participants.
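As a sketch of how a gap like that could be checked for statistical significance -- a question raised later in this discussion -- here is a simple two-proportion z-test. The counts are invented for illustration; they are not figures from the Project Impact study or any State Department evaluation.

```python
# Hypothetical two-proportion z-test: are PD participants more likely to hold
# favorable views than comparable non-participants? The counts are invented
# for illustration and are not from any State Department evaluation.
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# e.g. 219 of 300 participants favorable (73%) vs. 180 of 300 non-participants (60%)
z, p = two_proportion_z(219, 300, 180, 300)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
# A small p-value (conventionally below 0.05) would suggest the gap between the
# two groups is unlikely to be due to chance alone.
```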

MR. OLSON: Can I ask, Mr. Banks, about that?

MR. BANKS: Yes.

MR. OLSON: There's a methodological issue about this. Just from a number, I mean that doesn't tell much, doesn't tell me much, and I think that the more interesting question is the before and after of the intervention.

I mean, in other words, I think you began to address that in your secondary remarks, but it seems to me shouldn't those be the primary remarks? You have an intervention and then you want to see what the results are after the intervention rather than what the attitudes are in sort of a stacked way.

I wonder if you could address maybe -- sort of forefront that a little bit more for us.

MR. BANKS: Yeah. Yeah. That's certainly a technique and a mechanism used by evaluators, is the pre, the during, and the post survey. I don't know whether -- frankly, I don't know whether these countries, they had done pre-surveys. I'm assuming that they had perhaps used INR surveys to get sort of broad measures, but in terms of the individual groups being measured, I don't know that, but that's a very good question.

MR. OLSON: Your second point, 73 percent of the PD program participants had favorable attitudes towards the United States, presumably the post officers weren't going to select people in Al-Qaeda or who were really, really, really opposed to the U.S. because they wouldn't pass the smell test.

MR. BANKS: Right.

MR. OLSON: And so it seems to me -- and I think this gets to the point that you raised earlier about the cost of doing this -- that if one is serious about this sort of pre/post evaluation, it takes, as you said, Mr. Banks, at least 15 to 20 percent. And I think this demonstrates one of those problems: it could be that 80 percent of the people who went into the program had favorable attitudes before they went in, and then 73 percent had favorable attitudes once they came out.

So I think it underscores your position that to do this really well, it costs a lot of money and most people, most agencies and the private sector, nonprofits and government don't want to spend the money to get the kind of robust results that I think you're calling for.

MR. BANKS: Yes. It would be interesting to see what kind of longitudinal study and follow-up they have for these groups. It started in 2006. I think they've done it every year now for a couple of years. So they're tracking over time, but -- and maybe the initial study in 2007 was sort of trying to establish that baseline, you know, and then we'll see what happens over time.

But, you know, in the Palestinian territories, for example, the level was low to begin with -- very low to begin with. And if you're canvassing the elites, you're not going to be canvassing Al-Qaeda. So you're not going to be in a position to know.

MR. OLSON: Thank you.

PARTICIPANT: On the longitudinal counts, do you know if the data are looking at the same people? Are they actually panel studies or revisiting the same group of people that were originally questioned?

One question would be short-term effects versus long-term effects: if you participated in a PD program in 2006, will you still retain the same level of favorable appreciation for the United States in 2010, or does it diminish over time? In which case, the other important lesson is that you have to keep going back to the same people, as opposed to only reaching out to first-time participants.

MR. BANKS: That goes to -- your question goes to the very heart of the difficulty of doing this, you know.

All of these people, you know, they may have at one time been members of the elite in positions. Then they leave those positions and they leave the country, you know. Tracking them over time is extraordinarily difficult. It's really challenging, you know, and so -- and I don't, frankly, know whether they're going back to the same demographic. I'm assuming they are.

PARTICIPANT: I had a quick question. Seventeen percent, is that a good outcome? Because to me that's not a very good outcome, 17 percent are less likely to hold anti-American views.

PARTICIPANT: Bob, I didn't mean to interrupt you, but I just have two quick questions I wanted to follow on to the young lady's and the first one is, who did the study? Are these things being done in-house or have they been contracted outside the State Department?

And the second is did they do a statistical analysis of the effectiveness? Seventeen percent could be a spectacular number or it could be a terrible number. Do you know if they've done that?

MR. BANKS: Ninety-five percent of all of the evaluations that the department does are contracted out. This one, I think they used subcontractors in the country, but they also sent teams, and I think it was contracted out.

They do that because, you know, they don't want the data to be, you know, questioned in terms of --

PARTICIPANT: No, no, I wasn't implying it was bad or good. I was just wondering because I know that there was -- that there are internal teams that did some evaluations and I just didn't know if this was -- if they had made a commitment to do this internally or if it was a subcontractor.

And did they do statistical analysis of these changes, even though it's not -- and is it considered statistically significant?

PARTICIPANT: That's exactly my point.

PARTICIPANT: That's why I'm -- sorry.

PARTICIPANT: No. I'd like to know.

MR. BANKS: I can't answer that. I don't know.

PARTICIPANT: Okay.

MR. OLSON: At some point -- I say at some point -- I mean, we've got a whole body of academia here talking about a substantive issue that we assume works, and I keep asking myself: I was the CEO of a big old insurance company in New York, and everything we did was based on data that was discernible, quantifiable, knowable, predictable, dependable, and credible.

(Laughter.)

MR. OLSON: Otherwise, at some point we were going to be in a courtroom and someone with a green eyeshade and a Number 3 pencil was going to sit there and some great plaintiff's attorney was going to say let's talk about the credibility of the data.

Who created the data? Let's talk about the person who created the data and their value systems and their assumptions. So that person gets on the stand. They go, well, that's your value but, and I will give you a quick example, albeit not earthshaking, but during my ambassadorship, I had this -- I was in Sweden and Sweden was the Number 1 IT country in the world behind the United States and their capital formation and their ability to evolve in the marketplace was very different. They're a social democracy. There are many entitlements, a lot of pressures for capital and corporate development, and at some point what you can do in the United States, you couldn't do there and yet they're very, very open market, free market, great free traders.

And the embassy and our PAO, this wonderful young man named Joe Crucic, and Joe decided he'd just take off on his own, that's a compliment, by the way, in that world, assume the initiative and go out and develop a club, which to many of us might sound trite, of young people who had great intellectual property who could not penetrate the markets of the big multinationals. Intel, I mean we all know who they are, ad infinitum, ad nauseam.

And all of a sudden, two to three to five people became 10 to 12 people became 50 to 70 people became 200 people became 400 people who have these meetings at the embassy and it occurred to me that maybe the United States Government shouldn't be in the business of creating all of this stuff and that they should take this outside and form their own association which they did.

But the point of all this is in the development of this entity of all these people and these relationships and how we did it in the United States and how IT is developed in the Silicon Valley and in Austin and Chicago and the Golden Triangle, all of these things, about 15 of us sat around with all these opinions. I mean, it was Plato's Academy for six months.

Now you tell me how in the hell you discern data -- credible data -- and interpret that data, whose ears are hearing it, whose eyes are seeing it. And the very nature of institutional memory in that process was, to me, de minimis, because we all left in three or four years.

Like you said a while ago, are the players developing themselves? And that leads me to ask the question, which I guess you probably shouldn't ask, but since we're kind of independent, I think we can: is this quantifiable? Is this measurable?

Today, we want everything to be quantifiable. We want everything in our life to be measurable. We don't know our public officials. We don't know if they're authentic. We see it all on TV. We have become so ideological, that is there a human component here that we cannot acknowledge, that we can't do this the way we want to do it with easily-quantifiable numbers because it's too hard to measure but we can't live our life without measurement, and to me it's a very fundamental question.

I'm not saying we shouldn't. I'm not saying that we didn't measure what we did in this whole IT world, but I'm saying is it meaningful?

MR. TAPLIN: Mr. Chairman, I totally second your point on this, and I would point to the Velvet Revolution in 1989 as an example of a case where the leaders of that revolution, whether it's Walesa or Havel, all point to specific connections they had with American artists -- whether it was sending great paintings by the abstract expressionists around the world or getting American popular artists to go to the other side of the ocean.

The fact that the leaders were influenced by these outreaches which then led to other things, and I am sure if you had polled Czechoslovakians in 1981, maybe the polling wouldn't have been so good, but the point was that the key leaders were already affected by the public diplomacy outreach that we had made and that was what made the difference.

PARTICIPANT: Could I ask speakers to identify themselves?

MR. TAPLIN: Oh, Jonathan Taplin, Professor, Annenberg School of Communications.

PARTICIPANT: Professor, I think there are two sides. I think a lot of people feel that, you know, the exposure issue to our culture is something that we can't quantify. On the other hand, our government has decided that they require us to quantify.

So it seems to me that part of the mission and part of what we're trying to understand and hear your opinions about is that, although we cannot personally quantify did the Velvet Revolution occur two weeks sooner because of this exposure or two weeks later because of this exposure, it's really more of an issue that there was an exposure and it did occur, so we can take gross quantifiable events.

It did have some effect. We don't know how much it was and maybe we'll never be able to input three units of exposure does this and four units does that, but I think it's also important that we find some means to sit down and start to look at our programs.

I mean, I know that Phil mentioned TV and his personal opinion about it and I think we can go offline and discuss that a bit, but we need to do something and we need to get some ideas about how to measure both across and over time and so I think that, you know, although it's hard to put an exact number on it, we've got to find some structures to give us some idea.

We don't have unlimited budgets. A fellow commissioner, a gentleman who sat on this Commission in a prior time, said the ideal -- if we could do 100 million exchange programs a year -- we know it's great, it's our best program; we'd just have to close the Department of Defense and we'd be in great shape, 100 million a year, and we'd make great progress. But we have to find some ways to do it, and I guess that's why we're here today: to get some idea of how we measure across programs, how we measure over time, and to be able to compare which programs we should put more emphasis into and which less.

So I think you're right that we have to do that. On the other hand, we've got to find some way to measure it. We don't have options to do nothing.

MR. MICHER: My name is Rick Micher. I'm the Director of International Programs at an organization called the Center for Civic Education here in Los Angeles, which bears a direct relationship to your subject matter, because our international programs were not created but were put on the map by USIA, first under the Clinton Administration, when Civitas, which is the name we call our network, was made a centerpiece of public diplomacy efforts by Penn Kemble, the Deputy Director of USIA.

And we survived the abolition of USIA and the end of funding for civic education by the State Department with a line item in the Department of Education's budget. So my department receives about $5 million a year to conduct civic education programs in now about 80 countries around the world, so we're a producer of public diplomacy. We're usually managed within embassies by the public diplomacy office, and we are under the same kinds of pressure to produce quantitative results that your Commission is discussing.

I want to endorse the comments by Mr. Olson by saying it's not quantifiable. We provide curriculum materials to young people, K through 12, and train teachers in the use of those materials, to introduce interactive child-centered democracy education curriculum into the classroom.

It changes fundamentally the relationship between teachers and students, between schools and parents, between schools and communities, and ultimately it tends to make active questioning citizens of young people.

We do it in countries where U.S. policy is viewed as hostile, where the populace is angry at the U.S. Government, and yet our programs are welcomed and we have a hard time meeting the demand for them.

If you're a parent, you know that at some point in your child's life you wonder if anything you've done is actually going to have any result, because often the parent-child relationship is one of hostility, where what you teach seems to come back to you as the exact opposite of what you intended. And all of us who are parents, or now grandparents, have had the experience, I suspect, of having a child 10 or 15 years later say, now I know why you did that, now I know what you meant when you told me that.

And the kind of learning that we are talking about, the kind of interactions that we are talking about, in our case democratic values, in this case considerations of the meaning of American society for another culture, these are the main things that are fundamentally not measurable in the time frames that the U.S. Government is asking for.

Indeed, our programs often produce more hostile attitudes on the part of young people toward their own government. Thank goodness our host governments aren't evaluating us, because when young people learn about their government, often for the first time, they usually become much more critical of it, which, as Americans, I think we would agree is not a bad thing.

In Saudi Arabia or Pakistan or Malaysia or Thailand, it's not a bad thing for young people, knowing more about their government, to become more critical of it.

MR. OLSON: Could I just make a quick intervention?

MR. MICHER: Let me just finish one last point.

MR. OLSON: Absolutely.

MR. MICHER: Just that after becoming more critical, when given the tools, they then become more convinced that they can do something about the government that they're more critical of.

MR. OLSON: Just two quick points. One is that we have a colleague, Nick Cull, who teaches here at the school.

MR. MICHER: We know Nick very well.

MR. OLSON: He makes a very important point, which is that there's no such thing as public diplomacy instrument. There are six different flavors of public diplomacy. Some forms of public diplomacy, like exchanges, take a generation or two to come to fruition.

As you know, these are greatly supported on Capitol Hill as a separate line item budget. There are other -- at the other extreme are things that you can measure on a day to day basis.

So I think we have to do two things. One is to assume that when we're talking about measurement, there's no such thing as public diplomacy in the singular, because if you're going to evaluate something that takes a generation to have an impact, that's one thing, and that's sort of what you're addressing.

There are other things that you actually can measure on a day-to-day basis, and so I would hope that we could avoid the either/or syndrome that says we either can measure everything or we can measure nothing, because we're talking about increasing differentiation of tools and channels of public diplomacy. That's the broader point.

The second point is that, you know, you say that, but I've heard people who say the exact opposite of what you say. So why should I believe you? I mean, you're convincing. You're dressed well and so forth, but there are a lot of people who are convincing and dressed well, and at a time when the travel budget is being cut -- I say this wearing my Corporation for Public Broadcasting chair hat -- I can't go to the Hill anymore and say support Big Bird, and I assume that my Public Broadcasting colleagues can no longer go to the Hill and make the argument you just made and say education is cool without having any measurements.

So my plea, and what I hope we can help you accomplish as an academic institution, is to be as skeptical as, you know, I've tried to be, but to say that (a) we can evaluate everything, (b) the tools that we use to evaluate one thing may not be appropriate for measuring other things, and (c) as Jay said, we don't have a choice, because OMB, the Provost of the University of Southern California, and most other executives are now telling those of us who are in the programmatic business, we will take away your money if you do not come up with some robust forms of evaluation.

So I would kind of put in the plea to let a thousand flowers bloom and assume there's going to be no one adequate evaluative tool. You've got to be more sort of nuanced, I think, in making that case. Sorry.

MR. WILSON: No, no. That's a great observation.

PARTICIPANT: Hi. I'm Josh. I'm a student in the Master's Program in Public Diplomacy.

I believe it was the Millennium Micro Scholarship Challenge Grants --

MR. OLSON: Can you speak up just a little? We can't hear you.

PARTICIPANT: Oh, sorry.

PARTICIPANT: Please speak loudly for the microphones. I'd just encourage everyone to think of the poor people who have to do the transcript later. Please do identify yourselves and speak as loudly as you can. Thank you.

PARTICIPANT: I'm Josh. I'm a student in the Master's Program. I believe it was the Millennium Micro Scholarship Grants, the Challenge Grants, that devised a metric that, among other things, measured people's attitudes, students' attitudes, toward democracy, self-empowerment, and a variety of the other things that you had spoken to. I don't know that policymakers would see those as core goals of American public diplomacy. They might, but at least these metrics would capture those positive results.

So I don't know how immeasurable they are. I know that Millennium was able to quantify them, and that's it.

MR. OLSON: Thank you, Josh. Yes, ma'am?

MS. GITARD: Well, I just have to say in full disclosure I'm from Dallas. So I love hearing the terms "big old" because what you say resonates so well.

MR. OLSON: Did you hear him say ha-ha awhile ago? It took me a second.

MS. GITARD: I'm Kerry Gitard. I am the Executive Director of Business for Diplomatic Action and an Adjunct Faculty member here at USC teaching Corporate Diplomacy, but I also had the benefit of working for Charlotte Beers after 9/11 -- a lot of scar tissue from that time to draw on for this discussion.

However, what you say about measurement I respond to viscerally, because my organization is about engaging the private sector in public diplomacy efforts, and we work now with about a hundred companies, and never once have I had a company ask me, can you measure impact?

The reason they joined our effort is because they felt it was the right thing to do and that they wanted to find a way to help and they live in data every single day.

What they're starting to look at now is, well, if we are going to measure something -- because people are asking the question, seven years on, where are you having impact -- it's in three basic areas, and I just throw this out there because I think this is something that possibly the government could look at.

But one is in partnerships. Can you quantify the number of partnerships you have with the private sector, with NGOs, with other companies, with other governments? That we can do very, very quickly, and show reach, which gets back to the whole notion of even social networks and how many people you are actually impacting and connecting with, because that connotes action, and my organization is all about action and what you can actually show that you're doing.

But the second thing would be listening, and this was something after 9/11 that we didn't have a lot of research that we could do anything with, because most of the listening research that was available was done by the CIA or INR and it was classified, so the policymakers couldn't do anything with it, and the same in the private sector.

So the listening research. Where can we really get deep into some of these attitudes? We testified before the 9/11 Commission. We said, you know, you can't just look at the 50,000-foot level. You have to go deep into looking at -- back to the points that have been made before -- what does society think about these issues, and break it down at a very minute level, which my board of directors, many of them CEOs or heads of advertising agencies and PR firms, get. They understand this data. That's a very easy thing to get down to.

And then the third area would be, for the big issues that matter, when we're in a crisis situation like 9/11 where the Federal Government had two or three things it had to communicate, we can measure very quickly awareness and recall of those messages and which platforms are moving that information, right? But that was a capability the Federal Government just fundamentally lacked, and that gets back to the resources and skill sets.

So I just throw three things out there because from my perspective, seeing it both in the State Department and now from the private sector, I don't want us to try and boil the ocean. I think we should be very clear about some of the areas where the data is going to be useful and valuable so that you can actually do something with it, not just create it so that it exists somewhere.

And then the final thought that I had, I was just thinking back to training, and this is something that I value so much about PD officers. I went to 70-some-odd of our embassies -- I worked for Diplomatic Security doing cyber terrorism work before I jumped into this PD world -- and it was all personality-driven.

We had some great ambassadors, we had some great PAOs, and it totally depended on that individual's personality. I think the measurement could be very interesting if we look at measuring the PD skill sets of our officers and ambassadors and possibly the entire Foreign Service Corps because I think it's a critical skill set that absolutely every single person needs, top to bottom, within the foreign policy structure.

The military gets that. The military is starting to do this and institute it top down. I wonder if that's an area where this measurement could be extremely valuable.

So thank you.

MR. WILSON: I just have a quick question. You mentioned the companies -- and I had the opportunity to know Keith a bit since the inception of the organization, and it's a fantastic group and you have a great client base --

MS. GITARD: Thank you.

MR. WILSON: -- but you had mentioned measuring the public and private partnerships, and are you talking about how many or how effective they've been. Is that the question you're getting?

MS. GITARD: Not how effective, just how many, because what we hear time and time again from companies we work with is they said we'd rather you be focusing on doing the work than measuring the efficacy of every single partnership.

So it's basically, for us, sheer numbers, and we're an entirely grassroots-driven network. I've done it literally with zero resources, and I can actually quantify the number of senior advisors, academic advisors, universities, other NGOs, and when you step back and look at it in total -- because we partner with others in the public diplomacy space, not just with business -- and then you see what each of those organizations represents just in terms of bodies, you go wow, and you can see that impact sort of mushroom.

But we don't get discrete down to the quantifiable this is the number of effective or semi-effective partnerships. We just assume that aspirationally we're all aligned to do the same thing and anybody we partner with is a spiritual partner, if you will. So they've all been very effective and I think that's how we've been able to sustain ourselves for so long with very little budget and very little staff.

So does that answer --?

MR. WILSON: Yes, and thank you.

MS. STARK: I'm Pamela Stark from the School of International Relations, and I'm with the public diplomacy program here at USC.

I think it might be helpful to go back and ask a very basic question when we're thinking about measurement, and that is, what are we measuring? I don't think we can think in terms of measuring attitudes toward democracy or the sort of broad things that we think are great U.S. values that we would like to share with the rest of the world, although that certainly is part of what we want to do with our public diplomacy policies.

But I think we need to take a step back and ask ourselves what are the real core objectives. Public diplomacy is a piece of U.S. foreign policy. So the question is what is the objective of U.S. foreign policy in a given country? What is the context within which we are operating, and given that, what is the specific objective of policy?

The objective of public diplomacy in Pakistan is very different than the objective of U.S. public diplomacy in Mexico which is very different than the objectives of U.S. public diplomacy in Vietnam because we have different foreign policy objectives and we're operating in a very different context and as a result, we use different foreign policy tools and we use them in conjunction with other soft power tools and often with hard power tools.

So I think we need to think in terms of measuring not public diplomacy writ large -- I don't think that captures the strength and the utility of public diplomacy -- but ask, okay, this public diplomacy strategy, which is a short-term advocacy strategy in Pakistan that's targeted on this segment of the population, how well is it working, and then we think about the measurement tools that allow us to measure before, during, and after, versus an educational exchange program with Mexico that simply is designed to pull two already highly interdependent countries and cultures and societies even closer together. And even at that, who are we targeting? How is it different targeting individuals from regional institutions versus individuals from elite institutions?

So think in terms of what is the foreign policy objective, how does the public diplomacy strategy serve that objective, what is the context within which we're operating, and then measure it, because that allows us really to see how effective public diplomacy is as a tool of U.S. foreign policy in a given situation, rather than just thinking about public diplomacy writ large without a clear concept of what the goal or the objective is.

MR. OLSON: I'd like to affirm that point of view. Communication is a tool, and what we have in common is thinking about how do we effectively use the tool and what ends it accomplishes.

There's another side to public diplomacy, however, where communication is unfinished, and part of public diplomacy is to engage people in the production of different elements of civil society, given our current digital, networked, globalized society.

So part of measurement is a way to get to weight and part of weight is how public diplomacy not only teaches civil society skills but adapts civil society skills of journalists, of NGOs, of particular organizations, how they adapt these to new communicative situations.

Phil Seib's opening remarks spoke of populations that are circulating around the globe, that interconnect and network. So if we think of public diplomacy as tool-driven, that's one kind of measurement, but if you really want to create weight for public diplomacy, think of it as a space where you engage in cases that are establishing new forms of communication or figuring out how networks recreate the possibilities of diplomacy, which is what your group was speaking to.

So it is a tool, but it's an unfinished set of tools in these cases that take on new communication issues and serve the interests of our government. Globally, the Internet is an unfinished world, and to me, last summer an exciting case of public diplomacy involved the State Department, Iran, and huge populations of people on the Internet.

So within these emerging cases, you could see new forms of communication emerging, and one measure of effectiveness is grasping those, learning from them, contributing to them, which makes public diplomacy more than a tool, like advertising; it makes it something that establishes new possibilities for global interconnection and democratic practices.

MR. ARMSTRONG: Matt Armstrong. Among my many titles, I'm Adjunct Professor here at USC.

MR. SEIB: Also among our first graduating class. Tell them some of the things you've done since graduating. They might find that helpful.

MR. ARMSTRONG: Do you want me to? One of the things, Bob, that was in your presentation that I think hasn't come up in the conversation, except in snippets -- like Pamela's point that context matters -- is that the measurement's not happening in a Petri dish.

There are a lot of other things that are going on and I was wondering when I looked at the numbers that you put up there, are they tracking what events occurred in that area or to that audience because I believe in the virtual states, as well, and you can not only mobilize diasporas but you can actually create diasporas.

But what events were going on that may have pushed -- you know, it may have been we got a good number because nothing happened at that time, or we happened to have a good response. And then the other area that I see almost nothing addressing is, what is the adversary doing? If they ignore our message, what does that mean?

Frequently, that means they don't feel or believe what we're doing is a threat. So not always. They're not always as adroit and adept as we claim they are, but what is the adversary doing to respond? What are the -- if we have red and blue, red being adversary, blue being friends, what are the white parties in the group, in the sense the neutrals, how are they interpreting and reflecting this material? So again, this adds to the complexity of the environment, that it's a qualitative analysis, not a quantitative analysis because there are all these factors going in and you can't just say, well, what do you feel today, what did you feel yesterday?

You have to figure out what other messages were out there, what was going on, what perceptions and realities they are experiencing over time, and again how other things are being checked. Do we track what Russia Today is saying? Do we track --

MR. BANKS: In individual country analyses, they do mention, I believe, I remember this, events that may have occurred during the sampling period which may have impacted the results of the sample.

Just a couple of other thoughts, based on the conversation. I don't know. If anybody else wants to say anything, they're welcome to.

But this question about adversary metrics, we actually talked about that a little bit with the Director of Strategic Planning and Evaluation at the BBG recently, and he mentioned, you know, the extent to which hostile countries jam your broadcast as an adversary metric, and it is going on in lots of places around the globe, and it's one way of looking at how effective you are.

If countries are jamming you, maybe that's a metric that shows you that they think you're effective, and this goes to the question of one size fits all. It goes to the question of, you know, using different metrics to get at different targets, different thematic areas.

We had a guy -- I talked to a guy recently who was a PRT leader in Iraq and he developed what he called the Banana Index and the Banana Index --

PARTICIPANT: PRT?

MR. BANKS: Provincial Reconstruction Team in Iraq. He was the leader in western Anbar Province. His Banana Index was that on any given day he would go into the market, and if the bananas were green or they weren't there, he knew security wasn't sufficient for the farmers to get the bananas to market. If the bananas were yellow, he knew that the security situation was improving, and he took that Banana Index to the guys who were doing the official evaluations, the terrain evaluations and all the rest of it, and they said -- but, you know, for him that was an on-the-ground metric that only somebody who lived there, who walked that dusty market every day, would know, and he could judge what was going on in his region just by the Banana Index. So all kinds of tools are possible.

MR. WANG: Jay Wang. I teach at the Annenberg School.

I just want to make a point about the importance of working with academic institutions like ours to help improve the measurement and hearing some of these examples, I can in fact think of three possibilities here.

For instance, the example Bob gave about the flawed design. At the design stage, if you come to us, we will immediately point it out, because it's a very typical design problem. It's not that sophisticated. It's not that difficult. You know, it's just adding the one dimension, which is the before and after. You already have the control group.

A lot of times what we see is that people do the before and after but without the control group. So you have one piece, and we're missing the other piece. That makes your conclusions a little bit suspect. I mean, that would be the sort of consultation with academic institutions like ours. I think we could easily help you solve this problem.
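
(Illustration: a minimal sketch of the before-and-after design with a control group that the speaker describes -- a difference-in-differences comparison. All group names and survey scores below are hypothetical, invented only for the example.)

```python
# Evaluation design sketch: measure an outcome before and after a program
# for both participants (treatment) and non-participants (control), then
# compare the two changes. The numbers are hypothetical illustration values.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical attitude-survey scores (0-100) for program participants
treatment_before = [52, 48, 55, 60, 50]
treatment_after = [68, 63, 70, 74, 66]

# Hypothetical scores for a comparable group that did not participate
control_before = [51, 49, 54, 58, 50]
control_after = [55, 52, 57, 61, 53]

treatment_change = mean(treatment_after) - mean(treatment_before)
control_change = mean(control_after) - mean(control_before)

# Difference-in-differences: the change attributable to the program once
# the background trend (captured by the control group) is removed. A
# before/after comparison alone would overstate the program's effect.
did_estimate = treatment_change - control_change

print(f"Treatment change: {treatment_change:+.1f}")
print(f"Control change:   {control_change:+.1f}")
print(f"Difference-in-differences estimate: {did_estimate:+.1f}")
```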

The other one we talked about, the cultural programs, the educational programs, the long-term impact. For some of these, as Ernie said, we need to look at different tools. I mean, in sociology and in our studies there are, you know, memory studies, and I think a lot of times memory studies will be very appropriate for asking people who participated in these programs 20 years ago, because then you're really looking at the long term, because I think the government is not going to pay attention to what happened 20 years ago, right?

What you're evaluating will be like what happened yesterday or a month ago or a year ago, and even the baseline is set up in a very narrow time frame, which is a few years. So by this time, you know, the concept can easily apply. And the example that Kerry was talking about, the partnerships -- here we have quite a few scholars who do wonderful research in network analysis, and it's a network.

How do you apply the network tools to analyze a partnership network, not only in terms of the quantity but also in evaluating the quality? We have the basic tools there to assess that. It's a question of how we can link the basic research with this very applied situation, and I think we just need to think of ways to make this collaboration more effective and efficient.
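
(Illustration: a minimal sketch, using hypothetical partnership data and the open-source networkx library, of the kind of network analysis the speaker describes -- looking at a partnership network beyond the raw count. The organization names are invented for the example.)

```python
# Partnership-network sketch: count partners and partnerships, then use
# simple centrality measures as rough proxies for which partners sit at
# the center of the network and which ones bridge otherwise separate groups.
# Requires networkx (pip install networkx). All data here is hypothetical.
import networkx as nx

partnerships = [
    ("HostOrg", "Company1"), ("HostOrg", "Company2"), ("HostOrg", "University1"),
    ("Company1", "NGO1"), ("Company2", "NGO1"), ("University1", "NGO2"),
    ("NGO1", "NGO2"), ("Company2", "University1"),
]

G = nx.Graph()
G.add_edges_from(partnerships)

# Quantity: the raw counts
print("Partners:", G.number_of_nodes(), "| Partnerships:", G.number_of_edges())

# Quality proxies: centrality of each partner in the network
degree = nx.degree_centrality(G)            # share of possible direct ties
betweenness = nx.betweenness_centrality(G)  # how often a node bridges others

for org in sorted(G.nodes, key=degree.get, reverse=True):
    print(f"{org:12s} degree={degree[org]:.2f} betweenness={betweenness[org]:.2f}")
```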

Thank you.

MR. OLSON: Thank you.

MS. TAREEN: I'm Amra Tareen. I'm CEO and Founder of a new media company called allvoices.com, but prior to that I was a venture capitalist, and my biggest issue -- and please don't take this the wrong way; I'm not familiar with PD -- but based on science and based on business, based on funding for any program or start-up, the goal is first to set up the metrics that can be measured before you actually go and do something. Those metrics should be defined by the program owner, saying these are the metrics and these are the time horizons over which they need to be measured, and once you define those metrics, then you know what you're looking for. Whether your program does well or not in the short term, you can then make corrections, but without measurements, you don't know whether you're succeeding or failing.

So I think the metrics and measurements, whether they're quantitative or qualitative, need to be set up front. It's very similar to the scientific approach: you have a hypothesis and you're going to test it, whether it works or not. In some situations you have a program, and whether it has a 17 percent success rate or 80 percent, you need to have an idea of what the outcome should look like, and if you don't, then you don't know what to measure. So a constant feedback loop, whether it's a short-term goal or a longer-term goal, needs to be established, along with time periods, whether it's months or years; there should be something set up to measure what the impact is in 10 years.
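
(Illustration: a minimal sketch, with hypothetical program names and numbers, of the "define the metrics and time horizons up front" approach the speaker describes. Each metric is declared before the program runs; a later feedback check compares observed values against the declared targets.)

```python
# Metric-definition sketch: declare baseline, target, and time horizon up
# front, then use a simple feedback check for course correction later.
# All metric names and values are hypothetical illustration data.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float      # value measured before the program starts
    target: float        # value the program owner commits to up front
    horizon_months: int  # when the metric is supposed to be assessed

    def on_track(self, observed: float) -> bool:
        """Feedback check: has the observed value reached the target?"""
        return observed >= self.target

# Hypothetical metrics for a hypothetical exchange program
metrics = [
    Metric("favorable-opinion share (%)", baseline=32.0, target=40.0, horizon_months=12),
    Metric("alumni still engaged (%)", baseline=0.0, target=60.0, horizon_months=24),
]

# A later review supplies observed values; course correction follows
observed = {"favorable-opinion share (%)": 37.5, "alumni still engaged (%)": 65.0}

for m in metrics:
    status = "on track" if m.on_track(observed[m.name]) else "needs correction"
    print(f"{m.name}: baseline {m.baseline}, target {m.target} at "
          f"{m.horizon_months} months -> observed {observed[m.name]} ({status})")
```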

You may not be the right person to measure it, but someone else is going to measure it, and I think that's why measurement and some sort of metrics need to be part of it, regardless of what you do.

Partnerships are a great example. The number is one metric. Now, how many of those partnerships are effective in achieving your goal? Probably very few. Then you'll know what you need to focus on; rather than hundreds and hundreds of partnerships, you may need to focus on a narrow number of partnerships.

So I am a creature of a world where everything gets measured every month, every year. Measurement is very important for correcting your course, and I'd look at it that way, as opposed to as a way for someone to turn your program down, and so I just wanted to add that.

MR. OLSON: Well, I want to add that I understand, Ernie, what you're talking about with evaluation, and Pamela, in a way, what you were talking about is Ernie's definition, I think, of evaluation. You've got to evaluate what it is that's discernible, knowable.

The thing that concerns me about this data -- well, I don't know what this data is. I haven't seen it. I sit in kind of a part-time quasi-governmental entity. I go back to my cattle ranch in Texas and I opine in L.A., right? I don't know if the data, the interpretation of the data we have -- I totally agree with you.

All data, if it's meaningful data, has to be credible data. It has to be knowable and discernible, and if it's not knowable and discernible and quantitative, then it becomes problematic to me, because you could predicate your assumptions on poorly-contemplated numbers or data or outcomes. And, you know, if we talk about whatever this universe of data is we're talking about, let's go back to 2007 when we started, when Karen started -- you referred to it in your remarks -- where we were -- it was not the EMU but --

MR. BANKS: The PEA.

MR. OLSON: There was an entity that was analyzing some data. Maybe it was the PDI.

What worries me is, do we have the ability within our house to properly analyze the assumptions in the numbers, or are we just saying, this is the assumption? We have limited resources, and once we make the assumptions, the numbers have lives of their own, and then they become instructive to our decisions. To me, all this conversation is about recognizing that there has to be credibility in the interpretation of the assumptions behind the numbers, and whatever it is we can assume, there are some things we can't assume, Pamela.

There are other things -- and I absolutely agree with Ernie -- we can't assume, and it doesn't matter what we think; we're going to have to live with the best outcome we can have relative to some kind of accountability. So, I mean, my concern is how we implore those who are looking at this data to ask about it at the beginning, and maybe, going forward, the issue is resources to interpret the data.

Before that, you know, it's where did those numbers come from, and how did we get them, and when did we get them? Do you -- oh, I'm sorry.

PARTICIPANT: Just one point. We know the data's not perfect, and we also know that, as Bob pointed out, we started a little bit behind the eight ball, to the point of the U.S. Government's own assessment that we were unable to evaluate --

MR. OLSON: Results not demonstrated.

PARTICIPANT: Right. So that was a pretty bad starting point eight years ago, or whatever it was. So I guess in my mind, I just think it's important that we don't get too caught up with trying to get perfect data. We all have to admit that this data is far from perfect and that there is lots of improvement to be done, and maybe part of the role that we have is to start to look at how we can improve it, but going from a zero basis to what is considered, you know, an acceptable basis is a great stride.

Now we have to make it better and get even better data, but we're not going to get perfect. As in your experience in the private sector, I mean, it's a very high standard that data has to meet to be usable. We may not have to get to that high a standard on the first go-around.

MR. BANKS: I'm glad you said that because I wanted to make that point.

What I was trying to show -- I don't want to get hung up on, you know, 17 percent or 87 percent. What I was trying to show was that starting in 2004, there have been procedures and processes and mechanisms put into place in an attempt to get data.

You know, I don't know whether there was a design flaw in this particular one -- like I said, I'm not an expert on this, so I can't tell you that myself. What I can tell you is that in my dealings with the folks who are doing these evaluations, they strike me as very impressive.

The RFP, the call for proposals, for a recent evaluation like this is 50 pages long. I said, "Well, what do you look at when these contractors are coming in to do these evaluations?" She went through the whole process, and they go down even to the level of the advanced degrees that the people who will be doing the evaluation have, so that they know that those people are competent to do that evaluation.

So I hope, you know, I don't leave you with the impression that we're coming up with all this data that's maybe not supportable. All I'm telling you is that, as Jay said, there's forward movement on this. We're trying to fill in the gaps in our knowledge. We're trying to find ways to find good data that will help us put together programs, and so I hope that comes across.

MS. GARRETTSON: Katie Garrettson, and I'm thrilled to say I just passed my oral assessment in D.C. to become a public diplomacy Foreign Service officer.

(Applause.)

MS. GARRETTSON: I'm going through security clearances now, so still in the process.

My question is kind of a little change of focus. Given the difficulties in quantifying success and progress in public diplomacy, how successful do you think the current Administration's push to increase its Diplomatic Corps will be and also how do you see the embassy, the work of the embassies evolving over the coming years? Specifically, what focus do you think the public diplomacy officers should be taking on the ground around the world? To anyone on the panel.

MR. SEIB: First of all, congratulations. Fantastic. It's a very hard exam and so obviously you must have worked very hard.

I think that Under Secretary McHale has sort of outlined, in a document that came out a couple weeks ago on strengthening U.S. engagement with the world, the idea of truly building a bureau of public diplomacy and actually invigorating it and having it be treated as an equal portion of statecraft rather than as a lesser role, and so I think that her work there is -- you know, some people may agree with all of it, part of it, some of it, but it's been a huge step forward.

I mean, I think that we in our last mission had discussions with one of the Under Secretary's deputies about how this is the beginning of a process to upgrade and increase the public diplomacy footprint inside the State Department and, in turn, inside of embassies.

To your question about public diplomacy, I can speak for myself personally -- I can't speak for the State Department as a body or in any other way -- but I think public diplomacy is something that has to be done by every single officer. Some people forget that our consular officers, our first-tour consular officers, have as much exposure to individuals as most public diplomacy officers who make it their full-time position.

So I think that we need to realize that everyone in the State Department is a public diplomacy officer in the broad sense, that we need to encourage our Ambassadors and our posts to get everyone engaged in outreach, and that, with that in mind, we are going to have to make that a portion of the evaluation process going forward.

MR. OLSON: Anybody else? Yes, sir.

MR. FULTON: I'm Bob Fulton. I had the pleasure of being one of the founding professors of the Annenberg School, but more important and relevant to this discussion is that I served in every Administration from Nixon up to President Clinton's first, and in five separate agencies, including the State Department, and you have received a lot of excellent suggestions here this morning, Robert. I assume it's going to be your task to pull them all together or something.

But my point, reverting to Pam's good comments, is that you need a paradigm or a construct that you can rapidly computerize, if you haven't already, in moving forward with this initiative that was outlined in the Strategic Plan, and I think you need to take account in that paradigm not only of all the good suggestions that you had here today but also of the larger issue -- specifically, again relating to Pam's suggestion -- that you have to deal with individual countries.

There are two unobtrusive variables that you definitely need to look at, and those are how much money we are spending in that country and how many people are applying for visas to come and share the American dream or the American experience.

I think they have to be on an individual country basis, and then fold in all these others. We had a professor here at USC who I had the good fortune of studying under. His name was J.P. Guilford, and he developed what was called the Structure of Intellect, and in that Structure of Intellect we did a lot of multiple regressions across all forms of intellect.

I think one of your chaps, your measurement chaps, should go back and take a look at it, because I think it will give you a good paradigm and organization for all these variables that you have to deal with.

So I say bless you, my son, or bless you. God speed.

MR. OLSON: Would you identify yourself?

MS. ABAYO: Corolla Abayo, Association on Strategic Initiatives here at the Annenberg School.

Having worked in and out of government, I would stress the point of no perfect data, and I would also say you will not get apolitical data. At the end of the day, we have to remember that the major reason anyone will ask you for your data, if you are a government agency, is because it determines budget. That also means that in gathering the data, most officers, with all due respect, will remember that that's why you're collecting the data. So as part of the assumptions that you were referring to earlier, we need to recognize that the data will ideally support an argument for appropriate resources and will rarely say we're doing squat and we're not producing anything. There's a built-in bias already in data collection that is done in-house or within the government, which perhaps echoes back to Jay's point about the importance of having external evaluations conducted that could provide a bit more of that credibility -- not to denigrate the in-house efforts, but to say that they are part of a very narrow, particular focus, and that is to ensure continuity of the financial resources.

MR. OLSON: I think that is a superb point, and I think it goes to the heart of all of this. Still, if your data is credible, if it's known and definable, it's credible, and you can make a political judgment off of that if you want to. If you are manipulating the assumptions and those things that are knowable for political purposes, then I would say it is an intellectual exercise in futility and a waste of money, and we should just accept that and not opine about it.

I mean that is to me at the heart of this whole data assumption because if we throw it out in an academic world and say this was the data, then all of that data really was -- I mean, look, there isn't a person in here that doesn't understand politics and how all that stuff works, but, you know, we're kind of in a world where we're not really plagued with that and so how do we get to the truth? How do we get to appropriate predictability of that information, and I think the point's very well made.

It's not just that some of this stuff is unknown. It could be known and still be biased and that's okay if it's biased, as long as the underlying stuff was as good as we probably could get.

There was -- yes, sir.

PARTICIPANT: Government and political leaders, government officials, NGOs, and the people -- they should work together. There is a story in Korea. A wise king addressed the people: We are facing a very difficult time, a disaster. I knew it was coming. I've been preparing, but it came sooner and harsher.

Now, I have talked to every section of the country and I have made up a plan. It's not a perfect plan, but it's a good plan for everybody. So now, can you support me? Can you stand with me the next few years? It may be hard. You may have to live with less and through a difficult time. Can you stand with me? Can you support me? And the people looked at the king and said, it's a good plan. It's not perfect, but it's a good plan for the whole people, for the whole country, and they said, King, we stand by you, we support you.

That's what we need in America now, you know. They have to make a good plan, talking to the people, to NGOs, to whomever, and make up a plan which makes sense to everybody. Somehow we don't get that -- everybody is interested in, I want this, and this isn't a good one. So I'm asking the panel: as far as the United States is concerned, where does the U.S. stand in public diplomacy and talking to the people?

MR. OLSON: Well, I would tell you I'm asking and you're on the answering end, but I'm going to -- I'll give you an opinion.

I think in our country today, you're talking about what do we say to our people about compromise or about reasonableness or about less than ideological extremes. That's what I heard you asking.

I think we're living in a day in this country where we're tremendously divided. There's just -- civility is fleeting and for a hundred years, I'm 63 years old, we kind of had a continuum of -- an economic continuum, political continuum, Democrats, Republicans, blue collar, white collar, based on capital and labor.

I think that's changed today. I think our continuum today runs much more along a kind of behavioral line of fundamentalism and non-fundamentalism, and I'll define fundamentalism as that behavior that has to compel other people's beliefs, and it's on the left and it's on the right. As a result, we have divided ourselves in a pretty unhealthy way, because it's really tough, in my judgment, for the fundamentalist Democrats and fundamentalist Republicans and the non-fundamentalist Republicans and non-fundamentalist Democrats -- and the non-fundamentalists are people who are fairly moderate, in the middle. Their differences are not that great, but we've become so damn ideological that reasonableness is fleeting and we cannot reach a common center.

I think a lot of that goes to how we address each other and how we talk to each other and what we say to each other, and, you know, when I was in elementary school, in first grade, my report card had those little boxes, and the first one was "plays well with others." Well, we're not playing well with others, with each other. It was the most fundamental lesson of our childhood, and yet we really don't play well with each other, and we're capable of doing that, and there are a million reasons -- 24-hour news cycles, all these kinds of things -- and I think we probably have been purposefully divided through reapportionment and redistricting throughout the country.

But I think it's harder for us to get to the middle today. So when you say that the king would say, you know, trust me here, I'll get you through all of this, well, Americans kind of -- you know, we have to be able to give a little bit and today, it's just if you don't believe what I believe, you're not a good person, and it's all over. It's on both ends of the political spectrum. Most of us, I think, are kind of in the broader middle and we might be left middle or right middle, but we're not -- we don't have to compel everyone else's behavior.

Part of getting -- I think my answer would be part of getting to where you want to be is that I think there are good smart thoughtful people in public life, Democrats and Republicans, that can get us to a reasonable place, but as a population, our public -- our civic life today is telling us that if you get to that middle place, you've compromised and it's bad, and I just think we're at that point and it's going to be really damn hard to get there, even if we can get there, if our public policy takes us there.

You didn't ask for a political answer, but it is the message. It is normalcy that you're talking about, and giving up something, and we all know what that means today. I don't know if you know what it means, but it's what's called for.

MR. WILSON: Listening might be one of the effects of public diplomacy, which is a way of asking the question, how do we bring in attitudes from other places that reflect the diversity of opinion internally and about the United States? So we think of public diplomacy as persuasion, but getting measures for listening and bringing information in so policymakers can make smarter decisions would be one way of thinking about effective public diplomacy. It's kind of upside down.

PARTICIPANT: That's where we spend our time on the private sector side, the measuring of the listening more than anything else. That guides all of our activities. We match the actions to the outcomes of the listening exercises.

MR. OLSON: In a strange way, when you take that -- I mean, I promise I'm not -- I don’t care.

If you have a need to compel others' behavior -- be it a Christian fundamentalist who says that you're only a Christian, or only believe in God, if you think the way I do, or a Democrat who, pushed to it, would say that you're evil if you hunt or if you go to war or if you -- I mean, judgments that we make -- if you don't think the way I think, you're not a Democrat; if you don't think the way I think, you're not a Republican.

In our foreign policy, you know, this whole listening notion to me is the opposite of having to compel. Do we believe that every nation -- I believe every man wants to be free, but in the export of our foreign policy, is democracy a value where we define what democracy is? I mean, you get into a really complicated world, but all of this stuff, I think, is in the water today about compelling.

Our foreign policy is a tool, an instrument to accomplish certain ends for us. The way we did it, you know, at a time of distress like 2001, was different than the way we may do it today, and that whole paradigm of force has changed the landscape tremendously, although we're still in it.

Yes, ma'am.

MS. WALSH: Hi. I'm Pauletta Walsh, and I was with the State Department for four years as television media advisor to Europe. I was the first regional person in that field. I was --

MR. OLSON: Tell us your name again.

MS. WALSH: Pauletta Walsh.

MR. OLSON: Pauletta. Welcome, Pauletta.

MS. WALSH: And I'm so happy to have you all here. So basically I wanted to talk about my job in that position, which was to travel to countries. I created documentaries in foreign languages for distribution in those countries as a way of speaking to other nations, and not all of that has been done. I think a lot more of it should be done, and we were talking about the broadcasting area -- I think it will be.

But I just wanted to bring up a couple of points I was thinking about. One was that it seems to me we could better evaluate it if we split it into three tranches. Those would be attitudinal, things that change attitudes long term; things that change policy individually, like an individual policy that we're currently implementing; and then individual country-to-country niches. In Europe, I went to 22 countries and did documentaries in multiple languages, and that was very, very effective, but I always assumed that each country was an entirely individual policy area, and then maybe also looking across the area, like in broadcasts.

I was trying to think about the best outcome we had that we could verify. We did a partnership with Al Arabiya on a wonderful profile of Ambassador Schulte, who was then our Ambassador to the IAEA. He used to joke he never had a good day. Okay? So this was an Arabic product, and Al Arabiya was from the Arab world, and I was from the State Department, and this went out for broadcast. We were very, very fortunate to be able to find out how successful it was, because it went out once and then there was such high demand that it was actually broadcast another six times, and then the program that it was on actually got financial support and commercial support after that.

So it was one of the cases where we really, really were able to document the success of a particular thing. So at least for the broadcast area, in partnerships with broadcasters, private/public partnerships, that would be a way to do it.

With the Web, obviously, with anything on the Web you have a built-in way to measure. So maybe by splitting them out into their kinds of baskets of both intention and delivery, you might be able to figure out a way to create what overall would be a more robust measurement.

MR. OLSON: Thank you.

MR. WILSON: Was this with USIA or recent --

MS. WALSH: I came as a presidential management fellow to the Department of State. I was there from 2004 through 2008.

MR. WILSON: It was after USIA. Where did the funding for this come from?

MS. WALSH: Well, actually, first of all, I was with Broadcast Services and I did a thing for them for Iraq -- a documentary on Iraqi Americans that was funded out of the Iraq Supplemental. Then other things: one in Italy came out of the Italian -- the one I was just speaking of came out of the embassy.

Then when I was at EUR/PPD, they actually gave me my own little budget, where I could be tasked from any of the various areas.

MR. WILSON: Did you have any interface with the PD people who were emerging?

MS. WALSH: Oh, yes, that's who I worked with all the time, and when I went into a country, I would meet everyone there, the PD groups.

MR. WILSON: But the budget came from other desks?

MS. WALSH: No. I had my own small budget, and then it would be supplemented, usually by something in country, not from the desk in EUR/PPD, generally. But one of the things we did was also in Montenegro, and that I think was supplemented. It was a partnership with the country of Montenegro, the State Department, and Georgetown University. It was a basketball diplomacy project, and that also involved a private partner, a bank, Opportunity Bank. So they were kind of put together according to the region.

MR. BANKS: Which group were you out of? Were you with the PD group out of Main State, or were you out of the Europe regional bureau?

MS. WALSH: Well, first of all, out of Public Affairs. I'm sorry -- out of Public Affairs, first at the Office of Broadcast Services, then out of the Office of the Under Secretary, and then subsequently out of EUR/PPD under the leadership of Dan Freeney.

MR. OLSON: We've gone 10 minutes over. Thank you all. Good luck to you in your career.

MS. WALSH: Thank you.

MR. OLSON: And we are going to be back in here, I believe, at 1:30, is that right? At 1:30. This is delightful for us, and I thank you all so much for attending.

Everybody's welcome at 1:30 today, and so if there's no further business, Commissioner, if there's not, --

MR. SEIB: Thank you, all.

MR. OLSON: -- the Commission will stand adjourned, subject to call of the Chair.

Thank you.

(Applause.)

- END -