Minutes for September 16, 2014 Official Meeting

Townhall
Washington, DC
September 16, 2014

   

“Data Driven Public Diplomacy: Progress Towards Measuring the Impact of Public Diplomacy and International Broadcasting Activities”

Tuesday, September 16, 2014
10:00-11:30 a.m.
The Capitol Visitor Center

COMMISSION MEMBERS PRESENT:

Mr. William J. Hybl, Chairman
Mr. Sim Farar, Vice Chairman

Ambassador Lyndon Olson, Vice Chairman

Ms. Anne Terman Wedner

COMMISSION STAFF MEMBERS PRESENT:

Ms. Michelle Bowen, Program Support Assistant

Dr. Katherine Brown, Executive Director

Chris Hensman, Senior Advisor

John Pope, Summer Analyst

Kayli Westling, Summer Analyst

GUEST SPEAKERS PRESENT:

Dr. Amelia Arsenault, Assistant Professor of Communication, Georgia State University

Dr. Craig Hayden, Assistant Professor, American University

Dr. Shawn Powers, Assistant Professor of Communication, Georgia State University

Dr. Jian (Jay) Wang, Associate Professor, Annenberg School for Communication and Journalism, University of Southern California.


MINUTES:

The U.S. Advisory Commission on Public Diplomacy held a public meeting on September 16, 2014 from 10:00 to 11:30 a.m. at the Capitol Visitor Center in Washington, DC. The meeting focused on the findings from the Commission’s most recent report, “Data Driven Public Diplomacy: Progress Towards Measuring the Impact of Public Diplomacy and International Broadcasting Activities.” The Commission Members welcomed four guest speakers to brief them and the public on the findings of the report. The speakers were: Dr. Amelia Arsenault, Assistant Professor of Communication, Georgia State University; Dr. Craig Hayden, Assistant Professor, American University; Dr. Shawn Powers, Assistant Professor of Communication, Georgia State University; and Dr. Jian (Jay) Wang, Associate Professor, Annenberg School for Communication and Journalism, University of Southern California. The audience included leaders in the public diplomacy and international broadcasting community, including Rick Ruth, Roxanne Cabral, Jean Manes, Bruce Sherman, Robert Bole, Lynne Weil, Suzie Carroll, Dr. Robin Silvers, Dina Suggs and David Ensor.

The purpose of this meeting was to review the core findings of the six-month study on research and evaluation methods currently underway at the Department of State’s public diplomacy offices and at the Broadcasting Board of Governors. The study identified five major areas of needed change: (1) increased recognition on the part of State Department officials of the importance of research in public diplomacy; (2) movement away from the State Department’s and BBG’s risk-averse cultures, which can negatively affect how research data and evaluations are conceived, conducted, reported and used; (3) more consistent strategic approaches to developing and evaluating public diplomacy and international broadcasting activities; (4) increased training in strategic planning, including research and evaluation; and (5) more funding and personnel at both agencies to conduct more meaningful evaluations that can correct the course of programs and activities. The guest speakers went through core recommendations for research design changes in the Educational and Cultural Affairs Bureau’s Evaluation Unit, the International Information Programs Bureau’s Analytics Unit, the Office of Policy, Planning and Resources (R/PPR), and the evaluation and strategy offices at the Broadcasting Board of Governors. The Commission staff also reviewed the key structural and organizational changes necessary for research to become more institutionalized within both agencies.

The Commission Members, experts in attendance, and the audience posed questions about the methodological and structural changes necessary to move toward more systematic measurement of programs’ and campaigns’ impact. The specific answers to the questions can be found below in the transcript. The Commission will continue to investigate issues that will better support measurement and evaluation for public diplomacy and international broadcasting.

The meeting closed with a brief discussion of the Commission’s mandate and plan for the remainder of the year. The Commission will meet publicly again on December 11, 2014 to discuss the findings from its Comprehensive Annual Report.

TRANSCRIPT:

William J. Hybl: Good morning and welcome. I’m Bill Hybl, Chairman of the U.S. Advisory Commission on Public Diplomacy. We want to thank all of you for being here. I know I speak for the staff and the Commissioners in saying we think today’s topic is a very important one. Since 1948, the Commission has been charged with appraising U.S. government activities that intend to understand, inform and influence foreign publics. It also works to increase understanding and support for those same activities. The Commission conducts research and symposiums that provide assessments and inform discourse on public diplomacy efforts across government.

This meeting is based on the findings of the Commission’s first official report for 2014, which examines the progress being made toward data-driven public diplomacy and international broadcasting. This is the result of six months of research conducted by members of the Commission’s staff and nine contributors, and it has led to some very interesting results. We’re very fortunate to have some great speakers with us today who are experts in the field. They have looked into what has transpired both within the Broadcasting Board of Governors and at the State Department.

I’d like to introduce the Members of the Commission who are with us today. Sim Farar, the Vice Chair of the Commission, is from Hollywood, California. Ambassador Lyndon Olson, our former Ambassador to Sweden, is also Vice Chairman and from Waco, Texas. We also have with us Anne Terman Wedner, who is from Chicago, Illinois. Ambassador Penne Peacock from Austin, Texas and Lezlee Westine of Virginia could unfortunately not be with us today.

We’d like to thank Senator Barbara Boxer for being our host today at the Capitol Visitor Center and also Walker Zorensky, who assisted in acquiring this room for us, which is a great spot. It is important to have this centralized space so that staff members from Senate and House offices can join us. Let me also thank the members of the staff. Katherine Brown has done a terrific job of putting together a great staff, and you will see today some of the fine work that they have put together. I think that each of you has a copy of the report at your seat.

Now, let me introduce our Vice Chair, Sim Farar.

Sim Farar: We’re honored to have with us today some of the report’s contributors and we appreciate them being here. They will discuss the main findings and recommendations for the different public diplomacy and international broadcasting offices that they have reviewed. First, we’d like to welcome Jay Wang, who is from my home city of Los Angeles, California, and is one of our Trojan fans here. He is an Associate Professor at the Annenberg School for Communication and Journalism at the University of Southern California and is also the Director of the USC Center on Public Diplomacy. Dr. Wang will discuss the Advancing Public Diplomacy’s Impact report in addition to the work of the Educational and Cultural Affairs Bureau’s evaluation unit.

Secondly, we’d like to welcome Amelia Arsenault and Craig Hayden. Amelia is an Assistant Professor of Communication at Georgia State University and a research fellow at the Center for Global Communication Studies at the Annenberg School for Communication at the University of Pennsylvania. Craig is an Assistant Professor at the School of International Service at American University. Dr. Arsenault and Dr. Hayden will both focus on findings from their review of efforts to track the impact of digital media activities in the International Information Programs Bureau and the Public Affairs Bureau.

I’d like to also welcome Shawn Powers. Shawn is an Assistant Professor of Communication and an Associate Director of the Center for International Media Education at Georgia State University. Dr. Powers will discuss the Offices of Performance Review and Strategy Development at the Broadcasting Board of Governors.

The contributors’ detailed biographies are available at the back of the room; they are very extensive but quite interesting to read. After they discuss their findings from the respective offices, our Executive Director, Katherine Brown, will present the overarching findings and recommendations for structural and organizational change for research and evaluation at both agencies. We will then open up the discussion to questions from the Commission Members and then to the entire audience.

There are several people in the room we also look forward to hearing from. We’re honored to have much of the State Department public diplomacy and BBG leadership with us today, including Rick Ruth, Roxanne Cabral, Jean Manes, Bruce Sherman, Rob Bole, Lynne Weil, Suzie Carroll, Dr. Robin Silvers, Dina Suggs and David Ensor.

I’d like to ask Dr. Jay Wang to come up to the podium and review his findings. Dr. Wang please, thank you very much.

Dr. Jay Wang: Thank you Sim. Good morning, everyone. I’m here to discuss the recommendations for the Advancing Public Diplomacy’s Impact (APDI) report and also the ECA evaluation projects. Two other colleagues also contributed to these assessments, but they are unable to attend today’s meeting.

We focused on examining research strategies and their implementation. This includes research objectives, designs, execution, interpretation and reporting. So I want to first highlight some of our recommendations for the APDI project. The impact report is intended to be a benchmark study to assess whether public diplomacy activity broadly impacts target audiences. The study employs a quasi-experimental design that attempts to compare responses from participants and alumni of U.S. public diplomacy programs to the responses of those who have had no contact with the local U.S. Embassy in seven countries. We are very encouraged to see an emphasis on measuring the long-term impact of public diplomacy on foreign publics. Here are our recommendations to enhance the project.

Our first recommendation is to redesign the study to focus less on aggregate findings so that it can provide the foreign policy community with more detailed and analytical information and link public diplomacy programs with policy objectives. In addition, a few case studies will support efforts to better understand why and how public diplomacy works or doesn’t work in achieving desired outcomes with various populations in different parts of the world.

Our second recommendation – I’m just going to go through it briefly; in our report we have more detailed analysis. The second recommendation is to redesign the future study so that it allows for more actionable data and better feedback for policy-making. This includes sharing the insights gained from this study with other public diplomacy practitioners and policy makers so that they might be used in the development and advancement of ongoing and new public diplomacy initiatives, and in informing U.S. foreign policy. An expansion of the evaluation unit staff would also support the report’s redesign and ultimately, its wider dissemination within the department and to Congress, oversight agencies and other stakeholders.

Third is to focus on key publics rather than convenient populations and to include those publics in future evaluations in an effort to gauge the broader impact of public diplomacy on foreign populations. Currently the nonparticipant group is primarily determined by demographics, which has quite significant limitations.

Fourth is to increase the sub-level analysis for richer insight. By this, all we mean is that we suggest including in the analysis, for instance, recency, segmenting participants based on how recent their program experiences were, and intensity, in terms of the nature and amount of their exposure to U.S. public diplomacy.

The fifth is that it is important that we contextualize and qualify the interpretation of the data and discuss the participants’ knowledge, interests and understanding of the United States. More specific questions included in the interview schedule would help to measure the subjects’ actual knowledge and understanding of the U.S. and, if possible, establish a baseline of what every participant’s knowledge, interests and understanding are even before they start a public diplomacy program with the U.S.

Our sixth recommendation is to incorporate greater context on the political, geopolitical, economic and sociological dynamics affecting the country. The analysis of data should also consider both the timing and types of programs as well as the number of programs in which respondents participated. The impact of other influences on participants’ views, such as media consumption and educational backgrounds, as we have seen from academic studies, could also be incorporated. Finally, we encourage constructive criticism through evaluation. It is important that we provide results that are supported by the research design. There are certain claims and certain statements made in this report that require stronger support in order for those claims to stand. So these are the seven recommendations that we have for the impact project.

Now let me go to the ECA evaluation units.

The effectiveness of exchanges conducted by ECA in reaching longer-term goals is the most difficult and probably the most expensive to measure. ECA has provided much of the direction for research and assessment of diplomatic activities writ large through logic models and step-by-step guides for the State Department. Their work is laudable given both the financial and bureaucratic constraints. We also make seven recommendations for how to adjust and enhance their research designs and implementation.

First is to develop specific research-based goals and objectives that connect programs to foreign policy gains. A standard set of questions and measurements about knowledge and understanding of the U.S. would also help ensure the validity of the findings and allow for cross-study comparison. Second is to separate short-term goals from long-term goals. We recommend distinguishing short-term and long-term objectives more effectively in program plans. The short-term objectives are easier to measure and can contribute to program planning and development. There is an opportunity to tie them to long-term evaluations. The third recommendation is to reduce as much as possible the reliance on self-evaluation by program participants. Fourth is to supply greater context of country, regional and global trends to the activity being measured. We recommend placing more emphasis on qualitative fieldwork and comparative studies that would provide insights regarding the influence of public diplomacy activities and of program outcomes. This also requires more funding for measurement activities. Studies that involve multiple methods especially require time. Yet such data is valuable for informing the design of future evaluation projects. Fifth is to encourage constructive criticism through evaluation. We need to develop studies directed at detecting and understanding the reasons for the formation of, and shifts in, attitudes and behavior among foreign publics towards the United States.

Our sixth recommendation is to focus on clarifying the descriptions of research processes and to further develop guidelines and principles that guide research and data reporting. This includes measurements that allow learning and insights from evaluations to be used to improve and develop programs. We should aim to use qualitative research to inform quantitative research designs and not just to illuminate the quantitative findings. Finally, our last recommendation is to distinguish between what is inferred and what is directly assessed in reporting the research data. We should provide more objective measures of programs’ impact on participants rather than relying only on inferences drawn from interviews with those involved in the programs and those involved in implementing them. So I will stop here. Thank you.

Sim Farar: I believe now we’re going to hear from Dr. Amelia Arsenault and Dr. Craig Hayden. Thank you.

Dr. Amelia Arsenault: Hi, everyone. Thank you so much for being here. I’m going to be talking about the first four points on the slide that summarize our findings from our evaluation of the social media activities underway. It looks at the Audience Research and Measurement office in IIP, which is soon to be renamed the Analytics Office. I hope I got that right. And it looks at the Office of Digital Engagement in the Public Affairs Bureau. To lead this off, a quote I wanted to include in the report, which I left out for length, is a saying that reportedly hung in Albert Einstein’s office: “Not everything that counts can be counted, and not everything that can be counted counts.” And I think that that’s something important to think about when we’re thinking about social media.

Social media is still relatively new. Evaluation techniques for social media are even newer. And I think that the teams that are involved in this sort of evaluation should be commended for the work that they’re doing, particularly given the fact that, due to legal and financial restrictions, they are not able to use the full range of tools that are available to people conducting analytics outside of a government organization such as the State Department. That being said, ideally we would like to see an expanded methodological toolkit. Because of the legal restrictions on the government’s collection of individual data under the Privacy Act of 1974, the public diplomacy evaluators can collect aggregate data, but they cannot identify who is following, sharing and viewing U.S. government social media properties and to what effect. So until these restrictions are lifted, I think that evaluators should explore methods such as focus group interviews, which are not subject to the same legal restrictions. But because of staffing and financial considerations, expanding this range of methods is difficult. And right now what we’re seeing is a system in which the methods that are available to the evaluators dictate the questions that are being asked about social media and its impact on public diplomacy, rather than the reverse, which is the preferred research approach.

Secondly, and I think that this is a common issue both outside the State Department and within it, we are concerned with the conflation of outputs and outcomes. Analytics reports often confuse social media outputs, such as likes or re-tweets, with desired public diplomacy outcomes, such as positive sentiment or engagement. And we’d like to stress that more social media activity, which is an output, does not necessarily translate into things such as increased engagement or positive sentiment, which we have described as outcomes. Evaluators are well aware of this limitation, I think, and it’s a common problem, as I said. But written reports generally highlight increases in raw numbers and do not necessarily comment on the extent to which these outputs contribute to the desired public diplomacy outcomes.

Our third recommendation is to continue to integrate the evaluation unit within the policy arm of IIP. Under the pending IIP reorganization, the Analytics office will be more closely associated with the campaigns office. This is a positive development and it should help to better facilitate integration between evaluation and policy. One of the things that we would like to stress is that these sorts of moves are incredibly important because evaluations are really only useful when they are used to inform subsequent activities and designed in such a way that they clearly measure the goals of the project or initiative. Program design and long-term strategic planning should draw upon the knowledge and input of evaluations and evaluation team members whenever possible. Obviously, things happen in the world. There have been a few international events in the past few weeks that probably would have surprised people a few months ago. So, planning out evaluations is not a perfect science.

Fourth is to build consensus early, or as early as possible, on what is being measured and why. There needs to be a better definition of foreign policy goals and the objectives of social media campaigns and messaging overall. We are tweeting. We are using Facebook. But why and to what effect? One of the things that we really wanted to stress overall is the expansion of social media metrics to the host of social media projects across the State Department. Few people outside of the State Department understand the bureaucratic divisions within it, or that an office not necessarily directly mandated to improve public diplomacy maintains its own social media accounts. And so we’d like to see more individual analysis of different social media activities as well as attention to the contexts of social media activities that are taking place across the State Department.

I am going to turn over the microphone to my colleague Craig Hayden. Thank you.

Dr. Craig Hayden: Thank you very much. Picking up where my colleague left off, I would like to move on to the next recommendation. Now, at this point, let me pause for a second. You may be noticing that there are some systemic critiques. We see them repeatedly across different contexts. I think that this is both a good and bad thing, and we recognize some of the challenges. But that also suggests that there are some avenues for movement ahead in addressing them.

We recommend supplying greater context on the country, regional or global levels for the activity being measured. This means not only thinking about the context wherein a campaign might take place but also taking advantage of the resources on the ground. For example, if a situation arises and social media campaigns are deployed, we need accurate media surveys and audience analysis on the ground about what role Twitter plays in political conversations, or in conversations among important leaders, to make sure that a Twitter campaign would actually serve a particular policy objective or advance some sort of campaign. So this involves both long-term contextual cultural knowledge and demographic analysis as well as the already sophisticated quantitative network analysis.

The next point I would like to talk about is comparing U.S. activities with those in the international system. The U.S. has been recognized by scholars and noted diplomats for its path-breaking use of social media, from a few years ago to now. Other countries are also using social media, sometimes in ways that don’t necessarily reflect our values and ethics. But at the same time, they are reaping success in terms of their foreign policy objectives. Likewise, allies and other nations are using social media for nation branding in addition to short-, medium- and long-term campaigns. As part of measurement and evaluation, we need to understand and benchmark our efforts against these other countries. And again, there is some awareness of this. However, we believe resources need to be dedicated to understanding what other countries are doing in terms of their outreach and engagement through social media.

Next, we encourage constructive criticism through evaluation. On the one hand, I think that there is recognition among the evaluation and analytics units that the information being provided is contributing to course correction. On the other hand, there need to be structural and organizational incentives to report bad news. This can then factor into designing, or perhaps changing, programs that may not be working. So, in short, there needs to be an environment where this kind of behavior is encouraged; an environment where analysts and evaluators can report their findings in such a way that there can potentially be a 180-degree turn in the strategy or tactics being used in a public diplomacy campaign. Again, we recognize that there have been positive developments. But these kinds of changes don’t occur overnight. So we encourage those organizational leadership efforts that are trying to create an environment where constructive criticism and course correction can be part of how measurement and evaluation contributes to public diplomacy.

Finally, we recommend increased data sharing across the State Department. One of the things we noticed in our reporting on the evaluation units is that there is a lot of really good work being done across these units. But their work would be better if they could learn from other units. I think this gets at something about the recommendations we’re providing overall: these recommendations are not mutually exclusive. For example, if we want greater context on a public diplomacy program or campaign in order to understand the audience or the cultural context, we need the different offices and personnel who are gathering this information to share it freely, both to design better programs and to create an environment where evaluation can be facilitated and course correction is easier. Likewise, going back to this issue of context: my colleague Professor Arsenault talked earlier about the need for other kinds of methods. Understanding audience and context in a political environment requires different kinds of expertise and methodologies. So we encourage the development of different forms of analysis: not just network analysis and public opinion polling, but audience analysis and media surveys that uncover the social and media fabric within which all of these social media campaigns take place. Thank you very much.

Sim Farar: You will now hear from Dr. Shawn Powers from Georgia State University. Thank you very much.

Dr. Shawn Powers: Good morning and welcome. Thank you for joining us this morning. I want to start by thanking Katherine Brown for organizing and spearheading this effort. It really was a tremendous model for collaboration between non-practitioners and practitioners. I hope the outcome is a productive one for everyone here. Also I want to thank Bruce Sherman and Sonja Gloeckle for their openness, transparency and generosity with their time. It was very helpful, and it helped me to produce this report. And of course I want to thank the Commission Members for supporting this report and the research that came out of it. It really was a remarkable process. I led a team of three researchers that reviewed the Broadcasting Board of Governors’ audience research. My team includes two tremendous colleagues who cannot be here today: Erik Nisbet from Ohio State University and Matt Baum from Harvard University.

The depth and scope of the work conducted is impressive and is commended in the report. However, the BBG’s research budget is far below industry standards, and we urge Congress to allocate additional resources for research in order to ensure American broadcasters can effectively engage with audiences around the world and reach significant audiences in parts of the world where information is far from free. We want to help improve the excellent research that is already being conducted, and therefore we offer eight recommendations.

First, we recommend further integrating research methodologies. What we would like to see happen more often is integration between the large participant surveys, which survey a sample of a thousand people who access American broadcasting, and follow-on research that uses that information to conduct small focus groups and in-depth interviews based on similar questions or questions that arose out of the survey.

The second recommendation is to connect reports’ research designs more specifically with strategic objectives. Reports using in-depth interviews to investigate the habits and impressions of audiences, while important, weren’t always directly connected to the strategic objectives of the broadcasters. We encourage research products to be used to help set strategic goals for both the language services within VOA and the surrogates; those office directors can then get back to researchers to communicate what those strategic goals should be.

The third suggestion is to nuance data collection in repressive states, countries such as Iran and China. Surveys that pose explicit questions to respondents about how they feel about democratic governance, U.S. foreign policy, and U.S. political leaders are unable to elicit the information we want to get from these survey participants. Instead, we recommend using tactics that have been used in the private sector that ask broader questions about generic forms of democratic governance and other more subtle questions to access opinions about American culture, American policies, and support for domestic regimes.

The fourth recommendation is to lessen reliance on self-reported data about attitudes and knowledge. It is an important metric, but some reports of learning from BBG programming need improvement. Simply having people state that they learned something is not the same as their having done so. The existing self-report measures are highly abstract and ambiguous, and could be improved upon with more valid means of measuring these questions. One way would be to ask a series of knowledge questions on each topic to assess audience beliefs accurately. We recommend expanding the core country questionnaire to include audience attitudes and assessments of democracy, freedom of expression, internet freedom, corruption, accountability, and media sustainability within its audience over time, among countries, regionally and globally.

Fifth, we encourage a continued expansion to look beyond audience reach metrics. Reach, or the number of people who have accessed BBG content in the past week, is the first thing that outside observers ask about. Yet it is of limited value. It offers little information about how much programming was consumed, what was remembered and what audiences actually thought about the programs watched. Knowing audience reach for BBG programming is certainly important, but it lacks insight into why others are not tuning in to BBG programming or why they don’t find it compelling or relevant. Such analysis needs to be institutionalized in the evaluation process in order to extend the reach and potential impact of BBG content. The impact framework that’s currently being implemented includes other indexes and factors, and we encourage their inclusion and institutionalization into the overall research process.

The sixth suggestion is to improve long-term trend analysis. We recommend increased use of panel designs, which are being piloted this year. We also encourage the BBG to pilot something akin to Nielsen-type monitoring of actual media usage for a sample population in specific target areas. Emphasizing television and/or online media consumption would also be quite helpful. This could include snap polls and interviews to create a much more rigorous assessment of what people are watching and why, and what they are taking away from the content. Where feasible, these snap polls can be administered in real time while people are watching television content or consuming internet sites.

Seventh, we recommend improved and pertinent analysis of regional and global research. We recommend employing more advanced statistical methods, such as hierarchical linear modeling and multi-level analysis, on cross-national survey data to identify and measure global and regional predictors.

Eighth and finally, we suggest innovative analysis of survey results. We recommend opening up parts of the BBG and Gallup data to a trusted community of academics and stakeholders for peer review on a routine, annual basis to offer initial analysis of what the data means as well as additional validity checks. This could coincide with expert workshops with academics, stakeholders and pollsters aimed at developing a set of achievable goals for this research.

To conclude, in order to properly implement the new research designs that have been proposed and the techniques suggested here, we strongly endorse an increase in the BBG’s research budget. Current research efforts account for approximately 1 percent of the BBG’s budget, well below the industry standard of 5 percent of the operating budget. Additional resources are required for the BBG to effectively achieve its current mission in today’s highly competitive and fast-changing media environment. Thank you.

Sim Farar: I’d like to introduce now -- most people already know -- Katherine Brown, who is the Executive Director, along with her brilliant staff of the United States Advisory Commission on Public Diplomacy. A round of applause for Katherine, who deserves it.

Katherine Brown: Thank you, Sim. Thank you, everyone. It’s great to see you all here and I also want to first thank our staff: Chris Hensman, Kayli Westling, John Pope and Michelle Bowen. There were a lot of moving parts to this project and a lot of logistics and travel to coordinate. We could not have done this without you. So thank you very much. I also wanted to thank the Commission Members for your leadership and guidance. This project is something we identified a year ago when we first met as a group after being reinstated, to figure out what the number one priority for the Commission was. And we decided to look at this issue as our first in-depth report. I also wanted to explain that our reauthorization language specifically asks us to look at impact assessments for public diplomacy and BBG activities. So we decided it was important to take a look at the work that is already being done and to base any further support or constructive research on that; and to look at how research is done at the beginning to design programs, and then how research is done to determine impact.

In the past year, I’ve gotten to know many people in this room and I’ve learned that public diplomacy has been a leader in the State Department in this area. The Broadcasting Board of Governors, as we all know, has been a leader in the interagency on this issue, especially in its audience research and its willingness to open up its work to the public. We really see that State Department public diplomacy leaders and officials at the BBG face several structural constraints in producing consistent, sound research to inform and evaluate campaigns and programs. This is something that you’ve heard recurring in the presentations that just took place. This includes a lack of time and staff to conduct thoughtful, long-term evaluations. In addition, there is a risk-averse culture where officials can often misinterpret setbacks as failures, and therefore people can downplay shortcomings. We also see some legal constraints. There’s the Privacy Act of 1974, which inhibits the robust online research the digital media team, the Analytics team within IIP, can do. There’s also the Paperwork Reduction Act of 1980, which requires a waiver whenever research is to be conducted with more than 10 people. So we definitely want to deeply review these legal constraints.

The change we recommend is to get to a place where outcomes are more systematically reported and where they’re meeting the needs and requests of Congress, which we feel are only going to increase. And we also feel that this research, if structural and organizational change happens, can be more effective in providing feedback and supporting program managers for more efficient and impactful campaigns and programs. What we’ve seen is that a lot of this change is already underway. There’s been a lot of change in the last year since we started this study. The ECA bureau is focusing on more alumni programs that will allow for more longitudinal studies. We also see more of a move to connect their evaluations with foreign policy goals. The new IIP leadership reorganized officially as of yesterday. And their new analytics office will be part of campaign and program design at the beginning of campaigns, to ensure they have goals communicated at the outset that they can later measure. We’re also seeing a lot of changes in the Policy, Planning and Resources office at the State Department regarding developing strategic tools to help Public Affairs Officers in the field. There will be a redesigned Mission Activity Tracker. There’s also a new strategic planning tool called the Public Diplomacy Implementation Plan that’s being released next month. And we feel that this also will contribute to better research.

At the BBG, pending congressional notification I believe, they’re already working to reorganize their offices into an Office of Research and Assessment, which we feel is a fantastic move that will help provide guidance and make sure that research is connected even more closely to strategic planning. And the creation of an Impact Framework is something that’s brand new and that we feel has great promise for looking at the impact that the BBG has worldwide.

We conducted a workshop in July 2014 when we gathered all of the contributors to this report – there are nine contributors, eight of whom actually wrote the appraisals. Four of them couldn’t be here with us today, and one, Nick Cull, is a historian who wrote the preface about the history of evaluation at USIA in the opening of this report. But we gathered everybody at the USC Washington office to talk about what you heard in the reviews that Shawn, Amelia, Craig and Jay just gave. We saw many challenges to adopting the methodologies that we’re advising. So we made a list of the structural and organizational changes necessary to accommodate them.

The first one is State Department specific. But this is really to support the work of the Policy, Planning and Resources Office. We feel that creating a Director of Research position, to be a sort of maestro within the R cone and to make sure that there is cross-bureau research being done, would really support the public diplomacy officers even more. This director could regularly design and advise on standardized research questions, methodologies and procedures that directly link practice to strategy and to foreign policy objectives. The office would also give more organizational legitimacy and authority to research, support researchers’ needs, and prioritize these activities in ways that serve long-term objectives. In thinking about the ideal for this position, we think they would directly report to the Under Secretary for Public Diplomacy and Public Affairs and help translate research into an interpretable and actionable form for him or her. We also feel that this position could help with the State Department’s interagency coordination and interfacing more with BBG’s Office of Research and Assessment. That’s our first recommendation.

The second recommendation is really to support evaluation with more expertise. We saw these teams of two or three people at times doing this work. In the Public Affairs Bureau, just one person is measuring analytics. The staffing is nowhere near where it should be to be able to do the kind of robust methodology that we think would really help the outcome research.

The third recommendation regards funding. If you look at the research and evaluation budget for the Educational and Cultural Affairs Bureau, you’ll see that it is just 0.25 percent. At the BBG, I think it has dropped to 0.7 percent going into Fiscal Year 15. This is not enough to do the scale of work that’s needed. It is definitely a very tight budget environment, and if congressional funding is not available, we would like to see some reallocation of the budgets more towards research and evaluation.

The fourth and fifth recommendations look at the laws that impede research and evaluation work, especially at the State Department. We are going to do a further review of the Privacy Act of 1974 with legal experts. We weren’t ready to make a recommendation on the topic yet. But we are concerned about how this law constrains robust online audience research. Essentially, the law currently roadblocks analytics in the sense that, according to State Department lawyers, influential figures abroad cannot be identified with online analytical tools despite the fact that they are knowingly broadcasting information using tools such as Twitter. Still, we’d like to look into that further. The Paperwork Reduction Act of 1980, again, limits the scale of research that can be done. The intelligence community does have a blanket waiver, and this includes the Intelligence and Research Bureau. We’d like to see that waiver also applied to research offices.

The next recommendation is what you’ve also been hearing about for some time, something that is repeated within GAO reports of past years. This is to increase inter-agency cooperation and data sharing. We felt that the need is still there, although we’re seeing encouraging signs of it happening. We would just like to see it be institutionalized and systematic.

The next one, of course, is changing the culture in these agencies so that setbacks can be aired, and making sure that these reports can be critical for the course correction of programs and campaigns.

The next recommendation is establishing guidance and training on research and evaluation. We do feel that this should be an automatic part of the curriculum for incoming Foreign Service Officers, but also for the many people who work at the State Department who are not FSOs, and advanced training should be reinforced at every level.

The last recommendation is something that we can do at the Commission. We were thinking about how we continue: how do we set up a structure that systematically ensures that we’re providing feedback and making sure that we’re giving the best advice from the outside? That’s creating a subcommittee on research and evaluation. If we can make this work to ensure that the information will be shared, then we can conduct quarterly meetings with an array of experts, not just the people in this room who have worked so hard on this report, but also market researchers and people in private industry. So that is something that we can do to make sure that we continue to support these efforts and provide guidance on methodology. On that note, I just wanted to thank again the contributors: Dr. Hayden, Dr. Arsenault, Dr. Powers and Dr. Wang -- and also Dr. Erik Nisbet, Dr. Matthew Baum, Kathy Fitzpatrick, and Dr. Sean Aday, who worked very hard on this. And thank you to all the offices that collaborated with us; we are deeply, deeply grateful for your transparency and your openness to work with us. At this stage, I’m going to turn it over to the Commission Members for their questions. Anne Wedner?

Anne Terman Wedner: I have more of a comment than a question. I really want to thank everyone at BBG and State who were so accommodating of all of these requests and so open in exposing what you’re doing to a set of academics who haven’t necessarily walked in your shoes. That can feel like a tremendously risky thing. And we really appreciate the fact that you were so open and so supportive of this process and helped us to get a conversation going. So thank you for doing that.

Sim Farar: I also want to thank everyone for their support. It’s a lot of work. And on behalf of all the Commissioners here, we can’t thank you enough. This report, as you all know, is submitted to the President, Congress, the Secretary of State and also the American people. So it is a very involved report. You all worked very, very hard and it means a lot to us. I do have a question for either Dr. Powers or Dr. Wang. Here’s my question: what reorganizations are currently underway at the State Department and BBG right now that can make a difference here? What are we working on at this point now?

Katherine Brown: And you can feel free to stand up and also open it up to other people.

Sim Farar: Should we give the microphone?

Katherine Brown: They can stand up.

Dr. Shawn Powers: Thank you for the questions. So the report outlines the specifics of the reorganization that’s ongoing through BBG. I believe it’s being implemented this week if I have it correct. But there’s a centralization of the researchers’ responsibilities to the strategic planning office. And again the details are outlined in the report. And so I don’t want to go into too many specifics because I may use the wrong words but it is evolving in the right direction to a more centralized control process so that research can be intertwined with the objectives of broadcasters.

Sim Farar: Absolutely, please do.

Katherine Brown: If you can just introduce yourself, Rob.

Rob Bole: Yeah, I’m Robert Bole, the Director of Global Strategy at the Broadcasting Board of Governors. So first of all, I have to congratulate you on the report. I actually read the report with a lot of enthusiasm and interest and it was actually a page-turner in some ways, which is an odd thing to say about these issues, but it was really useful and we appreciate it very much. I want to talk just briefly about that particular issue. So, we are a very complex organization with multiple entities: we have the IBB, which oversees five broadcasters. A lot of the research structure of those broadcasters is represented in the room, including the Director of the Voice of America, and from Radio Free Europe, Radio Free Asia, the Middle East Broadcasting Network, and the Office of Cuba Broadcasting. At IBB, we handle most, if not all, of the Voice of America and OCB research. And we support and work with Radio Free Asia and RFE by providing funding to be able to do that work. The research structures always work together to prioritize the research that we are going to do. However, on the IBB side we had research functions and digital analytics functions spread across three different offices. So we made the move to work together to bring those into one organization that we cleverly called the Office of Research and Assessment. The focus is to try to bring our digital capabilities, our survey capabilities, our qualitative capabilities, and our panel and other experimental research into one office. And we have Sonja Gloeckle, who is now the Director of Research, driving that. She is in essence the chair of all the researchers in the BBG, pulling together a strategic plan.

In the Office of Strategy Development we take a look at the priority countries: what’s in the top tier, second tier, third tier, fourth tier and so on. We use a wide variety of factors to look at those tiers. We look at development goals. We look at security and we look at foreign policy objectives. And we take a look at all of our language services in those countries and put them in these tiers. That then starts to drive where we put our limited research dollars, as well as where we focus our qualitative and panel-based research. And I take Dr. Powers’ point about trying to integrate those things in a better way, which is exactly right.

The other thing that the Office of Strategy and Development does is work with the language services to develop very specific audience goals. That’s one of the things we focused on this last year and will continue to focus on in 2015 in places such as Myanmar. We ask, what are the audience goals that we want to look at? Who does RFA target, and how is that different from or connected to what VOA is targeting? Those audience goals, defining what the target audiences are, then drive the specific research questions. And we want to ask things such as: What’s the media environment like for those people? How is it developing? We want to take a look at how we’re reaching them. Are we connecting with them? There are some improvements in the report that we will certainly take under consideration.

And then finally, tactically, how do we drive day-to-day decisions, especially by using things such as real-time data from social media and web metrics? What I want to convey to the Commission is that we’re really trying to move research away from these grand questions and toward frontline managers who need to make decisions every week about what they’re doing and how they’re doing it, and we’re trying to develop both qualitative and quantitative methods of delivering that data very specifically. So we try to take a strategic look at where we’re headed and where we’re trying to go. That’s the organization that we’re trying to drive toward, and it really wouldn’t happen without the research directors in the audience here; they’re incredibly smart people.

David Ensor: I actually don’t have anything to add to what you’ve already said.

Katherine Brown: Any questions? Anyone else? Well first, after hearing from Rob we do want to open up the opportunity to anyone else from BBG or the State Department to provide commentary, reactions or make any statements you wish. And I see Roxanne Cabral right there. Roxanne, you can stand up or come to the podium.

Roxanne Cabral: I have a cold so I don’t project as well.

Katherine Brown: Yeah, come on up.

Roxanne Cabral: Hi, my name is Roxanne Cabral. I’m with the Office of Policy, Planning and Resources, also known as R/PPR, at the Department of State. I joined recently, a couple of months ago, so I see this as an exciting opportunity to work with the Commission. And we really appreciate the Commission commissioning this report and convening this group of experts to come up with this fascinating study of how we can conduct evaluations. I think the report is timely for a couple of reasons. One, the global environment has changed dramatically in the last few years. The way we connect with people, inform people, inspire people and persuade people has changed. It’s shifted. How we communicate and influence in the world today is not like it was five years ago. Evaluation metrics are going to be important in the way that we look at the global environment and the way that we conduct public diplomacy from here on out. So the external environment is an important consideration in taking this report in context. But internally at the State Department, we have new leadership in public diplomacy. We have a fairly new Under Secretary, several new Assistant Secretaries and so forth. And they are also taking a closer look at how we conduct public diplomacy in this new global environment and how evaluation metrics can help support our ultimate goal: aligning resources to our policy in the most effective ways. And this report is a very helpful guide in helping us develop a holistic approach to looking at our resources, our structures and our processes, and evaluation metrics fit perfectly within that. It’s a key part of that. So, thank you very much, Katherine, and thank you to the Commission for this report. We look forward to working with you more closely on this and developing a more holistic strategy. Thanks.

Katherine Brown: Thank you, Roxanne. Is there anybody else? Jean? Great, fantastic.

Jean Manes: Hi, I’m Jean Manes from IIP. There are just a couple of notes on the questions of structure that were raised that I wanted to address. One is the Analytics Unit for IIP. It has moved to being next to what we’re calling Campaigns. And what that really means is more of a tactical structure. A real-life example of that is that we’re working with ECA on the implementation of “100,000 Strong,” which is a White House priority on increasing the number of students from Latin America studying in the United States and vice versa. What normally happens in the public diplomacy world is that we start a campaign without actually doing research on the front end of how we’re targeting it. And so we have, believe it or not, the online capability now to actually know how people are talking about studying in the United States online. So when you look at an age group of, say, 17-to-18-year-olds or 15-to-17-year-olds, our target audience for the campaign, you can look at how they’re talking about the topic. One of the quick things that we learned is that “100,000 Strong” has no resonance. So there’s no point in us talking about that issue in the terms that we care about if that’s not how the audience is talking about it. By moving the Analytics team into Campaigns you’re able to do that front-end analysis on word choice and on languages -- whether the campaign is in English, Portuguese or Spanish -- and also monitor how content plays. So you’re able to put up content, see the reaction and start gearing content to what’s being more successful, integrating as you go. In the past, IIP had both analytics and evaluation. The evaluation was more focused on year-and-a-half-long evaluations that were useful to feed into longer-term policy but weren’t necessarily useful on the tactical level. So that is one of the structural changes that has happened in IIP, which was official as of yesterday. It was signed to move Analytics to the front end so that you can actually understand the audiences from the beginning, develop a campaign strategy going through, and integrate as you go.

Katherine Brown: Great. Many thanks, Jean. Rick?

Rick Ruth: Thank you all, I’m Rick Ruth from the Bureau of Educational and Cultural Affairs. At ECA, if I may use the acronym ECA, our mantra is evaluation for use. We want our products to be timely, to be accurate and to be actionable. I’m not a subject matter expert. I’m not a researcher or a social scientist. Luckily I’m joined by some of my colleagues here who are, and I’ve surrounded myself with them at ECA, so we have that expertise to draw on. But we are interested in getting information to policy makers, to decision makers, who can use that kind of accurate information on a timely basis to make decisions. That’s the purpose of our evaluation. Some of the structural changes and some of the legal and regulatory changes that the Advisory Commission has recommended would certainly enable us to do that far better. Things that are eye-glazingly boring to the average person, such as the Paperwork Reduction Act, cause such grief every single day. And yet it is almost impossible to get anybody to take up that battle, because nobody wants to be seen as the person who is waging war against the Paperwork Reduction Act. I compliment and I thank the Advisory Commission for being willing to recognize that there are these kinds of structural impediments that cause real professionals trouble every single day, in addition to the larger issues that get better recognition.

ECA has been for a number of years in somewhat of a leadership role within the State Department for evaluation, as you see reflected in the report. But that’s a precarious position to be in because you have to stay there. People expect you to stay there once you’re out in the lead. And I think that the recommendations of the Commission and the experts that the Commission has gathered are ideally designed to help us stay there. We welcome all of them. We will be looking very carefully at all of them, and with the implementation of these kinds of recommendations I hope that ECA can maintain that kind of position, particularly in the area of linking all of our evaluations and our reports to foreign policy. That’s very important to us. I very much like the contextual approach of not just discussing how our participants reacted to the programs and what they did subsequently, but where that fits into the national, political, economic and strategic scene. That’s very key. And also, as is always true in every bureaucracy, I fear we need to do a better job of linking the information that we do obtain to all of the other parts of the bureau and the public diplomacy cone so they can be unified in that sense.

The last thing I want to say is that at ECA, of course, we are in the people business; ECA is perhaps even more so uniquely in the business of human beings. And so certain kinds of concepts that are discussed, such as it’s important to fail -- you have to be ready to fail, every program can’t be good, if you’re not failing then you’re not taking risks -- have to be seen in a rather different light at ECA. It’s one thing, and I don’t mean to say this at all slightingly, it’s one thing to say we tried a new line of tweets and nobody cared, or we put up a new website and nobody was interested. It’s a very different thing to say we brought a hundred high school students from North Africa to the United States for a leadership camp for six weeks and it was a disaster. We don’t ever want that kind of failure to happen. So we’re always looking for ways to do better, but it’s an interesting context when your stock in trade is living and breathing human beings that you move back and forth around the world. Thank you.

Katherine Brown: Thank you very much, Rick. We’re going to open it up to the audience for Q & A. I’m sorry but we don’t have a roving microphone, but we are in such a small room that we will hear your voice. So if you could just stand up and state your name and your affiliation. If the answer requires a microphone, we will have people come up. Are there any questions in the audience? If there aren’t, this will be the shortest meeting we’ve ever had. Of course, Ambassador Olson.

Ambassador Lyndon Olson: This doesn’t have any real relevance to the Commission. But when you talk about links, I know from experience the DoD has its own version of public diplomacy. The intelligence community has its own version of public diplomacy. Are there links that are utilized from State throughout the system? Or are we really being very State-specific here?

Rob Bole: Can I answer that from the BBG point of view? One of the things that we found recently in response to the Ukraine crisis and Russia, and now in Iraq, is that there’s a lot of opportunity between different communities to share information and research. At the BBG, we’ve had an opportunity to sit down with them on a weekly basis, and in fact we have a senior staff person who has been instrumental in our research; 20 percent of that job in the interagency is to share data. There are often times when State is doing policy and public opinion polling that we don’t have. Or they may be first in the field, or we may be first in the field. So I see it happening not from a structural point of view, as in a congressionally mandated link, but we have certainly seen a real increase, especially in the form of the crisis-response conversations we’ve been having, with lots of data flowing between us. It’s been very good, very good for us.

Ambassador Lyndon Olson: And has it been meaningful?

Rob Bole: It’s been very meaningful data, because frankly it also helps us to participate in any kind of policy reviews that the White House may be having. So we are all working from the same data. We all talk about our different approaches to the public diplomacy challenge, but we use the same data and have that conversation.

Ambassador Lyndon Olson: Well, my experience -- I spent 12, 15, 16 years of my life in the private sector as CEO of an insurance company in New York. And the first thing you learn in the insurance business is that you’re only as good as your data. We have these analytical conversations and then we talk about budgets, constraints, etc. And everybody asks the question: is the data similar and credible? And if it is, who will test that? For instance, would the Defense Department say that your data is credible? Would the intelligence community believe that your data is credible? Because if they won’t believe that your data is credible, then we’re kind of whistling Dixie here, right? No offense to you Georgians. But I’ve listened to this conversation for several years now and I’ve often wondered: what do they think about their data-gathering ability? Are they confident that their analytics are okay and credible? And I think the academics have helped, at least as far as I am concerned the academics have helped, to confirm that this is not just a wild goose chase, that there is meaningful material out there to gather. During my ambassadorship, when you talked about exchanges, your judgment told you they were working, because it’s living, breathing, touching; it’s healthy. Invisible data does not always make it easy to discern a conclusion. So I’ve often wondered: are we doing this within the government? But it sounds to me that there’s a level of confidence among you all that this is meaningful data.

Rob Bole: Let me answer from the BBG perspective. I think one thing is that the broader information environment has changed significantly. So you are getting more talented people who think about using data in a much more real-time fashion and applying it to real-life problems. Meaning: how do I get another customer? How do I take techniques being applied elsewhere and apply them to thinking about how I influence audiences? How do I engage with audiences? So I think we’re starting to find a lot more active, practical application of data and research, and I heard throughout all of the presentations today that you’re starting to see this change. I think we are sometimes limited, in terms of budget, in the amount of research we can do on international media. It’s a tradeoff between putting out another television program in a critical region, or responding to a crisis, versus fielding another survey or gathering more data. That’s a hard balance all the time. Having an econometrics background from long ago, and now overseeing research and talking with research directors, I’m pretty confident that we caveat our information enough, and we probably need to do even more of that, to note that getting data out of Iran or out of China is going to be plus or minus something. But I think we feel strongly enough that we take an analytical approach, using our resources and our knowledge, to present a credible picture of what’s going on. It may not be an exact picture but it’s a credible picture. And I take the Commission’s input about being more upfront about the plus or minuses of data in certain countries, about using more advanced statistical techniques and about aligning that with strategic goals; those are very good points that would improve the credibility of our data. But overall I’d say we do a very good job with limited resources, with the caveat that we are continuing to improve. So I feel that we have good decision-making ability. Can we have better? Yes.

Anne Terman Wedner: Can I just jump in, Lyndon? I think that observation is incredibly important, not to make anyone in the room defensive about anything that’s going on, but in terms of our own overview as the Commission. This year we’re really focusing on the BBG and the State Department. But ultimately there are more than 50 federal government actors in public diplomacy. And when each of these actors is sponsoring and doing its own research and evaluations, it becomes duplication, unnecessary duplication to some extent. And there’s also the question of which arm you’re getting the best research out of, and who should be providing it. So I think, looking forward, as a next step for our own project as we embrace more and more of the agencies, it will be interesting to see if we can form some kind of working group that goes across agencies, so that you all also have available to you information from them, and a better understanding of what exactly they’re involved in. Because honestly, I don’t think anybody really knows. It’s a very, very big, disjointed world. So I think that is an excellent segue to what a follow-up in a future year would be for us: trying to understand how we’re sharing some of these ideas across agencies.

Katherine Brown: Anyone want to respond to that? Yes, please.

Brian Carlson: Hi, I’m Brian Carlson, representing Intermedia. A question, perhaps for the presenters: I come at this from the point of view of thinking that a lot of what we do in public diplomacy, at least at the BBG and State, actually addresses rather micro audiences in foreign countries. The number of Chinese, or the number of Russians, actually affected by any given public diplomacy program is small. Does this cause you to think that we, or that State and the BBG, should really use more qualitative methods of research: interviews, focus groups, demographic studies? I mean, it’s very hard to do a nationwide survey and reach the narrow audiences we actually touch. Should there be a shift in the way the resources are allocated?

Katherine Brown: Amelia do you want to take that?

Dr. Amelia Arsenault: I wouldn’t necessarily say that qualitative research should replace quantitative research. But it definitely can deepen our understanding of the findings from quantitative research. That was one thing we were talking about as well. Particularly with social media, which is 10 years old, if that: we understand that people are tweeting, but we don’t really understand what that means. Nobody has a secret formula. And only qualitative research can really help to deepen that. Yes, qualitative research is sometimes more expensive, and you’re going deeper and narrower. But I think that you need both.

Katherine Brown: Yeah, Jay. Of course.

Dr. Jay Wang: As we also pointed out in the report, it is true that in some countries the audiences are more micro than macro. But the key is how you define the population. It’s not about the population in the general sense; what’s your relevant population? It could be a pretty macro group. And if you dig into these various countries, you can see that there are different segments across the world. So certain countries would call for a more macro, opinion-survey approach, supplemented and supported with qualitative work, while in other countries the micro approach may be more appropriate.

Katherine Brown: Shawn, do you want to say anything?

Dr. Shawn Powers: Yeah, Brian, thank you for your question. I think for us there is tremendous value in both quantitative and qualitative data, and integrating those inputs into the same reports and using them together, weaving them to tell stories of what’s going on, is crucial. You can use survey instruments to find out what’s happening in broad strokes, and then, knowing those broad strokes, use focus groups and in-depth interviews to ask more specific questions based on the results of the surveys. That kind of integration can be really effective.

Katherine Brown: And Craig?

Dr. Craig Hayden: Yeah, just to follow on what my colleagues were saying: I think that different kinds of research feed into each other, right? Qualitative research can generate questions that can then be tested and followed through with large-scale quantitative measures and audience analysis at the same time. Also, I think it’s important to understand -- I touched on this a little bit by stressing the importance of contextual analysis -- the cultural and social context of the communication platforms that the audiences we have identified actually use. Sometimes that information isn’t available through either scraping the Internet for network data or polling people. We need to be paying more micro-level attention to how media is a part of everyday life and how it matters for particular purposes. So it’s not an either/or. We recognize this is expensive and there are limited resources, but ideally we would have both kinds of capacities.

Katherine Brown: Great. I saw someone had their hand up back there. Yes, please.

Rebecca Walter: I had a question for Shawn.

Katherine Brown: Okay, and if you could state your name and affiliation.

Rebecca Walter: Sorry, I’m Rebecca Walter. I work for an audience analysis group. My question is about self-reported data. I was hearing from the BBG about the requirements for snap polls. So I was wondering how we move from these little snapshots of data, which let us answer questions in a week’s time, to those very extensive surveys?

Dr. Shawn Powers: I defer to Sonja or Bill Bell.

Dr. Sonja Gloeckle: Well, for audience analysis specifically --

Rob Bole: Sonja Gloeckle is the Research Director for the BBG.

Katherine Brown: Yeah, thank you. And Sonja, could you please stand up?

Dr. Sonja Gloeckle: Sure. For audience analysis specifically, we obviously would get data from more than just one source. And in terms of using snap polls, we would use them in conjunction with other already existing international surveys and with quantitative surveys. We do them whenever we can. We obviously do have some bureaucratic restrictions on getting things into the field; it’s very difficult for us to say we want to do something tomorrow. But we are working on ways to shorten turnaround time. For example, when the Ukraine crisis happened, we were in the field within several weeks, and I think we were the first Americans to do so.

Katherine Brown: Thank you. Are there any last questions? If there are none, we’ll close, so speak now. Okay, Ambassador Olson.

Ambassador Lyndon Olson: Thank you all very much for being here for a very insightful conversation on research and evaluation for public diplomacy. We’re very grateful to our speakers and the report contributors for taking the time to be here today, and to the representatives here from State and the BBG, in addition to all of you in the audience, for your participation. The Commission is moving forward with several activities that seek to understand the depth of impact of diplomatic activities. In addition to looking at how public diplomacy and international broadcasting are measured for impact, we are looking at how they are conducted in high-threat environments, and at the human resource dimension of public diplomacy. We ask you to save Friday, October 24th, for a half-day event that the Commission is holding with the U.S. Institute of Peace, the McCain Institute and the Truman National Security Project on how to better support civilians working in high-risk environments to better engage foreign citizens. For more information you can speak with any of the Commission Members, or with Katherine Brown, our Executive Director, or Chris Hensman, our Senior Advisor. There are also more materials available at the back of the room if you would like them. Our next meeting will be December 11, 2014, when we will present our annual comprehensive report on public diplomacy and international broadcasting activities worldwide. Until then, thank you very much.