Workshop on Electronic Media Engagement
The second half of the morning, we’re going to have presentations from the Public Diplomacy Office in the Department of State, and we have -- we have three presenters and a moderator, so you lose me very quickly.
The moderator is Cherreka Montgomery, a colleague who I have worked with for the last -- what was it, five years? Some horrible, long period of time like that, seems like five years and seems like two weeks sometimes.
She is -- she works in the measurement unit. In fact, she is the director of the evaluation measurement unit of the Office of the Undersecretary for Public Diplomacy and Public Affairs. She joined RIPR -- isn’t that a great acronym, RIPR -- in October 2005. I keep expecting that office to change everything, you know, sort of a super person to come out; it never seems to work that way.
But before that she actually -- before 2005 you were working with the office as a consultant, weren’t you? Or is it 2005 when you came on as a consultant? And then she has been with it ever since and has, I think, contributed significantly to the way that the office and the undersecretary’s office as a whole thinks about measurement and evaluation. As I said, she is going to be the moderator, and if she wants you to know more about herself, she will tell you. So I’m pretty sure she won’t.
The other three presenters, the first one is going to be Dr. Christopher Toppe, who is a senior evaluation officer in RIPR, research methodologist and statistician with broad experience in national and international data collection. He is also an adjunct professor at Georgetown University where he teaches public policy research methodology.
Juliet Dulles is a social science analyst in RIPR, an evaluator and program manager with a background in international development and program design. She has an MA from Johns Hopkins. We don’t need to go through all the SAIS components of it.
And Dean Olsen is a social science analyst in RIPR as well. He began his career with the Department of State as a presidential management fellow, where he served as a management analyst in the Facility Security Division, the Office of Overseas Protective Operations, and the Office of Protocol. That is a long way from RIPR, but he has obviously made good strides and moved out of that area into something which I think is probably more interesting. He has a PhD in public administration, homeland security and coordination. So you can see sort of the schizophrenia that he has to deal with every day when he comes to work.
Cherreka, your team may begin.
[ View slide presentation ]
MS. MONTGOMERY: Good morning, all. Thanks so much for coming. This is a real delight for us to be here and have an opportunity to really talk about one of our more innovative projects out of the Office of the Undersecretary in our new evaluation and measurement unit. We’re going to talk about our electronic media engagement program evaluation. It’s something that is really, in my mind, trailblazing. We’re definitely going to places we have never been before. Just sort of to start it off, I am Cherreka Montgomery.
So I am going to give you a brief intro. With public diplomacy and working with the Department of State, sometimes we’re inspired by our great external agencies, one being GAO, but in this case it was both the GAO and the Office of Policy, Planning and Resources, as well as the Bureau of International Information Programs, who a couple years ago decided that it was really time for us to take a closer look at how public diplomacy could continue to use Web 2.0, or sort of digital media campaigns, to further engage our key foreign audiences, particularly those in some key locations overseas.
And this is a great quote from GAO that I like to refer back to, to keep us all sort of focused and guided, and I think GAO got it right with this one. They talked about dynamic shifts in how target audiences obtain and use information.
And that is really the new -- and I think Senator Hagel mentioned this yesterday -- our new paradigm that we’re actually experiencing is that information is being transmitted rapidly. And the way that we used to be able to communicate and program public diplomacy may still work in many cases, but particularly with younger audiences and those young demographics: how are young people getting information, how are they forming impressions, and how are those impressions then being transmitted into their behaviors overseas? That’s what we’re most -- that’s what we’re most interested in.
This is a brief, high-level outline of what this project is all about. The acronym is EME. The project benefits -- what we’re really looking at is trying to document, measure and provide evidence of efficiencies around the current reach, scope and depth of Web 2.0 efforts.
When we sort of started this project off, we wanted to take a real deep examination of what the Bureau of International Information Programs was doing, and the reality set in. We’re doing so much when we take a look at what electronic media means. It means blogs, it means tweets, it means SMS, it means podcasts. It means all the things that we’re doing digitally to communicate. And so what we decided to do is pause, take a step back and go through a rescope effort.
And so the first year of this evaluation is really about market research. It’s really, we’re going to do audience analysis, we’re going to do situational analysis, trying to understand what are current audiences overseas doing with these tools to share information, who is driving those information patterns, what topics are they most interested in, and sort of lay some baseline data around the reach, scope and depth of Web 2.0 penetration in some key demographics as well as audiences.
Our primary objectives, first and foremost, is that we want to do this formative evaluation of IIP. That will begin October 1, 2010. Right now we’re doing the market research study in a number of countries, and my colleagues will talk about that. I think the other exciting thing is that we’re doing Web conversation crawling and Web conversation analysis and also sentiment analysis.
And I do want to sort of give you a disclaimer that everything that we’re going to do around Web conversation crawling and any data that we’re going to give you today, it is public data. That is, information that’s available in the digital blogosphere, digital ecosphere that is not password protected. So we are not going behind walls and plowing through people’s private e-mails or things that are locked down; this is all public information.
What we hope to do -- and here is a takeaway -- is that we are developing a methodology right now for how you measure and how you assess the impact of digital engagement. We’re trying to develop this methodology and bring it home to public diplomacy and therefore turn it over to the Bureau of Information Programs.
It has never been done before. We started off without any clear idea of what to do, but we think we have assembled the best and brightest to help us lay this foundation.
The other thing that we want to do is really come up with some new performance measures. We have some core performance measures for public diplomacy, but we need to branch out. We want to establish both the methodology and metrics of success that public diplomacy could use when it’s implementing massive digital campaigns around the world.
So to really simplify all that I have just said, here is our two phase approach. The first one is market research; that’s where we are today. Market research entails needs assessment. We’re always about doing a needs assessment and a gap analysis. We know that it is very difficult for many of our posts, particularly given just infrastructure issues, staffing issues, to actually use these tools.
We want to actually get out into the field, have conversations with the posts, have key stakeholder dialogues with our public affairs officers at these key posts. We’re holding focus groups with local audiences, and we’re really paying close attention to young people, people who are between the ages of 18 and 25. Sometimes we even go a little bit lower, to the age of 16, to really get a sense of how young people are engaging and using these tools and how they are communicating messages.
We are also taking a look and also doing structured interviews. We’re going to talk a little bit about how we’re launching some online surveys to help us really sort of collect and document this information as well.
Big event analysis. Well, in public diplomacy -- and really particular thanks to Undersecretary Judith McHale -- we are really in a campaign mode, so all of our posts have the message that the undersecretary and we here in Washington want to see big things. And by events, we mean, if there is an event such as the Shanghai expo that is taking place in China in a couple weeks, that is an event, and we know that our public affairs section in Beijing is doing lots of things to message around that, to get the word out, to really communicate what the U.S. interest is in this particular event.
And so what we’re planning to do is actually point a telescope using a tool -- my colleague will talk about this -- so we can actually do Web conversation crawling as well as blogosphere crawling, to sort of see what the conversations are today, and then, when we actually apply our intervention -- our messages or our campaign strategies around the expo, as an example -- what is the after effect. So that is what we mean by events: both unforeseen events and, in this case, a planned event.
Web conversation tracking, we’re calling them monitoring topics but really in public diplomacy lingo, this means our strategic objectives. Every post around the world, whether or not they’re following the mission resource strategic planning goals or the strategic objectives as articulated by Undersecretary McHale, we have strategies, we have goals in mind.
But in order for us to transmit those goals into the tools that give us the ability to crawl in the blogosphere as well as to sort of do the sentiment analysis, we have to transfer those objectives into topics. And I think a lot of us already know this: sometimes we can have the best ideas, the best phrases here in Washington, but once we sort of message that out, it may resonate with audiences very differently. And those are the things that we’re finding out in our project to date.
We’re also doing trend analysis. We’re taking a look particularly at young people, a young and very critical demographic, over a period of time, particularly over the course of eight months in this fiscal year: how are their opinions changing? We have launched a number of snap polls in Pakistan. We’re launching some online surveys, even today, in Indonesia as well as Argentina and some other countries as well.
We are also right now looking to our partners, particularly our private sector partners, to give us the best of breed: what some private sector companies and others have used as performance metrics to measure their impact and how they engage their key audiences, their constituents or their consumers through digital and electronic media campaigns.
And Phase 2, which will start in fiscal year 2011, is really a well designed, formative evaluation of the Bureau of International Information Programs and their electronic media campaigns.
This slide I know is a little bit difficult to read, so let me just sort of give you the high points. This evaluation, what it is is market research. We want to know how people are using Web 2.0 and what the most strategic platforms are. Facebook may be the best thing for us, but Facebook may not be what is really being used in places like Algeria, for example. What are the Web media user demographics? Who is online? Who are the drivers?
So we’re actually applying some social network analysis methodologies to help us do some mapping about who those audiences are, which platforms are used, what the message reach is, what people are responding to. We’ve got some very interesting data from Indonesia that shows that there is in fact yet another gender gap. You have got more women who are blogging in Indonesia versus men, and yet men make up most of the population of online users. I think that is very interesting.
Also, we’re going to come up with -- recommend -- best media communications plans, because we’re applying both audience analysis and situational analysis. We’re going to do some mapping of the foreign audiences: what they’re saying, the volume of conversations that takes place monthly and quarterly, as well as tracking the sentiment and tone. We’re going to talk a little bit about that, and this is how we’re converting the strategic PD messages into topics that we then feed into these new tools.
We’re doing this crawling both in English as well as in select key foreign languages. We have already talked about private sector best practices.
I think the other thing that we’re really most excited about is we’re developing or launching a process that identifies for the Bureau of International Information Programs where they are today, documenting their efficiencies, their programs, their processes to date, but also giving them a blueprint of where they could be in the future. How do we really make the Bureau of International Information Programs that 21st century, new age bureau that the undersecretary has articulated in her new strategic framework?
What this evaluation will not do: we’re not going to do a close examination of every single program or office within the Bureau of International Information Programs. Our focus is all about electronic media platforms and the delivery of those tools as a way of delivering the public diplomacy strategic objectives to our key foreign audiences.
This program evaluation will not be an aggregate assessment of all that we are doing in the Department of State around electronic media. I mean there are some cost constraints. I mean there is scope. There are a lot of things happening every day in the digital ecosphere. We’re only going to focus on select countries, and we’re not going to, in this evaluation, evaluate the overall performance or the impact of everything that these select posts are doing when it comes to public diplomacy.
So if there is nothing else you remember from me or take away from this particular slide, it is that we’re only pointing our telescope to focus on electronic media and how we’re using those tools. Of course, we want to make sure this is an independent evaluation, and so we’re partnering with what I think are some really great partners.
Office Remedies, Incorporated, is a small, woman-owned business here in the Washington, D.C. area that has a lot of experience doing analysis, even on the national security side. GfK Roper is the one I am most proud of. GfK Roper is sort of behind the scenes; they’re actually the authors of the Nation Brands Index, so they’re bringing their hexagon approach to public diplomacy to help us do the electronic media program evaluation.
When it comes to cutting edge technology and the tools that are helping us sort of point that telescope, do the crawling and code the sentiment analysis, we’re turning to Morningside Analytics, who have been a very good partner for the Department of State and some of our other interagency partners in the past. And Crimson Hexagon, which I think is most exciting. They’re out of the Harvard area, and they are the ones actually helping us, working in partnership with both Morningside and GfK Roper, to take all the information we’re finding online and feed it into yet another tool that gives us the sentiment analysis and coding capabilities.
So my job is to be the facilitator and to open this thing up. I am going to turn it over to my colleagues, but I do want to tell you that we have divided our presentation into three components. Dr. Chris Toppe is going to talk about really the Web 2.0 side: what we have discovered so far, some of our pretest data from our Web 2.0 tools and sort of that process.
Ms. Juliet Dulles is going to talk a little bit about sort of the operational analysis. She is going to give you some highlights on some of our focus group findings and also, our Pakistan snap poll findings.
And Dr. Dean Olsen is going to bring it home for us and talk a little bit about our core evaluation methodology and strategy as we begin to approach the Bureau of International Information Programs evaluation.
Chris? Thank you.
MR. TOPPE: Good morning, everyone. It is a pleasure to be here and have the chance to chat with you about what we’re doing. I want you to think about this as a 360 degree circle, a whole process. There are communications taking place on the internet, in the blogosphere, through Web logs and Twitter and all kinds of things, that in a broad sense overlay what is going on in country. We have an interest in that.
It’s a new way of doing social network analysis. It’s a new way for us to reach out to new audiences. We really want to know who is talking to whom and what are they talking about, what are the topics that bring them together. They have choices. They can self identify, self select into certain groupings. We’d like to know, how do they do that. We’d like to know what kinds of things drive them when they are interested in interacting with each other online and do they have any interest in topics that are important to public diplomacy.
So we don’t create the topics they talk about. We’re actually going out and looking at what they have chosen to talk about, what they are doing. I will explain this to you in a minute. I’ll let you look at it while I tell you how we get here. We analyze the linkages between people.
So this is language agnostic. It is language neutral. And we are looking at inbound and outbound links between people. And what we find is that certain people link to certain other people with some predictable frequency, and that once we find out what these linkages are we can then turn them into a map.
Now this is actually a three dimensional sphere displayed in a two dimensional space. If we had this in three dimensions, we could rotate it and look at all the different things about it; this way it’s easier to see. But each color is a grouping, a cluster of people who have chosen to come together and interact with each other. So your blue cluster are people who talk about the same kinds of things; your green cluster, your magenta cluster are all people who have chosen to come together and interact with each other. And we are measuring how they link to each other.
Each circle is an individual or an individual blog or a Website. The size of the circle indicates their influence. The larger the circle, the more influence they have, which means the more links they have going into their site or links they have going out of their site. So we see two things here. We see who is talking to whom and we see the relative importance of individuals within each one of those clusters.
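The influence measure described here -- each site's inbound plus outbound links -- can be sketched in a few lines. This is a minimal illustration, not the actual Morningside Analytics tooling; the blog names and link data are invented:

```python
from collections import defaultdict

# Hypothetical directed links between blogs: (source, target)
links = [
    ("fashionista", "style-daily"), ("style-daily", "fashionista"),
    ("style-daily", "celeb-watch"), ("politics-now", "celeb-watch"),
    ("fashionista", "celeb-watch"),
]

# Influence score = inbound links + outbound links for each site
influence = defaultdict(int)
for source, target in links:
    influence[source] += 1   # outbound link from source
    influence[target] += 1   # inbound link into target

# A larger score would be drawn as a larger circle on the cluster map
for site, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(site, score)
```

Note that because this counts only link endpoints, it is language agnostic in exactly the sense described above: no text is read, only the link structure.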
Now as we look at this, we can see that that’s interesting, but what does it mean? So we have actually hired people who are language experts and culture experts, who have been in country in the last two years. They go in and they read blogs and find out what it is that is driving these people together, and then they are able to name the cluster for its general topic.
So in the upper right-hand side we have a fashion cluster -- and by the way, this is Indonesia, and we just received this in the last week, so this is actual data. The fashion cluster is primarily women; they’re talking about many things. They talk about their husbands. They talk about their kids. They talk about recipes, but the primary thing that seems to cross all their conversations at one point or another is fashion. It doesn’t mean they only talk about fashion. That’s kind of the way that we say this cluster is the fashion cluster, and we know from public information that it is primarily female.
You can also see in the upper corner -- upper right -- the celebrity cluster, also primarily female. Now if you drew a diagonal line kind of from the upper left to the bottom right, the top half of that is primarily female and the bottom half is primarily male. And if you look at your colors, there is very little cross-pollination between those clusters. So we have clusters of people who are talking about topics. They are united by those topics. And they don’t seem to have much interest in the topics that other people are organizing around.
Now if we showed you Russia, it looks very different. If we showed you China, it’s totally different. So we know that we can’t just say in every country, in every culture and every language, we’re going to get the same clustering and the same topics. They differ by country. They differ by culture. They differ by language. We find this to be a very interesting and fascinating kind of thing.
So now we have identified certain clusters of people that are opportunities for public diplomacy that are different from how we normally define our target markets. These people may be elites. They may be semi-elites, but these are different ways of defining entry points into the market.
Now that last slide fascinates me as a social scientist, but what is really interesting to us as we think about developing these tools for public diplomacy is can we influence the conversations? This is one of the key things that we hope to be able to do with this technology, and that is track the level of conversation through the blogosphere in response to events.
So President Obama is going to Indonesia, or the Cairo speech -- can we track how that goes through the blogosphere? Can the posts make an intentional effort to do something, where we know when that occurs in time, and we can say: we’re going to track conversations around this event prior to that time, we’ll track conversations when the event occurs, and we’ll see how much the conversation increases and how long the increase lasts?
This to us is a very exciting possibility for this kind of technology, but it’s more than just that, because public diplomacy is not about just tracking conversations and tracking events. Because, to tell you the truth, if we go back on this thing: is that positive or negative? Are they saying good things? Are they saying bad things? Did the speech resonate or not? Are they pleased or displeased?
So we need to take a look at a deeper level about specific kinds of messages. One thing we know about public diplomacy is that we tend to talk in very broad terms, which is different from the terms that people use in normal conversations. So one thing we need to do is translate those public diplomacy, Washington, D.C. kinds of terms into terms that people use when they talk.
So we take our strategies and objectives, events, narratives, we take those things and then we work with posts to try to understand if people are talking about these things in your culture, in your language, what does that sound like, what words do they use when they are talking about things and what words indicate whether or not those things are positive or negative?
So here is an example. Is the PD message being accepted or rejected? And we’re looking here at a public diplomacy objective: to improve or increase understanding of U.S. policy, societies and values. If we searched the internet, we would never find that term. That’s not the way people speak. But we condense that down into what we call our sentiment analysis terms: conversations about America and Americans.
We actually go into these web logs and we examine how people are talking about America, and we come up with a series of terms that we can then classify conversations into. On the positive side, they enjoy American pop culture, they have a positive view of American character. On the negative side, they dislike American culture, you know, they have a negative view of Americans.
So now we can actually go in and we can tell the machine, watch for these terms. Again, once we move into this stage, it is language agnostic. The machine is looking for character strings, and it says: when you find these character strings, this gets coded as a positive or it gets coded as a negative. So now we’re saying not only did the conversation increase, but did the conversation increase in the way you wanted it to increase -- did we get a positive response out of this event or did we get a negative response, and what is the relationship between that?
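The character-string coding Dr. Toppe describes can be sketched very simply. This is an illustrative toy, not the Crimson Hexagon tool; the term lists are invented stand-ins for the sentiment analysis terms developed with posts:

```python
# Hypothetical sentiment term lists; the real lists are built with posts,
# in the local language, from how people actually talk about America.
POSITIVE_TERMS = ["enjoy american pop culture", "positive view of american"]
NEGATIVE_TERMS = ["dislike american culture", "negative view of american"]

def code_sentiment(text):
    """Code a conversation snippet as positive, negative, or neutral
    purely by matching character strings -- no parsing, no translation."""
    t = text.lower()
    pos = sum(term in t for term in POSITIVE_TERMS)
    neg = sum(term in t for term in NEGATIVE_TERMS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Applied to a stream of crawled conversations, counting these codes over time is what lets you ask whether conversation increased in the direction you wanted.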
Let me back up one second to this point here. Oops. I’m going the wrong way.
Up on the upper right hand, that women-dominated sector: let’s say a post wanted to do something to influence the women-dominated sector, the fashion sector, the celebrity sector, that upper right-hand quadrant. And they actually have an activity or an event planned, because they would like to improve their relationship with women -- another key strategy of public diplomacy -- so they do something and we are able to watch it.
This is hypothetical data. Don’t go out there and say look at the great things these geniuses have done at EMU because we haven’t done this yet; it’s hypothetical. But if you look at this in terms of a prototype, there is an event planned to attract, to influence the conversation in a particular cluster.
In this case, the cluster is called, the light blue line, the fashion cluster. This is in Indonesia, primarily women, and the embassy did an event, and we can see that before the event, there was little positive conversation. There was some, but relatively little positive conversation about the United States. After the event, you can see the conversation increased and came to rest at a place higher than it used to be, whereas in the male-dominated public discourse cluster, the event had almost no impact.
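The before-and-after comparison behind that chart amounts to averaging positive-mention counts per cluster on either side of the event date. Like the chart itself, the numbers here are hypothetical, invented purely to show the shape of the calculation:

```python
from datetime import date

# Hypothetical daily counts of positively coded mentions, per cluster
daily_counts = [
    (date(2010, 5, 1), "fashion", 2), (date(2010, 5, 2), "fashion", 3),
    (date(2010, 5, 10), "fashion", 9), (date(2010, 5, 11), "fashion", 8),
    (date(2010, 5, 1), "public discourse", 4),
    (date(2010, 5, 10), "public discourse", 4),
]
EVENT_DAY = date(2010, 5, 5)  # the embassy event

def average_level(cluster, after):
    """Mean daily positive-mention count for a cluster,
    before (after=False) or on/after (after=True) the event day."""
    counts = [n for d, c, n in daily_counts
              if c == cluster and (d >= EVENT_DAY) == after]
    return sum(counts) / len(counts)

# The lift is the change in the resting level of positive conversation
lift = average_level("fashion", after=True) - average_level("fashion", after=False)
```

Here the fashion cluster's level rises while the public discourse cluster's does not, which is exactly the differential impact the hypothetical chart depicts.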
This is what we’re hoping to be able to provide to IIP and to public diplomacy as a tool so that when we are doing outreach through electronic media we can actually follow what happens and get some idea of the impact.
Where are we now? We have been able to see how people relate to each other in the blogosphere. That is the map that we showed you. And we have been able to identify what interest tied those groups together. So we have the blogosphere clustering, and we have been able to go in and identify the topic that holds those clusters together.
We know those clusters tend to remain static. Over time a cluster will expand or contract, but generally speaking the cluster formations tend to remain static, which is very good for us because we don’t need to constantly relook at the structures of the blogosphere in a country.
What we have underway is working with posts to identify topics of interest to the posts that we can then search for in the blogosphere, and we’re trying to identify other audience segments that are already interested in topics that are of interest to public diplomacy.
That would be great. That is the 50,000 foot view of this circle going around the blogosphere of a country. But that is not enough, okay. What we also need to know is what is going on in country, on the ground, at the post level, with their infrastructure and their audiences, and for that we are going to turn to Juliet Dulles.
MS. DULLES: Good morning. My name is Juliet Dulles, and I’d like to give you a brief idea of how we’re taking the 50,000 foot view and bringing it down to the post level and trying to get a holistic view of what electronic media engagement looks like when you go to the post.
So what does the new paradigm look like for PD at the post level? Well, if we take a moment to think about that context, electronic media functions through a series of spheres, the first being the post and the vision, the messaging, the platforms that are being used, staffing, the IIP services and products that they use and the audiences that they plan to be reaching with electronic media.
The next sphere which shapes and constrains what the post is doing is the infrastructure of the country, what is the level of internet penetration, the bandwidth, censorship, government monitoring, SMS use and smart phone adoption? What do these technologies mean to how it is we’re going to communicate certain messages?
And finally, audiences. What are these key foreign audiences interested in knowing? What are their attitudes and opinions, languages that they’re using and culture?
So we’re using a mixed method approach, both qualitative and quantitative data collection, to better understand -- to get that 360 degree view of -- posts and their functioning. We’re applying a post needs assessment, looking at messaging and lessons learned from electronic media engagement to gather more information about infrastructure. We are also using the post needs assessment data collection to better understand the technological factors that constrain and support messaging, along with focus groups, quantitative online data collection and open source research; there is a good deal of open source material that can tell us about what is available.
Finally, for audience data we are using focus groups and quantitative, the quantitative online data collection to get a very nuanced and textured view of what audiences are using and what they want to know.
Our post needs assessment is giving us that more detailed look at what’s happening at the post, and so we’re asking questions about how posts are operating, what the infrastructure is that is influencing their operations, and post goals and objectives. Finally, also, what cultural barriers are posts encountering?
For foreign audiences we’re fleshing out the information that we’re gathering with the Web conversation crawling. So having seen what the conversations are, and knowing something about who those people are who are having those conversations, what more do we know about how they use the internet and what their capacity is to connect and receive the messages PD is sending?
What I’d like to do is give you some information from our data collection in the focus groups and the Pakistan snap polls. This data is providing us a narrative that enhances and helps us understand the conversation crawling we’re doing, as well as adding nuance to our quantitative data.
In Indonesia, what we found is a muted sense of optimism about the country and its prospects. We have also found that respondents there are information consumers but they are not information creators. They are interested in sports and pop culture, but less so in education in the United States. And this is particularly key, as education is a key PD goal for post Jakarta.
They are however interested in studying abroad, and I think this is an example of how this focus group data can help inform what we’re going to be doing with assisting posts and supporting them in constructing PD messages.
And turning to Pakistan, the information we gathered in the focus groups and snap polls data there shows us how this is truly a country in turmoil. Respondents there expressed a desire to be understood, in particular expressing a frustration with search procedures when traveling to the United States. They have a particular interest in skills transfer and technology and an interest in capacity building in the education, healthcare and energy sectors.
It’s interesting to note that both countries have a suspicion that USAID comes with strings attached. And so again, I think this is another example where this data is going to provide additional information on how to effectively communicate with people using electronic media.
So having gathered the data for the 360-degree view of how posts are operating, we also need to look at the products and services provided by IIP: how they measure their success and what kinds of services would best serve our posts and PD. This will be covered by my colleague Dean Olsen.
MR. OLSEN: Thanks, Juliet. My name is Dean Olsen. I am a social science analyst in the evaluation measurement unit. Today I am going to talk to you about the formative evaluation of the International Information Programs electronic media products, services, and metrics. I'm going to cover three points with you: our formative evaluation research design, how we operationalize that research, and some of the platforms IIP uses and the metrics they currently have.
This here is a depiction of our logic model. This model is what we’re using to conduct our evaluation and it is the foundation of our research design.
The research logic model has four components: inputs, processes, outcomes, and ultimate outcomes. Inputs are the data we want to gather; processes are how we're going to get and analyze that data; outcomes are what we want to achieve out of it; and ultimate outcomes are the impact.
We have three levels: the post-level operational analysis, the Web conversation tracking, and the IIP formative evaluation. Today I really want to cover the IIP formative evaluation in detail, but I want to show you that we have each level covered.
This here is the Web conversation tracking, which Chris Toppe covered in detail, and you'll see that not only is innovative technology being used, we also have key informant dialogues. We're gathering qualitative information to help inform that process as well: we're covering issue priorities and getting the value tones of those priorities to help inform the Web conversation tracking.
Now I want to cover the IIP formative evaluation in detail. For the input section, we're going to cover three areas: product platform selection and usage, the metrics, and identifying measurement needs. We're going to do this through key informant interviews -- gathering information and documenting IIP's views -- content analysis, and developing a matrix to analyze their platforms and their platform usage.
Like any good research, we begin inquiry by asking key questions. We have broken down our questions into three categories. One is platform selection and usage. The second is metrics, and the last one is the IIP-post process.
In the platform selection and usage category, we're asking: what platforms is IIP using, how do they use those tools, and how does that compare to industry standards? In metrics, we're asking: what is success and what does it look like? What does success mean to IIP and post?
From the IIP-post process category, we're asking: how does IIP work with posts, and how do they get feedback from posts? This here is the IIP social platform matrix. We worked in concert with IIP to develop this matrix, which describes their current platforms: the type of platform, the direction of operation strategies, the level of control IIP has over the platform, the marketing strategy, whether the platform is unidirectional, bidirectional, or community-generating, the intended audience, and the intended reach of that platform -- global, regional, or country specific.
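To make the shape of such a matrix concrete, here is a minimal sketch of how rows like these could be represented as structured records. This is an illustration only, not IIP's actual data model; the field values below are drawn loosely from the discussion and are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PlatformEntry:
    """One row of a platform matrix like the one described above."""
    name: str
    platform_type: str   # e.g. "website", "social network"
    control: str         # level of control over the platform: "total", "partial"
    direction: str       # "unidirectional", "bidirectional", "community-generating"
    marketing: str       # marketing strategy, e.g. "Google ads"
    reach: str           # "global", "regional", or "country specific"

# Illustrative rows using platforms named in the talk; values are examples.
matrix = [
    PlatformEntry("America.gov", "website", "total", "bidirectional",
                  "Google ads, Facebook ads", "global"),
    PlatformEntry("Democracy Video Challenge", "video campaign", "partial",
                  "community-generating", "social media", "global"),
]

# A matrix in this form supports simple filtering, e.g. all global platforms:
global_platforms = [p.name for p in matrix if p.reach == "global"]
```

Structuring the matrix this way makes cross-platform comparisons (by control level, directionality, or reach) a matter of simple queries rather than manual review.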
America.gov, as an example, is a Website. It tries to engage an audience directly through themes and messages from the U.S. IIP has total control over this Website. It is marketed through Google ads and Facebook ads. It was developed around January 2008. It is increasingly bidirectional: instead of just pushing information out to the audience, it now engages the audience. For example, visitors can download videos, take quizzes, interact with blogs, or somehow engage with the Website. We're trying to engage an audience that is educated but not expert in the area, between the ages of 18 and 34, and the reach of America.gov is global.
Platform selection and usage is very important because the IIP message is global. Considering the size of the audience IIP is trying to reach, the size of the internet, and the size of the IIP message, we see that the message's splash is very small in comparison.
IIP uses a variety of platforms to message -- Websites, blogs, social networks, Web chats, podcasts, and webinars, as examples -- and they use multimedia: YouTube, SMS, and RSS feeds.
Some of the specific examples are America.gov, E-Journal, Facebook, Conex, the Democracy Video Challenge, and Obama Today. IIP messages in seven languages: English, Russian, Chinese, Spanish, Persian, Arabic, and French.
The Democracy Video Challenge is a unique way we engage the world in the meaning of democracy. We asked audience members to create a short video clip completing the phrase "democracy is." This is an ongoing dialogue that we have with foreign audiences about democracy.
IIP messages globally, to many different cultures that are disparate and heterogeneous. Using the World Cup as a backdrop, the following video demonstrates not only the creative capacity of IIP but the dynamic nature of messaging on social media.
VOICE: Left foot, looking for a chance and in the corner, yeah.
VOICE: If you look at what the planet, if you will, has in common, other than humanity, the next thing might be the sport of --
MR. OLSEN: What we see with that video is the creative capacity; it is amazing how IIP messaged to the world. Now, through that particular video, how is IIP making an impact? Are they changing attitudes? How do we know this? How do we know that IIP has influenced behavior in some way, positive or negative? This is the challenge we have in the electronic media engagement project: trying to measure the impact that particular video might have on an audience member.
Was this particular video shared with somebody else? How was it propagated throughout the social media? This is what we’re trying to do here in our project.
Currently, IIP does have some output measures, and this is the IIP Web traffic report. With America.gov, as an example, because IIP has total control of that particular Website, they're able to track the number of times a member visits the Website, how long they spend on it, whether the visitor was international or domestic, and the depth of their visit. As an example: did they download a video, take a quiz, or somehow engage with that Website?
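As a rough sketch of what output measures like these look like when tallied from visit logs -- the record format and field names here are hypothetical, not the actual IIP Web traffic report:

```python
# Hypothetical visit records: (visitor_origin, seconds_on_site, engaged).
# "engaged" means the visitor downloaded a video, took a quiz, etc.
visits = [
    ("international", 120, True),
    ("domestic", 45, False),
    ("international", 300, True),
]

total_visits = len(visits)
# Share of visits from international (vs. domestic) visitors.
intl_share = sum(1 for origin, _, _ in visits if origin == "international") / total_visits
# Average time spent on the site, in seconds.
avg_duration = sum(secs for _, secs, _ in visits) / total_visits
# Depth of visit: fraction of visits with some engagement action.
engagement_rate = sum(1 for _, _, engaged in visits if engaged) / total_visits
```

Counts, durations, and engagement rates like these are classic output measures: easy to compute once the site is fully controlled, but silent on whether any attitude or behavior changed.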
But this type of measurement is not enough. It doesn't answer the questions we're asking in this project: have we influenced behaviors, have we changed attitudes, have we created more understanding? This type of information is useful in some ways, but it doesn't answer those broader impact questions, and that's what we're trying to answer with this project.
So we have identified three additional measurement needs: audience analysis, situation analysis, and sentiment analysis. Some of the questions we're asking in the audience analysis, for example, are: who are the key influencers in the social media realm, where are they holding conversations, and where are there opportunities to reach new audiences?
From the situational analysis aspect, the questions are: where is the audience? What are their interests? What are they talking about? And can social media bridge the last three feet of the public diplomacy effort? Can it do that?
In the sentiment analysis, we want to know: did it create positive or negative change? Was there a behavior change?
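One common way to approach the positive-or-negative question -- offered here purely as an illustration, not necessarily the method used in this project -- is lexicon-based tone scoring, in the spirit of the "value tones" mentioned earlier:

```python
# Tiny illustrative lexicons; a real system would use far larger vocabularies
# and handle negation, sarcasm, and language-specific wording.
POSITIVE = {"support", "opportunity", "progress", "welcome"}
NEGATIVE = {"suspicion", "frustration", "turmoil", "distrust"}

def tone_score(text: str) -> int:
    """Return positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Comparing tone across two time periods hints at the direction of change
# (hypothetical snippets, echoing themes from the focus groups).
before = tone_score("frustration and suspicion about search procedures")
after = tone_score("welcome progress on skills transfer")
```

A score shift from negative to positive across time suggests a change in tone, though by itself it says nothing about whether behavior actually changed.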
The electronic media engagement formative evaluation is a challenge. We want to understand how foreign audiences obtain, interpret, and act upon information. We want to know: has the audience member increased his knowledge about U.S. government policies and efforts? Is that change positive or negative? Is there a behavior change? How can we know that?
And we want to be able to track information online and through social media, whether that information was transmitted through a video, an SMS, an RSS feed, the America.gov Website, E-Journal, and so forth.
This here is a reference to the Democracy Video Challenge. I give this to you today because it is a good demonstration of our campaign engaging foreign audiences in an ongoing online conversation about the meaning of democracy. Unfortunately I can't pull it up in this particular venue, but the reference is there for your review.
And now I can turn it over to Cherreka Montgomery for closing remarks.
MS. MONTGOMERY: I actually don't have closing remarks. I want to hear your questions, if any. Any questions in the -- yes.
MS. MONTGOMERY: That's a great question, and I want you to know that when we kicked this project off back in October, it took us until December to actually begin to execute on this effort. What we know is this: there are no legal parameters prohibiting us from crawling or monitoring conversations that are not password protected. So everything my colleagues and my staff have discussed here is not password protected.
Anything that is password protected is behind a wall, and we do not actually probe it. Now that I'm up here, I just want to follow up on that: what we're looking at, for sure, is not anything that takes place in the United States. It's definitely foreign audience conversation, but one of our challenges has been to discern where those servers are. And we have come to find out, through this process, that many of the servers hosting these foreign audience conversations are located in the United States although they are disseminating overseas.
So again, the challenges to us in EMU are quite complex. Other questions?
QUESTION: (Inaudible) electronic media conversations (inaudible) in involving communication going on in Indonesia (inaudible)?
MS. MONTGOMERY: That is a great, great question. The scope for this project is electronic media, so presumably it's everything. One of the things we have done with these tools, particularly with Morningside as a key partner, is begin to look through technology at the blogosphere and what is taking place on these key Websites. So really, right now through the electronic tools it is just Websites; if they're hosting podcasts or Web chats, we're capturing that. If there are any blogs, we're capturing that.
Through online surveys we have developed questionnaires that ask about SMS use. We ask whether they tweet, and those kinds of things. I think one of the things we're trying to figure out, as we advance this to IIP, is how to best use our resources in years two and three of this effort to really hone in on these key platforms.
And I think that's where the gap analysis from this year will lead us: whether or not the bureau wants us to take a closer look at Facebook, which I know in Indonesia is getting a lot of resources put into the Facebook campaign and strategy. And as Juliet mentioned, what we found is that among young Indonesians using Facebook, they are not using it to share information. They are not using Facebook to say, "Hey, the U.S. Embassy is doing something cool." They're just sharing photos.
So the question then becomes do we want to invest in Facebook as a platform to communicate our PD strategies or do we need to have another level of effort or strategy to do so? Yes.
QUESTION: Due to the fact that you have (inaudible) to deploy this campaign to �'�' nations, how do you determine if you are using Facebook in China and (inaudible) that you are not using Facebook in Indonesia because (inaudible)? How do you determine where your resources go so that you can get valid data (inaudible)? Does that make sense?
MS. MONTGOMERY: It does make sense, and I think your question is a good one. The unfortunate thing is that we're just beginning this process and we don't have the answers yet. We are planning to travel to China. I mean, China is a different animal all its own. It's very complex, and one of the things we have to do in our project -- and I know that Jay is telling me to get off the stage -- is to really preserve our bilateral relationship, so we're working in very, very close consultation with the regional bureau and our post in Beijing on this.
Do I have time for one more question or do I need to stop? One more question.
PARTICIPANT: If it’s a short question and a short answer.
MS. MONTGOMERY: Well, it’s Ted.
QUESTION: God help you.
QUESTION: It looks like (inaudible) and I’m wondering as PD has advanced in evaluation, have you started looking at other behavioral science and social science theories to expand (inaudible)?
MS. MONTGOMERY: Yes, we have. And for EME, we're not just using the Kirkpatrick scale at all. We have a number of scales. And actually, Chris Toppe is my statistician, so Chris, I don't know if you have a 30-second answer, or do you want me to --
MR. TOPPE: Most times, just -- answer the question --
MS. MONTGOMERY: We have handouts and Chris Toppe is my methodologist, and he can go on and on about our new scales. Thank you so much.
PARTICIPANT: Sorry. Thank you all very much for your participation. Thank you very much to the team, and if you have a couple of questions for them, you can catch them outside, but we will be bringing the next panel onboard in just a couple of minutes.
Cherreka, thank you very much.