Google+ Hangout: Beyond Government & Traditional Reporting: Documenting Human Rights
MR. MAHANTY: Good morning and good afternoon and good evening, depending on where you are in the world. And welcome to today’s State of Rights event. My name is Dan Mahanty. I’m a senior advisor in the Bureau of Democracy, Human Rights, and Labor at the U.S. Department of State, and I’m lucky enough to be the moderator for today’s discussion. I want to start off by thanking each of the panelists, who I’ll introduce shortly, for their participation, but I really want to welcome all of our online participants from all around the world, particularly anybody who was able to join at our embassies overseas. And a particular welcome to former Assistant Secretary Mike Posner, who will be joining us from the Center for Business and Human Rights at New York University’s business school.
I’m going to start this morning by asking each of the panelists some questions, then I’m going to jump right in and start taking your questions from Twitter. There’s a team right here in this room that’s monitoring Twitter. For those of you out there in the Twitterverse, please feel free to submit questions anytime using the hashtag #StateofRights. Our team here is going to funnel them to me in real time. Additional resources for today’s event can be found in a digital archive at humanrights.gov/stateofrights, where the event is also being broadcast.
Let me just start with a little background on this series, the State of Rights. The State of Rights is a U.S. Department of State Public Diplomacy initiative led by the Bureau of Democracy, Human Rights, and Labor. The series brings together experts and citizens for an interactive dialogue on global trends that affect both emerging powers and fledgling democracies. Using the Twitter hashtag #StateofRights, online audiences can and have joined the discussion and debate of policies of global importance, such as today’s, which is documenting human rights, and can continue the conversation online and with each other after the event is over. Each event in the State of Rights series will be translated into a variety of languages and archived to allow civil society organizations and others to easily download this (inaudible) and carry it through their local networks.
So let’s jump right into today’s discussion. Our subject today involves citizen witnesses documenting human rights. We’re going to focus on the increasing role of video in the documentation of, and awareness of, human rights abuses and violations, and some of the innovative work that citizens, community media collectives, NGOs, and other non-traditional reporters are doing to expose human rights abuses (inaudible).
As we dive in I’d like to introduce our participants today. We have joining us Madeleine Bair, Program Manager of the Human Rights Channel at WITNESS, which is also the partner organization that helped us put this event together, so thanks, Madeleine.
MS. BAIR: Thank you.
MR. MAHANTY: We have – thanks. We have Christoph Koettl, Advisor for Crisis Response at Amnesty International. And we have Irene Herrera, among other things a Venezuelan filmmaker, journalist, educator, and co-founder of Video Venezuela. You can find their very impressive biographies and Twitter handles on humanrights.gov, along with a lot of the resources associated with each of them. And if we have time at the conclusion of today’s discussion, I’m going to turn it over to them to talk a little bit about some of the resources that they’ve made available online for various human rights organizations and citizen videographers around the world.
So with that, let’s jump right to it. I read last night the Human Rights Channel Year in Review, and it brought to mind the fact that in the course of the past year, video captured by amateur photographers on cell phones and other devices really captured public attention and raised awareness of events in unprecedented ways, oftentimes raising issues to a global level. We have seen videos of human rights abuses and violations from our own country here in the United States in Ferguson, Missouri, to Tahrir Square, to Iraq with ISIS. Videos have really caught the attention of millions of viewers. More and more events are being caught on video, and more and more people are using video in human rights work.
And that’s where I would like to start the discussion with WITNESS and with a question for Madeleine:
In addition to the high-profile events that I raised that you highlight at the beginning of the Year in Review of WITNESS, could you maybe talk about one or two examples in which regular – so-called regular citizens or average citizens have documented abuse or brought issues to the attention of authorities or to the press or international audiences in a way that’s effected change? And maybe you can talk about that a little bit?
MS. BAIR: Sure, and thanks for the question, Dan. There are examples from all over the world each and every day, and what we’re seeing is there are more than 120 hours of video uploaded to YouTube every minute, and among those are these sorts of videos that capture abuse.
And I’ll just give two examples that we’ve been following recently. One is from Brazil. Just last month, there was a raid by the military police into a favela in Rio de Janeiro, and in that operation a 15-year-old boy was shot and killed and a 19-year-old was seriously injured. And the official statement from authorities was that the death and injury happened because the police were attacked by armed criminals who were attacking their vehicle. And yet just a few days after that happened, a video emerged that the 15-year-old who was shot and killed had actually taken during the raid, and it clearly showed him and his friends hanging out doing nothing except sharing jokes outside on the street when police started running after them.
Immediately what happened when this video emerged was that the authorities had to rescind their initial statement. They opened up an investigation into nine of the police officers involved, and actually, the commander of those officers was taken off duty during this investigation. Brazil has one of the highest rates of police abuse in the world, and yet we normally don’t see this sort of action taken against authorities when there is an extrajudicial killing. So that’s just one example from a few weeks ago that we’re watching as the investigation gets underway.
Another example is from Ukraine. Just last year, the protest movement that came to be known as Euromaidan caught the attention of audiences around the world, in part thanks to livestreaming and video taken by protestors themselves, by first responders, and by journalists. More than a hundred people were killed in the clashes that took place between protestors and police, in what many human rights researchers say amounted to crimes against humanity. I was actually talking with one of those human rights researchers based in Ukraine just yesterday who’s working on compiling hundreds of videos from the Euromaidan protests. And what she said to me is, “You know, it’s ironic. We’ve never had so many videos documenting this sort of abuse, and yet we have yet to see justice really take place and significant investigations into the abuses that were shown in those videos.”
And so that goes to the point that despite the ability more than ever to document these sorts of abuses by authorities and by criminals, just because there are videos doesn’t necessarily mean that there will be justice or that those videos will be used as evidence in the process of justice.
MR. MAHANTY: So I guess just following from that, if I could ask a follow-on question: What advice can you give on what it takes to take a video from its value in raising awareness of an issue, especially when there are so many videos out there, to being used effectively in a change process, to promote reform, or to encourage accountability or justice?
MS. BAIR: Sure. There are a number of steps in the process towards justice in which video can really motivate change. One is simply bringing media attention to the issue, and that relates to something you mentioned: Ferguson, Missouri. Police abuse in that city in the U.S. was brought to national attention thanks to local residents who were taking video of the protests, taking video of clashes with police. And that’s part of what led to investigations, and now to proposed reforms that we’ll be seeing.
There are other cases in which people take video of one particular instance of abuse and expect that video to be used in a legal process, and yet there are many barriers that stand in the way of making that happen. And that’s one of the things we’re working on here at WITNESS: providing filmers with the skills to shoot video and gather information when they’re on the ground so that the video can be used in a judicial proceeding.
So that includes things like capturing the context, capturing the location, making sure that when you’re shooting a video you’re gathering enough information so that when people are watching it, sometimes years down the road, they can really understand what’s happening.
Another challenge that we see is that the authenticity of videos in a legal process is often brought under question. And so one piece that we’ve been watching is from 2010, in Jamaica. Again, it was a case of an extrajudicial killing, of an unarmed man who was lying on the ground and shot to death by a police officer. This was filmed on a cell phone and brought to attention in Jamaican media. But years down the road, when that officer was brought to trial, the video was never used, and one reason is that the prosecutor said, “Well, we couldn’t track down the filmer, so we can’t say that this video actually was real.” So filmers themselves need to understand how they can preserve their videos, protect the chain of custody, and archive the footage, so that they don’t just put it on YouTube, where it might disappear.
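The chain-of-custody step Madeleine describes can be made concrete with a cryptographic fingerprint: hashing the original file at capture time and recording the digest (ideally with a timestamp, shared with a trusted third party) lets anyone down the road confirm that a copy is bit-for-bit identical to the original. This is a minimal sketch of the underlying idea, not the workflow of any particular tool mentioned here:

```python
import hashlib


def fingerprint(path: str, algo: str = "sha256") -> str:
    """Return a hex digest of the file's contents.

    An archivist who records this digest when footage is captured can
    later prove the file was not altered: any change, even one bit,
    produces a completely different digest.
    """
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large video files don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

In practice the digest would be stored alongside the archived copy, so a court or investigator can re-hash the file years later and compare.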
So these are just a few of the ways that we’re trying to empower citizens to make sure that video that they do film – and oftentimes very much put themselves at risk to film – can be used in the process for justice.
MR. MAHANTY: That’s actually a great segue into another topic of conversation I’d like to raise, which is this challenge of authentication. At the end of the day, human rights documentation has always faced this challenge, whether the medium was written reports or video. And so I’d like to turn to Christoph to ask a question specifically about this. With the trend of falsified and even recycled video and the challenge of authenticating video in order to use it for effect, first of all, what can be done to combat the trend of falsified or recycled video, if anything? And secondly, based on your experience and a lot of the work that you’ve done, what do you see as technology options that exist or could be created to enable people to more thoroughly authenticate videos and make them better able to withstand scrutiny?
MR. KOETTL: Sure. So I think that is really one of the crucial questions that we are all faced with, not only human rights investigators but of course also journalists and government officials who, as Madeleine was explaining very well, are confronted on a daily basis with this new content that comes in, either through our emails or through websites. And the issue here is that there are a lot of individuals out there who are just reposting content or, very rarely, actually posting fake videos or fake images. Outright fakes are less of a problem; I think the biggest issue is really that content is recycled, and we can list a lot of examples where we see videos or pictures that are actually very old or come from a completely different country altogether.
So I think there are two ways we can tackle this. On the one hand, all of us can start becoming a little bit stronger in very basic digital literacy and verification literacy. What I mean by that is very simple: there are some very simple tools and techniques out there, which I will talk about in a second, that allow you to double-check whether a picture or a video posted on Twitter or on YouTube is actually new or has been posted before. Every time a new conflict breaks out, and I think the Gaza conflict of last summer was a good example, a lot of people post pictures on Twitter that are actually from the previous conflict, and it’s very easy to detect that. All of us can learn that, whether in very basic trainings or by reading up on these sorts of topics.
A second solution, of course, goes beyond technology, and it’s something I know WITNESS does a lot of: you can actually work with the people who film on the ground and train them to film in a certain way that makes authentication a little bit easier on the other end. It’s not an issue of not trusting them; it’s more, as I’ve just pointed out, that there are individuals out there who really post wrong content on purpose.
In terms of technology solutions, I think the key takeaway is that there is not one tool that allows us to verify a specific piece of content, and that is a big issue; we have to go back to a very long list of tools that allow us to verify content. I can list a few things here that I normally do. I have posted a lot of this on my Twitter account, and I know resources are posted on the event page as well, so people don’t have to take notes.
There are two big things that I normally do that are really crucial when looking at content. The first is that I actually look at the metadata, and that is important even when dealing with content from online or social media. On the one hand, I’m increasingly getting content that is sent directly to us, so we have actual metadata that we can look at. But sometimes we also get content sent to us that comes from social media, and the metadata tells us that it is not original footage, but that somebody has downloaded it from somewhere. And there are a few tools out there. For pictures, you have basic metadata or (inaudible) viewers such as Invisor Lite. You have a similar tool called MediaInfo for video that tells you any sort of metadata related to the video. And again, these resources are all posted on my Twitter account and on the event page.
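The distinction Christoph draws, between original files carrying metadata and copies re-downloaded from social media that lack it, can also be checked programmatically. As a rough illustration (a simplified sketch, not a substitute for full viewers like the ones he names), the following walks a JPEG’s segment structure and reports whether an Exif block is present at all; files re-encoded by social platforms typically lack one:

```python
import struct


def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP1/Exif block.

    Simplified: stops at the start-of-scan marker and ignores some
    rare marker types, but enough to flag metadata-stripped copies.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream
        marker = jpeg_bytes[i + 1]
        if marker in (0xD8, 0xD9):  # SOI/EOI carry no payload
            i += 2
            continue
        if marker == 0xDA:  # start of scan: compressed data follows
            break
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        # APP1 segments holding Exif start with the literal "Exif\0\0".
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

A real investigation would still use a dedicated viewer, which also decodes the Exif fields (camera model, GPS, timestamps) rather than just detecting their presence.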
The second piece, which really takes up the most time, is doing an actual content analysis. You look very, very carefully at the content of a video or a picture. In the case of a video, I slow down the video and pull out every single detail. I use VLC media player, which allows you to slow down the video, enhance its brightness, and that sort of thing, to really look very closely at what we see. That allows us to extract very specific geographic details, or very specific details about, let’s say, perpetrators or other individuals visible in the video. We take those details and then use a lot of open source tools, for example Google Earth, where you can look at satellite imagery to verify the geographic details you’ve extracted. There are other mapping tools such as WikiMapia. And I use tools such as a time zone converter to determine when exactly a video or a picture was published.
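The time zone step Christoph mentions can be scripted with the standard library. Assuming the platform reports publish times in UTC (YouTube’s API, for example, returns ISO 8601 UTC timestamps), converting to local time at the alleged filming location helps check claims like “this was filmed during the morning clashes”:

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def local_publish_time(published_utc: str, tz_name: str) -> str:
    """Convert an ISO-8601 UTC timestamp to local time in the named zone.

    `tz_name` is an IANA time zone identifier, e.g. "Europe/Kiev".
    """
    dt = datetime.fromisoformat(published_utc.replace("Z", "+00:00"))
    return dt.astimezone(ZoneInfo(tz_name)).isoformat()
```

So a video published at 10:00 UTC on February 20, 2014 went up at noon local time in Kyiv, a detail that can be compared against shadows, weather, or witness accounts.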
So I think the takeaway point here is that there’s a very long list of tools we have to use, which of course makes the work very cumbersome and very slow. What’s missing at this point is definitely more consolidated tooling that is also free and open source, because the very expensive, proprietary forensic software that exists is not feasible for the work of human rights workers such as myself.
MR. MAHANTY: Well, that’s a lot of tools. It sounds like there’s some really valuable stuff out there. And with that, I’m going to turn it over to Irene now, both to comment on some of Christoph’s remarks and also, based on your extensive experience using video for a variety of different sorts of social change, if you could just comment on your experience and bring it all together, that would be great.
MS. HERRERA: Okay. Yeah, I would like to mention three key points, and this goes back to what Madeleine and Christoph have already said. One is that misappropriating or re-appropriating a video normally has a great impact and will almost always backfire on the particular groups that decide to recycle or falsify that video. We saw this in Venezuela during the unrest of 2014, where a video that was not actually from Venezuela was misappropriated. It was later used by the government to discredit all the other content that might have been true, based on this one video that was misappropriated.
So activists and people on the ground have to be very conscious of the effects this could have and how it could later backfire and discredit other content. It’s an important point to emphasize. And I think in societies where conflict is developing, social media literacy skills come in handy quite often, because sometimes you will have people who want to galvanize their cause, and therefore they will go ahead and retweet a video and so on, only to find later that the video wasn’t quite true, that it didn’t quite happen in the same place, and so on. So that point becomes very important.
The other thing, as Christoph was mentioning regarding digital tools, is that there needs to be more communication with digital creators and digital innovators, and of course, Amnesty and WITNESS are both doing this from what I understand, to see how we can work together to develop open source tools and other kinds of tools. We were talking about this recently: for example, we do have reverse image search engines where we can look at images that have been used before and see in what context they’ve been used in the past. But we still don’t have this for video, so it becomes a little bit more difficult; even though we can work with thumbnails, sometimes it is difficult to do a reverse search on a whole piece.
So again, working closely with companies and digital companies willing to help develop these tools is key.
MR. MAHANTY: I mean, that sounds like a pretty fascinating tool. And do you think that the technology exists, or is that something we can look forward to as a kind of reverse search engine for images?
MS. HERRERA: Well, the reverse engine for video, it – for now, it seems that a couple of people have already thought about it, and it seems to be in a project stage, and perhaps – who knows, we might see something like this develop soon, hopefully.
MR. KOETTL: If I may, maybe I can quickly jump in here. To emphasize again, reverse image search for pictures already exists, and that’s one of the very basic tools I mean when I talk about verification literacy, because it’s as simple as a browser extension: you basically right-click on a picture on Twitter and click the button that says “Search this image on Google,” and it shows you every single website that includes that picture.
So that’s all powered by Google, by Google Image Search. If you go to Google Image Search, you have the option to upload a picture or to copy and paste a URL right into there. And within seconds that tells you that a picture supposedly from the Gaza conflict is actually from the previous conflict in 2009. So that already exists.
The only solution I’m aware of so far for videos, which Irene was mentioning, is that we can pull out the thumbnails of videos, because when you upload a video to YouTube, it always creates the same three thumbnails. So we can do the same reverse image search with the thumbnails of a YouTube video. That is very effective and useful, but it still has a lot of limitations. People can customize these thumbnails, or you might miss something. So we don’t really have a powerful tool yet to do a reverse video search; to my knowledge, that does not really exist. That’s something we would love to have.
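The thumbnail workaround Christoph describes is scriptable, because YouTube serves a video’s auto-generated thumbnails at predictable URLs. The pattern below has long been observed in practice but is a convention, not a documented guarantee; each URL can then be fed to a reverse image search:

```python
def youtube_thumbnail_urls(video_id: str) -> list:
    """Build the predictable thumbnail URLs for a YouTube video.

    "1"-"3" are the frames YouTube extracts automatically; "0" and
    "hqdefault" are the default thumbnail (which an uploader may have
    customized, hence Christoph's caveat about missing things).
    """
    names = ["0", "1", "2", "3", "hqdefault"]
    return [f"https://img.youtube.com/vi/{video_id}/{n}.jpg" for n in names]
```

An investigator would download each of these images and run them through a reverse image search to see whether the same frames appeared online before the claimed upload date.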
And connected to that, since we are on the topic, I think the second challenge is that everything we’ve talked about so far is about verifying maybe one single video or one single image. What do we do if we have 100,000 videos from Syria? This sort of manual approach does not really scale unless you have 20,000 people who help you with the work, so that’s one approach. I think a more efficient way would be to work with computer scientists and use really advanced computing to at least filter through the videos, and maybe tell you at the end of the day that out of these 100,000 videos, there are only 5,000 that include a tank. And then you can work with a bigger team to look at those 5,000 videos. So that’s a second challenge that we’re still facing.
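The triage pipeline Christoph imagines, using a machine classifier to cut 100,000 videos down to a reviewable few thousand, might look like this in outline. Everything here is schematic: frames are abstract objects, and `detector` stands in for a trained model that a real system would have to supply:

```python
def triage(videos, detector, sample_every=30):
    """Keep only videos where the detector fires on a sampled frame.

    `videos` maps a video ID to its sequence of frames; `detector` is
    any callable (e.g. a trained object classifier) returning True when
    the object of interest, say a tank, appears in a frame. Sampling
    every Nth frame keeps the pass cheap; the flagged subset then goes
    to human reviewers for actual verification.
    """
    flagged = []
    for video_id, frames in videos.items():
        if any(detector(frame) for frame in frames[::sample_every]):
            flagged.append(video_id)
    return flagged
```

The point of the design is that the machine only narrows the haystack; the judgment about what a flagged video actually shows stays with human analysts.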
MS. BAIR: If I could jump in here quickly, just to back up for a second, one reason authentication of videos by eyewitnesses and citizens on the ground is so important is because the main way that citizens are sharing videos is by way of social platforms like Facebook and YouTube, because it’s so easy and immediate. And yet when I share a video to one of those platforms, it strips most of the metadata from my original video. So the technology that could be used to authenticate any piece of footage exists: when I take a photo or a video with my cell phone, within that footage is metadata that can authenticate that it was taken by this particular device in this location at this time.
And so what we’re doing at WITNESS is working on developing ways to allow users to control that metadata and keep it together with the footage, so that I can take a video and share it with Christoph at Amnesty International, and Christoph can look at the footage and see the metadata it was taken with. One tool that does just that is InformaCam, which we developed with the Guardian Project. Another tool that I’ve seen in development came out of Venezuela, and it’s called Photo Ahora*. It works on the same concept: it allows activists who are on the ground at protests to take pictures and share them on Twitter, and once a picture is shared on Twitter, it includes a line of text saying that the photo was taken at this place and time.
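The idea behind tools like the ones Madeleine describes, keeping metadata verifiably bound to footage, can be loosely illustrated (this is not InformaCam’s actual format, just the underlying concept) by packaging footage with a metadata record and a keyed digest covering both, so a recipient holding the shared key can detect tampering with either part:

```python
import hashlib
import hmac
import json


def bundle(footage: bytes, metadata: dict, key: bytes) -> dict:
    """Attach metadata to footage with an HMAC covering both."""
    meta_json = json.dumps(metadata, sort_keys=True)
    tag = hmac.new(key, footage + meta_json.encode(), hashlib.sha256).hexdigest()
    return {"metadata": meta_json, "hmac": tag}


def verify(footage: bytes, package: dict, key: bytes) -> bool:
    """Recompute the HMAC; any change to footage or metadata fails."""
    expected = hmac.new(key, footage + package["metadata"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["hmac"])
```

Real systems use public-key signatures rather than a shared secret, so the recipient need not hold anything the filmer could be compelled to reveal, but the tamper-evidence property is the same.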
So these are other ways that we’re working around this to really allow citizens to be able to document footage and make sure that it is verifiable and trustworthy. But then again, that metadata can also be used to put those filmers and documenters and activists at risk, which is something else that we can talk about.
MR. MAHANTY: Yeah, I actually would like to talk about that. But first, I’m going to take a look and see what kind of questions we have coming in from Twitter to keep the conversation going. All of you have talked a little bit about some of the technologies out there that have been engineered by different companies and businesses. We have a question from Phil (ph) on Twitter, who wanted to know if there are any good examples of businesses playing an active role in protecting human rights defenders and civil society space on this issue. Maybe you covered that a little bit with the technologies, but if any of you wanted to elaborate on the role that the private sector is playing, I think Phil might appreciate that.
MR. KOETTL: You want to start, Madeleine? I have a feeling you – I don’t want to put you on the spot, but I have a feeling you had stuff to say here.
MS. BAIR: I’m sharing right here a link that we can share on the Google Hangout event page, which is to the Photo Ahora* app. That’s, I think, the main way I have seen companies and corporations really play a role here: using their technology, and really understanding how their technology can be used both to facilitate the documentation of human rights and to hinder it. And we’ve seen a number of companies take great steps to make sure that their technology is not used to harm human rights defenders, but can actually benefit them. The app that I just mentioned, Photo Ahora*, was developed by, I believe, a marketing company in Venezuela that had the technology and the development skills to build it very quickly.
WITNESS has also been working with YouTube to curate footage on the Human Rights Channel, which is youtube.com/humanrights, but also really to educate the people behind that platform to understand how their platform is being used in ways to facilitate this documentation. And one tool that they developed was a tool to enable users to blur faces. So right now, when you upload a video onto YouTube, you can actually go to enhancements and click on the face-blurring tool. So this is really important, because oftentimes, when we’re taking footage of protests or when we’re interviewing human rights defenders, we want to share their stories and expose their stories, but it might be necessary to also protect their privacy, to keep them safe. And so that’s the reasoning behind this face-blur tool and a really positive move on behalf of YouTube to protect human rights filmers who are using their platform.
I would love to hear of other examples, though.
MR. KOETTL: I have maybe a very small example that just came to mind. Again, it’s more a tool that helps with verification work. There’s a company called Storyful, and that’s more of a journalism outlet; it’s basically the Associated Press for social media, which is a good way to describe them. They look a lot at verifying social media content for news outlets, but they also develop a lot of technological tools to help them with that work. Some of that is proprietary, which I understand; that makes sense.
But occasionally they also make very simple tools public, and one example is a web add-on, a browser plug-in called Storyful Multisearch, that you can just install in your browser. What it does is allow you to search across several social media platforms: you can plug in the unique identifier of a specific piece of social media content and search it across platforms, so you see everyone who has posted that specific content.
So they just made that available for free. They used it internally; they made it public. I think that’s a good example of how private companies can contribute to the verification challenge, so that’s definitely great. And as Madeleine was already touching on, really big companies like Google or Twitter can definitely help. I would love to see them help with the challenge that all the metadata is basically stripped when something is uploaded to a site; that’s one big challenge. But especially with YouTube, I think there are definitely options, maybe letting people do more specific tagging, or letting people contribute to a video that’s uploaded on YouTube by tagging specific things or a specific point in the video. Those are all very simple technological solutions that could be easily implemented. It’s really more a policy decision on the side of the companies.
MR. MAHANTY: Thanks. Let’s see. Let’s turn back to the questions that are coming in. Okay, so this turns the discussion around a little bit. We have a question from Annie (ph) on Twitter, who wants to know not only about video being used by citizen journalists, but also about a growing culture of police equipping themselves with cameras and video equipment to document their own interactions, both to protect themselves and to promote the prevention of abuses and accountability. And she points to projects in places like Brazil, California, South Africa, and elsewhere.
Could any of you comment on this, and do you see this becoming a more common feature of police organizations who are trying to govern their own behaviors?
MS. BAIR: Sure. It’s a really interesting point, and obviously it’s something that we’ve been seeing more in the U.S. as the idea of police cameras has grown among police departments here. And the questioner mentioned a number of models from around the world of ways that security forces themselves are documenting their own interactions with civilians.
One case that I saw very recently was out of the West Bank, in which there is a video of Israeli Defense Forces handling a teenage boy and using a military dog to harass and abuse him. The story came out because a video of this interaction was leaked on Facebook and picked up by the news media. And in the video, we can see that the soldiers themselves are wearing cameras on their helmets. Yet despite the fact that human rights activists in Israel called authorities’ attention to this incident and said there should be an investigation, the footage from those helmet cameras was never used, not until this footage leaked on Facebook and in the news.
And so I think that is an example of how there might, again, be more video, but we need to make sure that that video is accessible, that the public has a way of knowing what video exists and how to access it, because otherwise that video can be turned on and off, or made accessible or inaccessible, by the authorities that hold it. It’s something that we need to continue to probe as there is more video from government officials themselves: How can that video actually be used to expose abuses by authorities?
MR. MAHANTY: A question before we return to a Google Hangout question, and that’s: Have any of you seen responses by those who would want to conceal their activities in order to avoid being documented on video committing their crimes? I mean either criminals themselves or security organizations whose activities are more and more being caught on camera.
MR. KOETTL: If I understand the question correctly, do people who might commit a human rights violation try to conceal themselves, and do you see the immediate impact that the camera has? To be honest, not much comes to mind immediately. What you see a lot is that security forces, or any person really, change their behavior in front of a camera; think about yourself, you get more nervous, you might turn away. So we see that a lot: people turn away, they put up their hands, that sort of thing. But there’s a different point I want to make that is often overlooked when we talk about face blurring.
We did that recently. We had a case from Nigeria where you see perpetrators committing a very clear human rights violation, because they’re extrajudicially executing people in front of a camera. When we released it, we actually blurred the faces of the perpetrators. And that’s a very important point, because as a human rights organization we promote the rule of law. We want to ensure that the identity of the person you see committing that crime or human rights violation is protected to some degree as well, because we want to promote a fair trial; that person should be brought to a fair trial and should enjoy due process as well.
So it’s a very interesting question that even on our end, we sometimes take these sorts of steps to conceal the identity of specific perpetrators. If we know who the person is, we might share that confidentially. We might share it in a report if we feel really confident. We might share it with any institution that is investigating that incident. But we have to be careful about just publishing the footage without enough context, or accusing somebody who hasn’t had a fair trial yet. So I think that’s a very, very interesting question.
MR. MAHANTY: We may have time to return to this issue of ethics later, and I do want to jump to the security questions that are coming in. But also, Irene, did you want to comment on the earlier conversation as well?
MS. HERRERA: Yeah. I think one thing that a lot of citizen witnesses or citizen journalists on the ground have to deal with – and it’s a topic that we’ll be discussing more and more this year – is digital security and what can be done so they can safely store their material. In the case of Venezuela, for example, we saw one video where it was clear that policemen wanted to take away a girl’s phone, because they knew she had caught them behaving inappropriately while trying to arrest some of the students. So they clearly were trying to take her phone. She screamed and yelled and she was able to keep her phone, but we did see some of these attempts by the police to take away phones.
Another thing that we did see was people being discovered filming from balconies, for example, and the police knew more or less what floor or what apartment building it could have been, and sometimes they would go into the apartment buildings.
So, as Madeleine also spoke a little bit earlier about security, I think digital security is going to be a very important issue as we go on. And we have applications for very extreme situations – like Wickr, which auto-destroys or self-destroys information so it won’t be found on your phone, for example. And in the case of Venezuela, a lot of citizen videos circulate through WhatsApp, so hopefully somebody might have a copy of the video before your phone is taken away. So if people on the ground can quickly spread that video, even if it’s just to a friend or to a network, and quickly make sure other people have it, then there’s a possibility that the video will be preserved and not lost if the phone is taken away, which we often see in these kinds of situations.
MR. MAHANTY: That’s a --
MS. BAIR: If I could --
MR. MAHANTY: Yeah, please, go ahead, Madeleine.
MS. BAIR: Yeah. Just to add quickly, what Irene mentioned of filmers being targeted is something that we’re seeing more and more as activists really take on this role of filling in the gaps – situations in which there either aren’t journalists on the ground or the area is inaccessible for human rights investigators, so citizens are really filling in the gaps to document a situation. So where traditionally we would expect journalists to be targeted by authorities, now we’re seeing average people who pull out their cell phones being targeted as well.
And that’s another use of the face-blurring tool, because in several different situations of protestors filming a social movement to galvanize attention to it, to galvanize solidarity, either within a country or internationally, we have seen the government then flip the switch and say, “Oh, you’re using video to document this issue. Well, we’re also watching this video.” And we’ve seen in Iran, in Syria, in Colombia, and many other places, authorities show to the press, publish on their websites, or announce on state media, “These are videos of activists, and we’d like the public’s help to identify the people in them.” So that really sends a chilling effect through the filmers and anyone caught on video.
And so one strategy that we’ve seen, aside from more sophisticated technology to protect people’s anonymity, is also just filmers documenting a protest from the back so that the people involved in the protests are not themselves exposed. And it’s something that we began to see in Syria. As Syrian activists began being targeted for taking part in the revolution, more of the video documenting what was going on would be taken from behind or in strategic ways so as to not reveal the identity of activists.
MR. MAHANTY: And this actually speaks directly to a question that we got from the Google Hangout from Ron (ph) about digital and physical safety and security, so I’m glad you started talking about it. And I’d like to turn it over to the whole group, if you want to elaborate at all on recommendations for the ways that activists might deal with physical and digital security threats when they’re documenting, but also after they’ve documented it. And then again, when they transfer that to human rights organizations or others, what security precautions should be taken in order to sort of protect the media?
Does anybody want to comment? Christoph --
MR. KOETTL: Sure. Maybe I’ll offer one specific thing, because I know others will have more input on this as well. I think one huge challenge is, of course, if you’re filming or operating as a human rights defender, an activist, or a citizen journalist in very insecure, hostile environments, you’re using your cell phone or maybe just a simple camera to record human rights violations or a specific criminal act. You have the recording right there in your hand – you have that specific content, but you also have all your contacts with you. So if authorities take your phone, they get all your contacts and all your content, and that’s very, very sensitive information. So that’s a huge security risk.
And so of course the question is: is there a way around that? There are some apps in development that are built on InformaCam or are similar to InformaCam – Madeleine can probably speak to whether that is a functionality InformaCam has as well. There are tools such as the International Evidence Locker or the Eyewitness app that record a picture or a video and send it immediately away from your phone, so it doesn’t really leave any traces on your phone. That is, I think, a huge step forward in terms of protection, because if your phone is taken by authorities, you do not have any of the content on it. You could even carry an empty phone without any of your contacts in it. So that’s really huge progress in protection that I see happening in that regard.
MR. MAHANTY: Did you want to add anything to this discussion based on your experience as someone who’s dealt with this and developed a number of videos and --
MS. BAIR: Sure. Yeah, what Christoph mentioned is great. There’s such a spectrum of tools and technology available that activists really need to learn each and every day, because every time activists and journalists develop ways to protect their data, those who are trying to access their data develop even more sophisticated ways. So the technology is constantly changing, and it’s something activists need to be aware of. One thing we’ve always stressed at WITNESS in terms of training human rights defenders is to be aware of the risks they’re taking on, both to themselves and to those whom they’re filming.
And sometimes activists are willing to take on those risks. But if they are – especially if they are taking risks to film – then it’s even more important to make sure that what they film can be saved and protected. So whether that’s a matter of using a burner phone, using encrypted email, or encrypting their hard drives, all the steps activists are taking to protect their technology are among the spectrum of what we’re training activists on.
And as far as InformaCam goes, that is exactly one of its elements: while it does add metadata to enhance the evidentiary value of your footage, it also embeds and encrypts that metadata, because we know that metadata can also put a filmer at risk by identifying exactly where they are or who they are based on the tool they’re using.
So as much as we might want users or platforms to expose metadata so as to help us authenticate footage, we also need to give users control of that metadata and make sure that they’re aware of how that metadata could put them at risk. And so whether that’s making that metadata encrypted and only available to targeted recipients or putting users in control of whom they’ll let see that metadata, those are things that we’re considering.
MR. MAHANTY: I just want to follow up on that to return to this broader ethical question. A couple of you mentioned the tactic of blurring out faces in order to protect people’s identities, but when there are public interests at stake and human rights organizations are advancing a human rights issue with advocacy using video, how are decisions made, and who gets to decide what should and shouldn’t be seen in a video – whether by the public, by security organizations, or in the justice (inaudible) themselves?
MR. KOETTL: Maybe I’ll start on this. So how we approach it is: we have been doing this work for over 50 years, in the sense that we document and research human rights violations. So a lot of these concerns are not new; the format, of course, is new. For 50 years we have been thinking about the risks to the individuals we are interviewing. In many cases, we change the names of individuals when we interview them and publish a testimony, to protect them, right? So the question is: how do we transfer these sorts of challenges and risk issues into the digital age?
So there are a few tools out there, right – as we already discussed, like face blurring – or maybe we don’t show a specific video at all. But we definitely want to be very, very conscious of what we publish. Do we blur out specific faces? Would that specific piece of video put an individual at risk, or in a situation that might create problems for that individual? So we have our internal processes, and we have to do a proper risk assessment with any content we put out there. That doesn’t really change with new content. I think the challenge is that there’s so much more detail visible in videos and pictures than we had before, so that’s where it gets a little more challenging. And for a lot of content on the internet, we are not able to get informed consent from the individuals visible in the picture.
So what do we do with that? We still sometimes want to publish this because it is important that the world sees it, but we want to think first about whether we are putting the individuals visible in the video at risk. So yeah, it’s very, very important, and we think about this very carefully every day.
MR. MAHANTY: That’s great.
MS. BAIR: Yeah, just to add to that quickly. Right now it really often comes down to the video platforms – the online platforms like YouTube and Twitter – and those are companies, not human rights organizations with the 50 years of experience of an organization like Amnesty International. Oftentimes it’s their own policies that dictate what sort of footage remains accessible for the world to see, and for human rights investigators or journalists to find. Going back to my conversation with a Ukrainian human rights researcher yesterday, who has been looking at hundreds of pieces of footage from the Euromaidan protests: so many of those videos are no longer on the YouTube platform, and that’s due to a variety of policy reasons. Sometimes it’s because they violate policies against graphic content, sometimes it’s because of copyright claims, and sometimes it’s because governments have requested that YouTube take down a video for a particular reason.
So these are all things that, again, these platforms need to understand. While certain pieces of footage might not be appropriate for public usage – might incite hate, might put some of the people filmed at risk or victimize them – that footage can also have evidentiary value and can be critical for those of us who are monitoring and researching human rights abuses. So one thing we’re thinking about now with these platforms is how we can make sure that when there are videos like that, which might violate particular corporate policies, they are archived for appropriate audiences.
MR. MAHANTY: That’s really helpful. We’ve got limited time. In fact, we’ve only got time for one more question and then a conclusion. But we actually have two questions coming in, so what I’m going to do is offer both questions and give all of you a chance to answer one of the two, depending on which one you have more experience or insight into.
So the first question, which adds to the discussion about the private sector earlier, is: “What can or should tech and other companies do to make some of the digital security tools more accessible to activists, and are they already accessible?” And then the other question, which is different but also really interesting, is: “How might video and other technology be used to document more subtle abuses like corruption?”
So I’ll throw those two questions out and I’ll let each of you, in turn, answer whichever question that you want to. So why don’t we start with Irene since I think we’ve gone to you last for all the other questions. So Irene, do you have any comment on either of those?
MS. HERRERA: Yeah. I was drawn to the second question, but I would need to think about it a little bit more. This gets (inaudible) a little bit more into investigative journalism, and it would be interesting to see workflows for how these subtle abuses could be documented. Perhaps the evidence is not so obvious because you’re not filming something that’s actually happening. So this would require – yeah, it’s something I would have to think about a little more, because you would have to develop a workflow or some sort of procedure where whatever you’re saying on camera could actually be verified, and then from there, be able to gain the attention of other people to look into particular cases.
MR. MAHANTY: I think that’s (inaudible) helpful in itself, so if you have any more elaboration, maybe we can publish it on the website. But because we only have a couple minutes left, let’s turn to Christoph, and then we’ll conclude with Madeleine, and then I’ll offer some concluding comments and we’ll call it a morning.
MR. KOETTL: Sure, maybe I’ll take the first one, then. I think there are already a lot of digital security tools out there. Two things are important. Number one, they should really be useful for people who are on the frontlines. Sometimes I see very, very fancy apps and tools that just don’t work everywhere in the world, because they’re developed more for the Western world. So that’s generally important.
But specifically for tech companies – and I’m not a digital security expert – I think one of the most important things for digital security tools is that they are very transparent and open source. That sounds maybe a little counterintuitive at first, but because they’re open source, more people can test them, find bugs in them, do security audits, and find out what the problems with the specific software are. So any tech company working on digital security has to be very transparent and open about their tools, because the result will be that the tools are much more secure eventually. That’s somewhere tech companies have a big responsibility, because otherwise they sell you something that might not be very transparent, and it turns out two years later that it was not secure at all.
MR. MAHANTY: That’s helpful, thanks. And Madeleine, the last word goes to you.
MS. BAIR: Thanks. I’ll first give a great big second to Christoph’s suggestion. And I’m going to take on the second question, because what you’ve raised is really important: it’s so much easier to document on camera a police officer beating a protestor, or barrel bombs. That’s very dramatic footage that’s easy to capture. But it’s much more difficult for average citizens to use video to document ongoing human rights abuses that maybe aren’t so dramatic and don’t have one particular incident to document.
And I’d like to point to one example from India, called Video Volunteers. It’s an organization that equips community correspondents and trains them on how to tell stories about human rights issues taking place in their own communities. The issues they take on range from lack of potable water in their community, to a local school that doesn’t have any schoolbooks or electricity, to acid attacks on women or on the Dalit community. And they produce very short video reports that at the very end give their audience an action they can take to make change if they were moved by the video – a phone number to call or a particular representative to contact to address the issue. And they have a really impressive number of success stories.
And so I think it also goes back to your first question, which is how videos can really motivate change. The way Video Volunteers addresses this is through very targeted issues and videos with actions that really empower the audience to take part.
MR. MAHANTY: Thank you so much. And I’m going to violate my own rule: Irene, since I put you on the spot at the beginning, if you had anything to add, feel free to do that now, or we can just go ahead and put some material on the website at the conclusion today. So if you have anything to add, we welcome that now.
MS. HERRERA: Yeah. I am familiar with the work of Video Volunteers and agree with Madeleine. One thing that came to mind is learning how to interview people who have been victims of these maybe subtle human rights abuses that are not so obvious. And WITNESS does have material on that – for example, how to interview women who might have been abused or who might have been victims of rape, and so on.
So one thing would be to develop better tools and to really understand how to interview people who have been victims of these other, subtler human rights violations that perhaps we don’t visibly see – to give them a platform and a voice where they can tell their story.
MR. MAHANTY: That’s really great. Well, unfortunately – at least unfortunately for me, we’ve come to the end of our time. I did want to thank again our three really amazing panelists today, Christopher – or Christoph Koettl, Irene Herrera, and Madeleine Bair. So thank you all very much. All of our panelists have provided us with a wealth of different resources that are available, many of which were discussed today. We’ll put those onto our resource website that we have, humanrights.gov, or at least links to them, as well as the Twitter handles for our three guests today. And I would invite them as well to go ahead and tweet out anything that they mentioned today that they think deserves more emphasis, using the hashtag #stateofrights so that we can draw a larger audience to them.
And as we mentioned before, today’s Google Hangout will be available in documented form on the humanrights.gov website where we tend to get a lot of viewers after the fact. So we look forward to really spreading this video out there. I think it’s really productive. I learned a lot; a lot of resources out there. So just a huge round of thanks to everybody for joining and to the State Department team here in the room with me who helped put this together.
So thanks to everybody and I hope everybody out there in the world has a great day.
MR. KOETTL: Thank you.
MS. BAIR: Thank you.
MS. HERRERA: Thank you.