The Whistle’s Impact: A Case Study by the University of Cambridge

The Whistle has been featured as a case study on the University of Cambridge’s Research Impact page.

A mobile app simplifies the process of reporting for the witness, whilst simultaneously prompting them to include the metadata required for verification. Aside from providing more metadata for the fact-finder to corroborate, The Whistle also educates witnesses about what data is most helpful and why, and can signpost them to sources of information and support around security and human rights.

Looking forward, the project plans to facilitate the reporting and verification of civilian witness accounts of human rights abuses in partnership with NGOs across the world.

Click here to read the full article.

The IPF speaks to Rebekah Larsen about the importance of The Whistle

Penelope Sonder from The IPF spoke to Rebekah Larsen, a Research Assistant at The Whistle, about the necessity of digital human rights reporting and the challenges associated with verification, privacy, and communication.

Acting as a new kind of mediator, The Whistle aims to help organisations correctly and quickly verify as many reports as possible so they can actually address the human rights violations. Social media has done wonders for citizen journalism, but it also means that human rights organisations are often short on time and resources because of the sheer volume of material to be verified and the diffusion and complexity of the digital tools used in verification.

“The Whistle aims to help human rights organisations verify more reports and more quickly. This way, more voices are heard and more human rights violations can be addressed.”

Click here to read the article on The IPF’s website.

The Whistle featured on the University of Cambridge website

The article provides context as to why digital verification is needed in the realm of human rights fact-finding.

“In our digitally enabled world, a legion of ‘civilian witnesses’ has sprung up: individuals “in the wrong place at the wrong time” who capture an event and then publish the scrap of footage or the incriminating photograph on social media. But amid the fog of propaganda, hoaxes and digital manipulation, how can we tell what’s real and what’s fake?”

It goes on to detail The Whistle’s initiative to provide NGOs with a tool to help make the verification process more efficient and effective.

“Cambridge researchers are developing an automated tool, ‘the Whistle’, to help verify the authenticity of digital evidence.”

Click here to read the full article on the University of Cambridge website.

Why new smartphone apps aren’t the answer to refugee justice

Smartphones are critical for refugees, not only to communicate with family and friends but to serve as a potential reporting mechanism for human rights abuses.

WhatsApp, a free app which enables users to make calls, send texts, and share photos and videos globally, provides refugees with a way of communicating with family and friends. Facebook Messenger allows messages and calls between Facebook users and is accessible to anyone with a Facebook account. Maps.me enables users to find their geographical location anywhere in the world, including at sea, even without a connection. Dropbox enables users to store, access, and submit critical information, such as immigration forms, using third-party digital storage instead of the device’s own.

These apps, stemming from existing web-based services designed to allow free and, essentially, limitless communication and sharing, have empowered refugees by enabling them to access and disseminate critical information.

New apps designed specifically to help refugees, however, carry costs: they require individuals to learn a new interface, may use more data, and are less trusted (because unfamiliar) than the apps refugees already use. Indeed, the mere act of downloading a new app requires users to break with existing behavioural patterns on their devices, reducing the probability that any app beyond popular social media and communication apps will be successfully adopted.

One solution to this problem is to incorporate a reporting mechanism, using bots, within a familiar app such as Facebook Messenger. Since many refugees already use Facebook Messenger to communicate, they not only trust the app more but also already know how to use it. With a bot inside Facebook Messenger, a refugee could submit information about a human rights abuse simply by sending a message and responding to questions. These questions would be designed to prompt users for verifiable information, and would also record their geolocation and other important metadata needed to verify the report.
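The question-and-answer flow such a bot might follow can be sketched independently of any particular messaging platform. A real Messenger bot would send and receive messages via the platform's webhook API; the sketch below models only the conversational state machine, and the field names and prompts are purely illustrative, not The Whistle's actual schema.

```python
# A minimal sketch of a reporting-bot conversation: the bot asks a fixed
# sequence of questions, storing each reply under a field name that the
# verification back end can later cross-check. Field names are illustrative.

PROMPTS = [
    ("what",  "What happened? Describe the incident briefly."),
    ("when",  "When did it happen? (date and approximate time)"),
    ("where", "Where did it happen? (place name or coordinates)"),
    ("media", "Do you have photos or videos? If so, attach them."),
]

def start_report():
    """Return a fresh, empty report with a step counter."""
    return {"answers": {}, "step": 0}

def next_prompt(report):
    """The question the bot should send next, or None when the report is complete."""
    if report["step"] >= len(PROMPTS):
        return None
    return PROMPTS[report["step"]][1]

def record_answer(report, text):
    """Store the witness's reply under its field name and advance one step."""
    key = PROMPTS[report["step"]][0]
    report["answers"][key] = text
    report["step"] += 1
    return report
```

In a deployed bot, each call to `record_answer` would be triggered by an incoming webhook event, and attachments or shared locations would be stored alongside the text reply.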

Nevertheless, another, more pressing barrier to these new initiatives is the lack of Internet access. Although refugees have data plans from their home countries, they lose connectivity at sea and in rural areas, often only regaining it through an international carrier once they have reached their destination. The places refugees live, such as camps or rural areas, often lack digital networks and infrastructure, or offer only expensive connectivity. A short clip created by BBC Media Action, based on their research with DAHLIA, simulates the reality of a refugee’s access to the Internet as well as their use of social media and communication apps. Given the scarcity of Internet access that refugees face daily, many mobile apps created in response to the refugee crisis will have little, if any, impact on refugees’ real situations.

Technology designed to aid refugees must therefore fit into their daily lives, which often include limited or no access to the Internet and to the aforementioned communication apps. It is, however, possible to produce scripts which, combined with platforms and tools such as Twilio and Google Sheets, can act as SMS bots capable of surveying phones and collecting data without the need for a data plan. Such an endeavour would nevertheless still leave open questions around security, dissemination, and trust.
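One step of such an SMS survey can be sketched as the reply a script sends back to Twilio. When a text message arrives, Twilio calls a webhook and expects an XML response in its TwiML format; the respondent needs only plain SMS, no data plan. The survey questions below are illustrative, and a real deployment would also persist each answer (for example to a Google Sheet) between steps.

```python
# Sketch: building the TwiML reply for one step of an SMS survey.
# TwiML is the XML format Twilio expects from a webhook in answer to an
# incoming message; <Response><Message>...</Message></Response> sends a text.
from xml.sax.saxutils import escape

QUESTIONS = [
    "What happened?",
    "When did it happen?",
    "Where are you now?",
]

def twiml_reply(step):
    """Return the TwiML document for survey step `step` (0-based).

    Past the last question, the reply acknowledges the completed report.
    """
    if step < len(QUESTIONS):
        body = QUESTIONS[step]
    else:
        body = "Thank you. Your report has been recorded."
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            "<Response><Message>{}</Message></Response>").format(escape(body))
```

A small web handler would look up how many answers a given phone number has already sent, store the new one, and return `twiml_reply(step)` as its HTTP response.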

Any technology that aims to aid refugees must be directly related to the problems they encounter on the ground while also being adaptable to other circumstances. Overall, it is important that the design of such technologies results from a sustained relationship between local NGOs on the ground, refugees, and technologists.


For more information about refugee connectivity, see the UNHCR’s Introduction to Refugees and Connectivity.

Closing the Feedback Loop

Closing the feedback loop is one of the biggest tasks organizations face today. The feedback loop refers to the process through which organizations hear and respond to those in need through reporting mechanisms. In the realm of development assistance and human rights monitoring, a “broken feedback loop” describes a situation in which organizations hear but fail to respond to citizens in need. The feedback loop between citizens and organisations thus remains open indefinitely, leading to ineffectiveness as well as a lack of trust and accountability.

For years, international development agencies, governments, and nongovernmental organizations (NGOs) have been hindered by time, cost, and distance in closing the gap between hearing and responding to citizen feedback. Repairing a broken feedback loop means closing it: putting in place mechanisms that ensure all voices are fairly heard and elicit an appropriate response. These practices often stress citizen feedback, participation, or civic engagement.

In development assistance and human rights monitoring, it is important for citizen feedback initiatives to clearly identify the roles and responsibilities of all stakeholders within the feedback loop. This not only includes determining who is involved, but also their roles with regard to providing, monitoring, responding to, or acting on citizen feedback.

Modern information and communication technologies (ICTs) have laid the groundwork for connecting citizens on the ground and third-party stakeholders and NGOs at the response end. The Whistle aims to make use of recent innovations in ICTs and human rights to facilitate the reporting and effective verification of violations, using both human and algorithmic techniques.

Over the years, academics have identified three generations of fact-finding actors and tactics. First-generation fact-finding was undertaken by intergovernmental bodies and relied on traditional monitoring mechanisms, such as on-the-ground research, which were often too infrequent or too slow to be of use in cases warranting rapid responses. Second-generation fact-finding, primarily undertaken by international human rights organisations, relied heavily on witness reports. Third- and current-generation fact-finding is centred on the growing number of players and the use of ICTs for fact-finding. This generation is increasingly flexible in its methods and tools: intergovernmental organisations, INGOs, and NGOs now collect and verify facts through an array of mechanisms including crowdsourcing, social media, photographs, and videos.

Unfortunately, catalyzing and sustaining the motivation of citizens to participate are among the greatest challenges associated with feedback mechanisms. It cannot be taken for granted that citizens, when given the opportunity to provide feedback, will do so. Citizen confidence and education are important in this respect. If citizens are informed about digital human rights reporting mechanisms, and educated about how to use them and what happens to their data, they become more confident that their voice will be heard and responded to. If this is achieved, more abuses can be reported and more victims can receive redress. This underlines the importance of closing the feedback loop.

The Whistle aims to close the feedback loop by empowering all stakeholders within it. At the front end, it strengthens citizens’ capacity to submit information by prompting them for the required information fields. At the back end, the NGO Dashboard prompts fact-finders with several cross-check indicators, enabling them to rapidly filter out false information. This not only ensures that citizens upload verifiable information that can, and will, be acted upon by third-party organisations such as NGOs and INGOs, but also reduces the time, cost, and distance associated with hearing and responding to citizen witnesses of human rights abuses.


To learn more about our verification techniques, read our piece on The Art of Verification.

The Art of Verification

While the use of video to record human rights violations is not new, the drastic impact of new technologies, stemming from the increased availability of mobile phones and the proliferation of digital social networks, has profound implications for human rights researchers, NGOs, and international organisations. For example, a large number of videos on YouTube are in fact short scraped clips that have been re-uploaded. These recycled clips lack the original metadata necessary to verify time, location, or contextual information. Consequently, this common practice has required researchers to develop and learn new tools and methodologies to identify the original source. Such techniques often deviate from the analysis of traditional photographic and video materials collected during field research.

In the age of ubiquitous camera usage, editing capabilities, and citizen media, the risk of getting digitally shared information wrong is high if the proper steps are not taken. While citizen media provides an extreme level of detail (including landmarks, signage, or vegetation), a permanent record of a violation (if preserved correctly), and visual documentation of violations that would otherwise go undetected, it still requires a proper verification methodology.

PHEME, an organisation focused on the veracity of big data, relies solely on algorithms to verify social media content by analysing its information (lexical, semantic, and syntactic), cross-referencing data sources with open-source databases, and tracing the information’s diffusion (how, when, and by whom it was transmitted and received). This algorithm-based verification practice presents diffusion patterns in the form of “message types” (neutral, confirming, denying, or questioning a rumour) in order to verify or dispute a digital source. PHEME’s emphasis on algorithms undoubtedly has the potential to speed up the verification process, but it should ultimately be coupled with a human element of verification.

Algorithmically, it is possible to identify both verifiable and non-verifiable traits of a video by, for example, running a reverse image search to determine the video’s originality, the geo-location’s authenticity, and the context in which it was captured. Thumbnails can also be matched to specific locations identified on Google Street View, although this process often requires more human input.
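One simple building block behind this kind of image matching is perceptual hashing. The sketch below implements a difference hash (dHash): visually similar frames produce hashes with a small Hamming distance even after re-encoding or resizing, which is how a re-uploaded thumbnail can be matched against an original. For brevity an "image" here is just a grid of grayscale values; a real pipeline would first resize and grayscale frames with an imaging library, and this is only one indicator among many, not a verification verdict.

```python
# Sketch: difference hash (dHash) for near-duplicate image detection.
# Each bit records whether a pixel is brighter than its right-hand neighbour;
# the pattern of gradients survives re-compression better than raw pixels do.

def dhash(pixels):
    """pixels: rows of grayscale values (each row one wider than the hash).

    Returns a bit string: '1' where a pixel is brighter than its neighbour.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return "".join(bits)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same source image."""
    return sum(x != y for x, y in zip(a, b))
```

In practice one would hash a suspect thumbnail and compare it against a database of hashes of known frames, flagging matches below some distance threshold for human review.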

Even if an algorithm deems a video original and its geo-location authentic, that does not mean that what the video purports to show is in fact true (or vice versa). It is thus no surprise that, traditionally, human rights reporters, NGOs, and international organisations deploy fact-finders on the ground to verify the situation, through interviews or field reports. For this reason, we believe that the proper approach to social media verification remains mainly human-centred. The challenge for new tools will be to help circumvent the inherent dangers and obstacles of hard-to-read places by giving a greater overview of the context of reported media, via better means of cross-referencing. This enables fact-finders to make a sounder judgement.

Algorithmic verification techniques are an important part of the process, but we believe that analysing citizen media should by no means be considered separate from traditional fact-finding, which is largely centred on witness testimony and fact-finder reports. The Whistle cross-checks social media reports by employing both top algorithmic indicators and human input. Aware of how time-consuming verification is for fact-finders, The Whistle does much of the work for them while still facilitating human input and involvement in the process. The art of verification, for The Whistle, is a mix of algorithmic and necessary human involvement.

The Whistle aims to speed up and simplify the verification process by prompting users to supplement their human rights reports with metadata and corroborating information from other witnesses. The Whistle then runs back-end cross-checks involving a variety of third-party information sources and tools, such as weather and map databases. By doing so, The Whistle provides human rights researchers, NGOs, and international organisations with a wealth of cross-referenced information, reducing both the time and digital expertise necessary to verify digital reports of human rights violations.
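The shape of such back-end cross-checking can be sketched as an aggregator that runs each indicator against a report and collects pass/fail/unknown results for a dashboard. The check names, report fields, and data sources below are illustrative assumptions, not The Whistle's actual indicators; real checks would query live weather archives and map services.

```python
# Sketch: aggregating cross-check indicators into a single result set.
# Each check is a function of the report; missing metadata yields "unknown"
# rather than a hard failure, mirroring how indicators inform, not decide.

def run_cross_checks(report, checks):
    """Run every named check against `report`; return {name: pass/fail/unknown}."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "pass" if check(report) else "fail"
        except KeyError:  # a field the check needs is absent from the report
            results[name] = "unknown"
    return results

# Illustrative indicators: does the reported weather match an archive value,
# and does the claimed coordinate fall inside the named region's bounding box
# (lat_min, lat_max, lon_min, lon_max)?
CHECKS = {
    "weather": lambda r: r["reported_weather"] == r["archive_weather"],
    "location": lambda r: (r["bbox"][0] <= r["lat"] <= r["bbox"][1]
                           and r["bbox"][2] <= r["lon"] <= r["bbox"][3]),
}
```

A dashboard would render these results per report, letting a fact-finder see at a glance which indicators corroborate the account and which need human follow-up.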

The Whistle at RightsCon: Calls for Collaboration

In March 2016, three members of The Whistle team traveled to San Francisco for the annual RightsCon conference, the largest get-together of those working on the intersection of technology and human rights, consisting of human rights advocates, researchers, lawyers, academics, tech company representatives and government officials. We gave a brief presentation on The Whistle’s initiatives in the conference Demo room, providing a quick overview of the aims of the project and potential collaboration efforts.

The Whistle, a digital human rights reporting platform, would enable fact-finders to receive digital reports of human rights violations from hard-to-reach places, and allow civilian witnesses to document these events as they unfold. At the same time, however, the stakes are high in terms of getting this type of information wrong, in part because it is relatively manipulable: some images or videos may be staged, used as propaganda, or serve other misleading purposes. Fortunately, there is a growing number of tools to support the cross-checking of digital information. We now have tools to cross-check location and time, extract metadata, unearth details of a source’s digital footprint, trace back the provenance of information, and so on. Yet despite these tools, civilian witness information is not being used as human rights evidence as much as one might expect. We’ve identified at least three causes we want to highlight – though we know there are more – which we refer to as the ‘bottleneck’.

1. Civilian witnesses’ lack of digital and information literacy
We have heard from fact-finders that civilian witnesses do not necessarily know what metadata is or that they should include it with their information. This paucity of metadata in their information makes it much harder for fact-finders to verify it.

2. Human rights fact-finders’ lack of digital literacy with respect in particular to digital verification
Though the fundamentals of verification remain the same, the tactics and tools for verifying digital information are new and changing rapidly. This complexity might be discouraging fact-finders from turning to digital information from civilian witnesses.

3. Human rights fact-finders’ lack of time
Even for those who are up to speed on digital verification tools, this process takes time. Individually, each of those tools may only provide a limited indication – if anything – about the veracity of the reported information, and opening up each tool and entering the information to be cross-checked is not only time-consuming but a nuisance.

The question then, in the context of what one journalist called a ‘big data problem’ in Syria, and a limited number of hours in the day, is who gets heard by fact-finders? We are particularly worried – given the complexity of verification and the time pressures of fact-finding – that those who are easier to verify are more likely to be heard. Those harder to verify, due to a lack of digital literacy or digital footprint, may be less likely to be heard, and it is precisely these civilian witnesses who may be most in need of human rights mechanisms.

The Whistle is currently in the research and design phase, funded by the ESRC and by the EC’s Horizon 2020 programme. We are working with one collaborator, WikiRate, which aims to improve corporate accountability, including through workers’ reports of abuses – which is where The Whistle plays an important role. We are actively looking for further collaborators, such as active fact-finding NGOs and tech companies.

Below are the slides which accompanied our presentation at RightsCon.

 

10 Things to Know About Social Media Verification

To give you a brief overview of some of the most important aspects and challenges of verification, and of how The Whistle fits into this picture, we have come up with the following list of ’10 Things to Know About Social Media Verification’.

 

1. Collecting verifiable information

Much of the burden that the verification process places on the reputation of the source can be lifted at the input stage by prompting witnesses to submit as much corroborating information as possible. One of The Whistle’s main aims is to empower civilians, specifically the uninformed witness, with regard to social media verification in a human rights context by providing them with a channel to submit verifiable data. Collecting proper documentation is key, especially in times of crisis, when social media witness reports are often fuelled by emotion.

2. Metadata
In order for social media verification to work, organisations must pay attention to metadata. Metadata comes in the form of descriptive titles, text, keywords, dates and timestamps, as well as location. Often, when content is uploaded to popular media sites such as Facebook, Twitter, and Instagram, metadata is stripped or altered. In such cases, it may be unclear whether a certain image or video was uploaded before or after a particular event, and therefore whether it can be verified. For example, YouTube alters the date stamps of its videos to reflect Pacific Standard Time, which sometimes makes a video appear to have been uploaded before the event it claims to show. Moreover, only a small percentage of content is automatically geolocated, and establishing the location of a specific image or video is more difficult when the metadata recording the original time and date has been stripped or altered. To verify an image or video, human rights defenders commonly have to corroborate the location, time, and approximate date stamp to make sure it was taken in the claimed context.
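The Pacific-time quirk above can be made concrete with a short timezone calculation: an event filmed in the early hours local time can carry an upload date that is still "the day before" once converted to US Pacific time. The offsets in the example are assumptions for illustration (a UTC+3 location, standard Pacific time of UTC-8).

```python
# Sketch: how a Pacific-time date stamp can pre-date the event it shows.
from datetime import datetime, timedelta, timezone

def upload_date_pacific(event_local, local_utc_offset_hours,
                        pacific_offset_hours=-8):
    """Date stamp a Pacific-time site would show for an upload at event_local,
    where event_local is a naive datetime in a UTC+local_utc_offset_hours zone."""
    local_tz = timezone(timedelta(hours=local_utc_offset_hours))
    pacific = timezone(timedelta(hours=pacific_offset_hours))
    return event_local.replace(tzinfo=local_tz).astimezone(pacific).date()

# A video filmed at 02:00 local time on 5 March in a UTC+3 location is
# stamped 4 March in Pacific time - apparently "before" the event.
shown = upload_date_pacific(datetime(2016, 3, 5, 2, 0), 3)
```

A fact-finder aware of this conversion can rule out a false "uploaded before it happened" flag rather than discarding a genuine report.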

3. Source credibility
Distinguishing source credibility is a significant factor in the social media verification process. There are three kinds of people who upload media of human rights violations: fact-finders, witnesses, and perpetrators. Fact-finders corroborate information in order to make a claim regarding an event. Witnesses are civilian witnesses of human rights violations who report information through social media platforms. Perpetrators, however, are those who post false or misleading information online. It is therefore important to keep in mind that deliberate hoaxes do happen, and social media verification must include mechanisms designed to differentiate and identify specific sources.

4. Content credibility
The actual content uploaded to social media platforms or submitted to verification forms must be scrutinized to determine its veracity. Today, it is easy to edit and adjust images and videos to make it look as if something happened when in reality it didn’t; just because it’s a video or an image doesn’t mean it depicts the truth. It is possible for people to stage a seemingly heinous event and, albeit rarely, deceive not only audiences on the internet but, crucially, human rights defenders and organizations.

5. Pluralism
Arming civilian witnesses with knowledge about digital information verification, for example the kinds of metadata that make a claim easier to verify and then disseminate, empowers them to provide meaningful information. Social media verification requires citizens to be educated enough to provide information that can be verified.

6. Speed
As with most processes, gathering and consuming information has a cost. Economically, the less time the input and verification processes take, the greater the number of civilian witnesses heard. Increasing the amount of verifiable information at the input stage speeds up verification: the human rights organization does not have to spend its resources retrieving or corroborating information.

7. Collection and verification techniques
Techniques for collecting and verifying social media content have changed alongside advances in technology. The widespread use of smartphones has equipped everyday citizen witnesses with the ability to report human rights violations, aiding the collection of information. The most popular social media verification techniques mix traditional human-led investigation with technology-based tools. However, the use of technology-based tools alongside human-led techniques is currently limited, given how recently the field of digital media verification has emerged.

8. Common verification processes
The least common verification processes are humanitarian; most are characterised by commercial and government aims and incentives. Additionally, of those organisations that have attempted to apply social media verification to humanitarian practice, not every one processing content makes a definitive claim as to its veracity.

9. Not all content will fulfill every verification check
It is rare for any single piece of content to meet all the requirements posed by human and technological verification techniques. A piece of content may fail a certain technological check yet, through further human corroboration, turn out to be verifiable, and vice versa. We must therefore be careful not to become over-reliant on technological methods.

10. Social media is not a universal solution
While considering the potential of social media verification in an ever-globalising world characterized by increasing technological innovation, it is important to remember that social media verification is not a universal solution to human rights violation reporting. Disparities in infrastructure, access to technology, and state surveillance can easily lead to the underreporting of human rights violations.

For further reading see The Whistle’s report on The Digital Information Verification Field under the ‘Research’ tab on The Whistle’s website.