The Whistle has been featured as a case study on the University of Cambridge’s Research Impact page.
A mobile app simplifies the process of reporting for the witness, whilst simultaneously prompting them to include the metadata required for verification. Aside from providing more metadata for the fact-finder to corroborate, The Whistle also serves to educate witnesses about what data is most helpful and why. It can also signpost them to sources of information and support around security and human rights.
Looking forward, the project plans to facilitate the reporting and verification of civilian witness accounts of human rights abuses in partnership with NGOs across the world.
Penelope Sonder from The IPF spoke to Rebekah Larsen, a Research Assistant at The Whistle, about the necessity of digital human rights reporting and the challenges associated with verification, privacy, and communication.
Acting as a new kind of mediator, The Whistle aims to help organisations to correctly and quickly verify as many reports as possible so they can actually address the human rights violations. Social media has done wonders for citizen journalism, but it also means that human rights organisations are often short on time and resources because of the sheer volume of material to be verified and the diffusion and complexity of digital tools used in verification.
“The Whistle aims to help human rights organisations verify more reports and more quickly. This way, more voices are heard and more human rights violations can be addressed.”
Click here to read the article on The IPF’s website.
The article provides context as to why digital verification is needed in the realm of human rights fact-finding.
“In our digitally enabled world, a legion of ‘civilian witnesses’ has sprung up: individuals “in the wrong place at the wrong time” who capture an event and then publish the scrap of footage or the incriminating photograph on social media. But amid the fog of propaganda, hoaxes and digital manipulation, how can we tell what’s real and what’s fake?”
It goes on to detail The Whistle's initiative to provide NGOs with a tool to help make the verification process more efficient and effective.
“Cambridge researchers are developing an automated tool, ‘the Whistle’, to help verify the authenticity of digital evidence.”
Smartphones are critical for refugees, not only to communicate with family and friends but to serve as a potential reporting mechanism for human rights abuses.
WhatsApp, a free app which enables users to make calls, send texts, and share photos and videos globally, provides refugees with a way of communicating with family or friends. Facebook Messenger allows messages and calls between Facebook users and is accessible to anyone with a Facebook account. Maps.me enables users to find their geographical location anywhere in the world, including at sea, wirelessly. Dropbox enables users to store, access, and submit critical information, such as immigration forms, using a third-party digital space instead of their device's storage.
These apps, stemming from existing web-based services designed to allow free and, essentially, limitless communication and sharing, have empowered refugees by enabling them to access and disseminate critical information.
New apps designed to help refugees, however, carry costs: they require individuals to learn a new interface, may use more data, and are less trusted (due to unfamiliarity) than those already in use. Indeed, the mere act of downloading an app in the first place requires users to break with existing behavioural patterns on their devices, reducing the probability that any app beyond the popular social media and communication apps will be successfully adopted.
One solution to this problem is incorporating a reporting mechanism within a familiar app such as Facebook Messenger, using bots. Since many refugees already use Facebook Messenger to communicate, not only is there a higher level of trust in the app, but they already know how to use it. By including a bot in Facebook Messenger, a refugee would be able to submit information about a human rights abuse simply by sending a message and responding to questions. These questions would be designed to prompt users for verifiable information, and would also record their geo-location and other important metadata needed to verify their report.
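By way of illustration, the scripted question flow described above might look something like the following minimal Python sketch. The questions, field names, and `ReportBot` class are purely illustrative assumptions, not The Whistle's actual implementation:

```python
# Minimal sketch of a scripted reporting flow, as a Messenger-style bot
# might run it. The questions and field names are illustrative only.
QUESTIONS = [
    ("what", "What happened? Describe the event briefly."),
    ("when", "When did it happen? (date and approximate time)"),
    ("where", "Where did it happen? (nearest town or landmark)"),
    ("media", "Can you attach a photo or video of the event?"),
]

class ReportBot:
    """Walks a witness through the questions one message at a time."""

    def __init__(self):
        self.answers = {}
        self.step = 0

    def handle_message(self, text, geolocation=None):
        """Record the reply to the current question and ask the next one."""
        if self.step > 0:
            field, _ = QUESTIONS[self.step - 1]
            self.answers[field] = text
        if geolocation is not None:
            # Messenger-style platforms can share the sender's location
            # as metadata alongside the message itself.
            self.answers["geolocation"] = geolocation
        if self.step < len(QUESTIONS):
            _, prompt = QUESTIONS[self.step]
            self.step += 1
            return prompt
        return "Thank you. Your report has been recorded."

bot = ReportBot()
print(bot.handle_message("hi"))  # bot replies with the first question
print(bot.handle_message("A protest was dispersed violently",
                         geolocation=(33.51, 36.29)))  # next question
```

The point of the sketch is that the witness never fills in a form: each reply is captured into a structured report, and the platform-supplied geo-location arrives as verification metadata without extra effort from the sender.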
Nevertheless, another, more pressing barrier to these new initiatives is the lack of Internet access. Although refugees may have data plans from their home countries, they lose connectivity at sea and in rural areas, often only re-establishing a connection through an international carrier once they have reached their destination. The places refugees live, such as camps or rural areas, often lack digital networks and infrastructure, or have only expensive connectivity available. This short clip created by BBC Media Action and their research with DAHLIA simulates the reality of a refugee's access to the Internet as well as their use of social media and communication apps. Because of the scarcity of Internet access that refugees face on a daily basis, many mobile apps created in response to the refugee crisis will have little, if any, impact on refugees' real situations.
Technology designed to aid refugees must therefore fit into their daily lives, which often include limited or no access to the Internet and the afore-mentioned communication apps. It is, however, possible to produce scripts which, when combined with platforms and tools such as Twilio and Google Sheets, can act as SMS bots capable of surveying phones and collecting data without the need for a data plan. Such an endeavour would nevertheless still leave open-ended questions in terms of security, dissemination, and trust.
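As a rough sketch of how such an SMS bot could work, the function below plays the role of a webhook handler: it receives an incoming text and returns a TwiML reply of the kind Twilio expects. The survey questions, the "START" keyword, and the in-memory `sessions` store (standing in for Google Sheets) are assumptions for illustration only:

```python
# Sketch of the SMS side of such a survey: a webhook-style handler that,
# given an incoming text, returns the next question wrapped in TwiML.
# Storage is a plain dict standing in for Google Sheets.
from xml.sax.saxutils import escape

SURVEY = [
    "Q1: In which town are you currently staying?",
    "Q2: Do you have reliable access to the Internet? (yes/no)",
]

sessions = {}  # phone number -> list of answers collected so far

def sms_webhook(from_number, body):
    """Return a TwiML <Response> carrying the next question for this sender."""
    answers = sessions.setdefault(from_number, [])
    text = body.strip()
    if text.upper() != "START":
        answers.append(text)  # record the reply to the previous question
    if len(answers) < len(SURVEY):
        reply = SURVEY[len(answers)]
    else:
        reply = "Thank you, your answers have been recorded."
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            "<Response><Message>%s</Message></Response>" % escape(reply))

print(sms_webhook("+441234567890", "START"))
```

Because SMS works wherever there is basic cellular coverage, a survey of this shape can reach phones that a smartphone app never could; the unresolved questions are, as noted, security of the channel and trust in the number sending the questions.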
Any form of technology which aims to aid refugees must be directly related to problems they encounter on the ground while also being adaptable to other circumstances. Overall, it is important that the design of such technologies results from a sustained relationship between local NGOs on the ground, refugees, and technologists.
For more information about refugee connectivity, see the UNHCR's Introduction to Refugees and Connectivity.
Closing the feedback loop is one of the biggest tasks organizations face today. The feedback loop refers to the process through which organizations hear and respond to those in need through reporting mechanisms. In the realm of development assistance and human rights monitoring, a “broken feedback loop” describes a situation in which organizations hear but fail to respond to citizens in need. The feedback loop between citizens and organisations thus remains open indefinitely, leading to ineffectiveness as well as a lack of trust and accountability.
For years, international development agencies, governments, and nongovernmental organizations (NGOs) have been hindered by time, cost, and distance in closing the gap between hearing and responding to citizen feedback. Repairing a broken feedback loop means closing it: putting in place mechanisms that aim to ensure all voices are fairly heard and elicit an appropriate response. These practices often stress citizen feedback, participation, or civic engagement.
In development assistance and human rights monitoring, it is important for citizen feedback initiatives to clearly identify the roles and responsibilities of all stakeholders within the feedback loop. This not only includes determining who is involved, but also their roles with regard to providing, monitoring, responding to, or acting on citizen feedback.
Modern information and communication technologies (ICTs) have laid the groundwork for connecting citizens on the ground with third-party stakeholders and NGOs at the response end. The Whistle aims to make use of recent innovations in the field of ICTs and human rights to facilitate the reporting and effective verification of violations, using human and algorithmic techniques.
Over the years, academics have identified three generations of fact-finding actors and tactics. First generation fact-finding was undertaken by intergovernmental bodies and involved traditional monitoring mechanisms, such as on-the-ground research, which were often too infrequent or not timely enough to be of use in cases warranting rapid responses. Second generation fact-finding, primarily undertaken by international human rights organisations, relied heavily on witness reports. The third and current generation of fact-finding is centred on the growing number of players and the use of ICTs, and is increasingly flexible in its methods and tools. Intergovernmental organisations, INGOs, and NGOs now collect and verify facts through an array of methods and mechanisms including crowd-sourcing, social media, photographs, and videos.
Unfortunately, catalyzing and sustaining the motivation of citizens to participate are among the greatest challenges associated with feedback mechanisms. For this reason, it cannot be taken for granted that citizens, when given the opportunity to provide feedback, will do so. Citizen confidence and education are important in this respect. If citizens are informed about digital human rights reporting mechanisms and educated about how to use them and what happens to their data, they become more confident that their voice will be heard and responded to. If this is achieved, more abuses could be reported, and more victims could receive redress. This underlines the importance of closing the feedback loop.
The Whistle aims to close the feedback loop by empowering all stakeholders within it. It strengthens citizens' capacity to submit information by prompting them with required information fields at the front end. The NGO Dashboard prompts fact-finders with several cross-check indicators, enabling them to rapidly filter out false information. This not only ensures that citizens upload verifiable information that can, and will, be acted upon by third-party organisations such as NGOs and INGOs, but also reduces the time, cost, and distance associated with hearing and responding to citizen witnesses of human rights abuses.
To learn more about our verification techniques, read our piece on The Art of Verification.
While the use of video to record human rights violations is not new, the drastic impact of new technologies, stemming from the increased availability of mobile phones and the proliferation of digital social networks, has profound implications for human rights researchers, NGOs, and international organisations. For example, a large number of videos on YouTube are in fact short scraped clips that have been re-uploaded. These recycled clips lack the original metadata necessary to verify time, location, or contextual information. Consequently, this common practice has required researchers to develop and learn new tools and methodologies to identify the original source. Such new techniques often deviate from analysing traditional photographic and video materials collected during field research.
In the age of ubiquitous camera usage, editing capabilities, and citizen media, the risks of getting digitally shared information wrong are high if the proper steps are not taken. While citizen media provides an extreme level of detail (including landmarks, signage, or vegetation), a permanent record of a violation (if preserved correctly), as well as visual documentation of violations that would otherwise go undetected, it nonetheless requires a proper verification methodology.
PHEME, an organisation which focuses on the veracity of big data, relies solely on algorithms to verify social media content by analysing its information (lexical, semantic, and syntactic), cross-referencing data sources with open-source databases, and examining the information's diffusion (how, when, and by whom the information was transmitted and received). This algorithm-based verification practice presents diffusion patterns in the form of “message types” (neutral, confirming, denying, or questioning a rumour) in order to verify or dispute a digital source. PHEME's emphasis on algorithms undoubtedly has the potential to speed up the verification process, but should ultimately be coupled with a human element of verification.
Algorithmically, it is possible to identify both verifiable and non-verifiable traits of a video by, for example, running a reverse image search to determine the video's originality, the geo-location's authenticity, and the context in which it was captured. Thumbnails can also be matched to specific locations identified on Google Street View, although this process often requires more human input.
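One family of algorithmic checks behind reverse image search can be illustrated with an "average hash", a fingerprint that lets re-uploaded copies be matched against an original even after recompression. The toy 4x4 grids below stand in for downscaled greyscale frames; real implementations first downscale the image (typically to 8x8) before hashing:

```python
# A toy illustration of one algorithmic check: the "average hash"
# fingerprint used by reverse-image-search tools. Images are shown here
# as 4x4 greyscale grids of brightness values (0-255).

def average_hash(pixels):
    """Bit string: 1 where a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; small distances suggest the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200, 10, 200],
            [10, 200, 10, 200],
            [200, 10, 200, 10],
            [200, 10, 200, 10]]
# A re-uploaded copy: recompression has shifted the pixel values slightly.
reupload = [[12, 190, 14, 195],
            [11, 198, 12, 205],
            [195, 12, 202, 14],
            [205, 9, 198, 12]]

print(hamming_distance(average_hash(original), average_hash(reupload)))  # 0
```

A distance of zero here flags the re-upload as a copy of the original despite the pixel-level changes, which is exactly how recycled clips stripped of their metadata can still be traced back to an earlier upload.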
That an algorithm deems a video to be original and its geo-location to be authentic does not mean that what the video purports to show is in fact true (or vice versa). It is thus no surprise that, traditionally, human rights reporters, NGOs, and international organisations deploy fact-finders on the ground to verify the situation, either by conducting interviews or through field reports. For this reason, we believe that the proper approach to social media verification is still mainly human-centred. The challenge for new tools will be to facilitate the circumvention of the inherent dangers and obstacles of hard-to-read places, by allowing a greater degree of overview of the context of reported media, via the provision of means for better cross-referencing. This enables fact-finders to make a sounder judgement.
Although algorithmic verification techniques are an important part of the process, we believe that analysing citizen media should by no means be considered a separate endeavour from traditional fact-finding, which is largely centred on witness testimony and fact-finder reports. The Whistle cross-checks social media reports by employing both top algorithmic indicators and human input. We are aware of how time-consuming it is for fact-finders to verify information, so The Whistle facilitates human input and involvement in the verification process. The art of verification, for The Whistle, is a mix of algorithmic and necessary human involvement.
The Whistle aims to speed up and simplify the verification process by prompting users to supplement their human rights reports with metadata and corroborating information from other witnesses. The Whistle app then engages back-end cross-checks involving a variety of third-party information sources and tools, such as weather and map databases. By doing so, The Whistle provides human rights researchers, NGOs, and international organisations with a wealth of cross-referenced information, reducing both the time and digital expertise necessary to verify digital reports of human rights violations.
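A hedged sketch of what such a back-end cross-check might look like: each indicator compares a claim in the report against a third-party lookup. The stub "databases", field names, and `cross_check` function below are illustrative assumptions, not The Whistle's actual data sources; in practice the lookups would be API calls to weather archives and map services:

```python
# Sketch of back-end cross-checking: each indicator compares a claim
# from the report against a third-party lookup. The lookups here are
# stubbed with fixed data standing in for real weather/map services.

WEATHER_ARCHIVE = {("Aleppo", "2016-02-01"): "rain"}      # stub database
KNOWN_LANDMARKS = {"Aleppo": {"citadel", "clock tower"}}  # stub database

def cross_check(report):
    """Return a dict of indicator -> True/False/None (None = no data)."""
    checks = {}
    recorded = WEATHER_ARCHIVE.get((report.get("city"), report.get("date")))
    checks["weather_consistent"] = (
        None if recorded is None else recorded == report.get("weather"))
    landmarks = KNOWN_LANDMARKS.get(report.get("city"))
    checks["landmark_consistent"] = (
        None if landmarks is None
        else report.get("landmark") in landmarks)
    return checks

report = {"city": "Aleppo", "date": "2016-02-01",
          "weather": "rain", "landmark": "citadel"}
print(cross_check(report))
# {'weather_consistent': True, 'landmark_consistent': True}
```

Each indicator on its own proves little; the value lies in presenting the whole set of passes, failures, and gaps to the fact-finder at once, instead of requiring them to open each tool separately.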
In March 2016, three members of The Whistle team traveled to San Francisco for the annual RightsCon conference, the largest gathering of those working at the intersection of technology and human rights, consisting of human rights advocates, researchers, lawyers, academics, tech company representatives and government officials. We gave a brief presentation on The Whistle's initiatives in the conference Demo room, providing a quick overview of the aims of the project and potential collaboration efforts.
The Whistle, a digital human rights reporting platform, enables fact-finders to get digital reports of human rights violations from hard-to-reach places, and allows civilian witnesses to document these events as they unfold. At the same time, however, the stakes are high in terms of getting this type of information wrong. This is in part because it is relatively manipulable: some images or videos may be staged, used as propaganda, or put to other misleading purposes. Fortunately, there is a proliferating number of tools to support the cross-checking of digital information. We now have tools to cross-check location and time, extract metadata, unearth details of the source's digital footprint, trace back the provenance of the information, and so on. Yet despite these tools, civilian witness information is not being used as human rights evidence as much as one might expect. We've identified at least three causes we want to highlight – though we know there are more – which we refer to as the 'bottleneck'.
1. Civilian witnesses’ lack of digital and information literacy
We have heard from fact-finders that civilian witnesses do not necessarily know what metadata is or that they should include it with their information. This paucity of metadata in their information makes it much harder for fact-finders to verify it.
2. Human rights fact-finders' lack of digital literacy, in particular with respect to digital verification
Though the fundamentals of verification remain the same, the tactics and tools for verifying digital information are new and changing rapidly. This complexity might be discouraging fact-finders from turning to digital information from civilian witnesses.
3. Human rights fact-finders’ lack of time
Even for those who are up to speed on digital verification tools, this process takes time. Individually, each of those tools may only provide a limited indication – if anything – about the veracity of the reported information, and opening up each tool and entering the information to be cross-checked is not only time-consuming but a nuisance.
The question then, in the context of what one journalist called a ‘big data problem’ in Syria, and a limited number of hours in the day, is who gets heard by fact-finders? We are particularly worried – given the complexity of verification and the time pressures of fact-finding – that those who are easier to verify are more likely to be heard. Those harder to verify, due to a lack in digital literacy or footprint, may be less likely to be heard, and it is precisely these civilian witnesses who may be most likely to need human rights mechanisms.
The Whistle is currently in the research and design phase, funded by the ESRC and by the EC's Horizon 2020 programme. We are working with one collaborator, WikiRate, which aims to improve corporate accountability, including through workers' reports of abuses, which is where The Whistle plays an important role. We are actively looking for further collaborators, such as active fact-finding NGOs and tech companies.
Below are the slides which accompanied our presentation at RightsCon.
On Friday the 10th of June, The Whistle took part in a workshop organised by the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), Cambridge. The workshop brought together technological initiatives that could potentially help alleviate the plight of those who live on less than $2 a day.
In their own words: ‘Most of the benefits of cutting edge science are enjoyed by the world’s wealthiest 10%, while the bottom 50% bear the brunt of the externalities that new technologies so often generate’. The Whistle was one of a host of initiatives, including “DigiTally” (simplifying offline payments) and “Networking for Development” (developing drones that deliver network access to hard-to-reach areas).
Our team's greatest hope is that anyone with access to a phone or computer will be able to report human rights violations, while a verification “front-end” renders this data more readily useful and reliable in a shorter amount of time. We have no illusions about the strong impact of inequality, particularly in terms of the required level of digital literacy, but through extensive interviews and analysis of a wide range of diverse organisations and their respective civilian groups of focus, we can make some strides in aiding the human rights fact-finding process. The workshop was an opportunity to gather feedback and comments on the way The Whistle works, as well as an occasion to extend an invitation for collaboration.
If you have any comments, would like to collaborate, or could even provide a potential test-case, we can be contacted via our website (thewhistle.org), via Twitter (@whistlereporter), or directly through Dr Ella McPherson (firstname.lastname@example.org).
To give you a brief overview of some of the most important aspects and challenges of verification and how The Whistle fits into this picture, we have come up with the following list of '10 Things to Know About Social Media Verification'.
1. Collecting verifiable information
Much of the burden of the verification process can be taken off the reputation of the source at the input stage by prompting witnesses to submit as much corroborating information as possible. One of The Whistle's main aims is the empowerment of civilians, specifically the uninformed witness, with regard to social media verification in a human rights violation context, by providing them with a channel to submit verifiable data. Collecting proper documentation is key, especially in times of crisis when social media witness reports are often fuelled by emotions.
2. The importance of metadata
In order for social media verification to work, organisations must pay attention to the importance of metadata. Metadata comes in the form of descriptive titles, text, keywords, dates and timestamps, as well as location. Often, when content is uploaded to popular media sites such as Facebook, Twitter, and Instagram, metadata is stripped or altered. In such cases, it may be unclear whether a certain image or video was uploaded before or after a particular event, and therefore whether or not it can be verified. For example, YouTube alters the date stamps of its videos to represent Pacific Standard Time. This sometimes makes it appear as if a video was uploaded before the event it claims to show. Moreover, only a small percentage of content is automatically geolocated, and the task of establishing the location of a specific image or video is more difficult when the metadata surrounding the original time and date has been stripped or altered. To verify an image or video, human rights defenders commonly have to corroborate the location, time, and approximate date stamp in order to make sure that the image or video was taken in a specific context.
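The date-stamp pitfall is easy to demonstrate: an event filmed shortly after midnight UTC appears, once the timestamp is rendered in Pacific Standard Time (UTC-8), to have been uploaded the previous day. The dates below are invented for illustration:

```python
# A concrete illustration of the date-stamp pitfall: a video uploaded
# shortly after midnight UTC appears, in Pacific Standard Time (UTC-8),
# to have been uploaded the previous day.
from datetime import datetime, timezone, timedelta

PST = timezone(timedelta(hours=-8))  # fixed offset; ignores daylight saving

upload_utc = datetime(2016, 6, 5, 2, 30, tzinfo=timezone.utc)
upload_pst = upload_utc.astimezone(PST)

print(upload_utc.date())  # 2016-06-05
print(upload_pst.date())  # 2016-06-04 -- a day *before* the actual upload
```

A fact-finder who reads the displayed PST date at face value would conclude the video predates the event it shows, so the displayed timestamp must always be converted back to the event's local time before any judgment is made.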
3. Source credibility
The importance of distinguishing source credibility is a significant factor in the social media verification process. There are three kinds of people who upload media of human rights violations: fact-finders, witnesses, and perpetrators. Fact-finders are those who corroborate information in order to make a claim regarding an event. Witnesses are, of course, civilian witnesses to human rights violations who are able to report information through social media platforms. Perpetrators, however, are those who post false or misleading information online. It is therefore important to keep in mind that deliberate hoaxes do happen, and social media verification must include mechanisms designed to differentiate and identify specific sources.
4. Content credibility
The actual content of what is being uploaded onto social media platforms or submitted to verification forms must be scrutinized in order to determine its veracity. Today, it is easy to edit and adjust images and videos to make them look as if something has happened when in reality it hasn't; just because it's a video or an image doesn't mean it's depicting the truth. It is possible for people to stage a seemingly heinous event and, albeit rarely, manipulate not only audiences on the Internet but, crucially, human rights defenders or organizations.
5. Educating civilian witnesses
Arming civilian witnesses with knowledge about digital information verification (for example, the kinds of metadata that can make a claim easier to verify and then disseminate) empowers them to provide meaningful information. Social media verification requires citizens to be educated enough to provide information that can be verified.
6. The cost of verification
As with most processes, gathering and consuming information has a cost. Economically, the less time input and verification processes take, the greater the number of civilian witnesses heard. Increasing the amount of verifiable information at the input stage quickens the verification process; the human rights organization is not required to spend its resources on retrieving or corroborating information.
7. Collection and verification techniques
Techniques for the collection and verification of social media content have changed alongside advances in technology. The widespread use of smartphones has equipped everyday citizen witnesses with the ability to report human rights violations, aiding the collection of information. The most popular social media verification techniques include a mix of traditional human-led investigation techniques and technology-based tools. However, the employment of technology-based tools alongside human-led verification techniques is currently limited due to the recent emergence of the field of digital media verification.
8. Common verification processes
The least common type of verification processes are humanitarian; most are characterised by commercial and governmental aims and incentives. Additionally, of those organisations that have attempted to apply social media verification to humanitarian practice, not every organisation processing content makes a definitive claim as to its veracity.
9. Not all content will fulfill every verification check
It is rare for any single piece of content to meet all the requirements posed by both human and technological verification techniques and processes. A piece of content may fail a certain technological aspect of verification but, through further human corroboration, turn out to be verifiable after all. The reverse also happens frequently, so we must be careful not to create an over-reliance on technological methods.
10. Social media is not a universal solution
When considering the potential of social media verification in an ever-globalising world characterized by increasing technological innovation, it is important to remember that social media verification is not a universal solution to human rights violation reporting. Disparities in infrastructure, access to technology, and state surveillance can easily lead to the underreporting of human rights violations.
For further reading see The Whistle’s report on The Digital Information Verification Field under the ‘Research’ tab on The Whistle’s website.
The Centre of Governance and Human Rights at the University of Cambridge recently launched the inaugural paper in the ‘Human Rights in the Digital Age’ series, which aims to facilitate the sharing of knowledge and practices of human rights in a digital age.
The paper by Christoph Koettl (Senior Analyst, Amnesty International), was launched on the 8th of March and provides a framework for best practices in the use of citizen media. Specifically, it builds on the increasing importance of documenting human rights violations and aids the review and assessment of open source content.
Koettl’s framework has served as an invaluable source in designing The Whistle, particularly in terms of streamlining and speeding up the verification of digital information. By following a methodological approach that is familiar and accessible to new and seasoned practitioners alike, The Whistle can improve access to, and overview of, existing tools and processes to aid fact-finding.
How can we make it easier for citizen witnesses of human rights violations to capture the attention of journalists and human rights defenders? This was the question underpinning the digital platform that Meredydd Williams from the Computer Laboratory and I began to develop during Cambridge’s inaugural Critical Coding course in June 2014.
In this digital age, the documentation of human rights violations by bystanders is facilitated by mobile technologies, and the dissemination of this documentation is enabled by social media. The result is that journalists and human rights defenders have a ‘torrent’ – as Christoph Koettl of Amnesty International described it – of information at their fingertips purporting to shed light on violations. This torrent must be filtered, and a key filtering mechanism employed by journalists and human rights defenders is verification. Our digital platform supports this verification by enabling citizen witnesses to supplement their digital reports of violations with corroborating information.
We designed the platform with two fundamentals of communication in mind – fundamentals that have also surfaced in my research on human rights reporting:
1) Consuming information has a cost (money, time, or otherwise), and low-cost information is more likely to be consumed than high-cost information given fixed resources. Information subsidies – Oscar H. Gandy’s influential concept – are ‘efforts to reduce the price faced by others for certain information, in order to increase its consumption’ (1982, p. 8). Verification is time-intensive, so this platform aims to facilitate witnesses’ generation of information subsidies to make it faster (cheaper) for journalists and human rights defenders to verify their reports of violations.
2) The verification of information usually depends on an assessment of content credibility as well as source credibility. A source’s credibility is related to her resources – or forms of capital, in Bourdieu’s (1986) terms. These include reputation (symbolic capital), networks (social capital), education levels (cultural capital), and the financial capital that in part determines the levels of these other forms of capital. Capital creates information subsidies; specifically, a reputation of credibility in shared networks facilitates source verification, as does digital literacy about what information should be supplied to speed verification. The problem is that digitally-enabled citizen witnessing is often unanticipated, and, therefore, its witnesses are often unknown and may not know much about verification. This either means that verifying the information of resource-poor witnesses disproportionately drains the resources of human rights organizations and news outlets, or it means that these witnesses’ information is less likely to be used. We therefore wanted our platform to help level the source resource playing field with respect to information subsidies for verification.
Operationally, the platform consists of an interface for the citizen witness to upload her digital information and then enter details about its metadata and content – such as time, place, and source and subject identity. The platform would then draw corresponding data points from a variety of digital databases, such as those identified in the Verification Handbook’s ‘Chapter 10: Verification Tools,’ and would display them side by side with the reported data in an interface targeted at consumers of the information. For example, the platform would pull the Google Maps Street View corresponding to the reported location, which journalists and human rights defenders could then compare with the landmarks in the uploaded video of the violation.
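The side-by-side output described here can be sketched as a simple table builder that pairs each reported value with whatever a third-party source returned, leaving the judgment to the consumer of the information. The data, field names, and source labels below are illustrative stand-ins, not the platform's actual schema:

```python
# Sketch of the side-by-side interface: each row pairs a value claimed
# in the report with the corresponding value retrieved from a third-party
# source, so the fact-finder can compare them at a glance.

def side_by_side(reported, retrieved):
    """Yield formatted rows: field, reported value, retrieved value."""
    header = "%-12s | %-28s | %-28s" % ("field", "reported", "retrieved")
    yield header
    yield "-" * len(header)
    for field in reported:
        yield "%-12s | %-28s | %-28s" % (
            field, reported[field], retrieved.get(field, "(no data)"))

reported = {"location": "Homs, near clock tower",
            "time": "2014-06-10 14:00 local",
            "weather": "clear"}
retrieved = {"location": "Street View: clock tower visible",
             "weather": "clear (weather archive)"}

for row in side_by_side(reported, retrieved):
    print(row)
```

Note that a missing retrieved value is shown explicitly rather than hidden: a gap in the corroborating data is itself information the fact-finder needs when weighing the report.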
The idea is that citizen witnesses and the platform itself take on much of the verification busywork around gathering other sources of information for corroboration. The human rights defenders and journalists would still have to ultimately make the judgment as to the veracity of the information – it is unlikely, in any case, that we will ever be able to automate this fundamentally subjective practice – but the consolidated output of the platform should make it faster for them to do so.
The platform should also help level the source resource playing field by reducing the cultural capital discrepancy of citizen witnesses. As one of my interviewees at a human rights organization explained it, for example, some Syrian activists had thought at first that just uploading a video of a violation to YouTube was enough, and it was only with time and experience that they learned that they needed to also include information about its time and place. By prompting for this sort of metadata, the platform boosts citizen witnesses’ digital literacy concerning verification.
The platform also moves towards redressing the relative paucity of symbolic and social capital among citizen witnesses and the knock-on effect of complicating the verification of their information. It does this in two ways – first, by collating a wealth of corroborating information related to the reported information’s content credibility. This allows the information as much as possible to speak for itself rather than depend on its source’s reputation for credibility (to be clear, this is not about having a good versus bad reputation, but rather about having the opportunity to generate a reputation at all in the right places – which depends on resources). Our platform’s emphasis on verifying the information’s content rather than focusing predominantly on its source is ultimately why we decided to call the platform The Whistle rather than The Whistleblower. Second, by encouraging the source to gather together corroborating information about her identity to present in the platform’s output, The Whistle allows her to build up her symbolic and social capital in the networks of human rights reporting.
In sum, the hope is that, by supporting citizen witnesses’ generation of information subsidies for verification, our platform will increase the possibility that these reported violations receive attention and resources deployed towards their mitigation. In other words, by increasing the pluralism of human rights reporting at the source level, The Whistle aims to boost the pluralism of those who have access to the accountability mechanism of human rights reporting.
by Ella McPherson, cross-posted from Social Media and Human Rights
I’m delighted that The Whistle, a digital human rights reporting platform for civilian witnesses currently in concept stage, has received funding from Cambridge’s ESRC Impact Acceleration Account to put a small team together over the summer and into the autumn, including a project manager and a programmer. The team will work towards an alpha version by the end of the year.
by Ella McPherson