While the use of video to record human rights violations is not new, new technologies, stemming from the increased availability of mobile phones and the proliferation of digital social networks, have profound implications for human rights researchers, NGOs, and international organisations. For example, a large number of videos on YouTube are in fact scrapes that have been re-uploaded. These recycled clips lack the original metadata necessary to verify time, location, or contextual information. This common practice has therefore required researchers to develop and learn new tools and methodologies for identifying the original source, techniques that often deviate from the analysis of traditional photographic and video materials collected during field research.
In the age of ubiquitous camera usage, editing capabilities, and citizen media, the risks of getting digitally shared information wrong are high if the proper steps are not taken. Citizen media provides an extreme level of detail (including landmarks, signage, or vegetation), a permanent record of a violation (if preserved correctly), and visual documentation of violations that would otherwise go undetected, but it requires a proper verification methodology.
PHEME, an organisation which focuses on the veracity of big data, relies solely on algorithms to verify social media content by analysing its information (lexical, semantic, and syntactic), cross-referencing data sources with open-source databases, and examining the information’s diffusion (how, when, and by whom the information was transmitted and received). This algorithm-based verification practice presents diffusion patterns in the form of “message types” (neutral, confirming, denying, or questioning a rumour) in order to verify or dispute a digital source. PHEME’s emphasis on algorithms undoubtedly has the potential to speed up the verification process, but should ultimately be coupled with a human element of verification.
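To make the idea of “message types” concrete, here is a toy sketch of stance classification over responses to a claim. The keyword lists and function names are purely illustrative assumptions for this post, not PHEME’s actual feature set, which relies on far richer lexical, semantic, and syntactic analysis.

```python
# Toy sketch of classifying responses to a rumour into the four
# "message types" (confirming, denying, questioning, neutral) using
# simple lexical cues. The cue lists are illustrative only.

CUES = {
    "confirming": ("confirmed", "verified", "i saw it"),
    "denying": ("false", "fake", "hoax", "debunked"),
    "questioning": ("source?", "any proof", "is this true"),
}

def classify_message(text: str) -> str:
    lowered = text.lower()
    for label, keywords in CUES.items():
        if any(k in lowered for k in keywords):
            return label
    return "neutral"

def diffusion_summary(messages):
    """Count message types across all responses to one claim."""
    summary = {"confirming": 0, "denying": 0, "questioning": 0, "neutral": 0}
    for m in messages:
        summary[classify_message(m)] += 1
    return summary

responses = [
    "This has been confirmed by two outlets.",
    "Fake news, this is a hoax.",
    "Source? Any proof this happened?",
    "Sharing for awareness.",
]
print(diffusion_summary(responses))
# {'confirming': 1, 'denying': 1, 'questioning': 1, 'neutral': 1}
```

A skewed summary (say, many denying or questioning responses) is the kind of diffusion pattern an analyst could use as one signal among several, rather than as a verdict on its own.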
Algorithmically, it is possible to identify both verifiable and non-verifiable traits belonging to a video by, for example, running a reverse image search to determine the video’s originality, the geo-location’s authenticity, and the context in which it was captured. Thumbnails can also be matched to specific locations identified on Google Street View, although this process often requires more human input.
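One common building block behind reverse image search is perceptual hashing: a downscaled grayscale thumbnail is reduced to a short bit string, so near-identical frames produce near-identical hashes. The sketch below implements a minimal “average hash” over plain 2D pixel lists; real pipelines would use a library such as imagehash on decoded video frames, and the synthetic images here are assumptions for illustration.

```python
# Minimal average-hash sketch: one way a reverse image search can flag
# a re-uploaded clip whose thumbnail matches a known original.

def average_hash(pixels, size=8):
    """Downscale a grayscale image (2D list, values 0-255) to a
    size x size grid by block averaging, then emit a 1 bit for each
    block brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            ys = range(by * h // size, (by + 1) * h // size)
            xs = range(bx * w // size, (bx + 1) * w // size)
            vals = [pixels[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return [1 if b > mean else 0 for b in blocks]

def hamming(h1, h2):
    """Bit differences between two hashes; small distance = likely match."""
    return sum(a != b for a, b in zip(h1, h2))

# A synthetic 16x16 "thumbnail" with a bright square, plus a slightly
# brightened re-upload of the same frame.
original = [[200 if 4 <= x < 12 and 4 <= y < 12 else 30
             for x in range(16)] for y in range(16)]
reupload = [[min(255, p + 10) for p in row] for row in original]

distance = hamming(average_hash(original), average_hash(reupload))
print(distance)  # 0: uniform brightening leaves the bit pattern unchanged
```

Because the hash compares each block to the image’s own mean, global edits such as brightening or recompression barely move it, which is exactly why scraped re-uploads can be traced back to an original even without metadata.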
Even if an algorithm deems a video to be original and its geo-location to be authentic, that does not mean that what the video purports to show is in fact true (or vice versa). It is thus no surprise that, traditionally, human rights reporters, NGOs, and international organisations deploy fact-finders on the ground to verify the situation, whether by conducting interviews or compiling field reports. For this reason, we believe that the proper approach to social media verification is still mainly human-centred. The challenge for new tools will be to circumvent the inherent dangers and obstacles of hard-to-read places by giving fact-finders a better overview of the context of reported media and better means of cross-referencing, enabling them to make sounder judgements.
While algorithmic verification techniques are an important part of the verification process, we believe that analysing citizen media should by no means be considered a separate endeavour from traditional fact-finding, which is largely centred on witness testimony and fact-finder reports. The Whistle cross-checks social media reports by employing both algorithmic indicators and human input. We are aware of how time-consuming it currently is for fact-finders to verify information, so The Whistle does much of this work for you while keeping humans involved in the verification process. The art of verification, for The Whistle, is a mix of algorithmic assistance and necessary human judgement.
The Whistle aims to speed up and simplify the verification process by prompting users to supplement their human rights reports with metadata and corroborating information from other witnesses. The Whistle app then runs back-end cross-checks against a variety of third-party information sources and tools, such as weather and map databases. By doing so, The Whistle provides human rights researchers, NGOs, and international organisations with a wealth of cross-referenced information, reducing both the time and the digital expertise necessary to verify digital reports of human rights violations.
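As a rough illustration of what such a back-end cross-check might look like, the sketch below compares a report’s claimed conditions against a stubbed third-party weather record and collects indicators for a human reviewer rather than issuing an automatic verdict. All function names, field names, and the sample data are hypothetical assumptions, not The Whistle’s actual API.

```python
# Hypothetical cross-check sketch: surface consistency indicators for a
# human fact-finder instead of deciding truth automatically.

from datetime import date

# Stub standing in for a third-party weather database, keyed by (city, date).
WEATHER_RECORDS = {
    ("Aleppo", date(2016, 7, 11)): {"condition": "clear", "temp_c": 36},
}

def cross_check_report(report):
    """Return a list of (check, outcome) indicators for reviewers."""
    indicators = []
    record = WEATHER_RECORDS.get((report["city"], report["date"]))
    if record is None:
        indicators.append(("weather", "no record available"))
    elif record["condition"] == report["claimed_weather"]:
        indicators.append(("weather", "consistent with archive"))
    else:
        indicators.append(("weather", f"archive says {record['condition']!r}"))
    indicators.append(("metadata",
                       "present" if report.get("device_metadata") else "missing"))
    return indicators

report = {"city": "Aleppo", "date": date(2016, 7, 11),
          "claimed_weather": "rain", "device_metadata": None}
for check, outcome in cross_check_report(report):
    print(check, "->", outcome)
# weather -> archive says 'clear'
# metadata -> missing
```

The key design choice, consistent with the human-centred approach argued for above, is that mismatches are reported as flags for a fact-finder to weigh in context, not as grounds for automatic rejection.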