The information age brought unprecedented availability of information to the public: an exponential shift in the ability of individuals to produce and access content. As a direct consequence, society needed new ways of filtering the deluge of information to make it humanly understandable and cognitively manageable. Historically, the role of information gatekeeper has been played by human mediators, such as journalists, who, through training, expertise, and experience, autonomously parsed, prioritized, and presented the information deemed most meaningful to their audiences. This process of human gatekeeping follows well-understood dynamics of interaction between gatekeeper and audience, and is shaped by equally well-understood constraints of implicit and explicit bias. The amount of content available today makes it impossible for human mediators to cognitively manage and autonomously filter all sources of information; for this reason, content providers have turned to machine-based automated filtering. This paper refers to these automated filtering processes as algorithms (for an analysis of the term and its synecdochal properties, refer to chapter 1.2). The focus of this paper is understanding the differences that algorithmic filtering, as opposed to human content mediation, brings to the type and quality of information the public is exposed to. Through a logical succession of arguments, it posits three fundamental dimensions of change in how the public views and receives its information:
- Change in the type of content the public is exposed to
- Change in the journalistic agenda-setting process as it incorporates algorithmic processes
- Change in the trust relationship between the public and the actors behind the choice of information they receive
The starting point of the argument is an analysis of how much the mediascape has changed in the information era. The first chapter presents and analyses the quantitative differences in society’s overall output of information between 1960, 2005, and 2016, comparing the volume of content produced and the overall availability of information across the three periods in order to expose the exponential growth of information availability. The paper then gives an overview of the meaning and functions of algorithms. Analysing how algorithms work, and therefore what their abilities and limitations are, is crucial to understanding how and why algorithms influence the composition of the new mediascape. This section expands on the following dimensions of interpretation:
- Separation of logic and control (see the sketch after this list)
- The limitation of basic computing operations
- Algorithm as synecdoche
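To make the first of these dimensions concrete, consider the classic formulation that an algorithm combines logic (what is to be computed) with control (how the computation proceeds). The minimal Python sketch below, built on hypothetical family data that is purely illustrative, derives the same conclusions under two different control strategies:

```python
# A minimal sketch of the separation of logic and control:
# the LOGIC (which descendants follow from which parent facts) is fixed,
# while the CONTROL (the order in which conclusions are explored) varies.
# The data and names here are hypothetical illustrations.

from collections import deque

# --- Logic: a declarative description of the problem ---
PARENT = {  # parent -> children
    "alice": ["bob"],
    "bob": ["carol", "dan"],
    "carol": ["eve"],
}

# --- Control: two search strategies over the same logic ---
def descendants_dfs(person):
    """Depth-first control: follows one family line to its end first."""
    result = []
    stack = list(PARENT.get(person, []))
    while stack:
        child = stack.pop()
        result.append(child)
        stack.extend(PARENT.get(child, []))
    return result

def descendants_bfs(person):
    """Breadth-first control: explores each generation in turn."""
    result = []
    queue = deque(PARENT.get(person, []))
    while queue:
        child = queue.popleft()
        result.append(child)
        queue.extend(PARENT.get(child, []))
    return result

# Both strategies derive the same set of conclusions from the same logic;
# only the order (and, in larger problems, the efficiency) differs.
print(sorted(descendants_dfs("alice")) == sorted(descendants_bfs("alice")))  # True
```

The point of the separation is that the logic fixes what counts as a correct answer, while the choice of control determines only how, and how efficiently, those answers are reached.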
While context in human gatekeeping is irrevocably linked to the information selection process, algorithms are largely seen, imagined, and implemented as decontextualized entities. Algorithm owners, algorithm developers, and the public (through the dynamic of perceived neutrality of algorithms) interpret algorithms as purely logical constructs, without real-life connotations. The next chapter analyses this perceived decontextualized nature of algorithms, and the dissonance between the explicit, purposeful decontextualization of their creation and the implicit contextualization of their implementation. Additionally, the chapter analyses how algorithmic processes prioritize efficiency over the creation of qualitative meaningfulness.
After analysing algorithmic gatekeeping on the functional level of creation, the paper turns to the differences algorithms bring to information selection in their implementation. The three dimensions this paper analyses are bias, self-perpetuation, and restriction of content, seen as the most relevant to interpreting the type of changes algorithmic filtering enacts. Bias, implicit and explicit, is analysed through the perspectives of algorithm owners and algorithm developers, focusing on how human bias filters into automated processes, how this can exacerbate inequality, and the reasons behind, and effects of, explicit bias towards positive information. Self-perpetuation is the mechanism by which algorithms create a closed feedback loop of relevance; the chapter examines how this loop is reinforced through human interaction, as sketched below. Lastly, this chapter analyses how active restriction of content deemed “inappropriate” further limits the audience’s choices through opaque and culturally biased mechanisms.
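As an illustration of this self-perpetuating dynamic, the following minimal Python sketch, built entirely on hypothetical items and parameters, simulates a filter that surfaces only the most-clicked items; because users can only click what is surfaced, the items shown first accumulate all engagement, which the system then reads back as evidence of relevance:

```python
# A minimal sketch of the self-perpetuating relevance loop: items are
# ranked by accumulated clicks, only top-ranked items are shown, and
# users can only click what they are shown, so early winners keep
# winning. All items and parameters here are hypothetical.

import random

random.seed(1)

items = [f"story_{i}" for i in range(10)]
clicks = {item: 0 for item in items}

def show_top(k=3):
    """The 'algorithmic filter': surface only the k most-clicked items."""
    return sorted(items, key=lambda i: clicks[i], reverse=True)[:k]

# Simulate rounds of human interaction with the filtered selection.
for _ in range(1000):
    shown = show_top()
    clicked = random.choice(shown)  # users choose only among what is shown
    clicks[clicked] += 1

# The click distribution collapses onto the items that happened to be
# surfaced first: a closed feedback loop of 'relevance'.
print(show_top())
print({i: c for i, c in clicks.items() if c > 0})
```

Even though all items start out identical, engagement collapses onto whichever items the filter happened to surface first, and that engagement is then treated as proof of their relevance.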
In conclusion, the paper expands on the three main dimensions of change mentioned earlier. The change in the type of content is framed through the black-box nature of algorithms and the functional inability of individuals to holistically grasp the process through which complex algorithmic systems choose which content to prioritize. The change in journalistic agenda-setting is examined through the concept of demand media and the risk that combining algorithmic filtering with human mediatorship exacerbates the self-perpetuating nature of algorithms. The change in the trust relationship puts the spotlight on the public’s lack of mechanisms for assigning or evaluating trust in algorithmic mediators. Lastly, the paper proposes a series of heuristics that can serve as mitigation strategies.