For cloud ethics: Unpacking ignorance in algorithmic decision-making
Abstract
This note critically examines Amazon’s sexist hiring algorithm as an exemplary case to illustrate the sociotechnical dynamics of ignorance prevalent in contemporary workplaces. Despite the growing popularity of algorithmic techniques, their increasing adoption in the digital transformation of conventional management practices has raised concerns about the biases and limitations of algorithmic decision-making. Building on Louise Amoore’s (2020) seminal work on ‘cloud ethics’, we contend that algorithmic management is essentially ignorant and unsettled, with managers exploiting the innate biases of computational devices to realize traditional goals of efficiency and instrumentalism. Our ultimate aim is to demonstrate how Amoore’s concept of cloud ethics highlights the need to confront the ethicopolitical tension of undecidability that lies at the core of our entanglement with algorithms.
Introduction
Algorithms have become increasingly omnipresent in various aspects of our social lives, particularly in contemporary workplaces, driving a digital transformation of conventional management and organizational practices. However, this reliance on algorithmic decision-making, exemplified by the recent emergence of ChatGPT, has mixed implications, revealing the many limitations of algorithmic management, which extends beyond a mere technical phenomenon to become a fundamentally sociotechnical process (Jarrahi et al., 2021). In this short paper, we focus on the biases and discriminatory practices that underpin algorithmic decision-making. Building on Louise Amoore’s theoretical insights into algorithms, as outlined in her seminal work Cloud Ethics (2020), we re-interpret Amazon’s widely publicized sexist hiring algorithm as an illustrative case to explore how algorithmic biases in decision-making are shaped by both technical and socio-organizational dynamics of ignorance, resulting in an exploitation of non-knowledge marked by a collective insensitivity and disregard for social justice.
While developing this paper, we were unaware of the first 2023 issue of ephemera, which elaborated on the theme of ‘organized ignorance’ – a recurring motif throughout our engagement with Amoore’s writing on algorithms. Despite the extensive discussions of this subject, we still find it crucial to establish Amoore’s eminent position as an ignorance-centered thinker who deploys paradoxes and ambiguities to pinpoint an enduring vision of excess that is not confined to algorithmic management but permeates our shared social existence with computational devices in general. Specifically, we aim to situate Amoore’s ‘cloud ethics’ within a contested ethicopolitical terrain that challenges established concepts, including the multifaceted sense of the term ‘ignorance’, by continually unsettling and defamiliarizing them.
Amoore’s cloud ethics as an examination of ignorance
We may characterize Louise Amoore’s Cloud Ethics (2020) as a diagnosis of the various forms of ignorance that underlie our entanglement with algorithms in contemporary society. Although the author highlights the potential risks of blindly relying, in human decision-making, on algorithmic techniques that generate outputs through pure computation, she repudiates the prevailing discourse advocating increased algorithmic transparency based solely on ethical codes of conduct. Instead, Amoore proposes a ‘cloud ethics’ that recognizes algorithms as an essentially contested ‘ethicopolitical arrangement of values, assumptions, and propositions about the world’ (2020: 6). This form of radical questioning embraces the fundamental ambiguity and uncertainty that permeate all decision-making processes, thereby recognizing the inherent cloudiness of human-algorithm encounters.
Despite the absence of the term ‘ignorance’ in Cloud Ethics, we may infer that Amoore’s concern with algorithmic decision-making informed by excess, incalculability, nonknowledge, or any other term she employs speaks to the central theme of ignorance with which both this book and extant studies of organizational ignorance (Bakken and Wiik, 2018; Esposito, 2022; Lange et al., 2019; McGoey, 2007, 2012, 2019; Plesner and Justesen, 2023) are preoccupied. In the following, we aim to explicate Amoore’s (2020) understanding of ‘cloud ethics’ by delving into its two fundamental components, specifically: (1) the meaning of ‘cloud’ and its implications for (2) the notion of ‘ethics’.
The meaning of ‘cloud’
Amoore employs the term ‘cloud’ in diverse metaphorical ways, incorporating insights from scientific, philosophical, and literary sources that are not easily accessible to readers seeking a quick understanding. Despite the complexity of the book’s use of ‘cloud’, two clarifying points can be found in chapter one (Amoore, 2020: 30). One is physicist Charles Wilson’s invention of the cloud chamber, a scientific apparatus for visualizing the movement of subatomic radiation that is otherwise imperceptible to the human eye. The other concerns cloud computing rooted in modern digital computational networks, whose on-demand services are made possible by deriving actionable patterns from data lakes – a digital manifestation, as it were, of clouds forming from lake evaporation (ibid.: 47). Hence, a ‘cloud’ can be interpreted as a ‘technology of perception’ (ibid.: 15) mediating the way in which we make sense of data in this world.
Accordingly, the exposition of the ‘cloud’ as a sense-making apparatus is best exemplified in algorithms, which Amoore characterizes as a form of aperture. Although the Latin root of ‘aperture’ (aperire) indicates an opening of vision through which light can pass, Amoore argues that the algorithm is simultaneously a restrictive device, used as ‘a means of dividing, selecting, and narrowing the focus of attention’ (2020: 162). This paradoxical relationship between opening and closure is mirrored in the ambiguous meaning of ‘cloud’. First, Amoore refers to the ‘Cloud I paradigm’, a linear scientific approach to observing, representing, and classifying data within cloud architectures (ibid.: 33). In contrast, she uses the term ‘Cloud II’ to describe the computational creation of a perceptible, recognizable, and attributable world. Amoore posits that the logic of Cloud II constitutes a fundamentally ignorant practice, as algorithms remain indifferent to the meaning of the data they process (e.g., whether an image actually depicts a cat), relying instead on pre-calculated correlations to generate a ‘good enough’ (ibid.: 67) output for decision-making purposes.
Primarily directed at ‘Cloud II’, Amoore contends that algorithmic decision-making, by disregarding our capacity for knowledge and action, entails a ‘double political foreclosure’ (2020: 20). It involves the condensation of multiple combinatory possibilities of raw data into a single output, thus limiting our potential to know. Additionally, it warrants the preemptive generation of future possibilities based on that output, delineating what constitutes the appropriate modes of action within the world. As such, ‘cloud ethics’ is aimed at tackling precisely this circumvention of ethicopolitical possibilities.
What is a ‘cloud ethics’?
As we noted earlier, the book clearly distinguishes between ‘ethics as code’ and ‘cloud ethics’ (Amoore, 2020: 7). Whereas encoded ethics aims to regulate the aberrations of supposedly impartial algorithms, cloud ethics regards madness, nonknowledge, and undecidability as the constitutive condition of all decision-making processes (ibid.: 121). By definition, a cloud ethics ‘is concerned with the political formation of relations to oneself and to others that is taking place, increasingly, in and through algorithms’ (ibid.: 7). Hence, Amoore seeks to advance a relational conception of ethics in which responsibility for erroneous conduct is not attributable to a single reasoning subject, whether human or algorithmic, but emerges from complex networks of decision-making processes.
Nonetheless, the pursuit of control in response to the perceived madness of algorithms, which may generate discriminatory outputs, is a never-ending endeavor. To address the issue of bias, we tend to develop ethical codes that are ‘sought to modify and restrain the harmful effects of algorithmic decision’ (Amoore, 2020: 119). This ethical approach relies on the codification of rules meant to prevent bias from encroaching upon the supposed rationality of algorithms, ultimately holding first-person subjects fully accountable for any misconduct (ibid.: 66). The goal remains to circumvent our ethicopolitical imaginations by rendering ethics calculable, thereby avoiding the plight of having to choose under cloudy circumstances. Amoore (2020: 146), however, proposes that it is necessary to relinquish our hold on responsible subjects and recognize the inherent risk and groundlessness of decisions made in collaboration between humans and algorithms.
Again, this entails a relational conception of ethics that transcends the notion of a rational subject in isolation. It involves critically examining the joint actions of humans and algorithms, while remaining vigilant to the ‘branching points’ (Amoore, 2020: 99) at which all decisions are made. This approach understands that the significance of decisions extends beyond the immediate output, encompassing the alternative possibilities that were foreclosed. In this sense, Amoore’s ‘cloud ethics’ requires careful reflection upon the trajectory of past decisions in order to question their incontestability. Meanwhile, it necessitates a forward-looking stance that embraces the unpredictability of the future and affirms the ‘singularity’ (ibid.: 100) of each encounter. Consequently, only by confronting the uncertainty of our past and future, staying with the trouble of our epistemic ignorance, and recognizing the impossibility of accounting for all decision-making consequences, can we transcend the limits placed upon ethics and assume an ambitious responsibility in our relationships with others.
The paradigmatic case of Amazon’s sexist hiring algorithm
This section draws on the ideas in Cloud Ethics (2020) to revisit the well-known case of Amazon’s sexist hiring algorithm (Dastin, 2018). Amoore’s book features various examples of biased algorithms (e.g., Microsoft’s Tay chatbot on Twitter in Amoore, 2020: 109), which serve as illustrations of the complex sociotechnical dynamics inherent in algorithmic decision-making today. Likewise, we consider the Amazon scandal one such example and investigate how different media accounts of the event prescribe various solutions to algorithmic bias – solutions whose adequacy Amoore’s cloud ethics radically questions.
In 2014, Amazon developed an AI recruiting tool to automate job screening by assigning a score to candidates based on patterns observed in resumés submitted to Amazon over the previous decade (Dastin, 2018). A year into the project, however, the developers discovered that these past resumés came mostly from male candidates. Trained on these historical recruitment data, the algorithm learned to factor in gender proxies, ultimately exhibiting a preference for masculine language in self-descriptions (e.g., ‘executed’ and ‘captured’) while treating feminine features (e.g., attending a women’s college) as unattractive (Heilweil, 2020). This showcases the technique of aperture whereby algorithms ‘extract the salient features from [but] do not meaningfully differentiate the sources of their input data’ (Amoore, 2020: 160). Hence, the recruiting tool became systematically biased against female applicants, resulting in the unintentional penalization of women. The developers internally questioned the project’s validity but remained anonymous when the scandal went public. Amazon ultimately abandoned the software and sought ways to integrate diversity into its AI recruiting engines, strategically diverting public attention from the incident and ignoring its wider societal implications.
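To make this mechanism concrete, the following toy sketch illustrates how a classifier trained on historical hiring outcomes can absorb gender proxies without any explicit gender field. The resumés, labels, and inspected terms are our hypothetical constructions, not Amazon’s actual data or system; the point is only that imbalanced training data suffice to produce the proxy effect described above.

```python
# A minimal, hypothetical sketch (not Amazon's actual system) of how a resumé
# classifier trained on historical hiring outcomes can absorb gender proxies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past resumés and whether the (mostly male) candidates
# were hired. The imbalance, not any explicit gender field, carries the bias.
resumes = [
    "executed product launches and captured new markets",
    "led engineering team and executed migration roadmap",
    "captain of women's chess club with strong analytical skills",
    "graduated from a women's college and managed research projects",
]
hired = [1, 1, 0, 0]  # historical outcomes reflecting past managerial bias

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: terms co-occurring with past hires (such as
# 'executed') gain positive weight, while 'women' becomes a penalized proxy.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("executed", "captured", "women"):
    print(term, round(float(weights.get(term, 0.0)), 3))
```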
There are differing views regarding the Amazon scandal. Some critics argue that efforts to ‘de-bias’ algorithms may never be fully realized (Heilweil, 2019): even if a hiring tool is designed and tested to avoid bias against women, bias may persist against other racial and ethnic minorities, since algorithms, according to Amoore (2020: 8), ‘must necessarily discriminate to have any traction in the world’. Moreover, Amazon employees should be held responsible for blindly entrusting the recruitment task to decision-making algorithms while being poorly informed about the capabilities and limitations of machine learning (Lauret, 2019). Others, however, consider flawed algorithms less biased and ignorant than humans. According to Lavanchy (2018), it is to be expected that Amazon’s algorithm would inherit undesirable human traits such as bias and discrimination, as it was fed recruitment data created by humans. Despite AI’s overall better performance, people tend to lose confidence in algorithms faster than in humans when a mistake is made. Thus, Lavanchy (2018) sees people’s cognitive limitations in thinking rationally about intelligent machines as the primary obstacle to implementing tech-driven solutions in contemporary organizations.
To mitigate the harms of AI adoption in organizations, some suggest solutions aimed at correcting algorithmic biases, even if perfection is not attainable (Heilweil, 2020). These include having developers actively screen the data to remove biased correlations, and maintaining class balance by including enough resumés from marginalized social groups in the dataset (see the sketch below). However, it remains controversial whether Amazon could design algorithms that are completely fair, as demonstrating fairness with mathematical precision seems impossible. An additional measure is to open up the black box and reduce the opacity of AI. To establish a satisfactory basis for placing trust in algorithmic decisions, governments should propose laws that mandate companies to adhere to transparency guidelines for information disclosure, which would help promote accountability in algorithmic decision-making (Heilweil, 2019). This aligns with what Amoore describes as the desire ‘to render the cloud bureaucratically and juridically intelligible’ (2020: 38). On this view, a non-transparent AI system should not be implemented in the first place.
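As a hedged illustration of the class-balancing proposal, the sketch below oversamples resumés from under-represented groups until all groups are equally sized. The group labels and data are hypothetical, and, as the critics cited above insist, such rebalancing addresses only one measurable axis of bias at a time.

```python
# A hedged sketch of one proposed mitigation: rebalancing training data so
# that resumés from under-represented groups are equally represented. The
# group labels are hypothetical; real debiasing is far harder (see main text).
import random

def rebalance(samples, key):
    """Oversample minority groups until all groups are equally represented."""
    groups = {}
    for sample in samples:
        groups.setdefault(sample[key], []).append(sample)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate members of smaller groups up to the target size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

resumes = [{"text": "executed roadmap", "group": "male"}] * 8 \
        + [{"text": "women's college", "group": "female"}] * 2
print(len(rebalance(resumes, "group")))  # 16: both groups now equally sized
```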
Dynamics of sociotechnical ignorance in algorithmic decision-making
Our subsequent analysis undertakes a theoretical exploration of the Amazon case through the concept of ignorance we derived earlier from Amoore’s discussion of ‘cloud ethics’. In so doing, we aim to see the case through a different conceptual lens, staying in line with Deleuze and Guattari’s philosophical ethos of discovering ‘new, remarkable, and interesting’ (1994: 111) insights in the phenomenon at hand. Furthermore, drawing on the idea that algorithmic decision-making entails a sociotechnical process (Jarrahi et al., 2021: 2-3), we identify and elaborate on two interrelated forms of ignorance that we deem pivotal in the case: (1) technical ignorance inherent in algorithms, and (2) social ignorance committed by organizational members.
Technical ignorance inherent in algorithms
Notwithstanding the impetus to rectify algorithmic bias in the wake of the Amazon controversy, we may invoke Amoore’s skepticism towards this aspiration, as it upholds ‘a particular ethicopolitical conception of the problem’ (Amoore, 2020: 109) that concentrates on regulating algorithms to curtail their unrestrained and impulsive actions. For Amoore, machine learning algorithms utilize abductive logic, which entails making conjectural inferences from novel data by reference to patterns and clusters generated from historical data (ibid.: 47-8). Indeed, Amazon’s hiring algorithm continually adjusted its computational thresholds by accommodating a certain degree of fallibility in its ‘speculative experiment’ (ibid.: 48) with incoming resumés, as sketched below. This results in a type of ‘bounded rationality’ (ibid.: 109) that can also be conceived as a form of bounded ignorance, whereby ‘the unseen and unspoken become precisely the generative materials of algorithmic decisions’ (ibid.: 111). Instead of sidestepping the cloudiness of the algorithms employed in Amazon’s recruitment procedure, we need to tackle what Amoore characterizes as the reductionist and generative rationale intrinsic to the technical ignorance of algorithms, which manifests in (1) their capacity to process data without comprehending its meaning, and (2) their aptitude for predicting a radically uncertain future.
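The following schematic sketch – our own illustration, not Amoore’s or Amazon’s code – renders this abductive logic in miniature: a new input is conjecturally matched to the nearest cluster derived from historical data, and the match is accepted whenever it falls within a ‘good enough’ threshold of error.

```python
# A schematic sketch of abductive inference in machine learning: conjectural
# assignment of new data to clusters generated from historical data, within
# an adjustable tolerance. The feature vectors here are random toy data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
historical = rng.normal(size=(100, 5))  # toy feature vectors of past resumés
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(historical)

def abduce(new_vector, threshold=2.0):
    """Conjecturally assign a new resumé to the nearest historical cluster."""
    distances = np.linalg.norm(clusters.cluster_centers_ - new_vector, axis=1)
    best = int(np.argmin(distances))
    # The inference is speculative: a match counts only if it is 'good enough'.
    return best if distances[best] < threshold else None

print(abduce(rng.normal(size=5)))  # cluster index, or None if nothing fits
```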
Through a genealogical investigation into machine learning algorithms, Amoore notes that the birth of this enigmatic technology is intricately linked to the idea of surpassing the hazardous shortcomings of human judgment (Amoore, 2020: 113). As Amoore highlights, the problem after World War II was identified as the ‘human propensity for frenzied miscalculation’ (ibid.: 114), which necessitated the creation of intelligent machines capable of overcoming this weakness. This sentiment is echoed today in Lavanchy’s (2018) faith in the greater intelligence of Amazon’s algorithm over humans. Contrary to such convictions, sociologist Elena Esposito (2022) argues that the strength of algorithms does not reside in presumed superior sense-making capacities. Rather, algorithms outperform humans in efficiency and optimality precisely because of their ability to process vast amounts of data without truly understanding it (ibid.: 3). In a related argument, Amoore suggests that algorithms are ‘always already unreasonable’ (2020: 115) because they function by disregarding the context of the data (also Amoore and Piotukh, 2015: 355), which in their mathematical forms are devoid of meaning. In the case of Amazon’s hiring tool, the algorithm had no knowledge of the competencies of any female applicant, yet it kept correlating masculine features with high employability (Dastin, 2018), recognizing them as salient patterns that led to sexist outcomes.
The process by which algorithms cluster data and generate outputs that circumscribe our interpretive horizon is opaque and most likely beyond both human control and algorithmic self-control (Lange et al., 2019). In the Amazon hiring scandal, the algorithms themselves were not biased, as they lack any sense of morals (Lauret, 2019). Even if the data submitted to Amazon’s algorithm reflected past managerial prejudices against women, biased outputs only become significant when they are deemed convincing by employees in a specific decision-making context. Despite this semantic insensitivity, the much-celebrated efficiency of automated employment screening exhibits what Bakken and Wiik characterize as the ‘will to ignorance’ (2018: 1112) that enables the scientific conquest of new knowledge terrains. Consequently, the Amazon algorithm is able to enhance its rationality by reproducing clusters and patterns, on the assumption that computational errors ‘can be tamed with correct diagnosis and repair’ (Amoore, 2020: 119). However, the opening and closure of the algorithmic aperture imply an ever-expanding vision of excess, leaving algorithmic knowledge forever haunted by the unknown.
Beyond semantic ignorance, Amoore (2020: 112) also contends that algorithms achieve their predictive power by ignoring the sheer incalculability of the future. In making decisions, these algorithms rely on ‘past data archives’ (ibid.: 50) to calculate one future possibility in each instance, committing ‘the tyranny of ... reducing the multiplicity of potential futures to a single output’ (ibid.: 80). Asked to explain why male candidates were preferred, Amazon’s hiring algorithm could point to nothing but past recruitment data in which patterns of interest can be found. Esposito (2022: 90) notes that, in securing an unknowable future, predictive accuracy takes precedence over transparency. Within Amazon, predictive analysis through scoring and ranking systems was prioritized over detailed assessment of each candidate’s competency, indicating that the primary goal was to provide quick guidance for decision-making rather than to ensure equitable hiring practices.
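This reduction can be illustrated with a deliberately trivial sketch of our own construction: even when a model internally entertains several weighted futures, the decision rule collapses them into a single actionable verdict, and the foreclosed alternatives leave no trace in the output.

```python
# A minimal illustration of reducing a multiplicity of potential futures to a
# single output. The scores are hypothetical model probabilities, not Amazon's.
scores = {"hire": 0.51, "shortlist": 0.30, "reject": 0.19}

decision = max(scores, key=scores.get)  # collapse the distribution to one verdict
print(decision)  # 'hire' -- the 49% of foreclosed possibilities leaves no trace
```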
On Derrida’s view of prediction, algorithms cannot be regarded as neutral observers with objective knowledge of the future, for the future cannot be foreseen: it ‘falls on me because I don’t see it coming’ (Derrida, 2007: 451). Amazon’s hiring algorithm is subject to the same ignorance, as an event such as male candidates outperforming female candidates is ‘that which can be said [dit] but never predicted [prédit]’ (ibid.: 451). In this sense, the prediction designates a performative event that shapes Amazon’s hiring decisions by ‘making a claim on the future’ (Amoore, 2020: 51), while the ignored unknown of the future serves as the supposedly reasonable basis of algorithmic calculation.
Social ignorance committed by organizational members
Given this technical inscrutability, Amazon’s use of algorithms to predict the employability of potential candidates is, from Amoore’s perspective, intertwined with the definition of ‘what normalities and pathologies could be in a society’ (Amoore, 2020: 114). As a form of ethicopolitical truth-telling (ibid.: 24), the generated rankings of candidates are crucial in informing managerial decisions, which are contingent on the reliability and trustworthiness of the algorithmic outputs[1]. Therefore, the social implications of misinterpreting a computational output can vary significantly depending on the level of trust vested in the algorithm within a particular decision-making context (Jarrahi et al., 2021: 7). Although the entanglements between humans and algorithms are often cloudy, organizations tend to privilege trusted algorithms that represent normality and rationality over problematic ones characterized by anomalies and madness. Our analysis, however, follows Amoore’s interpretation of Donna Haraway’s concept of ‘staying with the trouble’ (2020: 19), which allows us to trace the thread of two ‘troubles’ underpinning the social dimension of algorithmic ignorance: (1) the unattributability of authorship; and (2) the doubtfulness of partial accountability.
First of all, Amoore points out that traditional accountability frameworks are inadequate for holding algorithms responsible, given the futility of imbuing algorithms with an ‘author function’ (2020: 86) located in their source code. Despite efforts to regulate algorithmic behaviors through comprehensive accountability frameworks (Heilweil, 2019, 2020), the attribution of responsibility for algorithmic malfunctions is complicated by the distributed nature of authorship in human-algorithm encounters (Amoore, 2020: 94). In response to the aberrant performance of Amazon’s hiring algorithm, two reactions were possible: (1) termination of the program for violating norms of gender equity, or (2) ongoing adjustment of the source code and training data to make the algorithm trustworthy again. In the former scenario, blame could be laid upon particular human agents for developing flawed source code or for using algorithms without critical reflection. In the latter, Amazon’s management may still consider algorithms superior and strive to incorporate gender norms by adjusting the source code and experimenting with training data to regain trust in the algorithm’s performance. In either case, responsibility is assigned either to human agents or to the algorithm itself as an entity subject to continuous learning and self-improvement.
Despite the challenge of providing a comprehensive account of the Amazon case, it remains imperative for organizational members to continue exercising doubt and not relinquish their ethicopolitical imaginations. This is because Amazon’s managers are not standing outside of the issue but are, in Amoore’s words, ‘always already implicated in the algorithm as a form of adjudication of the truth’ (2020: 138). In effect, the Amazon team is complicit in entrusting algorithms to select potential candidates, seeking an engine that can rapidly ‘spit out’ (Dastin, 2018) the best resumés to avoid any paradoxical situation where the hiring decision could be otherwise. This creates a form of ignorance that can be interpreted as strategic (e.g. McGoey, 2007, 2012, 2019), especially in its bureaucratic manifestations (McGoey, 2007). Following Linsey McGoey’s notion of the bureaucratic ‘will to ignorance’, algorithmic management can be seen as deliberately made ‘unclear until one can gauge its performance over time’ (ibid.: 216). This engenders managerial complacency, whereby ‘divergences from natural functionality are viewed as an anomaly, as an aberration of correct procedure, rather than endemic to the system itself’ (ibid.: 218).
Notably, concerns for a nuanced approach to gender equity are marginalized, leading to Amazon’s handling of sexist outcomes as isolated occurrences that can be rectified without fundamentally questioning what Amoore conceives as ‘the cruelly optimistic promises of technoscience’ (2020: 147). Again, such strategic ignorance in the form of managerial silence is excused by ‘errors or difficulties in the interpretation of conflicting facts’ (McGoey, 2007: 231). Moreover, Plesner and Justesen conceptualize algorithmic decision-making as a form of ignorance that is perpetuated by both human and algorithmic actors in a ‘pluralistic collective’ (2023: 27-8) manner, symptomatic of blindspots and presuppositions. Consequently, the wicked problem of algorithmic bias lies in the inherent intractability of its collective social ignorance that is distributed among human and non-human actors, inducing managers to assume no responsibility as rule-abiding bureaucrats.
Confronted with such bureaucratic ignorance, Amoore’s notion of cloud ethics takes seriously the possibility of giving a partial account of our muddy entanglements with algorithms. Drawing on Derrida’s ethics of deconstruction, Amoore (2020: 147-50) proposes a doubtful form of relational ethics that goes beyond moral rules and utility maximization. This is consistent with René ten Bos’s (1997: 999) view that business ethics is never exhausted by the instrumental codes of ethics that characterize modern bureaucracy. Instead, we should navigate an uneasy struggle between the multiple alternatives of every past, present, and future decision, struggling with doubt and focusing on the contingent ‘traces of rejected alternatives’ (Amoore, 2020: 162). Paradoxically, only by embracing the irreducible blindspots of our decisions and acknowledging their unyielding areas of ignorance can our responsibility become truly ambitious. Algorithms are not our enemies but have merely redefined our ethicopolitical terrains in contemporary organizations, wherein ‘the moral predicament of people working in organizations does not reside in rules, but in the way people relate to them’ (ten Bos, 1997: 1012). In this sense, Amoore’s cloud ethics accentuates a vision of solidarity, wherein no computational attributes hold absolute authority over the unlimited potentialities embedded in our ethical self-conduct towards others (Amoore, 2020: 172).
Cloud ethics has never been concluded
This note explores Amoore’s notion of cloud ethics, which is an inhospitable terrain that is by definition inconclusive, in an effort to contribute to theorizing the multifaceted ignorance within contemporary algorithmic management. Rather than offering straightforward recommendations on how to overcome ignorance, Amoore’s approach raises difficult questions about the ambiguities underlying human-algorithm relationships. These questions are often left unresolved, as seen in the Amazon scandal, where neither the organizational resistance to algorithms nor blind faith in machines taking over decisions can give any final assurance (Jarrahi et al., 2021: 7). It is worth noting, however, that Amoore’s intricate comprehension of ethics and politics showcases her unwavering commitment to diagnosing the pressing issues of our time. For Amoore, the lingering incalculability and non-knowledge stemming from the algorithmic overproduction of data and information is not a hindrance to ethical deliberation, but rather a testing challenge that makes it impossible to decide without some level of doubt. This embodied doubt not only disqualifies all ‘final answers’ to the troubles we co-produce with algorithms, but it also opens up an uncharted future for us to consider seriously the potential modes of co-existence with algorithms.
Meanwhile, this contribution to an ethicopolitical openness also reflects the limitation of our approach: it leaves us with little clear guidance for organizational practice. As such, our engagement with Amoore’s cloud ethics risks being seen as academic hairsplitting, ultimately failing to concretely question and transform contemporary algorithmic management after theoretically delving into its intractable ignorance. We might even push our academic insights further away from practice when emphasizing the collective and distributed nature of algorithmic decision-making, which, quite ironically, tends to challenge our very assumption about managers ‘exploiting’ the technical biases of algorithms, thereby dispensing with any human capacity to respond to such co-organized ignorance.
Nonetheless, serving as a cautionary note, Amoore’s cloud ethics remains practically significant in prompting us to engage in ‘risky speech’, openly contesting any promises made on ‘a person, a thing, or a technology’ (Amoore, 2020: 147). For example, scientists and managers in contemporary organizations can unsettle, if not dispel, the prevailing optimism of technoscience by refusing to pass final judgment on algorithms’ credibility in making intelligent decisions. Instead, they may continually dispute the reliability of algorithms, issuing critically-charged reports that trace and document their malfunctions, best made publicly available, at various stages of their workplace implementation – a vexing endeavor whose effects are not known in advance (cf. ibid.: 146). Fraught with doubt and personal risk, the aim of such performative speech is never to secure a predictable future for algorithmic management, but to urge us to acknowledge and work with uncertainty in all circumstances, opening up a multitude of possibilities for action despite our shared ignorance.
[1] From an etymological standpoint, the term ‘truth’ in English can be traced back to its Proto-Germanic root word, ‘treuwaz’, which conveys the notion of having or being characterized by good faith (Harper, n.d.). In this regard, the algorithmic tool employed by Amazon can only exert an impact on hiring decisions if managers are able to trust or repose their good faith in the algorithms.
Amoore, L. (2020) Cloud ethics: Algorithms and the attributes of ourselves and others. Durham, NC: Duke University Press.
Amoore, L. and V. Piotukh (2015) ‘Life beyond big data: Governing with little analytics’, Economy and Society, 44(3): 341-66.
Bakken, T. and E. L. Wiik (2018) ‘Ignorance and organization studies’, Organization Studies, 39(8): 1109-20.
Dastin, J. (2018) ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters, 11 October.
Deleuze, G. and F. Guattari (1994) What is philosophy?, trans. G. Burchell and H. Tomlinson. New York, NY: Columbia University Press.
Derrida, J. (2007) ‘A certain impossibility of saying the event’, Critical Inquiry, 33(2): 441-61.
Esposito, E. (2022) Artificial communication: How algorithms produce social intelligence. Cambridge, MA: MIT Press.
Harper, D. (n.d.) ‘Etymology of truth’, Online Etymology Dictionary. [https://www.etymonline.com/word/truth]
Heilweil, R. (2019) ‘Artificial intelligence will help determine if you get your next job’, Vox, 12 December.
Heilweil, R. (2020) ‘Why algorithms can be racist and sexist’, Vox, 18 February.
Jarrahi, M. H., G. Newlands, M. K. Lee, C. T. Wolf, E. Kinder and W. Sutherland (2021) ‘Algorithmic management in a work context’, Big Data & Society, 8(2).
Lange, A. C., M. Lenglet and R. Seyfert (2019) ‘On studying algorithms ethnographically: Making sense of objects of ignorance’, Organization, 26(4): 598-617.
Lauret, J. (2019) ‘Amazon’s sexist AI recruiting tool: how did it go so wrong?’, Becoming Human: Artificial Intelligence Magazine. [https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e]
Lavanchy, M. (2018) ‘Amazon’s sexist hiring algorithm could still be better than a human’, The Conversation. [https://theconversation.com/amazons-sexist-hiring-algorithm-could-still-be-better-than-a-human-105270]
McGoey, L. (2007) ‘On the will to ignorance in bureaucracy’, Economy and Society, 36(2): 212-35.
McGoey, L. (2012) ‘Strategic unknowns: Towards a sociology of ignorance’, Economy and Society, 41(1): 1-16.
McGoey, L. (2019) The unknowers: How strategic ignorance rules the world. London: Zed Books.
Plesner, U. and L. Justesen (2023) ‘Digitalize and deny: Pluralistic collective ignorance in an algorithmic profiling project’, ephemera, 23(1): 19-48.
ten Bos, R. (1997) ‘Essai: Business ethics and Bauman ethics’, Organization Studies, 18(6): 997-1014.
Wanjun Lei is a Master’s student in Business Administration and Philosophy at Copenhagen Business School. Alongside a background in Niklas Luhmann’s systems theory and Daoist philosophy, his current interest lies at the intersection of critical management studies with French poststructuralist philosophy.
E-mail: wale22ab AT student.cbs.dk
Guillermo Moya Rosado is a Master’s student in Business Administration and Philosophy at Copenhagen Business School. Please note that this brief introduction is not meant to set a precedent for future self-descriptions.
E-mail: gumo22ac AT student.cbs.dk