I need an essay written. It should be under 5,000 words; 3,000–5,000 words would probably be better, I think. I also had a difficult time editing it to avoid plagiarism.
The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity

Ulrich Leicht-Deobald · Thorsten Busch · Christoph Schank · Antoinette Weibel · Simon Schafheitle · Isabelle Wildhaber · Gabriel Kasper

Journal of Business Ethics (2019) 160:377–392. https://doi.org/10.1007/s10551-019-04204-w
Received: 11 September 2017 / Accepted: 27 May 2019 / Published online: 7 June 2019. © The Author(s) 2019

Abstract: Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees' personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory design methods, and private regulatory regimes within civil society can help overcome these challenges. Our paper contributes to literature on workplace monitoring, critical data studies, personal integrity, and literature at the intersection between HR management and corporate responsibility.

Keywords: Algorithm-based decision-making · Personal integrity · Moral imagination · Critical algorithm studies · Workplace monitoring

Data have been discussed as "the new oil" (Tarnoff 2017; Thorp 2012) that organizations need to extract and monetize using algorithms, or sets of defined steps structured to process data (Gillespie 2014). As a result, modern workplaces increasingly become quantified and monitored by algorithms (Ball 2010).
For example, the technology firm Xerox Services applied a recruitment algorithm to support HR managers in their hiring decisions, offering them a score of how well an applicant's qualifications fit a job (Peck 2013). Moreover, the bank JP Morgan applies a fraud prediction algorithm to identify whether its employees behave in accordance with the company's compliance regulations (Son 2015). Against this background, scholars in the fields of business ethics (Martin and Freeman 2003), critical algorithm studies (Ananny 2016; Kitchin 2017; Willson 2017), workplace monitoring (Ball 2001), and management (Bernstein 2017) have discussed the use of algorithm-based decision-making, problematizing issues regarding privacy (Martin and Nissenbaum 2016), accountability (Diakopoulos 2016; Neyland 2015), transparency (Ananny and Crawford 2018; Martin 2018; Stohl et al. 2016), power (Beer 2017; Neyland and Möllers 2017), and social control (Ajunwa et al. 2017; boyd and Crawford 2012; Zuboff 1988).

Technology firms and business consultants have, by contrast, predominantly painted a "rosy and often naively optimistic and ultimately rationalistic picture of the business role and functions of big data" (Constantiou and Kallinikos 2015, p. 53), praising the technological sophistication and usefulness of algorithm-based decision-making. The technology firm IBM (2018), for example, advertises its HR artificial intelligence algorithm Talent Watson as empowering "HR teams to increase the efficiency and quality of their operations." In a similar vein, the analytics provider SAS (2018) claims that "fact-based decisions, powered by analytics, enable organizations to more accurately define their strategy and be successful." Novel technological advancements, however, do not simply offer opportunities for more effective organizing but also come with broader social and cultural implications (Dourish 2016; Martin and Freeman 2004; Orlikowski 2007; Verbeek 2006). Zuboff (2015) reminds us that implementing a novel technology is not an autonomous process that humans have no control over. Instead, such an implementation is also a social process that organizational members can actively participate in, object to, and game with (Friedman et al. 2013; Shilton and Anderson 2017).

In this paper, we analyze how algorithm-based HR decision-making (i.e., algorithms designed to support and govern HR decisions) may influence employees' personal integrity, defined as a person's consistency between convictions, words, and actions (Palanski and Yammarino 2009). As Margolis et al. (2007, p. 237) put it, HR management has "the potential to change, shape, redirect and fundamentally alter the course of other people's lives." Hence, we expect that algorithm-based HR decision-making has profound effects on those governed by these decisions: the employees.
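To make the notion of a recruitment-scoring algorithm concrete, the following is a purely hypothetical sketch in Python. All feature names, weights, and values here are invented for illustration only; they do not describe Xerox's, JP Morgan's, or any vendor's actual system:

```python
# Hypothetical illustration of an algorithmic "fit score" for job applicants.
# Every feature, weight, and threshold below is invented for this sketch.

def fit_score(applicant: dict) -> float:
    """Combine applicant features into a single 0-1 'fit' score.

    The fixed weights encode the designers' assumptions about what
    matters in a candidate -- exactly the kind of embedded judgment
    that makes such scores neither objective nor morally neutral.
    """
    weights = {
        "years_experience": 0.4,  # capped at 10 years, scaled to 0-1
        "skill_match": 0.5,       # fraction of required skills present
        "referral": 0.1,          # binary: referred by a current employee?
    }
    experience = min(applicant.get("years_experience", 0), 10) / 10
    skill = applicant.get("skill_match", 0.0)
    referral = 1.0 if applicant.get("referral", False) else 0.0
    return (weights["years_experience"] * experience
            + weights["skill_match"] * skill
            + weights["referral"] * referral)

applicant = {"years_experience": 5, "skill_match": 0.8, "referral": True}
score = fit_score(applicant)  # 0.4*0.5 + 0.5*0.8 + 0.1*1.0 = 0.70
```

Even this toy example shows how seemingly neutral arithmetic embeds value judgments: weighting employee referrals, for instance, can systematically advantage applicants who resemble the existing workforce.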
We focus on personal integrity as an outcome because it is an innate human ability to make sense of one's own decisions, behavior, and actions. According to Koehn (2005), personal integrity is a necessity for truly being human. Following this view, we suggest that although personal integrity may be useful for organizations, above all it is a fundamental human value for its own sake.

We claim that algorithm-based HR decision-making can shift the delicate balance between employees' personal integrity and compliance more toward the compliance side because it may evoke blind trust in processes and rules, which may ultimately marginalize human sense-making as part of the decision-making processes. This is particularly true because algorithms lack the capacity for moral imagination (i.e., to be aware of contextual moral dilemmas and to create new solutions). Thus, HR managers' reliance on algorithm-based decision-making may crowd out employees' personal integrity in favor of compliance, which is limited to employees' conforming to externally generated rules and regulation.

Our manuscript offers three important theoretical contributions. First, our paper extends prior workplace monitoring and critical algorithm literature by showing how current algorithm-based HR decision-making applications can limit employees' personal integrity. This is vitally important as the line between monitoring employees at the workplace and in private has increasingly become blurred (Rosenblat et al. 2014). As such, employees cannot easily opt out of workplace monitoring, if at all (Ajunwa et al. 2017). Thus, harming personal integrity at work might also have significant spill-over effects on employees' private lives (Rosenblat and Stark 2016).
Furthermore, critical algorithm studies have examined algorithms directed toward constituents outside the organization, such as platform users (Bucher 2012, 2017; Mager 2012; Willson 2017), customers (Crawford 2015), consumers (Carah 2015), or freelance workers (Kushner 2013), but less on algorithms influencing employees and managers within organizations. Our manuscript joins prior business ethicists' assessments (Leclercq-Vandelannoitte 2017; Martin and Freeman 2003; Ottensmeyer and Heroux 1991) suggesting that algorithm-based HR decision-making is conducive to social control, creating what Zuboff (1988, p. 323) refers to as "anticipatory conformity."

Second, our manuscript contributes to the literature on integrity and compliance by exploring the consequences of algorithm-based HR decision-making for personal integrity. We suggest that the novel challenges of algorithm-based HR decision-making for personal integrity go beyond factors that have already been described in literature, factors such as rigid organizational structures or employees' own self-interested behavior (Adler and Borys 1996). Even before the advent of big data, institutional structures of HR practices have partly compromised employees' personal integrity (Wilcox 2012). However, we suggest that while algorithm-based HR decision-making aggravates some of the already known quandaries (Ekbia et al. 2015), it also creates novel tensions, such as increased information asymmetries between management and employees, thereby reducing employees' sense of autonomy and, hence, further shifting the delicate balance between integrity and compliance toward compliance.

Finally, our paper contributes to literature at the intersection between HR management and corporate responsibility by highlighting employees' personal integrity as a central intrinsic value to enact moral agency.
Greenwood (2002) suggested that HR management tends to implicitly draw from normative assumptions of consequentialist and deontological ethics, highlighting criteria of efficiency and fairness when assessing HR-related processes, such as employee recruitment, evaluation, or performance appraisals (Legge 1996; Miller 1996). Instead, our analysis is loosely rooted in discourse ethics (Beschorner 2006; Busch and Shepherd 2014; Scherer 2015), suggesting that personal integrity is a human potentiality in its own right that should be bolstered against ostensible claims of technological efficiency.

Our paper is organized as follows: Initially, we describe the advancements of algorithm-based HR decision-making that provide measures for organizations to monitor their employees. Next, we suggest that algorithm-based HR decision-making is neither as objective nor as morally neutral as it is often portrayed. Then, we argue that algorithm-based HR decision-making as marketed by technology companies supports the implementation of quantitative indicators and compliance mechanisms at the expense of employees' personal integrity. Finally, we suggest four mechanisms, namely critical data literacy, ethical awareness, the use of participatory design approaches (i.e., a methodology to include future users in the implementation process; Van der Velden and Mörtberg 2015), and private regulatory regimes within civil society, to reduce the negative consequences of algorithm-based decision-making.

A Brief History of Algorithm-Based HR Decision-Making

Attempts to gather information about workers and to create transparency regarding workplace behavior are by no means new phenomena (Ananny and Crawford 2018; Garson 1989; Rule 1996). Indeed, they can be traced back to philosophers, such as Adam Smith and Jeremy Bentham (Rosenblat et al. 2014).
Bentham's idea of the Panopticon has been influential not only on philosophers, such as Foucault (1977), but also on management theorists (Ball 2010; Fox 1989; Zuboff 1988). It is routinely invoked by surveillance critics and critical algorithm scholars to this day (Galič et al. 2017; Introna 2015). At the turn of the twentieth century, management theorists such as Frederick Taylor based their productivity experiments on the assumption that unobserved workers are inefficient, which introduced the need for constant performance monitoring (Saval 2014). Following Ball and Margulis (2011), we understand the terms "workplace monitoring" and "workplace surveillance" synonymously, as both terms "denote similar practices, namely the collection and use of data on employee activities in order to facilitate their management." However, in our manuscript we use the term workplace monitoring, as it has a less value-laden and more neutral connotation than surveillance. A first step toward algorithm-based HR decision-making was the introduction of electronic performance monitoring during