Assignment 3/Leicht-Deobald2019_Article_TheChallengesOfAlgorithm-Based.pdf: "The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity" (Original Paper), Journal of Business Ethics (2019) 160:377–392, https://doi.org/10.1007/s10551-019-04204-w

I need an essay written. It should be under 5,000 words; around 3,000 to 5,000 words would be ideal, I think. I have also had a difficult time editing it to avoid plagiarism.


Assignment 3/Leicht-Deobald2019_Article_TheChallengesOfAlgorithm-Based.pdf

Journal of Business Ethics (2019) 160:377–392
https://doi.org/10.1007/s10551-019-04204-w

ORIGINAL PAPER

The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity

Ulrich Leicht-Deobald · Thorsten Busch · Christoph Schank · Antoinette Weibel · Simon Schafheitle · Isabelle Wildhaber · Gabriel Kasper

Received: 11 September 2017 / Accepted: 27 May 2019 / Published online: 7 June 2019
© The Author(s) 2019

Abstract
Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees' personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory design methods, and private regulatory regimes within civil society can help overcome these challenges. Our paper contributes to literature on workplace monitoring, critical data studies, personal integrity, and literature at the intersection between HR management and corporate responsibility.

Keywords: Algorithm-based decision-making · Personal integrity · Moral imagination · Critical algorithm studies · Workplace monitoring

Data have been discussed as "the new oil" (Tarnoff 2017; Thorp 2012) that organizations need to extract and monetize using algorithms or sets of defined steps structured to process data (Gillespie 2014). As a result, modern workplaces increasingly become quantified and monitored by algorithms (Ball 2010). For example, the technology firm Xerox Services applied a recruitment algorithm to support HR managers in their hiring decisions, offering them a score of how well an applicant's qualifications fit to a job (Peck 2013). Moreover, the bank JP Morgan applies a fraud prediction algorithm to identify whether its employees behave in accordance with the company's compliance regulations (Son 2015). Against this background, scholars in the fields of business ethics (Martin and Freeman 2003), critical algorithm studies (Ananny 2016; Kitchin 2017; Willson 2017), workplace monitoring (Ball 2001), and management (Bernstein 2017) have discussed the use of algorithm-based decision-making, problematizing issues regarding privacy (Martin and Nissenbaum 2016), accountability (Diakopoulos 2016; Neyland 2015), transparency (Ananny and Crawford 2018; Martin 2018; Stohl et al. 2016), power (Beer 2017; Neyland and Möllers 2017), and social control (Ajunwa et al. 2017; boyd and Crawford 2012; Zuboff 1988).

Technology firms and business consultants have, by contrast, predominantly painted a "rosy and often naively optimistic and ultimately rationalistic picture of the business role and functions of big data" (Constantiou and Kallinikos 2015, p. 53), praising the technological sophistication and usefulness of algorithm-based decision-making. The technology firm IBM (2018), for example, advertises its HR artificial intelligence algorithm Talent Watson as empowering "HR teams to increase the efficiency and quality of their operations." In a similar vein, the analytics provider SAS (2018) claims that "fact-based decisions, powered by analytics, enable organizations to more accurately define their strategy and be successful." Novel technological advancements, however, do not simply offer opportunities for more effective organizing but also come with broader social and cultural implications (Dourish 2016; Martin and Freeman 2004; Orlikowski 2007; Verbeek 2006). Zuboff (2015) reminds us that implementing a novel technology is not an autonomous process that humans have no control over. Instead, such an implementation is also a social process that organizational members can actively participate in, object to, and game with (Friedman et al. 2013; Shilton and Anderson 2017).

In this paper, we analyze how algorithm-based HR decision-making (i.e., algorithms designed to support and govern HR decisions) may influence employees' personal integrity, defined as a person's consistency between convictions, words, and actions (Palanski and Yammarino 2009). As Margolis et al. (2007, p. 237) put it, HR management has "the potential to change, shape, redirect and fundamentally alter the course of other people's lives." Hence, we expect that algorithm-based HR decision-making has profound effects on those governed by these decisions: the employees. We focus on personal integrity as an outcome because it is an innate human ability to make sense of one's own decisions, behavior, and actions. According to Koehn (2005), personal integrity is a necessity for truly being human. Following this view, we suggest that although personal integrity may be useful for organizations, above all it is a fundamental human value for its own sake.

We claim that algorithm-based HR decision-making can shift the delicate balance between employees' personal integrity and compliance more toward the compliance side because it may evoke blind trust in processes and rules, which may ultimately marginalize human sense-making as part of the decision-making processes. This is particularly true because algorithms lack the capacity for moral imagination (i.e., to be aware of contextual moral dilemmas and to create new solutions). Thus, HR managers' reliance on algorithm-based decision-making may crowd-out employees' personal integrity in favor of compliance, which is limited to employees' conforming to externally generated rules and regulation.

Our manuscript offers three important theoretical contributions. First, our paper extends prior workplace monitoring and critical algorithm literature by showing how current algorithm-based HR decision-making applications can limit employees' personal integrity. This is vitally important as the line between monitoring employees at the workplace and in private has increasingly become blurred (Rosenblat et al. 2014). As such, employees cannot easily opt out of workplace monitoring, if at all (Ajunwa et al. 2017).
Thus, harming personal integrity at work might also have significant spill-over effects on employees' private lives (Rosenblat and Stark 2016). Furthermore, critical algorithm studies have examined algorithms directed toward constituents outside the organization, such as platform users (Bucher 2012, 2017; Mager 2012; Willson 2017), customers (Crawford 2015), consumers (Carah 2015), or freelance workers (Kushner 2013), but less on algorithms influencing employees and managers within organizations. Our manuscript joins prior business ethicists' assessments (Leclercq-Vandelannoitte 2017; Martin and Freeman 2003; Ottensmeyer and Heroux 1991) suggesting that algorithm-based HR decision-making is conducive to social control, creating what Zuboff (1988, p. 323) refers to as "anticipatory conformity."

Second, our manuscript contributes to the literature on integrity and compliance by exploring the consequences of algorithm-based HR decision-making for personal integrity. We suggest that the novel challenges of algorithm-based HR decision-making for personal integrity go beyond factors that have already been described in literature, factors such as rigid organizational structures or employees' own self-interested behavior (Adler and Borys 1996). Even before the advent of big data, institutional structures of HR practices have partly compromised employees' personal integrity (Wilcox 2012). However, we suggest that while algorithm-based HR decision-making aggravates some of the already known quandaries (Ekbia et al. 2015), it also creates novel tensions, such as increased information asymmetries between management and employees, thereby reducing employees' sense of autonomy and, hence, further shifting the delicate balance between integrity and compliance toward compliance.

Finally, our paper contributes to literature at the intersection between HR management and corporate responsibility by highlighting employees' personal integrity as a central intrinsic value to enact moral agency. Greenwood (2002) suggested that HR management tends to implicitly draw from normative assumptions of consequentialist and deontological ethics, highlighting criteria of efficiency and fairness when assessing HR-related processes, such as employee recruitment, evaluation or performance appraisals (Legge 1996; Miller 1996). Instead, our analysis is loosely rooted in discourse ethics (Beschorner 2006; Busch and Shepherd 2014; Scherer 2015), suggesting that personal integrity is a human potentiality in its own right that should be bolstered against ostensible claims of technological efficiency.

Our paper is organized as follows: Initially, we describe the advancements of algorithm-based HR decision-making that provide measures for organizations to monitor their employees. Next, we suggest that algorithm-based HR decision-making is neither as objective nor as morally neutral as it is often portrayed. Then, we argue that algorithm-based HR decision-making as marketed by technology companies supports the implementation of quantitative indicators and compliance mechanisms at the expense of employees' personal integrity.
Finally, we suggest four mechanisms, namely critical data literacy, ethical awareness, the use of participatory design approaches (i.e., defined as a methodology to include future users in the implementation process, Van der Velden and Mörtberg 2015), and private regulatory regimes within civil society to reduce negative consequences of algorithm-based decision-making.

A Brief History of Algorithm-Based HR Decision-Making

Attempts to gather information about workers and to create transparency regarding workplace behavior are by no means new phenomena (Ananny and Crawford 2018; Garson 1989; Rule 1996). Indeed, they can be traced back to philosophers, such as Adam Smith and Jeremy Bentham (Rosenblat et al. 2014). Bentham's idea of the Panopticon has been influential not only on philosophers, such as Foucault (1977), but also on management theorists (Ball 2010; Fox 1989; Zuboff 1988). It is routinely being invoked by surveillance critics and critical algorithm scholars to this day (Galič et al. 2017; Introna 2015). At the turn of the twentieth century, management theorists, such as Frederick Taylor, based their productivity experiments on the assumption that unobserved workers are inefficient, which introduced the need for constant performance monitoring (Saval 2014). Following Ball and Margulis (2011), we understand the terms "workplace monitoring" and "workplace surveillance" synonymously, as both terms "denote similar practices, namely the collection and use of data on employee activities in order to facilitate their management." However, in our manuscript we use the term workplace monitoring as it has a less value-laden and more neutral connotation than surveillance. A first step toward algorithm-based HR decision-making was the introduction of electronic performance monitoring during…
Answered Same Day · May 22, 2021 · Macquarie University

Answer To: Assignment 3, "The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity" (Leicht-Deobald et al., Journal of Business Ethics, 2019)

Abhinaba answered on May 30 2021
157 Votes
CYBER SECURITY
Table of Contents
1. Introduction
2. Issues Relating to Ethical Dilemmas in Computing
2.1 Homogeneity
2.2 Understanding the Structural Framework
3. System Resilience
3.1 AI Undergoing Changes
3.2 Usage of Internet
4. Evaluation of the Article
4.1 Assessment
4.2 COBIT 2019 Framework for Cyber Ethics Approach
4.3 ACM Code of Ethics Providing Guidance
4.4 Organizational Response
5. Conclusion
6. References
1. Introduction
Over the years, technological advancement has paved the way for the creation of cyber security, but it has also produced serious impediments. These impediments result from the complexity, the gravity, and the vast array of ethical dilemmas that exist within the cyber security framework. The ethical code of conduct for cyber security therefore rests on the merits of its own curricular focus, one that must effectively prepare cyber security professionals.
Today, the Internet is one of the fastest-growing infrastructures in everyday life, and the technical environment embeds a constantly changing set of technologies that reshape how people live and work. This paper aims to shed light on the ethical dilemmas that confront cyber security practitioners and to analyse those issues. It does so by taking a specific article as the basis for evaluating problems of employee well-being in the IT sector.
2. Issues Relating to Ethical Dilemmas in Computing 
2.1 Homogeneity
Information Technology (IT) plays a central role in industry, commerce, and government, and it facilitates activities in education, medicine, society, and entertainment on a very large scale. As Bada et al. (2019) note, the economic and social benefits of computing are directly affected by how these systems are used. Cyber security personnel, meanwhile, face a major hindrance in the form of ethical dilemmas, where they are unable to act in accordance with their professional judgement because of the technology they must work with. At the same time, IT brings problematic implications of its own: it can produce negative impacts that create ethical dilemmas in our society and pose major impediments to any ethical code of conduct. In general terms, these dilemmas fall into three main types of ethical issues: personal privacy, access rights, and harmful actions. Each of these issues plays out in different ways and shapes how the public relates to technological change. In terms of personal privacy, IT and data enable the exchange of information.
2.2 Understanding the Structural Framework
This exchange takes place on a large scale: it can involve anybody, in any location or part of the world, and at any time. Such a situation increases the potential for information to be disclosed and for the privacy of individuals to be violated. As observed by Fielder et al. (2016), it may also affect whole groups of people because of the global reach of such dissemination. Maintaining the privacy and integrity of data belonging to individuals is therefore a major challenge and responsibility. It also requires taking precautions to ensure the accuracy of data and to protect it from unauthorized access or accidental disclosure that could harm the individuals concerned.

The second ethical issue in computing systems concerns access rights. Owing to the popularity of international commerce over the Internet, the topics of computer security and access rights have moved swiftly from being a low priority for corporations and government agencies to a high priority. Interest was heightened by break-ins at prominent computer systems such as NASA and the Los Alamos National Laboratory in the United States, as well as by attempts at illegal access to United States government and military computers, mostly undertaken by hackers. Such attacks succeed where proper computer security policies and strategies have not been implemented, and any network connection to the Internet carries a comparable level of risk.

Cyber security personnel are, however, often unable to act in accordance with computer ethics because they face harmful actions, meaning injury or other negative consequences (Dua & Du, 2016). These include the undesirable loss of information, loss of property, property damage, or unwanted environmental impacts. A basic principle therefore prohibits using computing technology in ways that harm its users, whether the public, employers, or employees. Harmful actions also include the intentional destruction or modification of files and programs, which leads to serious losses of resources, as well as the unnecessary expenditure of human resources, such as the time and effort required to purge systems of computer viruses.
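To make the idea of access rights more concrete, the short sketch below shows one minimal way an organization could enforce role-based access to resources and log refused attempts for later review. It is only an illustrative sketch, not something taken from the article under discussion: the roles, resources, permissions, and function names (is_allowed, request_access) are hypothetical. The design choice it illustrates is "deny by default", meaning anything not explicitly granted is refused and recorded.

```python
# Minimal, illustrative role-based access-control (RBAC) check in Python.
# All roles, resources, and permissions below are hypothetical examples.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access_control")

# Which actions each role may perform on which resource (deny by default).
PERMISSIONS = {
    "hr_manager": {"payroll_db": {"read"}, "employee_records": {"read", "write"}},
    "employee":   {"employee_records": {"read"}},
    "auditor":    {"payroll_db": {"read"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action on the resource."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())

def request_access(user: str, role: str, resource: str, action: str) -> bool:
    """Check a request and log refusals so unauthorized attempts can be audited later."""
    allowed = is_allowed(role, resource, action)
    if not allowed:
        # Refused attempts are recorded with a timestamp for later review.
        log.warning("DENIED %s (%s) -> %s:%s at %s", user, role, resource, action,
                    datetime.now(timezone.utc).isoformat())
    return allowed

if __name__ == "__main__":
    print(request_access("alice", "employee", "employee_records", "read"))  # expected: True
    print(request_access("bob", "employee", "payroll_db", "read"))          # expected: False, and logged
```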
3. System Resilience
3.1 AI Undergoing Changes
Artificial Intelligence is increasingly deployed for threat and anomaly detection, popularly known as TAD, as part of system resilience. TAD systems can make effective use of existing security data for training in order to improve their pattern recognition, yet this also confronts cyber security personnel with difficult choices. As observed by Cherdantseva et al. (2016), an ethical dilemma arises for these personnel because more advanced TAD systems do not require...
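The TAD approach described above, training on existing security data to improve pattern recognition, can be illustrated with a minimal sketch of unsupervised anomaly detection over login events. This is only an illustrative example under stated assumptions: it assumes scikit-learn's IsolationForest is available, the three event features (hour of login, megabytes transferred, failed attempts) are invented for the example, and it is not drawn from the article or from any particular TAD product.

```python
# Minimal sketch: unsupervised anomaly detection over simulated login events.
# Assumes numpy and scikit-learn are installed; all feature names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" historical events: [hour_of_login, megabytes_transferred, failed_attempts]
normal_events = np.column_stack([
    rng.normal(10, 2, size=500),   # logins cluster around mid-morning
    rng.normal(50, 15, size=500),  # typical data-transfer volumes
    rng.poisson(0.2, size=500),    # failed attempts are rare
])

# A few suspicious events: very early logins, large transfers, repeated failures.
suspicious_events = np.array([
    [3.0, 400.0, 6.0],
    [2.5, 350.0, 4.0],
])

# Train on the historical data, assuming roughly 1% of it is anomalous.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_events)

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(suspicious_events, model.predict(suspicious_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"event={event.tolist()} -> {status}")
```

In practice, events flagged this way would typically feed a human review queue rather than trigger automatic action, which connects to the ethical dilemmas about oversight that this section discusses.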