Similar Literature
20 similar documents found (search time: 202 ms)
1.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.

2.
The paper presents a critical appraisal of Floridi’s metaphysical foundation of information ecology. It highlights some of the issues raised by Floridi with regard to the axiological status of the objects in the “infosphere,” the moral status of artificial agents, and Floridi’s foundation of information ethics as information ecology. I further criticise the ontological conception of value as a first order category. I suggest that a weakening of Floridi’s demiurgic information ecology is needed in order not to forget the limitations of human actors and/or of their surrogates, digital agents. I plead for a rational theoretical and practical view of such agents beyond utopian reasoning with regard to their potential moral status.

3.
There has been increasing attention in sociology and internet studies to the topic of ‘digital remains’: the artefacts users of social network services (SNS) and other online services leave behind when they die. But these artefacts also pose philosophical questions regarding what impact, if any, these artefacts have on the ontological and ethical status of the dead. One increasingly pertinent question concerns whether these artefacts should be preserved, and whether deletion counts as a harm to the deceased user and therefore provides pro tanto reasons against deletion. In this paper, I build on previous work invoking a distinction between persons and selves to argue that SNS offer a particularly significant material instantiation of persons. The experiential transparency of the SNS medium allows for genuine co-presence of SNS users, and also assists in allowing persons (but not selves) to persist as ethical patients in our lifeworld after biological death. Using Blustein’s “rescue from insignificance” argument for duties of remembrance, I argue that this persistence function supplies a nontrivial (if defeasible) obligation not to delete these artefacts. Drawing on Luciano Floridi’s account of “constitutive” information, I further argue that the “digital remains” metaphor is surprisingly apt: these artefacts in fact enjoy a claim to moral regard akin to that of corpses.

4.
In this paper, a critique will be developed and an alternative proposed to Luciano Floridi’s approach to Information Ethics (IE). IE is a macroethical theory that is to both serve as a foundation for computer ethics and to guide our overall moral attitude towards the world. The central claims of IE are that everything that exists can be described as an information object, and that all information objects, qua information objects, have intrinsic value and are therefore deserving of moral respect. In my critique of IE, I will argue that Floridi has presented no convincing arguments that everything that exists has some minimal amount of intrinsic value. I will argue, however, that his theory could be salvaged in large part if it were modified from a value-based into a respect-based theory, according to which many (but not all) inanimate things in the world deserve moral respect, not because of intrinsic value, but because of their (potential) extrinsic, instrumental or emotional value for persons.

5.
In this paper, we argue that, under a specific set of circumstances, designing and employing certain kinds of virtual reality (VR) experiences can be unethical. After a general discussion of simulations and their ethical context, we begin our argument by distinguishing between the experiences generated by different media (text, film, computer game simulation, and VR simulation), and argue that VR experiences offer an unprecedented degree of what we call “perspectival fidelity” that prior modes of simulation lack. Additionally, we argue that when VR experiences couple this perspectival fidelity with what we call “context realism,” VR experiences have the ability to produce “virtually real experiences.” We claim that virtually real experiences generate ethical issues for VR technologies that are unique to the medium. Because subjects of these experiences treat them as if they were real, a higher degree of ethical scrutiny should be applied to any VR scenario with the potential to generate virtually real experiences. To mitigate this unique moral hazard, we propose and defend what we call “The Equivalence Principle.” This principle states that “if it would be wrong to allow subjects to have a certain experience in reality, then it would be wrong to allow subjects to have that experience in a virtually real setting.” We argue that such a principle, although limited in scope, should be part of the risk analysis conducted by Institutional Review Boards, psychologists, empirically oriented philosophers, or game designers who are using VR technology in their work.

7.
According to the amoralist, computer games cannot be subject to moral evaluation because morality applies to reality only, and games are not real but “just games”. This challenges our everyday moralist intuition that some games are to be met with moral criticism. I discuss and reject the two most common answers to the amoralist challenge and argue that the amoralist is right in claiming that there is nothing intrinsically wrong in simply playing a game. I go on to argue for the so-called “endorsement view”, according to which there is nevertheless a sense in which games themselves can be morally problematic, viz. when they do not only represent immoral actions but endorse a morally problematic worldview. Based on the endorsement view, I argue against full-blown amoralism by claiming that gamers do have a moral obligation when playing certain games, even if their moral obligation is not categorically different from that of readers and moviegoers.

8.
This paper addresses the question of delegation of morality to a machine, through a consideration of whether or not non-humans can be considered to be moral. The aspect of morality under consideration here is protection of privacy. The topic is introduced through two cases where there was a failure in sharing and retaining personal data protected by UK data protection law, with tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to a computer database. In the UK, the issues that these cases raise have resulted in legislation designed to protect children which allows for the creation of a huge database for children. Paradoxically, we have the situation where we failed to use digital data in enforcing the law to protect children, yet we may now rely heavily on digital technologies to care for children. I draw on the work of Floridi, Sanders, Collins, Kusch, Latour and Akrich, a spectrum of work stretching from philosophy to sociology of technology and the “seamless web” or “actor–network” approach to studies of technology. Intentionality is considered, but not deemed necessary for meaningful moral behaviour. Floridi’s and Sanders’ concept of “distributed morality” accords with the network of agency characterized by actor–network approaches. The paper concludes that enfranchising non-humans, in the shape of computer databases of personal data, as moral agents is not necessarily problematic, but a balance of delegation of morality must be struck between human and non-human actors.

9.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots and it also tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.

10.

According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: (i) to bring attention to this neglected issue, (ii) to clarify what exactly this concern about dehumanization might amount to, and (iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e. removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e. conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. We argue that there are good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (in: Lackey, Applied Epistemology, Oxford University Press, 2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms, along with the possibility that some difficult trade-offs will need to be made.


11.
Artificial Life (ALife) has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models; the second studies whether or not we can artificially create living things in computational mediums, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and “wet” ALife, which uses biological material to realize what has only been simulated on computers; in effect, wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.

12.
The paper presents, firstly, a brief review of the long history of information ethics, beginning with the Greek concept of parrhesia, or freedom of speech, as analyzed by Michel Foucault. The recent concept of information ethics is related particularly to problems which arose in the last century with the development of computer technology and the internet. A broader concept of information ethics as dealing with the digital reconstruction of all possible phenomena leads to questions relating to digital ontology. Following Heidegger’s conception of the relation between ontology and metaphysics, the author argues that ontology has to do with Being itself and not just with the Being of beings, which is the matter of metaphysics. The primary aim of an ontological foundation of information ethics is to question the metaphysical ambitions of digital ontology understood as today’s pervading understanding of Being. The author analyzes some challenges of digital technology, particularly with regard to the moral status of digital agents. The author argues that information ethics does not only deal with ethical questions relating to the infosphere. This view is contrasted with arguments presented by Luciano Floridi on the foundation of information ethics as well as on the moral status of digital agents. It is argued that a reductionist view of the human body as digital data overlooks the limits of digital ontology and gives up one basis for ethical orientation. Finally, issues related to the digital divide as well as to intercultural aspects of information ethics are explored, and long- and short-term agendas for appropriate responses are presented.

13.
Is cybernetics good, bad, or indifferent? Sherry Turkle enlists deconstructive theory to celebrate the computer age as the embodiment of “difference.” No longer just a theory, one can now live a “virtual” life. Within a differential but ontologically detached field of signifiers, one can construct and reconstruct egos and environments from the bottom up and endlessly. Lucas Introna, in contrast, enlists the ethical philosophy of Emmanuel Levinas to condemn the same computer age for increasing the distance between flesh and blood people. Mediating the face-to-face relation between real people, allowing and encouraging communication at a distance, information technology would alienate individuals from the social immediacy productive of moral obligations and responsibilities. In this paper I argue against both of these positions, and for similar reasons. Turkle's celebration and Introna's condemnation of information technology both depend, so I will argue, on the same mistaken meta-interpretation of it. Like Introna, however, but to achieve a different end, I will enlist Levinas's ethical philosophy to make this case.

14.
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.

15.
The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take into consideration the work of Bostrom and Dietrich, who have radically assumed this viewpoint and thoroughly explored its implications. Thirdly, I present an alternative approach to AMAs—the Discontinuity Approach—which underscores an essential difference between human moral agents and AMAs by tackling the matter from another angle. In this section I concentrate on the work of Johnson and Bryson and I highlight the link between their claims and Heidegger’s and Jonas’s suggestions concerning the relationship between human beings and technological products. In conclusion I argue that, although the Continuity Approach turns out to be a necessary postulate to the machine ethics project, the Discontinuity Approach highlights a relevant distinction between AMAs and human moral agents. On this account, the Discontinuity Approach generates a clearer understanding of what AMAs are, of how we should face the moral issues they pose, and, finally, of the difference that separates machine ethics from moral philosophy.

16.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.

17.
Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.

18.
Floridi’s ontocentric ethics is compared with Spinoza’s ethical and metaphysical system as found in the Ethics. Floridi’s is a naturalistic ethics in which he argues that an action is right or wrong primarily according to whether or not it decreases the ‘entropy’ of the infosphere. An action that decreases the amount of entropy in the infosphere is a good one, and one that increases it is a bad one. For Floridi, ‘entropy’ refers to destruction or loss of diversity of the infosphere, or the total reality consisting of informational objects. The similarity with Spinoza is that both philosophers refer to basic reality as a foundation for normative judgments. Hence they are both ethical naturalists. An interpretation of both Floridi and Spinoza is offered that might begin to solve the basic problems for any naturalistic ethics. The problems are how a value theory that is based on metaphysics could maintain normative force, and how normative force could be justified when there appear to be widely differing metaphysical systems according to the many cultural traditions. I argue that in Spinoza’s and presumably in Floridi’s system, there is no separation between the normative and the natural from the beginning. Normative terms derive their validity from their role in referring to action that leads to a richer and fuller reality. As for the second problem, Spinoza’s God is such that He cannot be fully described by mere finite intellect. What this translates to in the contemporary situation of information ethics is that there are always bound to be many different ways of conceptualizing one and the same reality, and it is people’s needs, goals and desires that often dictate how the conceptualizing is done. However, when different groups of people interact, these systems become calibrated with one another. This is possible because they already belong to the same reality.

19.
To what extent should humans transfer, or abdicate, “responsibility” to computers? In this paper, I distinguish six different senses of ‘responsible’ and then consider in which of these senses computers can, and in which they cannot, be said to be “responsible” for “deciding” various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater “control” of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer's larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in the ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their programming design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish “background-oversight” responsibility. They must be prepared, at least periodically, to review whether the “expertise” to which they defer is indeed functioning as he/she/it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom. Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect up with, and resonate to, others. Thus there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号