Related Literature
1.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.

2.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen’s claims notwithstanding.

3.
Robots in today’s application domains lack consciousness, mental states and feelings, the emotional preconditions of morality; they merely follow rules according to programs set by humans. Whether a robot can count as an artificial moral agent (AMA) thus seems to depend on whether it possesses emotional factors, since morality and emotion are closely connected. Behaviourism and expressivism, however, hold that even machines lacking emotions should receive moral care. Judging from the practical applications of robots, robots cast in the role of cognitively impaired beings, of servants, or of property all have a corresponding moral status and should receive ethical care in different ways. With the development of artificial intelligence, we believe that in the future we will be able to build AMA robots that genuinely possess emotions.

4.
Li Chang &amp; Jin Yanghua, Science Research Management (《科研管理》), 2021, 42(8): 9-16
While artificial intelligence is profoundly reshaping the ways human society produces and lives, it also gives rise to numerous ethical dilemmas and challenges; establishing new norms of science and technology ethics so that AI better serves humanity has become a concern shared by society as a whole. From the perspective of science and technology ethics, this paper reviews and analyses domestic and international research on the problems arising in AI fields such as robotics, algorithms, big data and autonomous driving, including moral agency, the distribution of responsibility, technical safety, discrimination and fairness, and privacy and data protection, as well as on the ethical governance of AI technology. It then argues that future research should address the establishment of ethical principles and governance systems in the Chinese context, interdisciplinary collaboration in AI ethics research, the integration of theoretical analysis with practical cases, and the division and coordination of ethical roles among multiple actors.

5.
6.
It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our work places: according to the International Federation of Robotics, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind, I suggest the use of an approach for incorporating ethics into the design process of robots known as Care Centered Value Sensitive Design (CCVSD). Although this approach was originally and intentionally designed for the healthcare domain, the aim of this paper is to present a preliminary study of how personal and professional service robots might also be evaluated using the CCVSD approach. The normative foundations for CCVSD come from its reliance on the care ethics tradition and in particular the use of care practices for (1) structuring the analysis and (2) determining the values of ethical import. To apply CCVSD outside of healthcare one must show that the robot has been integrated into a care practice. Accordingly, the practice in which the robot is to be used must be assessed and shown to meet the conditions of a care practice. By investigating the foundations of the approach I hope to show why it may be applicable for service robots and further to give examples of current robot prototypes that can and cannot be evaluated using CCVSD.

7.
In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called ‘cubicle warrior’, who remotely controls the military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result the cubicle warrior is morally disengaged from his destructive and lethal actions. This challenges what he should know to make responsible decisions (the so-called knowledge condition). Nowadays and in the near future, three factors will influence and may increase this moral disengagement even further, due to a decrease in locus-of-control orientation: (1) photoshopping the war; (2) the moralization of technology; (3) the speed of decision-making. As a result, cubicle warriors cannot reasonably be held responsible anymore for the decisions they make.

8.

Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.


9.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

10.
We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those gathered by our existing surveys in three ways. First, they are social decisions supported by reasons. Second, the results are based on weaker premises, as no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments, so we can learn how to improve the platform. We sketch experimental results that show the platform is a success, as well as pointing to ways it can be improved.

11.
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up which is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to a moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

12.
Technology is not merely a functional tool; it is a mediator that shapes the relation between humans and the world, influencing users’ perception and behaviour in use and thereby their ethical decision-making. We therefore need to come to grips with the relation between technological mediation and ethical behaviour and, through mediation analysis and design, materialize morality in the design of artifacts so as to advance the democratization of technological society. Introducing the phenomenologically rooted theory of technological mediation into the ethics of technology helps enrich our understanding of the characteristics of technology itself and helps establish a “high-tech ethics” suited to the development of contemporary society.

13.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability, and use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

14.
Our moral condition in cyberspace
Some kinds of technological change not only trigger new ethical problems, but also give rise to questions about those very approaches to addressing ethical problems that have been relied upon in the past. Writing in the aftermath of World War II, Hans Jonas called for a new “ethics of responsibility,” based on the reasoning that modern technology dramatically divorces our moral condition from the assumptions under which standard ethical theories were first conceived. Can a similar claim be made about the technologies of cyberspace? Do online information technologies so alter our moral condition that standard ethical theories become ineffective in helping us address the moral problems they create? I approach this question from two angles. First, I look at the impact of online information technologies on our powers of causal efficacy. I then go on to consider their impact on self-identity. We have good reasons, I suggest, to be skeptical of any claim that there is a need for a new, cyberspace ethics to address the moral dilemmas arising from these technologies. I conclude by giving a brief sketch of why this suggestion does not imply there is nothing philosophically interesting about the ethical challenges associated with cyberspace.

15.
Following the success of Sony Corporation’s “AIBO,” robot cats and dogs are multiplying rapidly. “Robot pets” employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations. It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely older persons. This goal is misguided and unethical. While there are a number of apparent benefits that might be thought to accrue from ownership of a robot pet, the majority and the most important of these are predicated on mistaking, at a conscious or unconscious level, the robot for a real animal. For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal. It requires sentimentality of a morally deplorable sort. Indulging in such sentimentality violates a (weak) duty that we have to ourselves to apprehend the world accurately. The design and manufacture of these robots is unethical in so far as it presupposes or encourages this delusion. The invention of robot pets heralds the arrival of what might be called “ersatz companions” more generally: that is, of devices that are designed to engage in and replicate significant social and emotional relationships. The advent of robot dogs offers a valuable opportunity to think about the worth of such companions, the proper place of robots in society and the value we should place on our relationships with them.

16.
This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future when people blame robots for their actions. It is important, already today, to investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

17.
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

18.
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system’s capabilities. We propose that verification, which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems, serves as a superior framework for both designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.

19.
Common morality and computing
This article shows how common morality can be helpful in clarifying the discussion of ethical issues that arise in computing. Since common morality does not always provide unique answers to moral questions, not all such issues can be resolved; common morality does, however, provide a clear answer to the question of whether one can illegally copy software for a friend.

20.
Trust can be understood as a precondition for a well-functioning society or as a way to handle the complexities of living in a risk society, but also as a fundamental aspect of human morality. Interactions on the Internet pose some new challenges to issues of trust, especially connected to disembodiedness. Mistrust may be an important obstacle to Internet use, which is problematic as the Internet becomes a significant arena for political, social and commercial activities necessary for full participation in a liberal democracy. The Categorical Imperative lifts up trust as a fundamental component of human ethical virtues – first of all, because deception and coercion, the antitheses of trust, cannot be universalized. Mistrust is, according to Kant, a natural component of human nature, as we are social beings dependent on recognition by others but also prone to deceiving others. Only in true friendships can this tendency be overcome and give room for unconditional trust. Still, we can argue that Kant must hold that trustworthy behaviour, as well as trust in others, is obligatory, as expressions of respect for humanity. The Kantian approach integrates political and ethical aspects of trust, showing that protecting the external activities of citizens is required in order to act morally. This means that security measures, combined with specific regulations, are important preconditions for building online trust, providing an environment enabling people to act morally and for trust-based relationships.
