Similar Documents (20 results)
1.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

2.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

3.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.
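The elements of moral competence listed in this abstract suggest a natural modular decomposition. As a purely illustrative sketch (the class and method names below are our assumptions, not the article's implementation), here is how a norm system, moral appraisal, decision making, and moral communication might be separated in Python:

```python
# Hypothetical decomposition of the moral-competence elements named in the
# abstract: a norm system, moral appraisal ("cognition"), decision making,
# and communication of reasons. Illustrative only; not the paper's design.
from dataclasses import dataclass, field

@dataclass
class Norm:
    description: str  # stated in the robot's moral vocabulary
    weight: float     # how strongly a violation counts against an action

@dataclass
class MoralAgent:
    norms: list = field(default_factory=list)

    def appraise(self, violated: list) -> float:
        # Moral cognition: score an action by the norms it would violate.
        return -sum(n.weight for n in violated)

    def decide(self, options: dict) -> str:
        # Moral decision making: choose the option with the least violation.
        return max(options, key=lambda a: self.appraise(options[a]))

    def justify(self, action: str, options: dict) -> str:
        # Moral communication: explain the choice in terms of the norms.
        violated = options[action]
        if not violated:
            return f"Chose '{action}': no norms violated."
        return (f"Chose '{action}' despite violating: "
                + ", ".join(n.description for n in violated))

no_harm = Norm("do not cause physical harm", 10.0)
keep_promise = Norm("keep commitments", 3.0)
agent = MoralAgent([no_harm, keep_promise])
options = {"assist patient now": [keep_promise],  # breaks a commitment
           "finish prior task": [no_harm]}        # delays urgent help
print(agent.justify(agent.decide(options), options))
```

Even this toy makes the abstract's point concrete: decision making and moral communication are separable competences, and a robot could plausibly have one without the other.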

4.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. In analogy with artificial intelligence, the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. Capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to view artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. The development process must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through its discussion and analysis of general requirements for the design of ethical robots.

5.
Whether the benefits of genetically modified organisms (GMOs) outweigh their risks, or the reverse, remains a matter of debate. We argue that current risk-benefit analyses are overly idealized: they consider only technical questions while neglecting ethical and social factors. The GMO controversy involves ethical value judgments, law, politics, culture, economics, institutional arrangements, and even history and tradition; it is the product of social negotiation. Different social groups understand the issue differently, and expert assessments cannot stand in for the interests of the public. In an era of democratization, simply excluding the public from decision-making is untenable, because any decision will significantly affect public life and health.

6.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots and it also tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.

7.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

8.
This paper discusses privacy and the monitoring of e-mail in the context of the international nature of the modern world. Its three main aims are: (1) to highlight the problems involved in discussing an essentially philosophical question within a legal framework, and thus to show that providing purely legal answers to an ethical question is an inadequate approach to the problem of privacy on the Internet; (2) to discuss and define what privacy in the medium of the Internet actually is; and (3) to apply a globally acceptable ethical approach of international human rights to the problem of privacy on the Internet, and thus to answer the question of what is and is not morally permissible in this area, especially in light of recent heightened concerns about terrorist activities. It concludes that the monitoring of e-mail is, at least in the vast majority of cases, an unjustified infringement of the right to privacy, even if this monitoring is only aimed at preventing the commission of acts of terrorism.

9.
With the development of artificial intelligence, autonomous intelligent robots have begun to enter everyday life, and the rise of "robot ethics" abroad is an ethical reflection on this development. The "robot" that robot ethics studies, however, has a specific meaning, and its domains of application span labor and services, military and security, education and research, entertainment, healthcare, the environment, and personal care and emotional companionship. Within these domains, safety, law and ethics, and social impact constitute the three major problem areas of robot ethics research.

10.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence and creating more opportunities for social interaction.

11.
In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we discuss the role of the human operator, the so-called ‘cubicle warrior’, who remotely controls military robots from behind visual interfaces. We argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy, leaving him morally disengaged from his destructive and lethal actions. This challenges what he needs to know in order to make responsible decisions (the so-called knowledge condition). Now and in the near future, three factors will influence and may further increase this moral disengagement by weakening the operator’s locus-of-control orientation: (1) the ‘photoshopping’ of war; (2) the moralization of technology; (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

12.
Robots currently in use lack the emotional prerequisites of consciousness, mental states, and feeling; they merely follow rule-governed behavior according to programs set by humans. Whether a robot can qualify as an artificial moral agent (AMA) seems to depend on whether it possesses such emotional factors, since morality and emotion are closely connected. Behaviorist and expressivist approaches, however, hold that even machines lacking emotion deserve moral care. In practice, robots cast in the role of cognitively impaired beings, of servants, or of property all have a corresponding moral status and should receive ethical care in different ways. As artificial intelligence develops, we believe that in the future we will be able to build AMA robots that possess emotion.

13.
Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of its combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps the most interesting assertion is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper focuses on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines can behave more ethically on the battlefield than human soldiers. Throughout the paper we explore the philosophical critique of this claim and also look at how the robots of today are affecting our ability to fight wars in a just manner.

14.
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the type of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development leads to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important, already today, to investigate the mechanisms that govern human behavior in this respect. The results may be used when designing future military robots, to counter unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.
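To make the abstract's closing suggestion more concrete, here is a minimal, hypothetical sketch of what an ethics-based control layer could look like: a consequentialist gate between planner and actuators that vetoes high-harm actions and logs every decision so that responsibility can later be traced. All names and thresholds are our illustrative assumptions; the article prescribes no specific implementation.

```python
# Sketch of an ethics-based control layer: veto planned actions whose expected
# harm is too high or outweighs expected benefit, and log each decision for
# later responsibility assignment. Illustrative assumptions throughout.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(name)s: %(message)s")
log = logging.getLogger("ethics_gate")

@dataclass
class PlannedAction:
    name: str
    expected_harm: float     # estimated by upstream perception/prediction
    expected_benefit: float

class EthicsGate:
    def __init__(self, harm_threshold: float):
        self.harm_threshold = harm_threshold

    def approve(self, action: PlannedAction) -> bool:
        # Consequentialist check: harm must be bounded and exceeded by benefit.
        ok = (action.expected_harm <= self.harm_threshold
              and action.expected_benefit > action.expected_harm)
        log.info("%s harm=%.1f benefit=%.1f -> %s", action.name,
                 action.expected_harm, action.expected_benefit,
                 "APPROVED" if ok else "VETOED")
        return ok

gate = EthicsGate(harm_threshold=1.0)
gate.approve(PlannedAction("open door for patient", 0.0, 2.0))    # approved
gate.approve(PlannedAction("engage ambiguous target", 5.0, 1.0))  # vetoed
```

The logged trace is the point: if people are tempted to blame the robot, a record of which component estimated harm and which rule approved or vetoed the action helps redirect responsibility to the humans who set the threshold.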

15.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability of robots, and their use, to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition, by offering otherwise unavailable educational experiences, are discussed.

16.
This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

17.
The COVID-19 pandemic has accelerated the application of robots in healthcare, while robotics in China remains in its "infancy". A comparative analysis of recent healthcare-robot policies in the United States, Japan, the European Union, and China shows that policy differences center on four aspects: institution building, project implementation, platform construction, and research on regulation and ethics. Compared with the US, Japan, and the EU, China differs markedly in these respects: top-level design relies too heavily on strategic goals and funding while neglecting soft-power investments such as talent cultivation and technological innovation, and project approval is slow and procedurally burdensome; there are signs of excessive government involvement in policy-making, while the management of medical big data and the construction of data platforms are still embryonic; and policy support for social inclusiveness and the corresponding institutional standards is insufficient. Drawing on the experience of leading countries and regions, the implications for China's development of healthcare robots are that the government should further strengthen top-level design, cultivate talent, pursue market-oriented development, build big-data platforms, and advance research on regulation and ethics, so as to seize the technological high ground of next-generation robotics and realize the vision of smart healthcare.

18.
We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. N-Reasons builds on web-based surveys and experiments to enable participants to make better ethical decisions. The resulting decisions improve on those of our existing surveys in three ways. First, they are social decisions supported by reasons. Second, they rest on weaker premises, since no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments, so we can learn how to improve the platform. We sketch experimental results showing that the platform is a success and pointing to ways it can be improved.
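The core mechanism, a vote that must carry a reason, can be sketched in a few lines. The following toy Python reconstruction is our assumption about the idea, not the actual N-Reasons codebase:

```python
# Toy reconstruction of a reasons-supported survey: each vote either endorses
# an existing reason or contributes a new one, so the aggregate output is a
# decision together with its best-supported reasons. Illustrative only.
from collections import Counter, defaultdict

votes = Counter()               # decision -> number of votes
reasons = defaultdict(Counter)  # decision -> reason -> endorsements

def cast_vote(decision: str, reason: str) -> None:
    votes[decision] += 1
    reasons[decision][reason] += 1

cast_vote("yes", "robots reduce risk to human soldiers")
cast_vote("yes", "robots reduce risk to human soldiers")  # an endorsement
cast_vote("no", "machines cannot bear moral responsibility")

winner, _ = votes.most_common(1)[0]
print(f"Group decision: {winner}")
print("Leading reasons:", reasons[winner].most_common(2))
```

This also illustrates why no exogenous expertise is needed to seed such a survey: participants themselves supply both the verdicts and the pool of reasons that later participants endorse.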

19.
Artificial Life (ALife) has two goals. The first is to describe fundamental qualities of living systems through agent-based computer models; the second is to study whether we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: “dry” ALife, the study of living systems “in silico” through computer simulations, and “wet” ALife, which uses biological material to realize what has only been simulated on computers; in effect, wet ALife uses biological material as a kind of computer. This is challenging for the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies in wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles now gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer-modeling technique that sheds light on living systems, it still has significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.
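For readers unfamiliar with “dry” ALife, the canonical in-silico example is Conway’s Game of Life, in which lifelike dynamics emerge from simple local rules. This generic textbook sketch (not code from the article) shows a glider propagating:

```python
# Conway's Game of Life: the standard 'dry ALife' demonstration that lifelike
# behaviour can emerge from purely computable local rules.
from collections import Counter

def step(live):
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(glider)}")
    glider = step(glider)
```

The hypothesis the abstract calls ALife’s most challenging one, that life is a set of computable functions duplicable in any medium, is exactly what such models presuppose.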

20.
The "14-day rule", long followed by the international scientific community in research on human embryos in vitro, limits such research to within 14 days after fertilization and is the most important ethical rule in the field. With advances in embryo culture technology, this rule now faces unprecedented challenges. In 2021 the International Society for Stem Cell Research (ISSCR) recommended, in its Guidelines for Stem Cell Research and Clinical Translation, that the limit be relaxed under certain conditions, reopening the scientific debate over the "14-day rule". Taking the rule as its point of departure, this article systematically reviews the historical background of, and current challenges to, the ethical regulation of human embryo research; analyzes key ethical questions including the moral status, dignity, and legal status of the human embryo; surveys the views and attitudes of stakeholders across sectors; and, on that basis, proposes prudent policy recommendations and concrete measures for an appropriate extension of the "14-day rule".
