Similar Documents
A total of 20 similar documents were found (search time: 312 ms).
1.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.
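Since the abstract stresses that these capacities could be computationally implemented, a minimal sketch may help picture how the five elements it names could be organized as modular components of a robot architecture. This is my own illustration under assumed names (MoralCompetence, norm_system, etc.), not a design from the paper.

```python
from dataclasses import dataclass, field

# Minimal sketch (illustrative only): the five elements of moral competence named
# in the abstract, represented as data fields and behavioral stubs. All names and
# weights are assumptions, not taken from the paper.

@dataclass
class MoralCompetence:
    moral_vocabulary: set = field(default_factory=set)    # concepts the robot can name and reason with
    norm_system: dict = field(default_factory=dict)       # norms with rough endorsement weights

    def judge(self, action: str) -> float:
        """Moral cognition (stub): score an action against the norm system."""
        return self.norm_system.get(action, 0.0)

    def decide(self, options: list) -> str:
        """Moral decision making (stub): pick the most norm-conforming option."""
        return max(options, key=self.judge)

    def explain(self, action: str) -> str:
        """Moral communication (stub): justify a choice in the moral vocabulary."""
        return f"Chose '{action}' (norm weight {self.judge(action):.2f})."


competence = MoralCompetence(
    moral_vocabulary={"harm", "fairness", "consent"},
    norm_system={"assist_patient": 0.9, "ignore_request": 0.1},
)
print(competence.explain(competence.decide(["assist_patient", "ignore_request"])))
```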

2.
With the development of artificial intelligence technology, autonomous intelligent robots have begun to enter people's everyday lives, and the rise of "robot ethics" abroad is an ethical reflection on this development. However, the "robot" that robot ethics takes as its object of study has a specific meaning, and its domains of existence cover labor and services, military and security, education and scientific research, entertainment, healthcare, the environment, and personal care and emotional companionship. Among these, safety issues, legal and ethical issues, and social issues constitute the three major problem areas of robot ethics research.

3.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

4.
Robots in current application domains lack the emotional preconditions of consciousness, mental states, and feelings; they merely follow rules according to programs set by humans. Whether a robot can count as an artificial moral agent (AMA) thus seems to depend on whether it possesses emotional factors, since morality and emotion are closely linked. However, behaviorism and expressivism hold that even machines lacking emotion should receive moral consideration. Judging from the practice of robot applications, robots cast in the role of cognitively impaired beings, of servants, or of property all have a corresponding moral status and should receive ethical consideration in different ways. With the development of artificial intelligence, we believe that in the future we will be able to build AMA robots that possess emotions.

5.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence and creating more opportunities for social interaction.

6.
As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. By applying the capabilities approach as a guide to both the design and use of robot caregivers, we hope that this will maximize opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.

7.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

8.
In this essay, a new approach to the ethics of emerging information technology will be presented, called anticipatory technology ethics (ATE). The ethics of emerging technology is the study of ethical issues at the R&D and introduction stage of technology development through anticipation of possible future devices, applications, and social consequences. In the essay, I will first locate emerging technology in the technology development cycle, after which I will consider ethical approaches to emerging technologies, as well as obstacles in developing such approaches. I will argue that any sound approach must centrally include futures studies of technology. I then present ATE and some applications of it to emerging information technologies. In ATE, ethical analysis is performed at three levels, the technology, artifact, and application levels, and at each level distinct types of ethical questions are asked. ATE analyses result in the identification and evaluation of a broad range of ethical issues that can be anticipated in relation to an emerging information technology. This ethical analysis can then be used for ethical recommendations for design or governance.
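To make the three-level structure of ATE concrete, here is a minimal sketch of how the levels and their distinct question types could be organized as a reusable checklist. The level names follow the abstract; the sample questions and function names are assumptions added for illustration, not taken from the paper.

```python
# Minimal sketch: ATE's three levels of analysis as a checklist. Sample questions
# are illustrative placeholders, not the paper's own formulations.

ATE_CHECKLIST = {
    "technology": [   # the technology as a whole, independent of specific products
        "What risks and harms are inherent to the technology itself?",
        "Which ethical issues arise regardless of any particular artifact?",
    ],
    "artifact": [     # particular devices or systems built with the technology
        "What issues follow from this artifact's specific capabilities and defaults?",
    ],
    "application": [  # concrete uses in a social context
        "Who is affected by this use, and how are benefits and burdens distributed?",
    ],
}

def anticipate(technology: str) -> list:
    """Return the full set of anticipatory questions to work through for a technology."""
    return [
        f"[{level}] {question} (target: {technology})"
        for level, questions in ATE_CHECKLIST.items()
        for question in questions
    ]

for line in anticipate("companion robots for elder care"):
    print(line)
```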

9.
黎常  金杨华 《科研管理》2021,42(8):9-16
While profoundly reshaping the ways human society produces and lives, artificial intelligence also raises many ethical dilemmas and challenges; establishing new norms of science and technology ethics so that AI better serves humanity has become a concern shared across society. From the perspective of science and technology ethics, this paper reviews domestic and international research on issues arising in AI fields such as robotics, algorithms, big data, and autonomous driving, including moral agency, allocation of responsibility, technical safety, discrimination and fairness, and privacy and data protection, as well as the ethical governance of AI technology. It then argues that future research should address the establishment of ethical principles and governance systems in the Chinese context, interdisciplinary collaboration in AI ethics research, the integration of theoretical analysis with practical cases, and the division and coordination of ethical roles among multiple actors.

10.
11.
鲍劼  宋迎法  都平平  王静 《现代情报》2018,38(10):115-120
[Purpose/Significance] Artificial intelligence technology is developing rapidly and is increasingly applied in the research and practice of smart libraries. Delivering smart services through an intelligent service robot can effectively improve the service efficiency of university libraries and provide a feasible reference case for smart-library construction. [Method/Process] Using speech technology, a four-microphone array, a speech knowledge base, and robot motion control, and taking the library of China University of Mining and Technology as an example, an intelligent service robot for university libraries was designed and implemented, and a new model for delivering smart services with such robots is discussed. [Result/Conclusion] Practice shows that the intelligent service robot can provide smart services such as voice interaction, intelligent consultation, information broadcasting, and route guidance, offering readers an intelligent interactive experience and innovating the smart-service model of university libraries; it is an important exploration of the move toward smarter university libraries.
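As a rough illustration of the kind of service routing such a robot needs, the sketch below maps a recognized utterance to one of the services the abstract names (consultation, information broadcasting, route guidance). This is an assumption for illustration only; it is not the actual China University of Mining and Technology implementation, and a real system would use a trained speech and intent-recognition pipeline rather than keyword matching.

```python
# Hypothetical intent routing for a library service robot (illustrative only).

def consult(query: str) -> str:
    return f"Searching the catalogue and knowledge base for: {query}"

def broadcast(query: str) -> str:
    return "Today's notices: opening hours 8:00-22:00; new arrivals on floor 2."

def guide(query: str) -> str:
    return f"Planning a route and leading the reader to: {query}"

# Keyword-to-handler map; a production system would replace this with an NLU model.
INTENT_KEYWORDS = {
    "where": guide, "take me": guide,
    "notice": broadcast, "news": broadcast,
    "find": consult, "book": consult,
}

def handle_utterance(text: str) -> str:
    """Route a recognized utterance to a service handler (default: consultation)."""
    lowered = text.lower()
    for keyword, handler in INTENT_KEYWORDS.items():
        if keyword in lowered:
            return handler(text)
    return consult(text)

print(handle_utterance("Where is the periodicals reading room?"))
print(handle_utterance("Can you find a book on machine ethics?"))
```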

12.
Forensic science, as a distinctive applied science, frequently encounters ethical issues in its research and application, yet China currently lacks research on research ethics and codes of conduct in forensic science, and the corresponding ethics review bodies and review regulations remain absent. Starting from current practice in forensic science research ethics, this paper analyzes the inadequate development of professional ethical norms, the absence of research ethics oversight, and the conflicts between the professional characteristics of forensic science and ethical requirements, and offers suggestions for the management of research ethics in forensic science research.

13.
When technological ethics come into conflict, the technological agent faces the problem of choosing which of several ethics or ethical principles should take precedence, which raises the question of a hierarchy of technological ethics. Conflict among technological ethics is the logical precondition for such a hierarchy to exist, and the hierarchy can be determined according to the identity of the technological agent and the type of technological activity. The ethics or ethical principles applied first in a technological activity are called higher-order technological ethics; those applied second are called lower-order technological ethics. Universal ethics, as the constant higher-order technological ethics, embodies people's most basic, minimal shared values, standards, and attitudes, and therefore must take precedence in any technological activity. Determining the hierarchy of technological ethics can resolve conflicts among them, thereby helping technical professionals resolve ethical confusion and allowing technological activities to proceed in accordance with their purposes.

14.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability, and use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

15.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen’s claims notwithstanding.

16.
There is surprisingly little attention in Information Technology ethics to respect for persons, either as an ethical issue or as a core value of IT ethics or as a conceptual tool for discussing ethical issues of IT. In this, IT ethics is very different from another field of applied ethics, bioethics, where respect is a core value and conceptual tool. This paper argues that there is value in thinking about ethical issues related to information technologies, especially, though not exclusively, issues concerning identity and identity management, explicitly in terms of respect for persons understood as a core value of IT ethics. After explicating respect for persons, the paper identifies a number of ways in which putting the concept of respect for persons explicitly at the center of both IT practice and IT ethics could be valuable, then examines some of the implicit and problematic assumptions about persons, their identities, and respect that are built into the design, implementation, and use of information technologies and are taken for granted in discussions in IT ethics. The discussion concludes by asking how better conceptions of respect for persons might be better employed in IT contexts or brought better to bear on specific issues concerning identity in IT contexts.

17.
This article addresses some of the most important legal and ethical issues posed by robot companions. Firstly, we clarify that robots are to be deemed objects and more precisely products. This on the one hand excludes the legitimacy of all such considerations involving robots as bearers of their own rights and obligations, and forces a functional approach in the analysis. Secondly, pursuant to these methodological considerations we address the most relevant ethical and legal concerns, ranging from the risk of dehumanization and isolation of the user, to privacy and liability concerns, as well as financing of the diffusion of this still expensive technology. Solutions are briefly sketched, in order to provide the reader with sufficient indications on what strategies could and should be implemented, already in the design phase, as well as what kind of intervention ought to be favored and expected by national and European legislators. The recent Report with Recommendations to the Commission on Civil Law Rules on Robotics of January 24, 2017 by the European Parliament is specifically taken into account.

18.
AI Ethics Guidelines and Governance Systems: Current Status and Strategic Recommendations
This paper first defines the basic concepts of AI ethics guidelines and analyzes the current state of AI development. It then examines the main causes of AI ethics problems and summarizes the ethical issues arising in typical application scenarios, including autonomous driving, intelligent media, smart healthcare, and service robots. It further explores a governance framework built around basic principles for addressing AI ethics problems, covering technical measures, moral norms, policy guidance, and legal rules. Finally, in light of China's strategic plan for AI development, it argues that a layered, multi-dimensional governance system should be adopted as governance is implemented in society, and proposes concrete measures for AI ethics guidelines and governance for 2020-2035, covering public awareness, standards systems, and laws and regulations.

19.
唐林新  邓汨方 《科教文汇》2012,(24):175-176
Industrial robots are increasingly used in enterprise production. To meet enterprises' demand for personnel skilled in industrial robot applications, higher vocational colleges should introduce industrial robot technology courses as soon as possible. The characteristics of such a course make it well suited to project-based teaching, with content focused on understanding, operating, maintaining, and designing industrial robots; combining hands-on training equipment with robot simulation software in actual teaching yields good results.

20.
Robotic systems consisting of a neuron culture grown on a multielectrode array (MEA) which is connected to a virtual or mechanical robot have been studied for approximately 15 years. It is hoped that these MEA-based robots will be able to address the problem that robots based on conventional computer technology are not very good at adapting to surprising or unusual situations, at least not when compared to biological organisms. It is also hoped that insights gained from MEA-based robotics can have applications within human enhancement and medicine. In this paper, I argue that researchers within this field risk overstating their results by not paying enough attention to fundamental challenges within the field. In particular, I investigate three problems: the coding problem, the embodiment problem and the training problem. I argue that none of these problems have been solved and that they are not likely to be solved within the field. After that, I discuss whether MEA-based robotics should be considered pop science. Finally, I investigate the ethical aspects of this research.
