Similar Literature (20 results)
1.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence and creating more opportunities for social interaction.

2.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

3.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability, and use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

4.
We investigated how people react emotionally to working with robots in three scenario-based role-playing survey experiments collected in 2019 and 2020 in the United States (Study 1: N = 1003; Study 2: N = 969; Study 3: N = 1059). Participants were randomly assigned to groups and asked to write a short post about a scenario in which we manipulated the number of robot teammates or the size of the social group (work team vs. organization). The emotional content of the corpora was measured using six sentiment analysis tools, and socio-demographic and other factors were assessed through survey questions and LIWC lexicons and further analyzed in Study 4. The results showed that people are less enthusiastic about working with robots than with humans. Our findings suggest these more negative reactions stem from feelings of oddity in an unusual situation and from the lack of social interaction.
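The abstract does not name the six sentiment analysis tools; as a rough illustration of the general technique, the sketch below scores the emotional content of short posts with NLTK's VADER analyzer, a stand-in assumption rather than a documented choice of the study's authors.

```python
# Illustrative only: scoring short free-text posts with NLTK's VADER
# sentiment analyzer. VADER is a stand-in; the study used six tools
# whose identities are not given in the abstract.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

posts = [  # invented examples, not data from the study
    "I would be excited to collaborate with a robot teammate every day.",
    "Working alongside robots instead of people sounds isolating and odd.",
]

sia = SentimentIntensityAnalyzer()
for post in posts:
    scores = sia.polarity_scores(post)  # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.3f}  {post}")
```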

5.
In order to investigate how the use of robots may impact everyday tasks, twelve participants in our study interacted with a University of Hertfordshire Sunflower robot over a period of 8 weeks in the university's Robot House. Participants performed two constrained tasks, one physical and one cognitive, four times over this period. Participant responses were recorded using a variety of measures, including the System Usability Scale and the NASA Task Load Index. The use of the robot affected the participants' experienced workload differently for the two tasks, and this effect changed over time. In the physical task, there was evidence of adaptation to the robot's behavior. For the cognitive task, the use of the robot was experienced as more frustrating in the later weeks.
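The System Usability Scale mentioned above has a standard published scoring rule, which a short sketch can make concrete: ten items rated 1-5, where odd-numbered (positively worded) items contribute the rating minus 1, even-numbered (negatively worded) items contribute 5 minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score. The example ratings below are invented, not data from the study.

```python
# Standard SUS scoring: 10 items rated 1-5. Odd-numbered items
# (0-based even index) contribute (rating - 1); even-numbered items
# contribute (5 - rating); the sum is scaled by 2.5 onto 0-100.
def sus_score(ratings: list[int]) -> float:
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(ratings))
    return total * 2.5

# Hypothetical participant response:
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```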

6.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

7.
With the development of artificial intelligence technology, autonomous intelligent robots have begun to enter people's everyday lives. The rise of "robot ethics" abroad is an ethical reflection prompted by this development. However, the "robot" that is the object of study of "robot ethics" has a specific meaning, and the domains in which such robots operate span labor and services, military security, education and scientific research, entertainment, healthcare, the environment, personal care, and emotional companionship. Among these, safety issues, legal and ethical issues, and social issues constitute the three major problem domains of "robot ethics" research.

8.
It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our workplaces: according to the International Federation of Robotics, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind, I suggest the use of an approach for incorporating ethics into the design process of robots known as Care Centered Value Sensitive Design (CCVSD). Although this approach was originally and intentionally designed for the healthcare domain, the aim of this paper is to present a preliminary study of how personal and professional service robots might also be evaluated using the CCVSD approach. The normative foundations for CCVSD come from its reliance on the care ethics tradition, and in particular the use of care practices for: (1) structuring the analysis, and (2) determining the values of ethical import. To apply CCVSD outside of healthcare, one must show that the robot has been integrated into a care practice. Accordingly, the practice in which the robot is to be used must be assessed and shown to meet the conditions of a care practice. By investigating the foundations of the approach, I hope to show why it may be applicable to service robots, and further to give examples of current robot prototypes that can and cannot be evaluated using CCVSD.

9.
This paper explores the relationship between dignity and robot care for older people. It highlights the disquiet that is often expressed about failures to maintain the dignity of vulnerable older people, but points out some of the contradictory uses of the word ‘dignity’. Certain authors have resolved these contradictions by identifying different senses of dignity, contrasting the inviolable dignity inherent in human life with other forms of dignity that can be present to varying degrees. The capability approach (CA) is introduced as a different but tangible account of what it means to live a life worthy of human dignity. It is used here as a framework for the assessment of the possible effects of eldercare robots on human dignity. The CA enables the identification of circumstances in which robots could enhance dignity by expanding the set of capabilities that are accessible to frail older people. At the same time, it is also possible within its framework to identify ways in which robots could have a negative impact, by impeding the access of older people to essential capabilities. It is concluded that the CA has some advantages over other accounts of dignity, but that further work and empirical study are needed in order to adapt it to the particular circumstances and concerns of those in the latter part of their lives.

10.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite for resolving at least some of them.

11.
This paper examines one particular problem of values in cloud computing: how individuals can take advantage of the cloud to store data without compromising their privacy and autonomy. Through the creation of Lockbox, an encrypted cloud storage application, we explore how designers can use reflection in designing for human values to maintain both privacy and usability in the cloud.
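The abstract does not describe Lockbox's internals; as a minimal sketch of the underlying idea of client-side encryption (data is encrypted locally before it ever reaches the cloud, so the provider stores only ciphertext), the example below uses the symmetric Fernet scheme from Python's cryptography package, an assumed stand-in rather than Lockbox's actual design.

```python
# Minimal client-side encryption sketch: the plaintext never leaves the
# user's machine unencrypted. Fernet (AES-CBC plus HMAC) from the
# `cryptography` package is an assumed stand-in, not Lockbox's scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # kept locally by the user, never uploaded
cipher = Fernet(key)

plaintext = b"private notes destined for cloud storage"
ciphertext = cipher.encrypt(plaintext)   # this opaque blob is what is uploaded

# ...later, after fetching the blob back from the cloud...
assert cipher.decrypt(ciphertext) == plaintext
```

The design tension this illustrates is the one the paper names: keeping the key out of the provider's hands preserves privacy, but it also means that losing the local key means losing the data, which is where usability work comes in.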

12.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

13.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots and it also tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.

14.
This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

15.
This article addresses some of the most important legal and ethical issues posed by robot companions. Firstly, we clarify that robots are to be deemed objects, and more precisely products. This, on the one hand, excludes the legitimacy of all considerations involving robots as bearers of their own rights and obligations, and forces a functional approach in the analysis. Secondly, pursuant to these methodological considerations, we address the most relevant ethical and legal concerns, ranging from the risk of dehumanization and isolation of the user, to privacy and liability concerns, as well as the financing of the diffusion of this—still expensive—technology. Solutions are briefly sketched, in order to provide the reader with sufficient indications of what strategies could and should be implemented, already in the design phase, as well as what kind of intervention ought to be favored and expected from national and European legislators. The European Parliament's recent Report with Recommendations to the Commission on Civil Law Rules on Robotics of January 24, 2017 is specifically taken into account.

16.
Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.

17.
In this paper we discuss the social and ethical issues that arise as a result of digitization, based on six dominant technologies: the Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. We highlight the many developments in the digitizing society that appear to be at odds with six recurring themes revealed by our analysis of the scientific literature on the dominant technologies: privacy, autonomy, security, human dignity, justice, and balance of power. This study shows that the new wave of digitization is putting pressure on these public values. In order to effectively shape the digital society in a socially and ethically responsible way, stakeholders need to have a clear understanding of what such issues might be. Supervision is most developed in the areas of privacy and data protection. For other ethical issues concerning digitization, such as discrimination, autonomy, human dignity, and unequal balance of power, supervision is not as well organized.

18.
Caller ID service continues to be a controversial issue in the U.S. because of its privacy implications. State and federal regulators, legislators, scholars, and the courts have examined and responded to the privacy issue from a policy perspective, but perhaps without a complete understanding of the meaning of privacy in the context of the debate. What types of privacy are involved, how significant are these interests, and how might privacy needs compare and be balanced? This article explores privacy in the context of the Caller ID debate from a social science perspective. It examines motives for seeking and preserving privacy and explores the dynamic relationship between the caller and called party positions. It then provides an analysis of current and proposed Caller ID features and policies with a view toward understanding how these proposals balance competing privacy needs. This article establishes an analytic framework and a foundation for further study of caller and called party privacy that should lead to a better understanding of the privacy debate and the privacy implications of Caller ID.

19.
郑磊, 叶桦, 孙晓洁. 《科技通报》 (Bulletin of Science and Technology), 2011, 27(5): 671-676
To improve the safety of industrial welding robots, the task requirements of a safety protection system are analyzed and studied. CAN industrial field-bus technology is introduced to establish a CAN-bus-based robot control architecture. Corresponding safety strategies are formulated for the various operating states of the robot control system, system safety levels are defined, an independent safety controller is designed, the CAN bus communication protocol format between the safety controller and each robot module is configured, and the robot's operating parameters are monitored. Through logical judgment and safety-zone …
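As a rough illustration of the safety-controller-to-module messaging the abstract describes, the sketch below broadcasts a status frame over a CAN bus with the python-can library; the arbitration ID, payload layout, safety levels, and channel name are all invented for illustration and are not the protocol format defined in the paper.

```python
# Hypothetical sketch of a safety controller broadcasting its status over
# CAN using python-can. The ID, payload layout, and channel are invented;
# the paper specifies its own CAN communication protocol format.
import can

SAFETY_STATUS_ID = 0x100                 # invented arbitration ID
LEVEL_OK, LEVEL_EMERGENCY_STOP = 0x00, 0xFF

def send_safety_status(bus: can.BusABC, level: int, zone: int) -> None:
    # byte 0: current safety level; byte 1: active safety-zone index
    msg = can.Message(arbitration_id=SAFETY_STATUS_ID,
                      data=[level, zone],
                      is_extended_id=False)  # standard 11-bit identifier
    bus.send(msg)

if __name__ == "__main__":
    with can.Bus(channel="can0", interface="socketcan") as bus:
        send_safety_status(bus, LEVEL_OK, zone=1)
```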

20.
This paper aims to provide new insights to debates on group privacy, which can be seen as part of a social turn in privacy scholarship. Research is increasingly showing that the classic individualistic understanding of privacy is insufficient to capture new problems in algorithmic and online contexts. An understanding of privacy as an “interpersonal boundary-control process” (Altman, The Environment and Social Behavior, Brooks and Cole, Monterey, 1975), which frames privacy as a social practice necessary to sustain intimate relationships, is gaining ground. In this debate, my research is focused on what I refer to as “self-determined groups”, which can be defined as groups whose members consciously and willingly perceive themselves as being part of a communicative network. While much attention is given to new forms of algorithmically generated groups, current research on group privacy fails to account for the ways in which self-determined groups are affected by changes brought about by new information technologies. In an explorative case study on self-organized therapy groups, I show how these groups have developed their own approach to privacy protection, functioning on the basis of social practices followed by all participants. This informal approach was effective in pre-digital times, but online, privacy threats have reached a new level, extending beyond the scope of a group’s influence. I therefore argue that self-determined sensitive-topic groups are left facing what I present as a dilemma: a tension between the seemingly irreconcilable need for connectivity and a low threshold, on the one hand, and the need for privacy and trust, on the other. In light of this dilemma, I argue that we need new sorts of political solutions.
