Similar Literature
19 similar documents found
1.
Research on Legal Pathways for the Regulated Development of Driverless Vehicles   Cited: 2 (self-citations: 2, others: 0)
As "robots standing on four wheels," driverless vehicles are becoming a new blue ocean in the automotive sector. Competition among countries in this field is no longer merely technological; it is also a legal competition over artificial intelligence. Facing this major trend, Chinese legislation should provide comprehensive regulation of driverless vehicles, from research and development through to the determination of liability. First, as a development path, driverless vehicles should follow an incremental route, and learning capability should not be treated as a required technical feature in the early stage. Second, as institutional safeguards, licensing, mandatory certification, passenger privacy protection, and operational reporting systems should support the development and use of driverless vehicles. Finally, in traffic accidents involving driverless vehicles there are, beyond tort liability and product liability, situations in which no party is at fault; only by expanding the scope of no-fault liability to include vehicle manufacturers and owners as no-fault liability subjects can compensation for such accidents be guaranteed.

2.
The wide application of intelligent medical robots has effectively improved the efficiency of medical services and the patient experience. However, as this disruptive technology develops, its application raises many ethical concerns, and the associated risks can be classified as safety, privacy, moral, liability, and fairness risks. The stakeholders in governing the ethical risks of medical robots can be defined as regulators, designers, suppliers, and users, so clarifying each party's role and responsibility is the key to achieving the goals of ethical governance. The paper proposes mitigating these risks by integrating responsible innovation into the design stage, establishing a rigorous ethical review system, protecting users' rights as subjects, building moral capacity into medical robots, and improving the legal regulation of ethical risks.

3.
Whether a company can bear liability independently, and whether it bears limited or unlimited liability for its external debts, has long been a contested question. A company bears independent liability for its own debts, though in other circumstances it may share liability with other parties; it bears unlimited liability for its own external debts, though in other circumstances it may also bear limited liability.

4.
On November 29, the Academic Divisions of the Chinese Academy of Sciences (CAS) convened the 2012 Symposium on Science and Technology Ethics in Beijing. Following the 2011 symposia on the ethics of transgenic technology and of nanotechnology, this was another academic meeting organized by the Academic Divisions on the ethical, legal and social implications (ELSI) of emerging technologies and on scientists' responsibilities. Its theme was "Ethical Issues in Stem Cell Research and the Social Responsibility of Scientists." The meeting was hosted by the CAS Committee on Scientific Integrity together with the Standing Committee of the Division of Life Sciences and Medicine, and organized by the Bureau of Academician Affairs and the Academic Divisions' Center for Ethics of Science and Technology. More than fifty participants attended, including Xu Zhihong (chair of the Committee on Scientific Integrity), Zhou Yuan (vice chair of the committee), Chen Yiyu (director of the National Natural Science Foundation of China), Li Jinghai (CAS vice president), Cao Xiaoye (secretary-general of the Presidium of the Academic Divisions), academicians and experts in stem cell research, scholars of science and technology ethics, science policy and research management, and officials from relevant ministries.

5.
On the Subjects of Technological Responsibility   Cited: 6 (self-citations: 1, others: 6)
Du Baogui, Studies in Science of Science (《科学学研究》), 2002, 20(2): 123-126
By studying the historical evolution and basic composition of the subjects of technological responsibility, and on the basis of distinguishing the meaning and value of such subjects, this paper argues that the subject of technological responsibility should be understood as a group comprising engineers, scientists, enterprises, the state, and other actors.

6.
A psychological contract in an organization refers to the responsibilities and obligations that the organization and its members each perceive they owe the other in their ongoing relationship; it covers both "what the organization owes its employees" and "what employees owe the organization." This study examines psychological contracts in rural grassroots organizations. A "Psychological Contract Questionnaire for Chinese Rural Grassroots Cadres" was first developed through interviews and several rounds of pilot testing; then, based on a large-scale survey, multivariate statistical techniques were used to analyze the cadres' psychological contracts, yielding for the first time a six-dimensional model of their internal structure. The study finds that cadres fall into three groups: a "single-correspondence" group, a "double-low" group, and a "double-high" group. Members of the "double-low" group account for more than one third of all respondents, indicating that a considerable share of rural grassroots cadres stand in a loose or even broken relationship with their organizations. The paper also examines the individual and organizational factors that shape cadres' psychological contracts, providing a theoretical and methodological basis for strengthening rural grassroots governance and improving the management of rural grassroots organizations.
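The abstract does not name the multivariate technique used to derive the three cadre groups, so the following is only an illustrative sketch under that caveat: it partitions synthetic six-dimension questionnaire profiles into three clusters with scikit-learn's k-means as a stand-in for whatever grouping method the authors actually applied. The data, dimension count, and sample size are all hypothetical.

```python
# Illustrative sketch only: k-means on synthetic data as a stand-in for the
# paper's unspecified multivariate grouping method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical data: 300 respondents x 6 contract dimensions, scored 1-5.
scores = rng.uniform(1, 5, size=(300, 6))

# Partition respondents into three groups, mirroring the paper's three types.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
for label in range(3):
    members = scores[kmeans.labels_ == label]
    print(f"group {label}: n={len(members)}, mean profile={members.mean(axis=0).round(2)}")
```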

7.
On the Three Roles of Technology Users   Cited: 1 (self-citations: 0, others: 1)
Technology users are the subjects of technology in its use phase. Within the web of STS relations they play at least three roles: developers of technology in the economic sense, constructors of technology in the sociological sense, and bearers of responsibility for technology in the ethical sense. These roles not only highlight the importance of use in the evolving relationship among science, technology, and society, but also signal a "turn to use" in technology and society studies.

8.
Li Chang, Jin Yanghua, Science Research Management (《科研管理》), 2021, 42(8): 9-16
While profoundly reshaping how human society produces and lives, artificial intelligence also raises many ethical dilemmas and challenges; establishing new norms of science and technology ethics so that AI better serves humanity has become a shared concern of society. From the perspective of science and technology ethics, this paper reviews domestic and international research on the issues arising in AI fields such as robotics, algorithms, big data, and autonomous driving, including moral agency, the division of responsibility, technical safety, discrimination and fairness, and privacy and data protection, as well as the ethical governance of AI. It then identifies directions for further work: establishing ethical principles and governance systems for the Chinese context, interdisciplinary collaboration in AI ethics research, integrating theoretical analysis with practical cases, and dividing and coordinating the ethical roles of multiple actors.

9.
As public institutions that serve as society's main source of information, the mass media directly shape, to a great extent, society's moral values and modes of practice. It follows that in exercising their right to "freedom," the mass media must also weigh "responsibility" and "ethics": their practice calls for ethical scrutiny and moral questioning of the social responsibilities they ought to bear. Confronting the media's many real-world problems requires coordination and joint effort among government and society, the media themselves, and their audiences, so as to build mass media that are responsible and willing to be held accountable.

10.
On the Plural Subjects of Social Assistance in China   Cited: 14 (self-citations: 0, others: 14)
In China's modern social assistance system, the state bears the role of the primary responsible party. In addition, mutual aid among members of society and non-governmental organizations such as charities, poverty-alleviation agencies, and social assistance groups constitutes another important pillar, an indispensable complement to government assistance. Recognizing and establishing these plural subjects of social assistance, giving social mutual aid the necessary support, and fostering the social environment its healthy development requires are the inevitable choices for improving China's social assistance system.

11.
This article discusses mechanisms and principles for assigning moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the types of robots that call for moral consideration. We further argue that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development leads to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is therefore important, already today, to investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots' behavior should be seen as one of the many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

12.
Li Ke, Studies in Science of Science (《科学学研究》), 2010, 28(11): 1606-1610
China and Western countries differ considerably in the background against which scientists' social responsibility was first raised, the channels through which it is fulfilled, and the measures used to cultivate awareness of it. In China, the growing incidence of misconduct in scientific research has drawn attention to the issue; the emphasis falls on scientists fulfilling their social responsibility through sound mechanisms of moral self-discipline, and on individual scientists cultivating a scientific conscience to heighten their sense of responsibility. In Western countries, by contrast, it was the many negative effects of science that triggered broad discussion; there, scientific communities set codes to regulate scientists' research activities, and dedicated institutions train scientists in ethical and responsibility awareness. The comparison also shows that science and technology ethics in China still remains, in essence, academic ethics.

13.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both topics have doubled twice in the past ten years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria for a robot's moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, the article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite for resolving at least some of them.
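The article itself gives no implementation, so the sketch below is only a structural reading of the five competence elements listed in the abstract, rendered as pluggable slots in a Python data class; all names, types, and defaults are illustrative assumptions, not drawn from the paper.

```python
# Structural sketch only: the five elements of moral competence named in the
# abstract as pluggable components; all names here are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MoralCompetence:
    vocabulary: set[str] = field(default_factory=set)                 # moral vocabulary
    norms: dict[str, float] = field(default_factory=dict)             # norm -> strength
    appraise: Callable[[str], float] = lambda event: 0.0              # moral cognition and affect
    decide: Callable[[list[str]], str] = lambda options: options[0]   # decision making and action
    explain: Callable[[str], str] = lambda choice: f"I chose {choice}"  # moral communication

agent = MoralCompetence(vocabulary={"harm", "duty"}, norms={"do_no_harm": 0.9})
print(agent.decide(["assist", "withdraw"]), "|", agent.explain("assist"))
```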

14.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that computer scientists who design artificial agents need answered. One such question is: “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for its behavior?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) through the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders: such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even for an artificial agent with a fully modifiable table capable of learning* and intentionality*, one that meets the conditions Floridi and Sanders set for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
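To make the table distinction concrete, here is a minimal toy sketch with assumed names throughout: an agent driven by a stimulus-to-action table that is either fixed by the designer or rewritable by the agent itself. The LoA1/LoA2 comments map the code onto the user and designer views discussed above; nothing here reproduces the paper's actual formalism.

```python
# Toy sketch: behavior fully determined by a control table; the "modifiable"
# flag separates designer-fixed agents from self-modifying ones.
class TableAgent:
    def __init__(self, table, modifiable=False):
        # LoA2 (designer view): the table is the whole program.
        self._table = dict(table)
        self.modifiable = modifiable

    def act(self, stimulus):
        # LoA1 (user view): only stimulus/response behavior is visible.
        return self._table.get(stimulus, "noop")

    def learn(self, stimulus, action):
        if not self.modifiable:
            raise PermissionError("unmodifiable table: the designer retains control")
        self._table[stimulus] = action  # the agent rewrites its own programming

fixed = TableAgent({"obstacle": "stop"})
adaptive = TableAgent({"obstacle": "stop"}, modifiable=True)
adaptive.learn("obstacle", "swerve")
print(fixed.act("obstacle"), adaptive.act("obstacle"))  # stop swerve
```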

15.
A core question in the ethical dilemmas now facing autonomous vehicles is whether moral norms should be embedded in the algorithmic structure, and if so, in what way. When confronting possible future traffic accidents, both shielding information and relying on "moral luck" to choose at random, and letting an AI system decide autonomously on the basis of complete information, face serious difficulties; a "moral algorithm" should therefore be preset for autonomous vehicles. As for how to determine that moral algorithm, given the conflicts among existing moral principles, the complexity of moral decision making, and the situated character of human moral judgment, basing it on some fixed human moral principle or norm is unrealistic.
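To make the conflict the abstract points to concrete, the toy sketch below, in which all rules and numbers are invented for illustration, scores one unavoidable-collision dilemma under two preset principles and shows them disagreeing; this is exactly the kind of conflict that makes fixing any single "moral algorithm" in advance so difficult.

```python
# Invented toy dilemma: two preset moral principles disagree on the same case.
def utilitarian(options):
    # Minimize total expected harm, whatever the means.
    return min(options, key=lambda o: o["expected_harm"])

def deontological(options):
    # Forbid actively redirecting harm onto bystanders, whatever the totals.
    permitted = [o for o in options if not o["redirects_harm"]]
    return permitted[0] if permitted else options[0]

dilemma = [
    {"name": "stay_course", "expected_harm": 3, "redirects_harm": False},
    {"name": "swerve",      "expected_harm": 1, "redirects_harm": True},
]
print(utilitarian(dilemma)["name"], "vs", deontological(dilemma)["name"])
# -> swerve vs stay_course
```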

16.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, certain pressing ethical issues need to be taken into account prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that their creation and use is impermissible. One upshot is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement in their autonomous artificial moral agents, Wallach and Allen's claims notwithstanding.

17.
The aesthetic dimension of engineering ethics is a distinctive form of ethical-aesthetic activity that has gradually emerged with the development of engineering practice. As engineering ethics has shifted from role-based ethical responsibility toward responsibility to the public, its aesthetic dimension shows itself in the moral emotions engineers experience when facing moral dilemmas; through the transformation of and reaching beyond the moral sense, beauty guides toward goodness, making engineering activity more humane and ultimately realizing the highest good of promoting human welfare. In terms of how a harmonious relation of truth, goodness, and beauty is established, the aesthetics of engineering ethics should be a unity of the three grounded in engineering ethics. On this theoretical basis, the article explores the aesthetic dimension of engineering ethics from a philosophical and aesthetic perspective.

18.
I argue that the problem of ‘moral luck’ is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its ‘logical malleability’, leads to ever greater levels of complexity, unreliability and uncertainty. Ever-widening contexts of application in turn leave greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility: the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of value, such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism, virtue ethics, and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, the resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt new ways of looking at risk, accountability and responsibility.

19.
When software is written and then utilized in complex computer systems, problems often occur. Sometimes these problems cause a system to malfunction, and in some instances such malfunctions cause harm. Should any of the persons involved in creating the software be blamed and punished when a computer system failure leads to persons being harmed? In order to decide whether such blame and punishment are appropriate, we need first to consider whether the people are “morally responsible”. Should any of the people involved in creating the software be held morally responsible, as individuals, for the harm caused by a computer system failure? This article provides one view of moral responsibility and then discusses some barriers to holding people morally responsible. Next, it provides information about the Therac-25, a computer-controlled medical linear accelerator, and the computer system failures that led to deaths and injuries. Finally, it investigates whether two key people involved in the Therac-25 case could reasonably be considered to bear some degree of moral responsibility for the deaths and injuries. The conclusions about whether or not these people were morally responsible necessarily rest upon a certain amount of speculation about what they knew and what they did. These limitations, however, should not lead us to conclude that discussions of moral responsibility are fruitless. In some cases determinations of moral responsibility can be made, and in others the investigation is still worthwhile, as the article demonstrates.
