Similar Literature (20 results)
1.
Philosophical and public discussions of the ethical aspects of violent computer games typically centre on the relation between playing violent videogames and its supposed direct consequences for violent behaviour. But such an approach rests on a controversial empirical claim, is often one-sided in the range of moral theories used, and remains on a general level with its focus on content alone. In response to these problems, I pick up Matt McCormick’s thesis that potential harm from playing computer games is best construed as harm to one’s character, and propose to redirect our attention to the question of how violent computer games influence the moral character of players. Inspired by the work of Martha Nussbaum, I sketch a positive account of how computer games can stimulate an empathetic and cosmopolitan moral development. Moreover, rather than making a general argument applicable to a wide spectrum of media, my concern is with specific features of violent computer games that make them especially morally problematic in terms of empathy and cosmopolitanism, features that have to do with the connections between content and medium, and between virtuality and reality. I also discuss some remaining problems. In this way I hope to contribute to a less polarised discussion about computer games that does justice to the complexity of their moral dimension, and to offer an account that is helpful to designers, parents, and other stakeholders. An earlier version of this paper was presented at the ACLA 2006 conference in Princeton, 25 March 2006.

2.
Can a player be held morally responsible for the choices that she makes within a videogame? Do the moral choices that the player makes reflect in any way on the player’s actual moral sensibilities? Many videogames offer players the opportunity to make numerous choices within the game, including moral choices. But the scope of these choices is quite limited. I attempt to analyze these issues by drawing on philosophical debates about the nature of free will. Many philosophers worry that, if our actions are predetermined, then we cannot be held morally responsible for them. However, Harry Frankfurt’s compatibilist account of free will suggests that an agent can be held morally responsible for actions that she wills, even if the agent is not free to act otherwise. Using Frankfurt’s analysis, I suggest that videogames represent deterministic worlds in which players lack the ability to freely choose what they do, and yet players can be held morally responsible for some of their actions, specifically those actions that the player wants to do. Finally, I offer some speculative comments on how these considerations might impact our understanding of the player’s moral psychology as it relates to the ethics of imagined fictional events.

3.
4.
In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called ‘cubicle warrior’, who remotely controls military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result, the cubicle warrior is morally disengaged from his destructive and lethal actions. This challenges what he should know in order to make responsible decisions (the so-called knowledge condition). Now and in the near future, three factors may increase this moral disengagement even further by weakening the operator’s internal locus of control: (1) ‘photoshopping’ the war; (2) the moralization of technology; (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

5.
My avatar, my self: Virtual harm and attachment
Multi-user online environments involve millions of participants world-wide. In these online communities participants can use their online personas – avatars – to chat, fight, make friends, have sex, kill monsters and even get married. Unfortunately participants can also use their avatars to stalk, kill, sexually assault, steal from and torture each other. Despite attempts to minimise the likelihood of interpersonal virtual harm, programmers cannot remove all possibility of online deviant behaviour. Participants are often greatly distressed when their avatars are harmed by other participants’ malicious actions, yet there is a tendency in the literature on this topic to dismiss such distress as evidence of too great an involvement in and identification with the online character. In this paper I argue that this dismissal of virtual harm is based on a set of false assumptions about the nature of avatar attachment and its relation to genuine moral harm. I argue that we cannot dismiss avatar attachment as morally insignificant without being forced to also dismiss other, more acceptable, forms of attachment, such as attachment to possessions, people, and cultural objects and communities. Arguments against according moral significance to virtual harm fail because they do not reflect participants’ and programmers’ experiences and expectations of virtual communities, and they have the unintended consequence of failing to grant significance to attachments that we take for granted, morally speaking. Avatar attachment is expressive of identity and self-conception and should therefore be accorded the moral significance we give to real-life attachments that play a similar role. A shorter version of this paper was presented at the Cyberspace 2005 Conference at Masaryk University, Brno, Czech Republic.

6.
Information plays a major role in any moral action. ICT (Information and Communication Technologies) have revolutionized the life of information, from its production and management to its consumption, thus deeply affecting our moral lives. Amid the many issues they have raised, a very serious one, discussed in this paper, is labelled the tragedy of the Good Will. This is represented by the increasing pressure that ICT and their deluge of information are putting on any agent who would like to act morally, when informed about actual or potential evils, but who also lacks the resources to do much about them. In the paper, it is argued that the tragedy may be at least mitigated, if not solved, by seeking to re-establish some equilibrium, through ICT themselves, between what agents know about the world and what they can do to improve it.

7.
Ad blockers are a category of computer software program, typically run as web browser extensions, that allow users to selectively eliminate advertisements from the webpages they visit. Many people have alleged that using an ad blocker is morally problematic because it is bad for content providers and consumers, and it is morally akin to theft. We disagree. In this paper, we defend an independent argument for the conclusion that using an ad blocker is morally permissible. In doing so, we respond to the criticisms that ad blocking is bad for content providers and consumers, that it is morally akin to theft, and that it violates a contract between consumers and web publishers.

8.
This essay examines some ethical aspects of stalking incidents in cyberspace. Particular attention is focused on the Amy Boyer/Liam Youens case of cyberstalking, which has raised a number of controversial ethical questions. We limit our analysis to three issues involving this particular case. First, we suggest that the privacy of stalking victims is threatened because of the unrestricted access to on-line personal information, including on-line public records, currently available to stalkers. Second, we consider issues involving moral responsibility and legal liability for Internet service providers (ISPs) when stalking crimes occur in their ‘space’ on the Internet. Finally, we examine issues of moral responsibility for ordinary Internet users to determine whether they are obligated to inform persons whom they discover to be the targets of cyberstalkers.

9.
10.
王开磊 《科教文汇》2012,(16):58-58,62
With the development of technology and the widespread adoption of computers, computer software is now applied in every industry. Software comes in many types: office software, e-mail systems, database management software, library management software, and even entertainment games. The industries involved include government, enterprises, banking, higher education, agriculture, and more. The development and application of such software has promoted social and economic growth and improved people’s working efficiency. Accordingly, the teaching of software engineering occupies an important position in computer education as a whole. As instructors in software engineering programmes, we have a responsibility to help students master this subject and to supply more and better talent to the computer software industry. This paper discusses some problems identified in the teaching of software engineering and describes in detail how these teaching methods can be optimized and improved.

11.
According to the amoralist, computer games cannot be subject to moral evaluation because morality applies to reality only, and games are not real but “just games”. This challenges our everyday moralist intuition that some games are to be met with moral criticism. I discuss and reject the two most common answers to the amoralist challenge and argue that the amoralist is right in claiming that there is nothing intrinsically wrong in simply playing a game. I go on to argue for the so-called “endorsement view”, according to which there is nevertheless a sense in which games themselves can be morally problematic, viz. when they do not only represent immoral actions but endorse a morally problematic worldview. Based on the endorsement view, I argue against full-blown amoralism by claiming that gamers do have a moral obligation when playing certain games, even if their moral obligation is not categorically different from that of readers and moviegoers.

12.
13.
Ethics and Information Technology - Under what circumstances, if ever, ought we to grant that Artificial Intelligences (AI) are persons? The question of whether AI could have the high degree of moral...

14.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.

15.
There has been increasing attention in sociology and internet studies to the topic of ‘digital remains’: the artefacts users of social network services (SNS) and other online services leave behind when they die. But these artefacts also pose philosophical questions regarding what impact, if any, they have on the ontological and ethical status of the dead. One increasingly pertinent question concerns whether these artefacts should be preserved, and whether deletion counts as a harm to the deceased user and therefore provides pro tanto reasons against deletion. In this paper, I build on previous work invoking a distinction between persons and selves to argue that SNS offer a particularly significant material instantiation of persons. The experiential transparency of the SNS medium allows for genuine co-presence of SNS users, and also assists in allowing persons (but not selves) to persist as ethical patients in our lifeworld after biological death. Using Blustein’s “rescue from insignificance” argument for duties of remembrance, I argue that this persistence function supplies a nontrivial (if defeasible) obligation not to delete these artefacts. Drawing on Luciano Floridi’s account of “constitutive” information, I further argue that the “digital remains” metaphor is surprisingly apt: these artefacts in fact enjoy a claim to moral regard akin to that of corpses.

16.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

17.
Computer ethicists have for some years been troubled by the issue of how to assign moral responsibility for disastrous events involving erroneous information generated by expert information systems. Recently, Jeroen van den Hoven has argued that agents working with expert information systems satisfy the conditions for what he calls epistemic enslavement. Epistemically enslaved agents do not, he argues, have moral responsibility for accidents for which they bear causal responsibility. In this article, I develop two objections to van den Hoven’s argument for epistemic enslavement of agents working with expert information systems.

18.
杜严勇 《科学学研究》2017,35(11):1608-1613
The attribution and distribution of moral responsibility is an important question in robot ethics. Even if robots possess ever-greater autonomy and ever-stronger learning capacities, this does not mean that robots can bear moral responsibility independently; rather, moral responsibility should be borne by the people and organizations involved in robotic technology. Only by clarifying the moral responsibilities of robot designers, manufacturers, users, government agencies, and other organizations, and by establishing concrete mechanisms for assuming responsibility, can the phenomenon of “organized irresponsibility” be effectively avoided. In addition, judging from current legal scholarship on the liability questions raised by driverless-vehicle technology, most scholars tend to hold that responsibility should be borne by the manufacturer, the seller, and the user, and that the driverless car itself does not constitute a responsible agent at all.

19.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen’s claims notwithstanding.

20.
刘湾梅 《科教文汇》2011,(2):25-25,33
Under the influence of traditional “exam-oriented education”, the moral development of young people today clearly suffers from many misconceptions and blind spots. Problems in family education, such as an eagerness for quick results and vague standards of conduct, seriously hinder children’s all-round, harmonious development in morality, intellect, physical fitness, aesthetics, and practical skills. Therefore, we must innovate and optimize the forms of home-school communication, change parents’ tendency to value intellectual achievement over moral development, and strengthen and improve parents’ capacity for moral education in the family.
