Philosophical and public discussions of the ethical aspects of violent computer games typically centre on the relation
between playing violent videogames and its supposed direct consequences for violent behaviour. But such an approach rests on
a controversial empirical claim, is often one-sided in the range of moral theories used, and remains on a general level with
its focus on content alone. In response to these problems, I pick up Matt McCormick’s thesis that potential harm from playing
computer games is best construed as harm to one’s character, and propose to redirect our attention to the question of how violent
computer games influence the moral character of players. Inspired by the work of Martha Nussbaum, I sketch a positive account
of how computer games can stimulate an empathetic and cosmopolitan moral development. Moreover, rather than making a general
argument applicable to a wide spectrum of media, my concern is with specific features of violent computer games that make
them especially morally problematic in terms of empathy and cosmopolitanism, features that have to do with the connections
between content and medium, and between virtuality and reality. I also discuss some remaining problems. In this way I hope
to contribute to a less polarised discussion about computer games that does justice to the complexity of their moral dimension,
and to offer an account that is helpful to designers, parents, and other stakeholders.
An earlier version of this paper was presented at the ACLA 2006 conference in Princeton, 25 March 2006.
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.
Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.
Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable
to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper
a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis
on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while
robots are neither human nor mere tools, we have sufficient functional, agency-based, appearance-based, social-relational,
and existential criteria left to evaluate trust in robots. It is also argued that such evaluations must be sensitive to cultural
differences, which impact on how we interpret the criteria and how we think of trust in robots. Finally, it is suggested that
when it comes to shaping conditions under which humans can trust robots, fine-tuning human expectations and robotic appearances
is advisable.
Ethical reflection on drone fighting suggests that this practice does not only create physical distance, but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screenfighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also shows that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. This ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill, but revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.
Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.
Nussbaum’s version of the capability approach is not only a helpful approach to development problems but can also be employed
as a general ethical-anthropological framework in ‘advanced’ societies. This paper explores its normative force for evaluating
information technologies, with a particular focus on the issue of human enhancement. It suggests that the capability approach
can be a useful way to specify a workable and adequate level of analysis in human enhancement discussions, but argues that
any interpretation of what these capabilities mean is itself dependent on (interpretations of) the techno-human practices
under discussion. This challenges the capability approach’s means-end dualism concerning the relation between technology
on the one hand and humans and capabilities on the other. It is argued that instead of facing a choice between development
and enhancement, we should reflect on how we want to shape human-technological practices, for instance by using the language
of capabilities. For this purpose, we have to engage in a cumbersome hermeneutics that interprets dynamic relations between
unstable capabilities, technologies, practices, and values. This requires us to modify the capability approach by highlighting
and interpreting its interpretative dimension.