Robots: ethical by design
Authors: Gordana Dodig-Crnkovic, Baran Çürüklü
Institution: (1) Computer Science Laboratory, School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden; (2) Computational Perception Laboratory, School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden
Abstract: Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are being developed that are expected to become part of our everyday lives in future decades, so it is necessary to ensure that their behaviour is adequate. In analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. Capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to view artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates the understanding and regulation of such networks. It should be pointed out that the development process must take an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
| |
This document is indexed in SpringerLink and other databases.