Morality and humanoid robots: revisiting different ethical theories
Abstract
Uniformity in human actions and attitudes, together with the ceteris paribus clause of folk psychology, moves moral thought into the domain of subject-centric exploration. Morality can be identified with particular conducts and codes that are endorsed by a group of people and mutually followed by them in society. In Zettel (1967, 374), Wittgenstein argues, ‘Concepts with fixed limits would demand a uniformity of behaviour, but where I am certain, someone else is uncertain. And that is the fact of nature.’ The reflexivity of moral agency raises a paradox concerning the autonomy of agency (understood as responsibility to oneself and to others), and the same kind of paradox might arise in the case of collective agency. The conception of morality underpins a moral responsibility that depends not only on the outward practices of agents (read: output, in the case of humanoid robots) but also on the internal attitudes (input) that rational, responsible, intention-bearing agents generate. The primary question that initiates the debate, ‘Can humanoid robots be moral?’, unfolds from a normative outlook in which morality includes human conscience and socio-linguistic background. The presentation advances the thesis that the conceptions of morality and creativity interplay with linguistic human beings rather than with non-linguistic humanoid robots, since humanoid robots are docile automata that can only follow syntax without understanding semantics, and so cannot be responsible for their respective actions. To eradicate human ethics in order to formulate an ethics of humanoid robots undermines the adequacy of different moral theories (virtue ethics, utilitarianism, deontology, pragmatism, etc.) and the prospect of responsible agency, which a humanoid robot could scarcely articulate.
Collections
- R & P Seminar [209]