Please use this identifier to cite or link to this item:
http://hdl.handle.net/11718/22935
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chakraborty, Sanjit | |
dc.date.accessioned | 2020-03-13T04:59:38Z | |
dc.date.available | 2020-03-13T04:59:38Z | |
dc.date.issued | 2019-11-26 | |
dc.identifier.uri | http://hdl.handle.net/11718/22935 | |
dc.description | Morality and Humanoid Robots: Revisiting Different Ethical Theories by Dr. Sanjit Chakraborty, Visiting Faculty, IIM Indore | en_US |
dc.description.abstract | Uniformity in human actions and attitudes, taken together with the ceteris paribus clause of folk psychology, moves moral thought into the domain of subject-centric exploration. Morality can be identified with particular conducts and codes that are endorsed by a group of people and mutually followed by them in society. In Zettel (1967, 374), Wittgenstein argues, ‘Concepts with fixed limits would demand a uniformity of behaviour, but where I am certain, someone else is uncertain. And that is the fact of nature.’ The reflexivity of moral agency bears on the autonomy of agency (understood in terms of responsibility to oneself and to others), and the same kind of paradox might arise in the case of collective agency. The conception of morality underpins a moral responsibility that depends not only on the outward practices of agents (read: output, in the case of humanoid robots) but also on the internal attitudes (input) that rational and responsibly intentioned beings generate. The primary question that initiates the debate, ‘Can humanoid robots be moral?’, is approached from a normative outlook in which morality includes human conscience and socio-linguistic background. The presentation advances the thesis that the conceptions of morality and creativity belong to linguistic human beings rather than to non-linguistic humanoid robots: humanoid robots are docile automata that can only follow syntax without understanding semantics, and they cannot be responsible for their respective actions. To set aside human ethics in order to formulate an ethics for humanoid robots undermines the adequacy of different moral theories (Virtue ethics, Utilitarianism, Deontology, Pragmatism, etc.) and the prospect of responsible agency, which a humanoid robot could scarcely articulate. | en_US |
dc.publisher | Indian Institute of Management Ahmedabad | en_US |
dc.subject | Morality | en_US |
dc.subject | Humanoid Robots | en_US |
dc.subject | Ethical Theories | en_US |
dc.subject | Virtue ethics | en_US |
dc.subject | Moral thoughts | en_US |
dc.title | Morality and humanoid robots: revisiting different ethical theories | en_US |
dc.type | Video | en_US |
Appears in Collections: | R & P Seminar |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
RP_Nov_26_2019.html | RP_Nov_26_2019 | 1.01 kB | HTML |
Items in IIMA Institutional Repository are protected by copyright, with all rights reserved, unless otherwise indicated.