IIMA Institutional Repository Home › Video Library › R & P Seminar › View Item

    Morality and humanoid robots: revisiting different ethical theories

    View/Open
    RP_Nov_26_2019 (1.013Kb)
    Date
    2019-11-26
    Author
    Chakraborty, Sanjit
    Abstract
    Uniformity in human actions and attitudes, incumbent with the ceteris paribus clause of folk psychology, lucidly transits moral thought into the domain of subject-centric exploration. Morality can be identified in terms of particular conducts and codes that are endorsed by a group of people and mutually followed by them in society. In Zettel (1967, 374), Wittgenstein argues, ‘Concepts with fixed limits would demand a uniformity of behaviour, but where I am certain, someone else is uncertain. And that is the fact of nature.’ The crux of the reflexivity of moral agency, and the discussion around it, pans out the autonomy of agency (when seen in the sense of self-responsibility and responsibility to others); the same kind of paradox might arise in the case of collective agency. The conception of morality underpins a moral responsibility that depends not only on the outward practices (read: output, in the case of humanoid robots) of agents but also on the internal attitudes (input) that rational, responsible, intentional beings generate. The primary question that initiates a tremendous debate, ‘Can humanoid robots be moral?’, is deciphered from the normative outlook, where morality includes human conscience and socio-linguistic background. The presentation advances the thesis that the conceptions of morality and creativity interplay with linguistic human beings rather than with non-linguistic humanoid robots, since humanoid robots are indeed docile automata that can only follow syntax rather than understand semantics, and they cannot be responsible for their respective actions. To eradicate human ethics in order to formulate an ethics for humanoid robots undermines the adequacy of different moral theories (virtue ethics, utilitarianism, deontology, pragmatism, etc.) and the prospect of responsible agency, which a humanoid robot could scarcely articulate.
    URI
    http://hdl.handle.net/11718/22935
    Collections
    • R & P Seminar [209]

    DSpace software copyright © 2002-2016  DuraSpace
    Contact Us | Send Feedback
    Theme by Atmire NV
     

     
