- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
You can read more here: Asimov's Robotic Laws on Wikipedia
But, whilst you do, please note the later creation of a zeroth Law:
- A robot must not merely act in the interests of individual humans, but of all humanity.
This principle could easily be applied to Human Rights. Everyone can have their human rights, but not at the cost of the community's human rights. Thus we can be protected from people who invoke Human Rights without a care for their social responsibilities.
Or have I missed something?