Asimov’s Three Laws of Robotics
Isaac Asimov, one of the most prolific (and, in my opinion, best) science fiction writers, is synonymous with his future histories of the human race and its interactions with robots. Rather than fearing robots, he suggests we embrace them as members of our society. In "I, Robot", the 1950 collection that opens his series of robot stories, he even goes so far as to suggest three laws that should be hard-wired into these artificial intelligence automatons in such a way that they can never be broken. As AI comes ever closer to our society, maybe we should take note.
Origin of the term ‘Robot’
The Czech word robota refers to compulsory, or serf, labour. In his 1920 play R. U. R. (Rossum's Universal Robots), the Czech writer and playwright Karel Čapek co-opted it, coining 'robot' to designate a class of automatons designed to work for humans. In the play, in what has since become a fairly standard turn of events for stories featuring such creations, the robots end up rebelling and destroying the human race.
However, it wasn't until some thirty years after the play was published that Asimov presented his 'Three Laws of Robotics', with the idea that, if they were followed, human beings could safely coexist with robots.
The Three Laws of Robotics
Asimov's suggested three laws are:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov added a fourth law in later novels, which he named the "Zeroth Law" because it was meant to precede those above. It states that:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
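The interesting thing about the laws is their strict precedence: a lower law always yields to a higher one. A minimal sketch of that idea, assuming a hypothetical Action type with invented flags (harms_humanity, harms_human, disobeys_order, endangers_self) and not taken from Asimov or any real robotics framework, might express the precedence as a lexicographic preference over candidate actions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law violation
    harms_human: bool = False      # First Law violation
    disobeys_order: bool = False   # Second Law violation
    endangers_self: bool = False   # Third Law violation

def law_violations(a: Action) -> tuple[bool, bool, bool, bool]:
    """Violations ordered by precedence: Zeroth, First, Second, Third."""
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(candidates: list[Action]) -> Action:
    """Pick the action whose most serious violation is of the lowest-priority law.
    Tuple comparison is lexicographic, so breaking the Third Law (self-harm)
    is always preferred over breaking the Second or First."""
    return min(candidates, key=law_violations)

# Example: ordered into danger, the robot obeys, because the Second Law
# outranks the Third.
obey = Action("obey the order, risking itself", endangers_self=True)
balk = Action("refuse the order, staying safe", disobeys_order=True)
print(choose([obey, balk]).name)   # -> "obey the order, risking itself"
```

This is only an illustration of the ordering, of course; deciding whether an action actually "harms a human" is the hard part the laws quietly assume away.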
Of course, the major flaw in all this is that we require human beings to guarantee that they will always 'hard bake' these laws into any AI creation they make. Hmm, I think I'd trust the robots more!