Asimov's Three Laws of Robotics
Isaac Asimov was a celebrated science fiction writer whose work shaped the popular conception of robotics and tackled the problems that might arise with the rise of intelligent machines. He developed the Three Laws of Robotics as a way of safeguarding human existence should we ever build advanced robotic beings. The laws go as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A Zeroth Law was later added that takes precedence over all the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
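To make the precedence structure concrete, here is a minimal sketch (in Python, with entirely hypothetical predicates, not anything from Asimov) that treats each law as a veto checked in rank order, so a lower-ranked law can never override a higher one:

```python
# Purely illustrative sketch: the laws modeled as a strict precedence
# ordering, where an action is vetoed by the first law it violates.
# All names and toy string checks below are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Law:
    rank: int                           # 0 = Zeroth Law, the highest precedence
    text: str
    violated_by: Callable[[str], bool]  # toy stand-in for a real moral judgment

def permitted(action: str, laws: List[Law]) -> bool:
    """Allow an action only if no law, checked in precedence order, vetoes it."""
    for law in sorted(laws, key=lambda l: l.rank):
        if law.violated_by(action):
            print(f"Vetoed by Law {law.rank}: {law.text}")
            return False
    return True

laws = [
    Law(0, "May not harm humanity", lambda a: "harm humanity" in a),
    Law(1, "May not injure a human being", lambda a: "injure" in a),
    Law(2, "Must obey human orders", lambda a: "disobey" in a),
    Law(3, "Must protect its own existence", lambda a: "self-destruct" in a),
]

print(permitted("fetch the mail", laws))   # True: no law objects
print(permitted("injure intruder", laws))  # False: vetoed by Law 1
```

Note that this reduces each law to a boolean veto; the real laws mix prohibitions with obligations, and deciding whether a given action actually counts as "harm" or "obedience" is exactly the hard part, as the flaws discussed below make clear.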
His laws worked perfectly in his stories: robots were able to coexist with humans quite safely, without risk of turning on humanity. But as we draw closer to the singularity, many questions arise that point out flaws in Asimov's laws. For example, the laws can be vague, and since none of them actually defines what distinguishes a robot from a human, a robot could, through a simple lack of information, bypass any of the laws without technically "breaking" them. Moreover, Asimov's laws only pertain to robots with human-level or near-human-level intelligence; as computers evolve toward superintelligence, who is to say that a robot or AI would not find a way to reprogram itself?