Should robots still be governed by Asimov's Three Laws of Robotics? If not, what new rules should there be?
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What's funny is that I actually picked up one of Asimov's novels, Prelude to Foundation, at the beginning of the year and am slowly but steadily working my way through it, so when his name came up during the morsel presentation, I knew immediately what the presenter was talking about.
I don't think the rules governing robots need to change. In fact, I feel the existing laws are even more relevant now: as technology advances, robots become increasingly powerful. If the movie The Terminator has taught us anything, it's that human safety should be the number one priority of robots. The antagonist robot breaks all three of Asimov's laws, and that is what brings about the entire conflict of the film.
On another note, people say that robots are making us stupid as we come to rely increasingly on technology. I feel that, in a way, this is also a violation of the First Law, since growing dependence will indirectly cause harm to humans in the future, and it is an issue we should address.


