

Notes and Queries


SPECULATIVE SCIENCE

What are Isaac Asimov's three laws of robotics? Are they purely fictitious or is there scientific credence to them?

Paul Peters, Tottenham, UK
  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Benjy Arnold, London UK
  • They are fictitious in the sense that no robot or, in the loose sense of the term, automatic machine exists to which any of them apply. Smart bombs and cruise missiles are kinds of robot which violate the first and third laws. Computers, and machines run by computers, do what they are programmed to do and, of course, they will hurt humans if programmed to do so, or if humans get in their way as they carry out their programs. Superficially these concepts appear reasonable, but they are based upon human concepts: humans can be malevolent; machines do what they are programmed to do. At the time these concepts were first mooted there may have been a certain unease about robots taking over the world. These days we are so used to them that we hardly give it a second thought.

    Terence Hollingworth, Blagnac France
  • The three laws of robotics are suggestions for how robots should, ideally, operate. They are: 1. A robot must never harm a human, or through inaction allow a human to come to harm. 2. A robot must always obey the orders of humans except where to do so would conflict with obeying the first law. 3. A robot must protect its own existence, except where to do so would conflict with the first or second laws. They are laws like the law against murder, not laws like the law of gravity. Therefore scientific credence is irrelevant: we choose to build robots which obey them, or not. It is up to us.

    Simon Blake, Shrewsbury England
  • The three rules are as follows (in my own words - I don't have a book to hand): 1) A robot will not, by its action or inaction, allow harm to come to a human being. 2) A robot will not, by its action or inaction, or unless it would thereby break rule 1, allow harm to come to itself. 3) A robot will, unless this causes it to break either rule 1 or rule 2, do as it is commanded by a human. These laws are designed to be part of the make-up of a robot's inherent nature - they are not hard and fast physical laws, but something which robots would be made to follow as part of their creation.

    Simon, Hinchley Wood UK
  • Asimov's laws of robotics are not scientific laws; they are instructions built into every robot in his stories to prevent them malfunctioning in a way that could be dangerous. The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause harm to itself. Where these laws conflict, precedence is given to the first law, then the second, with the robot's self-preservation taking last place. For example, if a human ordered a robot to attack another human it would refuse to follow the order (first law takes precedence over second), but if a human ordered it to disassemble itself it would obey (second law takes precedence over third). (See the sketch after the answers below.)

    Susie Burlace, London UK
  • The laws assume a very whimsical view of robots as androids who interact with humans as equals. This has little relevance to the science of cybernetics, but does make some subtle points about our sociology.

    Allan Dean, Wimbledon UK
  • Asimov's laws were created by Isaac Asimov as a counter to the Frankenstein legend - the idea that any halfway intelligent creation of mankind would be flawed and jealous of humanity and must inevitably turn on its creator. Before Asimov, science fiction was filled with dangerous killer robots; after him, we have friendly, almost human androids. The three laws are - and it is important to get them in order - 1 - A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2 - A robot must obey the orders given it by human beings except where such orders would conflict with the first law. 3 - A robot must protect its own existence as long as such protection does not conflict with the first or second law. These laws were invented for science fiction, but they are treated with great credibility in the field of robotics (after all, Asimov came up with the word "robotics"). Should the day come when we have reasoning, intelligent robots, you can bet that Asimov's laws will be the first thing built into them.

    Trevor Smith, Peterborough UK
  • Didn't Asimov also invent a 'zeroth law', something like "A robot may not injure humankind, or, through inaction, allow humankind to come to harm"?

    Tim Campbell, Wigan UK
  • Two points: 1) There's actually another one. The zeroth law (it came later, chronologically, but is more fundamental) states that a robot is incapable of causing mankind harm, or by inaction... 2) He made them up, but I dare say that cyberneticists will implement something like them (if we ever get that far), because it's a good idea and because many of them will have read Asimov's novels.

    John Brookes, Manc UK
  • I don't know the answer to this question but I'd like to know why so many people have trouble spelling "Isaac".

    Seth Nettles, Kingston Jamaica
  • It's also worth noting that while Asimov's laws appear reasonable on the surface, some of the fiction they appear in (at least the book "I, Robot") deals with how these apparently immutable laws, designed to prevent robots from harming humans, can have harmful consequences. Worth reading...

    Mark, Wallasey UK
  • Asimov's robot stories are fascinating as science fiction and as an exploration of moral values and sociological mores. He plays around with the three laws to demonstrate their value and also their shortcomings. I recommend "The Complete Robot", a collection of his short stories over the decades, as an introduction to his then ground-breaking ideas.

    Alan Harrison, Altrincham United Kingdom
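
The precedence ordering described in the answers above (first law over second, second over third) can be sketched in a few lines of code. The sketch below is purely illustrative: the Action record, its fields and the helper functions are hypothetical names invented for this example, not anything from Asimov's fiction or from any real robotics system.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool = False     # would violate the first law
        disobeys_order: bool = False  # would violate the second law
        harms_self: bool = False      # would violate the third law

    def violated_law(action: Action) -> int:
        """Number of the most important law the action violates (4 = none);
        lower numbers are more serious."""
        if action.harms_human:
            return 1
        if action.disobeys_order:
            return 2
        if action.harms_self:
            return 3
        return 4

    def choose(options: list[Action]) -> Action:
        """Prefer the option whose worst violation is least serious, i.e. break
        the third law before the second, and the second before the first."""
        return max(options, key=violated_law)

    # Ordered to attack a human: refusing (second-law violation) beats attacking (first-law violation).
    print(choose([Action("attack human", harms_human=True),
                  Action("refuse order", disobeys_order=True)]).name)  # refuse order

    # Ordered to disassemble itself: obeying (third-law violation) beats refusing (second-law violation).
    print(choose([Action("disassemble self", harms_self=True),
                  Action("refuse order", disobeys_order=True)]).name)  # disassemble self

Run as written, the two examples reproduce the behaviour described above: the robot refuses the order to attack a human but obeys the order to disassemble itself.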








