Monday, December 03, 2012

Which "Laws" Should Our Modern Robots Use?

OK, science fiction/robot story fans, the next three lines should be instantly identifiable.

1)  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2)  A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.

3)  A robot must protect its own existence as long as such protection does not conflict with the first or second laws.


If you said anything other than Dr. Asimov's three robot behavioral constraints, it may be time to revisit the classics. Asimov used these "laws" as foils for his story plots concerning robots' behavior, often with a conflict between a law and what the robot needs to accomplish.

The curious thing about these laws is that many people know of them and consider them real restrictions on anything vaguely robotic.  So the problem at hand is which laws can realistically be incorporated and which are beyond our tech. 

Case in point: the robot-controlled car. Here is the scenario that The New Yorker poses:
  • Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?
Chances are you have only a few thousandths of a second to make the complicated maneuver.

Now add in all the robotic devices that the military uses. It sounds almost impossible to implement, and in the next couple of decades the problem will most likely triple in complexity.

Here is the URL to the New Yorker article.

6 comments:

kallamis said...

I've actually had this argument with people. They insist that any robots we build will have such constraints in place.
So then, for one, what use would the military have for them?
And two, they can't even make a game that doesn't get hacked within a few days of release at most. Just how long do they think the laws would be effective in the hands of computer geeks?
I've never used these in my gaming here. I always found them to be of little practical value, even for a close-to-realistic scenario. Someone is always going to find a way, and the military especially.
Asimov's idea was great, but actually implementing it would just not be feasible.
What good is an AI robot, especially one that can't truly think for itself? The programming is too restrictive.
If the robot is programmed to never harm a human, and yet is meant to interact with humans, then it may consider winning a game, for instance, to be doing harm. If it goes by blood pressure, pulse rate, an anger sensor, etc., then around here it'll be summoning an ambulance every game day for starters. Not to mention it would never try to beat you.
What bloody good is a duelist that won't wipe out their opponent? I wish for the day of AI robots, but I doubt I'll ever see it happen in reality.

kallamis said...

I meant card duelist, not pistols-at-dawn duelist.

Unknown said...

There's the Zeroth Law of Robotics, too: a robot may not harm _humanity_ or allow humanity to be harmed. Kallamis, your point about the definition of harm is a great one. Is it just physical harm, or is emotional harm considered? Fun stuff.

Beam Me Up said...

That is actually the core of the problem. Everyone assumes that because they are robots they are following some set of esoteric laws, when there are no such guidelines. Asimov's would be worth exploring; however, he fashioned them as guidelines for a much higher-functioning "brain," and remember, they were not really designed to keep robots from hurting humans or themselves. They were dramatic signposts, a foil if you will, for his main characters to throw themselves against.

At present the only "laws" we can have are the ones that can be programmed in with no grey area (do this, but do not do this, if this occurs).

The example of the Google car is a case in point...We will have to make the decision ahead of time, and the car MUST follow it implicitly, along the lines of the sketch below.
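
To make that concrete, here is roughly what such a hard-coded, no-grey-area rule looks like. Every name and threshold in it is invented for illustration; this is a sketch of the idea, not anyone's actual autopilot code.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # e.g. "school_bus", "deer", "debris" (hypothetical labels)
    occupants: int   # estimated number of people aboard

def choose_maneuver(speed_mph: float, obstacle: Obstacle) -> str:
    # No weighing of outcomes happens here: the "ethics" were settled
    # in advance by whoever wrote these rules, and the car just executes.
    if obstacle.occupants > 1:
        return "swerve"      # risk the owner to spare the many
    if speed_mph > 30:
        return "brake_hard"  # plain collision avoidance
    return "brake"

if __name__ == "__main__":
    bus = Obstacle(kind="school_bus", occupants=40)
    print(choose_maneuver(50.0, bus))  # -> swerve
```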

It will be decades before processing power can support something like "what if?" in a machine's logic streams, with only the vaguest guidelines on behavior to work from.

Dave Tackett said...

As much as I'm a fan of Asimov, I always thought the three laws were highly unlikely (a bit silly). And as we move further into the era of computers and robotics, they seem even less believable.

A more likely law of robotics would be: A robot will follow its programming to the best of its abilities, nothing more and nothing less. (With the corollary that a robot's programming may not be what its programmer intended due to coding errors and later corruptions of the programming.)

An optional descriptive law might also be: A robot does not think. It merely follows an algorithm. These algorithms are sometimes complex enough to simulate thought, but are inherently different. Thinking is recursive, while following an algorithm is linear.

I agree with kallamis that Asimov's laws aren't feasible. As for the Google car, the ideal solution is for its motion sensing or radar to be good enough that it can react in time to prevent that decision from ever having to be made.

Beam Me Up said...

Bingo, Dave.
First, you are right about Asimov...hate mail's a-comin'...but the laws were devices and limitations he could throw his main characters against. We will probably never have a device such as his robots, simply because there really is no need for such a generic device.

But then, knowing Asimov's peculiarities goes a long way towards understanding his robots and his laws. Asimov was uncomfortable writing stories with humans as the main character early in his career (most likely all through his professional life). He was much more comfortable with a limited range of emotions and reactions. You can see when he moved out from under this type of writing: when he started the Empire series, which deliberately had no robots at all (and yes, for those who want to jump on this one...I DO remember R. Daneel).

But I digress - I agree that future "robots" will have complex algorithms to formulate their behaviors. But in truth they will be little more than "do this, unless this, then do this."

Now that might change if it comes to the point where the programming becomes so complicated that it is impossible for a human to understand; then the overall structure might become more of a holographic one. But the core should always contain the basic command sets, and that might be as close as we get to "laws".
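
For what it's worth, here is one hypothetical shape such a core command set could take: an ordered list of vetoes checked before any action, echoing Asimov's priority scheme. Every predicate in it is made up for illustration; a reliable "does this harm a human?" test is exactly the thing we don't know how to build.

```python
def harms_human(action: str) -> bool:
    # Stand-in predicate; the hard, unsolved part in any real system.
    return action in {"strike", "abandon_injured"}

def disobeys_order(action: str, order: str) -> bool:
    return action != order

def harms_self(action: str) -> bool:
    return action == "self_destruct"

# Checked in priority order; the first rule that fires vetoes the action.
CORE_RULES = [
    ("first law", lambda action, order: harms_human(action)),
    ("second law", lambda action, order: disobeys_order(action, order)),
    ("third law", lambda action, order: harms_self(action)),
]

def permitted(action: str, order: str) -> bool:
    for name, vetoed_by in CORE_RULES:
        if vetoed_by(action, order):
            print(f"{action!r} vetoed by the {name}")
            return False
    return True

if __name__ == "__main__":
    print(permitted("strike", order="strike"))          # first law fires -> False
    print(permitted("fetch_tool", order="fetch_tool"))  # no veto -> True
```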