Monday, July 14, 2008

Science Fiction

Laws of Robotics

In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which almost all (good) robots appearing in his fiction must obey. The Laws are:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov once added a "Zeroth Law"—so named to continue the pattern of lower-numbered laws superseding in importance the higher-numbered laws—stating that a robot must not merely act in the interests of individual humans, but of all humanity.

In the novels Foundation and Earth and Prelude to Foundation, the Zeroth Law reads: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

A condition stating that the Zeroth Law must not be broken was added to the original Laws.
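The strict precedence among the Laws is simple enough to mock up in code. The sketch below is only a toy illustration under my own assumptions (the action flags and helper names come from me, not from Asimov or any real robotics system): each candidate action is scored by the highest-priority law it would break, and the robot prefers the action whose worst violation is the lowest-priority law, so lower-numbered laws always win.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags describing an action's predicted consequences.
    name: str
    harms_humanity: bool = False   # would break the Zeroth Law
    harms_human: bool = False      # would break the First Law
    disobeys_order: bool = False   # would break the Second Law
    endangers_self: bool = False   # would break the Third Law

def first_violation(a: Action) -> int:
    """Index of the highest-priority law the action would break (4 = none)."""
    checks = [a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self]
    for i, broken in enumerate(checks):
        if broken:
            return i
    return len(checks)

def choose(candidates):
    """Prefer the candidate whose worst violation is the least important law
    (or no law at all): lower-numbered laws supersede higher-numbered ones."""
    return max(candidates, key=first_violation)

# A robot ordered to injure someone: refusing (a Second Law violation)
# wins over obeying (a First Law violation).
obey = Action("obey the order", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)  # -> refuse the order
```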


Roger Clarke analyzed the complications in implementing these laws, in the event that systems were someday capable of employing them. He argued,

"Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically or perhaps because it was artistically appropriate, the sum of Asimov’s stories disproves the contention that he began with: It is not possible to reliably constrain the behavior of robots by devising and applying a set of rules."

On the other hand, Asimov's later novels (The Robots of Dawn, Robots and Empire, Foundation and Earth) imply that the robots inflicted their worst long-term harm by obeying the Laws perfectly well, thereby depriving humanity of inventive or risk-taking behavior.


Modern Laws

Modern roboticists agree that, as of 2006, Asimov's Laws are perfect for plotting stories, but useless in real life. Some have argued that, since the military is a major source of funding for robotic research, it is unlikely such laws would be built into the design. SF author Robert Sawyer generalizes this argument to cover other industries, stating:

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few examples: the tobacco industry, the automotive industry, the nuclear industry.)

The military would want strong safeguards built into any robot, so laws similar to Asimov's would be embedded where possible. David Langford (a British science fiction author, editor, and critic) has suggested, tongue-in-cheek, that these laws might be the following:

1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.


Singularity

Statistician I. J. Good wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.

This theoretical point in the future of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence, is better known as the Technological Singularity.

Good (1965) speculated on the consequences of machines smarter than humans:

“Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make.”

Vernor Vinge (professor of mathematics, computer scientist, and science fiction author) greatly popularized Good’s notion of an intelligence explosion in the 1980s, calling the creation of the first ultra-intelligent machine "the Singularity". The name is an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change in society he argues would occur following an intelligence explosion.

Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. “When greater-than-human intelligence drives progress,” Vinge writes, “that progress will be much more rapid.” This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time.
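Read as a simple recurrence, this feedback loop is easy to sketch numerically. The snippet below is only a toy illustration with arbitrary constants of my own choosing (nothing in it comes from Good or Vinge): if each generation's improvement is proportional to its current intelligence, the early gains are tiny and the later ones enormous, which is the qualitative shape of the "explosion" they describe.

```python
def intelligence_explosion(start=1.0, rate=0.1, generations=15):
    """Toy recurrence: each generation improves itself in proportion to its
    current intelligence. start and rate are arbitrary assumed constants."""
    level = start
    history = [level]
    for _ in range(generations):
        level = level * (1 + rate * level)  # improvement scales with ability
        history.append(level)
    return history

# The first steps barely move; the last few jump by orders of magnitude.
for gen, level in enumerate(intelligence_explosion()):
    print(f"generation {gen:2d}: intelligence = {level:,.2f}")
```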

Isaac Asimov’s Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov’s stories, any perceived problems with the laws tend to arise as a result of a misunderstanding on the part of some human operator; the robots themselves shut down in the case of a real conflict. On the other hand, in works such as the 2004 film I, Robot, which was based very loosely on Asimov's stories, a possibility is explored in which an AI takes complete control over humanity for the purpose of protecting humanity from itself. In 2004, the Singularity Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov’s laws in particular (Singularity Institute for Artificial Intelligence 2004).



Laws of Prediction

Arthur C. Clarke (a British science fiction author, inventor, and futurist, most famous for the novel 2001: A Space Odyssey) formulated the following three "laws" of prediction:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
