“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
–Dutch computer scientist Edsger Wybe Dijkstra

ALTHOUGH artificial intelligence (AI) is a relatively new discipline—its development is traced to 1951—it has faced numerous issues that computer professionals have tried to address through dissertations and studies.

One of the most common concerns among philosophers of computing is the kind of “rights” AI beings will acquire if the time comes when their “intelligence” becomes comparable to that of humans.

The popular film I, Robot, which starred Will Smith as a technophobic homicide detective, reiterates the three laws that govern the creation of AI beings: that a robot may not harm a human being, or, through inaction, allow a human being to come to harm; that it must obey the orders given to it by humans except where such orders would conflict with the First Law; and that it must protect its own existence, as long as such protection does not conflict with the First or Second Law. The laws were derived from the Three Laws of Robotics by Isaac Asimov, a Russian-born American author and biochemist, who concluded that the laws would not, on their own, control the behavior of AI agents.

The film, which was based on a collection of nine science-fiction short stories by Asimov, suggests that the laws governing robots were invented to protect humans.

As a student majoring in a field directly related to AI, I think Asimov’s conclusion came too soon. What is popularly perceived as AI is strong AI, a branch of philosophical AI in which it is assumed that computing machines approach or even surpass human intelligence in reasoning and problem-solving. Strong AI remains, for the most part, a scientific supposition, although components of it and studies on the creation of AIs are ongoing.


Let’s take a look at what is supposedly the most advanced AI robot able to interact with humans. ASIMO, a humanoid robot created by Honda whose latest version was unveiled in 2005, classifies its interactions into five categories under its recognition technology: moving objects, postures and gestures, environment, sounds, and faces. It can use networks such as the Internet and can provide information for various commercial applications.

ASIMO is a responsive robot of today that cannot violate any of the three laws. Essentially, a robot—or, generally, any AI machine—can do only what it is programmed to do.
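The point that a responsive machine merely executes its program can be illustrated with a minimal sketch: a hypothetical action filter that rejects anything its rules forbid, loosely modeled on the Three Laws. All the names and action flags here are invented for illustration, and the precedence among the laws is simplified.

```python
# A minimal, hypothetical sketch of a "responsive" agent: it can only
# select from pre-programmed candidate actions, each checked against a
# simplified encoding of Asimov's Three Laws. All names are illustrative.

def permitted(action):
    """Return True only if the action passes the (simplified) Three Laws."""
    if action.get("harms_human", False):     # First Law: do not harm a human
        return False
    if action.get("disobeys_order", False):  # Second Law: obey human orders
        return False
    if action.get("endangers_self", False):  # Third Law: protect own existence
        return False
    return True

def choose_action(candidate_actions):
    """Pick the first lawful action; a responsive machine has no other choice."""
    for action in candidate_actions:
        if permitted(action):
            return action
    return None  # no lawful action available: the machine simply does nothing

actions = [
    {"name": "push human aside", "harms_human": True},
    {"name": "ignore command", "disobeys_order": True},
    {"name": "fetch object"},
]
print(choose_action(actions)["name"])  # -> fetch object
```

The machine never “decides” anything outside this list; whatever looks like judgment is just the filter its programmers wrote.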

Even if the time comes when AI beings are able to “think” and “learn,” the three enduring laws would then be a limitation on a hypothetical free will, which would be unethical, unless the laws are restricted to responsive machines like ASIMO.

Computer scientists and philosophers should regard Asimov as a science-fiction writer whose books explain scientific concepts in a historical way, basing his principles on a time when even science was facing its own questions and criticisms. The debate on AI ethics might be looking at the future too brightly, only to end up in darkness, much like the movie A.I. Artificial Intelligence, which depicted the Twin Towers still standing proudly 2,000 years into the future. In fact, less than three months after the movie’s release, the towers were destroyed in the Sept. 11 terrorist attacks.

