February 11, 1999

Professor Wendy Grobin

Ethics and the Internet

The Humanization of Technology

Today more than ever, as we approach the year 2001, we find Stanley Kubrick and Arthur C. Clarke's film 2001: A Space Odyssey to have implications for how we think about technology. If we pause our current discussions about technology and look back to the issues tackled by 2001, we find imagery and questions still so poignant as to inspire a parody (2001: A Space Tragedy) more than three decades after the film's release. The cinematography itself was revolutionary, epic, and grand, and Kubrick spared none of these qualities while providing us with fodder for timeless questions about technology, humanity, and the interaction between the two. Even as we marvel at the content of the film itself, we must also recognize that a large part of 2001's popularity lies in its ability to inspire discussion of difficult topics.

Let us now try to understand what Kubrick and Clarke were attempting to create with this movie. It is a cinematic masterpiece, full of signs with wide-ranging implications, but most of those lie outside the scope of this paper. The primary topic at hand is what these artists were attempting to communicate about how humans and technology interact, or at least what kinds of questions they wanted to raise. Clarke wanted to remain true to physics, which led to the great silence during many of the outer-space scenes, as well as to the seemingly drawn-out 'action scenes.' This was meant to give the viewer a sense of what real space travel would be like, and of the conditions forced upon the travelers. The effect does not stop there, though; a far greater dramatic effect is gained from these techniques.

As the journey to Jupiter is underway, we find the depictions of Dave Bowman and Frank Poole, the only two conscious crew members, to be utterly lacking in anything personal. We do catch fragments of personal life in earlier sections of the film, as when Heywood Floyd speaks to his daughter on the video-phone, though such moments contribute little to the characters' humanity. Indeed, all the human characters in the movie are portrayed with very little emotion. For example, when the crew is greeted by a telecast from the BBC, the only one to respond clearly is HAL; the human crew merely mumbles a hello. When Dave and HAL are playing chess, HAL informs Dave of an impending loss, which Dave simply shrugs off. Even when HAL is complimenting Dave on his artwork, we find Dave's choice of subjects to be the most emotionless of all: the casket-like pod in which one of his hibernating crewmates is encapsulated.

In contrast to this emotionless crew, we find the ship's onboard computer, HAL, playing a unique yet subtle emotional role. For the reasons already discussed, HAL embodies more emotion than the humans he interacts with. And who can forget the scene in which Dave terminates HAL's functioning? "Will you stop, Dave... Stop, Dave. I’m afraid... I’m afraid... I’m afraid, Dave... Dave... my mind is going... I can feel it..." (Picard). This is a drastic difference from the deaths of the human crew members: one dies silently in space without much show at all, and the members in hibernation die before ever emerging; we never even see their faces.

It is important to stop and consider why Kubrick and Clarke would present us with these images and emotional situations. As artists with technical and scientific expertise, perhaps they felt the need to raise issues that had not yet been dealt with extensively: the interaction between humans and computers, what space travel is actually like, and even the ethical considerations raised by the expanding capabilities of computers. These topics were raised with impressive forethought and imagery, for they are still being debated and written about today. American society has accepted this movie as a classic, and its imagery and issues have become icons in our cultural landscape, so much so as to appear in commercials during the Super Bowl.

Now it is left to us to consider these questions and the implications that our answers have for how we operate with the technology around us. What is the significance of HAL being shut down, versus the loss of an emotionless human life? How do we compare Dave, who faces death without emotion when he is locked out of the ship without his helmet, with HAL, who goes out pleading and begging, showing great emotion? What does it take to be human: must one exhibit humanistic traits, or be physically and organically made up of certain proteins and nutrients? These are all difficult questions, worthy of papers unto themselves. What is pertinent to address at this point is how we, as human beings, deal with questions like these.

One distinct habit of humans, whether in artistic works like 2001 or in prose articles, is to ascribe human traits and actions to a computer or other technology. We see the humanity in HAL because that is what his programmers put into him: "Well, he acts like he has genuine emotions. Er - of course, he's programmed that way, to make it easier for us to talk to him, but as to whether or not he has real feelings is something I don't think anyone can truthfully answer" (Kubrick). As part of the story line, the characters agree that HAL only acts with emotion because that is how he was programmed. This raises the question of where the line is drawn between what HAL was programmed to do and what he may have learned since then. The film contributes to the illusion that HAL does learn: he went through several months of training for this mission. Unless 'training' is a euphemism for additional coding or debugging, there would have to be a reason to train HAL at all.

Similarly, in Daniel C. Dennett's article "When HAL Kills, Who's to Blame?", we find Dennett describing Deep Blue (IBM's chess-playing computer) in very humanistic terms: "Deep Blue is an intentional system, with beliefs and desires about its activities and predicaments on the chessboard; ... As it hustles through a multidimensional decision tree, it has to keep one eye on the clock" (Dennett 353). Dennett's article is very competent and raises important issues, but he still falls into this habit of ascribing human traits to a piece of technology. What does it mean for a computer program to hustle, or to have desires? Were Dennett a student of computer science rather than of cognitive science, perhaps he would have a different perspective. As it is, he approaches the discussion with preconceived notions of what the computer Deep Blue might be capable of.

As a contributor to a similar project here at Duke University, I have to protest such humanizations of technology. I was able to work in our Computer Science Department on a program called Proverb, which is being developed to solve crossword puzzles. Though the end product is impressive (and, to be honest, clearly better at crosswords than I am), I would scarcely ascribe human traits to it. Programs that achieve high complexity and significant accomplishments are easy to idolize, and to describe with verbs such as 'hustle' or 'desire', but these are misnomers. A computer program would never 'not hustle': it will never deviate from its normal runtime speed unless specifically programmed to do so. Likewise, our program has no desire to fill in crossword puzzles; that would imply it performs the task because it wants to, rather than because we tell it to.

This leads to an interesting point that I believe has been misrepresented in recent literature on Artificial Intelligence (AI). The common conception of AI is that computer scientists are attempting to create an artificial intelligence, i.e., something akin to HAL. Though some hope for that very thing some day, it is not what the majority of AI research is about. Artificial Intelligence is actually about making software act intelligently, not about making it be intelligent. Though it may sound as though I am playing with semantics, the difference is more important than that. We worked on Proverb by trying to have it make intelligent choices in solving a crossword puzzle. We did not set out to create a program that was intelligent and then introduce it to crossword puzzles.

Now, after analyzing the situation, we should still pay attention to the underlying issue: if we act irrationally when we humanize technology, we should ask why. First, it is simply human nature. We find that we can understand how a computer works better by ascribing human traits to it. If I described the program's methods of data mining and fractal resource coordination, very few would have an intuitive feel for what it is doing, let alone understand such a description. I do not think this is sufficient to explain the phenomenon, though.

One other phenomenon I have noticed in society at large seems relevant to our analysis. Rather than the humanization of technology, perhaps the problem is the dehumanization of humanity. Humans today live in an automated world. Even for those who do not work with computers for a living, technology is pervasive. Automated call-answering systems are ubiquitous, from credit card companies to auto manufacturers. Any advertiser worth its salt attaches a URL to its new ads. Cell phones, pagers, PDAs (personal digital assistants), global positioning systems, and laptops surround us almost anywhere we go. Technology is in the news; sometimes it is the news. We find ourselves at the mercy of these technologies as well. We must quickly answer the phone when it rings, learn how to type on a keyboard, and bend our thinking to a computer's paradigm of working.

Human beings interact with technology in almost every aspect of life these days, and that can be overwhelming. In under a century, we have drastically shifted humankind's means of daily activity. How do we cope with it? Perhaps one way is by giving our technology humanistic traits. We humanize the technology that surrounds us, inventing personalities for the programs that help us in our daily routines, or ascribing human traits to a computer that plays chess.

When we take this back to the movie 2001, we find that perhaps Kubrick and Clarke were ahead of their time on this idea as well. The computer HAL provides the emotion for the crew of his ship, for they are unable to provide it themselves. In this manner, it seems they might have foreseen everything we have looked at today and extrapolated it even further.

Much of the recent discussion about 2001, as we approach the year in which it is supposed to take place, has centered on the technology in the film. Many commentators consider how close we are to creating a computer as capable as HAL. Perhaps they are looking in the wrong direction: we should instead be asking how close we are to Dave Bowman, and whether we are directing humanity down a path of emotionless interaction with technology. This important message is almost overshadowed by the questions raised by the film's technology; let us hope we do not miss it altogether.


Dennett, Daniel C. "When HAL Kills, Who’s to Blame? Computer Ethics." HAL’s Legacy:

2001’s Computer as Dream and Reality. Cambridge, MA: The MIT Press, 1997. pp. 352-55.

Kubrick, Stanley, and Clarke, Arthur C. "2001: A Space Odyssey." 1968.

Picard, Rosalind W. "Does HAL Cry Digital Tears? Emotion and Computers." Section 01.