Intentionality

You have only one, relatively short reading for next time, but it deals with a difficult and somewhat slippery philosophical topic: intentionality.  (Intentionality, in a philosophical sense, must be carefully distinguished from ordinary “intention,” as when we say that we intend to do something or other.  They are not unrelated, but they are not the same either.)  There is a long article on intentionality in the online Stanford Encyclopedia of Philosophy, but I expect it goes into more detail than you need.  (But don’t let me discourage you from reading it if you are interested.)  Unfortunately, the entries in the online philosophy dictionaries (that I have seen) are quite brief, and probably less informative than the remarks I’ll make here.

The notion of intentionality is critical to the distinction between “real minds” and “simulated minds” (or between “real thinking” and “simulated thinking”). This is because something has intentionality when it is about something else, that is, when it has content or meaning.  Mental states such as beliefs, desires, fears, hopes, doubts, memories, anticipations, and so forth are all about something else.  (We believe that …  We hope that …  We remember that …)  Therefore these are all intentional mental states.  Consciousness, too, has intentionality, because consciousness is always consciousness of something (so, it is asserted, you can’t be simply conscious; you are always conscious that … or conscious of …, even if you don’t put it into words).  Consciousness is inseparable from the content of consciousness.

Of course, other things besides mental states have intentionality or “aboutness.”  Your biology textbook is about biology, your student records are about your academic performance here, your emails are about AI or your dinner plans or whatever.  But there is a difference.  The statement:

  (S) “George Gordon, Lord Byron was the father of Augusta Ada, Countess of Lovelace.”

is certainly about something; it has meaning; it has intentionality.  But it is said to have only “derived intentionality,” because its meaning derives from the original intentionality of my knowledge or belief that Lord Byron was the father of Ada; when you read and understand statement (S), the proposition it expresses acquires original intentionality for you (because you believe it, doubt it, consider it, or whatever).  In my mind or your mind the proposition expressed by (S) has original intentionality (as a belief, doubt, hope, etc.), but on a piece of paper or a computer screen statement (S) has at most derived intentionality.  If I store sentence (S) in a computer database, I can hardly say that the computer believes that Lord Byron is Ada’s father, or that (S) means that to the computer; nor can I legitimately say that UT’s student information system knows your academic history.
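To make this concrete, here is a small illustrative program (a hypothetical sketch in Python; the names facts, store, and contains are mine, not anything from the reading).  It stores sentence (S) as a string of characters and answers purely syntactic queries about it; nothing in the program’s state amounts to believing, doubting, or understanding (S), so whatever intentionality is involved is derived from us, the people who write and read the string.

  # A toy "database": keys mapped to stored character strings.
  facts = {}

  def store(key, sentence):
      """Record a string under a key; no meaning is attached to it."""
      facts[key] = sentence

  def contains(key, word):
      """A purely syntactic test: is this substring present?"""
      return word in facts.get(key, "")

  store("S", "George Gordon, Lord Byron was the father of "
             "Augusta Ada, Countess of Lovelace.")

  print(contains("S", "Byron"))     # True -- string matching, not knowledge
  print(contains("S", "Lovelace"))  # True
  # The program answers these queries without believing, doubting, or
  # understanding anything about Byron or Ada; any "aboutness" here is
  # derived from the writers and readers of the stored string.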

Therefore original intentionality is considered a distinctive characteristic of genuine mental states, and a fundamental question in the philosophy of AI is: Can a computer’s internal states have original intentionality?  Is it possible for a computer to genuinely believe (or doubt, or consider, …) the information in its knowledge base, or does it merely behave as if it does?  This is the issue that Daniel Dennett addresses in his article.

