
The Computational Theory of Mind

With the dawn of electrical engineering and computer science, scientists and theorists have begun to understand the human brain in terms of computer hardware and software. Computers seem to be mankind's first chance at manufacturing an instrument as subtle and expressive as the human mind, and so to many it seems obvious that in electrical engineering we have begun to imitate the very basis of that mind.

Much of this thinking has coalesced into the Computational Theory of Mind, the idea that the mind processes information in a way not dissimilar to the processing unit of an ordinary computer. In a way, it is hard to imagine a non-computational mind: how could anything process information without dealing in some computational terms? The only historically competing idea was associationism, the view that thoughts are organized by their relations to one another and that the brain is simply an organ that identifies and acts on those relations. Now, however, the Computational Theory of Mind seems like the only way of explaining any exchange or analysis of information as we currently understand it.

But regardless of its modern ascendancy, mental computation needs some caveats before we treat the human mind and its machine creations as perfectly analogous. There has been significant philosophical criticism of the CPU metaphor for the brain, and much of it is worth addressing.

My own predisposition is to say that much of the semi-conscious and primitive side of human mental work may be structured on a quasi-associationist framework, while only the behaviors we think of as higher mental processes involve significant computation. "Logical," precise thinking is not a reflex for humans but a craft that must be mastered; it seems strange to say that this hard-won capacity is the very nature of every micro-computation of the brain. Naturally it would be wrong to equate computation with "logic," since the brain may yield pathways of decision-making that do not appear logical to our conscious minds; still, our ideas of what constitutes logic and rationality are the products of computational thinking, which humans do not seem to perform easily.

When humans free-associate, for example, they do not only produce words of similar meaning, but also words suggesting that semantically opposed, logically opposite concepts occupy a similar place in their mental network. Told to associate from 'white,' one might well say 'black' first; told 'Republican,' one might first say 'Democrat.' Of course, it could be said that this is simply an example of computation: told to mention concepts related to 'A,' a computationally thinking person would not be ridiculous to respond with '~A.' Still, free association is reflexive for humans, while programming a machine with similar capabilities would cost considerable effort. One would have to encode and relate the qualities of millions of objects, and only then could a computer begin to do what humans do, even erroneously, all day long: make non-stop associations.
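To make the contrast concrete, an associative lexicon can be sketched as a graph whose links carry raw strengths rather than logical relations, so that an opposite like 'black' can be the strongest associate of 'white.' Every word and weight below is invented purely for illustration:

```python
# A toy associative lexicon: links record strength of association,
# not logical or semantic relation. All data here is illustrative.
associations = {
    "white": {"black": 0.9, "snow": 0.7, "wall": 0.3},
    "Republican": {"Democrat": 0.9, "election": 0.6, "party": 0.5},
}

def free_associate(word):
    """Return associates ordered by raw link strength, ignoring semantics."""
    links = associations.get(word, {})
    return sorted(links, key=links.get, reverse=True)

print(free_associate("white"))  # → ['black', 'snow', 'wall']
```

The point of the sketch is that retrieval here is a single lookup over precomputed links; nothing in it analyzes what the words mean, which is why the opposite comes out first.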

Association seems to overwhelm computation in many situations. One of the easiest ways to trick a test-taker is to inject a subtle 'not' into a question or to alter the correct answer by a jot or tittle. When humans take tests, they are asked to think computationally, which turns out to be difficult when their immediate reflex is to generalize and associate, the capacities they use to handle quotidian life. A fully computational mind would, barring some remarkable shortcutting, need to compute the meaning of each word with no room for skipping. Hence humans go about "studying test-taking" and enroll in classes on how to detect the tricks that fool their association-prone mental machinery.

Additionally, humans seem to read questions and interpret speech in a way that expects the fewest curveballs possible, and computation seems to be only a higher-order phenomenon in this regard. To take a common example from language: most languages (English is an exception) have no problem letting speakers use double negatives without any extra effect on the meaning of the sentence. Humans can gloss over doubly negated sentences with no comprehension problems, likely because every negator term (no, not, nothing, nowhere) simply signals that the statement is the opposite of the same statement without the negation. A listener does not computationally analyse every component of the sentence, but grasps the meaning and treats the many negations as a single, simple opposite.

Owing to the peculiarly pedantic style of English, speakers are forced to abandon seemingly natural forms like "I didn't see nothing" for the "computationally correct" form "I didn't see anything." Children are often chided by parents and teachers who respond with this computationally correct reading, saying "If you didn't see nothing, that means you did see something!", an exemplary phrase that parses the sentence exactly and logically rather than in the intuitive way other languages are wont to use.

Of course, a speaker of any language that allows double negatives may, after a bit of reflection and introspection, come to the point of realizing that, in a fundamental sense, the nigh-universal habit of double negation "just doesn't make any logical sense." Our analytical faculties can easily detect that the norm is to say something that is, in theory, the opposite of what it means. But if we think computationally by nature, how could double negation remain an unquestioned standard for any length of time? Surely children would immediately notice the gap it opens in their supposedly nonstop computation.
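The two readings of "I didn't see nothing" can be sketched as toy parsers over a token list: the strict computational reading flips polarity at every negator (so two negators cancel), while the associative reading treats any negator as marking the whole statement negative. Both functions and the negator list are illustrative assumptions, not a model of any real grammar:

```python
# Two toy readings of negation. NEGATORS and both parsers are
# illustrative only, not a description of English syntax.
NEGATORS = {"not", "didn't", "nothing", "no", "never", "nowhere"}

def computational_polarity(tokens):
    """Strict reading: each negator flips the truth value, so two cancel."""
    negative = False
    for tok in tokens:
        if tok in NEGATORS:
            negative = not negative
    return "negative" if negative else "positive"

def associative_polarity(tokens):
    """Intuitive reading: any negator marks the whole statement negative."""
    return "negative" if any(t in NEGATORS for t in tokens) else "positive"

tokens = ["I", "didn't", "see", "nothing"]
print(computational_polarity(tokens))  # → positive (the negations cancel)
print(associative_polarity(tokens))    # → negative (the intuitive reading)
```

On the strict parser the chided child is right, "didn't ... nothing" comes out positive; on the associative parser the sentence means what every speaker takes it to mean.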

Additionally, my own error analysis suggests that the mental shortcuts I use while writing rest more on association between similar forms and words than on a logical, computational relationship between semantically similar words. I often miswrite words by mixing up their endings; I may write 'flouting' instead of 'flouted.' That makes little sense as a computational failure; rather, it seems my brain stores the differing forms in an associative manner. My spelling errors, especially in short words, suggest that words are linked by form rather than organized by meaning: I may write 'up' instead of 'us,' or 'that' instead of 'the.'
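This form-based slippage can be illustrated with plain edit distance, a measure of spelling similarity that ignores meaning entirely; the word pairs are the ones from the errors above:

```python
# Levenshtein edit distance via dynamic programming: counts the minimum
# number of single-character insertions, deletions, and substitutions.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # delete ca
                           cur[j - 1] + 1,         # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

print(edit_distance("up", "us"))             # → 1
print(edit_distance("flouting", "flouted"))  # → 3
```

Both pairs are neighbors in form while being unrelated (or opposed) in grammatical role, which is the shape a form-indexed store would predict for slips, and the shape a purely meaning-indexed one would not.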

By contrast, our artificial intelligence has been designed from the bottom up with computation at its core, while it is hard to see how computation runs that deep in humans. For us, handling negation, contradiction, and logical proofs is not only often difficult; it always requires a good deal of conscious thinking. Moreover, evolutionarily speaking, there seems to be little reason for us to evolve, or means by which to evolve, a complex computational language of the mind as the basis of all subsequent mental advancement. Our lower faculties do not seem to exhibit computation, and it is difficult to imagine how our proto-neural ancestors could have evolved such computation a couple of billion years ago. What seems more likely is that those ancestors gradually gained the capacity to understand objects by their relations, with some rudiments of computation, while our later, conscious ancestors developed the higher processes of computational thinking we are familiar with today.

Yet my point is not that the mind does not think computationally, but that computation seems to be a more recent, mammalian process that certainly does not define our subconscious, unconscious, or preconscious thought. In those realms, we may operate in a way closer to the old framework of associationism, with our most ancient ancestors making decisions with that old, dusty web of association.

Logical thinking may be evolution's attempt at "trying again" at mental processing, this time with a far more vibrant and eclectic mental mold to work in. Computational, analytical thinking may well be a pristine and sublime tool for advanced cognition and behavior, built atop a rickety, emotional, and decrepit swamp of irrational associationist pseudo-thought.

Much of our thought appears to be a hybrid, a composite of association and computation. We can decode an argument and judge its validity, but we can also see the unsaid and the assumed. By the same token, we can fall victim to an appealing associative argument without any complete computational understanding of it.

What I think is core to understanding computation is that it seems to be mostly a conscious tool, one that almost necessarily developed after the arrival of conscious thought. Had it been otherwise, we would likely be able to make complex computations subconsciously.

What's more, computation, even if it were the foundation of thought, goes no distance toward explaining the variability of the human mind and its consciousness. John Searle's example is probably the most popular: suppose there is a machine that can take any sentence in Chinese and output a pragmatically proper, grammatically correct response. If the Chinese machine were placed in a room with a lone English speaker, the two could converse with a Chinese speaker outside the room without any difficulty: the Chinese speaker would send in a message, the computer would produce the response, and the English speaker would copy the characters down and slip the reply under the door.

What is important to note, however, is that neither the machine nor the English speaker is in any way aware or conscious of the content of the message, nor feels any emotion or understanding toward it. The Chinese machine may be programmed to write clever jokes that the Chinese speaker fully appreciates, but the machine has no awareness whatsoever of its comedic production. In this way, even if computation could explain human behavior and information processing, it cannot explain how minds differ from basic programming in their sensation of awareness.
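In its most reduced form, a Chinese-Room-style responder is nothing more than a lookup table: syntactically appropriate replies with no state, meaning, or awareness anywhere in the mechanism. The table entries below are invented for illustration:

```python
# A minimal Chinese-Room-style responder: pure symbol matching against
# a rulebook. The entries are illustrative, not a real dialogue system.
RULEBOOK = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你会说中文吗？": "会一点。",  # "Do you speak Chinese?" -> "A little."
}

def chinese_room(message):
    """Return a syntactically appropriate reply with zero comprehension."""
    return RULEBOOK.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好"))  # → 你好！ — produced without understanding
```

However large the rulebook grows, nothing in this mechanism is a candidate bearer of understanding, which is exactly the intuition Searle's thought experiment trades on.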