Archives: Consciousness

I’ve started having an email conversation with a friend about consciousness, and whether we gain anything from having it. I think it might be fun to post some of it here.

This has come about from both of us having read Peter Watts’ novel Blindsight, which is available for you to legally download free here, but which I strongly recommend you actually purchase with real money.

A good, and entertaining, introduction to this can also be had at the mere cost of 20 minutes or so of your time viewing this excellent mock pharmaceutical research presentation:

The Vampire Domestication Slide Show
View it, and then come back and read…

Back already? Then here we go:

The book raises the question, which my friend has explicitly asked:

“Did you come away with any thoughts about what consciousness *is* for, anyway?”

What’s it for? I think Watts suspects it isn’t for anything, but is something like a non-adaptive mutation that got fixed in the population somehow. By non-adaptive I don’t mean maladaptive… you know?

I don’t know if I’d go that far, though. Seems like it must bestow some evolutionary advantage in order to permeate the species, as it does.

One of the things I was trying to imagine after having read the book was: what would a fully non-conscious human really be like, one whose behavior was dictated primarily by biological efficiency? I don’t think such an animal could actually pass for conscious, or prevail in competition with conscious beings.

On a crude level, if you’re talking about superior biological efficiency, simply voiding your bladder when the impulse begins is far more efficient than holding it in to avoid social awkwardness. I rather think the Watts Reconstituted Vampires would be more likely to appear infantile in this and other respects, despite their superior hunting skills.

Slightly less crude: I’m not sure a fully biologically efficient non-conscious being could adapt to the intricacies of contemporary civilization, no matter how omni-autistic it was. In Watts’ premise, these animals evolved as predators of humans, all their non-conscious pattern recognition and predictive brilliance centered on hunting human prey. I doubt such a thing could be made to function among a complex population as depicted in the novel… it has no stake in, nor ability to appreciate, any advantages of civilization. It has no fellow feeling for others of its kind, as it has no self-awareness. As far as it is concerned, eating and processing nutrition to fulfill biological impulses would be the utter height of purposefulness.

I doubt that any degree of omni-autistic brilliance could overcome such a narrow scale of ambition when confronted with mass organization that has the advantage of input by reflective consciousness.

It seems to me that, although consciousness does seem to be something of a deluded observer of the organism rather than the pilot, these observations do eventually inform behavior after the fact. I don’t know if this is what really goes on in there, but it seems like the observer experiences action, judges it, and imagines consequences or alternatives, which it then may come to strongly desire. This strong desire has some influence on the non-conscious actor in there, maybe by seeping into the connections the actor uses to receive stimulus from the senses, so that this consciously arrived-at desire may have a tendency to tilt action in some general way.

Accumulated over a lifetime, this tilt manifests as things like which foreign language you took up in High School, what movie you decide to see next week, or even simple things like successful potty training.

In some way I think the conscious observer realizes it isn’t fully in control of its meatwagon. Tool use seems to me to be the first step down a long road toward replacing these grudgingly cooperative but independent non-conscious steeds with something that pays closer attention to the conscious commands of the rider.

What do you think?

Yesterday’s Rubik’s-Cube-solving robot is a good starting point for the question of whether it is possible to model intelligence to such a fine degree that the model could be considered intelligent itself.

Mental Model
This picture of a mental model is from Steve Jurvetson’s Flickr photostream; he owns the picture, which is made available under a Creative Commons license, some rights reserved.

There is a thought experiment called the Chinese Room. It is meant to prove that even if you could model a consciousness to a degree indistinguishable from actual conscious behavior, it still wouldn’t actually be conscious, it would be just an automatic process.

The argument goes like this: imagine you know nothing of the Chinese language (any of them, take your pick), written or spoken. You are placed into a box with two openings. Through one opening, pieces of paper with indecipherable squiggles on them are inserted. You take these squiggles and compare them against a vast library of rules telling you what squiggles to put on another piece of paper, based on what you find on the first. You then slide this second squiggled-up paper out the other opening. You’ve probably guessed it by now: to an outside Chinese-speaking interlocutor, the box appears to be responding correctly to questions posed to it in Chinese. It appears the box understands Chinese. You, however, inside that box, don’t understand it at all; you’re just following instructions. You have no consciousness of, no awareness of, what the conversation is about, or even really that you’re facilitating a conversation at all. It could be anything. It’s a meaningless activity to you.
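The person in the room can be sketched as a pure lookup procedure. This is only a toy illustration of the thought experiment; the two-entry rulebook below is a hypothetical stand-in for the “vast library of rules,” and all names are my own invention:

```python
# Toy sketch of the Chinese Room: the "person" inside is pure
# rule-following -- match incoming squiggles, emit prescribed squiggles,
# with no interpretation of either string.

RULEBOOK = {
    "你好": "你好",        # a greeting is answered with a greeting
    "你会说中文吗": "会",   # "do you speak Chinese?" is answered "yes"
}

def person_in_the_room(squiggles: str) -> str:
    """Look up the input squiggles and return the prescribed output.
    The rule-follower never understands what either string means."""
    # Unrecognized input gets a stock reply: "please say that again"
    return RULEBOOK.get(squiggles, "请再说一遍")

print(person_in_the_room("你好"))
```

To the outside interlocutor the room appears to converse in Chinese; inside, it is nothing but pattern matching against a table.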

Poor Rubot doesn’t actually solve the cube puzzle. It knows not what it does. There is no consciousness there.

Leaving aside the response that for a system to behave indistinguishably from a conscious person, so well as to fool other conscious people, it would need to do far more than simply return rules-based responses, there is another objection I’ve been thinking about.

It’s true that you inside the box do not comprehend the conversation, but, in a way, the box really does. The box as a system understands. In the thought experiment, you are deliberately being placed in the role of something like a neuron… not in the role of the interpreter of neural activity. The interpreter, the consciousness, in this experiment is the set of rules. All you are doing is delivering stimuli to the rule-set, and returning output from the rule-set to the world.

So, is consciousness a rule-set? I don’t know. Maybe something like that. Is that what we are, that thing we are referring to when we say “I want this” or “I’m going there”… the I inside our heads? Is that, in the end, a rule-set, partially built in conception and then elaborated through experience?

Evidence seems to suggest something like this is true.

All thought is action. All action is in some way reaction. Maybe our personhood is a really elaborate set of rules for interpreting stimuli that builds up in our meat-brains throughout our lives. If that were so, maybe we can attribute real consciousness to software that models conscious behavior so closely as to be indistinguishable from our consciousness. Just because it’s not happening inside a human head doesn’t mean it might not really be as aware as we are.
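The idea of a rule-set partially built in conception and then elaborated through experience can be sketched in a few lines. This is a toy illustration, not a claim about real minds; every name and rule here is hypothetical:

```python
# Toy sketch: a "self" as a rule-set seeded with innate reactions
# ("built in conception") and then overlaid with learned ones
# ("elaborated through experience").

class RuleSetSelf:
    def __init__(self):
        # innate rules present from the start
        self.rules = {"pain": "withdraw", "food": "approach"}

    def learn(self, stimulus: str, response: str) -> None:
        # experience adds new rules, or overrides innate ones
        self.rules[stimulus] = response

    def react(self, stimulus: str) -> str:
        # all "thought" here is reaction: look up the stimulus
        return self.rules.get(stimulus, "ignore")

me = RuleSetSelf()
me.learn("hot stove", "avoid")
print(me.react("hot stove"))  # a rule built by experience
print(me.react("pain"))       # a rule built in conception
```

Accumulated over a lifetime, the learned layer of such a table would be what I called the “tilt” above: experience gradually reshaping which responses the stimuli call up.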

You might have heard about this one already, but it’s one of the more astonishing things to have happened in the past couple of years. Duke University’s Dr. Miguel Nicolelis has successfully wired monkeys’ brains to a robotic arm, which they have learned to control using thought alone:

Here is a New Scientist article on the subject.

Although not exactly a robot in the autonomous sense, this illustrates a kind of blending of the robotic into the biological that has been going on for some time now. There are, in fact, many cyborgs living among us today. Many, many people depend on their mechanical enhancements for continued life, mobility, the ability to communicate, or all of these things at once. Anyone who has:

a pacemaker
an artificial heart
a portable dialysis machine
portable oxygen
an automated wheelchair
artificial limbs
a hearing aid
contact lenses or glasses
a speech-assistance machine

is already in some degree a cyborg.

You could make an argument for almost any sort of tool that enhances human performance being a step down the road to cybernetics, but for the word to have any real meaning I think you have to draw the line somewhere. For me, any time we take a machine into our bodies, or invest some degree of our consciousness into a machine, we are talking about the merger that produces cyborgs.

It’s interesting to think that, inasmuch as our conscious minds seem to ride along on our biological bodies without as much real control over them as we might think, the ongoing push toward cybernetics isn’t so much an attempt to prolong the life of the body as it is consciousness attempting to devise a more acquiescent, durable host for itself. Consciousness, the selfish meme, attempting to transcend its withering native flesh through the agency of technological invention, an activity unique to consciousness itself.