
To celebrate spring in a very NSFW way, here is the picture story of a very bad Easter Bunny.

Click the little pictures to see big ones, if you dare.

This was another fun shoot with Zelda, who is always down for zombie contact lenses and a mouthful of sugar blood.

arm

Tonight’s work. Let me walk through my process.

When working with a model, I’ll generally schedule a four-hour session. Starting with very quick poses, a minute or two each, I draw quick scribbles over and over to get a sense of shapes and weight and movement. I call these things scribblesheets – they don’t look like much to anyone but me, but I find bits in them.

scribblesheet

Then I’ll move on to five- or ten-minute poses and use splotches of watercolor with ink to do slightly more realized sketches. These also are not usually for public consumption, but they help me start to get a sense of features, the unique quirks of presence in bodies. These I call splattersketches.

splattersketch

Finally, I’ll turn to full pastel drawings. These are generally done in a sequence of 20-minute poses: work for 20 minutes, take a break, then go back to the same pose. Sometimes I’ll alternate between two or three different poses and thread them so the model doesn’t cramp up holding the exact same position over and over. It can take multiple days of sittings to get these to come together. Here is the very rough beginning of one of these. This is about a half hour’s attack.

easel
rough

The first picture in the post is another one from tonight, a little further along, probably a bit more than an hour of work.

So that’s the way it works. By late summer I should have another set of 6 or so finished pieces.

Yesterday’s Rubik’s Cube-solving robot is a good starting point for asking whether it is possible to model intelligence so finely that the model could be considered intelligent itself.

Mental Model
This picture of a mental model comes from Steve Jurvetson’s Flickr photostream; he owns the picture and makes it available under a Creative Commons license, some rights reserved.

There is a thought experiment called the Chinese Room, due to the philosopher John Searle. It is meant to prove that even if you could model a consciousness to a degree indistinguishable from actual conscious behavior, it still wouldn’t actually be conscious; it would just be an automatic process.

The argument goes like this: Imagine you know nothing of the Chinese language (any of them, take your pick), written or spoken. You are placed into a box with two openings. Through one opening, pieces of paper with indecipherable squiggles on them are inserted. You take these squiggles and consult a vast library of rules telling you what squiggles to put on a second piece of paper based on what you find on the first. You then slide this second squiggled-up paper out the other opening. You’ve probably guessed it by now: to an outside Chinese-speaking interlocutor, the box appears to be responding correctly to questions posed to it in Chinese. It appears the box understands Chinese. You inside that box, however, don’t understand it at all; you’re just following instructions. You have no consciousness, no awareness of what the conversation is about, or even really that you’re facilitating a conversation at all. It could be anything. It’s a meaningless activity to you.
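If it helps to see how little the person in the box actually has to do, here’s a toy sketch of the setup in Python. The rulebook entries and the function name are invented for illustration; the point is just that the whole job is a lookup.

```python
# The "vast library of rules": input squiggles mapped to output squiggles.
# These two entries are made up for the sake of the example.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会一点。",      # "Do you speak Chinese?" -> "A little."
}

def person_in_the_box(squiggles: str) -> str:
    """Match the incoming paper against the rulebook and copy out
    whatever reply it prescribes. No understanding is involved."""
    # Fallback rule: "请再说一遍。" = "Please say that again."
    return RULEBOOK.get(squiggles, "请再说一遍。")

print(person_in_the_box("你好吗？"))  # the box appears to answer in Chinese
```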

Poor Rubot doesn’t actually solve the cube puzzle. It knows not what it does. There is no consciousness there.

Leaving aside the response that, for a system to behave indistinguishably from a conscious person, well enough to fool other conscious people, it would need to do far more than simply return rules-based responses, there is another objection I’ve been thinking about.

It’s true that you inside the box do not comprehend the conversation, but, in a way, the box really does. The box as a system understands. In the thought experiment, you are deliberately placed in the role of something like a neuron… not in the role of the interpreter of neural activity. The interpreter, the consciousness, in this experiment is the set of rules. All you are doing is delivering stimuli to the rule-set and returning output from the rule-set to the world.
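Here’s the same toy room carved up the way this objection carves it: you as a bare pass-through, and the rule-set as the thing doing whatever interpreting gets done. Again, the names are mine, made up for the sake of the sketch.

```python
class RuleSet:
    """The vast library of rules: on this reading, the candidate
    locus of understanding in the room."""
    def __init__(self, rules):
        self.rules = rules

    def respond(self, stimulus):
        return self.rules.get(stimulus, "…")


def courier(stimulus, rule_set):
    # You, inside the box: a bare pass-through. Nothing here inspects
    # or retains the content; stimuli go in, rule-set output comes back out.
    return rule_set.respond(stimulus)


room = RuleSet({"你好吗？": "我很好，谢谢。"})
print(courier("你好吗？", room))  # the box as a system answers
```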

So, is consciousness a rule-set? I don’t know. Maybe something like that. Is that what we are, the thing we refer to when we say “I want this” or “I’m going there”… the I inside our heads? Is that, in the end, a rule-set, partially built in conception and then elaborated through experience?

Evidence seems to suggest something like this is true.

All thought is action. All action is in some way reaction. Maybe our personhood is a really elaborate set of rules for interpreting stimuli that builds up in our meat-brains throughout our lives. If that were so, maybe we could attribute real consciousness to software that models conscious behavior so closely as to be indistinguishable from our own. Just because it’s not happening inside a human head doesn’t mean it might not really be as aware as we are.