Archives: Robot of the Week!

This one looks really cool:

Developed by the Human Centered Robotics Group at Essex University, this automaton falls into the category of devices that mimic behaviors observed in nature.

From the mission statement of the project that developed this fish (please excuse the poor English, I think it was written by a native Chinese speaker):

In nature, fish has astonishing swimming ability after thousands years evolution. It is well known that the tuna swims with high speed and high efficiency, the pike accelerates in a flash and the eel could swim skilfully into a narrow hole. Such astonishing swimming ability inspires us to improve the performance of aquatic man-made robotic systems, namely Robotic Fish. Instead of the conventional rotary propeller used in ship or underwater vehicles, the undulation movement provides the main energy of a robotic fish. The observation on a real fish shows that this kind of propulsion is more noiseless, effective, and maneuverability than the propeller-based propulsion. The aim of our project is to design and build autonomous robotic fishes that are able to reactive to the environment and navigate toward the charging station. In other words, they should have the features such as fish-swimming behaviour, autonomously navigating ability, cartoon-like appearance that is not-existed in the real world.

It’s rather like the swimming snake from Japan that I wrote about earlier (unfortunately the YouTube video for that one appears to have been taken down).

Early attempts to mimic animal locomotion in order to travel through foreign environments — think of the flapping-wing designs behind so many comical early failures at flight — often proved that simple imitation was the wrong starting point. But once the full set of physical principles underlying a form of motion is understood, returning to mimicry often yields surprising new capabilities. We had to bludgeon our way through the air in fixed-wing aircraft for a couple of decades before we could figure out how to make something like an ornithopter practical.

The fact that one of the advantages of the undulant propulsion this robotic fish uses is its silence is not necessarily as ominous as it sounds. Of course, one of the first things that comes to mind is its potential use as a guidance and propulsion system for underwater bombs. But there are many other reasons to be quiet under the sea. As marine biology research tools, I imagine you could swim them right into the midst of other sea life without disturbing natural behavior, as a noisy propeller-based submersible might. Autonomously guided versions might inspect undersea infrastructure around mining rigs or along undersea cables, or even perform environmental cleanup tasks like oceanic Roombas.

The tradition of mimicking animal biology with automatons goes back at least as far as Vaucanson’s Duck in 1739. But where that device’s purported mimicry — digestion — turned out in the end to be a not-very-elaborate hoax, many profound advances in robotics since have sprung from close observation of evolved natural solutions to environmental challenges. Locomotion is only one of those. Simple self-awareness is another. Self-repair and self-replication are yet others.

I’m trying to come up with a system for classifying the various strategies of biological mimicry in evidence in contemporary robotics research. I’m sure someone smarter than me has done this already, but until I find theirs, I’ll keep working on mine.

I imported all the posts from my previous blog, A Transparent Life, and was reminded of an old feature I had there called Robot of the Week. I’d post a YouTube video of some interesting robot, and it would either be a springboard for some topic, or it would just be fun to look at.

I was sad to discover many of those old videos are no longer available on YouTube due to copyright challenges. I doubt anyone anywhere was losing any money over a couple of minutes of robot demo video being viewable online.

To obtain vengeance, I’m starting the feature again. So, for the relaunch of Robot of the Week I’d like to introduce you to the world’s most sophisticated and expensive player piano:

Now, that’s neat for sure, and one day it will even be good at the violin – but this is the ethical equivalent of dressing a monkey in overalls and teaching it to smoke a cigar. Amusing to humans, but not what robots are for!

I wonder what insight there is to be gained from designing a humaniform violin-playing machine? It’s probably a good platform for experimenting with coordinated finger-like manipulation. But I suspect equally or even more complex multi-digit coordination is already commonplace in industrial assembly-line robots.

He’s cute, but we are not impressed.

This one is really neat:

This piece of robotic awesomeness comes courtesy of Cornell University.

The starfish-like robot starts out with no internal model of itself. It goes through a series of self-directed motions, which it uses to figure out what kinds of pieces it has, where its joints are, how many limbs it has, how it can possibly move, and so on. Then it uses the knowledge it has gained to figure out a way to walk, and it walks. When the engineers later remove a piece, it senses the lost portions, reconfigures its self-image, and tries to devise an alternate method of walking.
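The loop described above — explore, infer a self-model, then plan with it — can be sketched in a few lines of toy Python. This is my own illustration of the general idea, not the Cornell team’s actual algorithm or code: the robot performs a few test motions, keeps whichever candidate body model best predicts what it actually observed, and then uses that learned self-model to choose a gait.

```python
# Toy self-modeling sketch (my own illustration, not Cornell's code).

def true_body(action):
    # Hidden ground truth: forward displacement produced by
    # (left, right) limb efforts. The robot can't see this directly;
    # it can only try actions and measure the result.
    left, right = action
    return 0.6 * left + 0.4 * right

# Competing hypotheses the robot holds about its own body.
candidate_models = {
    "symmetric":   lambda a: 0.5 * a[0] + 0.5 * a[1],
    "left-heavy":  lambda a: 0.6 * a[0] + 0.4 * a[1],
    "right-heavy": lambda a: 0.4 * a[0] + 0.6 * a[1],
}

# 1. Self-directed test motions, recording what actually happened.
test_actions = [(1, 0), (0, 1), (1, 1), (0.5, 1)]
observations = [(a, true_body(a)) for a in test_actions]

# 2. Keep the model whose predictions best match the observations.
def model_error(model):
    return sum((model(a) - seen) ** 2 for a, seen in observations)

best_name = min(candidate_models, key=lambda n: model_error(candidate_models[n]))

# 3. Plan with the learned self-model: pick the gait it predicts
#    will move the body farthest.
gait_options = [(1, 0), (0, 1), (1, 1)]
best_gait = max(gait_options, key=candidate_models[best_name])

print(best_name, best_gait)  # -> left-heavy (1, 1)
```

If a “limb” were removed — say the left effort stopped producing displacement — re-running steps 1–3 would select a different model and a different gait, which is the essence of the reconfiguration the video shows.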

Although there is a significant amount of showmanship in the pre-programmed human interaction this robot displays, the actual Rubik’s Cube solving is legitimate. In this video the cube isn’t too badly mixed up by the little girl to begin with, but the robot can usually solve the cube no matter how mixed up it is in about 35 seconds, or about 20 moves total:

Rubik’s Cube solving machines actually look more astounding than they are. What you essentially need is some kind of sensor that can recognize the pattern of squares on each side of the cube (this robot does that by holding the cube up to its eyes, which are actually scanners), then a piece of software, much like the chess-playing software everyone is fairly familiar with, to determine what combination of moves is required to complete the task. Finally, you need some moderately precise manipulators that can turn the cube.
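The middle stage of that pipeline — the solver software — can be sketched with a toy example. This is hypothetical illustration, not the robot’s actual software: represent the puzzle state as a tuple, define the legal moves, and breadth-first search over move sequences until the solved state turns up. Real cube solvers use the same search idea with far smarter pruning (the full cube’s state space is much too large for naive search); a four-sticker puzzle keeps the sketch runnable.

```python
# Toy puzzle solver sketch (hypothetical; real cube solvers prune
# the search far more aggressively).
from collections import deque

SOLVED = (0, 1, 2, 3)

def swap_front(state):   # a "move": swap the first two stickers
    return (state[1], state[0], state[2], state[3])

def rotate(state):       # a "move": cycle all stickers one position
    return state[1:] + state[:1]

MOVES = {"swap": swap_front, "rotate": rotate}

def solve(start):
    """Return the shortest list of move names taking start to SOLVED."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == SOLVED:
            return path
        for name, move in MOVES.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # unreachable scramble

scrambled = (1, 0, 2, 3)   # as if read off by the scanner stage
print(solve(scrambled))    # -> ['swap']
```

The robot’s scanner eyes supply `scrambled`, the search supplies the move list, and the manipulators execute it — three decoupled stages, each individually unremarkable.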

J. P. Brown, an archaeological conservator at the Field Museum in Chicago (not an engineer or inventor), has posted instructions for building just such a robot using nothing more complex than Lego Mindstorms! He even posts the full code to his color recognition program and the logic for the cube-solving solution he uses. His machine is slow compared to the one in the video above, but it works, and you can build it yourself. You should give it a try!

When non-specialists using off-the-shelf tools can build robotic manipulators which a mere 10 years ago would have been projects worthy of professional robotics labs, you’ve got to realize that a real robot renaissance is on the rise.

(Sorry about that.)

In keeping with the run of dramatically different locomotion techniques being experimented with in robotics, here is an extended video segment from a Japanese television show featuring a robotic water eel. It’s in Japanese, but if you watch it all the way through, it’s really visually informative about how the mechanism actually works. There’s also a bit comparing the motion of a snake across the ground with the way a person on rollerblades can gain forward momentum by alternately spreading their legs and drawing them back together — a similarity I’d never considered before… it’s a cool insight:

This blog is starting to become the “Robot of the Week” column, as I’ve been unable, for lack of time, to post at any length on other topics during the week. This should be changing soon, and though Robot of the Week will remain the Monday feature, I’ll be getting back to more work on ideas of wealth creation and science in general as well.

You might have heard about this one already, but it’s one of the more astonishing things to have happened in the past couple of years. Dr. Miguel Nicolelis of Duke University has successfully wired monkeys’ brains to a robotic arm, which they have learned to control using thought alone:

Here is a New Scientist article on the subject.

Although not exactly a robot in the autonomous sense, this illustrates a kind of blending of the robotic into the biological that has been going on for some time now. There are, in fact, many cyborgs living among us today. Many, many people depend on their mechanical enhancements for continued life, mobility, the ability to communicate, or all of these things at once. Anyone who has:

a pacemaker,
an artificial heart,
a portable dialysis machine,
portable oxygen,
an automated wheelchair,
artificial limbs,
a hearing aid,
contact lenses or glasses, or
a speech-assistance machine

is already in some degree a cyborg.

You could argue that almost any tool that enhances human performance is a step down the road to cybernetics, but for the word to have any real meaning I think you have to draw the line somewhere. For me, any time we take a machine into our bodies, or invest some degree of our consciousness into a machine, we are talking about the merger that produces cyborgs.

It’s interesting to think that, inasmuch as our conscious minds seem to ride along on our biological bodies with less real control over them than we might suppose, the ongoing push toward cybernetics isn’t so much an attempt to prolong the life of the body as it is consciousness attempting to devise a more acquiescent, durable host for itself. Consciousness, the selfish meme, attempting to transcend its withering native flesh through the agency of technological invention — an activity unique to consciousness itself.