Reservoir Computing

The idea is commonly known as “reservoir computing” and grew out of attempts to develop computer networks modelled on the brain. The premise is that we can tap into the behaviour of physical systems – anything from a bucket of water to blobs of plastic laced with carbon nanotubes – and harness their natural computing power.

The basic idea is to stimulate a material in some way and learn to measure how this affects it. If you can work out how the input stimulation maps to the output change, you effectively have a calculation that you can then use as part of a range of computations. Unlike traditional computer chips, which depend on the precise positions of electrons, the specific arrangement of the particles in the material isn’t important. Instead, we only need to observe certain overall properties that let us measure the output change in the material.

Wow. I’d not heard of this before, but it seems pretty intriguing, and I can see how something like this might work well with machine learning approaches. The trick, I suppose, is figuring out how to “train” the material, and that’s the part I don’t get here. I can see training a deep learning system by tuning its parameters, but how would one tune a bucket of water?
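
From what I’ve read since, the standard reservoir computing recipe sidesteps exactly that problem: you never tune the reservoir itself – whether it’s water, plastic, or a random recurrent network in software. You drive it with your input, record its high-dimensional response, and train only a simple linear readout on top of those recordings. Here’s a minimal “echo state network” sketch of the idea in Python; the network sizes, the ridge-regression readout, and the sine-prediction task are illustrative choices of mine, not anything from the linked article.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- A fixed, random "reservoir" (software stand-in for the physical material) ---
n_inputs, n_reservoir = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))  # input coupling (fixed)
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))  # internal dynamics (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable ("echo state" property)

def run_reservoir(inputs):
    """Stimulate the reservoir step by step and record how it responds."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # Nonlinear update of the reservoir state -- this part is never trained.
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# --- Illustrative task: predict the next value of a noisy sine wave ---
t = np.linspace(0, 40 * np.pi, 4000)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)
u_train, y_train = signal[:-1], signal[1:]  # input now, target = one step ahead

states = run_reservoir(u_train)
washout = 100  # discard the initial transient before the dynamics settle
X, Y = states[washout:], y_train[washout:]

# --- The ONLY trained part: a linear readout, fitted by ridge regression ---
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)

pred = X @ W_out
print("training MSE:", np.mean((pred - Y) ** 2))
```

If that’s right, “training” a bucket of water would mean something like filming the ripples, treating the pixels as the reservoir state, and fitting the readout to those recordings – which is apparently roughly what early “pattern recognition in a bucket” experiments did.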

Thanks John Verdon and Mark Bruce.

Originally shared by John Verdon

Thanks Mark Bruce 

https://theconversation.com/theres-a-way-to-turn-almost-any-object-into-a-computer-and-it-could-cause-shockwaves-in-ai-62235