The Algebra of Neural Networks May Explain Feature Association

This one’s not for everyone, as it’s pretty dense. And ignore the title, which was clearly written by someone other than the author of the article, as this research has nothing to do with being able to perceive higher dimensions.

What it is about is how the local and global structure of networks comes together to help them connect features:

We conjecture that a stimulus may be processed by binding neurons into cliques of increasingly higher dimension, as a specific class of cell assemblies, possibly to represent features of the stimulus, and by binding these cliques into cavities of increasing complexity, possibly to represent the associations between the features.

This quote is not from the article, but rather from the original research (https://goo.gl/wnwfkC). Essentially, it says the researchers may have found a link between the way networks identify and associate ‘features’ in the objects they perceive. In this work, they’re talking about biological neural networks, but I think it’s roughly analogous to the kinds of layers that we see in deep learning networks in machine learning. I’m no expert in that, however, so if there are experts out there willing to weigh in, I’d be glad to hear your thoughts.
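
For readers who want a concrete handle on the terminology, here is a toy sketch. It is emphatically not the paper’s method (the original work analyzes directed cliques in reconstructed connectomes using algebraic-topology tooling); it just shows, on a small undirected graph built with the networkx library (my choice of tool, not the paper’s), what “binding neurons into cliques of increasing dimension” means.

```python
# Toy illustration only: not the paper's pipeline. It demonstrates the
# vocabulary: an n-clique of mutually connected neurons corresponds to
# an (n-1)-dimensional simplex, so larger cliques mean higher dimension.
import networkx as nx

# A tiny "network" of six neurons; edges mark pairwise connections.
G = nx.Graph()
G.add_edges_from([
    (0, 1), (0, 2), (1, 2),  # a triangle: a 3-clique (dimension 2)
    (2, 3), (2, 4), (3, 4),  # a second triangle, sharing neuron 2
    (3, 5),                  # a lone edge: a 2-clique (dimension 1)
])

# Enumerate the maximal cliques in the toy network.
for clique in nx.find_cliques(G):
    print(f"{len(clique)}-clique (dimension {len(clique) - 1}): {sorted(clique)}")
```

Running this prints the two triangles and the lone edge. The paper’s conjecture, loosely, is that features of a stimulus map onto such cliques, while the associations between features map onto the ‘cavities’ that groups of cliques enclose.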

Special thanks to Ron Serina for flagging this one for me.

Originally shared by Tom Eigelsbach

“The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

http://www.sciencealert.com/new-study-discovers-your-brain-actually-works-in-up-to-11-dimensions