Mother Nature is discovering a new clay for her acts of creation. Carbon, that trusty medium for life’s evolutionary processes, is losing its monopoly as nature learns to express herself through the inorganic chemistry of technology. Machine learning is how she does it.
If you have used Google Translate, Facebook’s news feed, or Amazon’s product recommendations, then you have benefited from the fruits of machine learning. These companies are using machine learning to radically change the way software development works.
One of the interesting consequences of machine learning is that it generates code that is difficult for software engineers to understand. It’s a bit like passing a bag of ingredients through an order window to a cook on the other side, getting a delicious meal back, and not knowing exactly how she did it. In this case, the kitchen acts like a “black box”: we know the input going in and the output coming out, but its internal workings remain opaque.
The input into a machine learning system is data and its output is an algorithmic model representing that data. Inside the box, the data is put through a complex series of cascading transformations, and all that iteration upon itself leads to complexity and inscrutability.
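To make the data-in, model-out idea concrete, here is a deliberately tiny sketch (my own toy example, not any real ML library): we feed in a handful of (x, y) pairs and gradient descent hands back a fitted model. Real systems stack thousands of such update steps across many layers, which is where the inscrutability comes from, but the shape of the pipeline is the same.

```python
# Minimal sketch: data goes in, a model comes out.
# The "model" here is a single learned weight w for y ≈ w * x.

def train(data, lr=0.01, epochs=200):
    """Return a model (here, one weight) fitted to (x, y) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y      # how far the model's guess is off
            w -= lr * error * x    # nudge w to shrink the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # a toy "pool of data" where y = 2x
model = train(data)
print(round(model, 2))             # prints 2.0 -- the learned model
```

The output is not code a human wrote; it is a parameter the data itself produced. Scale that single weight up to millions of them and you get the black box the article describes.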
Beyond this complexity and inscrutability, there is something else: a kind of alien quality that pervades much of what machine learning touches. Machine learning systems draw on massive pools of data in order to generate statistical models of reality. These models are not constrained by the sense-making patterns that evolved with the human brain. Much of what our brains do, in fact, is toss out information that doesn’t conform to the way we think the world should work. For machines, our human conventions represent just a small portion of reality’s possibilities. It is this ability to model things beyond the way the human brain understands them that gives machine learning solutions their alien quality.
In most cases, the unusual nature of this new code is hidden by human grooming. But not always. Machine learning systems that lead to weird product recommendations or news feed items would quickly be rejected by end-users. Companies therefore constrain these systems by prioritizing results that are pleasing to humans. But in 2014, Google engineers developed a computer vision program called DeepDream that opened a window into how machines ‘recognize’ objects. The image at the top of this article is a lotus flower, as ‘seen’ by an algorithm trained to recognize this cute little robot to the right in everything it sees. The result is definitely otherworldly.
DeepDream helps expose the alien output of machine learning. One question it raises is whether this strangeness might be used in more serious applications.
It turns out that machine learning is an excellent design tool for thinking outside the black box of the human mind. Autodesk is well known for its computer-aided design software. The company is now pioneering a new field called “generative design” that taps the alien creativity of machine learning. The designs it generates feel organic, and even alien, as an image search for “Autodesk Dreamcatcher” or “generative design” will show.
Machine learning builds models of reality far more complex than what we can hold in our heads. Autodesk’s generative design tools work from goals and constraints. The constraints might be the precise location of the mounting bolts needed to integrate a new cylinder head design into the rest of an engine. The goals can then be tweaked within those constraints so that the designer can trade off weight, strength, and cost.
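A toy sketch of the goals-and-constraints idea (illustrative only, and nothing like Autodesk’s actual algorithms, which search vastly larger design spaces): the constraint is a minimum strength, the goal is minimum weight, and the software, not the designer, picks the winning candidate.

```python
# Toy "generative design" search over bracket thicknesses.
# Assumed linear strength and weight models -- purely illustrative.

def strength(thickness_mm):
    return 40.0 * thickness_mm       # assumed: strength grows with thickness

def weight(thickness_mm):
    return 1.5 * thickness_mm        # assumed: so does weight

def generate(min_strength, candidates):
    """Return the lightest candidate that satisfies the strength constraint."""
    feasible = [t for t in candidates if strength(t) >= min_strength]
    return min(feasible, key=weight)

candidates = [t / 10 for t in range(10, 101)]     # 1.0 mm to 10.0 mm
best = generate(min_strength=200.0, candidates=candidates)
print(best)                                       # prints 5.0
```

Loosen the strength constraint and the tool “grows” a lighter part; tighten it and the part thickens. The designer steers with goals rather than drawing the geometry directly.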
In the above video, the Autodesk spokesperson describes it as “growing” a chassis for a car design. This language is not accidental. It reflects a newly emerging reality of software, and something even bigger.
Humans think about automation as though the world revolved around us. We see it as processes that don’t require human intervention, even though the world got along just fine for billions of years without us doing any intervening. The other interesting thing is how we refer to our own actions as “automatic” when they don’t involve conscious thought. It’s as though we recognize that our instincts are, in fact, a kind of automation.
When a Venus flytrap snaps shut on an ant, that is automation. When a neuron triggers a response in another neuron, that is automation. When a bee responds to the ultraviolet signals of a flower and collects pollen, that too is automation. Automation is thus bigger and older than technology. Biological automation existed long before technological automation. It’s just that technological automation is now advanced enough for us to see the similarities.
As we automate software development, machine learning unlocks a complex reality that exceeds the unaided human mind’s ability to comprehend. These techniques see past our cognitive filters and give form to an underlying reality we could not see.
Thanks to these techniques, nature is learning to talk to us once again. We turned away from her once, as we became enamored with a mechanical world. Now that ordered precision is bending to the overwhelming gravity of nature’s underlying order. For just as automation is bigger than technology, we are learning that Nature is bigger than biology.
16 thoughts on “Artificially Intelligent Design”
Really like it.
Thank you Gideon for educating us in language we understand. Hugs to the family.
Thank you, Rick. Not everyone got this one, but I think there is something interesting and important going on here that just takes a perspective shift.
Providing the examples helped clarify the narrative. Greetings from Tucson, Gideon!
Thank you, Joyce. 🙂
Quantization is the enemy of evolution:
In other words, biological evolution operates under severe handicaps that need not apply in the digital domain.
Fascinating thread, Sean. The quantum references are over my head, but I’m following the points you’re making about the speed of digital evolution. And yes, I agree that the digital domain removes many constraints from biology — and even physics.
When you wrote, “The input into a machine learning system is data and its output is an algorithmic model representing that data,” did you mean, “its output is a statistical model representing that data”? Just wondering, because when I learned about ML, I was taught that the layers of the learning algorithm (is the input a cat picture?) analyze the data (thousands of sample cat photos) to build the statistical probability of a likely match (the input is/isn’t a cat). Thanks for the clarification.
Thanks for the comment, Dave. And that’s a good question. The difference here is whether one is talking about training machine learning systems or using them to do prediction work. Another way to describe the difference is training versus inference. In one, you’re using the data to train/build a kind of algorithmic model; in the other, you’re using that model as a statistical model of the world with which to do inference. Here’s a good article outlining the distinction:
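A toy way to see the two phases side by side (my own sketch, not tied to any particular library): fit() is the training phase, where data produces a model; predict() is the inference phase, where that frozen model labels new inputs.

```python
# Training vs. inference in miniature, using a nearest-class-mean classifier.

def fit(samples):
    """Training: consume labeled (value, label) data, produce a model."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    # The model is just the mean value seen for each label.
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Inference: apply the trained model to an unseen input."""
    return min(model, key=lambda label: abs(model[label] - value))

training_data = [(1.0, "cat"), (1.2, "cat"), (8.9, "dog"), (9.3, "dog")]
model = fit(training_data)     # training phase: data -> model
print(predict(model, 1.1))     # inference phase: model -> "cat"
```

Training is where the algorithmic model gets built; inference is where it behaves like a statistical model of the world.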
Also, I am still very much learning this stuff myself.
Great piece, Gideon! This post definitely resonated for me. Looking forward to an extended conversation.
Thanks, Richard. I’m so glad, and I’m looking forward to talking.
Thanks Gideon. That makes a lot of sense. Looking forward to more articles.
Thank you for the interesting read. There is a lot of imagination in those designs and creative possibilities in the scientific outlook.
Agreed. Thanks, Jodi.