The other day, I had the opportunity to be interviewed by Teodora Petkova for Ontotext as part of a series on semantic technology and the future of knowledge.

Here’s the interview, which touches on knowledge technologies, organizations, and even Dr. Dolittle.

Thanks for the opportunity, Teodora.

Originally shared by Ontotext

Aligning Codes

At The Nexus of People, Technology and Organizations with Gideon Rosenblatt

Gideon Rosenblatt is a writer with a background in technology, business, social enterprise and social change. Gideon ran a mission-driven technology consulting group called Groundwire for nine years. As an agent of social change, this social enterprise built websites, relationship management databases and other technologies to improve constituent engagement in hundreds of environmental and sustainability organizations across North America. Before that, Gideon spent ten years at Microsoft in marketing and product development. He started as a product marketer for Microsoft’s early ‘multimedia’ encyclopedia and other CD-ROM titles. Later, he conceived and built CarPoint, one of the world’s first large-scale e-commerce websites, and ran that software development team for several years. Prior to that, Gideon earned an MBA in marketing from Wharton after four years of doing consulting and market research for US corporations in China.

Today, Gideon writes about artificial intelligence, mission-driven technology, people-centered organizations, and humanity’s ties with technology. He runs two websites, The Vital Edge and the Alchemy of Change, both worth following for insights about the future of human-machine intelligence. There, Gideon Rosenblatt focuses on the intersection of technology and mission-driven, stakeholder-friendly organizations, writing to ensure a place for the human heart in the future of intelligence on this planet.

We are grateful to have been able to interview Gideon and gain valuable perspectives about a future where technologies, human intelligence and our collective efforts for sense-making converge.

Enjoy this talk!

Gideon, how much of the new intelligence is computer code and how much of it is cultural code?

It’s my belief that culture is the “code within the code” of our machine code. That cultural code is most directly influenced by the cultures of the organizations developing that machine code. That, of course, is wrapped in a broader cultural coding. The way that the cultural code shows up in our technologies though is partially passed through by the individuals working on those projects in these companies. The bigger influence though is the organizational culture. That’s what sets the frame. That’s what establishes the business rules and the subtle cues of what is and what isn’t acceptable. One of the big problems we face as we develop our machine code is that the cultural coding in many of the for-profit corporations developing our most powerful technologies is dominated by a Wall Street culture that places returns to shareholders above all other values in the company. That’s a big problem, especially when you consider the long-term importance of what we are now building in terms of the future of intelligence on this planet.

You once wrote that “We will swim in a sea of meaning with human experience, in the form of attention, crystalizing knowledge, like particles from quantum possibility.” What do you think is the most challenging part when it comes to developing artificial intelligence tools and teaching machines to understand meaning?

I think the two hardest nuts to crack with regard to synthetic intelligence will be “experience” and “volition.” Your question about meaning is directly related to the former.

Subjective experience is one of those shy problems. You try to probe it and it just pulls itself further inward. We do all these experiments, poking and prodding, stimulating neurons here and there and it looks like we’re making headway on some sort of shared, objective meaning. Last fall, I interviewed Professor Marcel Just on his recent research developing computational models for predicting how particular sentences activate the way neurons fire in the brain. It is fascinating stuff. They found that the same sentences activated the same regions of the brain across different research participants. That means that when I read “the fish lived in the river,” the neurons in my brain fire in a very similar pattern to those in yours. The problem, though, is that just because we know how to map this cluster of neurons to this cluster of words doesn’t mean that we know what the experience of those words feels like to another person. The subjectivity of another is a private world forever cloistered away from the analytical, probing mind.

Machines are an extension of that analytical, probing mind. Machines will agree on definitions, but definitions are only one half of meaning. Associations are the other half and true associations are affective associations that seem to be rooted in the ground of experience. Right now at least, that capacity for experience seems to be intrinsically biological. Will we break that barrier? I don’t know.

When did you first realize that the threads of machine intelligence, human consciousness and the way we build and lead organizations come into one: the fabrics of a brand new understanding of cognition?

That’s an interesting question. I came out of the technology world, working nearly a decade at Microsoft before running a mission-driven technology consulting organization for another nine years. I left my work managing organizations and teams of people in order to focus on writing. I initially settled on writing about mission-driven, stakeholder centric organizations. Why? Because my last job had focused on helping environmental organizations, who were routinely being outmaneuvered by for-profit businesses with large financial incentives to undermine their work. I wanted to figure out how to realign business incentives. Gradually, I found myself coming back to focusing on technology though. I just can’t seem to help it; it’s in my blood, I suppose. I just love following the cutting edge of innovation and that led me to focus on machine learning. After spending a couple years teaching myself this fast-changing world as best I could, I have since come back around and am now synching these two seemingly distinct fields. My reasoning relates very much to your first question. We are building the future of planetary intelligence and it is intimately wrapped up in our modern organizations. We need to focus like crazy on improving the “code within the code” of these organizations if we want the future of intelligence on this planet to remain supportive of humans and life more generally.

If, as you say “Human intelligence occupies a microscopically small niche within the vast universe of cognition” how do you think machine learning will help us enlarge a bit that niche?

Ha! I don’t know, but I like to think that machine learning will one day help us to become Doctor Dolittle. I don’t know if you ever saw that movie, but it’s about a doctor who can talk to the animals. Going back to those models for predicting neural firings from question two above, it’s conceivable that we could extend that work to trying to understand what animals “mean” when they utter various call signs as well as other forms of communication.

The more interesting question though centers on expanding our understanding of cognition. What does it mean to be smart like an octopus is smart? Their brains are decentralized, with part of their processing power in a central brain but lots of it distributed into neural clusters in their eight arms. How cool is that? It’s one thing to understand how that kind of intelligence is going to be intrinsically different from ours and another thing entirely to try to understand what nine centers of octo-sentience might feel like. Again though, knowing for sure seems like a nut we might never crack due to the problem of subjective experience I mentioned above.

After analog and digital what’s next? 🙂

Etheric!! 🙂

How further will technology take collective intelligence and more importantly the way we as humans build and run organizations and enterprises?

I am actually in the process of writing a book that is in large part about just this. So I have a “bookfull” of thoughts on this topic. For right now though, what I’ll say is that we need to drastically change our notions of what organizational intelligence looks like so that it is no longer employee-centric. Organizational boundaries are becoming much more like permeable membranes and one of the key things that flows through that membrane is intelligence!

Have you had any experience with the Semantic Web and semantic technology? And how do you see them changing the environment we live and work in?

Back in 2002, I had an opportunity to meet with a guy who ran a company called Sidereal Technologies or something close to that. Their big product was a system for doing faceted search on semantically marked up texts. Man, was it cool. Before that though, you could say that semantic technologies were what pulled me into the field of technology in the first place. My first work at Microsoft was in multimedia publishing. It was our group that built Microsoft’s Encarta encyclopedia and a whole bunch of other cool titles. The semantic markup that we used for all that stuff was SGML. This was back in the early 1990s, so it was pretty early on in the field. We distributed these titles on CD-ROM, which enabled us to do amazing things, at least for that time. For instance, our Cinemania movie guide allowed you to do really sophisticated faceted searches, combining all kinds of different attributes and getting near-instantaneous filtering of the thousands and thousands of movies in the database. I then went on to start an online car-buying service, and the heart of that thing was a gigantic database of SGML-marked-up vehicle specs. We invested a huge amount of work and money into standardizing meanings across automakers. It was hard work, but it enabled us to do very powerful comparisons and, most importantly, configure all these vehicles with various options, trims, colors, etc. We called it a “configurator” and it was an amazing example of what is possible when your technology can apply a common understanding across a wide range of data suppliers. So yeah, I was very involved in this field at one point in my life. I still have a soft spot for it.
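The faceted search idea described above — combining several attribute filters over structured records and getting back every match instantly — can be sketched in a few lines. This is just an illustrative toy, not the Cinemania or CarPoint implementation; the movie records and facet names here are invented for the example.

```python
# Minimal sketch of faceted filtering over structured records.
# All data below is invented for illustration.
movies = [
    {"title": "Metropolis", "genre": "sci-fi", "decade": 1920, "rating": 4},
    {"title": "Casablanca", "genre": "drama",  "decade": 1940, "rating": 5},
    {"title": "Alien",      "genre": "sci-fi", "decade": 1970, "rating": 5},
]

def faceted_search(records, **facets):
    """Return the records that match every requested facet value."""
    return [r for r in records
            if all(r.get(key) == value for key, value in facets.items())]

# Combine two facets: genre AND rating must both match.
results = faceted_search(movies, genre="sci-fi", rating=5)
print([m["title"] for m in results])  # -> ['Alien']
```

The power of the approach comes less from the filtering loop than from the standardization work the interview mentions: faceted search only works once every supplier’s data has been mapped onto a shared set of attributes and values.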

I am also a huge fan of Tim Berners-Lee. His vision for the Semantic Web is very compelling. My question continues to be whether we will get there through tagging or whether our machines will get so smart that we won’t need to do all that tagging. Perhaps they will be able to simply overlay semantic meaning using probabilistic models. I really don’t follow the field closely enough these days to know.

As for the impacts, they are probably too broad to list or even know. The one thing I do think is worth mentioning, however, is that once they understand the definitions and contextual meanings of word clusters, our machines will be used to pre-process knowledge for us in ways that will blow our minds. Want to know how Roman architecture influenced the city of Washington, DC, for example? Just ask and the future of Google Search will read through thousands of documents, process their meaning near instantly and then spit it back to you, formatted to your length, complexity and style specifications. We will learn way faster through these technologies.

OK. That’s it. Thanks for the great questions and the opportunity to have this virtual chat!!

We thank you, Gideon Rosenblatt!

If you liked the interview and would like to stay tuned for more #semtechtalks, follow our collection Semantic Technology Talks at