Humanity is now developing our greatest contribution to the expansion of intelligence on the planet: the flowering of artificial intelligence. It would be a shame if all we used it for were Amazon shopping and Facebook birthday reminders.
Luckily, machine learning and artificial intelligence aren’t just a for-profit undertaking. Universities, companies, nonprofits, and governmental agencies are already busy developing interesting tools and applications that direct machine learning toward the common good. Though still in their early days, these initiatives just may represent our best bet for addressing our most challenging ecological and societal problems. Welcome to the world of “Mission-Driven AI.”
First, What is “Mission-Driven”?
There’s a difference between a social mission and a “customer mission.” Organizations serve a customer mission through dedication not just to customer satisfaction but to customer success — and it’s a wonderful thing to behold when companies do it. Customer missions focus on customer outcomes; social missions focus on social outcomes and the common good, be it societal or ecological. When I use the term “mission-driven” in this article, I mean this latter type of mission.
Mission-Driven AI Development
With this clarification in hand, let’s now look at mission-driven artificial intelligence by segmenting it into AI development and AI application. First, development.
Open Source AI Development
Mission-driven AI development is complicated by the prominent role of open source. I say this because some of the biggest open source machine learning projects are backed by very large corporations. Examples include Google’s TensorFlow and the recently announced Microsoft and Amazon framework, Gluon. The point is that there are lots of reasons to choose the open source model. Sometimes it’s a strategic rationale, like Google’s decision to open source its Android operating system in its battle with Apple. Sometimes it’s a more idealistic motivation like the Stallmanian commitment to freedom through technology. Clearly, some open-source AI development is mission-driven and for the common good. But not all.
Non-Profit AI Development
Squarely in the category of unabashedly mission-driven AI research and development is the non-profit organization OpenAI:
OpenAI’s mission is to build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible.
It’s worth mentioning a couple of other players in this field. In researching this article, I ran across the Prague-based organization, GoodAI, though I don’t know much about their work. And even though they aren’t directly involved in machine learning development themselves, another organization, AI4All, works to promote diversity and inclusion in the field of artificial intelligence. They focus on high school students, and partner closely with universities to broaden access to education in the field of artificial intelligence.
University AI Development
Speaking of universities, there are many that play important roles in AI research. Stanford, Carnegie Mellon, MIT, Berkeley, and the University of Washington are just a few of the top names in the US. In Canada, there are the University of Montreal, the University of Toronto (and the affiliated Vector Institute), and the University of Alberta, among others. How truly mission-driven these programs are is hard to say, aside from the fact that, in most if not all cases, their research is openly shared.
Mission-Driven AI Applications
Here’s how this division might apply to the mission-driven application of AI:
General Utility Machine Learning:
Because their utility is not limited to mission-driven organizations, and because the resources needed to develop them are significant, these systems are likely to be built by commercial providers rather than non-profit entities. Facebook, Apple, and Google are investing heavily in the underlying technology for intelligent agents. Though their current incarnations as chatbots are still limited, these technologies will dramatically alter the way we engage with organizations. They could substantially lower costs and improve the performance of a broad range of mission-driven organizations, especially those whose work entails extensive contact with the public.
Somewhere between general utilities like speech recognition and mission-specific applications, like the ones outlined below, are machine-learning systems for solving general non-profit needs. Using machine learning to improve fundraising analytics or impact evaluation are good examples.
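As a concrete (and entirely hypothetical) illustration of what machine learning for fundraising analytics might look like, here is a minimal donor-scoring sketch. The donors, features, and hand-set weights are all invented for the example; a real system would learn its weights from historical response data rather than fixing them by hand.

```python
# Illustrative sketch: ranking donors for a fundraising appeal using
# simple recency/frequency/monetary (RFM) features. All names, data,
# and weights are hypothetical.

donors = [
    # (name, months_since_last_gift, gifts_in_last_3_years, avg_gift_usd)
    ("A", 2, 6, 50.0),
    ("B", 30, 1, 500.0),
    ("C", 6, 3, 25.0),
]

def propensity_score(months_since, gift_count, avg_gift):
    """Combine RFM features into a rough 0-to-1 likelihood-style score."""
    recency = 1.0 / (1.0 + months_since)      # more recent => higher
    frequency = min(gift_count / 12.0, 1.0)   # cap at one gift a month
    monetary = min(avg_gift / 1000.0, 1.0)    # cap at a $1,000 average
    return 0.5 * recency + 0.3 * frequency + 0.2 * monetary

# Rank donors by score, most promising first.
ranked = sorted(donors, key=lambda d: propensity_score(*d[1:]), reverse=True)
for name, *features in ranked:
    print(name, round(propensity_score(*features), 3))
```

Even this toy version shows the appeal: a small organization could prioritize its outreach list with a few lines of code and a spreadsheet export, and a learned model would simply replace the hand-set weights.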
Solution-Specific Machine Learning:
The players are too numerous to comprehensively list here and there is considerable overlap with big-data analytics work. The field includes big players like Google, IBM, and Microsoft working both on their own and in partnership with mission-driven organizations. It also includes a number of smaller, more focused organizations like Delta Analytics, Alethiom, and DrivenData. One of the more interesting new applications is One Concern, which combines machine learning and hazard modeling to protect communities before, during and after natural disasters. If you know of other interesting projects or organizations like these, please drop me a pointer in the comments below.
People-Engaging Machine Learning:
The final category for mission-driven applications of machine learning is a bit more ‘out there’ and doesn’t yet exist as far as I know. It maps to the “people organizations” in Movement as Network — organizations defined by audiences rather than issues. Here, the opportunity is to use machine learning as a way to engage very large networks of constituents, much the way Facebook, Google, and Amazon do with their end-users. The opportunity here is to use machine learning to determine how best to engage and have impact with very large numbers of citizens. One intriguing example is pol.is, which uses machine learning to facilitate Internet-scale conversations that converge on a kind of smarter, collaborative democracy.
Given the huge quantity of data necessary for today’s machine learning algorithms, it seems unlikely that individual non-profit organizations could tackle an opportunity like this on their own. Perhaps organizations like Avaaz, with its 46 million members, or Change.org, with its 100 million members, could prove me wrong. Alternatively, there may be an opening here for formal coalitions or loose collaborative networks of organizations and people to pull something like this off.
I am very excited at the potential for machine learning (and artificial intelligence more broadly) to eventually have an enormously positive impact on the way we protect the common good. At this point, mission-driven artificial intelligence is more an opportunity, a dream, than a widespread reality. But that will change as the technology becomes more easily accessible.
As a society, we now face some very large risks to human survival and the health of the planet. I can’t think of a more worthy job for this remarkable new intelligence that we are now ushering in.
P.S. — I am using a Twitter List to track people and organizations involved in Mission-Driven AI.
8 thoughts on “Mission-Driven Artificial Intelligence and the Common Good”
Gideon, This whole arena of AI is so fascinating (and overwhelming) to me, a real layman in this field. Until recently, it seemed to be the stuff of science fiction, and yet now the uses are showing up everywhere.
Attached is an article about research being done at Stanford University, dealing with, of all things, predicting a patient’s end of life, in order to better use palliative care.
Within that article is a link to the actual study, which has details way beyond my understanding but will mean something to people of your training.
I find that this Stanford article is yet another amazing example of the unlimited areas of application of artificial intelligence. Stay tuned!
Thanks for the link, Bill. I’ll check it out. 🙂
Bill, this kind of predictive analytics has been around for a long time, only recently rebranded as AI. In fact, the big joke in the valley in the ’80s was that AI was anything that hadn’t been commercialized sufficiently to become mundane. A lot of us remember the so-called “AI Winter,” when the claims for these systems couldn’t yet be realized because of the limitations of storage, computational speed, and data acquisition. But in some fields, particularly finance, this stuff has been around for a long time, and it has had profound effects in shaping the present day. The predictive analytics used to define credit risk for mortgages are partially responsible for the red-lined racial segregation of cities. These systems, which generally operate by treating the future as a function of the past, can, paradoxically, reinforce the past onto the future. So be somewhat skeptical about these predictive systems for end-of-life care: they not only predict the end of life, but will, in some cases, actually shorten it by justifying policies to do so. Physicians will soon be faced, more than in the past, with moral decisions about disagreeing with machine-generated diagnostics. In the old days, they used to say, “nobody got fired for buying IBM.” But I fear a future when they start to say, “nobody got sued for agreeing with Dr. Watson.”
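The feedback loop described here can be shown in a few lines. This sketch is invented for illustration: a system “trained” on biased historical approval decisions reproduces that bias when asked to decide future cases.

```python
# Minimal illustration: a model that learns from past decisions will
# reproduce the bias in those decisions. The "historical" data below
# is fabricated for the example.
from collections import defaultdict

# Historical mortgage decisions: (neighborhood, approved?)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": estimate an approval rate per neighborhood.
counts = defaultdict(lambda: [0, 0])   # neighborhood -> [approved, total]
for hood, approved in history:
    counts[hood][0] += int(approved)
    counts[hood][1] += 1

def predict(hood):
    """Approve if the historical approval rate exceeds 50%."""
    approved, total = counts[hood]
    return approved / total > 0.5

# Identical applicants from different neighborhoods get different
# outcomes: the model projects the past onto the future.
print(predict("A"))  # True
print(predict("B"))  # False
```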
As I continue absorbing all of the information on artificial intelligence, I have a simple question. From the beginning of AI, it has required human intelligence in order to create the programs and processes to arrive at AI. Do we someday come to the point where AI no longer requires human intelligence? Brave New World?
I think that we are on the verge of just that point, Bill. Let me quote something from the DeepMind folks, talking about the latest iteration of the Go-playing system, AlphaGo Zero:
“It is as if the system had learned a new internal language of how to play Go… It could be possible that the human language of Go is inefficient in that it is unable to express more complex compound concepts…It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself.”
I think there’s another way to look at machine learning: it’s the systematic embedding of knowledge into automatable form. In fact, a good portion of the tech world isn’t aware that it is mostly doing knowledge engineering in one form or another: either directly and intentionally putting expertise into such systems (e.g., a Google “collaboration” with ophthalmologists labeling pictures of retinas to develop doctor-free retina diagnostics is ultimately the extraction of their expertise), or indirectly, as with the captcha systems that help Google train its Waymo vehicles. All of this, despite the hype about “deep learning,” is reaching maturity because computing, storage, and information collection have become cheap. But it is ultimately a form of epidemiology on steroids. Stepping back from the details of these systems, there is something more fundamental: who gets to own the productivity of the robots being created? If these platforms are merely vacuums for the knowledge of others, extracted from them on the cheap and then used primarily to benefit the platform owners, we’re headed back to the days of atomized piece labor for the few things that can’t be economically automated. That is, Triangle Shirtwaist 2.0. I’d like to see a lot more thought put into the ownership of these knowledge-collecting, predictive-analytics machines — and something more profound than the notion of a universal basic income, which is just a bandaid that allows things to progress as they currently are: a massive concentration of economic, and therefore political, power in a small number of super-empowered plutocrats.
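The “extraction of expertise” step described here is often as simple as aggregating expert labels into a training set. This sketch uses invented images and labels to show how individual judgments become a reusable dataset that the platform, not the experts, ends up holding.

```python
# Sketch of knowledge engineering via labeling: expert judgments are
# collected, aggregated, and become training data. The images and
# labels here are stand-ins, not a real dataset.
from collections import Counter

# Three (hypothetical) ophthalmologists label the same retina images.
expert_labels = {
    "img1": ["healthy", "healthy", "healthy"],
    "img2": ["diseased", "diseased", "healthy"],
    "img3": ["diseased", "diseased", "diseased"],
}

# Majority vote turns individual expertise into a single label per
# image -- the "extraction" step: the knowledge now lives in the dataset.
training_set = {
    img: Counter(labels).most_common(1)[0][0]
    for img, labels in expert_labels.items()
}

print(training_set)
```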