Overview of Artificial Intelligence History and Players

Reading Time: 1 minute

A good overview of the history and major players of artificial intelligence, machine learning, and deep learning.

https://www.theverge.com/2018/10/16/17985168/deep-learning-revolution-terrence-sejnowski-artificial-intelligence-technology

13 comments

  1. Good article. I was glad to see they defined the terms at the beginning. 🙂

  2. This is so exciting. Thanks, Gideon!

  3. Good if AI stays friendly to us.

  4. A good overview, yes, from an historical perspective, but a poor one from a critical-judgment viewpoint. DL is a modern instantiation of statistical inference. Being modern and effective does not exempt it from essential principles, in particular the fundamental requirement of having an ergodic data set for “learning”: converging to a preferred response to a salient signature, i.e. a pattern whose random variations originate from a common generating function.

    In many cases of interest this is “true” and DL works fine. In other cases it is not, and DL fails miserably, in a brittle way – a dangerous behaviour whenever resilience or graceful degradation is a must, as in critical applications. DARPA has just announced a program to look into this. They will find that ergodicity is hard to assess (except in toy, well-controlled experiments), thus requiring credible prior knowledge of the data generation process. (A toy illustration of this brittleness appears after the comment thread, below.)

  5. Denis Poussart, I’ve not heard that term specifically, but I am familiar with the problem in general terms: the difficulty of choosing sampling data, and how that becomes intractable when predicting novel circumstances. Seems like a very challenging and fundamental statistical-prediction problem, not limited to DL. Don’t you think?

  6. Future’s new revelations or tribulation? Knowledge is absolutely brilliant, for good reason. Why . . .?

  7. Gideon Rosenblatt YES, ABSOLUTELY! Reliable probabilistic inference requires that statistics be computed from data samples that all come from the same generating process. The term “ergodicity” qualifies this condition. The very same condition applies to “Big Data” analysis, and it is a deep paradox that the bigger the data being analyzed, the harder it becomes to satisfy this requirement (since data may be recorded in varying conditions that modify the generating process in subtle ways). (See the time-average sketch after this thread for a toy illustration of the condition.)

    DL is “just” another way to process such data, but there is no magic exemption from having quality data to begin with.

    Strictly speaking, the situation where this requirement would be met is one where “everything” is in an isolated box – or can safely be assumed to be nearly so. Historically, this is said to be taking a reductionist view. This is the stance that has been taken in classical science, where experimenters are extremely careful to isolate the phenomena under study (the reason for doing controlled experiments).

    In reality, especially when human affairs are at play – but even in natural dynamics, for instance weather – every data sample results from a hugely numerous, recursively nested set of interactions: the world IS complex.

    Human intelligence has this extraordinary capability to navigate – with limitations, of course – through a maze of potential influences, and to detect and appreciate particular patterns that emerge not just from observed data but also blend with the wealth of human knowledge (all the way to cultural / historical perspectives) cast in abstract mental models. We are still eons from understanding how the human mind does this – clearly it involves processes that span a gigantic conceptual space, all the way from molecular dynamics to neuroscience to psychology to history and philosophy…

    DL is a (good) way to extract / match patterns from sets of data so large that they would overwhelm humans with fatigue – OK if, and only if, this (simplistic) ergodicity hypothesis is judged to be “satisfied”. When put to the test of real, uncontrived dynamics, it may quickly become brittle and fail. This type of machine intelligence is being deployed, often without sufficient critical judgement – thankfully, the hype is coming down. It should never be used just by itself in a critical situation, where failure may have immense consequences. For the foreseeable future, my own perspective is that the most promising path involves a blending of such advanced machine computations (like DL, which is cheap) with human judgment (which is deep when well trained and competent), by optimizing human-machine cognitive interfaces. And of course, this raises all kinds of further questions, like ethical use. (Apologies for this long post!)

  8. Denis Poussart, these are the kinds of conversations and sharing that I’m really going to miss when G+ shuts down. It’s so sad, actually. So on a side note, have you decided where you’re going to hang your online shingle come the diaspora? I want to stay in touch.

    Not having a background in mathematics and having only really scratched the surface of statistics in business school, I actually needed to do a little more digging on this ergodicity concept. It’s interesting that “ergo” means “therefore,” which suggests that at least at one level it means “thereforeness.” What that suggests to me is that with ergodicity, there is a reason that permeates the data, a kind of causal force unifying it.

    I found the attached article very helpful for a lay person. The example regarding the quality of journalism in a newspaper gave me an intuitive sense of what you’re talking about. What I find interesting is that, at least in this example, there is a process at work — an editorial process — that ensures the “thereforeness” across time (weeks of editing) and space (different editors). I’m curious whether you think there is anything worth elaborating on there.

    Thanks for your interesting comments. I always value your insights.

    news.softpedia.com – What Is Ergodicity?

  9. Mr Poussart points out that if we permit simplicity, our system of understanding ergodicity works better. He also points out that rationality, in the context of classical science, is a determining factor in our search for the truth. At first I tried to consider ergodicity in this context; I thought perhaps that in the gallery of our respective consciousnesses each of us might admire our masterpieces without envy, shame or worries. I respect that you can consider ergodicity in proportion to the chaos that follows when, inevitably, a human being challenges mathematical certainty. I have some questions to ask of the systems used by the prison and forensic estates in the UK that are determinant in forecasting and analysing risk in the inmate population. This science is used within the framework of the entire UK justice system. This is important in the context of how this debate sets out the principles of statistical data and how it is used.
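
To make the ergodicity condition discussed in comments 4 and 7 concrete, here is a minimal Python sketch; both toy processes are invented purely for illustration. In the ergodic case, the time average of one long realization converges to the ensemble average. In the non-ergodic case, each trajectory is locked to a hidden regime, so no amount of data from a single trajectory recovers the ensemble statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ergodic toy process: i.i.d. noise around a fixed mean. The time average of
# one long realization converges to the ensemble average (here, 1.0).
ergodic = rng.normal(loc=1.0, scale=1.0, size=100_000)

# Non-ergodic toy process: each realization first draws a hidden "regime"
# (a change in the generating function), then stays there forever. The time
# average of one trajectory recovers only its own regime, never the
# ensemble mean of 0.
def non_ergodic_trajectory(n):
    regime = rng.choice([-5.0, 5.0])  # hidden generating condition
    return regime + rng.normal(size=n)

time_avg = non_ergodic_trajectory(100_000).mean()
ensemble_avg = np.mean([non_ergodic_trajectory(1)[0] for _ in range(10_000)])

print(f"ergodic     time average: {ergodic.mean():+.3f}  (ensemble mean +1.000)")
print(f"non-ergodic time average: {time_avg:+.3f}  (ensemble mean  0.000)")
print(f"non-ergodic ensemble avg: {ensemble_avg:+.3f}")
```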
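
And here is a toy sketch of the brittleness Denis Poussart describes in comment 4, under the same caveat that everything in it is an invented assumption: a sinusoidal “true” process, a quadratic drift term that quietly alters the generating function, and a polynomial fit standing in for a learned model. A model fitted on samples from one generating process degrades sharply once that process drifts.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate(n, drift=0.0):
    """Toy generating process: y = sin(x) + drift*x**2 + noise.
    A nonzero drift quietly changes the generating function."""
    x = rng.uniform(-3.0, 3.0, size=n)
    y = np.sin(x) + drift * x**2 + rng.normal(scale=0.1, size=n)
    return x, y

# "Train" on the original, undrifted process. A degree-5 polynomial fit
# stands in for the learned model; the point here is the data, not the
# model class.
x_train, y_train = generate(2_000, drift=0.0)
coeffs = np.polyfit(x_train, y_train, deg=5)

# Evaluate as the process drifts away from the training conditions.
for drift in (0.0, 0.1, 0.5, 1.0):
    x_test, y_test = generate(2_000, drift=drift)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"drift={drift:4.1f}   test MSE={mse:8.4f}")
```

The exact numbers are irrelevant; the point is that the fitted model’s error grows steadily with the drift even though nothing in the training data signals that the generating process has changed.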
