Architects of Intelligence, a review

Here’s the setup of this book, Architects of Intelligence (2018): journalist Martin Ford interviews a couple of dozen geeks working on AI. Here’s the thing I worried about: that the book would just be full of speculative chit-chat on the impact of AI on jobs and the dangers (or not) of paper-clip-making AI monsters. The relief: every conversation gets around to that, but thankfully Ford also gives his interviewees space to discuss the way forward in AI research in a serious, mostly understandable-for-the-AI-novice way.

My four take-aways

First, it is debated, but the future is probably to be found in cooperation between the deep learning folk and the cognitive science people. In basic terms, the former rely on statistical algorithms learning ever-more complex, multi-dimensional patterns in data. The latter argue that this is only one form of intelligence, and a basic one: higher-level intelligence requires reason and understanding too, which can only be approximated with logical operations and structures that deep learning cannot generate.

There’s history. Deep learning/neural nets is currently where all the action is – but back in the 1960s, before the field had the right algorithms (backprop, ReLUs, convolutional layers, etc.), huge data resources and massive compute, those working on statistical routes to intelligence were shut out of the mainstream. They had the basic building block of what they do today, the perceptron, but it did not work. Only a few researchers kept at it – Geoff Hinton was one. Meanwhile, the AI mainstream of the 1980s-90s was dominated by manipulating symbols to represent reasoning. But those approaches relied on rules, and got bogged down in rules built upon rules upon more rules. (Just as autonomous driving cannot be solved with rules – it will only work with a sufficiently sophisticated set of principles.) Then in the 2010s deep learning came of age, won all the competitions, and is still driving all the advances we are beginning to enjoy (and fear) in image and voice recognition. All of DeepMind’s chess, Go and Atari game-beating is done with deep reinforcement learning algorithms.

Some deep learning people, like Hinton, think deep learning approaches still have a long way to take us, perhaps all the way to general intelligence. Reasoning, he argues, comes very late in evolutionary terms, and is built on top of a lot of other stuff.

But there are also reasons to think deep learning, for all its progress, will not take us all the way there. It is, after all, advanced curve-fitting – it can only “know” what it has been trained on. It ‘interpolates between contingencies’, but never really understands what it is seeing.
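To make that curve-fitting point concrete, here is a minimal sketch of my own (not from the book), using scikit-learn’s MLPRegressor as a stand-in for a deep network: the model fits a sine wave well inside the range it was trained on, and falls apart the moment you ask it about a point outside that range.

```python
# A minimal sketch (my own, not from the book) of the "curve-fitting" critique:
# a small neural network interpolates well inside its training range, but has
# no idea what to do outside it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(2000, 1))        # the training "contingencies"
y_train = np.sin(X_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

X_inside = np.array([[1.5]])    # interpolation: inside the training range
X_outside = np.array([[9.0]])   # extrapolation: far outside it

print("inside :", net.predict(X_inside)[0], "true:", np.sin(1.5))   # typically close
print("outside:", net.predict(X_outside)[0], "true:", np.sin(9.0))  # typically way off
```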

Take DeepMind’s Atari-playing algorithm, which taught itself the games and built up the required knowledge. Undoubtedly an impressive feat. But the training took the equivalent of 38 days per game – games a 12-year-old could play coherently after a few minutes. (As Google’s Ray Kurzweil says, “life begins at a billion examples”.) And the algorithm cannot generalize to other situations: introduce a new ball, or alter a rule, and it fails. The DeepMind folk would say we just need more training, more compute and more layers to learn the higher-level behaviors.

But the critics suggest these problems represent a fundamental block. Judea Pearl proposes there are three levels of cognition: seeing, intervening, imagining. While deep learning is progressing on the first two, it will never be able to jump to the third – which is where human “understanding” enters. Another sceptic, Gary Marcus, points to work on child development suggesting that children can make deep abstractions about what they see from a very early stage, and that this is what allows them to reason. Joshua Tenenbaum at MIT likewise argues that a baby develops ‘intuitive physics’ and ‘intuitive psychology’ in its first few months, then learns language, and then uses language to learn everything else.

Their argument is that we need logical/causal models to understand and explain the world, and that this structure cannot be ‘learned from scratch’ with deep learning. So these skeptics work on probabilistic (Bayesian) approaches, which have structure and priors built in. Marcus lays out some of the things we need: symbol manipulation, abstract variables, operations over variables, the type-token distinction, spatial translation, causality, an understanding of how objects move along paths through time and space, sets, etc. Somehow the future of AI research will have to meld these structures with the pattern-recognition capabilities that deep learning provides.
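As a toy illustration (mine, not Marcus’s or Tenenbaum’s) of what “structure and priors” buys you, here is a Beta-Binomial sketch: a model that starts with the structural assumption “coins are usually roughly fair” gives a sensible estimate after three flips, where learning from those three data points alone would not.

```python
# Toy sketch of "structure + priors" (my example, not from the book): a
# Beta(2, 2) prior encodes the belief "coins are usually roughly fair"
# before any data arrives.
def beta_binomial_posterior_mean(heads, tails, prior_a=2.0, prior_b=2.0):
    """Posterior mean of a coin's heads-probability under a Beta(prior_a, prior_b) prior."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# Three flips, all heads. The raw frequency estimate is 3/3 = 1.0, which no
# human would believe; the prior pulls the estimate back toward fairness.
print(beta_binomial_posterior_mean(heads=3, tails=0))   # 5/7 ~= 0.71
print(3 / 3)                                            # what "learning from scratch" says
```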

The second take-away was that language is really important for intelligence, in ways I’d not appreciated. We learn language, and we learn through it. We share the models in our brains with others through language. Hinton noted that when Trump tweets he’s not telling you facts, he’s showing a possible reaction to news, and this is a kind of teaching. Kurzweil argues that when we use language, we use the full range of human intelligence, including reasoning. Thus he likes a long, sufficiently planned and structured Turing Test to test lab-invented intelligence. The Turing Test asks whether an AI can persuade a human interlocutor, through written interaction, that it is human – a test some have criticized as too easily gamed. (Personally, I think we should rely on the “Garland Test”, which I am naming in honor of Alex Garland’s amazing movie Ex Machina. The “Garland Test” of AI intelligence is whether, alone in a room, an AI can make you fall in love with it.)

We get interviews with a few people pursuing work on language. David Ferrucci’s Elemental Cognition sounds really interesting – a firm aimed at reading and understanding a story. And the Allen Institute’s Aristo Project is based on an AI reading a book and then taking an exam. Kurzweil says that some of Google’s search, and its ‘Talk to Books’ feature, is already based on ‘understanding’ rather than just keyword-based search. (I wish he had said more about how that works!)

Third, Gary Marcus makes a cool point about our human mental capacities. Our auditory and optical systems are amazing (I didn’t know we are able to see a single photon of light, but apparently we are). But our memory sucks. While a computer has location-addressable, finely structured memory, we have a “cue-addressable” memory, triggered by idiosyncratic things like smells and associations. Given its distributed nature, our memory is unreliable. This bleeds over into our decision-making processes, with all the flaws explored by Daniel Kahneman and Amos Tversky (anchoring, confirmation bias, etc.) – all stuff that makes us unintelligent. Presumably evolution pushed resources towards seeing and hearing rather than an expensive, finely-tuned memory system: just remembering that the lions lived ‘over there’ was enough, but we really needed to be able to see and hear them very well. The point is interesting because it suggests that, with more resources, an AI could easily out-think us; and in the areas where it is already beating us – DeepMind’s AlphaZero, for instance – memory seems like its key weapon.
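For what it’s worth, here is a loose analogy of my own (not Marcus’s) for the location-addressable vs cue-addressable distinction: exact retrieval by index on one side, fuzzy retrieval by overlapping cues on the other.

```python
# A loose analogy (my own, not from the book): computer memory is
# location-addressable -- give an exact address/index, get back exactly what
# was stored -- while human memory is cue-addressable: a smell or an
# association pulls up whichever stored memory overlaps with it most.
location_memory = ["lion at the waterhole", "berries by the ridge", "storm last night"]
print(location_memory[0])   # exact, reliable retrieval by address

episodes = {
    frozenset({"smell:smoke", "dusk", "drums"}): "the night the camp nearly burned",
    frozenset({"smell:rain", "waterhole", "lion"}): "the lion at the waterhole",
}

def recall(cues):
    """Return the stored episode whose cues overlap most with the given cues."""
    return max(episodes.items(), key=lambda kv: len(kv[0] & cues))[1]

print(recall({"smell:rain", "lion"}))   # fuzzy, association-driven retrieval
```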

Fourth, Stuart Russell at Berkeley pointed out something I’d not thought about much. We spend a lot of time talking about UBI (go Andrew Yang!), but there is going to be a bigger need. We’ll need, Russell explains, to think about how to build a lot more emotional resilience in people: how to help them – through coaching, teaching, counseling, collaborating – get used to life without much work, and with a superior-in-many-respects intelligence doing the jobs we used to do.

It reminded me of this really thoughtful piece by Derek Thompson. I’ve not read a better meditation on what is coming for the “precariat” – and how they might cope with it – in part-time jobs, in communities, in artisanal work, in video games. If anything, Thompson’s piece shows up just how unthought-through most of the pretty dull discussion among the AI experts is about what is coming. They know the tech, but the implications, I guess, are for the rest of us to sort out. If only we were truly intelligent.
