Robots are like babies – as they learn from us they start teaching themselves

Predictive intelligence is learning from our every move - and figuring out how to enhance it

Waze, the traffic and navigation app owned by Google, apparently knows me pretty well. Recently, as I returned to my car in Rockaway Beach, New York - a place I had been visiting for a few days in a row - Waze asked, the moment I opened the app, if I was bound for my home address.

Waze was using, in essence, a form of Bayesian probabilistic reasoning[1] to make a guess about my destination. Given that for the past three days my travel patterns had been identical, it was a pretty sure bet that, of all the places I might be heading, "home" was top of the list.
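A minimal sketch of that kind of guess might look like the following: past trips act as evidence, and a small smoothing prior keeps every known destination in play. The destinations and counts here are invented for illustration - this is not Waze's actual model.

```python
from collections import Counter

def destination_posterior(history, prior_weight=1.0):
    """Toy Bayesian estimate: posterior over destinations given past trips.

    Counts of previous trips act as evidence; Laplace-style smoothing
    (prior_weight) keeps every known destination at non-zero probability.
    """
    counts = Counter(history)
    total = sum(counts.values()) + prior_weight * len(counts)
    return {dest: (counts[dest] + prior_weight) / total for dest in counts}

# Three recent trips from this spot ended at home, one at the office.
history = ["home", "home", "home", "office"]
posterior = destination_posterior(history)
print(max(posterior, key=posterior.get), posterior)
# "home" wins, with roughly two-thirds of the probability mass
```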

Such predictive intelligence is, of course, everywhere in Silicon Valley these days. Services such as Google Now and Apple Intelligence comb through the raw data of your emails, search, browsing history and calendar, among other sources, to anticipate what information you might need or what actions you might be planning to take, before you even think to ask. Predictive search seems to eerily presage what you are thinking, before you have even finished typing. Marketers increasingly want to know who their customers are before the business of selling begins.

Curiously, this is not so out of line with how the human mind works, as a recent - and increasingly influential - line of thinking in cognitive science contends. Called predictive processing or, more simply, the predictive brain[2], the basic idea is that rather than being a passive receptacle waiting to process whatever raw data the outside world sends in, the brain is an active, Bayesian machine: it generates inferences about that world and shapes how we perceive it.

Andy Clark, a professor of philosophy at the University of Edinburgh and author of Surfing Uncertainty: Prediction, Action and the Embodied Mind, uses the example of returning to his office and seeing a steaming cup of coffee he'd left on the table. Many of us might imagine that the process of seeing it is a bottom-up affair: it surges into view, like "an array of activated pixels", and as the representation takes shape, we match it against stored mental models of what an object such as a cup of coffee looks like.

Clark suggests another possibility. Upon re-entering his office, he writes, "My brain already commands a complex set of coffee-and-office involving expectations." This already complete top-down model sends a stream of predictions against the incoming sensory data, pre-emptively activating clusters of neurons to do the work of looking for that coffee. When errors are encountered (for example, someone moved the coffee to put a report on the desktop), "incorrect downward-flowing 'guesses'[3] yield error signals that propagate laterally and upwards and are used to leverage better guesses". To perceive the world, Clark writes, "is to successfully predict our own sensory states".
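Here is a toy version of that error-correcting loop, with a one-dimensional "scene" standing in for the visual world. It illustrates the gist of the idea - a top-down guess nudged by upward-flowing error signals - rather than Clark's formal account.

```python
import numpy as np

def perceive(prediction, sensory_input, learning_rate=0.5, steps=10):
    """Iteratively correct a top-down guess against sensory evidence."""
    guess = np.asarray(prediction, dtype=float).copy()
    sensory = np.asarray(sensory_input, dtype=float)
    for _ in range(steps):
        error = sensory - guess         # upward-flowing error signal
        guess += learning_rate * error  # leveraged into a better guess
    return guess

# The office model expects the coffee cup at position 0.0 on the desk,
# but someone has moved it to 1.0 in this one-dimensional toy scene.
print(perceive(prediction=[0.0], sensory_input=[1.0]))  # converges on ~1.0
```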

Interestingly, it's the errors that stand out. Clark notes that the reason we can't tickle ourselves is akin to the problem of trying to tell yourself a joke. However ticklish you may be, however funny the joke may be, you have already deployed, as Clark puts it, "a precise model of the mapping from our own motor commands to sensory (bodily) feedback". Because your body knows what to expect, the relevant neurons are suppressed and there's not enough room for surprise - which is key to both being tickled and laughing at jokes. Conversely, we can sometimes have a brain response to a missing auditory stimulus that is equal to the response to a stimulus that is present, because we predicted it would be there (think of the syndrome in which you feel your phone vibrating in your pocket when it is not).

All this is, at least metaphorically, how the idea of predictive intelligence might work. A smart digital assistant would not simply "look" at that day's calendar, note you have an appointment in central London at 1pm, and then start acting. It would already have known about the appointment (a piece of neuronal information lurking in an email), just as it would already have a mental model of what the traffic would be like that day (maybe it would even warn you of the probability of a "surge" price on Uber). Alternatively, take a smartphone camera. Instead of its owner opening it and waiting briefly as it adjusts to the light and other conditions, the phone would already know where it was, what time it was and what the weather was like. It would have pre-configured the camera appropriately (perhaps even pre-applying the appropriate Instagram filter). If it was wrong? All that information would go into improving the predictive model.
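A crude sketch of that pre-configuring camera might look like this. The context rules, settings and names are all invented for illustration; a real phone would learn such mappings from data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class CameraSettings:
    iso: int
    shutter: float  # exposure time in seconds

def predict_settings(hour, weather):
    """Guess exposure settings from context before the camera even opens."""
    if weather == "sunny" and 8 <= hour <= 18:
        return CameraSettings(iso=100, shutter=1 / 500)
    if weather == "overcast":
        return CameraSettings(iso=400, shutter=1 / 125)
    return CameraSettings(iso=1600, shutter=1 / 30)  # night or indoors

def record_miss(predicted, actual, corrections):
    """If the guess was wrong, log the miss to improve future guesses."""
    if predicted != actual:
        corrections.append((predicted, actual))

corrections = []
guess = predict_settings(hour=14, weather="sunny")
record_miss(guess, CameraSettings(iso=200, shutter=1 / 250), corrections)
```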

Of course, we do not often have enough experience of something to generate predictive models. We need to "tune" the generative model through learning, Clark says. "We learn to cancel out prediction errors relative to increasingly high-level goals." For example, learner drivers tend to look at the road immediately in front of the car, but as we gain experience, we learn it is usually more important to look further down the road. In Google's well-documented deep learning experiments, untrained algorithms were let loose in a new environment: YouTube. They did not know what cats were, but they soon learned that they seemed to be important on the site. They were learning what to look for.
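That tuning process can be caricatured as online, error-driven learning. In the hypothetical sketch below, a "learner driver" starts out weighting the road just ahead too heavily, and repeated prediction errors gradually shift attention further down the road; the numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two input features: the road just ahead, and the road further down.
# The steering that actually keeps the car on course depends mostly on
# the far feature, but the learner starts out fixated on the near one.
true_weights = np.array([0.2, 0.8])
weights = np.array([0.9, 0.1])

learning_rate = 0.1
for _ in range(500):
    scene = rng.normal(size=2)                 # near-road and far-road cues
    target = true_weights @ scene              # steering that works
    prediction = weights @ scene               # current guess
    error = target - prediction                # prediction error
    weights += learning_rate * error * scene   # cancel the error
print(weights.round(2))  # drifts towards [0.2, 0.8]
```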

Similarly, robots working in warehouses use a form of predictive processing of their own. When sent to retrieve objects, the machines are trained to filter out extraneous information such as shelves, overhead lights and other robots. Their neurons are channelled towards what they expect to see. But the next generation of robots, Clark hints, will use predictive processing to learn not only about their worlds but about themselves. In other words, they will be actors in their own predictive models.
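A bare-bones version of that expectation-driven filtering might look like the following, with invented labels standing in for the robot's detections: predicted background is suppressed, while the target and genuine surprises pass through.

```python
# Background the robot predicts and therefore suppresses (invented labels).
EXPECTED = {"shelf", "overhead_light", "robot"}

def salient(detections, target):
    """Keep only the target and anything the robot did not predict."""
    return [d for d in detections if d == target or d not in EXPECTED]

scene = ["shelf", "overhead_light", "tote_bin", "robot", "spilled_box"]
print(salient(scene, target="tote_bin"))  # ['tote_bin', 'spilled_box']
```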

  [1] Olshausen, B. (2004). Bayesian probability theory. Redwood Center for Theoretical Neuroscience, 36(3).
  [2] Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences.
  [3] Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

This article was originally published by WIRED UK