Knowledge is in the world

On knowledge that evolves in software and hardware, in organisms and machines

April 1, 2016
Jonathan Libov

"The fleshy water-conserving cactus stem constitutes a form of knowledge of the scarcity of water in the world of the cactus, and the elongated slender beak of the humming-bird is a manifestation of the knowledge of the structure of the flowers from which the bird draws nectar. In both cases it is a very partial and incomplete knowledge, but knowledge it is."

- Henry Plotkin in Darwin Machines

Uber recently introduced Uber Trip Experiences, “which connects riders with their favorite apps at the start of a trip when they may have some time to spare.” The notion of implicitly launching apps merely by dint of entering a taxi is, in some sense, an early fulfillment of a model of computer interaction driven more by context than by explicitly pointing your cursor or finger at the app you want.

At first I was confused. Where else, if not in a cab ride or on the subway or train, are people more likely to pull out their phones of their own volition? I’m usually dawdling on my phone from the moment I sit down to the moment I pay and leave. Why is it exactly that I need Uber to help me figure out which apps to launch?

It may, however, save me a few taps. And were my home more replete with connected appliances — Philips Hue, Nest — it would make sense for my Uber ride home to trigger those appliances.

You could frame it as a race between three brains. There is Uber’s brain, up in the cloud, which upon starting my ride home generates a row in a database and triggers some apps based on that information. There is the brain inside my own head, receiving input from my ears and eyes that I am, in fact, in a cab riding home. And there is the brain on my phone, with its GPS, accelerometer, and clock, which can identify that I appear to be on my way home at approximately the time of day when I would be headed home.

All three of those brains can launch an app. Which brain is best suited to do it?
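To make the three-brain race concrete, here is a minimal sketch of what the phone’s brain could do entirely on-device, assuming nothing more than two GPS fixes, a speed reading, and the clock. The coordinates, thresholds, and function names are all hypothetical, and this is not Uber’s or anyone’s actual logic; the point is only that “on my way home at roughly the usual hour” is inferable without the cloud ever hearing about it.

```python
from datetime import datetime, time
from math import radians, sin, cos, asin, sqrt

# Hypothetical home coordinates and a hypothetical "heading home" window.
HOME = (40.7282, -73.9942)
EVENING = (time(17, 30), time(21, 0))

def km_between(a, b):
    """Haversine distance in kilometers between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def probably_heading_home(fix_now, fix_earlier, speed_kmh, now):
    """Guess 'on my way home' from two GPS fixes, a speed estimate, and the clock."""
    in_evening = EVENING[0] <= now.time() <= EVENING[1]
    moving_like_a_car = speed_kmh > 15  # too fast to be on foot
    closing_in = km_between(fix_now, HOME) < km_between(fix_earlier, HOME)
    return in_evening and moving_like_a_car and closing_in

# A cab ride toward home at 6:45pm:
if probably_heading_home((40.74, -73.99), (40.75, -74.00), speed_kmh=30,
                         now=datetime(2016, 4, 1, 18, 45)):
    print("suggest: warm up the Nest, queue up tonight's reading app")
```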


Much as it took Facebook years to fulfill the promise of personalized advertising, it will take years for the cloud-hosted brains of Apple, Facebook, Google, and Amazon to understand us well enough on a personal level to really predict what we want to do at any particular moment. The current best implementation of this is when iOS and Android look up the address of your next appointment, estimate the time it will take to travel there, traffic included, and warn you when you need to leave. But calendars are also a narrow domain where human-assistant-like behavior is easiest to generate: whereas natural language processing flounders and flails almost everywhere else, it works flawlessly in Fantastical and fairly well in AI scheduling assistants like x.ai and Clara, and in Google and Apple Maps.
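Part of what makes the calendar case tractable is that, once the OS has an address and a traffic-aware travel estimate, the assistant behavior reduces to a subtraction. A minimal sketch, where `travel_minutes` stands in for whatever routing service the OS actually consults:

```python
from datetime import datetime, timedelta

def leave_by(appointment_at: datetime, travel_minutes: int, buffer_minutes: int = 10) -> datetime:
    """When to warn the user, given a traffic-aware travel estimate plus a small buffer."""
    return appointment_at - timedelta(minutes=travel_minutes + buffer_minutes)

# A 2:00pm appointment with a 35-minute drive in current traffic:
print(leave_by(datetime(2016, 4, 1, 14, 0), travel_minutes=35))
# -> 2016-04-01 13:15:00, i.e. the "time to leave" notification
```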

It’s not coincidental that the four companies that most want to understand and even predict their users’ wants and needs — Apple, Facebook, Google and Amazon — have all built mobile devices. I won’t waste words elaborating on how the data generated on our phones is the key to understanding what we do and say and think and want and need; everyone already gets that.

Most everyone also already grasps that Nest is Google’s foray into the home, Apple TV is Apple’s, Echo is Amazon’s, and maybe Oculus is Facebook’s. These are all devices that sit in your home, which spares the software a complex, sometimes faulty variable in its analysis: your geolocation. Matching wants and needs to the time of day becomes much harder when you also need to figure out where the person is, and hence what they might be doing.

Like “the fleshy water-conserving cactus stem” and “the elongated slender beak of the humming-bird” from the epigraph of this post, these devices’ station in your home represents a form of knowledge. And knowledge is physically manifested in all kinds of computing devices. Application-specific integrated circuits (ASICs) are distinguished from general-purpose chips in that, by virtue of the logic being built into the chip, they know what problem they are working on. You could say that the ASICs built into bitcoin mining machines know a thing or two about Bitcoin, that beacons know where they are, cars know that they’ll be driving on roads, and drones know that they’ll be flying in the air. ASICs, and anything you could fit a Raspberry Pi inside, may also know things that phones and clouds really don’t, and desktop computers definitely don’t. (The desktop computer, you might say, is like a disembodied brain resting on a table. Were you to rest Earth’s most intelligent brain, a human brain, on a table, it wouldn’t do much either.)

This is the essence of biological evolution: knowledge is more often baked into organisms’ physical beings, the means by which they perceive and act on the world, than into their brains. Cactuses didn’t evolve brains that would help them migrate to more water-rich areas; humming-birds didn’t evolve brains that would enable them to build plastic straws to suck out the nectar. One reason is that brains are very resource-expensive: it takes a lot of energy, in the form of food, to operate a brain. The more you can do with a smaller centralized brain, the better.

This is manifesting itself in interesting ways today. Moore’s Law now enables chips that fit in phones and are capable of running neural networks, the basis of machine intelligence. That makes it possible for applications to recognize objects in images or process language in ways that could previously only be done in the cloud. (One reason that Google Now is faster than Siri is that Siri sends the voice data to the cloud and back, whereas Google Now parses speech locally.) Moore’s Law also gives us the $5 Raspberry Pi Zero, which may enable today’s dumb sensors to graduate to full-on computing devices that may or may not need an internet connection in the moment to make sense of sensor data.
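The parenthetical about Siri and Google Now is really a claim about where the latency lives. Here is a rough sketch of the two paths, with stand-ins for the network call and the on-device model; none of the names resemble Apple’s or Google’s actual APIs.

```python
import time

def transcribe_via_cloud(audio_bytes, post, endpoint="https://speech.example.invalid/v1"):
    """Cloud path: upload the audio, wait out the network round trip, parse the reply."""
    started = time.monotonic()
    reply = post(endpoint, data=audio_bytes)  # the round trip dominates this path
    return reply["text"], time.monotonic() - started

def transcribe_on_device(audio_bytes, local_model):
    """Local path: run a smaller model on the phone's own chip; no network involved."""
    started = time.monotonic()
    return local_model(audio_bytes), time.monotonic() - started

# Stand-ins so the sketch runs: a fake HTTP call that sleeps for the round trip,
# and a fake local model that answers immediately.
def fake_post(url, data):
    time.sleep(0.25)  # pretend trip to the data center and back
    return {"text": "navigate home"}

print(transcribe_via_cloud(b"...", fake_post))                      # ('navigate home', ~0.25)
print(transcribe_on_device(b"...", lambda audio: "navigate home"))  # ('navigate home', ~0.0)
```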


The last few years of computing have been dominated by what it means to have a constant internet connection in our pockets. In 2006, while walking down the street, I only “knew” the things that were in my head. In 2016 I can pull out my smartphone and “know all the things” on the internet (more precisely, I can know all the things I know I can know).

It’s not as if that will change, but it is notable that the most interesting developments in the smartphone world of late — HealthKit, Apple Pay, neural networks running on a mobile chip, and encryption — are distinctly local technologies. And as Moore’s Law commoditizes cheap, tiny chips that we can spray across the physical world, and its coming end pushes us toward more specialized chips that “know” from the factory what kind of problem they were made to solve, “knowledge” becomes increasingly diffuse. Knowledge of driving, for example, will eventually move from your brain to the car. Knowledge of where your stuff is may move from your brain to the chips implanted in your stuff.

Humans have always used tools to extend their knowledge — what separates us from the hummingbird is that we can build straws and other tools to extract nectar. Likewise, we can use writing instruments to record knowledge, cars to move around, and so forth. What is so provocative about the current debates around client-side encryption is that the encrypted device is the first human instrument that is effectively uninspectable; the information stored there is as inaccessible as the information you’ve stored in your brain. We created the Fifth Amendment to protect that information, and we’re struggling to determine whether legislation will, or should, protect the brain we’ve extended into our personal computing devices. The answer will surely bear on all the devices that will become computerized in the not-so-distant future — cars, appliances, apparel — and the knowledge stored within them.

Ultimately, how you view this question may come down to this: Do you view the devices you buy or lease from Apple, Google, Facebook, and Amazon as extensions, or augmentations, of your own brain? Do you own that information as much as you own the information in your own brain? Or do you view these devices as replacements for chunks of your brain, devices that are effectively renting space inside your head? Whose knowledge is it?

Thanks to Max Bulger, Tom Critchlow and Joel Monegro for their feedback on this post