Humans learn like computers. And that’s why politics is a mess.

Humans learn through data compression. Data compression leads to data loss. And that leads to stubbornness and political division.

Let me explain.

Every day, each of us is exposed to countless sensory stimuli. These range from the excitation of photoreceptors in our fields of vision, to the sounds and smells that we encounter during a typical day of being human.

We obviously don’t retain the vast majority of that information though. You probably can’t remember the pair of socks that you wore on this day one year ago, or even the colour of the last car you saw, for example.

So how does our brain decide what information to hold onto, and what to discard?

When it comes to vision, instead of choosing to remember this or that part of our field of view, our brains essentially mix together every “pixel” that we can see, distilling them into a reduced representation that we can interpret in terms of objects and landscapes. Rather than remembering the colours of a million pixels, we remember things like, “there was a dog chasing a squirrel in the park.”

And the same thing will happen to you if you sit in on a presentation: you won’t retain the specific noises produced by the presenter, but rather a distilled, reduced representation of the entire event.

We have different words for these reduced representations: “meanings”, “memories”, “lessons”, and “take-home messages” are among them.

The process of forming a reduced representation of a set of sensory experiences is called “dimensionality reduction” in computer science. And these reduced representations are sometimes called “semantic embeddings” in machine learning.

Dimensionality reduction is what allows us to take controlled sips from the information firehose that’s pointed in our direction during every moment of our existence.
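As an illustration (emphatically not a model of how the brain actually does this), here is a toy sketch of dimensionality reduction using PCA, implemented with NumPy’s SVD. All of the sizes and variable names are invented for the example:

```python
# Compress 1000-dimensional "sensory" vectors down to 3 numbers with PCA.
import numpy as np

rng = np.random.default_rng(0)

# 200 "experiences", each a 1000-dimensional bundle of raw stimuli that
# secretly varies along only 3 underlying directions, plus a little noise.
latent = rng.normal(size=(200, 3))            # the true low-dim structure
mixing = rng.normal(size=(3, 1000))
raw = latent @ mixing + 0.1 * rng.normal(size=(200, 1000))

# PCA: centre the data, then keep the top 3 principal directions.
centred = raw - raw.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:3]                           # 3 x 1000

embedding = centred @ components.T            # each experience -> 3 numbers
reconstruction = embedding @ components + raw.mean(axis=0)

# The compressed version keeps most of the structure -- but not all of it.
# Whatever lives outside those 3 directions is gone for good.
kept = 1 - ((raw - reconstruction) ** 2).sum() / (centred ** 2).sum()
print(f"variance retained: {kept:.1%}")
```

The “take-home message” here is the 3-number embedding: a compact summary good enough for most purposes, at the cost of everything the kept directions can’t express.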

But it has a dangerous downside as well.

With dimensionality reduction comes information loss. In the best cases, the information that’s lost isn’t terribly relevant — like the number of eyelashes that your mother has, or the number of times you scratched your nose in the last 24 hours. Or the number of t-rex toes that were visible in the picture you just looked at.

Unfortunately, we often only realize that our brains have dropped a pertinent detail right when we need it.

“What colour were the thief’s eyes?”

“Which row did the usher direct us to, again?”

In these cases, some external influence causes us to notice that our brains have failed us. As a result, we feel uneasy at the prospect of picking the thief out of a lineup, or unsure which seat belongs to us. Our brains essentially throw an error, and that error manifests itself as a feeling of unease or a lack of certainty.

But we don’t always have the luxury of receiving an alert of this kind when we’ve failed to retain mission-critical data. The problem is, dimensionality reduction still leaves us with our “memories”, “lessons” and “take-home messages” — which we use to form our opinions — even if we fundamentally have no idea what facts those memories or lessons were based on.

As a result, a typical opinion-forming process might look something like this. First, I learn information A. Then, I form opinion B. Next, I forget information A, but hold on to my opinion B.

This might not seem like a problem, until you consider the possibility that information A might be updated, or disproven. Although I came to my opinion B as a direct result of information A, I may not remember how important A was in forming B, and might easily fail to update B even if I were to learn that A wasn’t true. If that happens, I’ve just developed an irrational confidence in my opinion B.
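The loop above can be sketched as a hypothetical toy agent (all names here are invented for illustration) that stores conclusions but discards the evidence they were based on:

```python
# A toy agent that forms opinions but keeps no record of why.
beliefs = set()

def learn(fact, conclusion):
    # Form opinion B from information A -- but store only B.
    beliefs.add(conclusion)

def retract(fact):
    # Fact A is disproven, but nothing links it to any belief,
    # so no opinion gets revisited.
    pass

learn("A: report claims X", "B: policy Y is bad")
retract("A: report claims X")
print(beliefs)  # opinion B survives even though A is gone
```

A more careful agent would store the dependency from A to B, so that retracting A triggers a review of B; the point of the sketch is that nothing forces us to keep that link.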

And this only gets worse once we consider confirmation bias, which will have had plenty of time to warp my data-gathering process, leading me to favour sources of information that point to opinion B. By the time A is disproven, changing my mind might be nearly impossible.

Consider how the modern culture of misleading clickbait article titles plays into this. When your eyes catch an article title on your Twitter feed, you form opinions based on that title. But you generally won’t click through right away to see if your opinion is really justified based on the full story, let alone challenge your interpretation by seeking out another perspective.

Because you won’t generally remember the facts that originally drove you to hold the views that you hold, the opinions you form are likely to be just as strong whether you did your research and read the entire article, or just skimmed a headline. And if your mind can be made up just as easily by skimming a headline as by reading several articles on a topic, that’s a good reason not to trust your own feelings about an issue before you’re confident you can remember the homework you did to get there.

To be clear, I’m certainly not saying that faulty dimensionality reduction is the only pathology that leads to irrational political stubbornness. Many other effects are also at play, including ingroup/outgroup dynamics, habit formation, and our limited information consumption bandwidths. But it’s an important piece of the puzzle, without which some of the more irrational aspects of our thought processes are destined to remain obscure.

Our opinions aren’t something that we can, or should, take for granted — they’re things we should regard with suspicion. Or at least, that’s my opinion.