Resist Google’s Attempts to Turn You Into a Robot

Users of Gmail—and there’s a good chance that’s you—have noticed an “upgrade” in the service recently, in which you have been given the opportunity to respond to a message with a few short phrases. You may have been on the receiving end of such an autoreply already, and perhaps you have used one too: They’re easy, convenient, often intuitive, and can go a long way toward reducing inbox clutter. A few articles have even appeared about the love and fanfare for the new feature.

There are certainly times when a short programmatic response to an email does the trick. Email threads that are near the end of their useful life are best killed off efficiently—when all you need to do is accept or reject a simple proposal, confirm a time or place of meeting, or indicate that you have completed a task. But these situations by no means represent a majority of emails, and giving in to the temptation to treat more complicated or nuanced emails the same way doesn’t often have a happy ending.

Here’s an email I received from a friend a few weeks ago:

[Screenshot of the email. Screenshots: the author]

Gmail suggested I respond to this message in one of the following ways:

[Screenshot: Gmail’s suggested replies]

I can think of circumstances in which I might want to use one of these responses: for example, if my intention were to permanently alienate my good friend or to signal that my descent into dementia had begun. But, on that day, none of them seemed quite on message, so I responded in the way that a friend should—with a considerate and sympathetic expression of my thoughts and feelings about this development in his life.

This is an extreme case, but it illustrates the cluelessness of Google’s algorithmic approach to the complexities of human communication. The choices it presents will probably improve over time, because you will be helping to improve them. Even if you choose not to use one of Gmail’s autoreplies, you still supply a useful data point for Google’s gargantuan heap of big data, one that says “well, that didn’t work.”

What’s more useful to Google, however, is when you do choose to respond to an email with an autoresponse (or, as Google calls them, “smart replies”). Here’s what’s probably going on under the hood:

  • With the world’s largest pile of written natural language representing dynamic human communication, Google is able to use deep learning combined with the methods of natural language processing to make inferences about the essential content and intent of your emails and to sort them into types.
  • Using these same machine learning methods, a trio of possible responses to a given type of email message is generated and tested. Every time a human uses one of these responses, a data point is supplied to Google that says “given message type A, a human has chosen response X as an appropriate one.”
  • Multiply this last step by a thousand or a million or a gazillion, to the point where a clear statistical pattern emerges, and Google can conclude with some confidence that when a person expresses ideas, thoughts, feelings, or questions that can be classified as type A, it’s reasonable for another human to respond with utterance X. (A toy sketch of this loop follows the list.)
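
To make that loop concrete, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the message types, the canned replies, and the keyword “classifier” are toy stand-ins for the deep-learning models described above, not anything Google has published.

```python
# A toy sketch of the smart-reply feedback loop described in the list above.
# The type names, canned replies, and classifier are hypothetical stand-ins.

from collections import Counter, defaultdict

def classify(email_text: str) -> str:
    """Step 1 (assumed): bucket an email into a coarse message type.
    A real system would use a learned model; keywords stand in here."""
    text = email_text.lower()
    if "meet" in text or "schedule" in text:
        return "scheduling"
    if "thank" in text:
        return "gratitude"
    return "news_update"

# Step 2 (assumed): each message type maps to a trio of candidate replies.
CANDIDATES = {
    "scheduling":  ["Works for me!", "Can we do it later?", "Confirmed."],
    "gratitude":   ["You're welcome!", "Anytime!", "Glad to help!"],
    "news_update": ["Thanks for sharing!", "Very cool!", "Interesting!"],
}

def suggest(email_text: str) -> list[str]:
    """Offer the trio of canned replies for this email's inferred type."""
    return CANDIDATES[classify(email_text)]

# Step 3: every click is a labeled data point, "given type A, a human
# chose reply X." At scale, these counts are the statistical pattern
# from which the system learns what a human "typically" says.
clicks = defaultdict(Counter)

def record_choice(email_text: str, chosen_reply: str) -> None:
    clicks[classify(email_text)][chosen_reply] += 1

def most_human_sounding_reply(message_type: str) -> str:
    return clicks[message_type].most_common(1)[0][0]

# Toy run: a few simulated users train the loop.
print(suggest("Big news about the house!"))          # the offered trio
record_choice("Big news about the house!", "Interesting!")
record_choice("More news about the move.", "Interesting!")
record_choice("Some news from my end.", "Thanks for sharing!")
print(most_human_sounding_reply("news_update"))      # -> Interesting!
```

The point of the sketch is the third step: every click you make is free labeled training data, which is why the suggestions can be expected to improve.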

Now think for a moment about what Google can do with this data. There may come a brave new world in which digital assistants do, in fact, sound like humans, where the things these assistants say or ask in response to human input are pretty much indistinguishable from what a real human would say. Except for one thing: When you respond with “Thanks for sharing!” or “Glad you enjoyed it!” or “Very cool!” to an email message, you are not responding as an in-the-wild human. You are responding as a human who was prompted by a robot.

So at the same time that Google is bringing robots closer to the singularity, in which their communication style is indistinguishable from that of humans, it is also dragging humans closer to the robots. It is not measuring your response as an unreconstructed human; it is measuring what you do as a human who has been conditioned by a robot.


Case in point: A family member and I frequently have to communicate about matters concerning the care of an elderly relative. None of the three of us lives in the same place, and so he and I often correspond about the matter by email. I send him an email when I have news worth sharing or when I would like his input about some new development.

A few times now, in response to such an email from me, I have received a reply such as “Interesting!” or “Sounds good to me.” Well, yeah. If it were not interesting, I would not have bothered putting it in an email to you, and the purpose of my writing was not to gauge your interest. He answers from his phone. He’s busy. He’s in a hurry. He apparently doesn’t want to be bothered about this now. He has given up the option of responding like a human because Google has given him license to do so.

The thing to remember about email is that it is a conversation. And when we think about conversing, we should think about Paul Grice and his 1967 lectures at Harvard. Grice made the brilliant and intuitive observation that ordinary conversation is a cooperative enterprise. Because of that, it is governed by the principle that contributions to conversations should facilitate the purpose of the exchange—a purpose that is generally understood and shared by participants. Grice formulated four maxims that are easily applicable to any conversation. We all follow them most of the time, even though we are never formally taught them.

  1. The maxim of quantity, where one tries to be as informative as one possibly can and gives as much information as is needed and no more.
  2. The maxim of quality, where one tries to be truthful and does not give information that is false or not supported by evidence.
  3. The maxim of relation, where one tries to be relevant and say things that are pertinent to the discussion.
  4. The maxim of manner, where one tries to be as clear, brief, and orderly as one can in what one says and avoids obscurity and ambiguity.

Ask yourself, before your finger or cursor flies to one of Gmail’s “smart replies,” whether you are violating one of these maxims. If you are, you are not only shortchanging your correspondent; you are training sophisticated robots to think that this is the sort of thing humans typically do.

You may have heard of the Turing Test, proposed by Alan Turing in 1950 as a test of a computer’s ability to exhibit behavior equivalent in intelligence to, or indistinguishable from, that of a human. A lot of folks are still working on this, and there have been many advances in recent years.

We will probably one day look back on the Turing Test as a historical milestone. Ideally, this will be because computers really have become so sophisticated that we do not know them from humans and not just because humans have jettisoned a portion of their humanity in numb lockstep with our robotic overlords.