
On the Virtues of Bias

As conscientious human beings, I believe most of us work hard to root out unwanted biases and view the world fairly and open-mindedly. In no way is the following post intended to undermine that noble effort.

But. (There's always a 'but', isn't there?)

Having spent countless hours during my master's degree watching little simulated robots navigate electronic mazes, I believe I have a different outlook on biases than most people do.

'Bias', in the machine learning sense, is merely another word for 'what to do if you lack sufficient data for an informed decision'. When our little reinforcement learners began navigating their mazes, they not only knew nothing about the shape of the maze, but they also knew nothing about themselves or the effects of their own actions. In essence, we asked them to choose among Curtain Number 1, Curtain Number 2, and Curtain Number 3. In response to that choice, the robot either turned left, turned right, or moved forward; the world changed; and the robot experienced either simulated happiness or simulated dissatisfaction as a result of the new world configuration.

Bumping into a wall generated dissatisfaction. Reaching a designated 'goal location' generated happiness. And the poor little robots, through trial and error and with very little ability to perceive the world around them, were left to flounder unaided until they either got it right or degenerated into hopeless behaviors (such as turning in eternal circles to avoid ever hitting any walls).
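For concreteness, here is a minimal sketch of the kind of learner I'm describing: tabular Q-learning on a toy grid, in Python. The maze layout, the reward values, and the learning constants are all invented for illustration; this is not our original code.

    import random

    # A toy version of the maze-navigation task: tabular Q-learning,
    # three actions, rewards for goals and penalties for wall bumps.

    MAZE = [                     # '#' = wall, 'G' = goal, '.' = open floor
        "#######",
        "#..#..#",
        "#..#.G#",
        "#.....#",
        "#######",
    ]
    ACTIONS = ["left", "right", "forward"]          # the three 'curtains'
    HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W (row, col) deltas

    def step(state, action):
        """Apply one action; return (new_state, reward)."""
        r, c, h = state
        if action == "left":
            return (r, c, (h - 1) % 4), 0.0
        if action == "right":
            return (r, c, (h + 1) % 4), 0.0
        dr, dc = HEADINGS[h]
        nr, nc = r + dr, c + dc
        if MAZE[nr][nc] == "#":                     # bumped a wall: dissatisfaction
            return (r, c, h), -1.0
        if MAZE[nr][nc] == "G":                     # reached the goal: happiness
            return (nr, nc, h), 10.0
        return (nr, nc, h), 0.0

    Q = {}                                          # (state, action) -> value
    alpha, gamma, epsilon = 0.5, 0.9, 0.1           # invented constants

    def q(s, a):
        return Q.get((s, a), 0.0)

    for episode in range(500):
        state = (1, 1, 1)                           # start in a corner, facing east
        for _ in range(200):                        # cap the episode length
            if random.random() < epsilon:           # a little randomness, so that
                action = random.choice(ACTIONS)     # trial and error covers the maze
            else:
                action = max(ACTIONS, key=lambda a: q(state, a))
            nxt, reward = step(state, action)
            best_next = max(q(nxt, a) for a in ACTIONS)
            Q[(state, action)] = q(state, action) + alpha * (
                reward + gamma * best_next - q(state, action))
            state = nxt
            if reward > 0:                          # found the goal; end the episode
                break

    # After training, the greedy policy can be read straight out of the table:
    state, steps = (1, 1, 1), 0
    while MAZE[state[0]][state[1]] != "G" and steps < 50:
        action = max(ACTIONS, key=lambda a: q(state, a))
        state, _ = step(state, action)
        steps += 1
    print("greedy rollout finished after", steps, "steps")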

Um. What we learned, to our initial displeasure, is that finding your way through a maze to a goal location without hitting walls is a very complex task if the robot has not been equipped with a pre-packaged warning mechanism that says "Watch out! That's a wall ahead of you. Bumping into it might hurt." And if you complicate the maze such that there are multiple possible goal locations, some of which generate great happiness and others of which generate only a modicum of happiness, your reinforcement learner is very likely to stumble upon a mediocre goal location and, in all subsequent learning trials, head straight for the one known solution without bothering to check whether there might be other, better solutions lurking behind one or more painful corners. (Machine learning researchers call this the exploration-versus-exploitation trade-off.)

Our solution to these and other problems was to introduce biases. We gave our little robots a preference for exploring unknown territory, an ingrained dislike for being near walls, and a little extra nudge that said "when in doubt, move forward". And voilà! They learned to navigate the maze.
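In code, one simple way to inject preferences like these is reward shaping: adding small hand-picked bonuses and penalties on top of the environment's own reward before the learner sees it. A sketch, where the weights are arbitrary and `visits` and `near_wall` stand in for whatever bookkeeping the agent keeps about its surroundings:

    def shaped_reward(base_reward, action, visits, near_wall):
        """Bias the raw reward with three hand-picked preferences."""
        bonus = 0.0
        if visits == 0:          # preference for unexplored territory
            bonus += 0.5
        if near_wall:            # ingrained dislike of being near walls
            bonus -= 0.2
        if action == "forward":  # "when in doubt, move forward"
            bonus += 0.1
        return base_reward + bonus

The learner then updates on the shaped reward instead of the raw one. The exact numbers matter far less than the direction of the nudge.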

Now, sometimes these biases turned out to be harmful, as when a goal location was proximate to a wall and the robot correspondingly wouldn't go near it. So we also encouraged our robots to value spontaneity (i.e., to choose their actions randomly at times), so that they'd be able to discover these gems of happiness in what appeared to be undesirable locations. And with time, as their understanding of themselves and their environment increased, they replaced their biases with knowledge, to the point where the biases were no longer relevant. They knew what to do. They no longer needed a bias to act as a guideline in the face of uncertainty.
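That pairing, occasional random actions that fade as experience accumulates, is essentially epsilon-greedy action selection with a decaying epsilon. A sketch, with an invented starting value and decay schedule:

    import random

    def choose_action(q_values, actions, episode,
                      eps_start=0.3, eps_decay=0.995):
        """Mostly exploit what we know; occasionally act on a whim."""
        epsilon = eps_start * (eps_decay ** episode)   # spontaneity fades with experience
        if random.random() < epsilon:
            return random.choice(actions)              # the random whim
        return max(actions, key=lambda a: q_values.get(a, 0.0))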

(A corollary: We quickly realized that if we did not introduce external biases, our reinforcement learners would produce their own, internal biases. It's just that these biases turned out to be counterproductive and not very smart: like a preference for turning left even though the optimal goal location resided at the end of a long, straight corridor.)
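One mundane way such accidental preferences appear: when every action looks equally good, a deterministic argmax returns whichever action happens to be listed first, and early reward updates can then harden that arbitrary tie-break into a habit. For instance:

    # With a freshly initialized table, every action ties at zero...
    q_values = {"left": 0.0, "right": 0.0, "forward": 0.0}
    print(max(q_values, key=q_values.get))   # ...so 'left' always wins the tie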

In retrospect, it's shameful what we subjected our poor little robots to in the name of science. I can't help feeling sorry for them: tossed adrift in a world they could barely perceive and were incapable of comprehending, told simply to "Do your best, and try to be happy." In the end what surprised me was not how much help the poor devils needed to accomplish their task, but the fact that they ever learned to do it at all.

That was a long time ago. In the years since, it has been my observation that at the fundamental level, human beings are quite similar to programmatic reinforcement learners. We flounder. We experiment. We occasionally spin in circles because avoiding pain seems far more important than progressing towards a goal. We even pass through a phase, called 'teenagerhood', in which we value thrills, experimentation, and the exploration of the unknown over the known safe paths.

And we need biases.

Yes, you heard me right. We need biases.

We don't need all biases, of course. There are good biases and bad biases, the difference (in my opinion) being whether they lead us towards actions that are generally beneficial or generally harmful. But without some form of bias during our formative years, we would either flounder in uncertainty or succumb to safe, suboptimal behaviors.

We have all seen what happens when biases are retained despite being inaccurate, when -- instead of being replaced and superseded by knowledge, as all biases eventually should be -- a bias becomes so strong that it attempts to twist the evidence into compliance with its preconceptions. This is a bad thing. But in our abhorrence of this bad thing that sometimes happens with biases, we must not fall prey to the misconception that biases are inherently bad.

I submit that bias is not evil, any more than fire is evil, or a knife is evil. We are all biased, and we must all be biased; otherwise we would be incapable of acting intelligently unless we also happened to be omniscient. To attempt to rid oneself of all bias in the false belief that bias is undesirable in and of itself is a grave fallacy. Ripping out a bias without replacing it with either (1) a better bias or (2) reliable information is to invite stagnation at best, and dreadful mistakes at worst.

In my opinion, the question "Am I biased?" is not nearly so useful as the similar yet substantially different question: "Am I acting on bias right now?" The first is a pointless question, always answerable by 'yes' and utterly useless in determining the rightness or wrongness of a given action. The second might open up a window for change. Because if your answer to the second question is 'yes' then the next logical question is: "Is this a situation where it is appropriate to act on bias, or had I better gather more information instead?"

And then, dear readers (those of you kind enough to stay through to the end of this rather lengthy post), we can do what my little simulated robots showed me how to do ten years ago.

We can learn.
Tags: life, writing