You seem to be making a value judgment between accurate and general classifiers, preferring the former. Why is this? Can't general classifiers be useful, too, to capture the first-order statistics of a broad range of input situations?

Well, there is a value judgment here, but not quite as you describe.

It's that, ideally, the system should evolve classifiers that are as general as possible, subject to an accuracy criterion. By "accuracy criterion" I mean, for example, that a classifier's payoff prediction should be correct to within a specified percentage of, say, the maximum available payoff. The system should try to find the most general classifiers that are still accurate in this sense. So I'm interested in both accuracy and generality.
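As a concrete illustration, here is a minimal sketch of such a criterion. The names (is_accurate, tolerance_fraction, and so on) are illustrative assumptions, not taken from any particular classifier-system implementation:

```python
# Minimal sketch of the accuracy criterion described above.
# tolerance_fraction plays the role of the "specified percentage",
# and max_payoff the maximum available payoff.

def is_accurate(prediction: float, actual_payoff: float,
                max_payoff: float, tolerance_fraction: float) -> bool:
    """True if the classifier's payoff prediction is within the
    specified fraction of the maximum available payoff."""
    error_threshold = tolerance_fraction * max_payoff
    return abs(prediction - actual_payoff) <= error_threshold

# Example: with a 5% tolerance and a maximum payoff of 1000,
# a prediction of 970 for an actual payoff of 1000 counts as accurate.
print(is_accurate(970.0, 1000.0, 1000.0, 0.05))  # True
print(is_accurate(900.0, 1000.0, 1000.0, 0.05))  # False
```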

Now, where does that leave "general classifiers [which] capture the first-order statistics of a broad range of input situations"?

Well, they are potentially allowed, but it depends on the value of the accuracy criterion. If the system, for whatever reason (its heredity, a human designer's wishes, etc.), has a very strict accuracy criterion, then such classifiers will probably not occur very often.

On the other hand, the environment may be so noisy or unpredictable, or the input space so huge, that a strict accuracy criterion cannot be maintained. Then it will be necessary to relax the criterion, in which case somewhat overgeneral classifiers will figure prominently in the population.
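To make the effect of relaxing the criterion concrete, here is a hedged sketch (the payoff values are made up for illustration): a general classifier that matches situations with differing payoffs fails a strict criterion but passes a relaxed one.

```python
# Illustrative only: suppose a general classifier matches two
# situations whose true payoffs differ; its prediction settles
# near their mean, so its error reflects that payoff spread.

payoffs = [1000.0, 800.0]                  # payoffs in the matched situations
prediction = sum(payoffs) / len(payoffs)   # ~900: the classifier's estimate
worst_error = max(abs(prediction - p) for p in payoffs)  # 100

max_payoff = 1000.0
for tolerance_fraction in (0.05, 0.15):    # strict, then relaxed
    threshold = tolerance_fraction * max_payoff
    verdict = "accurate" if worst_error <= threshold else "overgeneral"
    print(f"tolerance {tolerance_fraction:.0%}: {verdict}")

# Under the strict 5% criterion this classifier counts as overgeneral;
# relaxing the criterion to 15% lets it count as accurate and persist.
```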

