The answer is simple, and I apologize for the confusion. The fitness values are that high because, for display purposes, the F sub j values have been multiplied by 1000. The factor of 1000 plays no role in the operation of the system.

Given that, the figures make sense. Look, for instance, at the first four macroclassifiers in Figure 2. If you divide each macro's fitness by its numerosity, you get approximately 50. Now think in "microclassifier" (ordinary classifier) terms.

Each of the 18 micros represented by these four macros should have accuracy (kappa sub j) of 1.0, since they all have error ".00". Their relative accuracies (kappa prime sub j) should be less than 1/15, since any action set in which one of the first three occurs will also contain the fourth macroclassifier, with numerosity 14. Thus 1000 times the fitness (F sub j) of each should converge to a value less than 1000/15 = 67, which is not too far off.
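The bound above can be checked with a short sketch. The numerosity split among the first three macros is an assumption here (only the 14 and the total of 18 are given in the text), as are the variable names:

```python
# Sketch of the relative-accuracy argument (XCS-style).
# Assumed numerosities for the four macroclassifiers: the text gives
# only the fourth (14) and the total of 18 micros.
numerosities = [1, 1, 2, 14]
kappa = 1.0  # accuracy kappa_j: all four have error ".00"

# In any action set containing one of the first three macros, the
# numerosity-14 macro is also present, so at least 15 equally
# accurate micros share the accuracy sum.
action_set_micros = 1 + 14
kappa_prime = kappa / (action_set_micros * kappa)  # relative accuracy

# Displayed fitness is F_j * 1000, so each micro's displayed value
# should converge to something below this bound.
bound = 1000 * kappa_prime
print(bound)  # 1000/15, roughly 67
```

The observed fitness-per-numerosity of about 50 sits comfortably under that bound, which is why the figures are consistent with the x1000 display scaling.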

Thanks for bringing up this glitch.


From the description, it seems that fitness (F sub j) is an estimate of the relative accuracy for the macroclassifier: the average established by MAM is subsequently updated by the Widrow-Hoff delta rule with parameter beta. Relative accuracy (kappa prime sub j) is always a real between 0.0 and 1.0, so the F sub j values should also lie in that range, or approach it from whatever initial value F sub I is used. But in Figures 2 and 13 you show macroclassifiers with fitnesses as high as 782.0 or 384.0. How come?
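The update scheme described here can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual code: the names (BETA, update_fitness, experience) are invented, and the MAM phase is modeled as a running average over the first 1/beta updates:

```python
# Sketch of the fitness update: MAM averages the early updates, then
# the Widrow-Hoff delta rule with fixed rate beta takes over.
BETA = 0.2  # assumed value of the learning-rate parameter beta

def update_fitness(fitness, kappa_prime, experience):
    """Move fitness toward the relative accuracy kappa_prime."""
    if experience < 1.0 / BETA:
        # MAM phase: running average over the updates seen so far
        return fitness + (kappa_prime - fitness) / experience
    # Widrow-Hoff phase: fixed-rate delta rule
    return fitness + BETA * (kappa_prime - fitness)

f = 10.0  # some out-of-range initial fitness F_I
for exp in range(1, 50):
    f = update_fitness(f, 1.0 / 15, exp)
print(f)  # settles at kappa_prime = 1/15, i.e. inside [0, 1]
```

Whatever F sub I is, the estimate is pulled into [0.0, 1.0], which is what makes displayed values like 782.0 puzzling without the x1000 scaling.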