Yes. I apologize for the confusion. In the program itself, the error is updated by

(e sub j) <-- (e sub j) + beta * (|P - (p sub j)| / payoff-range - (e sub j))

i.e., the absolute difference between P and (p sub j) is normalized by dividing by the payoff range (1000). This keeps the error between 0 and 1, consistent with how it is treated in the rest of the paper.
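
For concreteness, here is a minimal sketch of that update in Python. The function name and the value beta = 0.2 are mine for illustration; payoff_range = 1000 is the payoff range mentioned above.

    # Sketch of the normalized error update; beta = 0.2 is an assumed
    # learning rate, payoff_range = 1000 is the payoff range above.
    def update_error(e_j, p_j, P, beta=0.2, payoff_range=1000.0):
        # Move e_j toward |P - p_j| / payoff_range, the normalized error.
        return e_j + beta * (abs(P - p_j) / payoff_range - e_j)

Provided P and (p sub j) both lie within the payoff range, the target of the update lies in [0, 1], so (e sub j) stays in [0, 1] as well.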

A question is whether the program could be written to run the same way without normalizing the error. It appears that the answer is no, given the way (e sub 0) is used as a threshold in the fitness calculation of Section 3.4. In other words, to change the accuracy formula so that error can be expressed in payoff units, you would have to express (e sub 0) in payoff units, which would imply knowing the payoff range.
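
To see why, here is an illustrative accuracy of the thresholded-exponential kind (the exact formula is assumed here for illustration, not quoted from the paper): below (e sub 0) the classifier counts as fully accurate, and above it accuracy decays exponentially.

    import math

    # Illustrative thresholded accuracy; the exact form and the constant
    # alpha = 0.1 are assumed, not quoted from the paper.
    def accuracy(e_j, e_0, alpha=0.1):
        if e_j < e_0:    # threshold: e_0 must share e_j's units
            return 1.0
        return math.exp(math.log(alpha) * (e_j - e_0) / e_0)

The comparison e_j < e_0 is only meaningful if both quantities are in the same units, so expressing the error in payoff units forces (e sub 0) into payoff units too, and converting a normalized (e sub 0) to payoff units means multiplying by the payoff range.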

However, in a fitness calculation that used (e sub 0) only as part of the exponential argument, not as a threshold, I think the relative accuracies, and thus the fitnesses, would come out unaffected by expressing the error in payoff units. So, with a different fitness calculation, the system might never need to know the payoff range. I just don't know at this point.
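
One way to make that concrete: assume a hypothetical threshold-free accuracy kappa = exp(-e/e_0). Since only the ratio e/e_0 enters, rescaling all the errors and (e sub 0) together into payoff units leaves the relative accuracies identical.

    import math

    # Hypothetical threshold-free accuracy kappa = exp(-e/e_0): rescaling
    # the errors and e_0 together (here by 1000) leaves the relative
    # accuracies unchanged, because only the ratio e/e_0 enters.
    def rel_accuracies(errors, e_0):
        kappas = [math.exp(-e / e_0) for e in errors]
        total = sum(kappas)
        return [k / total for k in kappas]

    normalized   = rel_accuracies([0.02, 0.05, 0.10], e_0=0.01)
    payoff_units = rel_accuracies([20.0, 50.0, 100.0], e_0=10.0)
    assert all(abs(a - b) < 1e-12 for a, b in zip(normalized, payoff_units))

Of course, this still requires choosing (e sub 0) on the payoff scale; whether that amounts to knowing the payoff range is exactly the open question above.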


Subsequent descriptions of the way the error (e sub j) is used suggest that the value should lie between zero and one, though I could find no explicit statement confirming that. If this is the case, then shouldn't the adjustment of (e sub j) include a normalization of the difference between P and (p sub j)?