
STEWART W. WILSON

Principal publications (updated to December 15, 2011)
N. B. Most articles are PDF but some older ones are gzipped Postscript files.
Use gzip -d to decode gzipped Postscript to Postscript.
In a few older files the pages display in reverse order. They should print out all right, however.
Please contact me if you have any problems.

"What Is a Learning Classifier System?" (2 pages, PDF)
XCS tutorial (GECCO-2003): Slides, plus accompanying Talk
ACM SIGEVOlution Interview with SWW (2011)
"What is pain?" (an informal debate on the nature of consciousness)



Wilson, S.W., "Coevolution of pattern generators and recognizers"
Learning Classifier Systems. 11th and 12th International Workshops, IWLCS 2008-2009. Revised selected papers.
Lecture Notes in Computer Science (LNCS-6471), J. Bacardit, W. Browne, J. Drugowitsch, E. Bernado-Mansilla,
and M. V. Butz (Eds.), Berlin, Springer-Verlag, pp. 47-56, (2010).

Proposed is an automatic system for creating pattern generators and recognizers that may provide new and human-independent insight into the pattern recognition problem. The system is based on a three-cornered coevolution of image-transformation programs.


Wilson, S.W., "Classifier conditions using gene expression programming"
In Learning Classifier Systems. International Workshops, IWLCS 2006-2007, Revised Selected Papers.
Lecture Notes in Artificial Intelligence (LNAI-4998), J. Bacardit, E. Bernado-Mansilla, and M. V. Butz (Eds.). Berlin, Springer-Verlag, pp. 206-217, (2008).
Presented, with additional results, at the International Workshop on Learning Classifier Systems (IWLCS-2008), Atlanta, GA, July 13, 2008.
A video of the talk can be found here. The slides are here.

The classifier system XCSF was modified to use gene expression programming for the evolution and functioning of the classifier conditions. The aim was to fit environmental regularities better than is typically possible with conventional rectilinear conditions. An initial experiment approximating a nonlinear oblique environment showed excellent fit to the regularities.


Wilson, S.W., "Three architectures for continuous action"
Learning Classifier Systems. International Workshops, IWLCS 2003-2005, Revised Selected Papers.
Lecture Notes in Artificial Intelligence (LNAI-4399), T. Kovacs, X. Llorà, K. Takadama, P. L. Lanzi,
W. Stolzmann, S. W. Wilson (Eds.). Berlin, Springer-Verlag, pp. 239-257, (2007).

Three classifier system architectures are introduced that permit the systems to have continuous (non-discrete) actions. One is based on interpolation, the second on an actor-critic paradigm, and the third on treating the action as a continuous variable homogeneous with the input. While the last architecture appears most interesting and promising, all three offer potential directions toward continuous action, a goal that classifier systems have hardly addressed.


Sigaud, O. and Wilson, S.W.,
"Learning classifier systems: A survey"
Soft Computing, vol. 11, no. 11, September, 2007, pp. 1065-1078

Learning Classifier Systems (LCSs) are rule-based systems that automatically build their ruleset. In Holland's original work, LCSs were seen as a model of the emergence of cognitive abilities through adaptive mechanisms, particularly evolutionary processes. After a renewal of the field focused more on learning, LCSs are now considered sequential decision problem-solving systems endowed with a generalization property. Indeed, from a Reinforcement Learning point of view, LCSs can be seen as learning systems that build a compact representation of their problem through generalization. More recently, LCSs have proved efficient at solving automatic classification tasks. The aim of the present contribution is to describe the state of the art of LCSs, emphasizing recent developments and focusing more on the sequential decision domain than on automatic classification.


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"Classifier prediction based on tile coding"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2006),
Maarten Keijzer et al., eds. ACM Press, New York, pp. 1497-1504, (2006).

This paper introduces XCSF extended with tile coding prediction: each classifier implements a tile coding approximator; the genetic algorithm is used to adapt both classifier conditions (i.e., to partition the problem) and the parameters of each approximator; thus XCSF evolves an ensemble of tile coding approximators instead of the typical monolithic approximator used in reinforcement learning. The paper reports a comparison between (i) XCSF with tile coding prediction and (ii) plain tile coding. The results show that XCSF with tile coding always reaches optimal performance, it usually learns as fast as the best parametrized tile coding, and it can be faster than the typical tile coding setting. In addition, analysis of the evolved tile coding ensembles shows that XCSF actually adapts local approximators following what is currently considered the best strategy to adapt the tile coding parameters in a given problem.
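A minimal sketch of the tile coding idea underlying such an approximator may help orient readers; this is illustrative only, not the authors' code, and all class, method, and parameter names are assumptions:

```python
# Minimal 1-D tile coding approximator: several offset tilings each
# contribute one active tile, and the prediction is the sum of the
# active tiles' weights. All names and constants are illustrative.

class TileCoder:
    def __init__(self, n_tilings=8, n_tiles=10, lo=0.0, hi=1.0):
        self.n_tilings = n_tilings
        self.lo = lo
        self.width = (hi - lo) / n_tiles
        # one weight per tile per tiling (+1 tile to absorb the offsets)
        self.w = [[0.0] * (n_tiles + 1) for _ in range(n_tilings)]

    def _active(self, x):
        # index of the single active tile in each offset tiling
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings
            yield t, int((x - self.lo + offset) / self.width)

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._active(x))

    def update(self, x, target, alpha=0.1):
        # distribute the prediction error across the active tiles
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t][i] += (alpha / self.n_tilings) * err
```

Training on samples of a target function drives the summed weights toward a piecewise-constant approximation whose effective resolution is the tile width divided by the number of tilings; in the paper's setting each classifier would own such an approximator over the subspace its condition matches.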


Butz, M.V., Lanzi, P.L., and Wilson, S.W.,
"Hyper-ellipsoidal conditions in XCS: Rotation, linear approximation, and solution structure"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2006),
Maarten Keijzer et al., eds. ACM Press, New York, pp. 1457-1464, (2006).

The learning classifier system XCS is an iterative rule-learning system that evolves rule structures based on gradient-based prediction and rule quality estimates. Besides classification and reinforcement learning tasks, XCS has been applied as an effective function approximator: it learns space partitions that enable a maximally accurate and general function approximation. Recently, the function approximation approach was improved by replacing (1) hyper-rectangular conditions with hyper-ellipsoids and (2) iterative linear approximation with the recursive least squares method. This paper combines the two approaches and assesses the usefulness of each. The evolutionary process is further improved by changing the mutation operator to implement an angular mutation that rotates ellipsoidal structures explicitly. Both enhancements improve XCS performance on various non-linear functions. We also analyze the evolving ellipsoidal structures, confirming that XCS stretches and rotates the evolving ellipsoids according to the shape of the underlying function. The results confirm that improvements in both the evolutionary approach and the gradient approach can result in significantly better performance.


Lanzi, P.L. and Wilson, S.W.,
"Using Convex Hulls to Represent Classifier Conditions"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2006),
Maarten Keijzer et al., eds. ACM Press, New York, pp. 1481-1488, (2006).

This paper presents a novel representation of classifier conditions based on convex hulls. A classifier condition is represented by a set of points in the problem space. These points identify a convex hull that delineates a convex region in the problem space. The condition matches all the problem instances inside such region. XCSF with convex conditions is applied to function approximation problems and its performance is compared to that of XCSF with interval conditions. The comparison shows that XCSF with convex hulls converges faster than XCSF with interval conditions. However, convex conditions usually do not produce more compact solutions.


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"Prediction update algorithms for XCSF: RLS, Kalman filter, and gain adaptation"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2006),
Maarten Keijzer et al., eds. ACM Press, New York, pp. 1505-1512, (2006).

We study how different prediction update algorithms influence the performance of XCSF. We consider three classical parameter estimation algorithms (NLMS, RLS, and Kalman filter) and four gain adaptation algorithms (K1, K2, IDBD, and IDD). The latter have been shown to perform comparably to the best algorithms (RLS and Kalman), but they have a lower complexity. We apply these algorithms to update classifier prediction in XCSF and compare the performances of the seven versions of XCSF on a set of real functions. Our results show that the best known algorithms still perform best: XCSF with RLS and XCSF with Kalman perform significantly better than the others. In contrast, when added to XCSF, gain adaptation algorithms perform comparably to NLMS, the simplest estimation algorithm, the same used in the original XCSF. Nevertheless, algorithms that perform similarly generalize differently. For instance: XCSF with Kalman filter evolves more compact solutions than XCSF with RLS and gain adaptation algorithms allow better generalization than NLMS.
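For orientation, the RLS update referred to above maintains, per classifier, a weight vector w and an inverse-correlation matrix P. A pure-Python sketch follows; the initialization constant delta and forgetting factor lam are illustrative assumptions, not values from the paper:

```python
# Recursive least squares (RLS) for a linear prediction p(x) = w.x.
# Pure-Python matrix arithmetic; all names and constants are illustrative.

def rls_init(n, delta=100.0):
    w = [0.0] * n
    # P starts as delta * identity (a weak prior on the weights)
    P = [[delta if i == j else 0.0 for j in range(n)] for i in range(n)]
    return w, P

def rls_update(w, P, x, y, lam=1.0):
    n = len(w)
    # gain vector k = P x / (lam + x' P x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [Px[i] / denom for i in range(n)]
    # correct the weights by the gain times the prediction error
    err = y - sum(w[i] * x[i] for i in range(n))
    for i in range(n):
        w[i] += k[i] * err
    # P <- (P - k x'P) / lam ; x'P equals Px since P stays symmetric
    for i in range(n):
        for j in range(n):
            P[i][j] = (P[i][j] - k[i] * Px[j]) / lam
    return w, P
```

With lam = 1.0 this is ordinary recursive least squares; lam < 1 discounts old samples, which matters when a classifier's condition, and hence its data distribution, is still evolving.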


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"XCS with computed prediction for the learning of Boolean functions"
Proceedings of the IEEE Congress on Evolutionary Computation (CEC2005),

Computed prediction represents a major shift in learning classifier system research. XCS with computed prediction, based on linear approximators, has so far been applied to function approximation, to single-step problems involving continuous payoff functions, and to multistep problems. In this paper we take this new approach in a different direction and apply it to the learning of Boolean functions, a domain characterized by highly discontinuous 0/1000 payoff functions. We also extend it to computed prediction based on functions, borrowed from neural networks, that may be more suitable for 0/1000 payoff problems: the perceptron and the sigmoid. The results we present show that XCSF with linear prediction performs optimally in typical Boolean domains and allows more compact solutions, evolving classifiers that are more general than those of XCS. In addition, perceptron-based and sigmoid-based prediction can converge slightly faster than linear prediction while producing slightly more compact solutions.


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"XCS with computed prediction in continuous multistep environments"
Proceedings of the IEEE Congress on Evolutionary Computation (CEC2005),

We apply XCS with computed prediction (XCSF) to tackle multistep reinforcement learning problems involving continuous inputs. In essence we use XCSF as a method of generalized reinforcement learning. We show that in domains involving continuous inputs and delayed rewards XCSF can evolve compact populations of accurate maximally general classifiers which represent the optimal solution to the target problem. We compare the performance of XCSF with that of tabular Q-learning adapted to the continuous domains considered here. The results we present show that XCSF can converge much faster than tabular techniques while producing more compact solutions. Our results also suggest that when exploration is less effective in some areas of the problem space, XCSF can exploit effective generalizations to extend the evolved knowledge beyond the frequently explored areas. In contrast, in the same situations, the convergence speed of tabular Q-learning worsens.


Wilson, S. W., "Continuous action"
Presented at the Eighth International Workshop on Learning Classifier Systems (IWLCS-2005), Washington, DC, USA, June 25, 2005. Slides

Ongoing work. Brief presentation of an idea for adding continuous action to XCS. A different approach from the IWLCS-2004 paper.


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"Extending XCSF beyond linear approximation"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2005),
H. G. Beyer et al., eds. ACM Press, New York, pp. 1827-1834 in Volume 2, (2005).

XCSF is the extension of XCS in which the classifier prediction is computed as a linear combination of classifier inputs and a weight vector associated to each classifier. XCSF can exploit classifiers' computable prediction to evolve accurate piecewise linear approximations of functions. In this paper, we take XCSF one step further and show how XCSF can be easily extended to allow polynomial approximations. We test the extended version of XCSF on various approximation problems and show that quadratic/cubic approximations can be used to significantly improve XCSF's generalization capabilities.
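The step from linear to polynomial prediction is small in code terms: the same linear update machinery is fed monomial features instead of the raw input. A hypothetical helper (the function name and the handling of the constant term are assumptions for illustration):

```python
def poly_features(x, degree=3):
    # [x, x^2, ..., x^degree] for a scalar input; the constant term is
    # typically appended separately by the predictor. Illustrative only.
    return [x ** d for d in range(1, degree + 1)]
```

A classifier's weight vector then simply grows to match the feature vector, so quadratic or cubic approximation needs no change to the update rule itself.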


Lanzi, P.L., Loiacono, D., Wilson, S.W., and Goldberg, D.E.,
"XCS with computed prediction in multistep environments"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2005),
H. G. Beyer et al., eds. ACM Press, New York, pp. 1859-1866 in Volume 2, (2005).

XCSF extends the typical concept of learning classifier systems through the introduction of computed classifier prediction. Initial results show that XCSF's computed prediction can be used to evolve accurate piecewise linear approximations of simple functions. In this paper, we take XCSF one step further and apply it to typical reinforcement learning problems involving delayed rewards. In essence, we use XCSF as a method of generalized (linear) reinforcement learning to evolve piecewise linear approximations of the payoff surfaces of typical multistep problems. Our results show that XCSF can easily evolve optimal and near optimal solutions for problems introduced in the literature to test linear reinforcement learning methods.


Wilson, S. W., "Optimal continuous policies: A classifier system approach"
Presented at the Seventh International Workshop on Learning Classifier Systems (IWLCS-2004), Seattle, WA, USA, June 26, 2004. (Extended abstract). Slides

The paper presents an extended classifier system that learns optimal continuous-valued actions in Markov environments.


Wilson, S. W., "Classifier systems for continuous payoff environments"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004),
K. Deb et al., eds. Springer-Verlag, Berlin, pp. 824-835 in Part II, (2004). Slides

Recognizing that many payoff functions are continuous and depend on the input state x, the classifier system architecture XCS is extended so that a classifier's prediction is a linear function of x. On a continuous nonlinear problem, the extended system, XCS-LP, exhibits high performance and low error, as well as dramatically smaller evolved populations compared with XCS. Linear predictions are seen as a new direction in the quest for powerful generalization in classifier systems.


Llorà, X. and Wilson, S. W., "Mixed decision trees: Minimizing knowledge representation bias in LCS"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2004),
K. Deb et al., eds. Springer-Verlag, Berlin, pp. 797-809 in Part II, (2004).

Learning classifier systems tend to inherit--a priori--a given knowledge representation language for expressing the concepts to learn. Hence, even before getting started, this choice biases what can be learned, becoming critical for some real-world applications like data mining. However, such bias may be minimized by hybridizing different knowledge representations via evolutionary mixing. This paper presents a first attempt to produce an evolutionary framework that evolves mixed decision trees of heterogeneous knowledge representations.


Butz, M. V., Kovacs, T., Lanzi, P. L., and Wilson, S. W.,
"Toward a theory of generalization and learning in XCS"
IEEE Transactions on Evolutionary Computation, Vol. 8, No. 1, February, 2004.
The downloadable file is a proof copy. To obtain the published article, go to
http://ieeexplore.ieee.org/xpl/tocresult.jsp?isNumber=28326&puNumber=4235
(requires membership in IEEE Xplore; otherwise I will email you a copy).

In this paper, we take initial steps toward a theory of generalization and learning in the learning classifier system XCS. We start from Wilson's generalization hypothesis, which states that XCS has an intrinsic tendency to evolve accurate, maximally general classifiers. We analyze the different evolutionary pressures in XCS and derive a simple equation that supports the hypothesis theoretically. The equation is tested with a number of experiments that confirm the model of generalization pressure that we provide. Then, we focus on the conditions, termed "challenges," that must be satisfied for the existence of effective fitness or accuracy pressure in XCS. We derive two equations that suggest how to set the population size and the covering probability so as to ensure the development of fitness pressure. We argue that when the challenges are met, XCS is able to evolve problem solutions reliably. When the challenges are not met, a problem may provide intrinsic fitness guidance or the reward may be biased in such a way that the problem will still be solved. The equations and the influence of intrinsic fitness guidance and biased reward are tested on large Boolean multiplexer problems. The paper is a contribution to understanding how XCS functions and lays the foundation for research on XCS's learning complexity.


Lanzi, P. L., Stolzmann, W., and Wilson, S. W., eds.
Advances in Learning Classifier Systems. 4th International Workshop, IWLCS 2001.
San Francisco, CA, USA, July 7-8, 2001. Revised Papers.
Lecture Notes in Artificial Intelligence (LNAI-2321)
Berlin: Springer-Verlag (2002). Chapter abstracts are here.

This book is the next in the series begun by the two volumes from the previous IWLCS workshops, which were published by Springer-Verlag as LNAI-1813 and LNAI-1996. It contains 13 revised and thoroughly reviewed papers from IWLCS-2001, in Theory and Applications sections, including an Appendix with an algorithmic description of ACS2, a recent anticipatory classifier system model.


Wilson, S. W., "Classifiers that approximate functions"
Natural Computing, 1(2-3), 211-234 (2002).

A classifier system, XCSF, is introduced in which the prediction estimation mechanism is used to learn approximations to functions. The addition of weight vectors to the classifiers allows piecewise-linear approximation, where the classifier's prediction is calculated instead of being a fixed scalar. The weight vector and the classifier's condition co-adapt. Results on functions of up to six dimensions show high accuracy. The idea of calculating the prediction leads to the concept of a generalized classifier in which the payoff prediction approximates the environmental payoff function over a subspace defined by the classifier condition and an action restriction specified in the classifier, permitting continuous-valued actions.
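The computed-prediction mechanism can be sketched as follows. This is a simplified illustration, not the paper's implementation; the normalized delta rule (NLMS), learning rate eta, and constant input x0 follow common XCSF descriptions, but the names and values here are assumptions:

```python
# Each classifier holds a weight vector and computes its prediction as a
# dot product with the input augmented by a constant term; weights adapt
# by a normalized delta rule. All names and constants are illustrative.

X0 = 1.0  # constant term appended to the input

class LinearPredictor:
    def __init__(self, n_inputs, eta=0.2):
        self.w = [0.0] * (n_inputs + 1)
        self.eta = eta

    def predict(self, x):
        xa = [X0] + list(x)
        return sum(wi * xi for wi, xi in zip(self.w, xa))

    def update(self, x, payoff):
        # normalized delta rule: step size scaled by the squared input norm
        xa = [X0] + list(x)
        norm2 = sum(xi * xi for xi in xa)
        err = payoff - self.predict(x)
        for i, xi in enumerate(xa):
            self.w[i] += (self.eta / norm2) * err * xi
```

In the full system each classifier owns such a predictor and updates it only on the inputs its condition matches, which is what yields the piecewise-linear fit: the condition and the weight vector co-adapt, as the abstract notes.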


Wilson, S. W., "Compact rulesets from XCSI"
Presented at the Fourth International Workshop on Learning Classifier Systems (IWLCS-2001), San Francisco, CA, July 7, 2001. (Omitted by mistake from the GECCO-2001 Workshop Proceedings) (2001). Slides

An algorithm is presented for reducing the size of evolved classifier populations. On the Wisconsin Breast Cancer dataset, the algorithm produced compact rulesets substantially smaller than the populations, yet performance in cross-validation tests was nearly unchanged. Classifiers of the rulesets expressed readily interpretable knowledge about the dataset that should be useful to practitioners.


Wilson, S. W., "Function approximation with a classifier system"
Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001),
L. Spector et al., eds. Morgan Kaufmann, San Francisco, CA, pp. 974-981 (2001). Slides

A classifier system, XCSF, is introduced in which the prediction estimation mechanism is used to learn approximations to functions. The addition of weight vectors to the classifiers allows piecewise-linear approximation. Results on functions of up to six dimensions show high accuracy. An interesting generalization of classifier structure is suggested.


Lanzi, P. L., Stolzmann, W., and Wilson, S. W., eds.
Advances in Learning Classifier Systems. Third International Workshop, IWLCS 2000.
Lecture Notes in Artificial Intelligence (LNAI-1996)
Berlin: Springer-Verlag (2001). Chapter abstracts are here.

This book constitutes the thoroughly refereed post-proceedings of the Third International Workshop on Learning Classifier Systems, IWLCS 2000, held in Paris, France in September 2000. The 13 revised full papers presented have gone through two rounds of reviewing and selection. Also included is a comprehensive LCS bibliography listing more than 600 entries as well as an appendix. The papers are organized in topical sections on theory, applications, and advanced architectures.


Lanzi, P. L. and Wilson, S. W.,
"Toward optimal classifier system performance in non-Markov environments"
Evolutionary Computation, 8(4), 393-418 (2000).

Wilson's (1994) bit-register memory scheme was incorporated into the XCS classifier system and investigated in a series of non-Markov environments. Two extensions to the scheme proved important for reaching optimal performance in the harder environments. The first was an exploration strategy in which exploration of external actions was probabilistic as in Markov environments, but internal "actions" (register settings) were selected deterministically. The second was use of a register having more bit-positions than were strictly necessary to resolve environmental aliasing. The origins and effects of the two extensions are discussed.


Holmes, J. H., Lanzi, P. L., Stolzmann, W., and Wilson, S. W.,
"Learning classifier systems: new models, successful applications"
Information Processing Letters, 82(1), 23-30 (2002).

Rules are an accepted means of representing knowledge for virtually every domain. Traditional machine learning methods derive rules by exploring sets of examples using statistical or information theoretic techniques. Alternatively, rules can be discovered through methods of Evolutionary Computation such as Genetic Algorithms (GA) and Learning Classifier Systems (LCS).
In recent years, new models of Learning Classifier Systems have been developed which have resulted in successful applications in a wide variety of domains (e.g., autonomous robotics, classification, knowledge discovery, modeling). These models have led to a resurgence of this area which for a certain period appeared almost at a dead end. This paper overviews the recent developments in learning classifier systems research, the new models, and the most interesting applications, suggesting some of the most relevant future research directions.


Butz, M. V. and Wilson, S. W., "An algorithmic description of XCS"
In Lanzi, P. L., Stolzmann, W., and S. W. Wilson (Eds.), Advances in Learning Classifier Systems. Third International Workshop (IWLCS-2000),
Lecture Notes in Artificial Intelligence (LNAI-1996). Berlin: Springer-Verlag (2001).
[An improved version, with clarifications and corrections, of: Technical Report No. 2000017, Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, April, 2000.]

A concise description of the XCS classifier system's parameters, structures, and algorithms is presented as an aid to research. The algorithms are written in modularly structured pseudo code with accompanying explanations.


Meyer, J.-A., Berthoz, A., Floreano, D., Roitblat, H., and Wilson, S.W., eds., From Animals to Animats 6: Proceedings of The Sixth International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (2000).


Wilson, S. W., "Mining oblique data with XCS"
In Lanzi, P. L., Stolzmann, W., and S. W. Wilson (Eds.), Advances in Learning Classifier Systems. Third International Workshop (IWLCS-2000),
Lecture Notes in Artificial Intelligence (LNAI-1996). Berlin: Springer-Verlag (2001).

The classifier system XCS was investigated for data mining applications where the dataset discrimination surface (DS) is generally oblique to the attribute axes. Despite the classifiers' hyper-rectangular predicates, XCS reached 100% performance on synthetic problems with diagonal DS's and, in a train/test experiment, competitive performance on the Wisconsin Breast Cancer dataset. Final classifiers in an extended WBC learning run were interpretable to suggest dependencies on one or a few attributes. For data mining of numeric datasets with partially oblique discrimination surfaces, XCS shows promise from both performance and pattern discovery viewpoints.


Lanzi, P. L., Stolzmann, W., and Wilson, S. W., eds.
Learning Classifier Systems. From Foundations to Applications
Lecture Notes in Artificial Intelligence (LNAI-1813)
Berlin: Springer-Verlag (2000). Chapter abstracts are here.

Provides a survey of the current state of the art of LCS and highlights some of the most promising research directions. The first part presents views of well-known researchers on what learning classifier systems are. The second part is devoted to advanced topics of current interest, including alternative representations, methods for evaluating rule utility, and extensions to existing classifier system models. The final part is dedicated to promising applications in areas like data mining, medical data analysis, economic trading agents, aircraft maneuvering, and autonomous robotics. An appendix provides a 467-entry comprehensive LCS bibliography.


Wilson, S.W., "Get real! XCS with continuous-valued inputs"
From Festschrift in Honor of John H. Holland, May 15-18, 1999 (pp. 111-121), L. Booker, S. Forrest, M. Mitchell, and R. Riolo (eds.). Center for the Study of Complex Systems, The University of Michigan, Ann Arbor, MI.
Also appears in Lanzi, Stolzmann, and Wilson (2000), above.

Classifier systems have traditionally taken binary strings as inputs, yet in many real problems such as data inference, the inputs have real components. A modified XCS classifier system is described that learns a non-linear real-vector classification task.


Wilson, S.W., "State of XCS classifier system research"
In Lanzi, P. L., Stolzmann, W., and Wilson, S. W., eds.,
Learning Classifier Systems. From Foundations to Applications
Lecture Notes in Artificial Intelligence (LNAI-1813)
Berlin: Springer-Verlag (2000).

XCS is a new kind of learning classifier system that differs from the traditional one primarily in its definition of classifier fitness and its relation to contemporary reinforcement learning. Advantages of XCS include improved performance and an ability to form accurate maximal generalizations. This paper reviews recent research on XCS with respect to representation, predictive modeling, internal state, noise, and underlying theory and technique. A notation for environmental regularities is introduced.


Wilson, S.W., "Generalization in the XCS classifier system".
In Genetic Programming 1998: Proceedings of the Third Annual Conference (pp. 665-674),
J. Koza et al., eds., San Francisco, CA: Morgan Kaufmann (1998).
[Includes the material of two unpublished papers, "Generalization in XCS" and
"Generalization in Evolutionary Learning".]

This paper studies two changes to XCS, a classifier system in which fitness is based on prediction accuracy and the genetic algorithm takes place in environmental niches. The changes were aimed at increasing XCS's tendency to evolve accurate, maximally general classifiers and were tested on previously employed "woods" and multiplexer tasks. Together the changes bring XCS close to evolving populations whose high-fitness classifiers form a near-minimal, accurate, maximally general cover of the input and action product space. In addition, results on the multiplexer, a difficult categorization task, suggest that XCS's learning complexity is polynomial in the input length and thus may avoid the "curse of dimensionality", a notorious barrier to scale-up. A comparison between XCS and genetic programming in solving the 6-multiplexer suggests that XCS's learning rate is about three orders of magnitude faster in terms of the number of input instances processed.


Pfeifer, R., Blumberg, B., Meyer, J.-A., and Wilson, S.W., eds., From Animals to Animats 5: Proceedings of The Fifth International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (1998).


Wilson, S.W., "Explore/exploit strategies in autonomy".
In From animals to animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 325-332), P. Maes, M. Mataric, J. Pollack, J.-A. Meyer, S. Wilson, eds., Cambridge MA: The MIT Press/Bradford Books (1996).

Within a reinforcement learning framework, ten strategies for autonomous control of the explore/exploit decision are reviewed, with observations from initial experiments on four of them. Control based on prediction error or its rate of change appears promising. Connections are made with explore/exploit work by Holland (1975), Thrun (1992), and Schmidhuber (1995).


Maes, P., Mataric, M., Meyer, J.-A., Pollack, J., and Wilson, S.W., eds., From Animals to Animats 4: Proceedings of The Fourth International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (1996).


Wilson, S.W., "Classifier fitness based on accuracy".
Evolutionary Computation, 3(2), 149-175 (1995).

In many classifier systems, the classifier strength parameter serves as a predictor of future payoff and as the classifier's fitness for the genetic algorithm. We investigate a classifier system, XCS, in which each classifier maintains a prediction of expected payoff, but the classifier's fitness is given by a measure of the prediction's accuracy. The system executes the genetic algorithm in niches defined by the match sets, instead of panmictically. These aspects of XCS result in its population tending to form a complete and accurate mapping X x A -> P from inputs and actions to payoff predictions. Further, XCS tends to evolve classifiers that are maximally general, subject to an accuracy criterion. Besides introducing a new direction for classifier system research, these properties of XCS make it suitable for a wide range of reinforcement learning situations where generalization over states is desirable.
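The accuracy measure underlying this fitness definition is commonly summarized in later XCS descriptions as a thresholded power law of the classifier's prediction error. A sketch; the parameter names (eps0, alpha, nu) are conventional in that literature, and the default values here are illustrative:

```python
# Accuracy as a function of a classifier's prediction error epsilon:
# full accuracy below the threshold eps0, a sharply falling power law
# above it. Default parameter values are illustrative.

def accuracy(epsilon, eps0=10.0, alpha=0.1, nu=5.0):
    if epsilon < eps0:
        return 1.0
    return alpha * (epsilon / eps0) ** (-nu)
```

Fitness is then derived from accuracy relative to the other classifiers in the same action set, which is what drives the population toward accurate, maximally general rules.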


Todd, P.M., Wilson, S.W., Somayaji, A.B., and Yanco, H.A., "The Blind Breeding the Blind: Adaptive Behavior without Looking".
In From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior (pp. 228-237), D. Cliff, P. Husbands, J.-A. Meyer, and S.W. Wilson, eds., Cambridge, MA: The MIT Press/ Bradford Books (1994).

Sensors and internal states are often considered necessary components of any adaptively behaving organism, providing the information needed to adapt a creature's behavior in response to conditions in its external or internal environment. But adaptive, survival-enhancing behavior is possible even in simple simulated creatures lacking all direct contact with the environment--evolutionarily shaped blind action may suffice to keep a population of creatures alive and reproducing. In this paper, we consider the evolution of the behavioral repertoires of such sensor-less creatures in response to environments of various types. Different spatial and temporal distributions of food result in evolution of very different behavioral strategies, including the use of looping movements as time-keepers in these otherwise cognitively challenged creatures. Exploring the level of adaptiveness available in even such simple creatures as these serves to establish a baseline to which the adaptive behavior of animats with sensors and internal states can be compared.


Cliff, D., Husbands, P., Meyer, J.-A., and Wilson, S.W., eds., From Animals to Animats 3: Proceedings of The Third International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (1994).


Wilson, S.W., "ZCS: a zeroth level classifier system".
Evolutionary Computation, 2(1), 1-18 (1994).

A basic classifier system, ZCS, is presented that keeps much of Holland's original framework but simplifies it to increase understandability and performance. ZCS's relation to Q-learning is brought out, and their performances compared in environments of two difficulty levels. Extensions to ZCS are proposed for temporary memory, better action selection, more efficient use of the genetic algorithm, and more general classifier representation.


Davis, L., Wilson, S.W., and Orvosh, D., "Temporary Memory for Examples Can Speed Learning in a Simple Adaptive System".
In From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior (pp. 313-320), J.-A. Meyer, H.L. Roitblat, and S.W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford Books (1993).

A temporary memory for recently experienced examples was coupled with BOOLE, a previously developed one-step classifier system that learns incrementally. Regimes were found in which the performance of the combination was superior to that of BOOLE alone, whether the task was to maximize average performance with respect to number of external trials, or the sum of external plus internal trials. The results suggest that storing and periodically reviewing a limited number of recent examples may enhance learning rates in otherwise strictly incremental adaptive systems.


Todd, P.M. and Wilson, S.W., "Environment Structure and Adaptive Behavior from the Ground Up".
In From Animals to Animats 2: Proceedings of the Second International Conference on Simulation of Adaptive Behavior (pp. 11-20), J.-A. Meyer, H.L. Roitblat, and S.W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford Books (1993).

We describe a framework for exploring the evolution of adaptive behaviors in response to different physical environment structures. We focus here on the evolving behavior-generating mechanisms of individual creatures, and briefly mention some approaches to characterizing different environments in which various behaviors may prove adaptive. The environments are described initially as simple two-dimensional grids containing food arranged in some layout. The creatures in these worlds can have evolved sensors, internal states, and actions and action-triggering conditions. By allowing all three of these components to evolve, rather than prespecifying any of them, we can explore a wide range of behavior types, including "blind" and memoryless behaviors. Our system is simple and well-defined enough to allow complete specification of the range of possible action-types (including moving, eating, and reproducing) and their effects on the energy levels of the creature and the environment (the bioenergetics of the world). Useful and meaningful ways of characterizing the structures of environments in which different behaviors will emerge remain to be developed.


Meyer, J.-A., Roitblat, H.L., and Wilson, S.W., eds., From Animals to Animats 2: Proceedings of The Second International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (1993).


Wilson, S.W., " Toward a GA solution of the discovery problem".
(extended abstract). In Collected Abstracts for the First International Workshop on Learning Classifier Systems, October 6-8, 1992, Houston, Texas.

Wilson, S.W., " Classifier system mapping of real vectors".
(extended abstract). In Collected Abstracts for the First International Workshop on Learning Classifier Systems, October 6-8, 1992, Houston, Texas.

Wilson, S.W., " GA-Easy Does Not Imply Steepest-Ascent Optimizable".
In Proceedings of The Fourth International Conference on Genetic Algorithms (pp. 85-89), R. K. Belew and L. B. Booker, eds., San Mateo, CA: Morgan Kaufmann Publishers (1991).

It is shown that there are many functions which are GA-easy but not readily optimizable by a basic hill-climbing technique. The results, including a comparison of the genetic algorithm with a population of hill-climbers, provide insight into the operation of the GA and suggest further study of GA-easiness.


Meyer, J.-A. and Wilson, S.W., eds., From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press/Bradford Books (1991).


Wilson, S.W., "The Animat Path to AI".
In From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior (pp. 15-21), J.-A. Meyer and S.W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford Books (1991).

A research methodology is proposed for understanding intelligence through simulation of artificial animals ("animats") in progressively more challenging environments while retaining characteristics of holism, pragmatism, perception, categorization, and adaptation that are often underrepresented in standard AI approaches to intelligence. It is suggested that basic elements of the methodology should include a theory/taxonomy of environments by which they can be ordered in difficulty--one is offered--and a theory of animat efficiency. It is also suggested that the methodology offers a new approach to the problem of perception.


Wilson, S.W., " Perceptron Redux: Emergence of Structure".
Physica D, 42(1-3), 249-256 (1990). Republished in Emergent Computation, S. Forrest (ed.), MIT Press/Bradford Books.

Perceptrons were evolved that computed a rather difficult nonlinear Boolean function. The results with this early and basic form of emergent computation suggested that when genetic search is applied to its structure, a perceptron can learn more complex tasks than is sometimes supposed. The results also suggested, in the light of recent work on classifier systems, that to hasten the emergence of an emergent computation it is desirable to provide evaluative feedback at a level as close as possible to that of the constituent local computations.


Parodi, A., Bonelli, P., Sen, S. and Wilson, S.W., "Newboole: A Fast GBML System".
In Proceedings of The Seventh International Conference on Machine Learning (pp. 153-159), Palo Alto, California: Morgan Kaufmann (1990).

Genetics-based machine learning systems are widely regarded by machine learning researchers as slow learners. In this paper, we propose an improvement of Wilson's classifier system BOOLE that shows how the learning rates of genetics-based machine learning systems can be greatly improved. The modification consists of a change in the reinforcement component. We then compare the performance of this modified BOOLE, called NEWBOOLE, with that of a neural net using back propagation on a difficult Boolean learning task, the multiplexer function. The comparison shows that NEWBOOLE learns significantly faster.


Smith, S.J. and Wilson, S.W., "Rosetta: Toward A Model of Learning Problems".
In Proceedings of the Third International Conference on Genetic Algorithms (pp. 347-350), Los Altos, California: Morgan Kaufmann (1989).

This paper proposes a canonical model of a machine learning problem. The language and structure of the model characterize learning problems independently from the learning system that solves them. To facilitate this decoupling of the problem and the learner the concept of information pathways or "channels" is introduced. Explicitly specifying the bandwidth and content of these channels can more completely define the learning problem. In addition, a small vocabulary of problem descriptors is introduced. It is hoped that these descriptors and the learning problem model will allow for a more precise definition of learning problems in the future and will provide a common language of comparison among researchers.


Wilson, S.W. and Goldberg, D.E., " A Critical Review of Classifier Systems".
In Proceedings of the Third International Conference on Genetic Algorithms (pp. 244-255), Los Altos, California: Morgan Kaufmann (1989).

The current state of classifier system development is examined with emphasis on challenges and unsolved problems. Suggestions related to the bucket-brigade architecture, the mechanics of bidding and payments, and classifier syntax follow a review of past research.

N.B. Several of the issues in this paper are addressed--maybe solved--in later papers on classifier systems.


Wilson, S.W., "Bid Competition and Specificity Reconsidered".
Complex Systems, 2(6), 705-723 (1988).

Experiments were conducted with respect to two classifier system mechanisms: the bid competition and the use of classifier specificity in bidding and payments. The experiments employed a simplified classifier system and so may not accurately reflect the behavior of the standard system. Nevertheless, the results indicated that, in general, (1) specificity should not be factored into amounts deducted from a classifier's strength, (2) the bid competition does not improve performance and does not encourage default hierarchies, and (3) default hierarchies will form under a somewhat different algorithm than the standard one.


Wilson, S.W., "The Genetic Algorithm and Simulated Evolution",
In Artificial Life: Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems, C. Langton, ed., Santa Fe Institute Studies in the Sciences of Complexity, Volume VI, Reading, MA: Addison-Wesley (1989).

A scheme is described for simulating the evolution of multicellular systems. The scheme is based on a representation for biological development in which the genotypes are sets of production-like growth rules that are executed to produce cell aggregates--the phenotypes. Evolution of populations, through phenotype selection and genotype variation, occurs according to the method of the genetic algorithm. Some examples of the development representation in 1-dimensional creatures are given.


Wilson, S.W., "Classifier Systems and the Animat Problem,"
Machine Learning, 2, 199-228 (1987).

This paper characterizes and investigates, from the perspective of machine learning and, particularly, classifier systems, the learning problem faced by animals and autonomous robots (here collectively termed animats). We suggest that, to survive in their environments, animats must in effect learn multiple disjunctive concepts incrementally under payoff (needs-satisfying) feedback. A review of machine learning techniques indicates that most relax at least one of these constraints. In theory, classifier systems satisfy the constraints, but tests have been limited. We show how the standard classifier system model applies to the animat learning problem. Then, in the experimental part of the paper, we specialize the model and test it in a problem environment satisfying the constraints and consisting of a difficult, disjunctive Boolean function drawn from the machine learning literature. Results include: learning the function in significantly fewer trials than a neural-network method; learning under payoff regimes that include both noisy payoff and partial reward for suboptimal performance; demonstration, in a classifier system, of a theoretically predicted property of genetic algorithms: the superiority of crossovers to point mutations; and automatic control of variation (search) rate based on system entropy. We conclude that the results support the classifier system approach to the animat problem, but suggest work aimed at the emergence of behavioral hierarchies of classifiers to offset slower learning rates in larger problems.


Wilson, S.W., "Hierarchical Credit Allocation in a Classifier System,"
In Proceedings of the Tenth International Joint Conference on Artificial Intelligence (pp. 217-220), Los Altos, CA: Morgan Kaufmann (1987).
A longer version appears in Genetic Algorithms and Simulated Annealing (pp. 104-115), Davis, L., ed., London: Pitman (1987).

Learning systems which engage in sequential activity face the problem of properly allocating credit to steps or actions which make possible later steps that result in environmental payoff. In the classifier systems studied by Holland and others, credit is allocated by means of a "bucket-brigade" algorithm through which, over time, environmental payoff in effect flows back to classifiers which take early, stage-setting actions. The algorithm has the advantages of simplicity and locality, but may not adequately reinforce long action sequences. We suggest an alternative form of the algorithm, and of the system's operating principles, designed to induce behavioral hierarchies. The hierarchy's modularity would keep all bucket-brigade chains short, and thus more reinforceable and more rapidly learned, while overall action sequences could still be long.
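The backward flow of payoff described in this abstract can be illustrated with a toy example. The sketch below is a deliberate caricature, not Holland's full bucket brigade (it fixes a single chain of always-firing classifiers and omits matching, bidding competition, and the GA); the chain length, bid fraction, and payoff value are invented for illustration.

```python
# Toy bucket brigade over a fixed chain of classifiers. Each classifier
# pays a fraction of its strength as a bid to its predecessor (the
# stage-setter); only the last classifier receives external payoff.
# All values here are invented for illustration.

def bucket_brigade(strengths, bid_ratio=0.1, payoff=100.0, episodes=200):
    """Run the chain repeatedly; payoff gradually flows backward,
    raising the strengths of early, stage-setting classifiers."""
    s = list(strengths)
    for _ in range(episodes):
        for i in range(len(s)):
            bid = bid_ratio * s[i]
            s[i] -= bid              # classifier pays its bid...
            if i > 0:
                s[i - 1] += bid      # ...to the previous stage-setter
        s[-1] += payoff              # external payoff to the final act
    return s

chain = bucket_brigade([10.0, 10.0, 10.0])
# Over many episodes every strength in the chain approaches
# payoff / bid_ratio, i.e. the early actions end up fully credited.
```

The toy also shows the weakness the abstract points to: credit reaches the front of the chain only one link per episode, so long chains are reinforced slowly, which is exactly the motivation for keeping chains short via hierarchy.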


Wilson, S.W., "The Genetic Algorithm and Biological Development,"
In Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms (pp. 247-251), Grefenstette, J.J., ed., Hillsdale, NJ: Lawrence Erlbaum Associates (1987).

A representation for biological development is described for simulating the evolution of simple multi-cellular systems using the genetic algorithm. The representation consists of a set of production-like growth rules constituting the genotype, together with the method of executing the rules to produce the phenotype. Examples of development in 1-dimensional creatures are given.


Wilson, S.W., "Quasi-Darwinian Learning in a Classifier System,"
In Proceedings of the Fourth International Workshop on Machine Learning (pp. 59-65), Los Altos, CA: Morgan Kaufmann (1987).

Classifier systems (Holland, 1986) have a distinct Darwinian flavor, and in this respect contrast sharply with most other learning systems. In this paper we bring out various aspects of the contrast, and provide an example of classifier system learning which illustrates its quasi-Darwinian operation.


Wilson, S.W., " Knowledge Growth in an Artificial Animal."
In Proceedings of an International Conference on Genetic Algorithms and Their Applications (pp. 16-23), Grefenstette, J.J., ed., Hillsdale, NJ: Lawrence Erlbaum Assoc. (1985).

Results are presented of experiments with a simple artificial animal model acting in a simulated environment containing food and other objects. Procedures within the model that lead to improved performance and perceptual generalization are discussed. The model is designed in the light of an explicit definition of intelligence which appears to apply to all animal life. It is suggested that study of artificial animal models of increasing complexity would contribute to understanding of natural and artificial intelligence.


Wilson, S.W., "Strobe imagery: A scanning model"
Research Memo RIS No. 38r,
The Rowland Institute for Science, Cambridge, MA, USA (1986).

An early suggestion of Grey Walter is combined with recent understanding of the retino-cortical mapping in a model to account for the principal imagery sometimes seen in uniform flashing lights. The model postulates dual linear cortical scan-like sampling processes whose interaction with the flash produces the images. Existing explanations of the phenomena are reviewed, and a prediction which tests the model is presented.


Wilson, S.W., "Adaptive 'Cortical' Pattern Recognition,"
In Proceedings of an International Conference on Genetic Algorithms and Their Applications (pp. 188-196), Grefenstette, J.J., ed., Hillsdale, NJ: Lawrence Erlbaum Assoc. (1985).

It is shown that a certain model of the primate retino-cortical mapping "sees" all centered objects with the same "object-resolution", or number of distinct signals, independent of apparent size. In an artificial system, this property would permit recognition of patterns using templates in a cortex-like space. It is suggested that with an adaptive production system such as Holland's classifier system, the recognition process could be made self-organizing.


Wilson, S.W., "Cortical scanning: Evidence from strobe observations",
Unpublished manuscript (1984).

Subjects described imagery seen while viewing a flashing strobe lamp with closed eyes. From the principal images, dual linear cortical scan-like processes are inferred, and then used as the basis of a photo-mechanical simulation of the images. Implications for cortical processing are presented.


Wilson, S.W., "On the Retino-Cortical Mapping",
Int. J. Man-Machine Studies, 18, 361-369, (1983).

Based on Hubel & Wiesel's physiological findings on the projection from retina to cortex, a schematic model of that stage of visual processing is constructed and its properties investigated. The projection or mapping appears to carry out an automatic "normalization of description" for the same object independent of retinal image size. This property suggests new concepts regarding (1) contrast sensitivity, (2) the nature and role of indirect vision, (3) the role of eye movements and (4) the recognition of patterns and the analysis of scenes.


Wilson, S.W., "'Aubert processing' and intelligent vision",
Unpublished Polaroid Corporation Technical Memorandum, (1981).

An 1857 experiment implies that the human visual system sees centered objects similarly, independent of their size and angle of rotation. This property would make practical the use of templates in an artificial vision system, and would overcome many of the scale, resolution, and grain problems of linear vision techniques. (Dedicated to the memory of Jerome Y. Lettvin.)


Wilson, S.W., "Interactive Lectures".
Technology Review, vol. 74, no. 3, January, 1972.

The interactive lecture system (U.S. Pat. 3942268 inter alia) used voice recordings and a sketching device to allow an individual to learn from and ask questions of articulate experts such as Carl Sagan. The system anticipated many elements now part of Salman Khan's excellent Khan Academy and in addition uniquely included the ability to ask questions.


Land, E.H., and Wilson, S.W., "Education and the Need to Know".
Technology Review, vol. 69, no. 3, January, 1967.

Experiments are described in which an individual teenage student would interactively question and direct an apparent machine containing recordings by experts in astronomy, physics, and other subjects. The expert was in fact in another room, so that it was possible for the student to go in practically any direction within the subject. Results showed long periods of interaction, fascinating questions and directions of inquiry, and an evident freedom encouraged by the student's belief that he or she was dealing with recordings and not a live person. The paper summarizes the second author's MIT PhD thesis. The work led to the interactive lecture system described above.