Models of human skill, or human control strategy, that accurately emulate dynamic human behavior have far-reaching potential in areas ranging from robotics to virtual reality to intelligent vehicle highway systems. Significant challenges arise in modeling human skill, however. Human control strategy defies analytic representation: little if anything is known about the structure, order, or granularity of an individual's internal controller. It is both dynamic and stochastic in nature, and the mapping from sensory inputs to control action outputs can be highly nonlinear for a given task. Developing an accurate and useful model of such a dynamic phenomenon is therefore frustrated by our poor understanding of its underlying basis. Consequently, modeling by observation, rather than by physical derivation, is becoming an increasingly popular paradigm for characterizing a wide range of complex processes, including human control strategy. This type of modeling is said to constitute learning, since the model is derived not from first principles, but from observed instances of experimental data, known collectively as the training set.
The main strength of modeling by learning is that no explicit physical model is required; this, however, is also its biggest weakness. On the one hand, we are not restricted by the limitations of current scientific knowledge, and can model processes for which we have not yet developed an adequate understanding. On the other hand, the lack of scientific justification detracts from the confidence we can place in the learned human skill models. Moreover, most learning approaches today use some static error measure as the test of convergence for the learning algorithm. While this measure may be useful during training, it offers few, if any, guarantees about the dynamic behavior of the resulting learned model. To address these problems, we propose a post-training model validation procedure, which characterizes the totality of the system trajectories generated by the learned model. We discuss our approach to learning human control strategy models in the context of a driving simulator. Finally, we report results in validating the learned control strategy models.
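As a hypothetical illustration of why a static error measure is not enough (the dynamics, names, and numbers below are our own stand-ins, not taken from this work): a model can achieve a small one-step prediction error on the training set yet behave quite differently when run in closed loop, so validation should simulate the learned model and examine the trajectories it generates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the observed process: a stable first-order
# response perturbed by noise (purely illustrative).
def observed_step(x):
    return 0.9 * x + rng.normal(scale=0.05)

# Collect a training trajectory.
xs = [1.0]
for _ in range(200):
    xs.append(observed_step(xs[-1]))
xs = np.array(xs)

# Learn a one-step predictor x[t+1] ~ a * x[t] by least squares.
a = np.dot(xs[:-1], xs[1:]) / np.dot(xs[:-1], xs[:-1])

# Static error measure: one-step residual on the training set.
static_rmse = np.sqrt(np.mean((xs[1:] - a * xs[:-1]) ** 2))

# Dynamic check: roll the learned model out in closed loop,
# feeding its own predictions back in, and compare the
# trajectory it generates against the observed one.
sim = [xs[0]]
for _ in range(len(xs) - 1):
    sim.append(a * sim[-1])
sim = np.array(sim)
dynamic_rmse = np.sqrt(np.mean((xs - sim) ** 2))

print(f"one-step (static) RMSE:    {static_rmse:.3f}")
print(f"closed-loop (dynamic) RMSE: {dynamic_rmse:.3f}")
```

The one-step residual only reflects the noise level, while the closed-loop error accumulates over the whole trajectory; the gap between the two is exactly what a post-training, trajectory-level validation is meant to expose.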
We briefly review our general approach to abstracting human skill into computational models, and then focus on driving as one particular human control strategy. We describe the experimental driving simulator used to collect data from human subjects and to test learned models of driving strategy. Finally, we describe the control strategy models that we use in the subsequent section to illustrate the model validation procedure. Model validation is an important problem in machine learning for dynamic systems, if learned models are to be exploited to their full potential. Such a measure is especially relevant in learning and modeling human control strategy, where little is known about the structure, order, or granularity of the underlying human controller. We have demonstrated the viability of the proposed method in validating human control strategy for the driving task, and have proposed further avenues of research in this direction.