Inductive Learning
-
Gonzalez, A. J., Daroszewski, S., and Hamilton, H. J.,
"Determining the Incremental Worth of Members of an Aggregate
Set through Difference-based Induction," International
Journal of Intelligent Systems, 1999.
Calculating the incremental worth or weight of the individual components of an aggregate set when only the total worth or weight of the whole set is known is a problem common to several domains. In this article we describe an algorithm capable of inducing such incremental worth from a database of similar (but not identical) aggregate sets. The algorithm focuses on finding aggregate sets in the database that exhibit minimal differences in their corresponding components (referred to here as attributes and their values). This procedure isolates the dissimilarities between nearly similar aggregate sets so that any difference in worth between the sets is attributed to these dissimilarities. In effect, this algorithm serves as a mapping function that maps the makeup and overall worth of an aggregate set into the incremental (or relative) worth of its individual attributes. It could also be categorized as a way of calculating interpolation vectors for the attributes in the aggregate set. The algorithm builds a classification tree similar to that used in ID3 and C4.5 [Quinlan, 1983; Quinlan, 1993]. It distributes all aggregate sets in the database according to their attributes and their values. It then groups together those with the same attributes and values. Each leaf of the classification tree then will contain a group of aggregate sets that are identical to each other insofar as their attributes and their values. Members of groups belonging to two sibling leaves (having the same immediate parent) differ from each other in the value of exactly one attribute. Thus, any difference in the worth of the sets in those groups can be attributed to that one difference. The worth of the aggregate sets in these groups can be averaged when the data are noisy. This algorithm was found to work well when applied to the real estate appraisal domain.
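The core idea of the abstract above can be sketched in a few lines. The paper builds a classification tree whose sibling leaves differ in exactly one attribute value; the simplified sketch below applies the same sibling-leaf condition directly by comparing pairs of aggregate sets, attributing any worth difference to the single differing attribute and averaging over repeated observations to handle noise. The record layout and function name are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import combinations
from collections import defaultdict

def incremental_worths(records):
    """Estimate the incremental worth of attribute values from pairs of
    aggregate sets that differ in the value of exactly one attribute.

    records: list of (attributes, worth) pairs, where attributes is a dict
    mapping attribute names to values and worth is the set's total worth.
    Returns a dict mapping (attribute, value_a, value_b) to the average
    observed worth difference (worth with value_b minus worth with value_a).
    """
    diffs = defaultdict(list)
    for (a, wa), (b, wb) in combinations(records, 2):
        if a.keys() != b.keys():
            continue  # only compare sets described by the same attributes
        differing = [k for k in a if a[k] != b[k]]
        if len(differing) == 1:  # the sibling-leaf condition from the paper
            k = differing[0]
            # order the value pair canonically so signs are consistent
            if str(a[k]) <= str(b[k]):
                diffs[(k, a[k], b[k])].append(wb - wa)
            else:
                diffs[(k, b[k], a[k])].append(wa - wb)
    # averaging smooths noisy worth data, as the abstract suggests
    return {key: sum(v) / len(v) for key, v in diffs.items()}
```

For example, in a real-estate-style database where two otherwise identical houses differ only in having a garage, the difference in their sale prices is attributed to the garage attribute.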
-
Gonzalez, A. J. and Gross, D. L., "Learning Tactics from a
Sports Game-based Simulation," International Journal of
Computer Simulation, 1995.
A significant portion of knowledge about what computers can and cannot do has traditionally been determined from computer game-playing experiments. One of the earliest performance tests for computers was the game of chess. This was due primarily to the fact that games of strategy provided microworlds of the appropriate levels of complexity. In addition, the progress could be measured easily by playing performance and rank.
With some notable exceptions [11], one major problem in developing computer opponents for games of strategy has been the general absence of a learning capability. The early game-playing programs used search methods to simulate intelligent play. This method was effective for playing finite games of perfect information (e.g., tic-tac-toe), but was ineffective for most other types of strategy games. Later programs used techniques from pattern recognition and decision theory, but these provided only a limited solution. The purpose of this article is to describe a method of simulating intelligent opponents in imperfect information games, where strategy is of the utmost importance. The domain used for this study is physical sports, particularly the game of American football. The program described has the capacity to learn and adapt to an opponent's strategy and to respond to unanticipated situations (i.e., situations not contained in its knowledge base).