On Merging the Fields of Neural Networks and Adaptive Data Structures to Yield New Pattern Recognition Methodologies
Abstract
The aim of this talk is to describe a pioneering research endeavour that merges two quite distinct fields in Computer Science so as to yield fascinating results: the well-established fields of Neural Networks (NNs) and Adaptive Data Structures (ADS). The field of NNs deals with the training and learning capabilities of a large number of computing elements (i.e., the neurons), each possessing minimal computational properties. The field of ADS, on the other hand, concerns designing, implementing and analyzing data structures which adaptively change with time so as to optimize some access criterion. In this talk, we shall demonstrate how these fields can be merged, so that the neural elements are themselves linked together by a specific data structure, which can be a singly-linked list, a doubly-linked list, or even a Binary Search Tree (BST). While the results are quite generic and can potentially open many new avenues for further research, we shall, as a prima facie case, present results in which a Self-Organizing Map (SOM) with an underlying BST structure is adaptively re-structured using conditional rotations. These rotations on the nodes of the tree are local, are performed in constant time, and are invoked only when they are guaranteed to decrease the Weighted Path Length (WPL) of the entire tree. As a result, the algorithm, referred to as the Tree-based Topology-Oriented SOM with Conditional Rotations (TTO-CONROT), converges in such a manner that the neurons are ultimately placed in the input space so as to represent its stochastic distribution, while, simultaneously, the neighborhood properties of the neurons obey the structure of the best BST representing the data.
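To make the notion of a conditional rotation concrete, the following is a minimal sketch, not the authors' code, of the underlying idea: each BST node maintains a counter tau, the number of accesses falling in its subtree, and after an access the accessed node is promoted one level iff the resulting change in the Weighted Path Length is negative. The class and function names here are illustrative assumptions; only the rotation criterion is taken from the description above.

```python
class Node:
    """A BST node carrying tau, the access count of its whole subtree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.parent = None
        self.tau = 0        # accesses to the subtree rooted here

def tau(n):
    return n.tau if n else 0

def insert(root, key):
    """Plain (unbalanced) BST insertion; tau is only updated on accesses."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        root.left.parent = root
    else:
        root.right = insert(root.right, key)
        root.right.parent = root
    return root

def rotate_up(n):
    """Single, constant-time, local rotation promoting n over its parent."""
    p, g = n.parent, n.parent.parent
    if n is p.left:
        moved = n.right     # subtree that changes parent
        p.left = moved
        n.right = p
    else:
        moved = n.left
        p.right = moved
        n.left = p
    if moved:
        moved.parent = p
    n.parent, p.parent = g, n
    if g:
        if g.left is p:
            g.left = n
        else:
            g.right = n
    # tau bookkeeping: n inherits p's old count; p loses n's old subtree
    # but gains the moved subtree.
    n.tau, p.tau = p.tau, p.tau - n.tau + tau(moved)

def access(root, key):
    """Search for key, update tau along the path, then conditionally rotate
    the accessed node upward iff that is guaranteed to decrease the WPL.
    Returns the (possibly new) root."""
    n = root
    while n is not None:
        n.tau += 1
        if key == n.key:
            break
        n = n.left if key < n.key else n.right
    if n is None or n.parent is None:
        return root         # key absent, or already at the root
    p = n.parent
    # Change in WPL if n is promoted: promoting a left child n raises n and
    # n's left subtree by one level and lowers p and p's right subtree, so
    # the rotation pays off iff psi = 2*tau(n) - tau(n.right) - tau(p) > 0
    # (symmetrically for a right child).
    if n is p.left:
        psi = 2 * tau(n) - tau(n.right) - tau(p)
    else:
        psi = 2 * tau(n) - tau(n.left) - tau(p)
    if psi > 0:
        rotate_up(n)
        if n.parent is None:
            return n        # n was promoted over the old root
    return root
```

Because tau is available at every node, the test psi > 0 inspects only the accessed node, its parent, and one child, which is why each conditional rotation costs constant time regardless of the size of the tree.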
Acknowledgements: The research associated with this paper was done together with my students including Rob Cheetham, David Ng and Cesar Astudillo.