Neural Networks & Fuzzy Systems

Adaptive Resonance Theory (ART)

Introduction:- ART stands for "Adaptive Resonance Theory", a family of neural networks introduced by Stephen Grossberg in 1976. The basic ART system is an unsupervised learning model. The term "resonance" refers to the resonant state of the network, in which a category prototype vector matches the current input vector closely enough. ART matching leads to this resonant state, which permits learning; the network learns only in its resonant state.

ART neural networks are capable of self-organizing stable recognition clusters from arbitrary sequences of input patterns.

  • ART-1 can cluster binary input vectors.
  • ART-2 can cluster real-valued input vectors.

ART systems are well suited to problems that require online learning of large and evolving databases.
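
To make the clustering behaviour concrete, below is a minimal Python sketch of a simplified ART-1 loop. It assumes binary numpy vectors with at least one active bit, a simplified choice function, and the standard vigilance test; the parameter names rho (vigilance) and alpha (tie-breaking constant) are illustrative, not part of any fixed API.

    import numpy as np

    def art1_cluster(inputs, rho=0.7, alpha=0.001):
        """Cluster binary vectors with a simplified ART-1 loop.

        inputs : sequence of 0/1 numpy vectors (each with at least one 1)
        rho    : vigilance in (0, 1]; higher values force finer categories
        alpha  : small constant in the choice function, breaks ties
        """
        prototypes = []                     # learned category prototypes
        labels = []
        for x in inputs:
            x = np.asarray(x, dtype=float)
            # Choice function: score each category by its overlap with x.
            scores = [np.sum(np.minimum(x, w)) / (alpha + np.sum(w))
                      for w in prototypes]
            winner = None
            for j in np.argsort(scores)[::-1]:        # best candidates first
                match = np.sum(np.minimum(x, prototypes[j])) / np.sum(x)
                if match >= rho:                      # vigilance test passed: resonance
                    prototypes[j] = np.minimum(x, prototypes[j])  # fast learning
                    winner = int(j)
                    break                             # otherwise: reset, try next category
            if winner is None:                        # nothing resonates: create a category
                prototypes.append(x.copy())
                winner = len(prototypes) - 1
            labels.append(winner)
        return labels, prototypes

Raising rho forces closer matches before resonance, so the network forms more, finer-grained categories; new categories are added on the fly, which is what makes this style of learning suitable for evolving data.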

Adaptive Resonance Theory (ART):- The real world presents situations where data is continuously changing. In such situations, every learning system faces the plasticity-stability dilemma: "A system must be able to learn in order to adapt to a changing environment (i.e. it must be plastic), but constant change can make the system unstable, because the system may learn new information only by forgetting everything it has learned so far."

This contradiction between plasticity and stability is called the plasticity-stability dilemma. The back-propagation algorithm suffers from this stability problem.

  • ART has a self-regulating control structure that allows autonomous recognition and learning.
  • ART requires no supervisory control or algorithmic implementation.

Neural networks learn through supervised and unsupervised means; hybrid approaches are becoming increasingly common as well.

Ø  In supervised learning, the input and the expected output of the system are provided, and the ANN is used to model the relationship between the two. Given an input set x and a corresponding output set y, an optimal rule is determined such that y = f(x) + e, where e is an approximation error that needs to be minimized. Supervised learning is useful when we want the network to reproduce the characteristics of a certain relationship.
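
As a concrete instance of the y = f(x) + e setup, the sketch below substitutes an assumed linear model fitted by least squares in place of a full ANN; the data, the true coefficients, and the noise level are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=200)
    e = 0.1 * rng.standard_normal(200)          # approximation error / noise
    y = 3.0 * x + 0.5 + e                       # y = f(x) + e with f(x) = 3x + 0.5

    # Fit f by least squares, which minimizes the summed squared error e.
    X = np.column_stack([x, np.ones_like(x)])   # design matrix with bias column
    w, b = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"learned f(x) = {w:.2f}*x + {b:.2f}")  # close to 3.00*x + 0.50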

Ø  In unsupervised learning, the data and a cost function are provided. The ANN is trained to minimize the cost function by finding a suitable input-output relationship. Given an input set x and a cost function C(x, y) of the input and output sets, the goal is to minimize the cost through a proper selection of f (the relationship between x and y). At each training iteration, the trainer provides the input to the network, and the network produces a result. This result is put into the cost function, and the total cost is used to update the weights. Weights are continually updated until the system output produces a minimal cost. Unsupervised learning is useful in situations where a cost function is known, but no data set that minimizes that cost function over a particular input space is known.
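
The sketch below illustrates this loop with online competitive learning (essentially online k-means) standing in for the network: the assumed cost C(x, y) is the squared distance between an input and the winning weight vector, and each iteration moves the winner so as to reduce that cost. The data and parameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0.0, 0.3, (100, 2)),   # two blobs of 2-D points
                      rng.normal(2.0, 0.3, (100, 2))])

    k, lr = 2, 0.05
    w = rng.normal(1.0, 0.5, (k, 2))                    # initial weight vectors
    for epoch in range(20):
        for x in rng.permutation(data):
            j = np.argmin(np.sum((w - x) ** 2, axis=1)) # winner = lowest-cost output
            w[j] += lr * (x - w[j])                     # update winner, cost decreases
    print(np.round(w, 2))  # weights settle near the blob centres (0,0) and (2,2)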

Ø  In back-propagation network learning, a set of input-output pairs is given and the network learns an appropriate mapping. Among supervised learning methods, the back-propagation network (BPN) is the most widely used, and it is well known for its ability to attack problems that we cannot solve explicitly. However, there are several technical problems with back-propagation type algorithms. They are not well suited for tasks where the input space changes, and they are often slow to learn, particularly with many hidden units. Also, the semantics of the algorithm are poorly understood and not biologically plausible, which restricts its usefulness as a model of neural learning.
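
For reference, here is a minimal back-propagation sketch on the XOR problem. The 2-3-1 architecture, sigmoid units, learning rate, and epoch count are illustrative choices, and convergence depends on the random initialization.

    import numpy as np

    rng = np.random.default_rng(2)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

    W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros((1, 3))
    W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros((1, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for epoch in range(10000):
        # forward pass
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        # backward pass: propagate the output error back through the network
        d_out = (out - Y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent weight updates
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(3))   # outputs approach [0, 1, 1, 0]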