Neural Network & Fuzzy Systems

Destructive Learning

Destructive learning is a technique that gradually destroys (prunes) the initial neural network architecture in order to achieve better learning. One method in this class is structural learning with forgetting. The method is based on the following assumptions (a code sketch of the procedure is given after the list):

  • Training of a neural network starts with all the connections present.
  • A standard training algorithm is used, but every time the weights are updated they also "forget" a little, i.e. they decay slightly toward zero.
  • After a certain number of training cycles, the connections whose weights are small (close to 0) are deleted from the structure.
  • Training continues until convergence.
  • The trained network retains only those connections that represent underlying rules in the data relating the input variables to the output variables.

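The sketch below illustrates this procedure on a toy task, assuming a single-hidden-layer network trained by gradient descent. The forgetting constant EPS, the pruning threshold THRESH, and the pruning interval are illustrative choices, not values prescribed by the method.

    # Minimal sketch of structural learning with forgetting (illustrative
    # constants, toy XOR task). Training starts fully connected, every weight
    # update includes a small constant decay toward zero ("forgetting"), and
    # connections whose weights drift near zero are periodically deleted.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: learn XOR of two binary inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Start fully connected: 2 inputs -> 4 hidden units -> 1 output.
    W1 = rng.normal(0, 0.5, (2, 4))
    W2 = rng.normal(0, 0.5, (4, 1))
    mask1 = np.ones_like(W1)           # 1 = connection present, 0 = deleted
    mask2 = np.ones_like(W2)

    LR, EPS, THRESH = 0.5, 1e-3, 0.05  # learning rate, forgetting, prune cut-off

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(1, 20001):
        # Forward pass through the (possibly pruned) network.
        h = sigmoid(X @ (W1 * mask1))
        out = sigmoid(h @ (W2 * mask2))

        # Backward pass for the squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ (W2 * mask2).T) * h * (1 - h)
        g2 = h.T @ d_out
        g1 = X.T @ d_h

        # Standard gradient step plus a small constant decay toward zero.
        W2 -= LR * g2 + EPS * np.sign(W2)
        W1 -= LR * g1 + EPS * np.sign(W1)

        # Every so often, delete connections whose weights are near zero.
        if epoch % 2000 == 0:
            mask1[np.abs(W1) < THRESH] = 0
            mask2[np.abs(W2) < THRESH] = 0
            W1 *= mask1
            W2 *= mask2

    print("remaining connections:", int(mask1.sum() + mask2.sum()),
          "of", mask1.size + mask2.size)
    # Final outputs should approach the XOR targets if training converged.
    print("final outputs:", out.ravel().round(2))

The surviving connections after training are the ones the decay term could not erase, i.e. those that carry the input-output regularities in the data.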
Forgetting can be selective; that is, only connections with small weights forget, while larger weights are trained without decay.
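
A minimal sketch of this selective variant is shown below; the threshold theta and decay eps are assumed, illustrative values.

    # Selective forgetting: decay toward zero only those weights whose
    # magnitude is below theta, leaving well-established larger weights alone.
    import numpy as np

    def selective_forgetting(W, eps=1e-3, theta=0.1):
        W = W.copy()
        small = np.abs(W) < theta
        W[small] -= eps * np.sign(W[small])
        return W

    # Example: the small weights shrink slightly, the large weight is unchanged.
    print(selective_forgetting(np.array([[0.02, -0.04, 1.3]])))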

Some advantages of this approach are that better generalization can be achieved than with a fully connected trained neural network, and that training is faster because unnecessary connections are deleted.

A definite drawback of the method is that the neural network structure is destroyed after training; it may not be possible to accommodate new data that differ significantly from the data already used.