Neural Network & Fuzzy Systems

Back-propagation Network

A single-layer neural network has many restrictions and can accomplish only very limited classes of tasks. Minsky and Papert (1969) showed that a two-layer feed-forward network can overcome many of these restrictions, but they did not present a solution to the problem of how to adjust the weights from the input layer to the hidden layer.

• An answer to this question was presented by Rumelhart, Hinton and Williams in 1986. The central idea behind this solution is that the errors for the units of the hidden layer are determined by back-propagating the errors of the units of the output layer. This method is often called the Back-propagation learning rule. Back-propagation can also be considered as a generalization of the delta rule for non-linear activation functions and multi-layer networks.
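In equation form, this generalized delta rule can be sketched as follows. The notation here (learning rate η, non-linear activation f, net input net, target t_k and actual output o_k of output unit k) is chosen for illustration and is not taken from the original text:

```latex
% Sketch of the generalized delta rule (notation chosen for illustration).
% eta = learning rate, f = non-linear activation, net = net input to a unit,
% t_k / o_k = target and actual output of output unit k, o_i = output of unit i.
\begin{align*}
  \delta_k      &= (t_k - o_k)\, f'(\mathrm{net}_k)
                 && \text{error term of output unit } k \\
  \delta_j      &= f'(\mathrm{net}_j) \sum_{k} \delta_k\, w_{jk}
                 && \text{error back-propagated to hidden unit } j \\
  \Delta w_{ij} &= \eta\, \delta_j\, o_i
                 && \text{update of the weight from unit } i \text{ to unit } j
\end{align*}
```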

• Back-propagation is a systematic method of training multi-layer artificial neural networks.

The real world presents situations where data is incomplete or noisy. Making reasonable predictions about what is missing from the available information is a difficult task when there is no good theory to help reconstruct the missing data. It is in such situations that Back-propagation (Back-Prop) networks may provide some answers.

• A BackProp network consists of at least three layers of units:

- An input layer,

- At least one intermediate hidden layer, and

- An output layer.

• Typically, units are connected in a feed-forward fashion with input units fully connected to units in the hidden layer and hidden units fully connected to units in the output layer.

• When a BackProp network is cycled, an input pattern is propagated forward to the output units through the intervening input-to-hidden and hidden-to-output weights.

• The output of a BackProp network is interpreted as a classification decision.
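As a concrete illustration, a minimal forward pass and classification read-out might look like the Python sketch below. The sigmoid activation, the randomly initialized weight matrices, and the argmax decision are assumptions made for this example, not details given in the text:

```python
import numpy as np

def sigmoid(x):
    """Logistic activation, a common choice for BackProp networks."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_ih, w_ho):
    """Propagate one input pattern through the hidden and output layers.

    x    : input pattern, shape (n_inputs,)
    w_ih : input-to-hidden weights, shape (n_inputs, n_hidden)
    w_ho : hidden-to-output weights, shape (n_hidden, n_outputs)
    """
    hidden = sigmoid(x @ w_ih)       # activations of the hidden layer
    output = sigmoid(hidden @ w_ho)  # activations of the output layer
    return hidden, output

# Example: 4 input units, 3 hidden units, 2 output units (classes).
rng = np.random.default_rng(0)
w_ih = rng.normal(scale=0.5, size=(4, 3))
w_ho = rng.normal(scale=0.5, size=(3, 2))

x = np.array([0.2, 0.7, 0.1, 0.9])
_, out = forward(x, w_ih, w_ho)
print("output activations:", out)
print("classification decision:", out.argmax())  # unit with the largest activation
```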

• With BackProp networks, learning occurs during a training phase.

The steps followed during learning are as follows (a minimal code sketch is given after this list):

− Each input pattern in a training set is applied to the input units and then propagated forward.

− The pattern of activation arriving at the output layer is compared with the correct (associated) output pattern to calculate an error signal.

− The error signal for each such target output pattern is then back-propagated from the outputs to the inputs in order to appropriately adjust the weights in each layer of the network.

− After a BackProp network has learned the correct classification for a set of inputs, it can be tested on a second set of inputs to see how well it classifies untrained patterns.
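Under the same assumptions as the forward-pass sketch above (sigmoid activations, a squared-error-style error signal, and constant bias units appended to the input and hidden layers), a minimal training loop covering these steps might look as follows. The XOR patterns are only a toy demonstration, not data from the text, and a real test would use patterns held out of training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(inputs, targets, n_hidden=4, lr=0.5, epochs=10000, seed=0):
    """Minimal pattern-by-pattern BackProp loop following the steps above.
    A constant bias unit is appended to the input and hidden layers."""
    rng = np.random.default_rng(seed)
    w_ih = rng.normal(scale=0.5, size=(inputs.shape[1] + 1, n_hidden))
    w_ho = rng.normal(scale=0.5, size=(n_hidden + 1, targets.shape[1]))
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            # 1. Apply the input pattern and propagate it forward.
            xb = np.append(x, 1.0)                    # input + bias unit
            hb = np.append(sigmoid(xb @ w_ih), 1.0)   # hidden + bias unit
            o = sigmoid(hb @ w_ho)
            # 2. Compare with the target pattern to get an error signal.
            delta_o = (t - o) * o * (1 - o)
            # 3. Back-propagate the error signal toward the inputs...
            delta_h = (delta_o @ w_ho[:-1].T) * hb[:-1] * (1 - hb[:-1])
            # ...and adjust the weights in each layer of the network.
            w_ho += lr * np.outer(hb, delta_o)
            w_ih += lr * np.outer(xb, delta_h)
    return w_ih, w_ho

def predict(x, w_ih, w_ho):
    hb = np.append(sigmoid(np.append(x, 1.0) @ w_ih), 1.0)
    return sigmoid(hb @ w_ho)

# Toy demonstration: the XOR patterns.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
w_ih, w_ho = train(X, T)
for x in X:
    print(x, "->", predict(x, w_ih, w_ho).round(2))
```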

• An important consideration in applying BackProp learning is how well the network generalizes.