Comparing RBF Networks and Multilayer Perceptrons
Introduction: Here we compare multilayer perceptrons (MLPs) and RBF networks with respect to several aspects.
Input dimension: We must be careful with RBF networks in high-dimensional function spaces, since the network can very quickly require enormous memory and computational effort: to cover the input space, the number of centers grows exponentially with the input dimension. Here a multilayer perceptron causes fewer problems, because its number of neurons does not have to grow exponentially with the input dimension, as the sketch below illustrates.
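As a rough illustration (the numbers are hypothetical: centers placed on a regular grid with K points per axis, and a single-hidden-layer MLP of assumed width H), the RBF center count grows as K^d with the input dimension d, while the MLP parameter count grows only linearly in d:

```python
# Sketch: grid-based RBF center count vs. MLP parameter count as the
# input dimension d grows. All sizes here are illustrative assumptions.

K = 5   # assumed grid resolution per input axis for the RBF centers
H = 50  # assumed hidden-layer width of a single-hidden-layer MLP
M = 1   # output dimension

for d in (1, 2, 4, 8):
    rbf_centers = K ** d                 # one center per grid point: K^d
    mlp_params = d * H + H + H * M + M   # weights + biases, linear in d
    print(f"d={d}: RBF centers = {rbf_centers:>8}, MLP parameters = {mlp_params}")
```

Already at d = 8 the grid needs 390625 centers, while the MLP stays at a few hundred parameters.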
Center selection: However, selecting the centers c for RBF networks remains a major problem; such a problem does not occur with the MLP.
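One common heuristic (among several, and not prescribed by the text above) is to place the centers at cluster centroids of the training inputs, e.g. via k-means. A minimal sketch, assuming scikit-learn is available and using made-up training data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical training inputs: 500 points in a 2-D input space.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

# Place the RBF centers c at the k-means centroids of the inputs,
# so the Gaussian bells sit where the training data actually lives.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X)
centers = kmeans.cluster_centers_   # shape (10, 2): one row per center
```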
Output dimension: An advantage of RBF networks is that training is not much affected when the output dimension of the network is high, whereas for an MLP a learning procedure such as backpropagation becomes very time-consuming in that case.
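The reason is that, once centers and widths are fixed, only the output weights remain to be trained, and they can be found by a single linear least-squares solve that handles all output dimensions at once. A minimal numpy sketch with Gaussian RBFs (data, centers, and the common width sigma are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))        # training inputs
Y = rng.normal(size=(200, 30))               # targets with high output dim (30)
centers = rng.uniform(-1, 1, size=(15, 2))   # assumed fixed RBF centers
sigma = 0.5                                  # assumed common Gaussian width

# Activation matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * sigma**2))

# One least-squares solve trains all 30 output dimensions at once;
# the cost is dominated by Phi, not by the output dimension.
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # shape (15, 30)
```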
Extrapolation: Both an advantage and a disadvantage of RBF networks is their lack of extrapolation capability: an RBF network returns the result 0 far away from the centers of the RBF layer. On the one hand, unlike the MLP, it cannot be used for extrapolation (although we can never know whether the extrapolated values of an MLP are reasonable, experience shows that MLPs are suitable for that matter). On the other hand, unlike the MLP, the network is capable of using this 0 to tell us "I don't know", which can be an advantage.
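A sketch of this "I don't know" behavior: far from every center, the summed Gaussian activation is numerically zero, and thresholding it turns the 0 into an explicit answer. The centers, width, and threshold below are assumptions chosen purely for illustration:

```python
import numpy as np

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # assumed RBF centers
sigma = 0.5                                    # assumed Gaussian width

def rbf_activations(x):
    d2 = ((x - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2))

near = np.array([0.1, 0.1])     # inside the training region
far  = np.array([10.0, 10.0])   # far away from all centers

print(rbf_activations(near).sum())   # clearly nonzero
print(rbf_activations(far).sum())    # ~0: no center is responsible

# Thresholding the total activation flags inputs outside the known region:
if rbf_activations(far).sum() < 1e-6:
    print("I don't know")
```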
Lesion tolerance: For the output of an MLP, it is not so important if a weight or a neuron is missing; the output will only worsen a little overall. If a weight or a neuron is missing in an RBF network, then large parts of the output remain practically unaffected, but one part of the output is heavily affected because a Gaussian bell is missing entirely. Thus, we can choose between a strong local error under lesion (RBF network) and a weak but global error (MLP).
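A small numeric sketch of the RBF side of this contrast (centers, weights, and width are assumed): deleting one Gaussian bell leaves the output unchanged except near the deleted center.

```python
import numpy as np

centers = np.linspace(-3, 3, 7)   # assumed 1-D RBF centers
weights = np.ones(7)              # assumed output weights
sigma = 0.5

def rbf(x, keep=slice(None)):
    # Sum of Gaussian bells; `keep` lets us simulate a lesion.
    c, w = centers[keep], weights[keep]
    return (w * np.exp(-(x[:, None] - c) ** 2 / (2 * sigma**2))).sum(axis=1)

x = np.linspace(-4, 4, 9)
lesion = np.delete(np.arange(7), 3)   # remove the center at 0
err = np.abs(rbf(x) - rbf(x, keep=lesion))
for xi, e in zip(x, err):
    print(f"x={xi:+.1f}  error={e:.4f}")   # large only near x = 0
```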
Spread: Here the MLP is "advantaged", since RBF networks are used considerably less often, which is not always understood by professionals (at least as far as low-dimensional input spaces are concerned). MLPs have a considerably longer tradition, and they work too well for anyone to take the effort to switch.