Introduction To Neural Networks
A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment through a learning process.
2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge. Neural networks are also referred to in the literature as neurocomputers, connectionist networks, parallel distributed processors, etc.
Benefits of Neural Networks
The use of neural networks offers the following useful properties and capabilities:
1. Nonlinearity. An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. The nonlinearity is of a special kind in the sense that it is distributed throughout the network.
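As a minimal sketch (an illustration, not from the text), a single artificial neuron computes a weighted sum of its inputs and passes it through a nonlinear activation such as tanh; chaining such neurons keeps the nonlinearity distributed through every stage of the network:

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """One artificial neuron: weighted sum of inputs, then a nonlinearity."""
    return activation(np.dot(w, x) + b)

# A two-neuron chain: the tanh nonlinearity acts at every stage,
# so the overall input-output relation is nonlinear throughout.
x = np.array([0.5, -1.0])
h = neuron(x, np.array([1.0, 2.0]), 0.1)           # hidden neuron
y = neuron(np.array([h]), np.array([0.7]), -0.2)   # output neuron
```

Doubling the input does not double the output (tanh(2v) ≠ 2·tanh(v)), which is exactly what distinguishes this neuron from a linear one.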
2. Input-Output Mapping. The network learns from examples by constructing an input-output mapping for the problem at hand. Such an approach brings to mind the study of nonparametric statistical inference; the term "nonparametric" is used here to signify the fact that no prior assumptions are made on a statistical model for the input data.
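Learning an input-output mapping purely from labelled examples can be illustrated with the classic perceptron learning rule; the choice of the logical AND function, the learning rate, and the epoch count below are illustrative assumptions, not part of the text:

```python
import numpy as np

# Training examples: inputs X and desired outputs y (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # synaptic weights, adjusted from examples only
b = 0.0           # bias
lr = 0.1          # learning rate (assumed value)

for _ in range(20):                       # repeated passes over the examples
    for xi, ti in zip(X, y):
        out = 1 if np.dot(w, xi) + b > 0 else 0
        w += lr * (ti - out) * xi         # perceptron weight update
        b += lr * (ti - out)

preds = [1 if np.dot(w, xi) + b > 0 else 0 for xi in X]
```

No statistical model of the inputs is assumed anywhere: the mapping emerges solely from the labelled examples, in the nonparametric spirit the text describes.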
3. Adaptivity. Neural networks have a built-in capability to adapt their synaptic weights to changes in the surrounding environment. In particular, a neural network trained to operate in a specific environment can be easily retrained to deal with minor changes in the operating environmental conditions. Moreover, when it is operating in a nonstationary environment, a neural network can be designed to change its synaptic weights in real time.
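Real-time weight adaptation in a nonstationary environment can be sketched with the LMS (least-mean-squares) rule; the drifting target parameter, step size, and step counts below are assumed values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.0        # adaptive synaptic weight
mu = 0.1       # LMS step size (assumed value)
true_w = 2.0   # environment parameter, which later drifts

for step in range(400):
    if step == 200:
        true_w = -1.0          # nonstationary environment: the target changes
    x = rng.normal()           # input sample
    d = true_w * x             # desired response from the environment
    e = d - w * x              # error signal
    w += mu * e * x            # LMS update: weights track the environment
```

After the drift at step 200, the weight re-converges to the new environment without any retraining procedure, only the same online update.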
4. Evidential Response. In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.
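One common way to realize an evidential response (a sketch under assumptions, not a construction from the text) is a softmax output layer, which turns raw network scores into class probabilities; a rejection threshold (the value 0.7 below is an assumption) then lets the classifier decline to label ambiguous patterns:

```python
import numpy as np

def softmax(z):
    """Convert raw scores into probabilities that sum to 1."""
    z = z - np.max(z)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_with_rejection(logits, threshold=0.7):
    """Return (class, confidence); class is None if the decision is ambiguous."""
    p = softmax(np.asarray(logits, dtype=float))
    k = int(np.argmax(p))
    conf = float(p[k])
    return (k if conf >= threshold else None, conf)
```

A clearly dominant score yields a confident decision, while nearly equal scores fall below the threshold and the pattern is rejected rather than misclassified.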
5. Contextual Information. Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.
6. Fault Tolerance. A neural network, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions.
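Graceful degradation can be illustrated in simulation (an assumed toy setup, not from the text): because knowledge is distributed over many synaptic weights, deleting a small fraction of them perturbs the output only slightly instead of causing total failure. The layer size and the 5% failure rate below are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
w = rng.normal(size=n) / n     # many small, redundant weights
x = np.ones(n)                 # a fixed input pattern

healthy = w @ x                # output of the intact network

faulty_w = w.copy()
dead = rng.choice(n, size=50, replace=False)   # 5% of synapses fail
faulty_w[dead] = 0.0
degraded = faulty_w @ x        # output after the damage
```

The damaged output stays close to the healthy one; contrast this with a lookup table, where losing an entry loses the stored item outright.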
7. VLSI Implementability. The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale-integrated (VLSI) technology. One particular beneficial virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion.
8. Uniformity of Analysis and Design. Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. This feature manifests itself in different ways:
- Neurons, in one form or another, represent an ingredient common to all neural networks.
- This commonality makes it possible to share theories and learning algorithms in different applications of neural networks.
- Modular networks can be built through a seamless integration of modules.
9. Neurobiological Analogy. The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful. Neurobiologists look to (artificial) neural networks as a research tool for the interpretation of neurobiological phenomena.