Order Of Activation
Introduction: For a neural network it is very important in which order the individual neurons receive and process their input and forward their output. We distinguish two model classes:
1. Synchronous activation: All neurons change their values synchronously, i.e. they simultaneously calculate network inputs, activation and output, and pass them on. Synchronous activation corresponds closest to its biological counterpart, but, if it is to be implemented in hardware, it is only useful on certain parallel computers and in particular not for feedforward networks. This order of activation is the most generic and can be used with networks of arbitrary topology.
All neurons of a network calculate network inputs at the same time by means of the propagation function, activation by means of the activation function and output by means of the output function. After that the activation cycle is complete.
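One synchronous cycle can be sketched in a few lines: all net inputs are computed from the old activations before any neuron switches to its new value. The weight matrix, initial activations and the tanh activation function below are illustrative assumptions, not taken from the text.

```python
import math

# Hypothetical tiny network: weights[j][i] is the weight from neuron i
# to neuron j; every neuron may feed every other.
weights = [[0.0, 0.5, -0.3],
           [0.2, 0.0, 0.8],
           [-0.6, 0.4, 0.0]]
activations = [0.1, -0.2, 0.3]

def f_act(x):
    return math.tanh(x)  # assumed activation function

def synchronous_step(weights, activations):
    # First compute ALL net inputs from the old activations ...
    nets = [sum(w * a for w, a in zip(row, activations)) for row in weights]
    # ... then every neuron takes on its new value at the same time.
    return [f_act(net) for net in nets]

activations = synchronous_step(weights, activations)
```

Separating the two list comprehensions is what makes the update synchronous: no neuron ever sees a neighbor's value from the current cycle.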
2. Asynchronous activation: The neurons do not change their values simultaneously but at different points in time. For this, there exist different orders, some of which I (indiscriminately) want to introduce:

Random order of activation: With random order of activation, a neuron i is randomly chosen and its net_i, a_i and o_i are updated. For n neurons a cycle is the n-fold execution of this step. Obviously, some neurons are repeatedly updated during one cycle while others are not updated at all. Apparently, this order of activation is not always useful.
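A minimal Python sketch of one such cycle; the per-neuron update itself is left abstract (the `update_neuron` callback stands in for computing net_i, a_i and o_i). Drawing with replacement is exactly what allows repeats and omissions within a cycle.

```python
import random

def random_order_cycle(n_neurons, update_neuron):
    # One cycle = n random draws WITH replacement: some neurons may be
    # updated several times, others not at all.
    for _ in range(n_neurons):
        i = random.randrange(n_neurons)
        update_neuron(i)

# Count how often each neuron is updated in one cycle of a 5-neuron net.
counts = [0] * 5
random_order_cycle(5, lambda i: counts.__setitem__(i, counts[i] + 1))
```

After one cycle `counts` always sums to 5, but individual entries may be 0 or greater than 1.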

Random permutation: With random permutation each neuron is chosen exactly once per cycle, but in random order. Initially, a permutation of the neurons is computed randomly, which defines the order of activation; the neurons are then processed successively in this order. This order of activation is also rarely used because, firstly, the order is generally arbitrary and, secondly, it is very time-consuming to compute a new permutation for every cycle. A Hopfield network is a topology nominally having a random or randomly permuted order of activation. But note that in practice, for the previously mentioned reasons, a fixed order of activation is preferred. For all orders, either the previous neuron activations at time t or, if already available, the neuron activations at time t + 1, for which we are currently calculating the activations, can be taken as a starting point.
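The difference to the random order above is a single shuffle per cycle. A sketch, again with the per-neuron update abstracted into a callback:

```python
import random

def random_permutation_cycle(n_neurons, update_neuron):
    # Draw a fresh permutation every cycle (the time-consuming part),
    # then update each neuron exactly once in that order.
    order = list(range(n_neurons))
    random.shuffle(order)
    for i in order:
        update_neuron(i)

# Record the order in which a 5-neuron network is visited.
visited = []
random_permutation_cycle(5, visited.append)
```

Unlike drawing with replacement, every neuron appears in `visited` exactly once, just in an unpredictable order.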

Topological order of activation: The neurons are updated during one cycle in a fixed order defined by the network topology. This procedure can only be considered for non-cyclic, i.e. non-recurrent, networks, since otherwise there is no order of activation. Thus, in feedforward networks the input neurons would be updated first, then the inner neurons and finally the output neurons. This may save us a lot of time: given a synchronous activation order, a feedforward network with n layers of neurons would need n full propagation cycles for the input data to influence the output of the network. Given the topological activation order, we just need one single propagation cycle. However, not every network topology allows for finding a special activation order that enables saving time.
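The single-cycle propagation can be sketched as follows; the layer weights and the tanh activation function are illustrative assumptions. Because each layer is fully updated before the next one starts, the input reaches the output in one sweep.

```python
import math

def topological_pass(inputs, layer_weights):
    # layer_weights[k][j][i]: weight from neuron i of layer k to neuron j
    # of layer k+1. Updating layer by layer (input neurons first, output
    # neurons last) propagates the input in a single cycle.
    activations = inputs
    for weights in layer_weights:
        activations = [math.tanh(sum(w * a for w, a in zip(row, activations)))
                       for row in weights]
    return activations

# Toy feedforward network: 2 inputs -> 2 hidden neurons -> 1 output.
out = topological_pass([0.5, -0.5],
                       [[[1.0, 0.0], [0.0, 1.0]],   # hidden layer
                        [[1.0, 1.0]]])              # output layer
```

With a synchronous order, the same network would need one full cycle per layer before the inputs could affect the output neuron.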
Fixed orders of activation: For feedforward networks it is very popular to determine the order of activation once according to the topology and to use this order at runtime without further verification. However, this is not necessarily useful for networks that are capable of changing their topology.