Connectionism is an approach within fields such as artificial intelligence, cognitive psychology, cognitive science and neuroscience. It attempts to model mental or behavioral phenomena as emergent properties of networks of elementary units. These units can be linked together in different ways, allowing the networks to perform a variety of tasks. Each network exhibits a certain amount of activation.

Activation can occur in one part of a network and spread over the entire network. Connectionism can thus be seen as a new form of associationism: it explains the emergence of complex structures through connections between more elementary structures.

Forming networks

There are different forms of connectionism. A common form is based on neural network models, in which the elementary units are, for example, neurons and their connections are synapses (see also connectome). Although the name suggests that these models are based on characteristics of the brain, this does not necessarily have to be the case, though such characteristics are sometimes taken into account.

For example, network simulations of memory sometimes incorporate characteristics of structures such as the hippocampus; such models are called neurally plausible. In other models, the units of a network are words, and the connections between them represent semantic relationships. Information processing in these networks proceeds in parallel, which is why they are sometimes also referred to as Parallel Distributed Processing (PDP) systems.
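As an illustration, activation spreading over a small semantic network of word units might be sketched as follows. All words, link weights, and parameters here are invented for illustration; they are not taken from any particular model.

```python
from collections import defaultdict

# Hypothetical semantic network: units are words, weighted links
# represent semantic relationships (all values are illustrative).
links = {
    "dog": [("cat", 0.8), ("bone", 0.6)],
    "cat": [("dog", 0.8), ("mouse", 0.7)],
    "bone": [("dog", 0.6)],
    "mouse": [("cat", 0.7), ("cheese", 0.9)],
    "cheese": [("mouse", 0.9)],
}

def spread(source, steps=2, decay=0.5):
    """Spread activation outward from one unit for a few steps."""
    activation = defaultdict(float)
    activation[source] = 1.0
    for _ in range(steps):
        new = defaultdict(float, activation)
        for unit, act in activation.items():
            for neighbor, weight in links.get(unit, []):
                # Each unit passes a decayed share of its activation on.
                new[neighbor] += act * weight * decay
        activation = new
    return dict(activation)

acts = spread("dog")
```

After two steps, units semantically close to "dog" (such as "cat") carry more activation than distant ones, which is the basic intuition behind spreading activation in such networks.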

The network as a computational unit

The simplest neural network contains three layers: an input layer, an intermediate (hidden) layer and an output layer (see also neural network). Between these layers there are feedforward and recurrent (feedback) connections. The strength of the connections is determined by their weights, which can be excitatory or inhibitory. The greater the excitatory weights, the stronger the activation of the network as a whole. A network with recurrent connections is also called a recurrent network. Such connections may, for example, feed back from the output layer to the hidden or input layer.
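A forward pass through such a three-layer network can be sketched in a few lines. The layer sizes, weight values and sigmoid activation function below are illustrative assumptions, not part of any specific model described here; positive weights act as excitatory connections, negative ones as inhibitory.

```python
import math

def sigmoid(x):
    """Squash a unit's net input into an activation between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """Feedforward pass: input layer -> hidden layer -> output layer."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
              for row in w_output]
    return output

# Illustrative weights: 2 input units -> 2 hidden units -> 1 output unit.
w_hidden = [[0.5, -0.4],   # the negative weight inhibits this hidden unit
            [0.9, 0.2]]
w_output = [[1.2, -0.7]]

y = forward([1.0, 0.0], w_hidden, w_output)
```

A recurrent network would additionally feed the output back into the hidden or input layer on the next time step; the sketch above shows only the feedforward connections.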

If, for example, the output layer produces too little output, feedback can increase the values of the input layer. Such a network forms a kind of computational unit. It can be trained to perform certain perceptual tasks, such as recognizing faces or handwriting. A frequently used technique in network simulations is backpropagation. The weights of the lower layers are adjusted over a series of iterations (repeated steps). As a result, the difference between the desired and the obtained output gradually becomes smaller, until a stable final value is reached that satisfies the stated criterion.
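The iterative weight adjustment described above can be sketched in its simplest form for a single sigmoid unit (the delta rule, which backpropagation generalizes to multiple layers). The inputs, target, learning rate and stopping criterion below are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative training setup for one sigmoid unit.
inputs = [0.8, 0.2]
target = 0.9
weights = [0.1, 0.1]
rate = 0.5           # learning rate
criterion = 1e-3     # stop once the squared error drops below this

for iteration in range(10000):
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
    error = target - out
    if error ** 2 < criterion:
        break        # stable final value: the criterion is satisfied
    # Gradient of the squared error with respect to each weight.
    delta = error * out * (1.0 - out)
    weights = [w + rate * delta * x for w, x in zip(weights, inputs)]
```

Each iteration nudges the weights so that the difference between the desired and obtained output shrinks, mirroring the convergence behavior described in the text.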
