Generalized Hebbian algorithm
The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning, applied primarily to principal components analysis. First defined by Terence Sanger in 1989, it is similar to Oja's rule in its formulation and stability, except that it can be applied to networks with multiple outputs. The name derives from the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.