Self-organized learning in multi-layer networks

  • We present a framework for the self-organized formation of high-level learning by a statistical preprocessing of features. The paper focuses first on the formation of the features in the context of layers of feature-processing units, as a kind of resource-restricted associative multiresolution learning. We claim that such an architecture must reach maturity through basic statistical properties, optimizing the information processing capabilities of each layer. The final symbolic output is learned by pure association of features of different levels and kinds of sensory input. Finally, we also show that common error-correction learning for motor skills can be accomplished by non-specific associative learning.
Keywords: feedforward network layers, maximal information gain, restricted Hebbian learning, cellular neural nets, evolutionary associative learning
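
The abstract names restricted Hebbian learning as the layer-level mechanism for maximizing information gain. The following is a minimal, illustrative sketch only, not the paper's actual rule: it uses an Oja-style normalized Hebbian update, one standard way of restricting Hebbian weight growth so that a single linear unit converges to the direction of maximal variance in its input. The function name oja_update, the learning rate, and the toy data are assumptions made for illustration.

import numpy as np

def oja_update(w, x, lr=0.005):
    """One restricted-Hebbian step: dw = lr * y * (x - y * w), with y = w.x."""
    y = np.dot(w, x)                 # linear unit output
    return w + lr * y * (x - y * w)  # Hebbian growth, restricted by the -y*w decay term

rng = np.random.default_rng(0)
# Toy input stream with one dominant direction of variance (illustrative data only).
data = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

w = rng.normal(size=2)
for x in data:
    w = oja_update(w, x)

# The weight vector aligns with the principal direction of the input statistics.
print("learned weight direction:", w / np.linalg.norm(w))
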

Metadata
Author:Rüdiger W. Brause
URN:urn:nbn:de:hebis:30-79048
ISSN:0218-2130
Document Type:Article
Language:English
Date of Publication (online):2010/09/08
Year of first Publication:1995
Publishing Institution:Universitätsbibliothek Johann Christian Senckenberg
Release Date:2010/09/08
Tag:cellular neural nets; evolutionary associative learning; feedforward network layers; maximal information gain; restricted Hebbian learning
Page Number:19
First Page:1
Last Page:19
Note:first published in: International journal on artificial intelligence tools, 4.1995, pp. 433-451
Source:International journal on artificial intelligence tools, 4, pp. 433-451
HeBIS-PPN:22757334X
Institutes:Computer Science and Mathematics / Computer Science
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 000 Computer science, information & general works
Collections:University publications
Licence (German):Deutsches Urheberrecht (German copyright law)