Hebbian networks for averting the problem of catastrophic interference.
dc.contributor.author | Ivancich, John Eric | |
dc.contributor.advisor | Kaplan, Stephen | |
dc.date.accessioned | 2016-08-30T15:50:17Z | |
dc.date.available | 2016-08-30T15:50:17Z | |
dc.date.issued | 2005 | |
dc.identifier.uri | http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:3186653 | |
dc.identifier.uri | https://hdl.handle.net/2027.42/125105 | |
dc.description.abstract | Neural networks that follow the Parallel Distributed Processing (PDP) paradigm suffer from catastrophic interference (also called catastrophic forgetting or the sequential learning problem). Simply stated, catastrophic interference is the problem in which minimal new training quickly and significantly undermines the network's prior training rather than building upon it. This is clearly unlike human learning and memory, and a putative model of human memory should not suffer from catastrophic interference. There have been many attempts to address catastrophic interference in PDP networks. Some only partially alleviate the problem; others are strikingly ad hoc and do not seem neurologically plausible. PDP networks exhibit many desirable properties, and proponents of the PDP paradigm attribute many of these properties to the networks' use of distributed representations. The hypotheses put forth herein are that (1) catastrophic interference results from the type and degree of distributed representation used in PDP, and (2) some of the desirable properties of PDP networks have been incorrectly attributed to that type and degree of distributed representation. I investigate these hypotheses using a neural network based on Hebb's cell assembly theory. Hebbian networks also use distributed representations, but ones distinct from PDP's in both kind and degree. I create a recognition memory task analogous to one for which catastrophic interference has been a problem, and with it demonstrate that Hebbian networks naturally avoid catastrophic interference. Because PDP networks can generalize from degraded input, I show that the Hebbian network can also generalize, and still avoid catastrophic interference, when its inputs are degraded with noise. This serves to establish a relationship between noisy input and the learning rate.
Finally, I verify that the system still works when the scale of the problem is increased; doing so necessitates an analysis and improvement of the learning rule. This evidence, together with the fact that Hebb's postulated synaptic learning rule has been verified in biological models, constitutes a strong argument that Hebbian cell assembly networks are better neurophysiological and psychological models and deserve more attention from the connectionist and neural modeling communities. | |
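The abstract's central contrast can be illustrated with a minimal sketch of Hebb's synaptic learning rule (delta_w proportional to the product of pre- and post-synaptic activity): connections strengthen only between co-active units, so with sparse, non-overlapping activation patterns, new training leaves earlier associations untouched. This is a hypothetical illustration of the general intuition, not the dissertation's actual network or task; the patterns and learning rate below are invented for the example.

```python
# Sketch of the classic Hebbian update: w[i][j] += eta * x[i] * x[j].
# Units active together strengthen their mutual connection; weights
# between units that are not co-active are left unchanged.

def hebbian_update(W, x, eta=0.1):
    """Return a new weight matrix with Hebb's rule applied for pattern x."""
    n = len(x)
    return [[W[i][j] + eta * x[i] * x[j] for j in range(n)] for i in range(n)]

n = 8
W = [[0.0] * n for _ in range(n)]

# Pattern A activates units 0-3; pattern B activates units 4-7 (disjoint).
pattern_a = [1, 1, 1, 1, 0, 0, 0, 0]
pattern_b = [0, 0, 0, 0, 1, 1, 1, 1]

W = hebbian_update(W, pattern_a)
strength_a_before = W[0][1]       # association formed while learning A

W = hebbian_update(W, pattern_b)  # subsequent training on pattern B
strength_a_after = W[0][1]

# Training on B did not disturb the weights encoding A.
print(strength_a_before, strength_a_after)  # 0.1 0.1
```

By contrast, in a fully distributed PDP network trained by error-driven weight updates, every weight participates in every pattern, so training on B would move the same weights that encode A, which is the source of the interference the abstract describes.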
dc.format.extent | 179 p. | |
dc.language | English | |
dc.language.iso | EN | |
dc.subject | Averting | |
dc.subject | Catastrophic Interference | |
dc.subject | Cell Assembly | |
dc.subject | Distributed Representations | |
dc.subject | Hebbian Networks | |
dc.subject | Problem | |
dc.title | Hebbian networks for averting the problem of catastrophic interference. | |
dc.type | Thesis | |
dc.description.thesisdegreename | PhD | en_US |
dc.description.thesisdegreediscipline | Applied Sciences | |
dc.description.thesisdegreediscipline | Biological Sciences | |
dc.description.thesisdegreediscipline | Cognitive psychology | |
dc.description.thesisdegreediscipline | Computer science | |
dc.description.thesisdegreediscipline | Neurosciences | |
dc.description.thesisdegreediscipline | Psychology | |
dc.description.thesisdegreegrantor | University of Michigan, Horace H. Rackham School of Graduate Studies | |
dc.description.bitstreamurl | http://deepblue.lib.umich.edu/bitstream/2027.42/125105/2/3186653.pdf | |
dc.owningcollname | Dissertations and Theses (Ph.D. and Master's) |