
How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition

dc.contributor.author	Laakso, Aarre	en_US
dc.contributor.author	Calvo, Paco	en_US
dc.date.accessioned	2011-11-10T15:33:39Z
dc.date.available	2012-11-02T18:56:40Z	en_US
dc.date.issued	2011-09	en_US
dc.identifier.citation	Laakso, Aarre; Calvo, Paco (2011). "How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition." Cognitive Science 35(7). <http://hdl.handle.net/2027.42/86908>	en_US
dc.identifier.issn	0364-0213	en_US
dc.identifier.issn	1551-6709	en_US
dc.identifier.uri	https://hdl.handle.net/2027.42/86908
dc.description.abstract	Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule‐following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis.	en_US
dc.publisher	Blackwell Publishing Ltd	en_US
dc.publisher	Wiley Periodicals, Inc.	en_US
dc.subject.other	Artificial Grammar Learning	en_US
dc.subject.other	Speech Processing	en_US
dc.subject.other	Language Acquisition	en_US
dc.subject.other	More Than One Mechanism Hypothesis	en_US
dc.subject.other	Statistical Learning	en_US
dc.subject.other	Connectionism	en_US
dc.title	How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition	en_US
dc.type	Article	en_US
dc.rights.robots	IndexNoFollow	en_US
dc.subject.hlbsecondlevel	Neurosciences	en_US
dc.subject.hlbtoplevel	Health Sciences	en_US
dc.description.peerreviewed	Peer Reviewed	en_US
dc.contributor.affiliationum	Department of Behavioral Sciences, University of Michigan‐Dearborn	en_US
dc.contributor.affiliationother	Philosophy Department, University of Murcia	en_US
dc.description.bitstreamurl	http://deepblue.lib.umich.edu/bitstream/2027.42/86908/1/j.1551-6709.2011.01191.x.pdf
dc.identifier.doi	10.1111/j.1551-6709.2011.01191.x	en_US
dc.identifier.source	Cognitive Science	en_US
dc.identifier.citedreference	Altmann, G. T. (2002). Learning and development in neural networks – the importance of prior experience. Cognition, 85(2), B43–B50.	en_US
dc.identifier.citedreference	Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford, England: Oxford University Press.	en_US
dc.identifier.citedreference	Boden, M., & Wiles, J. (2000). Context‐free and context‐sensitive dynamics in recurrent neural networks. Connection Science: Journal of Neural Computing, Artificial Intelligence & Cognitive Research, 12(3–4), 197–210.	en_US
dc.identifier.citedreference	Bonatti, L. L., Peña, M., Nespor, M., & Mehler, J. (2006). How to hit Scylla without avoiding Charybdis: Comment on Perruchet, Tyler, Galland, and Peereman (2004). Journal of Experimental Psychology: General, 135(2), 314–321.	en_US
dc.identifier.citedreference	Chalup, S. K., & Blair, A. D. (2003). Incremental training of first order recurrent neural networks to predict a context‐sensitive language. Neural Networks, 16(7), 955–972.	en_US
dc.identifier.citedreference	Chomsky, N. (1980). Rules and representations. Oxford, England: Basil Blackwell.	en_US
dc.identifier.citedreference	Christiansen, M. H., & Chater, N. (1999). Connectionist natural language processing: The state of the art. Cognitive Science, 23, 417–437.	en_US
dc.identifier.citedreference	Christiansen, M., & Curtin, S. (1999). Transfer of learning: Rule acquisition or statistical learning? Trends in Cognitive Sciences, 3(8), 289–290.	en_US
dc.identifier.citedreference	Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211.	en_US
dc.identifier.citedreference	Endress, A. D., & Bonatti, L. L. (2007). Rapid learning of syllable classes from a perceptually continuous speech stream. Cognition, 105(2), 247–299.	en_US
dc.identifier.citedreference	Endress, A. D., & Mehler, J. (2009). Primitive computations in speech processing. The Quarterly Journal of Experimental Psychology, 62(11), 2187–2209.	en_US
dc.identifier.citedreference	Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.	en_US
dc.identifier.citedreference	Gerken, L. A., Wilson, R., & Lewis, W. (2005). Infants can use distributional cues to form syntactic categories. Journal of Child Language, 32, 249–268.	en_US
dc.identifier.citedreference	Gómez, R. L., & Maye, J. (2005). The developmental trajectory of nonadjacent dependency learning. Infancy, 7(2), 183–206.	en_US
dc.identifier.citedreference	Hare, M., Elman, J. L., & Daugherty, K. G. (1995). Default generalisation in connectionist networks. Language & Cognitive Processes, 10(6), 601–630.	en_US
dc.identifier.citedreference	Henson, R. N. A. (1998). Short‐term memory for serial order: The start‐end model. Cognitive Psychology, 36(2), 73–137.	en_US
dc.identifier.citedreference	Laakso, A., & Calvo, P. (2008). A connectionist simulation of structural rule learning in language acquisition. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Meeting of the Cognitive Science Society (pp. 709–714). Austin, TX: Cognitive Science Society.	en_US
dc.identifier.citedreference	Marcus, G. F. (2001). The algebraic mind: Integrating connectionism and cognitive science. Cambridge, MA: MIT Press.	en_US
dc.identifier.citedreference	Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Rule learning by seven‐month‐old infants. Science, 283, 77–80.	en_US
dc.identifier.citedreference	McClelland, J. L., & Rumelhart, D. E. (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2: Psychological and biological models). Cambridge, MA: MIT Press.	en_US
dc.identifier.citedreference	Monaghan, P., Chater, N., & Christiansen, M. H. (2005). The differential role of phonological and distributional cues in grammatical categorisation. Cognition, 96(2), 143–182.	en_US
dc.identifier.citedreference	Newport, E. L., & Aslin, R. N. (2000). Innately constrained learning: Blending old and new approaches to language acquisition. In S. C. Howell, S. A. Fish, & T. Keith‐Lucas (Eds.), BUCLD 24: Proceedings of the 24th Annual Boston University Conference on Language Development (pp. 1–21). Somerville, MA: Cascadilla Press.	en_US
dc.identifier.citedreference	Newport, E. L., & Aslin, R. N. (2004). Learning at a distance: I. Statistical learning of non‐adjacent dependencies. Cognitive Psychology, 48(2), 127–162.	en_US
dc.identifier.citedreference	Onnis, L., Monaghan, P., Richmond, K., & Chater, N. (2005). Phonology impacts segmentation in online speech processing. Journal of Memory and Language, 53(2), 225–237.	en_US
dc.identifier.citedreference	Pelucchi, B., Hay, J. F., & Saffran, J. R. (2009). Learning in reverse: Eight‐month‐old infants track backward transitional probabilities. Cognition, 113, 244–247.	en_US
dc.identifier.citedreference	Peña, M., Bonatti, L. L., Nespor, M., & Mehler, J. (2002a). Signal‐driven computations in speech processing. Science, 298(5593), 604–607.	en_US
dc.identifier.citedreference	Peña, M., Bonatti, L. L., Nespor, M., & Mehler, J. (2002b). Signal‐driven computations in speech processing (online supplement on materials and methods). Science, 298(5593). Available at: http://www.sciencemag.org/cgi/content/full/1072901/DC1	en_US
dc.identifier.citedreference	Perruchet, P., & Desaulty, S. (2008). A role for backward transitional probabilities in word segmentation? Memory & Cognition, 36(7), 1299–1305.	en_US
dc.identifier.citedreference	Perruchet, P., Peereman, R., & Tyler, M. D. (2006). Do we need algebraic‐like computations? A reply to Bonatti, Peña, Nespor, and Mehler (2006). Journal of Experimental Psychology: General, 135(2), 322–326.	en_US
dc.identifier.citedreference	Perruchet, P., & Tillmann, B. (2010). Exploiting multiple sources of information in learning an artificial language: Human data and modeling. Cognitive Science, 34(2), 255–285.	en_US
dc.identifier.citedreference	Perruchet, P., Tyler, M. D., Galland, N., & Peereman, R. (2004). Learning nonadjacent dependencies: No need for algebraic‐like computations. Journal of Experimental Psychology: General, 133(4), 573–583.	en_US
dc.identifier.citedreference	Perruchet, P., & Vinter, A. (1998). PARSER: A model of word segmentation. Journal of Memory and Language, 39(2), 246–263.	en_US
dc.identifier.citedreference	Pinker, S., & Ullman, M. T. (2002). The past and future of the past tense. Trends in Cognitive Sciences, 6(11), 456–463.	en_US
dc.identifier.citedreference	Plunkett, K., & Juola, P. (1999). A connectionist model of English past tense and plural morphology. Cognitive Science, 23(4), 463–490.	en_US
dc.identifier.citedreference	Ramscar, M. (2002). The role of meaning in inflection: Why the past tense does not require a rule. Cognitive Psychology, 45(1), 45–94.	en_US
dc.identifier.citedreference	Redington, M., Chater, N., & Finch, S. P. (1998). Distributional information: A powerful cue for acquiring syntactic categories. Cognitive Science, 22(4), 425–469.	en_US
dc.identifier.citedreference	Rodriguez, P. (2001). Simple recurrent networks learn context‐free and context‐sensitive languages by counting. Neural Computation, 13(9), 2093–2118.	en_US
dc.identifier.citedreference	Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8‐month‐old infants. Science, 274(5294), 1926–1928.	en_US
dc.identifier.citedreference	Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory & Language, 35(4), 606–621.	en_US
dc.identifier.citedreference	Seidenberg, M. S. (1997). Language acquisition and use: Learning and applying probabilistic constraints. Science, 275(5306), 1599–1603.	en_US
dc.identifier.citedreference	Seidenberg, M. S., & Elman, J. L. (1999a). Do infants learn grammar with algebra or statistics? Science, 284, 435–436.	en_US
dc.identifier.citedreference	Seidenberg, M. S., & Elman, J. L. (1999b). Networks are not 'hidden rules'. Trends in Cognitive Sciences, 3(8), 288–289.	en_US
dc.identifier.citedreference	Seidenberg, M. S., MacDonald, M. C., & Saffran, J. R. (2002). Does grammar start where statistics stop? Science, 298, 553–554.	en_US
dc.identifier.citedreference	Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.	en_US
dc.owningcollname	Interdisciplinary and Peer-Reviewed
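
A note on the abstract above: it contrasts a purely statistical segmentation mechanism with an additional rule‐following one. As a minimal illustration of the statistical side only, here is a sketch in the spirit of the transitional‐probability learning of Saffran, Aslin, and Newport (1996), cited in this record. It is not the authors' model (the title and the Elman, 1990, citation indicate theirs was a connectionist simulation). The sketch estimates forward transitional probabilities over a toy syllable stream and posits word boundaries at local probability dips; the stream and all identifiers in the code are illustrative assumptions.

# Illustrative sketch (not the paper's model): statistical word segmentation
# by forward transitional probabilities (TPs), in the spirit of Saffran et al. (1996).
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) from bigram counts in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables):
    """Place a word boundary wherever the TP between adjacent syllables
    is a local dip (lower than the TPs on both sides)."""
    tp = transitional_probabilities(syllables)
    tps = [tp[pair] for pair in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, syl in enumerate(syllables[1:], start=1):
        left = tps[i - 1]                        # TP into this syllable
        prev_tp = tps[i - 2] if i >= 2 else 1.0  # TP just before it
        next_tp = tps[i] if i < len(tps) else 1.0  # TP just after it
        if left < prev_tp and left < next_tp:    # local dip -> boundary
            words.append("".join(current))
            current = []
        current.append(syl)
    words.append("".join(current))
    return words

# Toy stream: three hypothetical "words" (tupiro, golabu, bidaku) concatenated.
stream = ("tu pi ro go la bu bi da ku tu pi ro bi da ku "
          "go la bu tu pi ro go la bu bi da ku").split()
print(segment(stream))

On this toy stream the dip heuristic recovers the three artificial words using nothing but bigram statistics, which is the part of the task that both sides of the MOM debate grant to statistical learning.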

