Jeffrey Lidz, 19 March, 2013
The Representational Basis of Statistical Learning in Natural Language
Department of Linguistics, University of Maryland
Birds fly, fish swim, humans speak. We have a capacity to combine expressions into unboundedly large linguistic structures (sentences and phrases) that carry a specific form and a specific meaning. As the number of such structures is in principle infinite, there must be recursive procedures (grammars) that define these complex objects. Characterizing these procedures has been a major goal of linguistic theory since its inception. But how do learners exposed to sentences in the environment acquire a procedure for generating an unbounded number of new sentences? Here, the field has been divided, with some arguing for the necessity of innately specified biological constraints on the space of possible grammars and others arguing that other aspects of cognition, in concert with the capacity to detect statistical regularities in speech, are sufficient. In this talk, I argue that these perspectives need not be seen as providing competing conceptions of human linguistic abilities. Instead, I present several case studies showing (a) that successful statistical learning depends on a well-defined space of possible grammars and (b) that a theory of innate structure must be paired with statistical inference mechanisms in order to make language acquisition possible.
- Where and when:
15:45-17:00, 19 March 2013, MPI Room 163