Publications

  • Jaeger, E., Leedham, S., Lewis, A., Segditsas, S., Becker, M., Rodenas-Cuadrado, P., Davis, H., Kaur, K., Heinimann, K., Howarth, K., East, J., Taylor, J., Thomas, H., & Tomlinson, I. (2012). Hereditary mixed polyposis syndrome is caused by a 40-kb upstream duplication that leads to increased and ectopic expression of the BMP antagonist GREM1. Nature Genetics, 44, 699-703. doi:10.1038/ng.2263.

    Abstract

    Hereditary mixed polyposis syndrome (HMPS) is characterized by apparent autosomal dominant inheritance of multiple types of colorectal polyp, with colorectal carcinoma occurring in a high proportion of affected individuals. Here, we use genetic mapping, copy-number analysis, exclusion of mutations by high-throughput sequencing, gene expression analysis and functional assays to show that HMPS is caused by a duplication spanning the 3' end of the SCG5 gene and a region upstream of the GREM1 locus. This unusual mutation is associated with increased allele-specific GREM1 expression. Whereas GREM1 is expressed in intestinal subepithelial myofibroblasts in controls, GREM1 is predominantly expressed in the epithelium of the large bowel in individuals with HMPS. The HMPS duplication contains predicted enhancer elements; some of these interact with the GREM1 promoter and can drive gene expression in vitro. Increased GREM1 expression is predicted to cause reduced bone morphogenetic protein (BMP) pathway activity, a mechanism that also underlies tumorigenesis in juvenile polyposis of the large bowel.
  • Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.

    Abstract

    In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to what extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
  • Janse, I., Bok, J., Hamidjaja, R. A., Hodemaekers, H. M., & van Rotterdam, B. J. (2012). Development and comparison of two assay formats for parallel detection of four biothreat pathogens by using suspension microarrays. PLoS One, 7(2), e31958. doi:10.1371/journal.pone.0031958.

    Abstract

    Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar to or slightly higher than those of real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics.
  • Janse, E. (2008). Spoken-word processing in aphasia: Effects of item overlap and item repetition. Brain and Language, 105, 185-198. doi:10.1016/j.bandl.2007.10.002.

    Abstract

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study was designed to compare effects of item overlap and item repetition in aphasic patients of different diagnostic types. Time compression did not interfere with lexical deactivation for the non-brain-damaged subjects. Furthermore, all aphasic patients showed immediate inhibition of co-activated candidates. These combined results show that deactivation is a fast process. Repetition effects, however, seem to arise only at the longer term in aphasic patients. Importantly, poor performance on diagnostic verbal STM tasks was shown to be related to lexical decision performance in both overlap and repetition conditions, which suggests a common underlying deficit.
  • Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.

    Abstract

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2012). Tracking down abstract linguistic meaning: Neural correlates of spatial frame of reference ambiguities in language. PLoS One, 7(2), e30657. doi:10.1371/journal.pone.0030657.

    Abstract

    This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
  • Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How typing shapes the meanings of words. Psychonomic Bulletin & Review, 19, 499-504. doi:10.3758/s13423-012-0229-7.

    Abstract

    The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
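    The word-level predictor described in the abstract is a count of right-side versus left-side letters. Below is a minimal sketch of such a right-side advantage score, assuming the conventional touch-typing split of the QWERTY letter keys into left-hand and right-hand sets; the exact letter sets and scoring used by Jasmin and Casasanto may differ.

    ```python
    # Minimal sketch: right-side advantage of a word on a QWERTY keyboard.
    # Assumes the conventional touch-typing split of letter keys;
    # the original study's letter sets and scoring may differ.

    LEFT_KEYS = set("qwertasdfgzxcvb")
    RIGHT_KEYS = set("yuiophjklnm")

    def right_side_advantage(word: str) -> int:
        """Number of right-hand letters minus number of left-hand letters."""
        letters = [c for c in word.lower() if c.isalpha()]
        right = sum(c in RIGHT_KEYS for c in letters)
        left = sum(c in LEFT_KEYS for c in letters)
        return right - left

    if __name__ == "__main__":
        for w in ["lip", "sad", "pony", "grave"]:
            print(w, right_side_advantage(w))
    ```

    A word with a positive score has more right-side than left-side letters; under the QWERTY effect reported above, such words would tend to receive more positive valence ratings.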
  • Jepma, M., Verdonschot, R. G., Van Steenbergen, H., Rombouts, S. A. R. B., & Nieuwenhuis, S. (2012). Neural mechanisms underlying the induction and relief of perceptual curiosity. Frontiers in Behavioral Neuroscience, 6: 5. doi:10.3389/fnbeh.2012.00005.

    Abstract

    Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (1) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex (ACC), brain regions sensitive to conflict and arousal; (2) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (3) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
  • Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.

    Abstract

    Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation.
  • Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

    Abstract

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
  • Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32(45), 16064-16069. doi:10.1523/JNEUROSCI.2926-12.2012.

    Abstract

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
  • Johnson, E. K., & Seidl, A. (2008). Clause segmentation by 6-month-olds: A crosslinguistic perspective. Infancy, 13, 440-455. doi:10.1080/15250000802329321.

    Abstract

    Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants’ attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically well-formed clauses and phrases may also help infants begin to extract information important for learning the grammatical structure of their language. Despite the potentially important role that the perception of large prosodic units may play in early language acquisition, there has been little work investigating the extraction of these units from fluent speech by infants learning languages other than English. We report 2 experiments investigating Dutch learners’ clause segmentation abilities. In these studies, Dutch-learning 6-month-olds readily extract clauses from speech. However, Dutch learners differ from English learners in that they seem to be more reliant on pauses to detect clause boundaries. Two closely related explanations for this finding are considered, both of which stem from the acoustic differences in clause boundary realizations in Dutch versus English.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P., Matsuo, A., & Perdue, C. (2008). Comparing the acquisition of finiteness: A cross-linguistic approach. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 261-276). Frankfurt am Main: Lang.
  • Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.

    Abstract

    Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potential (ERP) responses of nine-month-olds on basic level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants have identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, results were informative of visual categorization, word recognition and word-to-world mappings, all three crucial processes for vocabulary construction.
  • Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.

    Abstract

    Infants’ ability to recognize words in continuous speech is vital for building a vocabulary. We here examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the words in continuous utterances is clearly linked to future language development.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand [Abstract]. Abstracts of the Acoustics 2012 Hong Kong conference published in The Journal of the Acoustical Society of America, 131, 3311. doi:10.1121/1.4708385.

    Abstract

    Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects. The relationship between the speech and gesture/action was either complementary (e.g., “He found the answer,” while producing a calculating gesture vs. actually using a calculator) or incongruent (e.g., the same sentence paired with the incongruent gesture/action of stirring with a spoon). Participants watched the video (prime) and then responded to a written word (target) that was or was not spoken in the video prime (e.g., “found” or “cut”). ERPs were taken to the primes (time-locked to the spoken verb, e.g., “found”) and the written targets. For primes, there was a larger frontal N400 (semantic processing) to incongruent vs. congruent items for the gesture, but not action, condition. For targets, the P2 (phonemic processing) was smaller for target words following congruent vs. incongruent gesture, but not action, primes. These findings suggest that hand gestures are integrated with speech in a privileged fashion compared to manual actions on objects.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
  • Kempen, G., & Harbusch, K. (2008). Comparing linguistic judgments and corpus frequencies as windows on grammatical competence: A study of argument linearization in German clauses. In A. Steube (Ed.), The discourse potential of underspecified structures (pp. 179-192). Berlin: Walter de Gruyter.

    Abstract

    We present an overview of several corpus studies we carried out into the frequencies of argument NP orderings in the midfield of subordinate and main clauses of German. Comparing the corpus frequencies with grammaticality ratings published by Keller (2000), we observe a “grammaticality–frequency gap”: Quite a few argument orderings with zero corpus frequency are nevertheless assigned medium–range grammaticality ratings. We propose an explanation in terms of a two-factor theory. First, we hypothesize that the grammatical induction component needs a sufficient number of exposures to a syntactic pattern to incorporate it into its repertoire of more or less stable rules of grammar. Moderately to highly frequent argument NP orderings are likely to have attained this status, but not their zero-frequency counterparts. This is why the latter argument sequences cannot be produced by the grammatical encoder and are absent from the corpora. Second, we assume that an extraneous (nonlinguistic) judgment process biases the ratings of moderately grammatical linear order patterns: Confronted with such structures, the informants produce their own "ideal delivery" variant of the to-be-rated target sentence and evaluate the similarity between the two versions. A high similarity score yielded by this judgment then exerts a positive bias on the grammaticality rating—a score that should not be mistaken for an authentic grammaticality rating. We conclude that, at least in the linearization domain studied here, the goal of gaining a clear view of the internal grammar of language users is best served by a combined strategy in which grammar rules are founded on structures that elicit moderate to high grammaticality ratings and attain at least moderate usage frequencies.
  • Kempen, G., Olsthoorn, N., & Sprenger, S. (2012). Grammatical workspace sharing during language production and language comprehension: Evidence from grammatical multitasking. Language and Cognitive Processes, 27, 345-380. doi:10.1080/01690965.2010.544583.

    Abstract

    Grammatical encoding and grammatical decoding (in sentence production and comprehension, respectively) are often portrayed as independent modalities of grammatical performance that only share declarative resources: lexicon and grammar. The processing resources subserving these modalities are supposed to be distinct. In particular, one assumes the existence of two workspaces where grammatical structures are assembled and temporarily maintained—one for each modality. An alternative theory holds that the two modalities share many of their processing resources and postulates a single mechanism for the online assemblage and short-term storage of grammatical structures: a shared workspace. We report two experiments with a novel “grammatical multitasking” paradigm: the participants had to read (i.e., decode) and to paraphrase (encode) sentences presented in fragments, responding to each input fragment as fast as possible with a fragment of the paraphrase. The main finding was that grammatical constraints with respect to upcoming input that emanate from decoded sentence fragments are immediately replaced by grammatical expectations emanating from the structure of the corresponding paraphrase fragments. This evidences that the two modalities have direct access to, and operate upon, the same (i.e., token-identical) grammatical structures. This is possible only if the grammatical encoding and decoding processes command the same, shared grammatical workspace. Theoretical implications for important forms of grammatical multitasking—self-monitoring, turn-taking in dialogue, speech shadowing, and simultaneous translation—are explored.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D. J. (2008). Sentence processing in the visual and auditory modality: Do comma and prosodic break have parallel functions? Brain Research, 1224, 102-118. doi:10.1016/j.brainres.2008.05.034.

    Abstract

    Two Event-Related Potential (ERP) studies contrast the processing of locally ambiguous sentences in the visual and the auditory modality. These sentences are disambiguated by a lexical element. Before this element appears in a sentence, the sentence can also be disambiguated by a boundary marker: a comma in the visual modality, or a prosodic break in the auditory modality. Previous studies have shown that a specific ERP component, the Closure Positive Shift (CPS), can be elicited by these markers. The results of the present studies show that both the comma and the prosodic break disambiguate the ambiguous sentences before the critical lexical element, despite the fact that a clear CPS is only found in the auditory modality. Comma and prosodic break thus have parallel functions irrespective of whether they do or do not elicit a CPS.
  • Kho, K. H., Indefrey, P., Hagoort, P., Van Veelen, C. W. M., Van Rijen, P. C., & Ramsey, N. F. (2008). Unimpaired sentence comprehension after anterior temporal cortex resection. Neuropsychologia, 46(4), 1170-1178. doi:10.1016/j.neuropsychologia.2007.10.014.

    Abstract

    Functional imaging studies have demonstrated involvement of the anterior temporal cortex in sentence comprehension. It is unclear, however, whether the anterior temporal cortex is essential for this function. We studied two aspects of sentence comprehension, namely syntactic and prosodic comprehension in temporal lobe epilepsy patients who were candidates for resection of the anterior temporal lobe. Methods: Temporal lobe epilepsy patients (n = 32) with normal (left) language dominance were tested on syntactic and prosodic comprehension before and after removal of the anterior temporal cortex. The prosodic comprehension test was also compared with performance of healthy control subjects (n = 47) before surgery. Results: Overall, temporal lobe epilepsy patients did not differ from healthy controls in syntactic and prosodic comprehension before surgery. They did perform less well on an affective prosody task. Post-operative testing revealed that syntactic and prosodic comprehension did not change after removal of the anterior temporal cortex. Discussion: The unchanged performance on syntactic and prosodic comprehension after removal of the anterior temporal cortex suggests that this area is not indispensable for sentence comprehension functions in temporal epilepsy patients. Potential implications for the postulated role of the anterior temporal lobe in the healthy brain are discussed.
  • Kidd, E. (2012). Implicit statistical learning is directly associated with the acquisition of syntax. Developmental Psychology, 48(1), 171-184. doi:10.1037/a0025405.

    Abstract

    This article reports on an individual differences study that investigated the role of implicit statistical learning in the acquisition of syntax in children. One hundred children ages 4 years 5 months through 6 years 11 months completed a test of implicit statistical learning, a test of explicit declarative learning, and standardized tests of verbal and nonverbal ability. They also completed a syntactic priming task, which provided a dynamic index of children's facility to detect and respond to changes in the input frequency of linguistic structure. The results showed that implicit statistical learning ability was directly associated with the long-term maintenance of the primed structure. The results constitute the first empirical demonstration of a direct association between implicit statistical learning and syntactic acquisition in children.
  • Kidd, E. (2012). Individual differences in syntactic priming in language acquisition. Applied Psycholinguistics, 33(2), 393-418. doi:10.1017/S0142716411000415.

    Abstract

    Although the syntactic priming methodology is a promising tool for language acquisition researchers, using the technique with children raises issues that are not problematic in adult research. The current paper reports on an individual differences study that addressed some of these outstanding issues. (a) Does priming purely reflect syntactic knowledge, or are other processes involved? (b) How can we explain individual differences, which are the norm rather than the exception? (c) Do priming effects in developmental populations reflect the same mechanisms thought to be responsible for priming in adults? One hundred twenty-two (N = 122) children aged 4 years, 5 months (4;5)–6;11 (mean = 5;7) completed a syntactic priming task that aimed to prime the English passive construction, in addition to standardized tests of vocabulary, grammar, and nonverbal intelligence. The results confirmed the widely held assumption that syntactic priming reflects the presence of syntactic knowledge, but not in every instance. However, they also suggested that nonlinguistic processes contribute significantly to priming. Priming was in no way related to age. Finally, the children's linguistic knowledge and nonverbal ability determined the manner in which they were primed. The results provide a clearer picture of what it means to be primed in acquisition.
  • Kidd, E., & Cameron-Faulkner, T. (2008). The acquisition of the multiple senses of with. Linguistics, 46(1), 33-61. doi:10.1515/LING.2008.002.

    Abstract

    The present article reports on an investigation of one child's acquisition of the multiple senses of the preposition with from 2;0–4;0. Two competing claims regarding children's early representation and subsequent acquisition of with were investigated. The “multiple meanings” hypothesis predicts that children form individual form-meaning pairings for with as separate lexical entries. The “monosemy approach” (McKercher 2001) claims that children apply a unitary meaning by abstracting core features early in acquisition. The child's (“Brian”) speech and his input were coded according to eight distinguishable senses of with. The results showed that Brian first acquired the senses that were most frequent in the input (accompaniment, attribute, and instrument). Less common senses took much longer to emerge. A detailed analysis of the input showed that a variety of clues are available that potentially enable the child to distinguish among high frequency senses. The acquisition data suggested that the child initially applied a restricted one-to-one form-meaning mapping for with, which is argued to reflect the spatial properties of the preposition. On the basis of these results it is argued that neither the monosemy nor the multiple meanings approach can fully explain the data, but that the results are best explained by a combination of word learning principles and children's ability to categorize the contextual properties of each sense's use in the ambient language.
  • Kidd, E., & Lum, J. A. (2008). Sex differences in past tense overregularization. Developmental Science, 11(6), 882-889. doi:10.1111/j.1467-7687.2008.00744.x.

    Abstract

    Hartshorne and Ullman (2006) presented naturalistic language data from 25 children (15 boys, 10 girls) and showed that girls produced more past tense overregularization errors than did boys. In particular, girls were more likely to overregularize irregular verbs whose stems share phonological similarities with regular verbs. It was argued that the result supported the Declarative/Procedural model of language, a neuropsychological analogue of the dual-route approach to language. In the current study we present experimental data that are inconsistent with these naturalistic data. Eighty children (40 males, 40 females) aged 5;0–6;9 completed a past tense elicitation task, a test of declarative memory, and a test of non-verbal intelligence. The results revealed no sex differences on any of the measures. Instead, the best predictors of overregularization rates were item-level features of the test verbs. We discuss the results within the context of the dual- versus single-route debate on past tense acquisition.
  • Kim, J., Davis, C., & Cutler, A. (2008). Perceptual tests of rhythmic similarity: II. Syllable rhythm. Language and Speech, 51(4), 343-359. doi:10.1177/0023830908099069.

    Abstract

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable-, and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.
  • Kim, S., Cho, T., & McQueen, J. M. (2012). Phonetic richness can outweigh prosodically-driven phonological knowledge when learning words in an artificial language. Journal of Phonetics, 40, 443-452. doi:10.1016/j.wocn.2012.02.005.

    Abstract

    How do Dutch and Korean listeners use acoustic–phonetic information when learning words in an artificial language? Dutch has a voiceless ‘unaspirated’ stop, produced with shortened Voice Onset Time (VOT) in prosodic strengthening environments (e.g., in domain-initial position and under prominence), enhancing the feature {−spread glottis}; Korean has a voiceless ‘aspirated’ stop produced with lengthened VOT in similar environments, enhancing the feature {+spread glottis}. Given this cross-linguistic difference, two competing hypotheses were tested. The phonological-superiority hypothesis predicts that Dutch and Korean listeners should utilize shortened and lengthened VOTs, respectively, as cues in artificial-language segmentation. The phonetic-superiority hypothesis predicts that both groups should take advantage of the phonetic richness of longer VOTs (i.e., their enhanced auditory–perceptual robustness). Dutch and Korean listeners learned the words of an artificial language better when word-initial stops had longer VOTs than when they had shorter VOTs. It appears that language-specific phonological knowledge can be overridden by phonetic richness in processing an unfamiliar language. Listeners nonetheless performed better when the stimuli were based on the speech of their native languages, suggesting that the use of richer phonetic information was modulated by listeners' familiarity with the stimuli.
  • Kim, A., & Lai, V. T. (2012). Rapid interactions between lexical semantic and word form analysis during word recognition in context: Evidence from ERPs. Journal of Cognitive Neuroscience, 24, 1104-1112. doi:10.1162/jocn_a_00148.

    Abstract

    We used event-related potentials (ERPs) to investigate the timecourse of interactions between lexical-semantic and sub-lexical visual word-form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually-supported real word (e.g., “She measured the flour so she could bake a ceke …”) or did not (e.g., “She measured the flour so she could bake a tont …”) along with nonword consonant strings (e.g., “She measured the flour so she could bake a srdt …”). Pseudowords that resembled a contextually-supported real word (“ceke”) elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., “She measured the flour so she could bake a cake …”). Pseudowords that did not resemble a plausible real word (“tont”) enhanced the N170 component, as did nonword consonant strings (“srdt”). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually-predicted inputs. The findings are consistent with rapid interactions between lexical and sub-lexical representations during word recognition, in which rapid lexical access of a contextually-supported word (CAKE) provides top-down excitation of form features (“cake”), highlighting the anomaly of an unexpected word “ceke”.
  • Kim, S., Broersma, M., & Cho, T. (2012). The use of prosodic cues in learning new words in an unfamiliar language. Studies in Second Language Acquisition, 34, 415-444. doi:10.1017/S0272263112000137.

    Abstract

    The artificial language learning paradigm was used to investigate to what extent the use of prosodic features is universally applicable or specifically language driven in learning an unfamiliar language, and how nonnative prosodic patterns can be learned. Listeners of unrelated languages—Dutch (n = 100) and Korean (n = 100)—participated. The words to be learned varied with prosodic cues: no prosody, fundamental frequency (F0) rise in initial and final position, final lengthening, and final lengthening plus F0 rise. Both listener groups performed well above chance level with the final lengthening cue, confirming its crosslinguistic use. As for final F0 rise, however, Dutch listeners did not use it until the second exposure session, whereas Korean listeners used it at initial exposure. Neither group used initial F0 rise. On the basis of these results, F0 and durational cues appear to be universal in the sense that they are used across languages for their universally applicable auditory-perceptual saliency, but how they are used is language specific and constrains the use of available prosodic cues in processing a nonnative language. A discussion on how these findings bear on theories of second language (L2) speech perception and learning is provided.
  • Kirjavainen, M., Nikolaev, A., & Kidd, E. (2012). The effect of frequency and phonological neighbourhood density on the acquisition of past tense verbs by Finnish children. Cognitive Linguistics, 23(2), 273-315. doi:10.1515/cog-2012-0009.

    Abstract

    The acquisition of the past tense has received substantial attention in the psycholinguistics literature, yet most studies report data from English or closely related Indo-European languages. We report on a past tense elicitation study on 136 4–6-year-old children that were acquiring a highly inflected Finno-Ugric (Uralic) language—Finnish. The children were tested on real and novel verbs (N = 120) exhibiting (1) productive, (2) semi-productive, or (3) non-productive inflectional processes manipulated for frequency and phonological neighbourhood density (PND). We found that Finnish children are sensitive to lemma/base frequency and PND when processing inflected words, suggesting that even though children were using suffixation processes, they were also paying attention to the item level properties of the past tense verbs. This paper contributes to the growing body of research suggesting a single analogical/associative mechanism is sufficient in processing both productive (i.e., regular-like) and non-productive (i.e., irregular-like) words. We argue that seemingly rule-like elements in inflectional morphology are an emergent property of the lexicon.
  • Kirschenbaum, A., Wittenburg, P., & Heyer, G. (2012). Unsupervised morphological analysis of small corpora: First experiments with Kilivila. In F. Seifart, G. Haig, N. P. Himmelmann, D. Jung, A. Margetts, & P. Trilsbeek (Eds.), Potentials of language documentation: Methods, analyses, and utilization (pp. 32-38). Honolulu: University of Hawai'i Press.

    Abstract

    Language documentation involves linguistic analysis of the collected material, which is typically done manually. Automatic methods for language processing usually require large corpora. The method presented in this paper uses techniques from bioinformatics and contextual information to morphologically analyze raw text corpora. This paper presents initial results of the method when applied on a small Kilivila corpus.
  • Klaas, G. (2008). Hints and recommendations concerning field equipment. In A. Majid (Ed.), Field manual volume 11 (pp. vi-vii). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Klein, W. (2008). Sprache innerhalb und ausserhalb der Schule. In Deutschen Akademie für Sprache und Dichtung (Ed.), Jahrbuch 2007 (pp. 140-150). Darmstadt: Wallstein Verlag.
  • Klein, W. (2008). The topic situation. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 287-305). Frankfurt am Main: Lang.
  • Klein, W. (2008). Time in language, language in time. In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 1-12). Oxford: Blackwell.
  • Klein, W. (2008). Time in language, language in time. Language Learning, 58(suppl. 1), 1-12. doi:10.1111/j.1467-9922.2008.00457.x.
  • Klein, W. (2012). Auf dem Markt der Wissenschaften oder: Weniger wäre mehr. In K. Sonntag (Ed.), Heidelberger Profile. Herausragende Persönlichkeiten berichten über ihre Begegnung mit Heidelberg. (pp. 61-84). Heidelberg: Universitätsverlag Winter.
  • Klein, W. (2012). A way to look at second language acquisition. In M. Watorek, S. Benazzo, & M. Hickmann (Eds.), Comparative perspectives on language acquisition: A tribute to Clive Perdue (pp. 23-36). Bristol: Multilingual Matters.
  • Klein, W. (2012). Alle zwei Wochen verschwindet eine Sprache. In G. Stock (Ed.), Die Akademie am Gendarmenmarkt 2012/13, Jahresmagazin 2012/13 (pp. 8-13). Berlin: Berlin-Brandenburgische Akademie der Wissenschaften.
  • Klein, W. (2008). De gustibus est disputandum! Zeitschrift für Literaturwissenschaft und Linguistik, 152, 7-24.

    Abstract

    There are two core phenomena which any empirical investigation of beauty must account for: the existence of aesthetical experience, and the enormous variability of this experience across times, cultures, people. Hence, it would seem a hopeless enterprise to determine ‘the very nature’ of beauty, and in fact, none of the many attempts from Antiquity to the present day has found general acceptance. But what we should be able to investigate and understand is how properties of people, for example their varying cultural experiences, are correlated with the properties of objects which we evaluate. Beauty is neither only in the eye of the observer nor only in the objects which it sees - it is in the way in which specific observers see specific objects.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 152, 5-6.
  • Klein, W. (2012). Die Sprache der Denker. In J. Voss, & M. Stolleis (Eds.), Fachsprachen und Normalsprache (pp. 49-60). Göttingen: Wallstein.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W. (2008). Die Werke der Sprache: Für ein neues Verhältnis zwischen Literaturwissenschaft und Linguistik. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 8-32.

    Abstract

    All disciplines depend on language; but two of them also have language as an object – literary studies and linguistics. Their objectives are not the same – but they are sufficiently similar to invite close cooperation. This is not what we find; in fact, the development of research over the last decades has led to a relationship which is, in the typical case, characterised by friendly, and sometimes less friendly, ignorance and indifference. This article discusses some of the reasons for this development, and it suggests some conditions under which both sides would benefit from more cooperation.
  • Klein, W., & Schnell, R. (2008). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 150, 5-7.
  • Klein, W. (2008). Mündliche Textproduktion: Informationsorganisation in Texten. In N. Janich (Ed.), Textlinguistik: 15 Einführungen (pp. 217-235). Tübingen: Narr Verlag.
  • Klein, W. (2012). Grußworte. In C. Markschies, & E. Osterkamp (Eds.), Vademekum der Inspirationsmittel (pp. 63-65). Göttingen: Wallstein.
  • Klein, W. (1998). The contribution of second language acquisition research. Language Learning, 48, 527-550. doi:10.1111/0023-8333.00057.

    Abstract

    During the last 25 years, second language acquisition (SLA) research has made considerable progress, but is still far from providing a solid basis for foreign language teaching, or from a general theory of SLA. In addition, its status within the linguistic disciplines is still very low. I argue this has not much to do with low empirical or theoretical standards in the field—in this regard, SLA research is fully competitive—but with a particular perspective on the acquisition process: SLA research treats learners' utterances as deviations from a certain target, instead of as genuine manifestations of underlying language capacity; it analyses them in terms of what they are not rather than what they are. For some purposes such a "target deviation perspective" makes sense, but it will not help SLA researchers to substantially and independently contribute to a deeper understanding of the structure and function of the human language faculty. Therefore, these findings will remain of limited interest to other scientists until SLA researchers consider learner varieties a normal, in fact typical, manifestation of this unique human capacity.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (2012). The information structure of French. In M. Krifka, & R. Musan (Eds.), The expression of information structure (pp. 95-126). Berlin: de Gruyter.
  • Klein, W. (1998). Von der einfältigen Wißbegierde. Zeitschrift für Literaturwissenschaft und Linguistik, 112, 6-13.
  • Knooihuizen, R., & Dediu, D. (2012). Historical demography and historical sociolinguistics: The role of migrant integration in the development of Dunkirk French in the 17th century. Language dynamics and change, 2(1), 1-33. doi:10.1163/221058212X653067.

    Abstract

    Widespread minority language shift in Early Modern Europe is often ascribed to restrictive language policies and the migration of both majority- and minority-language speakers. However, without a sociohistorically credible account of the mechanisms through which these events caused a language shift, these policies lack explanatory power. Inspired by research on 'language histories from below,' we present an integrated sociohistorical and linguistic account that can shed light on the processes taking place during a case of language shift in the 17th and 18th centuries. We present and analyze demographic data on the immigration and integration of French speakers in previously Dutch-speaking Dunkirk in this period, showing how moderate intermarriage of immigrants and locals could have represented a motive and a mechanism for language shift against a backdrop of larger language-political processes. We then discuss the modern language-shift dialect of Dunkirk in comparison with different dialects of French. The linguistic data suggests a large influence from the dialects of migrants, underlining their role in the language shift process. The combination of sociohistorical and linguistic evidence gives us a better understanding of language shift in this period, showing the value of an integrated 'from below' approach.
  • Knudsen, B., & Liszkowski, U. (2012). Eighteen- and 24-month-old infants correct others in anticipation of action mistakes. Developmental Science, 15, 113-122. doi:10.1111/j.1467-7687.2011.01098.x.

    Abstract

    Much of human communication and collaboration is predicated on making predictions about others’ actions. Humans frequently use predictions about others’ action mistakes to correct others and spare them mistakes. Such anticipatory correcting reveals a social motivation for unsolicited helping. Cognitively, it requires forward inferences about others’ actions through mental attributions of goal and reality representations. The current study shows that infants spontaneously intervene when an adult is mistaken about the location of an object she is about to retrieve. Infants pointed out a correct location for an adult before she was about to commit a mistake. Infants did not intervene in control conditions when the adult had witnessed the misplacement, or when she did not intend to retrieve the misplaced object. Results suggest that preverbal infants anticipate a person’s mistaken action through mental attributions of both her goal and reality representations, and correct her proactively by spontaneously providing unsolicited information.
  • Knudsen, B., & Liszkowski, U. (2012). 18-month-olds predict specific action mistakes through attribution of false belief, not ignorance, and intervene accordingly. Infancy, 17, 672-691. doi:10.1111/j.1532-7078.2011.00105.x.

    Abstract

    This study employed a new “anticipatory intervening” paradigm to tease apart false belief and ignorance-based interpretations of 18-month-olds’ helpful informing. We investigated in three experiments whether 18-month-old infants inform an adult selectively about one of the two locations depending on the adult’s belief about which of the two locations held her toy. In experiments 1 and 2, the adult falsely believed that one of the locations held her toy. In experiment 3, the adult was ignorant about which of the two locations held her toy. In all cases, however, the toy had been removed from the locations and the locations contained instead materials which the adult wanted to avoid. In experiments 1 and 2, infants spontaneously and selectively informed the adult about the aversive material in the location the adult falsely believed to hold her toy. In contrast, in experiment 3, infants informed the ignorant adult about both locations equally. Results reveal that infants expected the adult to commit a specific action mistake when she held a false belief, but not when she was ignorant. Further, infants were motivated to intervene proactively. Findings reveal a predictive action-based usage of “theory-of-mind” skills at 18 months of age.
  • Knudsen, B., Henning, A., Wunsch, K., Weigelt, M., & Aschersleben, G. (2012). The end-state comfort effect in 3- to 8-year-old children in two object manipulation tasks. Frontiers in Psychology, 3: 445. doi:10.3389/fpsyg.2012.00445.

    Abstract

    The aim of the study was to compare 3- to 8-year-old children’s propensity to anticipate a comfortable hand posture at the end of a grasping movement (end-state comfort effect) between two different object manipulation tasks, the bar-transport task and the overturned-glass task. In the bar-transport task, participants were asked to insert a vertically positioned bar into a small opening of a box. In the overturned-glass task, participants were asked to put an overturned glass right-side-up on a coaster. Half of the participants experienced action effects (lights) as a consequence of their movements (AE groups), while the other half of the participants did not (No-AE groups). While there was no difference between the AE and No-AE groups, end-state comfort performance differed across age as well as between tasks. Results revealed a significant increase in end-state comfort performance in the bar-transport task from 13% in the 3-year-olds to 94% in the 8-year-olds. Interestingly, the number of children grasping the bar according to end-state comfort doubled from 3 to 4 years and from 4 to 5 years of age. In the overturned-glass task an increase in end-state comfort performance from already 63% in the 3-year-olds to 100% in the 8-year-olds was significant as well. When comparing end-state comfort performance across tasks, results showed that 3- and 4-year-old children were better at manipulating the glass as compared to manipulating the bar, most probably because children are more familiar with manipulating glasses. Together, these results suggest that the preschool years are an important period for the development of motor planning, in which familiarity with the object involved in the task plays a significant role in children’s ability to plan their movements according to end-state comfort.
  • Konopka, A. E. (2012). Planning ahead: How recent experience with structures and words changes the scope of linguistic planning. Journal of Memory and Language, 66, 143-162. doi:10.1016/j.jml.2011.08.003.

    Abstract

    The scope of linguistic planning, i.e., the amount of linguistic information that speakers prepare in advance for an utterance they are about to produce, is highly variable. Distinguishing between possible sources of this variability provides a way to discriminate between production accounts that assume structurally incremental and lexically incremental sentence planning. Two picture-naming experiments evaluated changes in speakers’ planning scope as a function of experience with message structure, sentence structure, and lexical items. On target trials participants produced sentences beginning with two semantically related or unrelated objects in the same complex noun phrase. To manipulate familiarity with sentence structure, target displays were preceded by prime displays that elicited the same or different sentence structures. To manipulate ease of lexical retrieval, target sentences began either with the higher-frequency or lower-frequency member of each semantic pair. The results show that repetition of sentence structure can extend speakers’ scope of planning from one to two words in a complex noun phrase, as indexed by the presence of semantic interference in structurally primed sentences beginning with easily retrievable words. Changes in planning scope tied to experience with phrasal structures favor production accounts assuming structural planning in early sentence formulation.
  • Kooijman, V., Johnson, E. K., & Cutler, A. (2008). Reflections on reflections of infant word recognition. In A. D. Friederici, & G. Thierry (Eds.), Early language development: Bridging brain and behaviour (pp. 91-114). Amsterdam: Benjamins.
  • Kopecka, A. (2012). Semantic granularity of placement and removal expressions in Polish. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 327-348). Amsterdam: Benjamins.

    Abstract

    This chapter explores the expression of placement (or Goal-oriented) and removal (or Source-oriented) events by speakers of Polish (a West Slavic language). Its aim is to investigate the hypothesis known as ‘Source/Goal asymmetry’ according to which languages tend to favor the expression of Goals (e.g., into, onto) and to encode them more systematically and in a more fine-grained way than Sources (e.g., from, out of). The study provides both evidence and counter-evidence for Source/Goal asymmetry. On the one hand, it shows that Polish speakers use a greater variety of verbs to convey Manner and/or mode of manipulation in the expression of placement, encoding such events in a more fine-grained manner than removal events. The expression of placement is also characterized by a greater variety of verb prefixes conveying Path and prepositional phrases (including prepositions and case markers) conveying Ground. On the other hand, the study reveals that Polish speakers attend to Sources as often as to Goals, revealing no evidence for an attentional bias toward the endpoints of events.
  • Korecky-Kröll, K., Libben, G., Stempfer, N., Wiesinger, J., Reinisch, E., Bertl, J., & Dressler, W. U. (2012). Helping a crocodile to learn German plurals: Children’s online judgment of actual, potential and illegal plural forms. Morphology, 22, 35-65. doi:10.1007/s11525-011-9191-8.

    Abstract

    A substantial tradition of linguistic inquiry has framed the knowledge of native speakers in terms of their ability to determine the grammatical acceptability of language forms that they encounter for the first time. In the domain of morphology, the productivity framework of Dressler (CLASNET Working papers 7, 1997) has emphasized the importance of this ability in terms of the graded potentiality of non-existing multimorphemic forms. The goal of this study was to investigate what role the notion of potentiality plays in online lexical well-formedness judgment among children who are native speakers of Austrian German. A total of 114 children between the ages of six and ten and a total of 40 adults between the ages of 18 and 30 (as a comparison group) participated in an online well-formedness judgment task which focused on pluralized German nouns. Concrete, picturable, high frequency German nouns were presented in three pluralized forms: (a) actual existing plural form, (b) morphologically illegal plural form, (c) potential (but not existing) plural form. Participants were shown pictures of the nouns (as a set of three identical items) and simultaneously heard one of three pluralized forms for each noun. Response latency and judgment type served as dependent variables. Results indicate that both children and adults are sensitive to the distinction between illegal and potential forms (neither of which they would have encountered). For all participants, plural frequency (rather than frequency of the singular form) affected responses for both existing and non-existing words. Other factors increasing acceptability were the presence of supplementary umlaut in addition to suffixation and homophony with existing words or word forms.
  • Kos, M., Van den Brink, D., Snijders, T. M., Rijpkema, M., Franke, B., Fernandez, G., Hagoort, P., & Whitehouse, A. (2012). CNTNAP2 and language processing in healthy individuals as measured with ERPs. PLoS One, 7(10), e46995. doi:10.1371/journal.pone.0046995.

    Abstract

    The genetic FOXP2-CNTNAP2 pathway has been shown to be involved in the language capacity. We investigated whether a common variant of CNTNAP2 (rs7794745) is relevant for syntactic and semantic processing in the general population by using a visual sentence processing paradigm while recording ERPs in 49 healthy adults. While both AA homozygotes and T-carriers showed a standard N400 effect to semantic anomalies, the response to subject-verb agreement violations differed across genotype groups. T-carriers displayed an anterior negativity preceding the P600 effect, whereas for the AA group only a P600 effect was observed. These results provide another piece of evidence that the neuronal architecture of the human faculty of language is shaped differently by effects that are genetically determined.
  • Kos, M., Van den Brink, D., & Hagoort, P. (2012). Individual variation in the late positive complex to semantic anomalies. Frontiers in Psychology, 3, 318. doi:10.3389/fpsyg.2012.00318.

    Abstract

    It is well-known that, within ERP paradigms of sentence processing, semantically anomalous words elicit N400 effects. Less clear, however, is what happens after the N400. In some cases N400 effects are followed by Late Positive Complexes (LPC), whereas in other cases such effects are lacking. We investigated several factors which could affect the LPC, such as contextual constraint, inter-individual variation and working memory. Seventy-two participants read sentences containing a semantic manipulation (Whipped cream tastes sweet/anxious and creamy). Neither contextual constraint nor working memory correlated with the LPC. Inter-individual variation played a substantial role in the elicitation of the LPC with about half of the participants showing a negative response and the other half showing an LPC. This individual variation correlated with a syntactic ERP as well as an alternative semantic manipulation. In conclusion, our results show that inter-individual variation plays a large role in the elicitation of the LPC and this may account for the diversity in LPC findings in language research.
  • Kösem, A., & van Wassenhove, V. (2012). Temporal structure in audiovisual sensory selection. PLoS One, 7(7), e40936. doi:10.1371/journal.pone.0040936.

    Abstract

    In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to which extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.
  • Köster, O., Hess, M. M., Schiller, N. O., & Künzel, H. J. (1998). The correlation between auditory speech sensitivity and speaker recognition ability. Forensic Linguistics: The International Journal of Speech, Language and the Law, 5, 22-32.

    Abstract

    In various applications of forensic phonetics the question arises as to how far aural-perceptual speaker recognition performance is reliable. Therefore, it is necessary to examine the relationship between speaker recognition results and human perception/production abilities like musicality or speech sensitivity. In this study, performance in a speaker recognition experiment and a speech sensitivity test are correlated. The results show a moderately significant positive correlation between the two tasks. Generally, performance in the speaker recognition task was better than in the speech sensitivity test. Professionals in speech and singing yielded a more homogeneous correlation than non-experts. Training in speech as well as choir-singing seems to have a positive effect on performance in speaker recognition. It may be concluded, firstly, that in cases where the reliability of voice line-up results or the credibility of a testimony have to be considered, the speech sensitivity test could be a useful indicator. Secondly, the speech sensitivity test might be integrated into the canon of possible procedures for the accreditation of forensic phoneticians. Both tests may also be used in combination.
  • Kouwenhoven, H., & Van Mulken, M. (2012). The perception of self in L1 and L2 for Dutch-English compound bilinguals. In N. De Jong, K. Juffermans, M. Keijzer, & L. Rasier (Eds.), Papers of the Anéla 2012 Applied Linguistics Conference (pp. 326-335). Delft: Eburon.
  • Krämer, I. (1998). Children's interpretations of indefinite object noun phrases. Linguistics in the Netherlands, 1998, 163-174. doi:10.1075/avt.15.15kra.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Kuperman, V., Ernestus, M., & Baayen, R. H. (2008). Frequency distributions of uniphones, diphones, and triphones in spontaneous speech. Journal of the Acoustical Society of America, 124(6), 3897-3908. doi:10.1121/1.3006378.

    Abstract

    This paper explores the relationship between the acoustic duration of phonemic sequences and their frequencies of occurrence. The data were obtained from large (sub)corpora of spontaneous speech in Dutch, English, German, and Italian. Acoustic duration of an n-phone is shown to codetermine the n-phone's frequency of use, such that languages preferentially use diphones and triphones that are neither very long nor very short. The observed distributions are well approximated by a theoretical function that quantifies the concurrent action of the self-regulatory processes of minimization of articulatory effort and minimization of perception effort.
  • Kurt, S., Fisher, S. E., & Ehret, G. (2012). Foxp2 mutations impair auditory-motor-association learning. PLoS One, 7(3), e33130. doi:10.1371/journal.pone.0033130.

    Abstract

    Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Languages and genes: Reflections on biolinguistics and the nature-nurture question. Biolinguistics, 2(1), 114-126. Retrieved from http://www.biolinguistics.eu/index.php/biolinguistics/issue/view/7/showToc.
  • Ladd, D. R., Dediu, D., & Kinsella, A. R. (2008). Reply to Bowles (2008). Biolinguistics, 2(2), 256-259.
  • Lai, V. T., Hagoort, P., & Casasanto, D. (2012). Affective primacy vs. cognitive primacy: Dissolving the debate. Frontiers in Psychology, 3, 243. doi:10.3389/fpsyg.2012.00243.

    Abstract

    When people see a snake, they are likely to activate both affective information (e.g., dangerous) and non-affective information about its ontological category (e.g., animal). According to the Affective Primacy Hypothesis, the affective information has priority, and its activation can precede identification of the ontological category of a stimulus. Alternatively, according to the Cognitive Primacy Hypothesis, perceivers must know what they are looking at before they can make an affective judgment about it. We propose that neither hypothesis holds at all times. Here we show that the relative speed with which affective and non-affective information gets activated by pictures and words depends upon the contexts in which stimuli are processed. Results illustrate that the question of whether affective information has processing priority over ontological information (or vice versa) is ill posed. Rather than seeking to resolve the debate over Cognitive vs. Affective Primacy in favor of one hypothesis or the other, a more productive goal may be to determine the factors that cause affective information to have processing priority in some circumstances and ontological information in others. Our findings support a view of the mind according to which words and pictures activate different neurocognitive representations every time they are processed, the specifics of which are co-determined by the stimuli themselves and the contexts in which they occur.
  • de Lange, F. P., Spronk, M., Willems, R. M., Toni, I., & Bekkering, H. (2008). Complementary systems for understanding action intentions. Current Biology, 18, 454-457. doi:10.1016/j.cub.2008.02.057.

    Abstract

    How humans understand the intention of others’ actions remains controversial. Some authors have suggested that intentions are recognized by means of a motor simulation of the observed action with the mirror-neuron system [1–3]. Others emphasize that intention recognition is an inferential process, often called ‘‘mentalizing’’ or employing a ‘‘theory of mind,’’ which activates areas well outside the motor system [4–6]. Here, we assessed the contribution of brain regions involved in motor simulation and mentalizing for understanding action intentions via functional brain imaging. Results show that the inferior frontal gyrus (part of the mirror-neuron system) processes the intentionality of an observed action on the basis of the visual properties of the action, irrespective of whether the subject paid attention to the intention or not. Conversely, brain areas that are part of a ‘‘mentalizing’’ network become active when subjects reflect about the intentionality of an observed action, but they are largely insensitive to the visual properties of the observed action. This supports the hypothesis that motor simulation and mentalizing have distinct but complementary functions for the recognition of others’ intentions.
  • De Lange, F. P., Koers, A., Kalkman, J. S., Bleijenberg, G., Hagoort, P., Van der Meer, J. W. M., & Toni, I. (2008). Increase in prefrontal cortical volume following cognitive behavioural therapy in patients with chronic fatigue syndrome. Brain, 131, 2172-2180. doi:10.1093/brain/awn140.

    Abstract

    Chronic fatigue syndrome (CFS) is a disabling disorder, characterized by persistent or relapsing fatigue. Recent studies have detected a decrease in cortical grey matter volume in patients with CFS, but it is unclear whether this cerebral atrophy constitutes a cause or a consequence of the disease. Cognitive behavioural therapy (CBT) is an effective behavioural intervention for CFS, which combines a rehabilitative approach of a graded increase in physical activity with a psychological approach that addresses thoughts and beliefs about CFS which may impair recovery. Here, we test the hypothesis that cerebral atrophy may be a reversible state that can ameliorate with successful CBT. We have quantified cerebral structural changes in 22 CFS patients that underwent CBT and 22 healthy control participants. At baseline, CFS patients had significantly lower grey matter volume than healthy control participants. CBT intervention led to a significant improvement in health status, physical activity and cognitive performance. Crucially, CFS patients showed a significant increase in grey matter volume, localized in the lateral prefrontal cortex. This change in cerebral volume was related to improvements in cognitive speed in the CFS patients. Our findings indicate that the cerebral atrophy associated with CFS is partially reversed after effective CBT. This result provides an example of macroscopic cortical plasticity in the adult human brain, demonstrating a surprisingly dynamic relation between behavioural state and cerebral anatomy. Furthermore, our results reveal a possible neurobiological substrate of psychotherapeutic treatment.
  • Lawson, D., Jordan, F., & Magid, K. (2008). On sex and suicide bombing: An evaluation of Kanazawa’s ‘evolutionary psychological imagination’. Journal of Evolutionary Psychology, 6(1), 73-84. doi:10.1556/JEP.2008.1002.

    Abstract

    Kanazawa (2007) proposes the ‘evolutionary psychological imagination’ (p.7) as an authoritative framework for understanding complex social and public issues. As a case study of this approach, Kanazawa addresses acts of international terrorism, specifically suicide bombings committed by Muslim men. It is proposed that a comprehensive explanation of such acts can be gained from taking an evolutionary perspective armed with only three points of cultural knowledge: 1. Muslims are exceptionally polygynous, 2. Muslim men believe they will gain reproductive access to 72 virgins if they die as a martyr and 3. Muslim men have limited access to pornography, which might otherwise relieve the tension built up from intra-sexual competition. We agree with Kanazawa that evolutionary models of human behaviour can contribute to our understanding of even the most complex social issues. However, Kanazawa’s case study, of what he refers to as ‘World War III’, rests on a flawed theoretical argument, lacks empirical backing, and holds little in the way of explanatory power.
  • Lehtonen, M., Hulten, A., Rodríguez-Fornells, A., Cunillera, T., Tuomainen, J., & Laine, M. (2012). Differences in word recognition between early bilinguals and monolinguals: Behavioral and ERP evidence. Neuropsychologia, 50, 1362-1371. doi:10.1016/j.neuropsychologia.2012.02.021.

    Abstract

    We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies also when compared to corresponding monolinguals. In ERPs, language processing differences between bilinguals vs. monolinguals have typically been found in the N400 component. In the present study, highly proficient Finnish-Swedish bilinguals who had acquired both languages during childhood were compared to Finnish monolinguals during a visual lexical decision task and simultaneous ERP recordings. Behaviorally, we found that the response latencies were overall longer in bilinguals than monolinguals, and that the effects for all three factors, frequency, morphology, and lexicality, were also larger in bilinguals even though they had acquired both languages early and were highly proficient in them. In line with this, the N400 effects induced by frequency, morphology, and lexicality were larger for bilinguals than monolinguals. Furthermore, the ERP results also suggest that while most inflected Finnish words are decomposed into stem and suffix, only monolinguals have encountered high frequency inflected word forms often enough to develop full-form representations for them. Larger behavioral and neural effects in bilinguals in these factors likely reflect a lower amount of exposure to words compared to monolinguals, as the language input of bilinguals is divided between two languages.
  • Lemhöfer, K., & Broersma, M. (2012). Introducing LexTALE: A quick and valid Lexical Test for Advanced Learners of English. Behavior Research Methods, 44, 325-343. doi:10.3758/s13428-011-0146-0.

    Abstract

    The increasing number of experimental studies on second language (L2) processing, frequently with English as the L2, calls for a practical and valid measure of English vocabulary knowledge and proficiency. In a large-scale study with Dutch and Korean speakers of L2 English, we tested whether LexTALE, a 5-min vocabulary test, is a valid predictor of English vocabulary knowledge and, possibly, even of general English proficiency. Furthermore, the validity of LexTALE was compared with that of self-ratings of proficiency, a measure frequently used by L2 researchers. The results showed the following in both speaker groups: (1) LexTALE was a good predictor of English vocabulary knowledge; (2) it also correlated substantially with a measure of general English proficiency; and (3) LexTALE was generally superior to self-ratings in its predictions. LexTALE, but not self-ratings, also correlated highly with previous experimental data on two word recognition paradigms. The test can be carried out on or downloaded from www.lextale.com.
  • Lesage, E., Morgan, B. E., Olson, A. C., Meyer, A. S., & Miall, R. C. (2012). Cerebellar rTMS disrupts predictive language processing. Current Biology, 22, R794-R795. doi:10.1016/j.cub.2012.07.006.

    Abstract

    The human cerebellum plays an important role in language, amongst other cognitive and motor functions [1], but a unifying theoretical framework about cerebellar language function is lacking. In an established model of motor control, the cerebellum is seen as a predictive machine, making short-term estimations about the outcome of motor commands. This allows for flexible control, on-line correction, and coordination of movements [2]. The homogeneous cytoarchitecture of the cerebellar cortex suggests that similar computations occur throughout the structure, operating on different input signals and with different output targets [3]. Several authors have therefore argued that this ‘motor’ model may extend to cerebellar nonmotor functions [3], [4] and [5], and that the cerebellum may support prediction in language processing [6]. However, this hypothesis has never been directly tested. Here, we used the ‘Visual World’ paradigm [7], where on-line processing of spoken sentence content can be assessed by recording the latencies of listeners' eye movements towards objects mentioned. Repetitive transcranial magnetic stimulation (rTMS) was used to disrupt function in the right cerebellum, a region implicated in language [8]. After cerebellar rTMS, listeners showed delayed eye fixations to target objects predicted by sentence content, while there was no effect on eye fixations in sentences without predictable content. The prediction deficit was absent in two control groups. Our findings support the hypothesis that computational operations performed by the cerebellum may support prediction during both motor control and language processing.

  • Lev-Ari, S., & Keysar, B. (2012). Less detailed representation of non-native language: Why non-native speakers’ stories seem more vague. Discourse Processes, 49(7), 523-538. doi:10.1080/0163853X.2012.698493.

    Abstract

    The language of non-native speakers is less reliable than the language of native speakers in conveying the speaker’s intentions. We propose that listeners expect such reduced reliability and that this leads them to adjust the manner in which they process and represent non-native language by representing non-native language in less detail. Experiment 1 shows that when people listen to a story, they are less able to detect a word change with a non-native than with a native speaker. This suggests they represent the language of a non-native speaker with fewer details. Experiment 2 shows that, above a certain threshold, the higher participants’ working memory is, the less they are able to detect the change with a non-native speaker. This suggests that adjustment to non-native speakers depends on working memory. This research has implications for the role of interpersonal expectations in the way people process language.
  • Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10(5), 553-567. doi:10.1162/089892998562960.

    Abstract

    The purpose of this study was to relate a psycholinguistic processing model of picture naming to the dynamics of cortical activation during picture naming. The activation was recorded from eight Dutch subjects with a whole-head neuromagnetometer. The processing model, based on extensive naming latency studies, is a stage model. In preparing a picture's name, the speaker performs a chain of specific operations. They are, in this order, computing the visual percept, activating an appropriate lexical concept, selecting the target word from the mental lexicon, phonological encoding, phonetic encoding, and initiation of articulation. The time windows for each of these operations are reasonably well known and could be related to the peak activity of dipole sources in the individual magnetic response patterns. The analyses showed a clear progression over these time windows from early occipital activation, via parietal and temporal to frontal activation. The major specific findings were that (1) a region in the left posterior temporal lobe, agreeing with the location of Wernicke's area, showed prominent activation starting about 200 msec after picture onset and peaking at about 350 msec (i.e., within the stage of phonological encoding), and (2) a consistent activation was found in the right parietal cortex, peaking at about 230 msec after picture onset, thus preceding and partly overlapping with the left temporal response. An interpretation in terms of the management of visual attention is proposed.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Schiller, N. O. (1998). Is the syllable frame stored? [Commentary on the BBS target article 'The frame/content theory of evolution of speech production' by Peter F. McNeilage]. Behavioral and Brain Sciences, 21, 520.

    Abstract

    This commentary discusses whether abstract metrical frames are stored. For stress-assigning languages (e.g., Dutch and English), which have a dominant stress pattern, metrical frames are stored only for words that deviate from the default stress pattern. The majority of the words in these languages are produced without retrieving any independent syllabic or metrical frame.
  • Levelt, W. J. M. (1998). The genetic perspective in psycholinguistics, or: Where do spoken words come from? Journal of Psycholinguistic Research, 27(2), 167-180. doi:10.1023/A:1023245931630.

    Abstract

    The core issue in the 19th-century sources of psycholinguistics was the question, "Where does language come from?" This genetic perspective unified the study of the ontogenesis, the phylogenesis, the microgenesis, and to some extent the neurogenesis of language. This paper makes the point that this original perspective is still a valid and attractive one. It is exemplified by a discussion of the genesis of spoken words.
  • Levelt, W. J. M. (2008). What has become of formal grammars in linguistics and psycholinguistics? [Postscript]. In Formal Grammars in linguistics and psycholinguistics (pp. 1-17). Amsterdam: John Benjamins.
  • Levinson, S. C. (2012). Authorship: Include all institutes in publishing index [Correspondence]. Nature, 485, 582. doi:10.1038/485582c.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C. (2008). Landscape, seascape and the ontology of places on Rossel Island, Papua New Guinea. Language Sciences, 30(2/3), 256-290. doi:10.1016/j.langsci.2006.12.032.

    Abstract

    This paper describes the descriptive landscape and seascape terminology of an isolate language, Yélî Dnye, spoken on a remote island off Papua New Guinea. The terminology reveals an ontology of landscape terms fundamentally mismatching that in European languages, and in current GIS applications. These landscape terms, and a rich set of seascape terms, provide the ontological basis for toponyms across subdomains. Considering what motivates landscape categorization, three factors are considered: perceptual salience, human affordance and use, and cultural ideas. The data show that cultural ideas and practices are the major categorizing force: they directly impact the ecology with environmental artifacts, construct religious ideas which play a major role in the use of the environment and its naming, and provide abstract cultural templates which organize large portions of vocabulary across subdomains.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C. (2012). Kinship and human thought. Science, 336(6084), 988-989. doi:10.1126/science.1222691.

    Abstract

    Language and communication are central to shaping concepts such as kinship categories.
  • Levinson, S. C. (2012). Interrogative intimations: On a possible social economics of interrogatives. In J. P. De Ruiter (Ed.), Questions: Formal, functional and interactional perspectives (pp. 11-32). New York: Cambridge University Press.
  • Levinson, S. C., & Brown, P. (2012). Put and Take in Yélî Dnye, the Papuan language of Rossel Island. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 273-296). Amsterdam: Benjamins.

    Abstract

    This paper describes the linguistic treatment of placement events in the Rossel Island (Papua New Guinea) language Yélî Dnye. Yélî Dnye is unusual in treating PUT and TAKE events symmetrically with a remarkable consistency. In what follows, we first provide a brief background for the language, then describe the six core PUT/TAKE verbs that were drawn upon by Yélî Dnye speakers to describe the great majority of the PUT/TAKE stimuli clips, along with some of their grammatical properties. In Section 5 we describe alternative verbs usable in particular circumstances and give an indication of the basis for variability in responses across speakers. Section 6 presents some reasons why the Yélî verb pattern for expressing PUT and TAKE events is of broad interest.
  • Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7.

    Abstract

    Philosophers, psychologists, and linguists have argued that spatial conception is pivotal to cognition in general, providing a general, egocentric, and universal framework for cognition as well as metaphors for conceptualizing many other domains. But in an aboriginal community in Northern Queensland, a system of cardinal directions informs not only language, but also memory for arbitrary spatial arrays and directions. This work suggests that fundamental cognitive parameters, like the system of coding spatial locations, can vary cross-culturally, in line with the language spoken by a community. This opens up the prospect of a fruitful dialogue between anthropology and the cognitive sciences on the complex interaction between cultural and universal factors in the constitution of mind.
  • Levinson, S. C. (2012). Preface. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. xi-xv). Amsterdam: Benjamins.
  • Levinson, S. C., & Majid, A. (2008). Preface and priorities. In A. Majid (Ed.), Field manual volume 11 (pp. iii-iv). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Levinson, S. C. (2012). The original sin of cognitive science. Topics in Cognitive Science, 4, 396-403. doi:10.1111/j.1756-8765.2012.01195.x.

    Abstract

    Classical cognitive science was launched on the premise that the architecture of human cognition is uniform and universal across the species. This premise is biologically impossible and is being actively undermined by, for example, imaging genomics. Anthropology (including archaeology, biological anthropology, linguistics, and cultural anthropology) is, in contrast, largely concerned with the diversification of human culture, language, and biology across time and space—it belongs fundamentally to the evolutionary sciences. The new cognitive sciences that will emerge from the interactions with the biological sciences will focus on variation and diversity, opening the door for rapprochement with anthropology.
  • Levinson, S. C., Bohnemeyer, J., & Enfield, N. J. (2008). Time and space questionnaire. In A. Majid (Ed.), Field Manual Volume 11 (pp. 42-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492955.

    Abstract

    This entry contains: 1. An invitation to consider to what extent the grammar of space and time shares lexical and morphosyntactic resources; the suggestions here are only prompts, since it would take a long questionnaire to fully explore this. 2. A suggestion about how to collect gestural data that might show us to what extent the spatial and temporal domains have a psychological continuity. This is really the goal, but you need to do the linguistic work first or in addition. The goal of this task is to explore the extent to which time is conceptualised on a spatial basis.
  • Levinson, S. C., & Gray, R. D. (2012). Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences, 16(3), 167-173. doi:10.1016/j.tics.2012.01.007.

    Abstract

    Computational methods have revolutionized evolutionary biology. In this paper we explore the impact these methods are now having on our understanding of the forces that both affect the diversification of human languages and shape human cognition. We show how these methods can illuminate problems ranging from the nature of constraints on linguistic variation to the role that social processes play in determining the rate of linguistic change. Throughout the paper we argue that the cognitive sciences should move away from an idealized model of human cognition, to a more biologically realistic model where variation is central.
