Publications

  • Scharenborg, O. (2010). Modeling the use of durational information in human spoken-word recognition. Journal of the Acoustical Society of America, 127, 3758-3770. doi:10.1121/1.3377050.

    Abstract

    Evidence that listeners, at least in a laboratory environment, use durational cues to help resolve temporarily ambiguous speech input has accumulated over the past decades. This paper introduces Fine-Tracker, a computational model of word recognition specifically designed for tracking fine-phonetic information in the acoustic speech signal and using it during word recognition. Two simulations were carried out using real speech as input to the model. The simulations showed that Fine-Tracker, as has been found for humans, benefits from durational information during word recognition and uses it to disambiguate the incoming speech signal. The availability of durational information allows the computational model to distinguish embedded words from their matrix words (first simulation), and to distinguish word-final realizations of /s/ from word-initial realizations (second simulation). Fine-Tracker thus provides the first computational model of human word recognition that is able to extract durational information from the speech signal and to use it to differentiate words.
  • Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.

    Abstract

    Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent.
  • Schmale, R., Cristia, A., Seidl, A., & Johnson, E. K. (2010). Developmental changes in infants’ ability to cope with dialect variation in word recognition. Infancy, 15, 650-662. doi:10.1111/j.1532-7078.2010.00032.x.

    Abstract

    Toward the end of their first year of life, infants’ overly specified word representations are thought to give way to more abstract ones, which helps them to better cope with variation not relevant to word identity (e.g., voice and affect). This developmental change may help infants process the ambient language more efficiently, thus enabling rapid gains in vocabulary growth. One particular kind of variability that infants must accommodate is that of dialectal accent, because most children will encounter speakers from different regions and backgrounds. In this study, we explored developmental changes in infants’ ability to recognize words in continuous speech by familiarizing them with words spoken by a speaker of their own region (North Midland-American English) or a different region (Southern Ontario Canadian English), and testing them with passages spoken by a speaker of the opposite dialectal accent. Our results demonstrate that 12- but not 9-month-olds readily recognize words in the face of dialectal variation.
  • Schuppler, B., Ernestus, M., Van Dommelen, W., & Koreman, J. (2010). Predicting human perception and ASR classification of word-final [t] by its acoustic sub-segmental properties. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2466-2469).

    Abstract

    This paper presents a study of the acoustic sub-segmental properties of word-final /t/ in conversational standard Dutch and how these properties contribute to whether humans and an ASR system classify the /t/ as acoustically present or absent. In general, humans and the ASR system use the same cues (presence of a constriction, a burst, and alveolar frication), but the ASR system is less sensitive to fine cues (weak bursts, smoothly starting friction) than human listeners and is misled by the presence of glottal vibration. These data inform the further development of models of human and automatic speech processing.
  • Sekine, K. (2010). Change of perspective taking in preschool age: An analysis of spontaneous gestures. Tokyo: Kazama shobo.
  • Sekine, K., & Furuyama, N. (2010). Developmental change of discourse cohesion in speech and gestures among Japanese elementary school children. Rivista di psicolinguistica applicata, 10(3), 97-116. doi:10.1400/152613.

    Abstract

    This study investigates the development of bi-modal reference maintenance by focusing on how Japanese elementary school children introduce and track animate referents in their narratives. Sixty elementary school children participated in this study, 10 from each school year (from 7 to 12 years of age). They were instructed to remember a cartoon and retell the story to their parents. We found that although there were no differences in the speech indices among the different ages, the average scores for the gesture indices of the 12-year-olds were higher than those of the other age groups. In particular, the amount of referential gestures radically increased at 12, and these children tended to use referential gestures not only for tracking referents but also for introducing characters. These results indicate that the ability to maintain a reference to create coherent narratives increases at about age 12.
  • Sekine, K. (2010). The role of gestures contributing to speech production in children. The Japanese Journal of Qualitative Psychology, 9, 115-132.
  • Senft, G. (2010). Culture change - language change: Missionaries and moribund varieties of Kilivila. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 69-95). Canberra: Pacific Linguistics.
  • Senft, G. (Ed.). (2010). Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization. Canberra: Pacific Linguistics.

    Abstract

    The contributions to this book concern the documentation, archiving and revitalization of endangered language materials. The anthology focuses mainly on endangered Oceanic languages, with articles on Vanuatu by Darrell Tryon and the Marquesas by Gabriele Cablitz, on situations of loss and gain by Ingjerd Hoem and on the Kilivila language of the Trobriands by the editor. Nick Thieberger, Peter Wittenburg and Paul Trilsbeek, and David Blundell and colleagues write about aspects of linguistic archiving. Under the rubric of revitalization, Margaret Florey and Michael Ewing write about Maluku, Jakelin Troy and Michael Walsh about Australian Aboriginal languages in southeastern Australia, whilst three articles, by Sophie Nock, Diana Johnson and Winifred Crombie concern the revitalization of Maori.
  • Senft, G. (2010). Argonauten mit Außenbordmotoren - Feldforschung auf den Trobriand-Inseln (Papua-Neuguinea) seit 1982. Mitteilungen der Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte, 31, 115-130.

    Abstract

    Since 1982 I have been studying the language and culture of the Trobriand Islanders in Papua New Guinea. After what are by now 15 trips to the Trobriand Islands, which add up to almost four years of living and working in the village of Tauwema on the island of Kaile'una, Markus Schindlbeck and Alix Hänsel invited me to report on my fieldwork to the members of the "Berliner Gesellschaft für Anthropologie, Ethnologie und Urgeschichte". That is what I do in what follows. I first describe how I came to the Trobriand Islands and how I found my way around there, and then report on the kind of research I have carried out over all these years, the forms of language and culture change I have observed in the process, and the expectations I have, on the basis of my experience so far, for the future of the Trobriand Islanders and for their language and culture.
  • Senft, G. (2010). [Review of the book Consequences of contact: Language ideologies and sociocultural transformations in Pacific societies ed. by Miki Makihara and Bambi B. Schieffelin]. Paideuma. Mitteilungen zur Kulturkunde, 56, 308-313.
  • Senft, G. (2010). Introduction. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 1-13). Canberra: Pacific Linguistics.
  • Senft, G. (2010). The Trobriand Islanders' ways of speaking. Berlin: De Gruyter.

    Abstract

    The book documents the Trobriand Islanders' typology of genres. Rooted in the 'ethnography of speaking/anthropological linguistics' paradigm, the author highlights the relevance of genres for researching language, culture and cognition in social interaction and the importance of understanding them for achieving linguistic and cultural competence. Data presented is accessible via the internet.
  • Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2010). The evolution of segmentation and sequencing: Evidence from homesign and Nicaraguan Sign Language. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 279-289). Singapore: World Scientific.
  • Seuren, P. A. M. (2010). A logic-based approach to problems in pragmatics. Poznań Studies in Contemporary Linguistics, 519-532. doi:10.2478/v10010-010-0026-2.

    Abstract

    After an exposé of the programme involved, it is shown that the Gricean maxims fail to do their job in so far as they are meant to account for the well-known problem of natural intuitions of logical entailment that deviate from standard modern logic. It is argued that there is no reason why natural logical and ontological intuitions should conform to standard logic, because standard logic is based on mathematics while natural logical and ontological intuitions derive from a cognitive system in people's minds (supported by their brain structures). A proposal is then put forward to try a totally different strategy, via (a) a grammatical reduction of surface sentences to their logico-semantic form and (b) via logic itself, in particular the notion of natural logic, based on a natural ontology and a natural set theory. Since any logical system is fully defined by (a) its ontology and its overarching notions and axioms regarding truth, (b) the meanings of its operators, and (c) the ranges of its variables, logical systems can be devised that deviate from modern logic in any or all of the above respects, as long as they remain consistent. This allows one, as an empirical enterprise, to devise a natural logic, which is as sound as standard logic but corresponds better with natural intuitions. It is hypothesised that at least two varieties of natural logic must be assumed in order to account for natural logical and ontological intuitions, since culture and scholastic education have elevated modern societies to a higher level of functionality and refinement. These two systems correspond, with corrections and additions, to Hamilton's 19th-century logic and to the classic Square of Opposition, respectively. Finally, an evaluation is presented, comparing the empirical success rates of the systems envisaged.
  • Seuren, P. A. M. (2010). Donkey sentences. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 169-171). Amsterdam: Elsevier.
  • Seuren, P. A. M., & Hamans, C. (2010). Antifunctionality in language change. Folia Linguistica, 44(1), 127-162. doi:10.1515/flin.2010.005.

    Abstract

    The main thesis of the article is that language change is only partially subject to criteria of functionality and that, as a rule, opposing forces are also at work which often correlate directly with psychological and sociopsychological parameters reflecting themselves in all areas of linguistic competence. We sketch a complex interplay of horizontal versus vertical, deliberate versus nondeliberate, functional versus antifunctional linguistic changes, which, through a variety of processes, have an effect upon the languages concerned, whether in the lexicon, the grammar, the phonology or the phonetics. Despite the overall unclarity regarding the notion of functionality in language, there are clear cases of both functionality and antifunctionality. Antifunctionality is deliberately striven for by groups of speakers who wish to distinguish themselves from other groups, for whatever reason. Antifunctionality, however, also occurs as a, probably unwanted, result of syntactic change in the acquisition process by young or adult language learners. The example is discussed of V-clustering through Predicate Raising in German and Dutch, a process that started during the early Middle Ages and was highly functional as long as it occurred on a limited scale but became antifunctional as it pervaded the entire complementation system of these languages.
  • Seuren, P. A. M. (2010). Aristotle and linguistics. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 25-27). Amsterdam: Elsevier.

    Abstract

    Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation.
  • Seuren, P. A. M. (2010). Meaning: Cognitive dependency of lexical meaning. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 424-426). Amsterdam: Elsevier.
  • Seuren, P. A. M. (2010). Language from within: Vol. 2. The logic of language. Oxford: Oxford University Press.

    Abstract

    The Logic of Language opens a new perspective on logic. Pieter Seuren argues that the logic of language derives from the lexical meanings of the logical operators. These meanings, however, prove not to be consistent. Seuren solves this problem through an in-depth analysis of the functional adequacy of natural predicate logic and standard modern logic for natural linguistic interaction. He then develops a general theory of discourse-bound interpretation, covering discourse incrementation, anaphora, presupposition and topic-comment structure, all of which, the author claims, form the 'cement' of discourse structure. This is the second of a two-volume foundational study of language, published under the title Language from Within. Pieter Seuren discusses such apparently diverse issues as the ontology underlying the semantics of language, speech act theory, intensionality phenomena, the machinery and ecology of language, sentential and lexical meaning, the natural logic of language and cognition, and the intrinsically context-sensitive nature of language - and shows them to be intimately linked. Throughout his ambitious enterprise, he maintains a constant dialogue with established views, reflecting their development from Ancient Greece to the present. The resulting synthesis concerns central aspects of research and theory in linguistics, philosophy and cognitive science.
  • Seuren, P. A. M. (2010). Presupposition. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 589-596). Amsterdam: Elsevier.
  • Sicoli, M. A. (2010). Shifting voices with participant roles: Voice qualities and speech registers in Mesoamerica. Language in Society, 39(4), 521-553. doi:10.1017/S0047404510000436.

    Abstract

    Although an increasing number of sociolinguistic researchers consider functions of voice qualities as stylistic features, few studies consider cases where voice qualities serve as the primary signs of speech registers. This article addresses this gap through the presentation of a case study of Lachixio Zapotec speech registers indexed through falsetto, breathy, creaky, modal, and whispered voice qualities. I describe the system of contrastive speech registers in Lachixio Zapotec and then track a speaker on a single evening where she switches between three of these registers. Analyzing line-by-line conversational structure, I show both obligatory and creative shifts between registers that co-occur with shifts in the participant structures of the situated social interactions. I then examine similar uses of voice qualities in other Zapotec languages and in the two unrelated language families Nahuatl and Mayan to suggest the possibility that such voice registers are a feature of the Mesoamerican culture area.
  • Sikveland, A., Öttl, A., Amdal, I., Ernestus, M., Svendsen, T., & Edlund, J. (2010). Spontal-N: A Corpus of Interactional Spoken Norwegian. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2986-2991). Paris: European Language Resources Association (ELRA).

    Abstract

    Spontal-N is a corpus of spontaneous, interactional Norwegian. To our knowledge, it is the first corpus of Norwegian in which the majority of speakers have spent significant parts of their lives in Sweden, and in which the recorded speech displays varying degrees of interference from Swedish. The corpus consists of studio-quality audio- and video-recordings of four 30-minute free conversations between acquaintances, and a manual orthographic transcription of the entire material. On the basis of the orthographic transcriptions, we automatically annotated approximately 50 percent of the material on the phoneme level, by means of a forced alignment between the acoustic signal and pronunciations listed in a dictionary. Approximately seven percent of the automatic transcription was manually corrected. Taking the manual correction as a gold standard, we evaluated several sources of pronunciation variants for the automatic transcription. Spontal-N is intended as a general purpose speech resource that is also suitable for investigating phonetic detail.
  • Simanova, I., Van Gerven, M., Oostenveld, R., & Hagoort, P. (2010). Identifying object categories from event-related EEG: Toward decoding of conceptual representations. PLoS ONE, 5(12), e14465. doi:10.1371/journal.pone.0014465.

    Abstract

    Multivariate pattern analysis is a technique that allows the decoding of conceptual information such as the semantic category of a perceived object from neuroimaging data. Impressive single-trial classification results have been reported in studies that used fMRI. Here, we investigate the possibility to identify conceptual representations from event-related EEG based on the presentation of an object in different modalities: its spoken name, its visual representation and its written name. We used Bayesian logistic regression with a multivariate Laplace prior for classification. Marked differences in classification performance were observed for the tested modalities. Highest accuracies (89% correctly classified trials) were attained when classifying object drawings. In auditory and orthographical modalities, results were lower though still significant for some subjects. The employed classification method allowed for a precise temporal localization of the features that contributed to the performance of the classifier for three modalities. These findings could help to further understand the mechanisms underlying conceptual representations. The study also provides a first step towards the use of concept decoding in the context of real-time brain-computer interface applications.
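    The following is a minimal, illustrative sketch of the single-trial decoding approach described above; it is not the authors' implementation (they used Bayesian logistic regression with a multivariate Laplace prior), and the sparse scikit-learn classifier, the simulated data, and all dimensions below are stand-in assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Simulated "EEG" data: 200 trials, 64 channels x 50 time points flattened per trial.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 64 * 50))
      y = rng.integers(0, 2, size=200)  # two object categories per trial

      # Sparse (L1-penalised) logistic regression as a stand-in for the Bayesian classifier.
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
      scores = cross_val_score(clf, X, y, cv=5)  # cross-validated single-trial accuracy
      print(scores.mean())  # close to 0.5 (chance) on purely random data

    On real data, the learned coefficients can be mapped back to channels and time points to localize the features driving classification, which parallels the temporal localization reported in the study.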
  • Simon, E., Escudero, P., & Broersma, M. (2010). Learning minimally different words in a third language: L2 proficiency as a crucial predictor of accuracy in an L3 word learning task. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the Sixth International Symposium on the Acquisition of Second Language Speech (New Sounds 2010).
  • Sjerps, M. J., & McQueen, J. M. (2010). The bounds on flexibility in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 36, 195-211. doi:10.1037/a0016803.
  • Skiba, R. (2010). Polnisch. In S. Colombo-Scheffold, P. Fenn, S. Jeuk, & J. Schäfer (Eds.), Ausländisch für Deutsche. Sprachen der Kinder - Sprachen im Klassenzimmer (2. korrigierte und erweiterte Auflage, pp. 165-176). Freiburg: Fillibach.
  • Snijders, T. M., Petersson, K. M., & Hagoort, P. (2010). Effective connectivity of cortical and subcortical regions during unification of sentence structure. NeuroImage, 52, 1633-1644. doi:10.1016/j.neuroimage.2010.05.035.

    Abstract

    In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interactions (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
  • Snijders, T. M. (2010). More than words: Neural and genetic dynamics of syntactic unification. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Snowdon, C. T., Pieper, B. A., Boe, C. Y., Cronin, K. A., Kurian, A. V., & Ziegler, T. E. (2010). Variation in oxytocin is related to variation in affiliative behavior in monogamous, pairbonded tamarins. Hormones and Behavior, 58(4), 614-618. doi:10.1016/j.yhbeh.2010.06.014.

    Abstract

    Oxytocin plays an important role in monogamous pairbonded female voles, but not in polygamous voles. Here we examined within-species variation in behavior and in female and male oxytocin levels in 14 pairs of a socially monogamous, cooperatively breeding primate, the cotton-top tamarin (Saguinus oedipus), in which both sexes share parental care and territory defense. In order to obtain a stable chronic assessment of hormones and behavior, we observed behavior and collected urinary hormonal samples across the tamarins’ 3-week ovulatory cycle. We found similar levels of urinary oxytocin in both sexes. However, basal urinary oxytocin levels varied 10-fold across pairs and pair-mates displayed similar oxytocin levels. Affiliative behavior (contact, grooming, sex) also varied greatly across the sample and explained more than half the variance in pair oxytocin levels. The variables accounting for variation in oxytocin levels differed by sex. Mutual contact and grooming explained most of the variance in female oxytocin levels, whereas sexual behavior explained most of the variance in male oxytocin levels. The initiation of contact by males and solicitation of sex by females were related to increased levels of oxytocin in both. This study demonstrates within-species variation in oxytocin that is directly related to levels of affiliative and sexual behavior. However, different behavioral mechanisms influence oxytocin levels in males and females and a strong pair relationship (as indexed by high levels of oxytocin) may require the activation of appropriate mechanisms for both sexes.
  • Spilková, H., Brenner, D., Öttl, A., Vondřička, P., Van Dommelen, W., & Ernestus, M. (2010). The Kachna L1/L2 picture replication corpus. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2432-2436). Paris: European Language Resources Association (ELRA).

    Abstract

    This paper presents the Kachna corpus of spontaneous speech, in which ten Czech and ten Norwegian speakers were recorded both in their native language and in English. The dialogues are elicited using a picture replication task that requires active cooperation and interaction of speakers by asking them to produce a drawing as close to the original as possible. The corpus is appropriate for the study of interactional features and speech reduction phenomena across native and second languages. The combination of productions in non-native English and in speakers’ native language is advantageous for investigation of L2 issues while providing an L1 behaviour reference from all the speakers. The corpus consists of 20 dialogues comprising 12 hours 53 minutes of recording, and was collected in 2008. Preparation of the transcriptions, including a manual orthographic transcription and an automatically generated phonetic transcription, is currently in progress. The phonetic transcriptions are automatically generated by aligning acoustic models with the speech signal on the basis of the orthographic transcriptions and a dictionary of pronunciation variants compiled for the relevant language. Upon completion the corpus will be made available via the European Language Resources Association (ELRA).
  • Staum Casasanto, L., Jasmin, K., & Casasanto, D. (2010). Virtually accommodating: Speech rate accommodation to a virtual interlocutor. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 127-132). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Stehouwer, H., & van Zaanen, M. (2010). Enhanced suffix arrays as language models: Virtual k-testable languages. In J. M. Sempere, & P. García (Eds.), Grammatical inference: Theoretical results and applications. 10th International Colloquium, ICGI 2010, Valencia, Spain, September 13-16, 2010. Proceedings (pp. 305-308). Berlin: Springer.

    Abstract

    In this article, we propose the use of suffix arrays to efficiently implement n-gram language models with practically unlimited size n. This approach, which is used with synchronous back-off, allows us to distinguish between alternative sequences using large contexts. We also show that we can build this kind of model with additional information for each symbol, such as part-of-speech tags and dependency information. The approach can also be viewed as a collection of virtual k-testable automata. Once the model is built, we can directly access the results of any k-testable automaton generated from the input training data. Synchronous back-off automatically identifies the k-testable automaton with the largest feasible k. We have used this approach in several classification tasks.
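    To make the approach concrete, here is a deliberately naive sketch (an illustration only, assuming a plain word-token corpus; not the authors' enhanced-suffix-array implementation): n-gram counts are obtained by binary search over a suffix array, and synchronous back-off falls back to the longest context that is actually attested.

      from bisect import bisect_left, bisect_right

      def build_suffix_array(tokens):
          # Naive O(n^2 log n) construction; real systems use enhanced suffix arrays.
          return sorted(range(len(tokens)), key=lambda i: tokens[i:])

      def ngram_count(tokens, sa, ngram):
          # Prefixes of the sorted suffixes are themselves sorted, so binary search applies.
          prefixes = [tuple(tokens[i:i + len(ngram)]) for i in sa]
          return bisect_right(prefixes, tuple(ngram)) - bisect_left(prefixes, tuple(ngram))

      def backoff_count(tokens, sa, context):
          # Back off to shorter histories until one occurs in the training data.
          for n in range(len(context), 0, -1):
              c = ngram_count(tokens, sa, context[-n:])
              if c > 0:
                  return n, c
          return 0, 0

      corpus = "the cat sat on the mat while the cat slept".split()
      sa = build_suffix_array(corpus)
      print(ngram_count(corpus, sa, ["the", "cat"]))           # 2
      print(backoff_count(corpus, sa, ["saw", "the", "cat"]))  # (2, 2): backs off to "the cat"

    Because the suffix array indexes every position in the corpus, the same structure answers queries for any n without storing separate n-gram tables, which is what allows "practically unlimited size n".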
  • Stehouwer, H., & Van Zaanen, M. (2010). Finding patterns in strings using suffix arrays. In M. Ganzha, & M. Paprzycki (Eds.), Proceedings of the International Multiconference on Computer Science and Information Technology, October 18–20, 2010. Wisła, Poland (pp. 505-511). IEEE.

    Abstract

    Finding regularities in large data sets requires implementations of systems that are efficient in both time and space. Here, we describe a newly developed system that exploits the internal structure of the enhanced suffix array to find significant patterns in a large collection of sequences. The system searches exhaustively for all significantly compressing patterns, where patterns may consist of symbols and skips or wildcards. We demonstrate a possible application of the system by detecting interesting patterns in a Dutch and an English corpus.
  • Stehouwer, H., & van Zaanen, M. (2010). Using suffix arrays as language models: Scaling the n-gram. In Proceedings of the 22nd Benelux Conference on Artificial Intelligence (BNAIC 2010), October 25-26, 2010.

    Abstract

    In this article, we propose the use of suffix arrays to implement n-gram language models with practically unlimited size n. These unbounded n-grams are called ∞-grams. This approach allows us to use large contexts efficiently to distinguish between different alternative sequences while applying synchronous back-off. From a practical point of view, the approach has been applied within the context of spelling confusibles, verb and noun agreement, and prenominal adjective ordering. These initial experiments show promising results, and we relate the performance to the size of the n-grams used for disambiguation.
  • Stivers, T., & Rossano, F. (2010). A scalar view of response relevance. Research on Language and Social Interaction, 43, 49-56. doi:10.1080/08351810903471381.
  • Stivers, T. (2010). An overview of the question-response system in American English conversation. Journal of Pragmatics, 42, 2772-2781. doi:10.1016/j.pragma.2010.04.011.

    Abstract

    This article, part of a ten-language comparative project on question–response sequences, discusses these sequences in American English conversation. The data are video-taped, spontaneous, naturally occurring conversations involving two to five adults. Relying on these data, I document the basic distributional patterns of the types of questions asked (polar, Q-word or alternative, as well as sub-types), the types of social actions implemented by these questions (e.g., repair initiations, requests for confirmation, offers or requests for information), and the types of responses (e.g., repetitional answers or yes/no tokens). I show that declarative questions are used more commonly in conversation than traditional grammars of English would suggest, and that questions serve a wider range of functions than grammars would suggest. Finally, this article offers distributional support for the idea that responses that are better "fitted" to the question are preferred.
  • Stivers, T., & Enfield, N. J. (2010). A coding scheme for question-response sequences in conversation. Journal of Pragmatics, 42, 2620-2626. doi:10.1016/j.pragma.2010.04.002.

  • Stivers, T., & Rossano, F. (2010). Mobilizing response. Research on Language and Social Interaction, 43, 3-31. doi:10.1080/08351810903471258.

    Abstract

    A fundamental puzzle in the organization of social interaction concerns how one individual elicits a response from another. This article asks what it is about some sequentially initial turns that reliably mobilizes a coparticipant to respond and under what circumstances individuals are accountable for producing a response. Whereas a linguistic approach suggests that this is what "questions" (more generally) and interrogativity (more narrowly) are for, a sociological approach to social interaction suggests that the social action a person is implementing mobilizes a recipient's response. We find that although both theories have merit, neither adequately solves the puzzle. We argue instead that different actions mobilize response to different degrees. Speakers then design their turns to perform actions, and with particular response-mobilizing features of turn-design speakers can hold recipients more accountable for responding or not. This model of response relevance allows sequential position, action, and turn design to each contribute to response relevance.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (Eds.). (2010). Question-response sequences in conversation across ten languages [Special Issue]. Journal of Pragmatics, 42(10). doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., Enfield, N. J., & Levinson, S. C. (2010). Question-response sequences in conversation across ten languages: An introduction. Journal of Pragmatics, 42, 2615-2619. doi:10.1016/j.pragma.2010.04.001.
  • Stivers, T., & Hayashi, M. (2010). Transformative answers: One way to resist a question's constraints. Language in Society, 39, 1-25. doi:10.1017/S0047404509990637.

    Abstract

    A number of Conversation Analytic studies have documented that question recipients have a variety of ways to push against the constraints that questions impose on them. This article explores the concept of transformative answers – answers through which question recipients retroactively adjust the question posed to them. Two main sorts of adjustments are discussed: question term transformations and question agenda transformations. It is shown that the operations through which interactants implement term transformations are different from the operations through which they implement agenda transformations. Moreover, term-transforming answers resist only the question’s design, while agenda-transforming answers effectively resist both design and agenda, thus implying that agenda-transforming answers resist more strongly than design-transforming answers. The implications of these different sorts of transformations for alignment and affiliation are then explored.
  • Tabak, W. (2010). Semantics and (ir)regular inflection in morphological processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation (“where” information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and assessed the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke et al., 2008, and Moores et al., 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Terrill, A. (2010). Complex predicates and complex clauses in Lavukaleve. In J. Bowden, N. P. Himmelman, & M. Ross (Eds.), A journey through Austronesian and Papuan linguistic and cultural space: Papers in honour of Andrew K. Pawley (pp. 499-512). Canberra: Pacific Linguistics.
  • Terrill, A. (2010). [Review of Bowern, Claire. 2008. Linguistic fieldwork: a practical guide]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2010). [Review of R. A. Blust The Austronesian languages. 2009. Canberra: Pacific Linguistics]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe, he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than just discussion of areal features or historical connections, but seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. The geographic, cultural, and linguistic diversity of the family
  • Torreira, F., & Ernestus, M. (2010). Phrase-medial vowel devoicing in spontaneous French. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2006-2009).

    Abstract

    This study investigates phrase-medial vowel devoicing in European French (e.g. /ty po/ [typo] 'you can'). Our spontaneous speech data confirm that French phrase-medial devoicing is a frequent phenomenon affecting high vowels preceded by voiceless consonants. We also found that devoicing is more frequent in temporally reduced and coarticulated vowels. Complete and partial devoicing were conditioned by the same variables (speech rate, consonant type and distance from the end of the AP). Given these results, we propose that phrase-medial vowel devoicing in French arises mainly from the temporal compression of vocalic gestures and the aerodynamic conditions imposed by high vowels.
  • Torreira, F., Adda-Decker, M., & Ernestus, M. (2010). The Nijmegen corpus of casual French. Speech Communication, 52, 201-212. doi:10.1016/j.specom.2009.10.004.

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual French (NCCFr). The corpus contains a total of over 36 h of recordings of 46 French speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around 90 min of speech from every pair of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Comparisons with the ESTER corpus of journalistic speech show that the two corpora contain speech of considerably different registers. A number of indicators of casualness, including swear words, casual words, verlan, disfluencies and word repetitions, are more frequent in the NCCFr than in the ESTER corpus, while the use of double negation, an indicator of formal speech, is less frequent. In general, these estimates of casualness are constant through the three parts of the recording sessions and across speakers. Based on these facts, we conclude that our corpus is a rich resource of highly casual speech, and that it can be effectively exploited by researchers in language science and technology.

  • Torreira, F., & Ernestus, M. (2010). The Nijmegen corpus of casual Spanish. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10) (pp. 2981-2985). Paris: European Language Resources Association (ELRA).

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual Spanish (NCCSp). The corpus contains around 30 hours of recordings of 52 Madrid Spanish speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around ninety minutes of speech from every group of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Information about how to obtain a copy of the corpus can be found online at http://mirjamernestus.ruhosting.nl/Ernestus/NCCSp
  • Tucker, B. V., & Warner, N. (2010). What it means to be phonetic or phonological: The case of Romanian devoiced nasals. Phonology, 27, 289-324. doi:10.1017/S0952675710000138.

    Abstract

    Phonological patterns and detailed phonetic patterns can combine to produce unusual acoustic results, but criteria for what aspects of a pattern are phonetic and what aspects are phonological are often disputed. Early literature on Romanian makes mention of nasal devoicing in word-final clusters (e.g. in /basm/ 'fairy-tale'). Using acoustic, aerodynamic and ultrasound data, the current work investigates how syllable structure, prosodic boundaries, phonetic paradigm uniformity and assimilation influence Romanian nasal devoicing. It provides instrumental phonetic documentation of devoiced nasals, a phenomenon that has not been widely studied experimentally, in a phonetically underdocumented language. We argue that sound patterns should not be separated into phonetics and phonology as two distinct systems, but neither should they all be grouped together as a single, undifferentiated system. Instead, we argue for viewing the distinction between phonetics and phonology as a largely continuous multidimensional space, within which sound patterns, including Romanian nasal devoicing, fall.
  • Tuinman, A., & Cutler, A. (2010). Casual speech processes: L1 knowledge and L2 speech perception. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 512-517). Poznan: Adama Mickiewicz University.

    Abstract

    Every language manifests casual speech processes, and hence every second language too. This study examined how listeners deal with second-language casual speech processes, as a function of the processes in their native language. We compared a match case, where a second-language process (/t/-reduction) is also operative in native speech, with a mismatch case, where a second-language process (/r/-insertion) is absent from native speech. In each case native and non-native listeners judged stimuli in which a given phoneme (in sentence context) varied along a continuum from absent to present. Second-language listeners in general mimicked native performance in the match case, but deviated significantly from native performance in the mismatch case. Together these results make it clear that the mapping from first to second language is as important in the interpretation of casual speech processes as in other dimensions of speech perception. Unfamiliar casual speech processes are difficult to adapt to in a second language. Casual speech processes that are already familiar from native speech, however, are easy to adapt to; indeed, our results even suggest that it is possible for subtle differences in their occurrence patterns across the two languages to be detected, and to be accommodated to, in second-language listening.
  • Uddén, J., Folia, V., & Petersson, K. M. (2010). The neuropharmacology of implicit learning. Current Neuropharmacology, 8, 367-381. doi:10.2174/157015910793358178.

    Abstract

    Two decades of pharmacologic research on the human capacity to implicitly acquire knowledge as well as cognitive skills and procedures have yielded surprisingly few conclusive insights. We review the empirical literature of the neuropharmacology of implicit learning. We evaluate the findings in the context of relevant computational models related to neurotransmitters such as dopamine, serotonin, acetylcholine and noradrenaline. These include models for reinforcement learning, sequence production, and categorization. We conclude, based on the reviewed literature, that one can predict improved implicit acquisition by moderately elevated dopamine levels and impaired implicit acquisition by moderately decreased dopamine levels. These effects are most prominent in the dorsal striatum. This is supported by a range of behavioral tasks in the empirical literature. Similar predictions can be made for serotonin, although there is as yet a lack of support in the literature for serotonin involvement in classical implicit learning tasks. There is currently a lack of evidence for a role of the noradrenergic and cholinergic systems in implicit and related forms of learning. GABA modulators, including benzodiazepines, seem to affect implicit learning in a complex manner and further research is needed. Finally, we identify allosteric AMPA receptor modulators as a potentially interesting target for future investigation of the neuropharmacology of procedural and implicit learning.
  • Vainio, M., Järvikivi, J., Aalto, D., & Suni, A. (2010). Phonetic tone signals phonological quantity and word structure. Journal of the Acoustical Society of America, 128, 1313-1321. doi:10.1121/1.3467767.

    Abstract

    Many languages exploit suprasegmental devices in signaling word meaning. Tone languages exploit fundamental frequency whereas quantity languages rely on segmental durations to distinguish otherwise similar words. Traditionally, duration and tone have been taken as mutually exclusive. However, some evidence suggests that, in addition to durational cues, phonological quantity is associated with and co-signaled by changes in fundamental frequency in quantity languages such as Finnish, Estonian, and Serbo-Croat. The results from the present experiment show that the structure of disyllabic word stems in Finnish is indeed signaled tonally and that the phonological length of the stressed syllable is further tonally distinguished within the disyllabic sequence. The results further indicate that the observed association of tone and duration in perception is systematically exploited in speech production in Finnish.
  • Van Rees Vellinga, M., Hanulikova, A., Weber, A., & Zwitserlood, P. (2010). A neurophysiological investigation of processing phoneme substitutions in L2. In New Sounds 2010: Sixth International Symposium on the Acquisition of Second Language Speech (pp. 518-523). Poznan, Poland: Adam Mickiewicz University.
  • Van der Meij, L., Isaac, A., & Zinn, C. (2010). A web-based repository service for vocabularies and alignments in the cultural heritage domain. In L. Aroyo, G. Antoniou, E. Hyvönen, A. Ten Teije, H. Stuckenschmidt, L. Cabral, & T. Tudorache (Eds.), The Semantic Web: Research and Applications. 7th Extended Semantic Web Conference, Proceedings, Part I (pp. 394-409). Heidelberg: Springer.

    Abstract

    Controlled vocabularies of various kinds (e.g., thesauri, classification schemes) play an integral part in making Cultural Heritage collections accessible. The various institutions participating in the Dutch CATCH programme maintain and make use of a rich and diverse set of vocabularies. This makes it hard to provide a uniform point of access to all collections at once. Our SKOS-based vocabulary and alignment repository aims at providing technology for managing the various vocabularies, and for exploiting semantic alignments across any two of them. The repository system exposes web services that effectively support the construction of tools for searching and browsing across vocabularies and collections or for collection curation (indexing), as we demonstrate.
  • Van Gerven, M., & Simanova, I. (2010). Concept classification with Bayesian multi-task learning. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics (pp. 10-17). Los Angeles: Association for Computational Linguistics.

    Abstract

    Multivariate analysis allows decoding of single-trial data in individual subjects. Since different models are obtained for each subject, it becomes hard to perform an analysis on the group level. We introduce a new algorithm for Bayesian multi-task learning which imposes a coupling between single-subject models. Using the CMU fMRI dataset, it is shown that the algorithm can be used for concept classification based on the average activation of regions in the AAL atlas. The concepts that were most easily classified correspond to the categories shelter, manipulation and eating, which is in accordance with the literature. The multi-task learning algorithm is shown to find regions of interest that are common to all subjects, which therefore facilitates interpretation of the obtained models.
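    As a loose illustration of the coupling idea in the abstract above (a penalized stand-in, not the authors' Bayesian algorithm; all data and parameter choices here are assumptions made for the example), the sketch below fits one logistic-regression model per subject while shrinking every subject's weights toward a shared group-level weight vector.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def fit_coupled(Xs, ys, lam=1.0, n_outer=20, n_inner=200, lr=0.1):
          # One weight vector per subject, pulled toward a shared mean (the coupling).
          n_features = Xs[0].shape[1]
          Ws = [np.zeros(n_features) for _ in Xs]
          group_w = np.zeros(n_features)
          for _ in range(n_outer):
              for s, (X, y) in enumerate(zip(Xs, ys)):
                  w = Ws[s]
                  for _ in range(n_inner):
                      grad = X.T @ (sigmoid(X @ w) - y) / len(y) + lam * (w - group_w)
                      w = w - lr * grad
                  Ws[s] = w
              group_w = np.mean(Ws, axis=0)  # update the shared, group-level component
          return Ws, group_w

      # Four simulated "subjects" whose true weights are noisy variants of a common pattern.
      rng = np.random.default_rng(1)
      true_w = rng.standard_normal(10)
      Xs, ys = [], []
      for _ in range(4):
          X = rng.standard_normal((100, 10))
          y = (X @ (true_w + 0.3 * rng.standard_normal(10)) > 0).astype(float)
          Xs.append(X)
          ys.append(y)
      Ws, group_w = fit_coupled(Xs, ys)

    The shared vector group_w plays a role analogous to group-level regions of interest: it captures structure common to all subjects, while the per-subject vectors in Ws absorb individual deviations.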
  • Van Gijn, R. (2010). [Review of the book Complementation ed. by R. M. W. Dixon, A. Aikhenvald]. Studies in Language, 34(1), 187-194. doi:10.1075/sl.34.1.06van.
  • Van Putten, S. (2010). [Review of the book Focus structures in African languages: The interaction of focus and grammar, edited by Enoch Oladé Aboh, Katharina Hartmann & Malte Zimmermann]. Journal of African Languages and Linguistics, 31(1), 101-104. doi:10.1515/JALL.2010.006.
  • Van Gijn, R., & Hirtzel, V. (2010). [Review of the book The Anthropology of color, ed. by Robert E. MacLaury, Galina V. Paramei and Don Dedrick]. Journal of Linguistic Anthropology, 20(1), 241-245.
  • Van der Linden, M., Van Turennout, M., & Indefrey, P. (2010). Formation of category representations in superior temporal sulcus. Journal of Cognitive Neuroscience, 22, 1270-1282. doi:10.1162/jocn.2009.21270.

    Abstract

    The human brain contains cortical areas specialized in representing object categories. Visual experience is known to change the responses in these category-selective areas of the brain. However, little is known about how category training specifically affects cortical category selectivity. Here, we investigated the experience-dependent formation of object categories using an fMRI adaptation paradigm. Outside the scanner, subjects were trained to categorize artificial bird types into arbitrary categories (jungle birds and desert birds). After training, neuronal populations in the occipito-temporal cortex, such as the fusiform and the lateral occipital gyrus, were highly sensitive to perceptual stimulus differences. This sensitivity was not present for novel birds, indicating experience-related changes in neuronal representations. Neurons in STS showed category selectivity. A release from adaptation in STS was only observed when two birds in a pair crossed the category boundary. This dissociation could not be explained by perceptual similarities because the physical difference between birds from the same side of the category boundary and between birds from opposite sides of the category boundary was equal. Together, the occipito-temporal cortex and the STS have the properties suitable for a system that can both generalize across stimuli and discriminate between them.
  • Van Gijn, R. (2010). Middle voice and ideophones, a diachronic connection: The case of Yurakaré. Studies in Language, 34, 273-297. doi:10.1075/sl.34.2.02gij.

    Abstract

    Kemmer (1993) argues that middle voice markers almost always arise diachronically through the semantic extension of a reflexive marker to other semantic uses related to reflexive. In this paper I will argue for an alternative diachronic path that has led to the development of the middle marker in Yurakaré (unclassified, Bolivia): through ideophone-verb constructions. Taking this perspective helps explain a number of synchronic peculiarities of the middle marker in Yurakaré, and it introduces a previously unnoticed channel for middle voice markers to arise.

  • Van Alphen, P. M., & Van Berkum, J. J. A. (2010). Is there pain in champagne? Semantic involvement of words within words during sense-making. Journal of Cognitive Neuroscience, 22, 2618-2626. doi:10.1162/jocn.2009.21336.

    Abstract

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial- and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.
  • Van Hout, A., & Veenstra, A. (2010). Telicity marking in Dutch child language: Event realization or no aspectual coercion? In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 216-228). Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Van Berkum, J. J. A. (2010). The brain is a prediction machine that cares about good and bad - Any implications for neuropragmatics? Italian Journal of Linguistics, 22, 181-208.

    Abstract

    Experimental pragmatics asks how people construct contextualized meaning in communication. So what does it mean for this field to add neuro- as a prefix to its name? After analyzing the options for any subfield of cognitive science, I argue that neuropragmatics can and occasionally should go beyond the instrumental use of EEG or fMRI and beyond mapping classic theoretical distinctions onto Brodmann areas. In particular, if experimental pragmatics ‘goes neuro’, it should take into account that the brain evolved as a control system that helps its bearer negotiate a highly complex, rapidly changing and often not so friendly environment. In this context, the ability to predict current unknowns, and to rapidly tell good from bad, are essential ingredients of processing. Using insights from non-linguistic areas of cognitive neuroscience as well as from EEG research on utterance comprehension, I argue that for a balanced development of experimental pragmatics, these two characteristics of the brain cannot be ignored.
  • Van den Bos, E., & Poletiek, F. H. (2010). Structural selection in implicit learning of artificial grammars. Psychological Research-Psychologische Forschung, 74(2), 138-151. doi:10.1007/s00426-009-0227-1.

    Abstract

    In the contextual cueing paradigm, Endo and Takeda (in Percept Psychophys 66:293–302, 2004) provided evidence that implicit learning involves selection of the aspect of a structure that is most useful to one’s task. The present study attempted to replicate this finding in artificial grammar learning to investigate whether or not implicit learning commonly involves such a selection. Participants in Experiment 1 were presented with an induction task that could be facilitated by several characteristics of the exemplars. For some participants, those characteristics included a perfectly predictive feature. The results suggested that the aspect of the structure that was most useful to the induction task was selected and learned implicitly. Experiment 2 provided evidence that, although salience affected participants’ awareness of the perfectly predictive feature, selection for implicit learning was mainly based on usefulness.

  • Van Leeuwen, T. M., Petersson, K. M., & Hagoort, P. (2010). Synaesthetic colour in the brain: Beyond colour areas. A functional magnetic resonance imaging study of synaesthetes and matched controls. PLoS One, 5(8), E12074. doi:10.1371/journal.pone.0012074.

    Abstract

    Background: In synaesthesia, sensations in a particular modality cause additional experiences in a second, unstimulated modality (e.g., letters elicit colour). Understanding how synaesthesia is mediated in the brain can help to understand normal processes of perceptual awareness and multisensory integration. In several neuroimaging studies, enhanced brain activity for grapheme-colour synaesthesia has been found in ventral-occipital areas that are also involved in real colour processing. Our question was whether the neural correlates of synaesthetically induced colour and real colour experience are truly shared. Methodology/Principal Findings: First, in a free viewing functional magnetic resonance imaging (fMRI) experiment, we located main effects of synaesthesia in left superior parietal lobule and in colour related areas. In the left superior parietal lobe, individual differences between synaesthetes (projector-associator distinction) also influenced brain activity, confirming the importance of the left superior parietal lobe for synaesthesia. Next, we applied a repetition suppression paradigm in fMRI, in which a decrease in the BOLD (blood-oxygenated-level-dependent) response is generally observed for repeated stimuli. We hypothesized that synaesthetically induced colours would lead to a reduction in BOLD response for subsequently presented real colours, if the neural correlates were overlapping. We did find BOLD suppression effects induced by synaesthesia, but not within the colour areas. Conclusions/Significance: Because synaesthetically induced colours were not able to suppress BOLD effects for real colour, we conclude that the neural correlates of synaesthetic colour experience and real colour experience are not fully shared. We propose that synaesthetic colour experiences are mediated by higher-order visual pathways that lie beyond the scope of classical, ventral-occipital visual areas. Feedback from these areas, in which the left parietal cortex is likely to play an important role, may induce V4 activation and the percept of synaesthetic colour.
  • Van Valin Jr., R. D. (2010). Role and reference grammar as a framework for linguistic analysis. In B. Heine, & H. Narrog (Eds.), The Oxford handbook of linguistic analysis (pp. 703-738). Oxford: Oxford University Press.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2010). Semantic facilitation in bilingual everyday speech comprehension. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1245-1248).

    Abstract

    Previous research suggests that bilinguals presented with low and high predictability sentences benefit from semantics in clear but not in conversational speech [1]. In everyday speech, however, many words are not highly predictable. Previous research has shown that native listeners can also use more subtle semantic contextual information [2]. The present study reports two auditory lexical decision experiments investigating to what extent late Asian-English bilinguals benefit from subtle semantic cues in their processing of English unreduced and reduced speech. Our results indicate that these bilinguals are less sensitive to semantic cues than native listeners for both speech registers.
  • Van Dijk, H. (2010). The state of the brain: How alpha oscillations shape behavior and event-related responses. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van Gijn, R., Hirtzel, V., & Gipper, S. (2010). Updating and loss of color terminology in Yurakaré: An interdisciplinary point of view. Language & Communication, 30(4), 240-264. doi:10.1016/j.langcom.2010.02.002.

    Abstract

    In spite of the well-established idea that language contact is fundamental for explaining language change, this aspect has been remarkably absent in most studies of color term evolution. This paper discusses the changes in the color system of Yurakaré (unclassified, Bolivia) that have occurred during the last 200 years, as a result of intensive contact with Spanish language and culture. Developing the new theoretical concept of ‘updating’, we will show that different contexts have resulted in qualitatively different changes to the color system of the language.
  • Van Uytvanck, D., Zinn, C., Broeder, D., Wittenburg, P., & Gardelleni, M. (2010). Virtual language observatory: The portal to the language resources and technology universe. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 900-903). European Language Resources Association (ELRA).

    Abstract

    Over the years, the field of Language Resources and Technology (LRT) has developed a tremendous amount of resources and tools. However, there is no ready-to-use map that researchers could use to gain a good overview and steadfast orientation when searching for, say, corpora or software tools to support their studies. It is rather the case that information is scattered across project- or organisation-specific sites, which makes it hard if not impossible for less-experienced researchers to gather all relevant material. Clearly, the provision of metadata is central to resource and software exploration. However, in the LRT field, metadata comes in many forms, tastes and qualities, and therefore substantial harmonization and curation efforts are required to provide researchers with metadata-based guidance. To address this issue a broad alliance of LRT providers (CLARIN, the Linguist List, DOBES, DELAMAN, DFKI, ELRA) has initiated the Virtual Language Observatory portal to provide a low-barrier, easy-to-follow entry point to language resources and tools; it can be accessed via http://www.clarin.eu/vlo
  • Veenstra, A., Berends, S., & Van Hout, A. (2010). Acquisition of object and quantitative pronouns in Dutch: Kinderen wassen 'hem' voordat ze 'er' twee meenemen. Groninger Arbeiten zur Germanistischen Linguistik, 51, 9-25.

    Abstract

    Despite a large literature on Dutch children’s pronoun interpretation, relatively little is known about their production. In this study we elicited pronouns in two syntactic environments: object pronouns and quantitative er (Q-er). The goal was to see how different types of pronouns develop, in particular, whether acquisition depends on their different syntactic properties. Our Dutch data add another type of language to the acquisition literature on object clitics in the Romance languages. Moreover, we present another angle on this discussion by comparing object pronouns and Q-er.
  • Verdonschot, R. G., La Heij, W., & Schiller, N. O. (2010). Semantic context effects when naming Japanese kanji, but not Chinese hànzì. Cognition, 115(3), 512-518. doi:10.1016/j.cognition.2010.03.005.

    Abstract

    The process of reading aloud bare nouns in alphabetic languages is immune to semantic context effects from pictures. This is accounted for by assuming that words in alphabetic languages can be read aloud relatively fast through a sub-lexical grapheme-phoneme conversion (GPC) route or by a direct route from orthography to word form. We examined semantic context effects in a word-naming task in two languages with logographic scripts for which GPC cannot be applied: Japanese kanji and Chinese hanzi. We showed that reading aloud bare nouns is sensitive to semantically related context pictures in Japanese, but not in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Veroude, K., Norris, D. G., Shumskaya, E., Gullberg, M., & Indefrey, P. (2010). Functional connectivity between brain regions involved in learning words of a new language. Brain and Language, 113, 21-27. doi:10.1016/j.bandl.2009.12.005.

    Abstract

    Previous studies have identified several brain regions that appear to be involved in the acquisition of novel word forms. Standard word-by-word presentation is often used although exposure to a new language normally occurs in a natural, real world situation. In the current experiment we investigated naturalistic language exposure and applied a model-free analysis for hemodynamic-response data. Functional connectivity, temporal correlations between hemodynamic activity of different areas, was assessed during rest before and after presentation of a movie of a weather report in Mandarin Chinese to Dutch participants. We hypothesized that learning of novel words might be associated with stronger functional connectivity of regions that are involved in phonological processing. Participants were divided into two groups, learners and non-learners, based on the scores on a post hoc word recognition task. The learners were able to recognize Chinese target words from the weather report, while the non-learners were not. In the first resting state period, before presentation of the movie, stronger functional connectivity was observed for the learners compared to the non-learners between the left supplementary motor area and the left precentral gyrus as well as the left insula and the left rolandic operculum, regions that are important for phonological rehearsal. After exposure to the weather report, functional connectivity between the left and right supramarginal gyrus was stronger for learners than for non-learners. This is consistent with a role of the left supramarginal gyrus in the storage of phonological forms. These results suggest both pre-existing and learning-induced differences between the two groups.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2010). Active word learning under uncertain input conditions. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2930-2933). ISCA.

    Abstract

    This paper presents an analysis of phoneme durations of emotional speech in two languages: Dutch and Korean. The analyzed corpus of emotional speech has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors and is based on judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure were used for recordings of both languages; and c) the phonetics of the carrier phrase were constructed to be permissible in both languages. The carefully controlled phonetic content of the carrier phrase allows for analysis of the role of specific phonetic features, such as phoneme duration, in emotional expression in Dutch and Korean. In this study the mutual effect of language and emotion on phoneme duration is presented.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2010). Dealing with uncertain input in word learning. In Proceedings of the IXth IEEE International Conference on Development and Learning (ICDL). Ann Arbor, MI, 18-21 Aug. 2010 (pp. 46-51). IEEE.

    Abstract

    In this paper we investigate a computational model of word learning, that is embedded in a cognitively and ecologically plausible framework. Multi-modal stimuli from four different speakers form a varied source of experience. The model incorporates active learning, attention to a communicative setting and clarity of the visual scene. The model's ability to learn associations between speech utterances and visual concepts is evaluated during training to investigate the influence of active learning under conditions of uncertain input. The results show the importance of shared attention in word learning and the model's robustness against noise.
  • Versteegh, M., Sangati, F., & Zuidema, W. (2010). Simulations of socio-linguistic change: Implications for unidirectionality. In A. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 511-512). World Scientific Publishing.
  • Völlmin, S., Amha, A., Rapold, C. J., & Zaugg-Coretti, S. (Eds.). (2010). Converbs, medial verbs, clause chaining and related issues. Köln: Rüdiger Köppe Verlag.
  • von Spiczak, S., Muhle, H., Helbig, I., De Kovel, C. G. F., Hampe, J., Gaus, V., Koeleman, B. P. C., Lindhout, D., Schreiber, S., Sander, T., & Stephani, U. (2010). Association Study of TRPC4 as a Candidate Gene for Generalized Epilepsy with Photosensitivity. Neuromolecular Medicine, 12(3), 292-299. doi:10.1007/s12017-010-8122-x.

    Abstract

    Photoparoxysmal response (PPR) is characterized by abnormal visual sensitivity of the brain to photic stimulation. Frequently associated with idiopathic generalized epilepsies (IGEs), it might be an endophenotype for cortical excitability. Transient receptor potential cation (TRPC) channels are involved in the generation of epileptiform discharges, and TRPC4 constitutes the main TRPC channel in the central nervous system. The present study investigated an association of PPR with sequence variations of the TRPC4 gene. Thirty-five single nucleotide polymorphisms (SNP) within TRPC4 were genotyped in 273 PPR probands and 599 population controls. Association analyses were performed for the broad PPR endophenotype (PPR types I-IV; n = 273), a narrow model of affectedness (PPR types III and IV; n = 214) and PPR associated with IGE (PPR/IGE; n = 106) for each SNP and for corresponding haplotypes. Association was found between the intron 5 SNP rs10507456 and PPR/IGE both for single markers (P = 0.005) and haplotype level (P = 0.01). Three additional SNPs (rs1535775, rs10161932 and rs7338118) within the same haplotype block were associated with PPR/IGE at P < 0.05 (uncorrected) as well as two more markers (rs10507457, rs7329459) located in intron 3. Again, the corresponding haplotype also showed association with PPR/IGE. Results were not significant following correction for multiple comparisons by permutation analysis for single markers and Bonferroni-Holm for haplotypes. No association was found between variants in TRPC4 and other phenotypes. Our results showed a trend toward association of TRPC4 variants and PPR/IGE. Further studies including larger samples of photosensitive probands are required to clarify the relevance of TRPC4 for PPR and IGE.
  • De Vries, M., Barth, A. C. R., Maiworm, S., Knecht, S., Zwitserlood, P., & Flöel, A. (2010). Electrical stimulation of Broca’s area enhances implicit learning of an artificial grammar. Journal of Cognitive Neuroscience, 22, 2427-2436. doi:10.1162/jocn.2009.21385.

    Abstract

    Artificial grammar learning constitutes a well-established model for the acquisition of grammatical knowledge in a natural setting. Previous neuroimaging studies demonstrated that Broca's area (left BA 44/45) is similarly activated by natural syntactic processing and artificial grammar learning. The current study was conducted to investigate the causal relationship between Broca's area and learning of an artificial grammar by means of transcranial direct current stimulation (tDCS). Thirty-eight healthy subjects participated in a between-subject design, with either anodal tDCS (20 min, 1 mA) or sham stimulation, over Broca's area during the acquisition of an artificial grammar. Performance during the acquisition phase, presented as a working memory task, was comparable between groups. In the subsequent classification task, detecting syntactic violations, and specifically, those where no cues to superficial similarity were available, improved significantly after anodal tDCS, resulting in an overall better performance. A control experiment where 10 subjects received anodal tDCS over an area unrelated to artificial grammar learning further supported the specificity of these effects to Broca's area. We conclude that Broca's area is specifically involved in rule-based knowledge, and here, in an improved ability to detect syntactic violations. The results cannot be explained by better tDCS-induced working memory performance during the acquisition phase. This is the first study that demonstrates that tDCS may facilitate acquisition of grammatical knowledge, a finding of potential interest for rehabilitation of aphasia.
  • De Vries, M., Ulte, C., Zwitserlood, P., Szymanski, B., & Knecht, S. (2010). Increasing dopamine levels in the brain improves feedback-based procedural learning in healthy participants: An artificial-grammar-learning experiment. Neuropsychologia, 48, 3193-3197. doi:10.1016/j.neuropsychologia.2010.06.024.

    Abstract

    Recently, an increasing number of studies have suggested a role for the basal ganglia and related dopamine inputs in procedural learning, specifically when learning occurs through trial-by-trial feedback (Shohamy, Myers, Kalanithi, & Gluck. (2008). Basal ganglia and dopamine contributions to probabilistic category learning. Neuroscience and Biobehavioral Reviews, 32, 219–236). A necessary relationship has however only been demonstrated in patient studies. In the present study, we show for the first time that increasing dopamine levels in the brain improves the gradual acquisition of complex information in healthy participants. We implemented two artificial-grammar-learning tasks, one with and one without performance feedback. Learning was improved after levodopa intake for the feedback-based learning task only, suggesting that dopamine plays a specific role in trial-by-trial feedback-based learning. This provides promising directions for future studies on dopaminergic modulation of cognitive functioning.
  • Warner, N., Otake, T., & Arai, A. (2010). Intonational structure as a word-boundary cue in Tokyo Japanese. Language and Speech, 53, 107-131. doi:10.1177/0023830909351235.

    Abstract

    While listeners are recognizing words from the connected speech stream, they are also parsing information from the intonational contour. This contour may contain cues to word boundaries, particularly if a language has boundary tones that occur at a large proportion of word onsets. We investigate how useful the pitch rise at the beginning of an accentual phrase (APR) would be as a potential word-boundary cue for Japanese listeners. A corpus study shows that it should allow listeners to locate approximately 40–60% of word onsets, while causing less than 1% false positives. We then present a word-spotting study which shows that Japanese listeners can, indeed, use accentual phrase boundary cues during segmentation. This work shows that the prosodic patterns that have been found in the production of Japanese also impact listeners’ processing.
  • Weber, A., Crocker, M., & Knoeferle, P. (2010). Conflicting constraints in resource-adaptive language comprehension. In M. W. Crocker, & J. Siekmann (Eds.), Resource-adaptive cognitive processes (pp. 119-141). New York: Springer.

    Abstract

    The primary goal of psycholinguistic research is to understand the architectures and mechanisms that underlie human language comprehension and production. This entails an understanding of how linguistic knowledge is represented and organized in the brain and a theory of how that knowledge is accessed when we use language. Research has traditionally emphasized purely linguistic aspects of on-line comprehension, such as the influence of lexical, syntactic, semantic and discourse constraints, and their time-course. It has become increasingly clear, however, that non-linguistic information, such as the visual environment, is also actively exploited by situated language comprehenders.
  • Weber, A., & Poellmann, K. (2010). Identifying foreign speakers with an unfamiliar accent or in an unfamiliar language. In New Sounds 2010: Sixth International Symposium on the Acquisition of Second Language Speech (pp. 536-541). Poznan, Poland: Adam Mickiewicz University.
  • Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21, 67-74. doi:10.1177/0956797609354072.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one’s own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Willems, R. M., Peelen, M. V., & Hagoort, P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719-1725. doi:10.1093/cercor/bhp234.

    Abstract

    The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this number is reduced in left-handers. Here, we assessed whether handedness analogously has an influence upon lateralization in the visual system. Using functional magnetic resonance imaging, we localized 4 more or less specialized extrastriate areas in left- and right-handers, namely fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals even in areas engaged relatively early in the visual system.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8-14. doi:10.1177/0956797609355563.

    Abstract

    Although language is an effective vehicle for communication, it is unclear how linguistic and communicative abilities relate to each other. Some researchers have argued that communicative message generation involves perspective taking (mentalizing), and—crucially—that mentalizing depends on language. We employed a verbal communication paradigm to directly test whether the generation of a communicative action relies on mentalizing and whether the cerebral bases of communicative message generation are distinct from parts of cortex sensitive to linguistic variables. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings show that communicative and linguistic abilities rely on cerebrally (and computationally) distinct mechanisms.
  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2010). A functional role for the motor system in language understanding: Evidence from rTMS [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 127). York: University of York.
  • Willems, R. M., & Hagoort, P. (2010). Cortical motor contributions to language understanding. In L. Hermer (Ed.), Reciprocal interactions among early sensory and motor areas and higher cognitive networks (pp. 51-72). Kerala, India: Research Signpost Press.

    Abstract

    Here we review evidence from cognitive neuroscience for a tight relation between language and action in the brain. We focus on two types of relation between language and action. First, we investigate whether the perception of speech and speech sounds leads to activation of parts of the cortical motor system also involved in speech production. Second, we evaluate whether understanding action-related language involves the activation of parts of the motor system. We conclude that whereas there is considerable evidence that understanding language can involve parts of our motor cortex, this relation is best thought of as inherently flexible. As we explain, the exact nature of the input as well as the intention with which language is perceived influences whether and how motor cortex plays a role in language processing.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10), 2387-2400. doi:10.1162/jocn.2009.21386.

    Abstract

    According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of “mental simulation” should be distinguished from “mental imagery” in embodied theories of language.
  • Willems, R. M., & Varley, R. (2010). Neural insights into the relation between language and communication. Frontiers in Human Neuroscience, 4, 203. doi:10.3389/fnhum.2010.00203.

    Abstract

    The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). Rapid and long-lasting adaptation to foreign-accented speech [Abstract]. Journal of the Acoustical Society of America, 128, 2486.

    Abstract

    In foreign-accented speech, listeners have to handle noticeable deviations from the standard pronunciation of a target language. Three cross-modal priming experiments investigated how short- and long-term experiences with a foreign accent influence word recognition by native listeners. In experiment 1, German-accented words were presented to Dutch listeners who had either extensive or limited prior experience with German-accented Dutch. Accented words either contained a diphthong substitution that deviated acoustically quite largely from the canonical form (huis [hys], "house", pronounced as [hoys]), or that deviated acoustically to a lesser extent (lijst [lst], "list", pronounced as [lst]). The mispronunciations never created lexical ambiguity in Dutch. While long-term experience facilitated word recognition for both types of substitutions, limited experience facilitated recognition only of words with acoustically smaller deviations. In experiment 2, Dutch listeners with limited experience listened to the German speaker for 4 min before participating in the cross-modal priming experiment. The results showed that speaker-specific learning effects for acoustically large deviations can be obtained already after a brief exposure, as long as the exposure contains evidence of the deviations. Experiment 3 investigates whether these short-term adaptation effects for foreign-accented speech are speaker-independent.
  • Witteman, M. J., & Segers, E. (2010). The modality effect tested in children in a user-paced multimedia environment. Journal of Computer Assisted Learning, 26, 132-142. doi:10.1111/j.1365-2729.2009.00335.x.

    Abstract

    The modality learning effect, according to Mayer (2001), proposes that learning is enhanced when information is presented in both the visual and auditory domain (e.g., pictures and spoken information), compared to presenting information solely in the visual channel (e.g., pictures and written text). Most of the evidence for this effect comes from adults in a laboratory setting. Therefore, we tested the modality effect with 80 children in the highest grade of elementary school, in a naturalistic setting. In a between-subjects design children either saw representational pictures with speech or representational pictures with text. Retention and transfer knowledge was tested at three moments: immediately after the intervention, one day after, and after one week. The present study did not find any evidence for a modality effect in children when the lesson is learner-paced. Instead, we found a reversed modality effect directly after the intervention for retention. A reversed modality effect was also found for the transfer questions one day later. This effect was robust, even when controlling for individual differences.
  • Wittenburg, P. (2010). Culture change in data management. In V. Luzar-Stiffler, I. Jarec, & Z. Bekic (Eds.), Proceedings of the ITI 2010, 32nd International Conference on Information Technology Interfaces (pp. 43 -48). Zagreb, Croatia: University of Zagreb.

    Abstract

    In the emerging e-Science scenario users should be able to easily combine data resources and tools/services, and machines should automatically be able to trace paths and carry out interpretations. Users who want to participate need to move from a download-first to a cyberinfrastructure paradigm, thus increasing their dependency on the seamless operation of all components in the Internet. Such a scenario is inherently complex and requires compliance with guidelines and standards to keep it working smoothly. Only a change in our culture of dealing with research data and awareness of the way we do data lifecycle management will lead to success. Since we have so many legacy resources that are not compliant with the required guidelines, since we need to admit obvious problems, in particular with standardization in the area of semantics, and since it will take much time to establish trust on the side of researchers, the e-Science scenario can only be achieved stepwise, which will take much time.
  • Wittenburg, P., & Trilsbeek, P. (2010). Digital archiving - a necessity in documentary linguistics. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving and revitalization (pp. 111-136). Canberra: Pacific Linguistics.
  • Wittenburg, P. (2010). Archiving and accessing language resources. Concurrency and Computation: Practice and Experience, 22(17), 2354-2368. doi:10.1002/cpe.1605.

    Abstract

    Languages are among the most complex systems that evolution has created. With an unforeseen speed many of these unique results of evolution are currently disappearing: every two weeks one of the 6500 still spoken languages is dying and many are subject to extreme changes due to globalization. Experts understood the need to document the languages and preserve the cultural and linguistic treasures embedded in them for future generations. Also linguistic theory will need to consider the variation of the linguistic systems encoded in languages to improve our understanding of how human minds process language material, thus accessibility to all types of resources is increasingly crucial. Deeper insights into human language processing and a higher degree of integration and interoperability between resources will also improve our language processing technology. The DOBES programme is focussing on the documentation and preservation of language material. The Max Planck Institute developed the Language Archiving Technology to help researchers when creating, archiving and accessing language resources. The recently started CLARIN research infrastructure has as its main goals to achieve broad visibility and easy accessibility of language resources.
