Publications

  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A., & Fay, D. (1975). You have a Dictionary in your Head, not a Thesaurus. Texas Linguistic Forum, 1, 27-40.
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292-314. doi:10.1016/S0749-596X(02)00001-3.

    Abstract

The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse. In Experiment 1, the first utterance instructed participants to move one object above or below a shape (e.g., “Put the candle/candy below the triangle”) and the second utterance contained an accented or deaccented definite noun phrase which referred to the same object or introduced a new entity (e.g., “Now put the CANDLE above the square” vs. “Now put the candle ABOVE THE SQUARE”). Fixations to the competitor (e.g., candy) demonstrated a bias to interpret deaccented nouns as anaphoric and accented nouns as nonanaphoric. Experiment 2 used only accented nouns in the second instruction, varying whether the referent of this second instruction was the Theme of the first instruction (e.g., “Put the candle below the triangle”) or the Goal of the first instruction (e.g., “Put the necklace below the candle”). Participants preferred to interpret accented noun phrases as referring to a previously mentioned nonfocused entity (the Goal) rather than as introducing a new unmentioned entity.
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Dediu, D. (2018). Making genealogical language classifications available for phylogenetic analysis: Newick trees, unified identifiers, and branch length. Language Dynamics and Change, 8(1), 1-21. doi:10.1163/22105832-00801001.

    Abstract

One of the best-known types of non-independence between languages is caused by genealogical relationships due to descent from a common ancestor. These can be represented by (more or less resolved and controversial) language family trees. In theory, one can argue that language families should be built through the strict application of the comparative method of historical linguistics, but in practice this is not always the case, and there are several proposed classifications of languages into language families, each with its own advantages and disadvantages. A major stumbling block shared by most of them is that they are relatively difficult to use with computational methods, and in particular with phylogenetics. This is due to their lack of standardization, coupled with the general non-availability of branch length information, which encapsulates the amount of evolution taking place on the family tree. In this paper I introduce a method (and its implementation in R) that converts the language classifications provided by four widely-used databases (Ethnologue, WALS, AUTOTYP and Glottolog) into the de facto Newick standard generally used in phylogenetics, aligns the four most used conventions for unique identifiers of linguistic entities (ISO 639-3, WALS, AUTOTYP and Glottocode), and adds branch length information from a variety of sources (the tree's own topology, an externally given numeric constant, or a distance matrix). The R scripts, input data and resulting Newick trees are available under liberal open-source licenses in a GitHub repository (https://github.com/ddediu/lgfam-newick), to encourage and promote the use of phylogenetic methods to investigate linguistic diversity and its temporal dynamics.
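As a rough illustration of the Newick format this paper targets, the sketch below serializes a nested tree into a Newick string, attaching a constant branch length to every edge (one of the branch-length options the abstract mentions). This is an invented Python sketch, not the paper's R implementation, and the example family tree is illustrative only.

```python
def to_newick(tree, branch_length=1.0):
    """Render a nested dict {name: {child: {...}, ...}} as a Newick string,
    giving every edge the same constant branch length."""
    def render(name, children):
        if not children:  # leaf: label plus branch length
            return f"{name}:{branch_length}"
        inner = ",".join(render(c, g) for c, g in children.items())
        return f"({inner}){name}:{branch_length}"
    (root, kids), = tree.items()  # exactly one root expected
    inner = ",".join(render(c, g) for c, g in kids.items())
    return f"({inner}){root};"  # the root itself carries no branch length

# Toy classification (labels use underscores, as Newick labels may not contain spaces)
family = {"West_Germanic": {"English": {}, "Dutch": {}, "German": {}}}
print(to_newick(family))
# (English:1.0,Dutch:1.0,German:1.0)West_Germanic;
```

The paper's actual pipeline additionally maps node labels to standard identifiers (ISO 639-3, Glottocode, etc.) and supports topology- or distance-derived branch lengths; the constant-length case shown here is just the simplest of those options.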
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. PNAS, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.

  • Dediu, D., & Levinson, S. C. (2018). Neanderthal language revisited: Not only us. Current Opinion in Behavioral Sciences, 21, 49-55. doi:10.1016/j.cobeha.2018.01.001.

    Abstract

Here we re-evaluate our 2013 paper on the antiquity of language (Dediu and Levinson, 2013) in the light of a surge of new information on human evolution in the last half million years. Although new genetic data suggest the existence of some cognitive differences between Neanderthals and modern humans (fully expected after hundreds of thousands of years of partially separate evolution), overall our claims that Neanderthals were fully articulate beings and that language evolution was gradual are further substantiated by the wealth of new genetic, paleontological and archeological evidence briefly reviewed here.
  • Degand, L., & Van Bergen, G. (2018). Discourse markers as turn-transition devices: Evidence from speech and instant messaging. Discourse Processes, 55, 47-71. doi:10.1080/0163853X.2016.1198136.

    Abstract

    In this article we investigate the relation between discourse markers and turn-transition strategies in face-to-face conversations and Instant Messaging (IM), that is, unplanned, real-time, text-based, computer-mediated communication. By means of a quantitative corpus study of utterances containing a discourse marker, we show that utterance-final discourse markers are used more often in IM than in face-to-face conversations. Moreover, utterance-final discourse markers are shown to occur more often at points of turn-transition compared with points of turn-maintenance in both types of conversation. From our results we conclude that the discourse markers in utterance-final position can function as a turn-transition mechanism, signaling that the turn is over and the floor is open to the hearer. We argue that this linguistic turn-taking strategy is essentially similar in face-to-face and IM communication. Our results add to the evidence that communication in IM is more like speech than like writing.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced, the [ŋ] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Den Os, E., & Boves, L. (2002). BabelWeb project develops multilingual guidelines. Multilingual Computing and Technologies, 13(1), 33-36.

    Abstract

European cooperative effort seeks best-practice architecture and procedures for international sites.
  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Derizioti, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain, and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1 but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • Devanna, P., Van de Vorst, M., Pfundt, R., Gilissen, C., & Vernes, S. C. (2018). Genome-wide investigation of an ID cohort reveals de novo 3′UTR variants affecting gene expression. Human Genetics, 137(9), 717-721. doi:10.1007/s00439-018-1925-9.

    Abstract

Intellectual disability (ID) is a severe neurodevelopmental disorder with genetically heterogeneous causes. Large-scale sequencing has led to the identification of many gene-disrupting mutations; however, a substantial proportion of cases lack a molecular diagnosis. As such, there remains much to uncover for a complete understanding of the genetic underpinnings of ID. Genetic variants present in non-coding regions of the genome have been highlighted as potential contributors to neurodevelopmental disorders given their role in regulating gene expression. Nevertheless, the functional characterization of non-coding variants remains challenging. We describe the identification and characterization of de novo non-coding variation in 3′UTR regulatory regions within an ID cohort of 50 patients. This cohort was previously screened for structural and coding pathogenic variants via CNV, whole exome and whole genome analysis. We identified 44 high-confidence single nucleotide non-coding variants within the 3′UTR regions of these 50 genomes. Four of these variants were located within predicted miRNA binding sites and were thus hypothesised to have regulatory consequences. Functional testing showed that two of the variants interfered with miRNA-mediated regulation of their target genes, AMD1 and FAIM. Both these variants were found in the same individual and their functional consequences may point to a potential role for such variants in intellectual disability.

    Additional information

    439_2018_1925_MOESM1_ESM.docx
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and disorder, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3′ untranslated region (3′UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dimroth, C. (2002). Topics, assertions and additive words: How L2 learners get from information structure to target-language syntax. Linguistics, 40(4), 891-923. doi:10.1515/ling.2002.033.

    Abstract

The article compares the integration of topic-related additive words at different stages of untutored L2 acquisition. Data stem from an “additive-elicitation task” that was designed in order to capture topic-related additive words in a context that is at the same time controlled for the underlying information structure and nondeviant from other kinds of narrative discourse. We relate the distinction between stressed and nonstressed forms of the German scope particles and adverbials auch ‘also’, noch ‘another’, wieder ‘again’, and immer noch ‘still’ to a uniform, information-structure-based principle: the stressed variants have scope over the topic information of the relevant utterances. It is then the common function of these additive words to express the additive link between the topic of the present utterance and some previous topic for which the same state of affairs is claimed to hold. This phenomenon has often been referred to as “contrastive topic,” but contrary to what this term suggests, these topic elements are by no means deviant from the default in coherent discourse. In the underlying information structure, the validity of some given state of affairs for the present topic must be under discussion. Topic-related additive words then express that the state of affairs indeed applies to this topic, their function therefore coming close to the function of assertion marking. While this functional correspondence goes along with the formal organization of the basic stages of untutored second-language acquisition, its expression brings linguistic constraints into conflict when the acquisition of finiteness pushes learners to reorganize their utterances according to target-language syntax.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger - the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘The younger – the better’. It is argued that these generalizations do not apply across the board. Age related differences like the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dimroth, C., & Lasser, I. (Eds.). (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness [Special Issue]. Linguistics, 40(4).
  • Dimroth, C., & Lasser, I. (2002). Finite options: How L1 and L2 learners cope with the acquisition of finiteness. Linguistics, 40(4), 647-651. doi:10.1515/ling.2002.027.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Dingemanse, M. (2018). Redrawing the margins of language: Lessons from research on ideophones. Glossa: a journal of general linguistics, 3(1): 4. doi:10.5334/gjgl.444.

    Abstract

    Ideophones (also known as expressives or mimetics, and including onomatopoeia) have been systematically studied in linguistics since the 1850s, when they were first described as a lexical class of vivid sensory words in West-African languages. This paper surveys the research history of ideophones, from its roots in African linguistics to its fruits in general linguistics and typology around the globe. It shows that despite a recurrent narrative of marginalisation, work on ideophones has made an impact in many areas of linguistics, from theories of phonological features to typologies of manner and motion, and from sound symbolism to sensory language. Due to their hybrid nature as gradient vocal gestures that grow roots in discrete linguistic systems, ideophones provide opportunities to reframe typological questions, reconsider the role of language ideology in linguistic scholarship, and rethink the margins of language. With ideophones increasingly being brought into the fold of the language sciences, this review synthesises past theoretical insights and empirical findings in order to enable future work to build on them.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied or not by a gesture, and completed a cued-recall task after each video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide the first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence-matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005 [309]: 2072–75). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M., Reesink, G., & Terrill, A. (2002). The East Papuan languages: A preliminary typological appraisal. Oceanic Linguistics, 41(1), 28-62.

    Abstract

    This paper examines the Papuan languages of Island Melanesia, with a view to considering their typological similarities and differences. The East Papuan languages are thought to be the descendants of the languages spoken by the original inhabitants of Island Melanesia, who arrived in the area up to 50,000 years ago. The Oceanic Austronesian languages are thought to have come into the area with the Lapita peoples 3,500 years ago. With this historical backdrop in view, our paper seeks to investigate the linguistic relationships between the scattered Papuan languages of Island Melanesia. To do this, we survey various structural features, including syntactic patterns such as constituent order in clauses and noun phrases and other features of clause structure, paradigmatic structures of pronouns, and the structure of verbal morphology. In particular, we seek to discern similarities between the languages that might call for closer investigation, with a view to establishing genetic relatedness between some or all of the languages. In addition, in examining structural relationships between languages, we aim to discover whether it is possible to distinguish between original Papuan elements and diffused Austronesian elements of these languages. As this is a vast task, our paper aims merely to lay the groundwork for investigation into these and related questions.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic) and Lavukaleve (Papuan isolate, Solomon Islands), which have a BLC with the normal copula/existential verb for the language; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardization and orthography design on the Chukchi linguistic ecology. The process of standardization has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result, standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic, as it is based on features of Russian phonology rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neu-guinea, Trobriand -Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Enard, W., Przeworski, M., Fisher, S. E., Lai, C. S. L., Wiebe, V., Kitano, T., Pääbo, S., & Monaco, A. P. (2002). Molecular evolution of FOXP2, a gene involved in speech and language [Letters to Nature]. Nature, 418, 869-872. doi:10.1038/nature01025.

    Abstract

    Language is a uniquely human trait likely to have been a prerequisite for the development of human culture. The ability to develop articulate speech relies on capabilities, such as fine control of the larynx and mouth, that are absent in chimpanzees and other great apes. FOXP2 is the first gene relevant to the human ability to develop language. A point mutation in FOXP2 co-segregates with a disorder in a family in which half of the members have severe articulation difficulties accompanied by linguistic and grammatical impairment. This gene is disrupted by translocation in an unrelated individual who has a similar disorder. Thus, two functional copies of FOXP2 seem to be required for acquisition of normal spoken language. We sequenced the complementary DNAs that encode the FOXP2 protein in the chimpanzee, gorilla, orang-utan, rhesus macaque and mouse, and compared them with the human cDNA. We also investigated intraspecific variation of the human FOXP2 gene. Here we show that human FOXP2 contains changes in amino-acid coding and a pattern of nucleotide polymorphism, which strongly suggest that this gene has been the target of selection during recent human evolution.
  • Enfield, N. J. (2002). Semantic analysis of body parts in emotion terminology: Avoiding the exoticisms of 'obstinate monosemy' and 'online extension'. Pragmatics and Cognition, 10(1), 85-106. doi:10.1075/pc.10.12.05enf.

    Abstract

    Investigation of the emotions entails reference to words and expressions conventionally used for the description of emotion experience. Important methodological issues arise for emotion researchers, and the issues are of similarly central concern in linguistic semantics more generally. I argue that superficial and/or inconsistent description of linguistic meaning can have seriously misleading results. This paper is firstly a critique of standards in emotion research for its tendency to underrate and ill-understand linguistic semantics. It is secondly a critique of standards in some approaches to linguistic semantics itself. Two major problems occur. The first is failure to distinguish between conceptually distinct meanings of single words, neglecting the well-established fact that a single phonological string can signify more than one conceptual category (i.e., that words can be polysemous). The second error involves failure to distinguish between two kinds of secondary uses of words: (1) those which are truly active “online” extensions, and (2) those which are conventionalised secondary meanings and not active (qua “extensions”) at all. These semantic considerations are crucial to conclusions one may draw about cognition and conceptualisation based on linguistic evidence.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a prepositionlike element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, topic-comment strategy, ellipsis strategy, serial verb strategy — conform with (and are presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2007). [review of the book Ethnopragmatics: Understanding discourse in cultural context ed. by Cliff Goddard]. Intercultural Pragmatics, 4(3), 419-433. doi:10.1515/IP.2007.021.
  • Enfield, N. J. (2002). How to define 'Lao', 'Thai', and 'Isan' language? A view from linguistic science. Tai Culture, 7(1), 62-67.

    Abstract

    This article argues that it is not possible to establish distinctions between 'Lao', 'Thai', and 'Isan' as separate languages or dialects by appealing to objective criteria. 'Lao', 'Thai', and 'Isan' are conceived linguistic varieties, and the ground-level reality reveals a great deal of variation, much of it not coinciding with the geographical boundaries of the 'Laos', 'Isan', and 'non-Isan Thailand' areas. Those who promote 'Lao', 'Thai', and/or 'Isan' as distinct linguistic varieties have subjective (e.g. political and/or sentimental) reasons for doing so. Objective linguistic criteria are not sufficient.
  • Enfield, N. J., & Wierzbicka, A. (2002). Introduction: The body in description of emotion. Pragmatics and Cognition, 10(1), 1-24. doi:10.1075/pc.10.12.02enf.

    Abstract

    Anthropologists and linguists have long been aware that the body is explicitly referred to in conventional description of emotion in languages around the world. There is abundant linguistic data showing expression of emotions in terms of their imagined ‘locus’ in the physical body. The most important methodological issue in the study of emotions is language, for the ways people talk give us access to ‘folk descriptions’ of the emotions. ‘Technical terminology’, whether based on English or otherwise, is not excluded from this ‘folk’ status. It may appear to be safely ‘scientific’ and thus culturally neutral, but in fact it is not: technical English is a variety of English and reflects, to some extent, culture-specific ways of thinking (and categorising) associated with the English language. People — as researchers studying other people, or as people in real-life social association — cannot directly access the emotional experience of others, and language is the usual mode of ‘packaging’ one’s experience so it may be accessible to others. Careful description of linguistic data from as broad as possible a cross-linguistic base is thus an important part of emotion research. All people experience biological events and processes associated with certain thoughts (or, as psychologists say, ‘appraisals’), but there is more to ‘emotion’ than just these physiological phenomena. Speakers of some languages talk about their emotional experiences as if they are located in some internal organ such as ‘the liver’, yet they cannot localise feeling in this physical organ. This phenomenon needs to be understood better, and one of the problems is finding a method of comparison that allows us to compare descriptions from different languages which show apparently great formal and semantic variation. Some simple concepts including feel and body are universal or near-universal, and as such are good candidates for terms of description which may help to eradicate confusion and exoticism from cross-linguistic comparison and semantic typology. Semantic analysis reveals great variation in concepts of emotion across languages and cultures — but such analysis requires a sound and well-founded methodology. While leaving room for different approaches to the task, we suggest that such a methodology can be based on empirically established linguistic universal (or near-universal) concepts, and on ‘cognitive scenarios’ articulated in terms of these concepts. Also, we warn against the danger of exoticism involved in taking all body part references ‘literally’. Above all, we argue that what is needed is a combination of empirical cross-linguistic investigations and a theoretical and methodological awareness, recognising the impossibility of exploring other people’s emotions without keeping language in focus: both as an object and as a tool of study.
  • Enfield, N. J. (2007). Lao separation verbs and the logic of linguistic event categorization. Cognitive Linguistics, 18(2), 287-296. doi:10.1515/COG.2007.016.

    Abstract

    While there are infinite conceivable events of material separation, those actually encoded in the conventions of a given language's verb semantics number only a few. Furthermore, there appear to be crosslinguistic parallels in the native verbal analysis of this conceptual domain. What are the operative distinctions, and why these? This article analyses a key subset of the bivalent (transitive) verbs of cutting and breaking in Lao. I present a decompositional analysis of the verbs glossed 'cut (off)', 'cut.into.with.placed.blade', 'cut.into.with.moving.blade', and 'snap', pursuing the idea that the attested combinations of sub-events have a natural logic to them. Consideration of the nature of linguistic categories, as distinct from categories in general, suggests that the attested distinctions must have ethnographic and social interactional significance, raising new lines of research for cognitive semantics.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The Development of Argument Structure in Central Taurus Sign Language. Sign Language & Linguistics, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express argument structure unambiguously. This study presents evidence for the emergence and the incremental development of these basic mechanisms in a newly developing language, Central Taurus Sign Language. Our analyses identify universal patterns in both the emergence and development of these mechanisms and in language-specific trajectories.
  • Ernestus, M., Van Mulken, M., & Baayen, R. H. (2007). Ridders en heiligen in tijd en ruimte: Moderne stylometrische technieken toegepast op Oud-Franse teksten. Taal en Tongval, 58, 1-83.

    Abstract

    This article shows that Old French literary texts differ systematically in their relative frequencies of syntactic constructions. These frequencies reflect differences in register (poetry versus prose), region (Picardy, Champagne, and Eastern France), time period (until 1250, 1251–1300, 1301–1350), and genre (hagiography, romance of chivalry, or other).
  • Ernestus, M., & Baayen, R. H. (2007). Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes, 22(1), 1-24. doi:10.1080/01690960500268303.

    Abstract

    Two lexical decision experiments addressed the role of paradigmatic effects in auditory word recognition. Experiment 1 showed that listeners classified a form with an incorrectly voiced final obstruent more readily as a word if the obstruent is realised as voiced in other forms of that word's morphological paradigm. Moreover, if such was the case, the exact probability of paradigmatic voicing emerged as a significant predictor of the response latencies. A greater probability of voicing correlated with longer response latencies for words correctly realised with voiceless final obstruents. A similar effect of this probability was observed in Experiment 2 for words with completely voiceless or weakly voiced (incompletely neutralised) final obstruents. These data demonstrate the relevance of paradigmatically related complex words for the processing of morphologically simple words in auditory word recognition.
  • Ernestus, M., Baayen, R. H., & Schreuder, R. (2002). The recognition of reduced word forms. Brain and Language, 81(1-3), 162-173. doi:10.1006/brln.2001.2514.

    Abstract

    This article addresses the recognition of reduced word forms, which are frequent in casual speech. We describe two experiments on Dutch showing that listeners only recognize highly reduced forms well when these forms are presented in their full context and that the probability that a listener recognizes a word form in limited context is strongly correlated with the degree of reduction of the form. Moreover, we show that the effect of degree of reduction can only partly be interpreted as the effect of the intelligibility of the acoustic signal, which is negatively correlated with degree of reduction. We discuss the consequences of our findings for models of spoken word recognition and especially for the role that storage plays in these models.
  • Essegbey, J., & Ameka, F. K. (2007). "Cut" and "break" verbs in Gbe and Sranan. Journal of Pidgin and Creole Languages, 22(1), 37-55. doi:10.1075/jpcl.22.1.04ess.

    Abstract

    This paper compares “cut” and “break” verbs in four variants of Gbe, namely Anfoe, Anlo, Fon and Ayizo, with those of Sranan. “Cut” verbs are change-of-state verbs that co-lexicalize the type of action that brings about a change, the type of instrument or instrument part, and the manner in which a change occurs. By contrast, “break” verbs co-lexicalize either the type of object or the type of change. It has been hypothesized that “cut” verbs are unergative while “break” verbs are unaccusative; for example, “break” verbs participate in the causative alternation construction but “cut” verbs do not. We show that although there are some differences in the meanings of “cut” and “break” verbs across the Gbe languages, significant generalizations can be made with regard to their lexicalization patterns. By contrast, the meanings of “cut” and “break” verbs in Sranan are closer to those of their etymons in English and Dutch. However, despite the differences in the meanings of “cut” and “break” verbs between the Gbe languages and Sranan, the syntax of the verbs in Sranan is similar to that of the Eastern Gbe variants, namely Fon and Ayizo. We look at the implications of our findings for the relexification hypothesis. (copyright Benjamins)
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within the FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Faller, M. (2002). The evidential and validational licensing conditions for the Cusco Quechua enclitic -mi. Belgian Journal of Linguistics, 16, 7-21. doi:10.1075/bjl.16.02fa.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE) scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method.
    Results: Results showed that temporomandibular joint coils elicited the least uniform image and brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image, while the other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) helps optimize image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Felser, C., & Roberts, L. (2007). Processing wh-dependencies in a second language: A cross-modal priming study. Second Language Research, 23(1), 9-36. doi:10.1177/0267658307071600.

    Abstract

    This study investigates the real-time processing of wh-dependencies by advanced Greek-speaking learners of English using a cross-modal picture priming task. Participants were asked to respond to different types of picture target presented either at structurally defined gap positions, or at pre-gap control positions, while listening to sentences containing indirect-object relative clauses. Our results indicate that the learners processed the experimental sentences differently from both adult native speakers of English and monolingual English-speaking children. Contrary to what has been found for native speakers, the learners' response pattern was not influenced by individual working memory differences. Adult second language learners differed from native speakers with a relatively high reading or listening span in that they did not show any evidence of structurally based antecedent reactivation at the point of the indirect object gap. They also differed from low-span native speakers, however, in that they showed evidence of maintained antecedent activation during the processing of the experimental sentences. Whereas the localized priming effect observed in the high-span controls is indicative of trace-based antecedent reactivation in native sentence processing, the results from the Greek-speaking learners support the hypothesis that the mental representations built during non-native language processing lack abstract linguistic structure such as movement traces.
  • Fisher, S. E., Francks, C., McCracken, J. T., McGough, J. J., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Crawford, L. R., Palmer, C. G. S., Woodward, J. A., Del’Homme, M., Cantwell, D. P., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2002). A genomewide scan for loci involved in Attention-Deficit/Hyperactivity Disorder. American Journal of Human Genetics, 70(5), 1183-1196. doi:10.1086/340112.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a common heritable disorder with a childhood onset. Molecular genetic studies of ADHD have previously focused on examining the roles of specific candidate genes, primarily those involved in dopaminergic pathways. We have performed the first systematic genomewide linkage scan for loci influencing ADHD in 126 affected sib pairs, using a ∼10-cM grid of microsatellite markers. Allele-sharing linkage methods enabled us to exclude any loci with a λs of ⩾3 from 96% of the genome and those with a λs of ⩾2.5 from 91%, indicating that there is unlikely to be a major gene involved in ADHD susceptibility in our sample. Under a strict diagnostic scheme we could exclude all screened regions of the X chromosome for a locus-specific λs of ⩾2 in brother-brother pairs, demonstrating that the excess of affected males with ADHD is probably not attributable to a major X-linked effect. Qualitative trait maximum LOD score analyses pointed to a number of chromosomal sites that may contain genetic risk factors of moderate effect. None exceeded genomewide significance thresholds, but LOD scores were >1.5 for regions on 5p12, 10q26, 12q23, and 16p13. Quantitative-trait analysis of ADHD symptom counts implicated a region on 12p13 (maximum LOD 2.6) that also yielded a LOD >1 when qualitative methods were used. A survey of regions containing 36 genes that have been proposed as candidates for ADHD indicated that 29 of these genes, including DRD4 and DAT1, could be excluded for a λs of 2. Only three of the candidates—DRD5, 5HTT, and CALCYON—coincided with sites of positive linkage identified by our screen. Two of the regions highlighted in the present study, 2q24 and 16p13, coincided with the top linkage peaks reported by a recent genome-scan study of autistic sib pairs.
  • Fisher, S. E., & DeFries, J. C. (2002). Developmental dyslexia: Genetic dissection of a complex cognitive trait. Nature Reviews Neuroscience, 3, 767-780. doi:10.1038/nrn936.

    Abstract

    Developmental dyslexia, a specific impairment of reading ability despite adequate intelligence and educational opportunity, is one of the most frequent childhood disorders. Since the first documented cases at the beginning of the last century, it has become increasingly apparent that the reading problems of people with dyslexia form part of a heritable neurobiological syndrome. As for most cognitive and behavioural traits, phenotypic definition is fraught with difficulties and the genetic basis is complex, making the isolation of genetic risk factors a formidable challenge. Against such a background, it is notable that several recent studies have reported the localization of genes that influence dyslexia and other language-related traits. These investigations exploit novel research approaches that are relevant to many areas of human neurogenetics.
  • Fisher, S. E., Francks, C., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Cardon, L. R., Ishikawa-Brush, Y., Richardson, A. J., Talcott, J. B., Gayán, J., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2002). Independent genome-wide scans identify a chromosome 18 quantitative-trait locus influencing dyslexia. Nature Genetics, 30(1), 86-91. doi:10.1038/ng792.

    Abstract

    Developmental dyslexia is defined as a specific and significant impairment in reading ability that cannot be explained by deficits in intelligence, learning opportunity, motivation or sensory acuity. It is one of the most frequently diagnosed disorders in childhood, representing a major educational and social problem. It is well established that dyslexia is a significantly heritable trait with a neurobiological basis. The etiological mechanisms remain elusive, however, despite being the focus of intensive multidisciplinary research. All attempts to map quantitative-trait loci (QTLs) influencing dyslexia susceptibility have targeted specific chromosomal regions, so that inferences regarding genetic etiology have been made on the basis of very limited information. Here we present the first two complete QTL-based genome-wide scans for this trait, in large samples of families from the United Kingdom and United States. Using single-point analysis, linkage to marker D18S53 was independently identified as being one of the most significant results of the genome in each scan (P< or =0.0004 for single word-reading ability in each family sample). Multipoint analysis gave increased evidence of 18p11.2 linkage for single-word reading, yielding top empirical P values of 0.00001 (UK) and 0.0004 (US). Measures related to phonological and orthographic processing also showed linkage at this locus. We replicated linkage to 18p11.2 in a third independent sample of families (from the UK), in which the strongest evidence came from a phoneme-awareness measure (most significant P value=0.00004). A combined analysis of all UK families confirmed that this newly discovered 18p QTL is probably a general risk factor for dyslexia, influencing several reading-related processes. This is the first report of QTL-based genome-wide scanning for a human cognitive trait.
  • Fisher, S. E. (2007). Molecular windows into speech and language disorders. Folia Phoniatrica et Logopaedica, 59, 130-140. doi:10.1159/000101771.

    Abstract

    Why do some children fail to acquire speech and language skills despite adequate environmental input and overtly normal neurological and anatomical development? It has been suspected for several decades, based on indirect evidence, that the human genome might hold some answers to this enigma. These suspicions have recently received dramatic confirmation with the discovery of specific genetic changes which appear sufficient to derail speech and language development. Indeed, researchers are already using information from genetic studies to aid early diagnosis and to shed light on the neural pathways that are perturbed in these inherited forms of speech and language disorder. Thus, we have entered an exciting era for dissecting the neural bases of human communication, one which takes genes and molecules as a starting point. In the current article I explain how this recent paradigm shift has occurred and describe the new vistas that have opened up. I demonstrate ways of bridging the gaps between molecules, neurons and the brain, which will provide a new understanding of the aetiology of speech and language impairments.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • FitzPatrick, I. (2007). Effects of sentence context in L2 natural speech comprehension. Nijmegen CNS, 2, 43-56.

    Abstract

    Electrophysiological studies consistently find N400 effects of semantic incongruity in non-native written language comprehension. Typically these N400 effects are later than N400 effects in native comprehension, suggesting that semantic processing in one’s second language (L2) may be delayed compared to one’s first language (L1). In this study we were firstly interested in replicating the semantic incongruity effect using natural auditory speech, which poses strong demands on the speed of processing. Secondly, we wished to investigate whether a possible delay in semantic processing might be due to bilinguals accessing lexical items from both their L1 and L2 (a more extensive lexical search). We recorded EEG from 30 Dutch-English bilinguals who listened to English sentences in which the sentence-final word was: (1) semantically fitting, (2) semantically incongruent, (3) initially congruent: semantically incongruent, but sharing initial phonemes with the most probable sentence completion within the L2, (4) semantically incongruent, but sharing initial phonemes with the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words that were initially congruent with the sentence context. We found no effect of initial overlap with L1 translation equivalents. Taken together these findings firstly demonstrate that non-native listeners are sensitive to semantic incongruity in natural speech, secondly indicate that semantic integration in non-native listening can start on the basis of word initial phonemes, and finally suggest that during L2 sentence processing listeners do not access the L1 lexicon.
  • Flecken, M., & Schmiedtova, B. (2007). The expression of simultaneity in L1 Dutch. Toegepaste Taalwetenschap in Artikelen, 77(1), 67-78.
  • Floyd, S. (2007). Changing times and local terms on the Rio Negro, Brazil: Amazonian ways of depolarizing epistemology, chronology and cultural change. Latin American and Caribbean Ethnic Studies, 2(2), 111-140. doi:10.1080/17442220701489548.

    Abstract

    Partway along the vast waterways of Brazil's middle Rio Negro, upstream from urban Manaus and downstream from the ethnographically famous Northwest Amazon region, is the town of Castanheiro, whose inhabitants skillfully negotiate a space between the polar extremes of 'traditional' and 'acculturated.' This paper takes an ethnographic look at the non-polarizing terms that these rural Amazonian people use for talking about cultural change. While popular and academic discourses alike have often framed cultural change in the Amazon as a linear process, Amazonian discourse provides resources for describing change as situated in shifting fields of knowledge of the social and physical environments, better capturing its non-linear complexity and ambiguity.
  • Floyd, S., San Roque, L., & Majid, A. (2018). Smell is coded in grammar and frequent in discourse: Cha'palaa olfactory language in cross-linguistic perspective. Journal of Linguistic Anthropology, 28(2), 175-196. doi:10.1111/jola.12190.

    Abstract

    It has long been claimed that there is no lexical field of smell, and that smell is of too little validity to be expressed in grammar. We demonstrate both claims are false. The Cha'palaa language (Ecuador) has at least 15 abstract smell terms, each of which is formed using a type of classifier previously thought not to exist. Moreover, using conversational corpora we show that Cha'palaa speakers also talk about smell more than Imbabura Quechua and English speakers. Together, this shows how language and social interaction may jointly reflect distinct cultural orientations towards sensory experience in general and olfaction in particular.
  • Floyd, S., Rossi, G., Baranova, J., Blythe, J., Dingemanse, M., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2018). Universals and cultural diversity in the expression of gratitude. Royal Society Open Science, 5: 180391. doi:10.1098/rsos.180391.

    Abstract

    Gratitude is argued to have evolved to motivate and maintain social reciprocity among people, and to be linked to a wide range of positive effects — social, psychological, and even physical. But is socially reciprocal behaviour dependent on the expression of gratitude, for example by saying "thank you" as in English? Current research has not included cross-cultural elements, and has tended to conflate gratitude as an emotion with gratitude as a linguistic practice, as might appear to be the case in English. Here we ask to what extent people actually express gratitude in different societies by focussing on episodes of everyday life where someone obtains a good, service, or support from another, and comparing these episodes across eight languages from five continents. What we find is that expressions of gratitude in these episodes are remarkably rare, suggesting that social reciprocity in everyday life relies on tacit understandings of people’s rights and duties surrounding mutual assistance and collaboration. At the same time, we also find minor cross-cultural variation, with slightly higher rates in the Western European languages English and Italian, showing that universal tendencies of social reciprocity should not be conflated with more culturally variable practices of expressing gratitude. Our study complements previous experimental and culture-specific research on social reciprocity with a systematic comparison of audiovisual corpora of naturally occurring social interaction from different cultures from around the world.
  • Forkel, S. J., & Catani, M. (2018). Lesion mapping in acute stroke aphasia and its implications for recovery. Neuropsychologia, 115, 88-100. doi:10.1016/j.neuropsychologia.2018.03.036.

    Abstract

    Patients with stroke offer a unique window into understanding human brain function. Mapping stroke lesions poses several challenges due to the complexity of the lesion anatomy and the mechanisms causing local and remote disruption on brain networks. In this prospective longitudinal study, we compare standard and advanced approaches to white matter lesion mapping applied to acute stroke patients with aphasia. Eighteen patients with acute left hemisphere stroke were recruited and scanned within two weeks from symptom onset. Aphasia assessment was performed at baseline and six-month follow-up. Structural and diffusion MRI contrasts indicated an area of maximum overlap in the anterior external/extreme capsule with diffusion images showing a larger overlap extending into posterior perisylvian regions. Anatomical predictors of recovery included damage to ipsilesional tracts (as shown by both structural and diffusion images) and contralesional tracts (as shown by diffusion images only). These findings indicate converging results from structural and diffusion lesion mapping methods but also clear differences between the two approaches in their ability to identify predictors of recovery outside the lesioned regions.
  • Frances, C., Costa, A., & Baus, C. (2018). On the effects of regional accents on memory and credibility. Acta Psychologica, 186, 63-70. doi:10.1016/j.actpsy.2018.04.003.

    Abstract

    The information we obtain from how speakers sound—for example their accent—affects how we interpret the messages they convey. A clear example is foreign accented speech, where reduced intelligibility and speaker's social categorization (out-group member) affect memory and the credibility of the message (e.g., less trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents—accents from a different region than the listener. In the current study, we report results from three experiments on immediate memory recognition and immediate credibility assessments as well as the illusory truth effect. These revealed no differences between messages conveyed in local—from the same region as the participant—and regional accents—from native speakers of a different country than the participants. Our results suggest that when the accent of a speaker has high intelligibility, social categorization by accent does not seem to negatively affect how we treat the speakers' messages.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Francks, C., Fisher, S. E., MacPhie, I. L., Richardson, A. J., Marlow, A. J., Stein, J. F., & Monaco, A. P. (2002). A genomewide linkage screen for relative hand skill in sibling pairs. American Journal of Human Genetics, 70(3), 800-805. doi:10.1086/339249.

    Abstract

    Genomewide quantitative-trait locus (QTL) linkage analysis was performed using a continuous measure of relative hand skill (PegQ) in a sample of 195 reading-disabled sibling pairs from the United Kingdom. This was the first genomewide screen for any measure related to handedness. The mean PegQ in the sample was equivalent to that of normative data, and PegQ was not correlated with tests of reading ability (correlations between −0.13 and 0.05). Relative hand skill could therefore be considered normal within the sample. A QTL on chromosome 2p11.2-12 yielded strong evidence for linkage to PegQ (empirical P=.00007), and another suggestive QTL on 17p11-q23 was also identified (empirical P=.002). The 2p11.2-12 locus was further analyzed in an independent sample of 143 reading-disabled sibling pairs, and this analysis yielded an empirical P=.13. Relative hand skill therefore is probably a complex multifactorial phenotype with a heterogeneous background, but nevertheless is amenable to QTL-based gene-mapping approaches.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Francks, C., Fisher, S. E., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., & Monaco, A. P. (2002). Fine mapping of the chromosome 2p12-16 dyslexia susceptibility locus: Quantitative association analysis and positional candidate genes SEMA4F and OTX1. Psychiatric Genetics, 12(1), 35-41.

    Abstract

    A locus on chromosome 2p12-16 has been implicated in dyslexia susceptibility by two independent linkage studies, including our own study of 119 nuclear twin-based families, each with at least one reading-disabled child. Nonetheless, no variant of any gene has been reported to show association with dyslexia, and no consistent clinical evidence exists to identify candidate genes with any strong a priori logic. We used 21 microsatellite markers spanning 2p12-16 to refine our 1-LOD unit linkage support interval to 12 cM between D2S337 and D2S286. Then, in quantitative association analysis, two microsatellites yielded P values < 0.05 across a range of reading-related measures (D2S2378 and D2S2114). The exon/intron borders of two positional candidate genes within the region were characterized, and the exons were screened for polymorphisms. The genes were Semaphorin4F (SEMA4F), which encodes a protein involved in axonal growth cone guidance, and OTX1, encoding a homeodomain transcription factor involved in forebrain development. Two non-synonymous single nucleotide polymorphisms were found in SEMA4F, each with a heterozygosity of 0.03. One intronic single nucleotide polymorphism between exons 12 and 13 of SEMA4F was tested for quantitative association, but no significant association was found. Only one single nucleotide polymorphism was found in OTX1, which was exonic but silent. Our data therefore suggest that linkage with reading disability at 2p12-16 is not caused by coding variants of SEMA4F or OTX1. Our study outlines the approach necessary for the identification of genetic variants causing dyslexia susceptibility in an epidemiological population of dyslexics.
  • Francks, C., Maegawa, S., Laurén, J., Abrahams, B. S., Velayos-Baeza, A., Medland, S. E., Colella, S., Groszer, M., McAuley, E. Z., Caffrey, T. M., Timmusk, T., Pruunsild, P., Koppel, I., Lind, P. A., Matsumoto-Itaba, N., Nicod, J., Xiong, L., Joober, R., Enard, W., Krinsky, B., Nanba, E., Richardson, A. J., Riley, B. P., Martin, N. G., Strittmatter, S. M., Möller, H.-J., Rujescu, D., St Clair, D., Muglia, P., Roos, J. L., Fisher, S. E., Wade-Martins, R., Rouleau, G. A., Stein, J. F., Karayiorgou, M., Geschwind, D. H., Ragoussis, J., Kendler, K. S., Airaksinen, M. S., Oshimura, M., DeLisi, L. E., & Monaco, A. P. (2007). LRRTM1 on chromosome 2p12 is a maternally suppressed gene that is associated paternally with handedness and schizophrenia. Molecular Psychiatry, 12, 1129-1139. doi:10.1038/sj.mp.4002053.

    Abstract

    Left-right asymmetrical brain function underlies much of human cognition, behavior and emotion. Abnormalities of cerebral asymmetry are associated with schizophrenia and other neuropsychiatric disorders. The molecular, developmental and evolutionary origins of human brain asymmetry are unknown. We found significant association of a haplotype upstream of the gene LRRTM1 (Leucine-rich repeat transmembrane neuronal 1) with a quantitative measure of human handedness in a set of dyslexic siblings, when the haplotype was inherited paternally (P=0.00002). While we were unable to find this effect in an epidemiological set of twin-based sibships, we did find that the same haplotype is overtransmitted paternally to individuals with schizophrenia/schizoaffective disorder in a study of 1002 affected families (P=0.0014). We then found direct confirmatory evidence that LRRTM1 is an imprinted gene in humans that shows a variable pattern of maternal downregulation. We also showed that LRRTM1 is expressed during the development of specific forebrain structures, and thus could influence neuronal differentiation and connectivity. This is the first potential genetic influence on human handedness to be identified, and the first putative genetic effect on variability in human brain asymmetry. LRRTM1 is a candidate gene for involvement in several common neurodevelopmental disorders, and may have played a role in human cognitive and behavioral evolution.
  • Francks, C., MacPhie, I. L., & Monaco, A. P. (2002). The genetic basis of dyslexia. The Lancet Neurology, 1(8), 483-490. doi:10.1016/S1474-4422(02)00221-1.

    Abstract

    Dyslexia, a disorder of reading and spelling, is a heterogeneous neurological syndrome with a complex genetic and environmental aetiology. People with dyslexia differ in their individual profiles across a range of cognitive, physiological, and behavioural measures related to reading disability. Some or all of the subtypes of dyslexia might have partly or wholly distinct genetic causes. An understanding of the role of genetics in dyslexia could help to diagnose and treat susceptible children more effectively and rapidly than is currently possible and in ways that account for their individual disabilities. This knowledge will also give new insights into the neurobiology of reading and language cognition. Genetic linkage analysis has identified regions of the genome that might harbour inherited variants that cause reading disability. In particular, loci on chromosomes 6 and 18 have shown strong and replicable effects on reading abilities. These genomic regions contain tens or hundreds of candidate genes, and studies aimed at the identification of the specific causal genetic variants are underway.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2007). Coherence-driven resolution of referential ambiguity: A computational model. Memory & Cognition, 35(6), 1307-1322.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Fransson, P., Merboldt, K.-D., Petersson, K. M., Ingvar, M., & Frahm, J. (2002). On the effects of spatial filtering — A comparative fMRI study of episodic memory encoding at high and low resolution. NeuroImage, 16(4), 977-984. doi:10.1006/nimg.2002.1079.

    Abstract

    The effects of spatial filtering in functional magnetic resonance imaging were investigated by reevaluating the data of a previous study of episodic memory encoding at 2 × 2 × 4 mm³ resolution with use of a SPM99 analysis involving a Gaussian kernel of 8-mm full width at half maximum. In addition, a multisubject analysis of activated regions was performed by normalizing the functional images to an approximate Talairach brain atlas. In individual subjects, spatial filtering merged activations in anatomically separated brain regions. Moreover, small foci of activated pixels which originated from veins became blurred and hence indistinguishable from parenchymal responses. The multisubject analysis resulted in activation of the hippocampus proper, a finding which could not be confirmed by the activation maps obtained at high resolution. It is concluded that the validity of multisubject fMRI analyses can be considerably improved by first analyzing individual data sets at optimum resolution to assess the effects of spatial filtering and minimize the risk of signal contamination by macroscopically visible vessels.
  • French, C. A., Groszer, M., Preece, C., Coupe, A.-M., Rajewsky, K., & Fisher, S. E. (2007). Generation of mice with a conditional Foxp2 null allele. Genesis, 45(7), 440-446. doi:10.1002/dvg.20305.

    Abstract

    Disruptions of the human FOXP2 gene cause problems with articulation of complex speech sounds, accompanied by impairment in many aspects of language ability. The FOXP2/Foxp2 transcription factor is highly similar in humans and mice, and shows a complex conserved expression pattern, with high levels in neuronal subpopulations of the cortex, striatum, thalamus, and cerebellum. In the present study we generated mice in which loxP sites flank exons 12-14 of Foxp2; these exons encode the DNA-binding motif, a key functional domain. We demonstrate that early global Cre-mediated recombination yields a null allele, as shown by loss of the loxP-flanked exons at the RNA level and an absence of Foxp2 protein. Homozygous null mice display severe motor impairment, cerebellar abnormalities and early postnatal lethality, consistent with other Foxp2 mutants. When crossed to transgenic lines expressing Cre protein in a spatially and/or temporally controlled manner, these conditional mice will provide new insights into the contributions of Foxp2 to distinct neural circuits, and allow dissection of roles during development and in the mature brain.
  • Furman, R., & Ozyurek, A. (2007). Development of interactional discourse markers: Insights from Turkish children's and adults' narratives. Journal of Pragmatics, 39(10), 1742-1757. doi:10.1016/j.pragma.2007.01.008.

    Abstract

    Discourse markers (DMs) are linguistic elements that index different relations and coherence between units of talk (Schiffrin, Deborah, 1987. Discourse Markers. Cambridge University Press, Cambridge). Most research on the development of these forms has focused on conversations rather than narratives and furthermore has not directly compared children's use of DMs to adult usage. This study examines the development of three DMs (şey ‘uuhh’, yani ‘I mean’, işte ‘y’know’) that mark interactional levels of discourse in oral Turkish narratives in 60 Turkish children (3-, 5- and 9-year-olds) and 20 Turkish-speaking adults. The results show that the frequency and functions of DMs change with age. Children learn şey, which mainly marks exchange level structures, earliest. However, yani and işte have multi-functions such as marking both information states and participation frameworks and are consequently learned later. Children also use DMs with different functions than adults. Overall, the results show that learning to use interactional DMs in narratives is complex and goes beyond age 9, especially for multi-functional DMs that index an interplay of discourse coherence at different levels.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Gerrits, F., Senft, G., & Wisse, D. (2018). Bomiyoyeva and bomduvadova: Two rare structures on the Trobriand Islands exclusively reserved for Tabalu chiefs. Anthropos, 113, 93-113. doi:10.5771/0257-9774-2018-1-93.

    Abstract

    This article presents information about two so far undescribed buildings made by the Trobriand Islanders, the bomiyoyeva and the bomduvadova. These structures are connected to the highest-ranking chiefs living in Labai and Omarakana on Kiriwina Island. They highlight the power and eminence of these chiefs. After a brief report on the history of this project, the structure of the two houses, their function, and their use is described and information on their construction and their mythical background is provided. Finally, everyday as well as ritual, social, and political functions of both buildings are discussed. [Melanesia, Trobriand Islands, Tabalu chiefs, yams houses, bomiyoyeva, bomduvadova, authoritative capacities]
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gisladottir, R. S., Bögels, S., & Levinson, S. C. (2018). Oscillatory brain responses reflect anticipation during comprehension of speech acts in spoken dialogue. Frontiers in Human Neuroscience, 12: 34. doi:10.3389/fnhum.2018.00034.

    Abstract

    Everyday conversation requires listeners to quickly recognize verbal actions, so-called speech acts, from the underspecified linguistic code and prepare a relevant response within the tight time constraints of turn-taking. The goal of this study was to determine the time-course of speech act recognition by investigating oscillatory EEG activity during comprehension of spoken dialogue. Participants listened to short, spoken dialogues with target utterances that delivered three distinct speech acts (Answers, Declinations, Pre-offers). The targets were identical across conditions at lexico-syntactic and phonetic/prosodic levels but differed in the pragmatic interpretation of the speech act performed. Speech act comprehension was associated with reduced power in the alpha/beta bands just prior to Declination speech acts, relative to Answers and Pre-offers. In addition, we observed reduced power in the theta band during the beginning of Declinations, relative to Answers. Based on the role of alpha and beta desynchronization in anticipatory processes, the results are taken to indicate that anticipation plays a role in speech act recognition. Anticipation of speech acts could be critical for efficient turn-taking, allowing interactants to quickly recognize speech acts and respond within the tight time frame characteristic of conversation. The results show that anticipatory processes can be triggered by the characteristics of the interaction, including the speech act type.
  • Gisselgard, J., Uddén, J., Ingvar, M., & Petersson, K. M. (2007). Disruption of order information by irrelevant items: A serial recognition paradigm. Acta Psychologica, 124(3), 356-369. doi:10.1016/j.actpsy.2006.04.002.

    Abstract

    Irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial order cues, and represents a view based on the interference caused by the processing of similar order information of the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Due to the re-presentation of the encoding string, serial recognition requires primarily the serial order to be maintained while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support for and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.
  • Glaser, B., Nikolov, I., Chubb, D., Hamshere, M. L., Segurado, R., Moskvina, V., & Holmans, P. (2007). Analyses of single marker and pairwise effects of candidate loci for rheumatoid arthritis using logistic regression and random forests. BMC Proceedings, 1(Suppl 1): 54.

    Abstract

    Using parametric and nonparametric techniques, our study investigated the presence of single locus and pairwise effects between 20 markers of the Genetic Analysis Workshop 15 (GAW15) North American Rheumatoid Arthritis Consortium (NARAC) candidate gene data set (Problem 2), analyzing 463 independent patients and 855 controls. Specifically, our work examined the correspondence between logistic regression (LR) analysis of single-locus and pairwise interaction effects, and random forest (RF) single and joint importance measures. For this comparison, we selected small but stable RFs (500 trees), which showed strong correlations (r~0.98) between their importance measures and those by RFs grown on 5000 trees. Both RF importance measures captured most of the LR single-locus and pairwise interaction effects, while joint importance measures also corresponded to full LR models containing main and interaction effects. We furthermore showed that RF measures were particularly sensitive to data imputation. The most consistent pairwise effect on rheumatoid arthritis was found between two markers within MAP3K7IP2/SUMO4 on 6q25.1, although LR and RFs assigned different significance levels. Within a hypothetical two-stage design, pairwise LR analysis of all markers with significant RF single importance would have reduced the number of possible combinations in our small data set by 61%, whereas joint importance measures would have been less efficient for marker pair reduction. This suggests that RF single importance measures, which are able to detect a wide range of interaction effects and are computationally very efficient, might be exploited as a pre-screening tool for larger association studies. Follow-up analysis, such as by LR, is required since RFs do not indicate high-risk genotype combinations.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
